New Media Ethics 2009 course / Group items tagged: Genetics

Weiye Loh

Rethinking the gene » Scienceline

  • Currently, the public views genes primarily as self-contained packets of information that come from parents and are distinct from the environment. “The popular notion of the gene is an attractive idea—it’s so magical,” said Mark Blumberg, a developmental biologist at the University of Iowa in Iowa City. But it ignores the growing scientific understanding of how genes and local environments interplay, he said.
  • With the rise of molecular biology in the 1930s and genomics (the study of entire genomes) in the 1970s, scientists have developed a much more dynamic and complex picture of this interplay. The simplistic notion of the gene has been replaced with gene-environment interactions and developmental influences—nature and nurture as completely intertwined.
  • But the public hasn’t quite kept up. There remains a “huge chasm” between the way scientists understand genetics and the way the public understands it, said David Shenk, an author who has written extensively on genetics and intelligence.
  • the public still thinks of genes as blueprints, providing precise instructions for each individual.
  • “The elegant simplicity of the idea is so powerful,” said Shenk. Unfortunately, it is also false. The blueprint metaphor is fundamentally deceptive, he said, and “leads people to believe that any difference they see can be tied back to specific genes.”
  • Instead, Shenk advocates the metaphor of a giant mixing board, in which genes are a multitude of knobs and switches that get turned on and off depending on various factors in their environment. Interaction is key, though it goes against how most people see genetics: the classic, but inaccurate, dichotomies of nature versus nurture, innate versus acquired and genes versus environment.
  • Belief in those dichotomies is hard to eliminate because people tend to understand genetics through the two separate “tracks” of genes and the environment, according to speech communication expert Celeste Condit from the University of Georgia in Athens. Condit suggests that, whenever possible, explanations of genetics—by scientists, authors, journalists, or doctors—should draw connections between the two tracks, effectively merging them into one. “We need to link up the gene and environment tracks,” she said, “so that [people] can’t think of one without thinking of the other.”
  • Part of what makes these concepts so difficult lies in the language of genetics itself. A recent study by Condit in the September issue of Clinical Genetics found that when people hear the word genetics, they primarily think of heredity, or the quality of being heritable (passed from one generation to the next). Unfortunately, the terms heredity and heritable are often confused with heritability, which has a very different meaning.
  • heritability has single-handedly muddled the discourse of genetics to such a degree that even experts can’t keep it straight, argues historian of science Evelyn Fox Keller at the Massachusetts Institute of Technology in her recent book, The Mirage of a Space Between Nature and Nurture. Keller describes how heritability (in the technical literature) refers to how much of the variation in a trait is attributable to genetic differences. But the term has seeped out into the general public and is, understandably, taken to mean heritable, or the ability to be inherited. These concepts are fundamentally different, but often hard to grasp.
  • For example, let’s say that in a population with people of different heights, 60 percent of the variation in height is attributable to genes (as opposed to nutrition). The heritability of height is 60 percent. This does not mean, however, that 60 percent of an individual’s height comes from her genes, and 40 percent from what she ate growing up. Heritability refers to causes of variations (between people), not to causes of traits themselves (in each particular individual). The conflation of crucially different terms like traits and variations has wreaked havoc on the public understanding of genetics. [A compact formal restatement follows these excerpts.]
  • The stakes are high. Condit emphasizes how important a solid understanding of genetics is for making health decisions. Whether people see diabetes or lung cancer as determined by family history or responsive to changes in behavior depends greatly on how they understand genetics. Policy decisions about education, childcare, or the workplace are all informed by a proper understanding of the dynamic interplay of genes and the environment, and this means looking beyond the Mendelian lens of heredity. According to Shenk, everyone in the business of communicating these issues “needs to bend over backwards to help people understand.”
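To pin down the distinction drawn above, here is a compact formal restatement of heritability as it is standardly defined in quantitative genetics. The additive variance decomposition and the 0.6 figure are the excerpt's illustrative assumptions, not results from the article.

```latex
% Heritability compares variances across a population; it does not
% decompose any single individual's trait value.
% P = phenotype, G = genetic contribution, E = environment (additive toy model).
\[
  h^2 = \frac{\mathrm{Var}(G)}{\mathrm{Var}(P)},
  \qquad
  \mathrm{Var}(P) = \mathrm{Var}(G) + \mathrm{Var}(E).
\]
% With the excerpt's number, h^2 = 0.6: 60 percent of the height
% variation BETWEEN people is attributable to genetic differences.
% Nothing follows about any one person's height being "60 percent genetic."
```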
Weiye Loh

Skepticblog » ADHD and Genetics

  • The main problem with media reporting is that it tends to oversimplify the concept of a genetic disorder. The worst offenders speak of “the gene” for whatever is being discussed, like ADHD. There are purely genetic disorders that are the result of a mutation in a single gene, but more often diseases and disorders that have a genetic component are the complex result of multiple genes and their interaction with the environment. Therefore there is no single gene for ADHD, autism, migraines, obesity, or any other complex condition.
  • What this study shows is an increased risk of copy number variants (CNVs) in people with a confirmed diagnosis of ADHD. A CNV is either a deletion or duplication of genetic material. The researchers found that 78 of 1,047 controls had such CNVs (7%), while 57 of 366 subjects with ADHD did (15%). This was a statistically significant increase. Further, CNVs were more likely to occur in genes previously associated with both autism and schizophrenia (and therefore likely to be involved in brain development). [A quick significance check on these counts follows the list.]
  • The authors conclude: “Our findings provide genetic evidence of an increased rate of large CNVs in individuals with ADHD and suggest that ADHD is not purely a social construct.”
  • they are saying that this study is evidence that ADHD is not purely social. They are not saying that it is purely or even mostly genetic. In fact, only 15% of subjects with ADHD demonstrated increased CNVs. This study is a proof of concept more than anything, demonstrating that genetic makeup can contribute, at least in some cases, to the clinical syndrome of ADHD.
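As a sanity check on the counts quoted above, here is a minimal sketch of the kind of two-by-two significance test involved. It is a generic chi-square test assuming the scipy library, not the study's actual analysis.

```python
# Counts from the excerpt: 57 of 366 ADHD subjects vs. 78 of 1,047
# controls carried large CNVs. Illustrative test only.
from scipy.stats import chi2_contingency

table = [
    [57, 366 - 57],   # ADHD subjects: with CNVs, without
    [78, 1047 - 78],  # controls: with CNVs, without
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"ADHD rate:    {57 / 366:.1%}")   # ~15.6%
print(f"control rate: {78 / 1047:.1%}")  # ~7.4%
print(f"chi2 = {chi2:.1f}, p = {p_value:.1e}")  # p far below .05
```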
Weiye Loh

Genetic Sequencing Will Have to Wait: Links Between Genes and Behavior Still Largely Un...

  • A recent article in The New York Times reported that over 100 studies show a relationship between genes and criminality but that the environment plays a key role in the effects of this relationship: “Kevin Beaver, an associate professor at Florida State University’s College of Criminology and Criminal Justice, said genetics may account for, say, half of a person’s aggressive behavior, but that 50 percent comprises hundreds or thousands of genes that express themselves differently depending on the environment. He has tried to measure which circumstances — having delinquent friends, living in a disadvantaged neighborhood — influence whether a predisposition to violence surfaces. After studying twins and siblings, he came up with an astonishing result: In boys not exposed to the risk factors, genetics played no role in any of their violent behavior. The positive environment had prevented the genetic switches — to use Mr. Pinker’s word — that affect aggression from being turned on. In boys with eight or more risk factors, however, genes explained 80 percent of their violence. Their switches had been flipped.”
  • In fact, environment plays the same crucial role for criminality as it does for obesity and depression. In an interview I did for a story in The Michigan Daily on depression research, Dr. Margit Burmeister, a professor of human genetics and a researcher in the Molecular and Behavioral Neuroscience Institute at the University of Michigan, explained the dangers of the public oversimplifying the link between genetics and depression:
  • “This idea that if something is genetic it’s deterministic is a misconception that we have to get over, because saying that genes are involved in depression does not necessarily mean that someone who has certain genetic variants is doomed to become depressed. It just means that under certain circumstances, he or she may have to do certain things to help alleviate it, but it’s not unchangeable. You can change your brain, you can change your brain in many different ways, and genetics is just one of many of these ways.”
Weiye Loh

Epiphenom: The evolution of dissent

  • Genetic evolution in humans occurs in an environment shaped by culture - and culture, in turn, is shaped by genetics.
  • If religion is a virus, then perhaps the spread of religion can be understood through the lens of evolutionary theory. Perhaps cultural evolution can be modelled using the same mathematical tools applied to genetic evolution.
  • Michael Doebeli and Iaroslav Ispolatov at the University of British Columbia
  • set out to model the development of religious schisms. Such schisms are a recurrent feature of religion, especially in the West. The classic example is the fracturing of Christianity that occurred after the Reformation.
  • Their model made two simple assumptions. First, that religions that are highly dominant actually induce some people to want to break away from them. When a religion becomes overcrowded, some individuals will lose their religion and take up another.
  • Second, they assume that every religion has a value to the individual that is composed of its costs and benefits. That value varies between religions, but is the same for all individuals. It’s a pretty simplistic assumption, but even so they get some interesting results. [A toy simulation in this spirit follows these excerpts.]
  • Now, this is a very simple model, and so the results shouldn't be over-interpreted. But it's a fascinating result for a couple of reasons. It shows how new religious 'species' can come into being in a mixed population - no need for geographical separation. That's such a common feature of religion - from the Judaeo-Christian religions to examples from Papua New Guinea - that it's worth trying to understand what drives it. What's more, this is the first time that anyone has attempted to model the transmission of religious ideas in evolutionary terms. It's a first step, to be sure, but just showing that it can be done is a significant achievement.
  • The value comes because it shifts the focus from thinking about how culture benefits the host, and instead asks how the cultural trait is adaptive in its own right. What is important is not whether or not the human host benefits from the trait, but rather whether the trait can successfully transmit and reproduce itself (see Bible Belter for an example of how this could work).
  • Even more intriguing is the implications for understanding cultural-genetic co-evolution. After all, we know that viruses and their hosts co-evolve in a kind of arms race - sometimes ending up in a relationship that benefits both.
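The excerpts describe Doebeli and Ispolatov's model only qualitatively, so the sketch below is a deliberately crude stand-in built on the same two assumptions: a religion's value to an individual falls as it becomes overcrowded, and updating individuals occasionally found a new variant. Every rule and parameter here is invented for illustration; this is not the authors' model.

```python
import random

random.seed(1)

N = 1000               # population size
BASE_VALUE = 1.0       # intrinsic value of any religion (same for everyone)
CROWDING_COST = 1.5    # value lost per unit of adherent share
FOUND_RATE = 0.001     # chance an updating agent founds a new variant
STEPS = 20000

affiliation = [0] * N  # everyone starts in a single dominant religion
next_label = 1

def value(counts, religion):
    """Perceived value falls as a religion's share of the population grows."""
    return BASE_VALUE - CROWDING_COST * counts[religion] / N

for _ in range(STEPS):
    counts = {}
    for r in affiliation:
        counts[r] = counts.get(r, 0) + 1
    agent = random.randrange(N)
    if random.random() < FOUND_RATE:
        affiliation[agent] = next_label  # schism: a brand-new variant appears
        next_label += 1
    else:
        # Otherwise the agent switches to the religion offering the most value.
        affiliation[agent] = max(counts, key=lambda r: value(counts, r))

final_counts = {}
for r in affiliation:
    final_counts[r] = final_counts.get(r, 0) + 1
print(sorted(final_counts.values(), reverse=True))
# Typically prints several comparably sized groups rather than one monolith:
# crowding penalises the incumbent, so new variants can invade and persist.
# That is 'speciation' in a fully mixed population, with no geography needed.
```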
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet. [A tiny encoder after these excerpts shows the scheme.]
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood. [The compounding arithmetic is checked in a short sketch after these excerpts.]
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works. [A toy repetition-code demo after these excerpts illustrates the idea.]
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
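The Morse scheme described above is simple enough to show in a few lines; the table below is a small subset of the real code, for illustration only.

```python
# A minimal Morse encoder: letters become short and long pulses
# (dots and dashes). Only a handful of letters included.
MORSE = {
    "E": ".", "T": "-",    # the most frequent letters get the shortest
    "A": ".-", "N": "-.",  # codes, which speeds up transmission
    "S": "...", "O": "---",
}

def encode(text):
    """Encode text as Morse, separating letters with spaces."""
    return " ".join(MORSE[ch] for ch in text.upper() if ch in MORSE)

print(encode("SOS"))  # ... --- ...
print(encode("eat"))  # . .- -
```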
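The Moore's Law figures quoted above are easy to verify; this is just the compounding arithmetic on the excerpt's own numbers.

```python
# A factor of 2 every 18 months is 2 ** (months / 18).
per_decade = 2 ** (120 / 18)    # ten years of doublings
per_45_years = 2 ** (540 / 18)  # forty-five years of doublings

print(f"per decade:   {per_decade:.0f}x")    # ~102: 'a factor of a hundred'
print(f"per 45 years: {per_45_years:.2e}x")  # ~1.07e+09: 'nine powers of ten'
```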
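Shannon's noisy-channel theorem is far more general, but the simplest redundancy scheme, a repetition code with majority voting, already illustrates the claim that sufficiently redundant transmission lets accurate information survive a noisy channel. The parameters below are invented for illustration; this is not anything from Gleick's book.

```python
import random

random.seed(0)

def noisy_channel(bits, flip_prob):
    """Flip each bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def send_with_repetition(bits, flip_prob, repeats=9):
    """Send each bit `repeats` times and decode by majority vote."""
    decoded = []
    for b in bits:
        received = noisy_channel([b] * repeats, flip_prob)
        decoded.append(1 if sum(received) > repeats // 2 else 0)
    return decoded

message = [random.randint(0, 1) for _ in range(10_000)]
raw = noisy_channel(message, flip_prob=0.1)
coded = send_with_repetition(message, flip_prob=0.1)

def error_rate(received):
    return sum(a != b for a, b in zip(message, received)) / len(message)

print(f"raw error rate:   {error_rate(raw):.2%}")    # about 10%
print(f"coded error rate: {error_rate(coded):.2%}")  # roughly 0.1%
```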
Weiye Loh

Glowing trees could light up city streets - environment - 25 November 2010 - New Scientist

  • If work by a team of undergraduates at the University of Cambridge pans out, bioluminescent trees could one day be giving our streets this dreamlike look. The students have taken the first step on this road by developing genetic tools that allow bioluminescence traits to be easily transferred into an organism.
  • Nature is full of glow-in-the-dark critters, but their shine is feeble - far too weak to read by, for example. To boost this light, the team, who were participating in the annual International Genetically Engineered Machines competition (iGEM), modified genetic material from fireflies and the luminescent marine bacterium Vibrio fischeri to boost the production and activity of light-yielding enzymes. They then made further modifications to create genetic components or "BioBricks" that can be inserted into a genome.
  • So are glowing trees coming soon to a street near you? It's unlikely, says Alexandra Daisy Ginsberg, a designer and artist who advised the Cambridge team. "We already have light bulbs," she says. "We're not going to spend our money and time engineering a replacement for something that works very well." However, she adds that "bio-light" has a distinctive allure. "There's something much more visceral about a living light. If you have to feed the light and look after it, then it becomes more precious."
Weiye Loh

Odds Are, It's Wrong - Science News

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretations?
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk. [A numerical illustration follows these excerpts.]
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly. [A small simulation after these excerpts makes the inflation concrete.]
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account. [A worked diagnostic example follows these excerpts.]
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  • Odds Are, It’s Wrong: Science fails to face the shortcomings of statistics
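On the large-sample point above: with enough patients, a clinically trivial difference becomes 'highly significant'. A minimal sketch with invented numbers:

```python
import math

# Cure rates differing by 2 extra cures per 1,000 treated; huge trial arms.
p_old, p_new = 0.100, 0.102
n = 1_000_000  # patients per arm

# Crude two-proportion z-test (adequate for a toy).
se = math.sqrt(p_old * (1 - p_old) / n + p_new * (1 - p_new) / n)
z = (p_new - p_old) / se
print(f"z = {z:.1f}")  # ~4.7, far past 1.96: 'significant' yet meaningless
```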
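On the multiplicity point: the simulation below (standard library only, all numbers invented) compares samples drawn from the same distribution, so every 'significant' result is a fluke by construction. One test per study stays near the nominal 5 percent; twenty tests per study almost guarantee a spurious finding.

```python
import random
import statistics

random.seed(42)

def null_experiment(n=50):
    """Compare two samples from the SAME distribution; any 'significant'
    difference is a false positive by construction."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # crude two-sided test at roughly p < .05

trials = 1000
tests_per_study = 20  # e.g. screening 20 gene variants at once

single = sum(null_experiment() for _ in range(trials)) / trials
multi = sum(
    any(null_experiment() for _ in range(tests_per_study))
    for _ in range(trials)
) / trials

print(f"false positives, one test per study: {single:.1%}")  # ~5%
print(f"studies with >= 1 'finding' of 20:   {multi:.1%}")   # ~64%
```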
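Finally, the diagnostic-test case mentioned above is the standard worked example of why the prior matters. The prevalence and accuracy figures are illustrative assumptions, not data from the article.

```python
# Bayes' theorem for a screening test. All three inputs are invented.
prevalence = 0.01      # prior: 1% of the population has the disease
sensitivity = 0.99     # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.1%}")
# ~16.7%: most positives are false despite a '99% sensitive' test,
# because the prior probability of disease was low to begin with.
```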
Weiye Loh

Skepticblog » A Creationist Challenge

  • The commenter starts with some ad hominems, asserting that my post is biased and emotional. They provide no evidence or argument to support this assertion. And of course they don’t even attempt to counter any of the arguments I laid out. They then follow up with an argument from authority – he can link to a PhD creationist – so there.
  • The article that the commenter links to is by Henry M. Morris, founder for the Institute for Creation Research (ICR) – a young-earth creationist organization. Morris was (he died in 2006 following a stroke) a PhD – in civil engineering. This point is irrelevant to his actual arguments. I bring it up only to put the commenter’s argument from authority into perspective. No disrespect to engineers – but they are not biologists. They have no expertise relevant to the question of evolution – no more than my MD. So let’s stick to the arguments themselves.
  • The article by Morris is an overview of so-called Creation Science, of which Morris was a major architect. The arguments he presents are all old creationist canards, long deconstructed by scientists. In fact I address many of them in my original refutation. Creationists generally are not very original – they recycle old arguments endlessly, regardless of how many times they have been destroyed.
  • Morris also makes heavy use of the “taking a quote out of context” strategy favored by creationists. His quotes are often from secondary sources and are incomplete.
  • A more scholarly (i.e. intellectually honest) approach would be to cite actual evidence to support a point. If you are going to cite an authority, then make sure the quote is relevant, in context, and complete.
  • And even better, cite a number of sources to show that the opinion is representative. Rather we get single, partial, and often outdated quotes without context.
  • (nature is not, it turns out, cleanly divided into “kinds”, which have no operational definition). He also repeats this canard: Such variation is often called microevolution, and these minor horizontal (or downward) changes occur fairly often, but such changes are not true “vertical” evolution. This is the microevolution/macroevolution false dichotomy. It is only “often called” this by creationists – not by actual evolutionary scientists. There is no theoretical or empirical division between macro and micro evolution. There is just evolution, which can result in the full spectrum of change from minor tweaks to major changes.
  • Morris wonders why there are no “dats” – dog-cat transitional species. He misses the hierarchical nature of evolution. As evolution proceeds, and creatures develop a greater and greater evolutionary history behind them, they increasingly are committed to their body plan. This results in a nested hierarchy of groups – which is reflected in taxonomy (the naming scheme of living things).
  • once our distant ancestors developed the basic body plan of chordates, they were committed to that body plan. Subsequent evolution resulted in variations on that plan, each of which then developed further variations, etc. But evolution cannot go backward, undo evolutionary changes and then proceed down a different path. Once an evolutionary line has developed into a dog, evolution can produce variations on the dog, but it cannot go backwards and produce a cat.
  • Stephen Jay Gould described this distinction as the difference between disparity and diversity. Disparity (the degree of morphological difference) actually decreases over evolutionary time, as lineages go extinct and the surviving lineages are committed to fewer and fewer basic body plans. Meanwhile, diversity (the number of variations on a body plan) within groups tends to increase over time.
  • the kind of evolutionary change that was happening in the past, when species were relatively undifferentiated (compared to contemporary species), is indeed not happening today. Modern multicellular life has 600 million years of evolutionary history constraining its future evolution – which was not true of species at the base of the evolutionary tree. But modern species are indeed still evolving.
  • Here is a list of research documenting observed instances of speciation. The list is from 1995, and there are more recent examples to add to the list. Here are some more. And here is a good list with references of more recent cases.
  • Next Morris tries to convince the reader that there is no evidence for evolution in the past, focusing on the fossil record. He repeats the false claim (again, which I already dealt with) that there are no transitional fossils: Even those who believe in rapid evolution recognize that a considerable number of generations would be required for one distinct “kind” to evolve into another more complex kind. There ought, therefore, to be a considerable number of true transitional structures preserved in the fossils — after all, there are billions of non-transitional structures there! But (with the exception of a few very doubtful creatures such as the controversial feathered dinosaurs and the alleged walking whales), they are not there.
  • I deal with this question at length here, pointing out that there are numerous transitional fossils for the evolution of terrestrial vertebrates, mammals, whales, birds, turtles, and yes – humans from ape ancestors. There are many more examples, these are just some of my favorites.
  • Much of what follows (as you can see it takes far more space to correct the lies and distortions of Morris than it did to create them) is classic denialism – misinterpreting the state of the science, and confusing lack of information about the details of evolution with lack of confidence in the fact of evolution. Here are some examples – he quotes Niles Eldredge: “It is a simple ineluctable truth that virtually all members of a biota remain basically stable, with minor fluctuations, throughout their durations. . . .” So how do evolutionists arrive at their evolutionary trees from fossils of organisms which didn’t change during their durations? Beware the “….” – that means that meaningful parts of the quote are being omitted. I happen to have the book (The Pattern of Evolution) from which Morris mined that particular quote. Here’s the rest of it: (Remember, by “biota” we mean the commonly preserved plants and animals of a particular geological interval, which occupy regions often as large as Roger Tory Peterson’s “eastern” region of North American birds.) And when these systems change – when the older species disappear, and new ones take their place – the change happens relatively abruptly and in lockstep fashion.”
  • Eldredge was one of the authors (with Gould) of punctuated equilibrium theory. This states that, if you look at the fossil record, what we see are species emerging, persisting with little change for a while, and then disappearing from the fossil record. They theorize that most species most of the time are at equilibrium with their environment, and so do not change much. But these periods of equilibrium are punctuated by disequilibrium – periods of change when species will have to migrate, evolve, or go extinct.
  • This does not mean that speciation does not take place. And if you look at the fossil record we see a pattern of descendant species emerging from ancestor species over time – in a nice evolutionary pattern. Morris gives a complete misrepresentation of Eldredge’s point – once again we see intellectual dishonesty of an astounding degree in his methods.
  • Regarding the atheism = religion comment, it reminds me of a great analogy that I first heard on twitter from Evil Eye (paraphrasing): “saying atheism is a religion is like saying ‘not collecting stamps’ is a hobby.”
  • Morris next tackles the genetic evidence, writing: More often is the argument used that similar DNA structures in two different organisms proves common evolutionary ancestry. Neither argument is valid. There is no reason whatever why the Creator could not or would not use the same type of genetic code based on DNA for all His created life forms. This is evidence for intelligent design and creation, not evolution.
  • Here is an excellent summary of the multiple lines of molecular evidence for evolution. Basically, if we look at the sequence of DNA, the variations in the trinucleotide codes for amino acids, in the amino acid sequences of proteins, and in the transposons within DNA, we see a pattern that can only be explained by evolution (or a mischievous god who chose, for some reason, to make life look exactly as if it had evolved – a non-falsifiable notion).
  • The genetic code is essentially composed of four letters (ACGT for DNA), and each three-letter triplet (codon) corresponds to a specific amino acid. There are 64 (4^3) possible three-letter combinations, and 20 amino acids. A few combinations are used for housekeeping, like a code to indicate where a gene stops, but the rest code for amino acids. There are more combinations than amino acids, so most amino acids are coded for by multiple combinations. This means that a mutation resulting in a one-letter change might turn one codon for a particular amino acid into another codon for the same amino acid. This is called a silent mutation because it does not result in any change in the resulting protein (see the code sketch at the end of this list).
  • It also means that there are very many possible codes for any individual protein. The question is – which codes, out of the gazillions of possible codes, do we find for each type of protein in different species? If each “kind” were created separately there would not need to be any relationship. Each kind could have its own variation, or they could all be identical if they were essentially copied (plus any mutations accruing since creation, which would be minimal). But if life evolved then we would expect the exact sequence of DNA code to be similar in related species, but progressively different (through silent mutations) over evolutionary time.
  • This is precisely what we find – in every protein we have examined. This pattern is necessary if evolution is true. It cannot be explained by random chance (the probability is absurdly tiny – essentially zero). And it makes no sense from a creationist perspective. This same pattern (a branching hierarchy) emerges when we look at amino acid substitutions in proteins and other aspects of the genetic code.
  • Morris goes for the second law of thermodynamics again – in the exact way that I already addressed. He responds to scientists correctly pointing out that the Earth is an open system, by writing: This naive response to the entropy law is typical of evolutionary dissimulation. While it is true that local order can increase in an open system if certain conditions are met, the fact is that evolution does not meet those conditions. Simply saying that the earth is open to the energy from the sun says nothing about how that raw solar heat is converted into increased complexity in any system, open or closed. The fact is that the best known and most fundamental equation of thermodynamics says that the influx of heat into an open system will increase the entropy of that system, not decrease it. All known cases of decreased entropy (or increased organization) in open systems involve a guiding program of some sort and one or more energy conversion mechanisms.
  • Energy has to be transformed into a usable form in order to do the work necessary to decrease entropy. That’s right. That work is done by life. Plants take solar energy (again – I’m not sure what “raw solar heat” means) and convert it into food. That food fuels the processes of life, which include development and reproduction. Evolution emerges from those processes – therefore the conditions that Morris speaks of are met.
  • But Morris next makes a very confused argument: Evolution has neither of these. Mutations are not “organizing” mechanisms, but disorganizing (in accord with the second law). They are commonly harmful, sometimes neutral, but never beneficial (at least as far as observed mutations are concerned). Natural selection cannot generate order, but can only “sieve out” the disorganizing mutations presented to it, thereby conserving the existing order, but never generating new order.
  • The notion that evolution (as if it’s a thing) needs to use energy is hopelessly confused. Evolution is a process that emerges from the system of life – and life certainly can use solar energy to decrease its entropy, and by extension the entropy of the biosphere. Morris then slips into what is often presented as an information argument. (Yet again – already dealt with. The pattern here is that we are seeing a shuffling around of the same tired creationist arguments.) First, it is not true that most mutations are harmful. Many are silent, and many of those that are not silent are not harmful. They may be neutral, they may be a mixed blessing whose relative benefit versus harm is situational, they may be fatal, and they may also be simply beneficial.
  • Morris finishes with a long rambling argument that evolution is religion. Evolution is promoted by its practitioners as more than mere science. Evolution is promulgated as an ideology, a secular religion — a full-fledged alternative to Christianity, with meaning and morality . . . . Evolution is a religion. This was true of evolution in the beginning, and it is true of evolution still today. Morris ties evolution to atheism, which, he argues, makes it a religion. This assumes, of course, that atheism is a religion. That depends on how you define atheism and how you define religion – but it is mostly wrong. Atheism is a lack of belief in one particular supernatural claim – that does not qualify it as a religion.
  • But mutations are not “disorganizing” – that does not even make sense. It seems to be based on a purely creationist notion that species are in some privileged perfect state, and any mutation can only take them farther from that perfection. For those who actually understand biology, life is a kluge of compromises and variation. Mutations are mostly lateral moves from one chaotic state to another. They are not directional. But they do provide raw material, variation, for natural selection. Natural selection cannot generate variation, but it can select among that variation to produce differential survival. This is an old game played by creationists – pointing out that mutations are not selective and that natural selection is not creative (does not increase variation). Both points are true but irrelevant, because mutations do increase variation and information, and selection acting on that variation is a creative force, resulting in the differential survival of better-adapted variants.
  •  
    One of my earlier posts on SkepticBlog was Ten Major Flaws in Evolution: A Refutation, published two years ago. Occasionally a creationist shows up to snipe at the post, like this one: “i read this and found it funny. It supposedly gives a scientific refutation, but it is full of more bias than fox news, and a lot of emotion as well. here’s a scientific case by an actual scientists [sic], you know, one with a ph. D, and he uses statements by some of your favorite evolutionary scientists to insist evolution doesn’t exist. i challenge you to write a refutation on this one. http://www.icr.org/home/resources/resources_tracts_scientificcaseagainstevolution/” Challenge accepted.
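A concrete way to see the codon degeneracy described in the notes above is to run a few entries of the standard genetic code through a script. This is a minimal illustrative sketch (Python, my own, not from the post): it hard-codes a handful of real codon assignments rather than the full 64-entry table, and the helper name is mine.

    # A few entries of the standard genetic code: several codons map to one
    # amino acid, so some single-letter mutations are "silent".
    CODON_TABLE = {
        # Leucine is coded for by six different codons:
        "TTA": "Leu", "TTG": "Leu", "CTT": "Leu",
        "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
        # Phenylalanine by two:
        "TTT": "Phe", "TTC": "Phe",
        # One of the three "housekeeping" stop codons:
        "TAA": "STOP",
    }

    def is_silent(before: str, after: str) -> bool:
        """A point mutation is silent if both codons code for the same amino acid."""
        return CODON_TABLE[before] == CODON_TABLE[after]

    print(is_silent("CTT", "CTC"))  # True:  third-letter change, still leucine
    print(is_silent("TTT", "TTA"))  # False: Phe -> Leu changes the protein

Because six codons all spell leucine, related lineages can accumulate many such silent differences, which is exactly the signal the branching-hierarchy argument above relies on.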
Weiye Loh

Open science: a future shaped by shared experience | Education | The Observer - 0 views

  • one day he took one of these – finding a mathematical proof about the properties of multidimensional objects – and put his thoughts on his blog. How would other people go about solving this conundrum? Would somebody else have any useful insights? Would mathematicians, notoriously competitive, be prepared to collaborate? “It was an experiment,” he admits. “I thought it would be interesting to try.” He called it the Polymath Project and it rapidly took on a life of its own. Within days, readers, including high-ranking academics, had chipped in vital pieces of information or new ideas. In just a few weeks, the number of contributors had reached more than 40 and a result was on the horizon. Since then, the joint effort has led to several papers published in journals under the collective pseudonym DHJ Polymath. It was an astonishing and unexpected result.
  • "If you set out to solve a problem, there's no guarantee you will succeed," says Gowers. "But different people have different aptitudes and they know different tricks… it turned out their combined efforts can be much quicker."
  • There are many interpretations of what open science means, with different motivations across different disciplines. Some are driven by the backlash against corporate-funded science, with its profit-driven research agenda. Others are internet radicals who take the "information wants to be free" slogan literally. Others want to make important discoveries more likely to happen. But for all their differences, the ambition remains roughly the same: to try and revolutionise the way research is performed by unlocking it and making it more public.
  • ...10 more annotations...
  • Jackson is a young bioscientist who, like many others, has discovered that the technologies used in genetics and molecular biology, once the preserve of only the most well-funded labs, are now cheap enough to allow experimental work to take place in their garages. For many, this means that they can conduct genetic experiments in a new way, adopting the so-called "hacker ethic" – the desire to tinker, deconstruct, rebuild.
  • The rise of this group is entertainingly documented in a new book by science writer Marcus Wohlsen, Biopunk (Current £18.99), which describes the parallels between today's generation of biological innovators and the rise of computer software pioneers of the 1980s and 1990s. Indeed, Bill Gates has said that if he were a teenager today, he would be working on biotechnology, not computer software.
  • open scientists suggest that it doesn’t have to be that way. Their arguments are propelled by a number of different factors that are making transparency more viable than ever. The first and most powerful change has been the use of the web to connect people and collect information. The internet, now an indelible part of our lives, allows like-minded individuals to seek one another out and share vast amounts of raw data. Researchers can lay claim to an idea not by publishing first in a journal (a process that can take many months) but by sharing their work online in an instant. And while the rapidly decreasing cost of previously expensive technical procedures has opened up new directions for research, there is also increasing pressure for researchers to cut costs and deliver results. The economic crisis left many budgets in tatters and governments around the world are cutting back on investment in science as they try to balance the books. Open science can, sometimes, make the process faster and cheaper, showing what one advocate, Cameron Neylon, calls “an obligation and responsibility to the public purse”.
  • "The litmus test of openness is whether you can have access to the data," says Dr Rufus Pollock, a co-founder of the Open Knowledge Foundation, a group that promotes broader access to information and data. "If you have access to the data, then anyone can get it, use it, reuse it and redistribute it… we've always built on the work of others, stood on the shoulders of giants and learned from those who have gone before."
  • moves are afoot to disrupt the closed world of academic journals and make high-level teaching materials available to the public. The Public Library of Science, based in San Francisco, is working to make journals more freely accessible
  • it's more than just politics at stake – it's also a fundamental right to share knowledge, rather than hide it. The best example of open science in action, he suggests, is the Human Genome Project, which successfully mapped our DNA and then made the data public. In doing so, it outflanked J Craig Venter's proprietary attempt to patent the human genome, opening up the very essence of human life for science, rather than handing our biological information over to corporate interests.
  • the rise of open science does not please everyone. Critics have argued that while it benefits those at either end of the scientific chain – the well-established at the top of the academic tree or the outsiders who have nothing to lose – it hurts those in the middle. Most professional scientists rely on the current system for funding and reputation. Others suggest it is throwing out some of the most important elements of science and making deep, long-term research more difficult.
  • Open science proponents say that they do not want to make the current system a thing of the past, but that it shouldn’t be seen as immutable either. In fact, they say, the way most people conceive of science – as a highly specialised academic discipline conducted by white-coated professionals in universities or commercial laboratories – is a very modern construction. It is only over the last century that scientific disciplines became industrialised and compartmentalised.
  • open scientists say they don't want to throw scientists to the wolves: they just want to help answer questions that, in many cases, are seen as insurmountable.
  • "Some people, very straightforwardly, said that they didn't like the idea because it undermined the concept of the romantic, lone genius." Even the most dedicated open scientists understand that appeal. "I do plan to keep going at them," he says of collaborative projects. "But I haven't given up on solitary thinking about problems entirely."
Weiye Loh

Bad Health Habits Blamed on Genetics - Newsweek - 0 views

  • A new study shows just how alluring “My DNA did it!” is to some people.
  • There are serious scientific concerns about the reliability and value of many of the genes linked to disease. And now we have another reason why the hype is worrisome: people who engage in the riskiest-for-health behaviors, and who therefore most need to change, are more likely to blame their genes for their diseases, finds a new study published online in the journal Annals of Behavioral Medicine.
  • Worse, the more behavioral risk factors people have—smoking and eating a high-fat diet and not exercising, for instance—the less likely they are to be interested in information about living healthier.
  • ...1 more annotation...
  • The unhealthier people’s habits were, the more they latched on to genetic explanations for diseases
  •  
    My Alleles Made Me Do It: The Folly of Blaming Bad Behavior on Wonky DNA
Weiye Loh

Alzheimer's Studies Find New Genetic Links - NYTimes.com - 0 views

  • The two largest studies of Alzheimer’s disease have led to the discovery of no fewer than five genes that provide intriguing new clues to why the disease strikes and how it progresses.
  • For years, there have been unproven but persistent hints that cholesterol and inflammation are part of the disease process. People with high cholesterol are more likely to get the disease. Strokes and head injuries, which make Alzheimer’s more likely, also cause brain inflammation. Now, some of the newly discovered genes appear to bolster this line of thought, because some are involved with cholesterol and others are linked to inflammation or the transport of molecules inside cells.
  • By themselves, the genes are not nearly as important a factor as APOE, a gene discovered in 1995 that greatly increases risk for the disease: by 400 percent if a person inherits a copy from one parent, by 1,000 percent if from both parents.
  • ...7 more annotations...
  • In contrast, each of the new genes increases risk by no more than 10 to 15 percent; for that reason, they will not be used to decide if a person is likely to develop Alzheimer’s (for what these percentages mean as risk multipliers, see the sketch at the end of this list). APOE, which is involved in metabolizing cholesterol, “is in a class of its own,” said Dr. Rudolph Tanzi, a neurology professor at Harvard Medical School and an author of one of the papers.
  • But researchers say that even a slight increase in risk helps them in understanding the disease and developing new therapies. And like APOE, some of the newly discovered genes appear to be involved with cholesterol.
  • The other paper is by researchers in Britain, France and other European countries with contributions from the United States. They confirmed the genes found by the American researchers and added one more gene.
  • The American study got started about three years ago when Gerard D. Schellenberg, a pathology professor at the University of Pennsylvania, went to the National Institutes of Health with a complaint and a proposal. Individual research groups had been doing their own genome studies but not having much success, because no one center had enough subjects. In an interview, Dr. Schellenberg said that he had told Dr. Richard J. Hodes, director of the National Institute on Aging, the small genomic studies had to stop, and that Dr. Hodes had agreed. These days, Dr. Hodes said, “the old model in which researchers jealously guarded their data is no longer applicable.”
  • So Dr. Schellenberg set out to gather all the data he could on Alzheimer’s patients and on healthy people of the same ages. The idea was to compare one million positions on each person’s genome to determine whether some genes were more common in those who had Alzheimer’s. “I spent a lot of time being nice to people on the phone,” Dr. Schellenberg said. He got what he wanted: nearly every Alzheimer’s center and Alzheimer’s geneticist in the country cooperated. Dr. Schellenberg and his colleagues used the mass of genetic data to do an analysis and find the genes and then, using two different populations, to confirm that the same genes were conferring the risk. That helped assure the investigators that they were not looking at a chance association. It was a huge effort, Dr. Mayeux said. Many medical centers had Alzheimer’s patients’ tissue sitting in freezers. They had to extract the DNA and do genome scans.
  • “One of my jobs was to make sure the Alzheimer’s cases really were cases — that they had used some reasonable criteria” for diagnosis, Dr. Mayeux said. “And I had to be sure that people who were unaffected really were unaffected.”
  • Meanwhile, the European group, led by Dr. Julie Williams of the School of Medicine at Cardiff University, was engaged in a similar effort. Dr. Schellenberg said the two groups compared their results and were reassured that they were largely finding the same genes. “If there were mistakes, we wouldn’t see the same things,” he added. Now the European and American groups are pooling their data to do an enormous study, looking for genes in the combined samples. “We are upping the sample size,” Dr. Schellenberg said. “We are pretty sure more stuff will pop out.”
  •  
    Gene Study Yields
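To unpack the risk figures quoted above: “increases risk by 400 percent” means five times the baseline risk, while “by 10 to 15 percent” means only 1.10 to 1.15 times baseline. A minimal sketch of that arithmetic (Python, written for this page; the function name is mine, and the inputs are just the percentages quoted in the article, not data from the studies):

    # "Increases risk by X percent" translates to a multiplier of (1 + X/100)
    # applied to whatever the baseline risk is.
    def risk_multiplier(percent_increase: float) -> float:
        return 1 + percent_increase / 100

    print(risk_multiplier(400))   # 5.0  : one inherited APOE copy, 5x baseline
    print(risk_multiplier(1000))  # 11.0 : APOE copies from both parents, 11x
    print(risk_multiplier(15))    # 1.15 : one of the newly discovered genes

This makes the contrast plain: the new genes nudge risk, while APOE multiplies it.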
Weiye Loh

Discovered: the genetic secret of a happy life - Science, News - The Independent - 0 views

  •  
    The finding is the first to demonstrate a link between the gene, called 5-HTT, and satisfaction. People with the long version are more likely to be cheerful while sulkiness is the default position of those with the short version. Knowing which version of the gene they carry may help people improve their mood.
Weiye Loh

Skepticblog » Flaws in Creationist Logic - 0 views

  • making a false analogy here by confusing the origin of life with the later evolution of life. The watch analogy was specifically offered to say that something which is complex and displays design must have been created and designed by a creator. Therefore, since we see complexity and design in life it too must have had a creator. But all the life that we know – that life which is being pointed to as complex and designed – is the result of a process (evolution) that has worked over billions of years. Life can grow, reproduce, and evolve. Watches cannot – so it is not a valid analogy.
  • Life did emerge from non-living matter, but that is irrelevant to the point. There was likely a process of chemical evolution – but still the non-living precursors to life were just chemicals, they did not display the design or complexity apparent in a watch. Ankur’s attempt to rescue this false analogy fails. And before someone has a chance to point it out – yes, I said that life displays design. It displays bottom-up evolutionary design, not top-down intelligent design. This refers to another fallacy of creationists – the assumption that all design is top down. But nature demonstrates that this is a false assumption.
  • An increase in variation is an increase in information – it takes more information to describe the greater variety. By any actual definition of information, variation increases information. Also, as I argued, when you have gene duplication you are physically increasing the number of information-carrying units – that is an increase in information (one standard way to quantify this is sketched at the end of this list). There is simply no way to avoid the mountain of genetic evidence that genetic information has increased over evolutionary time through evolutionary processes.
  •  
    FLAWS IN CREATIONIST LOGIC
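One standard way to make “more variation = more information” precise is Shannon entropy over allele frequencies: a population fixed on a single allele carries zero bits at that locus, and each additional common variant raises the count. The sketch below is my own illustration (Python), under the simplifying assumption that loci are independent; it is not taken from the post.

    import math

    def shannon_entropy(frequencies) -> float:
        """Bits of information at one locus, given allele frequencies summing to 1."""
        return -sum(p * math.log2(p) for p in frequencies if p > 0)

    print(shannon_entropy([1.0]))       # 0.0: a single allele, no variation
    print(shannon_entropy([0.5, 0.5]))  # 1.0: two equally common alleles
    print(shannon_entropy([0.25] * 4))  # 2.0: four alleles, still more information

On this measure, any process that adds common variants, including duplication followed by divergence, strictly increases information content, which is the point being made above.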
Weiye Loh

Kevin Kelly and Steven Johnson on Where Ideas Come From | Magazine - 0 views

  • Say the word “inventor” and most people think of a solitary genius toiling in a basement. But two ambitious new books on the history of innovation—by Steven Johnson and Kevin Kelly, both longtime Wired contributors—argue that great discoveries typically spring not from individual minds but from the hive mind. In Where Good Ideas Come From: The Natural History of Innovation, Johnson draws on seven centuries of scientific and technological progress, from Gutenberg to GPS, to show what sorts of environments nurture ingenuity. He finds that great creative milieus, whether MIT or Los Alamos, New York City or the World Wide Web, are like coral reefs—teeming, diverse colonies of creators who interact with and influence one another.
  • Seven centuries are an eyeblink in the scope of Kelly’s book, What Technology Wants, which looks back over some 50,000 years of history and peers nearly that far into the future. His argument is similarly sweeping: Technology, Kelly believes, can be seen as a sort of autonomous life-form, with intrinsic goals toward which it gropes over the course of its long development. Those goals, he says, are much like the tendencies of biological life, which over time diversifies, specializes, and (eventually) becomes more sentient.
  • We share a fascination with the long history of simultaneous invention: cases where several people come up with the same idea at almost exactly the same time. Calculus, the electrical battery, the telephone, the steam engine, the radio—all these groundbreaking innovations were hit upon by multiple inventors working in parallel with no knowledge of one another.
  • ...25 more annotations...
  • It’s amazing that the myth of the lone genius has persisted for so long, since simultaneous invention has always been the norm, not the exception. Anthropologists have shown that the same inventions tended to crop up in prehistory at roughly similar times, in roughly the same order, among cultures on different continents that couldn’t possibly have contacted one another.
  • Also, there’s a related myth—that innovation comes primarily from the profit motive, from the competitive pressures of a market society. If you look at history, innovation doesn’t come just from giving people incentives; it comes from creating environments where their ideas can connect.
  • The musician Brian Eno invented a wonderful word to describe this phenomenon: scenius. We normally think of innovators as independent geniuses, but Eno’s point is that innovation comes from social scenes, from passionate and connected groups of people.
  • It turns out that the lone genius entrepreneur has always been a rarity—there’s far more innovation coming out of open, nonmarket networks than we tend to assume.
  • Really, we should think of ideas as connections, in our brains and among people. Ideas aren’t self-contained things; they’re more like ecologies and networks. They travel in clusters.
  • ideas are networks
  • In part, that’s because ideas that leap too far ahead are almost never implemented—they aren’t even valuable. People can absorb only one advance, one small hop, at a time. Gregor Mendel’s ideas about genetics, for example: He formulated them in 1865, but they were ignored for 35 years because they were too advanced. Nobody could incorporate them. Then, when the collective mind was ready and his idea was only one hop away, three different scientists independently rediscovered his work within roughly a year of one another.
  • Charles Babbage is another great case study. His “analytical engine,” which he started designing in the 1830s, was an incredibly detailed vision of what would become the modern computer, with a CPU, RAM, and so on. But it couldn’t possibly have been built at the time, and his ideas had to be rediscovered a hundred years later.
  • I think there are a lot of ideas today that are ahead of their time. Human cloning, autopilot cars, patent-free law—all are close technically but too many steps ahead culturally. Innovating is about more than just having the idea yourself; you also have to bring everyone else to where your idea is. And that becomes really difficult if you’re too many steps ahead.
  • The scientist Stuart Kauffman calls this the “adjacent possible.” At any given moment in evolution—of life, of natural systems, or of cultural systems—there’s a space of possibility that surrounds any current configuration of things. Change happens when you take that configuration and arrange it in a new way. But there are limits to how much you can change in a single move.
  • Which is why the great inventions are usually those that take the smallest possible step to unleash the most change. That was the difference between Tim Berners-Lee’s successful HTML code and Ted Nelson’s abortive Xanadu project. Both tried to jump into the same general space—a networked hypertext—but Tim’s approach did it with a dumb half-step, while Ted’s earlier, more elegant design required that everyone take five steps all at once.
  • Also, the steps have to be taken in the right order. You can’t invent the Internet and then the digital computer. This is true of life as well. The building blocks of DNA had to be in place before evolution could build more complex things. One of the key ideas I’ve gotten from you, by the way—when I read your book Out of Control in grad school—is this continuity between biological and technological systems.
  • technology is something that can give meaning to our lives, particularly in a secular world.
  • He had this bleak, soul-sucking vision of technology as an autonomous force for evil. You also present technology as a sort of autonomous force—as wanting something, over the long course of its evolution—but it’s a more balanced and ultimately positive vision, which I find much more appealing than the alternative.
  • As I started thinking about the history of technology, there did seem to be a sense in which, during any given period, lots of innovations were in the air, as it were. They came simultaneously. It appeared as if they wanted to happen. I should hasten to add that it’s not a conscious agency; it’s a lower form, something like the way an organism or bacterium can be said to have certain tendencies, certain trends, certain urges. But it’s an agency nevertheless.
  • technology wants increasing diversity—which is what I think also happens in biological systems, as the adjacent possible becomes larger with each innovation. As tech critics, I think we have to keep this in mind, because when you expand the diversity of a system, that leads to an increase in great things and an increase in crap.
  • the idea that the most creative environments allow for repeated failure.
  • And for wastes of time and resources. If you knew nothing about the Internet and were trying to figure it out from the data, you would reasonably conclude that it was designed for the transmission of spam and porn. And yet at the same time, there’s more amazing stuff available to us than ever before, thanks to the Internet.
  • To create something great, you need the means to make a lot of really bad crap. Another example is spectrum. One reason we have this great explosion of innovation in wireless right now is that the US deregulated spectrum. Before that, spectrum was something too precious to be wasted on silliness. But when you deregulate—and say, OK, now waste it—then you get Wi-Fi.
  • If we didn’t have genetic mutations, we wouldn’t have us. You need error to open the door to the adjacent possible.
  • image of the coral reef as a metaphor for where innovation comes from. So what, today, are some of the most reeflike places in the technological realm?
  • Twitter—not to see what people are having for breakfast, of course, but to see what people are talking about, the links to articles and posts that they’re passing along.
  • second example of an information coral reef, and maybe the less predictable one, is the university system. As much as we sometimes roll our eyes at the ivory-tower isolation of universities, they continue to serve as remarkable engines of innovation.
  • Life seems to gravitate toward these complex states where there’s just enough disorder to create new things. There’s a rate of mutation just high enough to let interesting new innovations happen, but not so many mutations that every new generation dies off immediately.
  • Technology is an extension of life. Both life and technology are faces of the same larger system.
  •  
    Kevin Kelly and Steven Johnson on Where Ideas Come From By Wired September 27, 2010  |  2:00 pm  |  Wired October 2010
Weiye Loh

Rationally Speaking: Human, know thy place! - 0 views

  • I kicked off a recent episode of the Rationally Speaking podcast on the topic of transhumanism by defining it as “the idea that we should be pursuing science and technology to improve the human condition, modifying our bodies and our minds to make us smarter, healthier, happier, and potentially longer-lived.”
  • Massimo understandably expressed some skepticism about why there needs to be a transhumanist movement at all, given how incontestable their mission statement seems to be. As he rhetorically asked, “Is transhumanism more than just the idea that we should be using technologies to improve the human condition? Because that seems a pretty uncontroversial point.” Later in the episode, referring to things such as radical life extension and modifications of our minds and genomes, Massimo said, “I don't think these are things that one can necessarily have objections to in principle.”
  • There are a surprising number of people whose reaction, when they are presented with the possibility of making humanity much healthier, smarter and longer-lived, is not “That would be great,” nor “That would be great, but it's infeasible,” nor even “That would be great, but it's too risky.” Their reaction is, “That would be terrible.”
  • ...14 more annotations...
  • The people with this attitude aren't just fringe fundamentalists who are fearful of messing with God's Plan. Many of them are prestigious professors and authors whose arguments make no mention of religion. One of the most prominent examples is political theorist Francis Fukuyama, author of End of History, who published a book in 2003 called “Our Posthuman Future: Consequences of the Biotechnology Revolution.” In it he argues that we will lose our “essential” humanity by enhancing ourselves, and that the result will be a loss of respect for “human dignity” and a collapse of morality.
  • Fukuyama's reasoning represents a prominent strain of thought about human enhancement, and one that I find doubly fallacious. (Fukuyama is aware of the following criticisms, but neither I nor other reviewers were impressed by his attempt to defend himself against them.) The idea that the status quo represents some “essential” quality of humanity collapses when you zoom out and look at the steady change in the human condition over previous millennia. Our ancestors were less knowledgeable, more tribalistic, less healthy, shorter-lived; would Fukuyama have argued for the preservation of all those qualities on the grounds that, in their respective time, they constituted an “essential human nature”? And even if there were such a thing as a persistent “human nature,” why is it necessarily worth preserving? In other words, I would argue that Fukuyama is committing both the fallacy of essentialism (there exists a distinct thing that is “human nature”) and the appeal to nature (the way things naturally are is how they ought to be).
  • Writer Bill McKibben, who was called “probably the nation's leading environmentalist” by the Boston Globe this year, and “the world's best green journalist” by Time magazine, published a book in 2003 called “Enough: Staying Human in an Engineered Age.” In it he writes, “That is the choice... one that no human should have to make... To be launched into a future without bounds, where meaning may evaporate.” McKibben concludes that it is likely that “meaning and pain, meaning and transience are inextricably intertwined.” Or as one blogger tartly paraphrased: “If we all live long healthy happy lives, Bill’s favorite poetry will become obsolete.”
  • President George W. Bush's Council on Bioethics, which advised him from 2001-2009, was steeped in it. Harvard professor of political philosophy Michael J. Sandel served on the Council from 2002-2005 and penned an article in the Atlantic Monthly called “The Case Against Perfection,” in which he objected to genetic engineering on the grounds that, basically, it’s uppity. He argues that genetic engineering is “the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature.” Better we should be bowing in submission than standing in mastery, Sandel feels. Mastery “threatens to banish our appreciation of life as a gift,” he warns, and submitting to forces outside our control “restrains our tendency toward hubris.”
  • If you like Sandel's “It's uppity” argument against human enhancement, you'll love his fellow Councilmember Dr. William Hurlbut's argument against life extension: “It's unmanly.” Hurlbut's exact words, delivered in a 2007 debate with Aubrey de Grey: “I actually find a preoccupation with anti-aging technologies to be, I think, somewhat spiritually immature and unmanly... I’m inclined to think that there’s something profound about aging and death.”
  • And Council chairman Dr. Leon Kass, a professor of bioethics from the University of Chicago who served from 2001-2005, was arguably the worst of all. Like McKibben, Kass has frequently argued against radical life extension on the grounds that life's transience is central to its meaningfulness. “Could the beauty of flowers depend on the fact that they will soon wither?” he once asked. “How deeply could one deathless ‘human’ being love another?”
  • Kass has also argued against human enhancements on the same grounds as Fukuyama, that we shouldn't deviate from our proper nature as human beings. “To turn a man into a cockroach— as we don’t need Kafka to show us —would be dehumanizing. To try to turn a man into more than a man might be so as well,” he said. And Kass completes the anti-transhumanist triad (it robs life of meaning; it's dehumanizing; it's hubris) by echoing Sandel's call for humility and gratitude, urging, “We need a particular regard and respect for the special gift that is our own given nature.”
  • By now you may have noticed a familiar ring to a lot of this language. The idea that it's virtuous to suffer, and to humbly surrender control of your own fate, is a cornerstone of Christian morality.
  • it's fairly representative of standard Christian tropes: surrendering to God, submitting to God, trusting that God has good reasons for your suffering.
  • I suppose I can understand that if you believe in an all-powerful entity who will become irate if he thinks you are ungrateful for anything, then this kind of groveling might seem like a smart strategic move. But what I can't understand is adopting these same attitudes in the absence of any religious context. When secular people chastise each other for the “hubris” of trying to improve the “gift” of life they've received, I want to ask them: just who, exactly, are you groveling to? Who, exactly, are you afraid of affronting if you dare to reach for better things?
  • This is why transhumanism is most needed, from my perspective – to counter the astoundingly widespread attitude that suffering and 80-year-lifespans are good things that are worth preserving. That attitude may make sense conditional on certain peculiarly masochistic theologies, but the rest of us have no need to defer to it. It also may have been a comforting thing to tell ourselves back when we had no hope of remedying our situation, but that's not necessarily the case anymore.
  • I think there is a separation between Transhumanism and what Massimo is referring to. Things like robotic arms and the like come from trying to deal with a specific defect, and that separates them from Transhumanism. I would define transhumanism the same way you would (the achievement of a better human), but I would exclude the invention of many life-altering devices from transhumanism. If we could invent a device that simply made you smarter, then indeed that would be transhumanism; but if we invented a device that could make someone who was mentally challenged able to be normal, I would define that as modern medicine. I just want to make sure we separate advances in modern medicine from transhumanism: modern medicine advances to deal with specific medical issues and improve quality of life (usually restoring it to normal conditions), while transhumanism advances every single human (perhaps equally?).
    • Weiye Loh
       
      Assumes that "normal conditions" exist. 
  • I agree with all your points about why the arguments against transhumanism and for suffering are ridiculous. That being said, when I first heard about the ideas of Transhumanism, after the initial excitement wore off (since I'm a big tech nerd), my reaction was more or less the same as Massimo's. I don't particularly see the need for a philosophical movement for this.
  • if people believe that suffering is something God ordained for us, you're not going to convince them otherwise with philosophical arguments any more than you'll convince them there's no God at all. If the technologies do develop, acceptance of them will come as their use becomes more prevalent, not with arguments.
  •  
    Human, know thy place!
Weiye Loh

Is There a Liberal Gene? : Discovery News - 0 views

  • Is political ideology derived from a person's social environment or is it a result of genetic predisposition?
  • It's an interaction of both, according to a recent study on our political leanings that boosts both sides of the nature versus nurture debate.
  • Scientists at the University of California San Diego and Harvard University determined that people who carry a variant of the DRD4 gene are more likely to be liberals as adults, depending on the number of friendships they had during high school. They published their study in a recent issue of The Journal of Politics.
  • ...2 more annotations...
  • The 7R variant of DRD4, a dopamine receptor gene, had previously been associated with novelty seeking. The researchers theorized novelty seeking would be related to openness, a psychological trait that has been associated with political liberalism.
  • However, social environment was critical. The more friends gene carriers have in high school, the more likely they are to be liberals as adults. The authors write, "Ten friends can move a person with two copies of 7R allele almost halfway from being a conservative to moderate or from being moderate to liberal." (A toy model of this kind of gene-environment interaction is sketched at the end of this list.)
  •  
    IS THERE A LIBERAL GENE?
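The quoted “ten friends” effect is a gene-environment interaction: the 7R variant shifts ideology only in proportion to the number of adolescent friendships. The toy model below (Python) is my own reconstruction of that shape; the coefficient and the 1-5 scale are invented to reproduce the “almost halfway” figure and are not taken from the paper.

    # Toy gene-environment interaction on a 1 (conservative) to 5 (liberal)
    # scale: the 7R variant matters only in combination with friendships.
    def predicted_ideology(base: float, copies_7r: int, n_friends: int) -> float:
        interaction = 0.025  # hypothetical coefficient: 2 copies x 10 friends = +0.5
        return base + interaction * copies_7r * n_friends

    print(predicted_ideology(2.0, copies_7r=0, n_friends=10))  # 2.0: no variant, no shift
    print(predicted_ideology(2.0, copies_7r=2, n_friends=10))  # 2.5: halfway toward moderate

Neither the gene alone (with no friends) nor friendships alone (without the variant) produce the shift; only the product term does, which is what “an interaction of both” means in the excerpt above.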
Weiye Loh

Do We Still Think Race is a Social Construct? » Sociological Images - 0 views

  • new genetic information about human evolution has required that scientists re-think the biological reality of race.  In this 6-minute video, sociologist Alondra Nelson describes this re-thinking:

Weiye Loh

Is it a boy or a girl? You decide - Prospect Magazine « Prospect Magazine - 0 views

  • The only way to guarantee either a daughter or son is to undergo pre-implantation genetic diagnosis: a genetic analysis of an embryo before it is placed in the womb. This is illegal in Britain except for couples at risk of having a child with a life-threatening gender-linked disorder.
  • It’s also illegal for clinics to offer sex selection methods such as MicroSort, which sift the slightly larger X chromosome-bearing (female) sperm from their weedier Y chromosome-bearing (male) counterparts, and then use the preferred sperm in an IVF cycle. With a success rate hovering around 80-90 per cent, it’s better than Mother Nature’s odds of conception, but not immaculate.
  • Years ago I agreed with this ban on socially motivated sex selection. But I can’t defend that stance today. My opposition was based on two worries: the gender balance being skewed—look at China—and the perils of letting society think it’s acceptable to prize one sex more than the other. Unlike many politicians, however, I think it is only right and proper to perform an ideological U-turn when presented with convincing opposing evidence.
  • ...4 more annotations...
  • A 2003 survey published in the journal Human Reproduction showed that few British adults would be concerned enough about their baby’s gender to use the technology, and most adults wanted the same number of sons as daughters
  • Bioethics specialist Edgar Dahl of the University of Giessen found that 68 per cent of Britons craved an equal number of boys and girls; 6 per cent wanted more boys; 4 per cent more girls; 3 per cent only boys; and 2 per cent only girls. Fascinatingly, even if a baby’s sex could be decided by simply taking a blue pill or a pink pill, 90 per cent of British respondents said they wouldn’t take it.
  • What about the danger of stigmatising the unwanted sex if gender selection was allowed? According to experts on so-called “gender disappointment,” the unwanted sex would actually be male.
  • I may think it is old-fashioned to want a son so that he can inherit the family business, or a daughter to have someone to go shopping with. But how different is that from the other preferences and expectations we have for our children, such as hoping they will be gifted at mathematics, music or sport? We all nurture secret expectations for our children: I hope that mine will be clever, beautiful, witty and wise. Perhaps it is not the end of the world if we allow some parents to add “female” or “male” to the list.
  •  
    Is it a boy or a girl? You decide ANJANA AHUJA   28th April 2010  -  Issue 170 Choosing the sex of an unborn child is illegal, but would it harm society if it wasn't?
Weiye Loh

Rationally Speaking: A new eugenics? - 0 views

  • an interesting article I read recently, penned by Julian Savulescu for the Practical Ethics blog.
  • Savulescu discusses an ongoing controversy in Germany about genetic testing of human embryos. The Leopoldina, Germany’s equivalent of the National Academy of Sciences, has recommended genetic testing of pre-implantation embryos, to screen for serious and incurable defects. The German Chancellor, Angela Merkel, has agreed to allow a parliamentary vote on this issue, but also said that she personally supports a ban on this type of testing. Her fear is that the testing would quickly lead to “designer babies,” i.e. to parents making choices about their unborn offspring based not on knowledge about serious disease, but simply because they happen to prefer a particular height or eye color.
  • He infers from Merkel’s comments (and many similar others) that people tend to think of selecting traits like eye color as eugenics, while acting to avoid incurable disease is not considered eugenics. He argues that this is exactly wrong: eugenics, as he points out, means “well born,” so eugenicists have historically been concerned with eliminating traits that would harm society (Oliver Wendell Holmes’ “three generations of imbeciles”), not with simple aesthetic choices. As Savulescu puts it: “[eugenics] is selecting embryos which are better, in this context, have better lives. Being healthy rather than sick is ‘better.’ Having blond hair and blue eyes is not in any plausible sense ‘better,’ even if people mistakenly think so.”
  • ...9 more annotations...
  • And there is another, related aspect of discussions about eugenics that should be at the forefront of our consideration: what was particularly objectionable about American and Nazi early 20th century eugenics is that the state, not individuals, were to make decisions about who could reproduce and who couldn’t. Savulescu continues: “to grant procreative liberty is the only way to avoid the objectionable form of eugenics that the Nazis practiced.” In other words, it makes all the difference in the world if it is an individual couple who decides to have or not have a baby, or if it is the state that imposes a particular reproductive choice on its citizenry.
  • but then Savulescu expands his argument to a point where I begin to feel somewhat uncomfortable. He says: “[procreative liberty] involves the freedom to choose a child with red hair or blond hair or no hair.”
  • Savulescu has suddenly sneaked into his argument for procreative liberty the assumption that all choices in this area are on the same level. But while it is hard to object to action aimed at avoiding devastating diseases, it is not quite so obvious to me what arguments favor the idea of designer babies. The first intervention can be justified, for instance, on consequentialist grounds because it reduces the pain and suffering of both the child and the parents. The second intervention is analogous to shopping for a new bag, or a new car, which means that it commodifies the act of conceiving a baby, thus degrading its importance. I’m not saying that that in itself is sufficient to make it illegal, but the ethics of it is different, and that difference cannot simply be swept under the broad rug of “procreative liberty.”
  • designing babies is to treat them as objects, not as human beings, and there are a couple of strong philosophical traditions in ethics that go squarely against that (I’m thinking, obviously, of Kant’s categorical imperative, as well as of virtue ethics; not sure what a consequentialist would say about this, probably she would remain neutral on the issue).
  • Commodification of human beings has historically produced all sorts of bad stuff, from slavery to exploitative prostitution, and arguably to war (after all, we are using our soldiers as means to gain access to power, resources, territory, etc.)
  • And of course, there is the issue of access. Across-the-board “procreative liberty” of the type envisioned by Savulescu will cost money because it requires considerable resources.
  • imagine that these parents decide to purchase the ability to produce babies that have the type of characteristics that will make them more successful in society: taller, more handsome, blue eyed, blonde, more symmetrical, whatever. We have just created yet another way for the privileged to augment and pass their privileges to the next generation — in this case literally through their genes, not just as real estate or bank accounts. That would quickly lead to an even further divide between the haves and the have-nots, more inequality, more injustice, possibly, in the long run, even two different species (why not design your babies so that they can’t breed with certain types of undesirables, for instance?). Is that the sort of society that Savulescu is willing to envision in the name of his total procreative liberty? That begins to sounds like the libertarian version of the eugenic ideal, something potentially only slightly less nightmarish than the early 20th century original.
  • Rich people already have better choices when it comes to their babies. Taller and richer men can choose between more attractive and physically fit women, and attractive women can choose between more physically fit and rich men. So it is reasonable to conclude that, on average, rich and attractive people already have more options when it comes to their offspring. Moreover, no one is questioning their right to do so, and this is based on a respect for a basic instinct which we all have and which is exactly why these people would choose to have a DB [designer baby]. Is it fair for someone to be tall because his daddy was rich and married a supermodel, but not because his daddy was rich and had his DNA resequenced? Is the former good because it’s natural and the latter bad because it’s not? This isn’t at all obvious to me.
  • Not to mention that rich people can provide better health care, education and nutrition to their children, and again no one is questioning their right to do so. Wouldn’t a couple of inches be pretty negligible compared to getting into a good school? Aren’t we applying double standards by objecting to this issue alone? Do we really live in a society that values equal opportunities? People may be equal before the law, but they are not equal to each other, and each one of us is tacitly accepting that fact when we acknowledge the social hierarchy (in other words, every time we interact with someone who is our superior). I am not crazy about this fact, but that’s just how people are, and this has to be taken into account when discussing this.
Weiye Loh

Happiness: Do we have a choice? » Scienceline - 0 views

  • “Objective choices make a difference to happiness over and above genetics and personality,” said Bruce Headey, a psychologist at Melbourne University in Australia. Headey and his colleagues analyzed annual self-reports of life satisfaction from over 20,000 Germans who have been interviewed every year since 1984. He compared five-year averages of people’s reported life satisfaction, and plotted their relative happiness on a percentile scale from 1 to 100. Headey found that as time went on, more and more people recorded substantial changes in their life satisfaction. By 2008, more than a third had moved up or down on the happiness scale by at least 25 percent, compared to where they had started in 1984.
  • Headey’s findings, published in the October 19th issue of Proceedings of the National Academy of Sciences, run contrary to what is known as the happiness set-point theory — the idea that even if you win the lottery or become a paraplegic, you’ll revert back to the same fixed level of happiness within a year or two. This psychological theory was widely accepted in the 1990s because it explained why happiness levels seemed to remain stable over the long term: They were mainly determined early in life by genetic factors including personality traits.
  • But even this dynamic choice-driven picture does not fully capture the nuance of what it means to be happy, said Jerome Kagan, a Harvard University developmental psychologist. He warns against conflating two distinct dimensions of happiness: everyday emotional experience (an assessment of how you feel at the moment) and life evaluation (a judgment of how satisfied you are with your life). It’s the difference between “how often did you smile yesterday?” and “how does your life compare to the best possible life you can imagine?”
  • ...4 more annotations...
  • Kagan suggests that we may have more choice over the latter, because life evaluation is not a function of how we currently feel — it is a comparison of our life to what we decide the good life should be.
  • Kagan has found that young children differ biologically in the ease with which they can feel happy, or tense, or distressed, or sad — what he calls temperament. People establish temperament early in life and have little capacity to change it. But they can change their life evaluation, which Kagan describes as an ethical concept synonymous with “how good of a life have I led?” The answer will depend on individual choices and the purpose they create for themselves. A painter who is constantly stressed and moody (unhappy in the moment) may still feel validation in creating good artwork and may be very satisfied with his life (happy as a judgment).
  • when it comes to happiness, our choices may matter — but it depends on what the choices are about, and how we define what we want to change.
  • Graham thinks that people may evaluate their happiness based on whichever dimension — happiness at the moment, or life evaluation — they have a choice over.
  •  
    Instead of existing as a stable equilibrium, Headey suggests that happiness is much more dynamic, and that individual choices - about one's partner, working hours, social participation and lifestyle - make substantial and permanent changes to reported happiness levels. For example, doing more or fewer paid hours of work than you want, or exercising regularly, can have just as much impact on life satisfaction as having an extroverted personality.