New Media Ethics 2009 course / Group items tagged Progressive

Approaching the cliffs of time - Plane Talking

  • have you noticed how the capacity of the media to explain in lay terms such matters as quantum physics, or cosmology, is contracting faster than the universe is expanding? The more mind-warping the discoveries, the less opportunity there is to fit them into 30 seconds in a newscast, or 300 words in print.
  • There has been a long-running conspiracy of convenience between science reporters and the science being reported to leave out inconvenient, time- and space-consuming explanations, and go for the punch line that best suits the use of the media to lobby for more project funding.
  • Almost every space story I have written over 50 years has been about projects claiming to ‘discover the origins of the solar system/life on earth/life on Mars/discover the origins of the universe, or recover parts of things like comets because they are as old as the sun, except that we have discovered they aren’t ancient at all.’ None of them were ever designed to achieve those goals. They were brilliant projects, brilliantly misrepresented by the scientists and the reporters, because an accurate story would have been incomprehensible to 99.9% of readers or viewers.
  • this push to abbreviate and banalify the more esoteric but truly intriguing mysteries of the universe has lurched close to parody yet failed to be as thoughtfully funny as Douglas Adams was with the Hitchhiker’s Guide to the Galaxy
  • Our most powerful telescopes are approaching what Columbia physicist and mathematician Brian Greene recently called the cliffs of time, beyond which an infinitely large yet progressively emptier universe lies forever invisible to us and vice versa, since to that universe, we also lie beyond the cliffs of time. This capturing of images from the start of time is being done by finding incredibly faint and old light using computing power and forensic techniques not even devised when Hubble was assembled on Earth. In this instance Hubble has found the faint image of an object that emitted light a mere 480 million years after the ‘big bang’ 13.7 billion years ago. It is, thus, nearly as old as time itself.
  • The conspiracy of oversimplification has until now kept the really gnarly principles involved in big bang theory out of the general media, because nothing short of a first-class degree in theoretical and practical physics is going to suffice for a reasonable overview. Plus a 100,000-word article with a few thousand diagrams.
Skepticblog » A Creationist Challenge

  • The commenter starts with some ad hominems, asserting that my post is biased and emotional. They provide no evidence or argument to support this assertion. And of course they don’t even attempt to counter any of the arguments I laid out. They then follow up with an argument from authority – they can link to a PhD creationist – so there.
  • The article that the commenter links to is by Henry M. Morris, founder of the Institute for Creation Research (ICR) – a young-earth creationist organization. Morris was (he died in 2006 following a stroke) a PhD – in civil engineering. This point is irrelevant to his actual arguments. I bring it up only to put the commenter’s argument from authority into perspective. No disrespect to engineers – but they are not biologists. They have no expertise relevant to the question of evolution – no more than my MD does. So let’s stick to the arguments themselves.
  • The article by Morris is an overview of so-called Creation Science, of which Morris was a major architect. The arguments he presents are all old creationist canards, long deconstructed by scientists. In fact I address many of them in my original refutation. Creationists generally are not very original – they recycle old arguments endlessly, regardless of how many times they have been destroyed.
  • Morris also makes heavy use of the “taking a quote out of context” strategy favored by creationists. His quotes are often from secondary sources and are incomplete.
  • A more scholarly (i.e. intellectually honest) approach would be to cite actual evidence to support a point. If you are going to cite an authority, then make sure the quote is relevant, in context, and complete.
  • And even better, cite a number of sources to show that the opinion is representative. Rather we get single, partial, and often outdated quotes without context.
  • (nature is not, it turns out, cleanly divided into “kinds”, which have no operational definition). He also repeats this canard: “Such variation is often called microevolution, and these minor horizontal (or downward) changes occur fairly often, but such changes are not true ‘vertical’ evolution.” This is the microevolution/macroevolution false dichotomy. It is only “often called” this by creationists – not by actual evolutionary scientists. There is no theoretical or empirical division between macro and micro evolution. There is just evolution, which can result in the full spectrum of change from minor tweaks to major changes.
  • Morris wonders why there are no “dats” – dog-cat transitional species. He misses the hierarchical nature of evolution. As evolution proceeds, and creatures develop a greater and greater evolutionary history behind them, they increasingly are committed to their body plan. This results in a nested hierarchy of groups – which is reflected in taxonomy (the naming scheme of living things).
  • once our distant ancestors developed the basic body plan of chordates, they were committed to that body plan. Subsequent evolution resulted in variations on that plan, each of which then developed further variations, etc. But evolution cannot go backward, undo evolutionary changes and then proceed down a different path. Once an evolutionary line has developed into a dog, evolution can produce variations on the dog, but it cannot go backwards and produce a cat.
  • Stephen Jay Gould described this distinction as the difference between disparity and diversity. Disparity (the degree of morphological difference) actually decreases over evolutionary time, as lineages go extinct and the surviving lineages are committed to fewer and fewer basic body plans. Meanwhile, diversity (the number of variations on a body plan) within groups tends to increase over time.
  • the kinds of evolutionary changes that were happening in the past, when species were relatively undifferentiated (compared to contemporary species), are indeed not happening today. Modern multi-cellular life has 600 million years of evolutionary history constraining its future evolution – which was not true of species at the base of the evolutionary tree. But modern species are indeed still evolving.
  • Here is a list of research documenting observed instances of speciation. The list is from 1995, and there are more recent examples to add to the list. Here are some more. And here is a good list with references of more recent cases.
  • Next Morris tries to convince the reader that there is no evidence for evolution in the past, focusing on the fossil record. He repeats the false claim (again, which I already dealt with) that there are no transitional fossils: “Even those who believe in rapid evolution recognize that a considerable number of generations would be required for one distinct ‘kind’ to evolve into another more complex kind. There ought, therefore, to be a considerable number of true transitional structures preserved in the fossils — after all, there are billions of non-transitional structures there! But (with the exception of a few very doubtful creatures such as the controversial feathered dinosaurs and the alleged walking whales), they are not there.”
  • I deal with this question at length here, pointing out that there are numerous transitional fossils for the evolution of terrestrial vertebrates, mammals, whales, birds, turtles, and yes – humans from ape ancestors. There are many more examples, these are just some of my favorites.
  • Much of what follows (as you can see, it takes far more space to correct the lies and distortions of Morris than it did to create them) is classic denialism – misinterpreting the state of the science, and confusing lack of information about the details of evolution with lack of confidence in the fact of evolution. Here are some examples – he quotes Niles Eldredge: “It is a simple ineluctable truth that virtually all members of a biota remain basically stable, with minor fluctuations, throughout their durations. . . .” So how do evolutionists arrive at their evolutionary trees from fossils of organisms which didn’t change during their durations? Beware the “….” – that means that meaningful parts of the quote are being omitted. I happen to have the book (The Pattern of Evolution) from which Morris mined that particular quote. Here’s the rest of it: “(Remember, by ‘biota’ we mean the commonly preserved plants and animals of a particular geological interval, which occupy regions often as large as Roger Tory Peterson’s ‘eastern’ region of North American birds.) And when these systems change – when the older species disappear, and new ones take their place – the change happens relatively abruptly and in lockstep fashion.”
  • Eldredge was one of the authors (with Gould) of punctuated equilibrium theory. This states that, if you look at the fossil record, what we see are species emerging, persisting with little change for a while, and then disappearing from the fossil record. They theorize that most species most of the time are at equilibrium with their environment, and so do not change much. But these periods of equilibrium are punctuated by disequilibrium – periods of change when species will have to migrate, evolve, or go extinct.
  • This does not mean that speciation does not take place. And if you look at the fossil record we see a pattern of descendant species emerging from ancestor species over time – in a nice evolutionary pattern. Morris gives a complete misrepresentation of Eldredge’s point – once again we see intellectual dishonesty of an astounding degree in his methods.
  • Regarding the atheism = religion comment, it reminds me of a great analogy that I first heard on Twitter from Evil Eye (paraphrased): “saying atheism is a religion is like saying ‘not collecting stamps’ is a hobby.”
  • Morris next tackles the genetic evidence, writing: “More often is the argument used that similar DNA structures in two different organisms proves common evolutionary ancestry. Neither argument is valid. There is no reason whatever why the Creator could not or would not use the same type of genetic code based on DNA for all His created life forms. This is evidence for intelligent design and creation, not evolution.”
  • Here is an excellent summary of the multiple lines of molecular evidence for evolution. Basically, if we look at the sequence of DNA, the variations in trinucleotide codes for amino acids, and amino acids for proteins, and transposons within DNA we see a pattern that can only be explained by evolution (or a mischievous god who chose, for some reason, to make life look exactly as if it had evolved – a non-falsifiable notion).
  • The genetic code is essentially composed of four letters (ACGT for DNA), and every triplet of letters equates to a specific amino acid. There are 64 (4^3) possible three-letter combinations, and 20 amino acids. A few combinations are used for housekeeping, like a code to indicate where a gene stops, but the rest code for amino acids. There are more combinations than amino acids, so most amino acids are coded for by multiple combinations. This means that a one-letter mutation might change one code for a particular amino acid into another code for the same amino acid. This is called a silent mutation because it does not result in any change in the resulting protein.
  • It also means that there are very many possible codes for any individual protein. The question is – which codes, out of the gazillions of possible codes, do we find for each type of protein in different species? If each “kind” were created separately there would not need to be any relationship. Each kind could have its own variation, or they could all be identical if they were essentially copied (plus any mutations accruing since creation, which would be minimal). But if life evolved then we would expect that the exact sequence of DNA code would be similar in related species, but progressively different (through silent mutations) over evolutionary time.
  • This is precisely what we find – in every protein we have examined. This pattern is required if evolution is true. It cannot be explained by random chance (the probability is absurdly tiny – essentially zero). And it makes no sense from a creationist perspective. This same pattern (a branching hierarchy) emerges when we look at amino acid substitutions in proteins and other aspects of the genetic code.
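The codon redundancy described in the excerpts above can be demonstrated in a few lines. A minimal sketch (Python; the codon assignments are a small excerpt of the standard genetic code, and the helper function is illustrative, not from the post):

```python
# Sketch: codon redundancy and silent mutations.
# CODON_TABLE is a small excerpt of the standard genetic code;
# the full table maps all 64 triplets to 20 amino acids plus stop signals.
CODON_TABLE = {
    "GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",   # four codons, one amino acid
    "TTA": "Leu", "TTG": "Leu", "CTT": "Leu", "CTC": "Leu",
    "CTA": "Leu", "CTG": "Leu",                               # six codons for leucine
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",              # "housekeeping" stop codes
}

def is_silent(before: str, after: str) -> bool:
    """A mutation is silent if both codons encode the same amino acid."""
    return CODON_TABLE.get(before) == CODON_TABLE.get(after)

print(is_silent("GCT", "GCC"))  # True  -- DNA changed, protein unchanged
print(is_silent("GCT", "CTT"))  # False -- alanine swapped for leucine
```

Because synonymous codons substitute freely, related lineages accumulate different silent variants over time – the progressive divergence the excerpt predicts.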
  • Morris goes for the second law of thermodynamics again – in the exact way that I already addressed. He responds to scientists correctly pointing out that the Earth is an open system by writing: “This naive response to the entropy law is typical of evolutionary dissimulation. While it is true that local order can increase in an open system if certain conditions are met, the fact is that evolution does not meet those conditions. Simply saying that the earth is open to the energy from the sun says nothing about how that raw solar heat is converted into increased complexity in any system, open or closed. The fact is that the best known and most fundamental equation of thermodynamics says that the influx of heat into an open system will increase the entropy of that system, not decrease it. All known cases of decreased entropy (or increased organization) in open systems involve a guiding program of some sort and one or more energy conversion mechanisms.”
  • Energy has to be transformed into a usable form in order to do the work necessary to decrease entropy. That’s right. That work is done by life. Plants take solar energy (again – I’m not sure what “raw solar heat” means) and convert it into food. That food fuels the processes of life, which include development and reproduction. Evolution emerges from those processes – therefore the conditions that Morris speaks of are met.
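The open-system rebuttal can also be made quantitative with a back-of-the-envelope entropy budget. A sketch (Python) using round textbook values for the absorbed solar power and the effective temperatures — assumed figures for illustration, not numbers from the post:

```python
# Back-of-the-envelope entropy budget for Earth as an open system.
# Round textbook values (assumptions for illustration):
P = 1.2e17        # W, solar power absorbed by Earth
T_SUN = 5800.0    # K, effective temperature of incoming sunlight
T_EARTH = 255.0   # K, effective temperature of Earth's outgoing radiation

# A heat flow Q at temperature T carries entropy Q/T, so in steady state
# the cold outgoing radiation exports far more entropy than the hot
# incoming sunlight delivers.
s_in = P / T_SUN      # entropy flux in,  ~2e13 W/K
s_out = P / T_EARTH   # entropy flux out, ~5e14 W/K

print(f"entropy in : {s_in:.1e} W/K")
print(f"entropy out: {s_out:.1e} W/K")
print(f"net export : {s_out - s_in:.1e} W/K")  # room for local order to grow
```

The surplus export is what lets local order (life) increase without violating the second law.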
  • But Morris next makes a very confused argument: “Evolution has neither of these. Mutations are not ‘organizing’ mechanisms, but disorganizing (in accord with the second law). They are commonly harmful, sometimes neutral, but never beneficial (at least as far as observed mutations are concerned). Natural selection cannot generate order, but can only ‘sieve out’ the disorganizing mutations presented to it, thereby conserving the existing order, but never generating new order.”
  • The notion that evolution (as if it’s a thing) needs to use energy is hopelessly confused. Evolution is a process that emerges from the system of life – and life certainly can use solar energy to decrease its entropy, and by extension the entropy of the biosphere. Morris slips into what is often presented as an information argument. (Yet again – already dealt with. The pattern here is that we are seeing a shuffling around of the same tired creationist arguments.) First, it is not true that most mutations are harmful. Many are silent, and many of those that are not silent are not harmful. They may be neutral, they may be a mixed blessing, and their relative benefit vs harm is likely to be situational. They may be fatal. And they also may be simply beneficial.
  • Morris finishes with a long rambling argument that evolution is a religion: “Evolution is promoted by its practitioners as more than mere science. Evolution is promulgated as an ideology, a secular religion — a full-fledged alternative to Christianity, with meaning and morality . . . . Evolution is a religion. This was true of evolution in the beginning, and it is true of evolution still today.” Morris ties evolution to atheism, which, he argues, makes it a religion. This assumes, of course, that atheism is a religion. That depends on how you define atheism and how you define religion – but it is mostly wrong. Atheism is a lack of belief in one particular supernatural claim – that does not qualify it as a religion.
  • But mutations are not “disorganizing” – that does not even make sense. It seems to be based on a purely creationist notion that species are in some privileged perfect state, and any mutation can only take them farther from that perfection. For those who actually understand biology, life is a kluge of compromises and variation. Mutations are mostly lateral moves from one chaotic state to another. They are not directional. But they do provide raw material, variation, for natural selection. Natural selection cannot generate variation, but it can select among that variation to provide differential survival. This is an old game played by creationists – pointing out that mutations are not selective, and that natural selection is not creative in the sense of increasing variation. Both claims are true but irrelevant, because the two processes work together: mutations increase variation and information, and selection acting on that variation is a creative force, resulting in the differential survival of the better adapted.
  • One of my earlier posts on SkepticBlog was Ten Major Flaws in Evolution: A Refutation, published two years ago. Occasionally a creationist shows up to snipe at the post, like this one: “i read this and found it funny. It supposedly gives a scientific refutation, but it is full of more bias than fox news, and a lot of emotion as well. here’s a scientific case by an actual scientist, you know, one with a ph.D, and he uses statements by some of your favorite evolutionary scientists to insist evolution doesn’t exist. i challenge you to write a refutation on this one. http://www.icr.org/home/resources/resources_tracts_scientificcaseagainstevolution/” Challenge accepted.
Bankers, Buyouts & Billionaires: Why Big Herba's Research Deficit Isn't About...

  • A skeptic challenges a natural health product for the lack of an evidentiary base.  A proponent of that product responds that the skeptic has made a logical error – an absence of evidence is not evidence of absence, and in such a scenario it’s not unreasonable to rely on patient reporting and traditional uses as a guide. The skeptic chimes back with a dissertation on the limits of anecdotal evidence and arguments from antiquity — especially when the corresponding pharma products have a data trail supporting their safety and efficacy. The proponent responds that it’s unfair to hold natural health products to the same evidentiary standard, because only pharma has the money to fund proper research, and they only do so for products they can patent. You can’t patent nature, so no research into natural health products gets done.
  • look here, here, and here for recent examples
  • The natural health industry isn’t rich enough to sustain proper research. Is that true? Natural health, by the numbers: on the surface, it certainly wouldn’t appear so. While the industry can be difficult to get a bead on – due both to differing definitions of what it includes (organic foods? natural toothpaste?), and the fact that many of the key players are private companies that don’t report revenues – by any measure it’s sizable. A survey by the University of Guelph references KPMG estimates that the Natural Health Products sector in Canada grew from $1.24B in 2000 to $1.82B in 2006 – a growth rate that would bring the market to about $2.5B today. Figures from the Nutrition Business Journal quoted in the same survey seem to agree, suggesting Canada is 3% of a global “supplements” (herbal, homeopathy, vitamins) market that was $68B globally in 2006 and growing at 5% a year – bringing it to perhaps $85B today. Figures from various sources quoted in a recent Health Canada report support these estimates.
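The projections quoted above are straightforward compound-growth arithmetic, and they check out. A quick verification (Python; the five-year gap from 2006 to “today” and the back-calculated growth rate are my assumptions):

```python
# Sanity check of the market-size projections quoted above.
kpmg_2000, kpmg_2006 = 1.24, 1.82              # $B CAD, Canadian NHP sector
rate = (kpmg_2006 / kpmg_2000) ** (1 / 6) - 1  # implied annual growth, 2000-2006
print(f"implied annual growth: {rate:.1%}")                           # ~6.6%
print(f"projected 5 years on : ${kpmg_2006 * (1 + rate) ** 5:.2f}B")  # ~$2.5B

global_2006, growth = 68.0, 0.05               # $B global supplements, 5%/yr
print(f"global 5 years on    : ${global_2006 * (1 + growth) ** 5:.0f}B")
# ~$87B, close to the quoted "perhaps $85B today"
```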
  • While certainly not as big as the ($820B) pharmaceutical industry, $85B is still an awful lot of money, and it’s hard to imagine it not being enough to carve out a research budget from. Yet research isn’t done by entire industries, but by one tier of the value chain — the companies that manufacture and distribute the products.  If they’re not big enough to fund the type of research skeptics are looking for, it won’t be done, so let’s consider some of the bigger players before we make that call.
  • French giant Boiron (EPA:BOI) is by far the largest distributor of natural health products in Canada – they’re responsible for nearly 4000 (15%) of the 26,000 products approved by Health Canada’s Natural Health Products Directorate. They’re also one of the largest natural health products companies globally, with 2010 revenues of €520M ($700M CAD) – a size achieved not just through the success of killer products like Oscillococcinum, but also through acquisitions. In recent years, the company has acquired both its main French rival Dolisos (giving them 90% of the French homeopathy market) and the largest homeopathy company in Belgium, Unda. So this is a big company that’s prepared to spend money to get even bigger. What about spending some of that money on research? Well, ostensibly it’s a priority: “Since 2005, we have devoted a growing level of resources to develop research,” they proclaim in the opening pages of their latest annual report, citing 70 in-progress research projects. Yet the numbers tell a different story – €4.2M in R&D expenditures in 2009, just 0.8% of revenues.
  • To put that in perspective, consider that in the same year, GlaxoSmithKline spent 14% of its revenues on R&D, Pfizer spent 15%, and Merck spent a whopping 21%.
  • But if Boiron’s not spending like pharma on research, there’s one line item where they do go toe to toe: marketing. The company spent €114M – a full 21% of revenues – on marketing in 2009. By contrast, GSK, Pfizer and Merck reported 33%, 29%, and 30% of revenues respectively on their “Selling, General, and Administrative” (SG&A) line – which includes not just sales & marketing expenses, but also executive salaries, support staff, legal, rent, utilities, and other overhead costs. Once those are subtracted out, it’s likely that Boiron spends at least as much of its revenues on marketing as Big Pharma.
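The quoted ratios are easy to verify from the figures in these excerpts (Python; note that, as in the excerpts themselves, the 2010 revenue is set against 2009 expense figures, so the percentages are approximate):

```python
# Boiron's spending ratios from the figures quoted above (EUR millions).
revenue = 520.0     # 2010 revenue as quoted
r_and_d = 4.2       # 2009 R&D spend
marketing = 114.0   # 2009 marketing spend

print(f"R&D      : {r_and_d / revenue:.1%} of revenue")    # ~0.8%
print(f"Marketing: {marketing / revenue:.1%} of revenue")  # ~21.9%
```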
Do Fights Over Climate Communication Reflect the End of 'Scientism'? - NYTimes.com

  • climate (mis)communication. Two sessions explored a focal point of this blog, the interface of climate science and policy, and the roles of scientists and the media in fostering productive discourse. Both discussions homed in on an uncomfortable reality — the erosion of a longstanding presumption that scientific information, if communicated more effectively, will end up framing policy choices.
  • First I sat in on a symposium on the future of climate communication in a world where traditional science journalism is a shrinking wedge of a growing pie of communication options. The discussion didn’t really provide many answers, but did reveal the persistent frustrations of some scientists with the way the media cover their field.
  • Sparks flew between Kerry Emanuel, a climatologist long focused on hurricanes and warming, and Seth Borenstein, who covers climate and other science for the Associated Press. Borenstein spoke highly of a Boston Globe dual profile of Emanuel and his colleague at the Massachusetts Institute of Technology,  Richard Lindzen. To Emanuel, the piece was a great example of what he described as “he said, he said” coverage of science. Borenstein replied that this particular piece was not centered on the science, but on the men — in the context of their relationship, research and worldviews. (It’s worth noting that Emanuel, whom I’ve been interviewing on hurricanes and climate since 1988, describes himself as  a conservative and, mainly, Republican voter.)
  • Keith Kloor, blogging on the session at Collide-a-Scape, included a sobering assessment of the scientist-journalist tensions over global warming from Tom Rosenstiel, a panelist and long-time journalist who now heads up Pew’s Project for Excellence in Journalism: “If you’re waiting for the press to persuade the public, you’re going to lose. The press doesn’t see that as its job.”
  • scientists have a great opportunity, and responsibility, to tell their own story more directly, as some are doing occasionally through Dot Earth “Post Cards” and The Times’ Scientist at Work blog.
  • Naomi Oreskes, a historian of science at the University of California, San Diego, and co-author of “Merchants of Doubt”: “Of Mavericks and Mules”; Gavin Schmidt of NASA’s Goddard Institute for Space Studies and Realclimate.org: “Between Sound Bites and the Scientific Paper: Communicating in the Hinterland”; Thomas Lessl, a scholar at the University of Georgia focused on the cultural history of science: “Reforming Scientific Communication About Anthropogenic Climate Change”
  • I focused on two words in the title of the session — diversity and denial. The diversity of lines of inquiry in climate science has a two-pronged impact. It helps build a robust overall picture of a growing human influence on a complex system. But for many of the most important pixel points in that picture, there is robust, durable and un-manufactured debate. That debate can then be exploited by naysayers eager to cast doubt on the enterprise, when in fact — as I’ve written here before — it’s simply the (sometimes ugly) way that science progresses.
  • My denial, I said, lay in my longstanding presumption, like that of many scientists and journalists, that better communication of information will tend to change people’s perceptions, priorities and behavior. This attitude, in my view, crested for climate scientists in the wake of the 2007 report from the Intergovernmental Panel on Climate Change.
  • In his talk, Thomas Lessl said much of this attitude is rooted in what he and some other social science scholars call “scientism,” the idea — rooted in the 19th century — that scientific inquiry is a “distinctive mode of inquiry that promises to bring clarity to all human endeavors.” [5:45 p.m. | Updated Chris Mooney sent an e-mail noting how the discussion below resonates with "Do Scientists Understand the Public," a report he wrote last year for the American Academy of Arts and Sciences and explored here.]
  • Scientism not only promotes the notion that scientific knowledge is the only kind of knowledge; it also promotes communication behavior that is bad for the scientific ethos. By this I mean that it turns such communication into combat. By presuming that scientific understanding is the only criterion that matters, scientism inclines public actors to treat resistant audiences as an enemy: If the public doesn’t get the science, shame on the public. If the public rejects a scientific claim, it is either because they don’t get it or because they operate upon some sinister motive.
  • Scientific knowledge cannot take the place of prudence in public affairs.
  • Prudence, according to Robert Hariman, “is the mode of reasoning about contingent matters in order to select the best course of action. Contingent events cannot be known with certainty, and actions are intelligible only with regard to some idea of what is good.”
  • Scientism tends to suppose a one-size-fits-all notion of truth telling. But in the public sphere, people don’t think that way. They bring to the table a variety of truth standards: moral judgment, common-sense judgment, a variety of metaphysical perspectives, and ideological frameworks. The scientists who communicate about climate change may regard these standards as wrong-headed or at best irrelevant, but scientists don’t get to decide this in a democratic debate. When scientists become public actors, they have stepped outside of science, and they are obliged to honor the rules of communication and thought that govern the rest of the world. This might be different if climate change were just about determining the causes of climate change, but it never is. Getting from the acceptance of ACC to acceptance of the kinds of emissions-reducing policies that are being advocated takes us from one domain of knowing into another.
  • One might object by saying that the formation of public policy depends upon first establishing the scientific bases of ACC, and that the first question can be considered independently of the second. Of course that is right, but that is an abstract academic distinction that does not hold in public debates. In public debates a different set of norms and assumptions apply: motive is not to be casually set aside as a nonfactor. Just because scientists customarily bracket off scientific topics from their policy implications does not mean that lay people do this—or even that they should be compelled to do so. When scientists talk about one thing, they seem to imply the other. But which is the motive force? Are they advocating for ACC because they subscribe to a political worldview that supports legal curtailments upon free enterprise? Or do they support such a political worldview because they are convinced of ACC? The fact that they speak as scientists may mean to other scientists that they reason from evidence alone. But the public does not necessarily share this assumption. If scientists don’t respect this fact about their audiences, they are bound to get in trouble. [Read the rest.]
The secret to a long life isn't what you think - USATODAY.com

  • Researchers Howard Friedman and Leslie Martin report their conclusions in a new book, The Longevity Project. "Everybody has the ideas — don't stress, don't worry, don't work so hard, retire and go play golf," says Friedman, a psychology professor at University of California-Riverside. "We did not find these patterns to exist in people who thrived."
  • At the core of their 20 years of research is a study started by Stanford University psychologist Lewis Terman in 1921. Terman died in 1956, but other researchers carried on the study. One participant was biologist Ancel Keys, whose life-long work helped popularize the Mediterranean diet. He died in 2004 at age 100. He enjoyed gardening as an activity much of his life.
  • “if your activities rise or stay high in middle age, you definitely stay healthier and live longer,” says Martin, a research psychologist at University of California-Riverside.
  • The participants who lived long, happy lives "were not cynical rebels and loners" but accomplished people who were satisfied with their lives. Many knew that worrying is sometimes a good thing. The authors also looked at a study of Medicare patients that found that "neuroticism was health-protective."
  • spending 30 minutes at least four times a week expending energy at a moderate to intense level is "good up-to-date medical advice but poor practical advice."
  • Being active in middle age was most important to health and longevity in the study. But rather than vow to do something to get in shape (like jogging) and then hate it and not stick with it, find something you like to do. “We looked at those who stayed active,” Friedman says. “It wasn’t the kids on sports teams. It’s the ones who had activities at one point and had the pattern of keeping them ... They were doing stuff that got them out of the chair ... whether it was gardening, walking the dog or going to museums.”
  • One of the best childhood personality predictors of longevity was conscientiousness — "qualities of a prudent, persistent, well-organized person, like a scientist or professor — somewhat obsessive and not at all carefree,"
  • the most obvious reason "is that conscientious people do more things to protect their health and engage in fewer activities that are risky."
  • "What characterized the people who thrived is a combination of their own persistence and dependability and the help of other people," Friedman says. The young adults who were thrifty, persistent, detail-oriented and responsible lived the longest.
  • Those with the most career success were the least likely to die young. Those who moved from job to job without a clear progression were less likely to have long lives than those with increasing responsibilities.
  • "continually productive men and women lived much longer than the laid-back comrades. ... This production orientation mattered more than their social relationships or their sense of happiness or well-being."
  • those who were most engaged in pursuing their goals.
  • “a sexually satisfying and happy marriage is a very good indicator of future health and long life,” but being single for a woman can be just as healthy as being in a marriage, especially if she has other fulfilling social relationships.
  • The married men in the study lived the longest. Single men outlived remarried men but didn’t live as long as married men. Among women, those who divorced their husbands and stayed single lived nearly as long as steadily married women. “Being divorced was much less harmful to a woman’s health,” the authors say.
  • The idea that your job or your boss is leading you to an early grave is one of several myths debunked in an analysis of a 90-year study that followed 1,528 Americans. Among other myths: be optimistic, get married, go to church, eat broccoli and get a gym membership.
A lesson in citing irrelevant statistics | The Online Citizen

  • Statistics that are quoted, by themselves, may be quite meaningless, unless they are on a comparative basis. To illustrate this, if we want to say that Group A (poorer kids) is not significantly worse off than Group B (richer kids), then it may be pointless to just cite the statistics for Group A, without Group B’s.
  • “How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE” “One in six scores in the top one-third at PSLE” What we need to know for comparative purposes is the percentage of richer kids who score in the top two-thirds too.
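There is a baseline hiding in the quoted statistic itself: by definition, the top two-thirds holds two-thirds of the whole cohort, so a 50% rate for the bottom-third group implies a higher rate for everyone else. A toy calculation (Python; the round cohort size and the assumption that the bottom-third group is exactly one third of pupils are illustrative):

```python
# Toy illustration: the "one in two" figure already implies a comparison.
cohort = 3000                      # illustrative cohort size
poor = cohort // 3                 # bottom one-third by socio-economic background
rest = cohort - poor

top_places = 2 * cohort // 3       # "top two-thirds" holds 2/3 of pupils by definition
poor_in_top = poor // 2            # "one in two" poorer kids make it

rest_in_top = top_places - poor_in_top  # remaining top places go to the other pupils
print(f"poorer kids in top two-thirds: {poor_in_top / poor:.0%}")  # 50%
print(f"other  kids in top two-thirds: {rest_in_top / rest:.0%}")  # 75%
```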
  • “… one in five scores in the top 30% at O and A levels… One in five goes to university and polys” What’s the data for richer kids? Since the proportion of the entire population going to university and polys has increased substantially, a static one-in-five rate for poorer kids suggests that they have actually fallen behind!
  • The Minister was quoted as saying: “My parents had six children. My first home as a young boy was a rental flat in Zion Road. We shared it as tenants with other families.” Citing individuals who made it may be of no “statistical” relevance; what we need are statistics on the proportion of poorer kids versus richer kids who get scholarships, relative to their representation in the population.
  • “More spent on primary and secondary/JC schools. This means having significantly more and better teachers, and having more programmes to meet children’s specific needs” What has spending more money, which is what most countries do, got to do with the argument about whether poorer kids are disadvantaged?
  • Straits Times journalist Li XueYing put the crux of the debate in the right perspective: “Dr Ng had noted that ensuring social mobility ‘cannot mean equal outcomes, because students are inherently different’. But can it be that those from low-income families are consistently ‘inherently different’ to such an extent?”
  • Relevant statistics: perhaps the most damning statistic showing that poorer kids are disadvantaged was the chart from the Ministry of Education (provided by the Straits Times), which showed that the percentage of Primary 1 pupils who lived in 1 to 3-room HDB flats and subsequently progressed to University and/or Polytechnic has been declining since around 1986.
Science-Based Medicine » Skepticism versus nihilism about cancer and science-...

  • I’m a John Ioannidis convert, and I accept that there is a lot of medical literature that is erroneous. (Just search for Dr. Ioannidis’ last name on this blog, and you’ll find copious posts praising him and discussing his work.) In fact, as I’ve pointed out, most medical researchers instinctively know that most new scientific findings will not hold up to scrutiny, which is why we rarely accept the results of a single study, except in unusual circumstances, as being enough to change practice. I also have pointed out many times that this is not necessarily a bad thing. Replication is key to verification of scientific findings, and more often than not provocative scientific findings are not replicated. Does that mean they shouldn’t be published?
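Ioannidis’ point has a simple quantitative core: the chance that a statistically significant finding is real depends on the prior odds that the tested hypothesis was true. A minimal sketch (Python; the prior, alpha, and power values are illustrative assumptions, not figures from any particular paper):

```python
# Positive predictive value of a "significant" finding.
prior = 0.10   # fraction of tested hypotheses that are actually true (assumed)
alpha = 0.05   # false-positive rate
power = 0.80   # chance a true effect reaches significance

true_pos = prior * power           # true hypotheses that test positive
false_pos = (1 - prior) * alpha    # false hypotheses that test positive
ppv = true_pos / (true_pos + false_pos)

print(f"P(real | significant) = {ppv:.0%}")  # ~64%, worse with bias or low power
```

Even under these generous assumptions, roughly a third of “positive” findings are false, which is why replication, not any single study, settles the question.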
  • As for pseudoscience, I’m half tempted to agree with Dr. Spector, but just not in the way he thinks. Unfortunately, over the last 20 years or so, there has been an increasing amount of pseudoscience in the medical literature in the form of “complementary and alternative medicine” (CAM) studies of highly improbable remedies or even virtually impossible ones (i.e., homeopathy). However, that does not appear to be what Dr. Spector is talking about, which is why I looked up his references. The second reference is to an SI article from 2009 entitled Science and Pseudoscience in Adult Nutrition Research and Practice. There, and only there, did I find out just what it is that Dr. Spector apparently means by “pseudoscience”: “By pseudoscience, I mean the use of inappropriate methods that frequently yield wrong or misleading answers for the type of question asked. In nutrition research, such methods also often misuse statistical evaluations.”
  • Dr. Spector doesn’t really know the difference between inadequately rigorous science and pseudoscience! Now, don’t get me wrong. I know that it’s not always easy to distinguish science from pseudoscience, especially at the fringes, but in general bad science has to go a lot further than Dr. Spector thinks to merit the term “pseudoscience.” It is clear (to me, at least) from his articles that Dr. Spector throws the term “pseudoscience” around rather more loosely than he should, using it as a pejorative for any clinical science less rigorous than a randomized, double-blind, placebo-controlled trial that meets FDA standards for approval of a drug (his pharma background coming to the fore, no doubt). Pseudoscience, Dr. Spector. You keep using that word. I do not think it means what you think it means. Indeed, I almost get the impression from his articles that Dr. Spector views any study that doesn’t reach FDA-level standards for drug approval to be pseudoscience.
  • Medical science, when it works well, tends to progress from basic science, to small pilot studies, to larger randomized studies, and then–only then–to those big, rigorous, insanely expensive randomized, double-blind, placebo-controlled trials. Dr. Spector mentions hierarchies of evidence, but he seems to fall into a false dichotomy, namely that if it’s not Level I evidence, it’s crap. The problem is, as Mark pointed out, in medicine we often don’t have Level I evidence for many questions. Indeed, for some questions, we will never have Level I evidence. Clinical medicine involves making decisions in the midst of uncertainty, sometimes extreme uncertainty.
  • Dr. Spector then proceeds to paint a picture of reckless physicians proceeding on crappy studies to pump women full of hormones. Actually, it was more than a bit more complicated than that. That was the time when I was in my medical training, and I remember the discussions we had regarding the strength (or lack thereof) of the epidemiological data and the lack of good RCTs looking at HRT. I also remember that nothing works as well to relieve menopausal symptoms as HRT, an observation we have been reminded of again since 2003, which is the year when the first big study came out implicating HRT in increasing the risk of breast cancer (more later).
  • I found a rather fascinating editorial in the New England Journal of Medicine from more than 20 years ago that discussed the state of the evidence back then with regard to estrogen and breast cancer: Evidence that estrogen increases the risk of breast cancer has been surprisingly difficult to obtain. Clinical and epidemiologic studies and studies in animals strongly suggest that endogenous estrogen plays a part in causing breast cancer. If so, exogenous estrogen should be a potent promoter of breast cancer. Although more than 20 case–control and prospective studies of the relation of breast cancer and noncontraceptive estrogen use have failed to demonstrate the expected association, relatively few women in these studies used estrogen for extended periods. Studies of the use of diethylstilbestrol and oral contraceptives suggest that a long exposure or latency may be necessary to show any association between hormone use and breast cancer. In the Swedish study, only six years of follow-up was needed to demonstrate an increased risk of breast cancer with the postmenopausal use of estradiol. It should be noted, however, that half the women in the subgroup that provided detailed data on the duration of hormone use had taken estrogen for many years before their base-line prescription status was defined. The duration of estrogen exposure in these women before the diagnosis of breast cancer was probably seriously underestimated; a short latency cannot be attributed to estradiol on the basis of these data. Other recent studies of the use of noncontraceptive estrogen suggest a slightly increased risk of breast cancer after 15 to 20 years’ use.
  • even now, the evidence is conflicting regarding HRT and breast cancer, with the preponderance of evidence suggesting that mixed HRT (estrogen and progestin) significantly increases the risk of breast cancer, while estrogen-alone HRT very well might not increase the risk of breast cancer at all or (more likely) only very little. Indeed, I was just at a conference all day Saturday where data demonstrating this very point were discussed by one of the speakers. None of this stops Dr. Spector from categorically labeling estrogen as a “carcinogen that causes breast cancers that kill women.” Maybe. Maybe not. It’s actually not that clear. The problem, of course, is that, consistent with the first primary reports of WHI results, the preponderance of evidence finding health risks due to HRT has indicted the combined progestin/estrogen combinations as unsafe.
The Ashtray: The Ultimatum (Part 1) - NYTimes.com

  • “Under no circumstances are you to go to those lectures. Do you hear me?” Kuhn, the head of the Program in the History and Philosophy of Science at Princeton where I was a graduate student, had issued an ultimatum. It concerned the philosopher Saul Kripke’s lectures — later to be called “Naming and Necessity” — which he had originally given at Princeton in 1970 and planned to give again in the fall of 1972.
  • Whiggishness — in history of science, the tendency to evaluate and interpret past scientific theories not on their own terms, but in the context of current knowledge. The term comes from Herbert Butterfield’s “The Whig Interpretation of History,” written when Butterfield, a future Regius professor of history at Cambridge, was only 31 years old. Butterfield had complained about Whiggishness, describing it as “…the study of the past with direct and perpetual reference to the present” – the tendency to see all history as progressive, and in an extreme form, as an inexorable march to greater liberty and enlightenment. [3] For Butterfield, on the other hand, “…real historical understanding” can be achieved only by “attempting to see life with the eyes of another century than our own.” [4][5].
  • Kuhn had attacked my Whiggish use of the term “displacement current.” [6] I had failed, in his view, to put myself in the mindset of Maxwell’s first attempts at creating a theory of electricity and magnetism. I felt that Kuhn had misinterpreted my paper, and that he — not me — had provided a Whiggish interpretation of Maxwell. I said, “You refuse to look through my telescope.” And he said, “It’s not a telescope, Errol. It’s a kaleidoscope.” (In this respect, he was probably right.) [7].
  • I asked him, “If paradigms are really incommensurable, how is history of science possible? Wouldn’t we be merely interpreting the past in the light of the present? Wouldn’t the past be inaccessible to us? Wouldn’t it be ‘incommensurable?’” [8] He started moaning. He put his head in his hands and was muttering, “He’s trying to kill me. He’s trying to kill me.” And then I added, “…except for someone who imagines himself to be God.” It was at this point that Kuhn threw the ashtray at me.
  • I call Kuhn’s reply “The Ashtray Argument.” If someone says something you don’t like, you throw something at him. Preferably something large, heavy, and with sharp edges. Perhaps we were engaged in a debate on the nature of language, meaning and truth. But maybe we just wanted to kill each other.
  • That's the problem with relativism: Who's to say who's right and who's wrong? Somehow I'm not surprised to hear Kuhn was an ashtray-hurler. In the end, what other argument could he make?
  • For us to have a conversation and come to an agreement about the meaning of some word without having to refer to some outside authority like a dictionary, we would of necessity have to be satisfied that our agreement was genuine and not just a polite acknowledgement of each other's right to their opinion, can you agree with that? If so, then let's see if we can agree on the meaning of the word 'know' because that may be the crux of the matter. When I use the word 'know' I mean more than the capacity to apprehend some aspect of the world through language or some other representational symbolism. Included in the word 'know' is the direct sensorial perception of some aspect of the world. For example, I sense the floor that my feet are now resting upon. I 'know' the floor is really there, I can sense it. Perhaps I don't 'know' what the floor is made of, who put it there, and other incidental facts one could know through the usual symbolism such as language as in a story someone tells me. Nevertheless, the reality I need to 'know' is that the floor, or whatever you may wish to call the solid - relative to my body - flat and level surface supported by more structure than the earth, is really there and reliably capable of supporting me. This is true and useful knowledge that goes directly from the floor itself to my knowing about it - via sensation - that has nothing to do with my interpretive system.
  • Now I am interested in 'knowing' my feet in the same way that my feet and the whole body they are connected to 'know' the floor. I sense my feet sensing the floor. My feet are as real as the floor and I know they are there, sensing the floor, because I can sense them. Furthermore, now I 'know' that it is 'I' sensing my feet, sensing the floor. Do you see where I am going with this line of thought? I am including in the word 'know' more meaning than it is commonly given by everyday language. Perhaps it sounds as if I want to expand on the Cartesian formula of cogito ergo sum, and in truth I prefer to say I sense therefore I am. It is through my sensations of the world first and foremost that my awareness, such as it is, is actively engaged with reality. Now, any healthy normal animal senses the world, but we can't 'know' if they experience reality as we do since we can't have a conversation with them to arrive at agreement. But we humans can have this conversation and possibly agree that we can 'know' the world through sensation. We can even know what is 'I' through sensation. In fact, there is no other way to know 'I' except through sensation. Thought is symbolic representation, not direct sensing, so even though the thoughtful modality of regarding the world may be a far more reliable modality than sensation in predicting what might happen next, its very capacity for such accurate prediction is its biggest weakness, which is its capacity for error.
  • Sensation cannot be 'wrong' unless it is used to predict outcomes. Thought can be wrong for both predicting outcomes and for 'knowing' reality. Sensation alone can 'know' reality even though it is relatively unreliable, useless even, for making predictions.
  • If we prioritize our interests by placing predictability over pure knowing through sensation, then of course we will not value the 'knowledge' to be gained through sensation. But if we can switch the priorities - out of sheer curiosity perhaps - then we can enter a realm of knowledge through sensation that is unbelievably spectacular. Our bodies are 'made of' reality, and by methodically exercising our nascent capacity for self sensing, we can connect our knowing 'I' to reality directly. We will not be able to 'know' what it is that we are experiencing in the way we might wish, which is to be able to predict what will happen next or to represent to ourselves symbolically what we might experience when we turn our attention to that sensation. But we can arrive at a depth and breadth of 'knowing' that is utterly unprecedented in our lives by operating that modality.
  • One of the impressions that comes from a sustained practice of self sensing is a clearer feeling for what "I" is and why we have a word for that self referential phenomenon, seemingly located somewhere behind our eyes and between our ears. The thing we call "I" or "me", depending on the context, turns out to be a moving point, a convergence vector for a variety of images, feelings and sensations. It is a reference point into which certain impressions flow and out of which certain impulses to act diverge and which may or may not animate certain muscle groups into action. Following this tricky exercise in attention and sensation, we can quickly see for ourselves that attention is more like a focused beam and awareness is more like a diffuse cloud, but both are composed of energy, and like all energy they vibrate, they oscillate with a certain frequency. That's it for now.
  • I loved the writer's efforts to find a fixed definition of "Incommensurability"; there was of course never a concrete meaning behind the word. Smoke and mirrors.
m.guardian.co.uk

  • perhaps the reason stem cells managed to lodge themselves so deep in the public psyche was not just their awesome scientific potential, or their ability to turn into the treatments of the future.
  • For years, stem cells dominated all other science stories in newspaper headlines because they framed an ethical conundrum – to get to the most versatile stem cells meant destroying human embryos.
  • Research on stem cells became a political football, leading to delays in funding for scientists, particularly in the US. Not that the work itself was straightforward – the process of extracting stem cells from embryos is difficult and there is a very limited supply of material. Inevitable disappointment followed the years of headlines – where were the promised treatments? Was it all over-hyped?
  • Key to this is the discovery, in the past few years, of a way to make stem cells that do not require the destruction of embryos. In one move, these induced pluripotent stem (iPS) cells remove the ethical roadblocks faced by embryonic stem cells and, because they are so much easier to make, give scientists an inexhaustible supply of material, bringing them ever closer to those hoped-for treatments.
  • Stem cells are the body's master cells, the raw material from which we are built. Unlike normal body cells, they can reproduce an indefinite number of times and, when prodded in the right way, can turn themselves into any type of cell in the body. The most versatile stem cells are those found in the embryo at just a few days old – this ball of a few dozen embryonic stem (ES) cells eventually goes on to form everything that makes up a person.
Let There Be More Efficient Light - NYTimes.com

  • LAST week Michele Bachmann, a Republican representative from Minnesota, introduced a bill to roll back efficiency standards for light bulbs, which include a phasing out of incandescent bulbs in favor of more energy-efficient bulbs. The “government has no business telling an individual what kind of light bulb to buy,” she declared.
  • But this opposition ignores another, more important bit of American history: the critical role that government-mandated standards have played in scientific and industrial innovation.
  • inventions alone weren’t enough to guarantee progress. Indeed, at the time the lack of standards for everything from weights and measures to electricity — even the gallon, for example, had eight definitions — threatened to overwhelm industry and consumers with a confusing array of incompatible choices.
  • This wasn’t the case everywhere. Germany’s standards agency, established in 1887, was busy setting rules for everything from the content of dyes to the process for making porcelain; other European countries soon followed suit. Higher-quality products, in turn, helped the growth in Germany’s trade exceed that of the United States in the 1890s. America finally got its act together in 1894, when Congress standardized the meaning of what are today common scientific measures, including the ohm, the volt, the watt and the henry, in line with international metrics. And, in 1901, the United States became the last major economic power to establish an agency to set technological standards. The result was a boom in product innovation in all aspects of life during the 20th century. Today we can go to our hardware store and choose from hundreds of light bulbs that all conform to government-mandated quality and performance standards.
  • Technological standards not only promote innovation — they also can help protect one country’s industries from falling behind those of other countries. Today China, India and other rapidly growing nations are adopting standards that speed the deployment of new technologies. Without similar requirements to manufacture more technologically advanced products, American companies risk seeing the overseas markets for their products shrink while innovative goods from other countries flood the domestic market. To prevent that from happening, America needs not only to continue developing standards, but also to devise a strategy to apply them consistently and quickly.
  • The best approach would be to borrow from Japan, whose Top Runner program sets energy-efficiency standards by identifying technological leaders in a particular industry — say, washing machines — and mandating that the rest of the industry keep up. As technologies improve, the standards change as well, enabling a virtuous cycle of improvement. At the same time, the government should work with businesses to devise multidimensional standards, so that consumers don’t balk at products because they sacrifice, say, brightness and cost for energy efficiency.
  • This is not to say that innovation doesn’t bring disruption, and American policymakers can’t ignore the jobs that are lost when government standards sweep older technologies into the dustbin of history. An effective way forward on light bulbs, then, would be to apply standards only to those manufacturers that produce or import in large volume. Meanwhile, smaller, legacy light-bulb producers could remain, cushioning the blow to workers and meeting consumer demand.
  • Technologies and the standards that guide their deployment have revolutionized American society. They’ve been so successful, in fact, that the role of government has become invisible — so much so that even members of Congress should be excused for believing the government has no business mandating your choice of light bulbs.
Rationally Speaking: Is modern moral philosophy still in thrall to religion?

  • Recently I re-read Richard Taylor’s An Introduction to Virtue Ethics, a classic published by Prometheus.
  • Taylor compares virtue ethics to the other two major approaches to moral philosophy: utilitarianism (a la John Stuart Mill) and deontology (a la Immanuel Kant). Utilitarianism, of course, is roughly the idea that ethics has to do with maximizing pleasure and minimizing pain; deontology is the idea that reason can tell us what we ought to do from first principles, as in Kant’s categorical imperative (e.g., something is right if you can agree that it could be elevated to a universally acceptable maxim).
  • Taylor argues that utilitarianism and deontology — despite being wildly different in a variety of respects — share one common feature: both philosophies assume that there is such a thing as moral right and wrong, and a duty to do right and avoid wrong. But, he says, on the face of it this is nonsensical. Duty isn’t something one can have in the abstract; duty is toward a law or a lawgiver, which raises the question of what could arguably provide us with a universal moral law, or who the lawgiver could possibly be.
  • ...11 more annotations...
  • His answer is that both utilitarianism and deontology inherited the ideas of right, wrong and duty from Christianity, but endeavored to do without Christianity’s own answers to those questions: the law is given by God and the duty is toward Him. Taylor says that Mill, Kant and the like simply absorbed the Christian concept of morality while rejecting its logical foundation (such as it was). As a result, utilitarians and deontologists alike keep talking about the right thing to do, or the good, as if those concepts still make sense once we move to a secular worldview. Utilitarians substituted pain and pleasure for wrong and right respectively, and Kant thought that pure reason can arrive at moral universals. But of course neither utilitarians nor deontologists ever give us a reason why it would be irrational to simply decline to pursue actions that increase global pleasure and diminish global pain, or why it would be irrational for someone not to find the categorical imperative particularly compelling.
  • The situation — again according to Taylor — is dramatically different for virtue ethics. Yes, there too we find concepts like right and wrong and duty. But for the ancient Greeks they had completely different meanings, which made perfect sense then and now, if we are not misled by the use of those words in a different context. For the Greeks, an action was right if it was approved by one’s society, wrong if it wasn’t, and duty was to one’s polis. And they understood perfectly well that what was right (or wrong) in Athens may or may not be right (or wrong) in Sparta. And that an Athenian had a duty to Athens, but not to Sparta, and vice versa for a Spartan.
  • But wait a minute. Does that mean that Taylor is saying that virtue ethics was founded on moral relativism? That would be an extraordinary claim indeed, and he does not, in fact, make it. His point is a bit more subtle. He suggests that for the ancient Greeks ethics was not (principally) about right, wrong and duty. It was about happiness, understood in the broad sense of eudaimonia, the good or fulfilling life. Aristotle in particular wrote in his Ethics about both aspects: the practical ethics of one’s duty to one’s polis, and the universal (for human beings) concept of ethics as the pursuit of the good life. And make no mistake about it: for Aristotle the first aspect was relatively trivial and understood by everyone; it was the second one that represented the real challenge for the philosopher.
  • For instance, the Ethics is famous for Aristotle’s list of the virtues (see Table), and his idea that the right thing to do is to steer a middle course between extreme behaviors. But this part of his work, according to Taylor, refers only to the practical ways of being a good Athenian, not to the universal pursuit of eudaimonia.
    Vice of Deficiency | Virtuous Mean | Vice of Excess
    Cowardice | Courage | Rashness
    Insensibility | Temperance | Intemperance
    Illiberality | Liberality | Prodigality
    Pettiness | Munificence | Vulgarity
    Humble-mindedness | High-mindedness | Vaingloriness
    Want of Ambition | Right Ambition | Over-ambition
    Spiritlessness | Good Temper | Irascibility
    Surliness | Friendly Civility | Obsequiousness
    Ironical Depreciation | Sincerity | Boastfulness
    Boorishness | Wittiness | Buffoonery
  • How, then, is one to embark on the more difficult task of figuring out how to live a good life? For Aristotle eudaimonia meant the best kind of existence that a human being can achieve, which in turn means that we need to ask what it is that makes humans different from all other species, because it is the pursuit of excellence in that something that provides for a eudaimonic life.
  • Now, Plato - writing before Aristotle - ended up construing the good life somewhat narrowly and in a self-serving fashion. He reckoned that the thing that distinguishes humanity from the rest of the biological world is our ability to use reason, so that is what we should be pursuing as our highest goal in life. And of course nobody is better equipped than a philosopher for such an enterprise... Which reminds me of Bertrand Russell’s quip that “A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known.”
  • But Aristotle's conception of "reason" was significantly broader, and here is where Taylor’s own update of virtue ethics begins to shine, particularly in Chapter 16 of the book, aptly entitled “Happiness.” Taylor argues that the proper way to understand virtue ethics is as the quest for the use of intelligence in the broadest possible sense, in the sense of creativity applied to all walks of life. He says: “Creative intelligence is exhibited by a dancer, by athletes, by a chess player, and indeed in virtually any activity guided by intelligence [including — but certainly not limited to — philosophy].” He continues: “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.”
  • what we have now is a sharp distinction between utilitarianism and deontology on the one hand and virtue ethics on the other, where the first two are (mistakenly, in Taylor’s assessment) concerned with the impossible question of what is right or wrong, and what our duties are — questions inherited from religion but that in fact make no sense outside of a religious framework. Virtue ethics, instead, focuses on the two things that really matter and to which we can find answers: the practical pursuit of a life within our polis, and the lifelong quest of eudaimonia understood as the best exercise of our creative faculties
  • > So if one's profession is that of assassin or torturer, would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family? < Aristotle's philosophy is very much concerned with virtue, and being an assassin or a torturer is not a virtue, so the concept of a eudaimonic life for those characters is oxymoronic. As for ending up in an "ugly" family, Aristotle did write that eudaimonia is in part the result of luck, because it is affected by circumstances.
  • > So to the title question of this post: "Is modern moral philosophy still in thrall to religion?" one should say: Yes, for some residual forms of philosophy and for some philosophers < That misses Taylor's contention - which I find intriguing, though I have to give it more thought - that *all* modern moral philosophy, except virtue ethics, is in thrall to religion, without realizing it.
Weiye Loh

Science, Strong Inference -- Proper Scientific Method - 0 views

  • Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist's field and methods of study are as good as every other scientist's and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.
  • Why should there be such rapid advances in some fields and not in others? I think the usual explanations that we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of research contracts - are important but inadequate. I have begun to believe that the primary factor in scientific advance is an intellectual one. These rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name of "strong inference." I believe it is important to examine this method, its use and history and rationale, and to see whether other groups and individuals might learn to adopt it profitably in their own scientific and intellectual work. In its separate elements, strong inference is just the simple and old-fashioned method of inductive inference that goes back to Francis Bacon. The steps are familiar to every college student and are practiced, off and on, by every scientist. The difference comes in their systematic application. Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly: (1) devising alternative hypotheses; (2) devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses; (3) carrying out the experiment so as to get a clean result; (4) recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain, and so on. [a minimal code sketch of this loop follows these excerpts]
  • On any new problem, of course, inductive inference is not as simple and certain as deduction, because it involves reaching out into the unknown. Steps 1 and 2 require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome, and exclusion will be related in a rigorous syllogism; and the question of how to generate such inventions is one which has been extensively discussed elsewhere (2, 3). What the formal schema reminds us to do is to try to make these inventions, to take the next step, to proceed to the next fork, without dawdling or getting tied up in irrelevancies.
  • ...28 more annotations...
  • It is clear why this makes for rapid and powerful progress. For exploring the unknown, there is no faster method; this is the minimum sequence of steps. Any conclusion that is not an exclusion is insecure and must be rechecked. Any delay in recycling to the next set of hypotheses is only a delay. Strong inference, and the logical tree it generates, are to inductive reasoning what the syllogism is to deductive reasoning in that it offers a regular method for reaching firm inductive conclusions one after the other as rapidly as possible.
  • "But what is so novel about this?" someone will say. This is the method of science and always has been, why give it a special name? The reason is that many of us have almost forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method- oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations. We fail to teach our students how to sharpen up their inductive inferences. And we do not realize the added power that the regular and explicit use of alternative hypothesis and sharp exclusion could give us at every step of our research.
  • A distinguished cell biologist rose and said, "No two cells give the same properties. Biology is the science of heterogeneous systems." And he added privately, "You know there are scientists, and there are people in science who are just working with these over-simplified model systems - DNA chains and in vitro systems - who are not doing science at all. We need their auxiliary work: they build apparatus, they make minor studies, but they are not scientists." To which Cy Levinthal replied: "Well, there are two kinds of biologists, those who are looking to see if there is one thing that can be understood and those who keep saying it is very complicated and that nothing can be understood. . . . You must study the simplest system you think has the properties you are interested in."
  • At the 1958 Conference on Biophysics, at Boulder, there was a dramatic confrontation between the two points of view. Leo Szilard said: "The problems of how enzymes are induced, of how proteins are synthesized, of how antibodies are formed, are closer to solution than is generally believed. If you do stupid experiments, and finish one a year, it can take 50 years. But if you stop doing experiments for a little while and think how proteins can possibly be synthesized, there are only about 5 different ways, not 50! And it will take only a few experiments to distinguish these." One of the young men added: "It is essentially the old question: How small and elegant an experiment can you perform?" These comments upset a number of those present. An electron microscopist said, "Gentlemen, this is off the track. This is philosophy of science." Szilard retorted, "I was not quarreling with third-rate scientists: I was quarreling with first-rate scientists."
  • Any criticism or challenge to consider changing our methods strikes of course at all our ego-defenses. But in this case the analytical method offers the possibility of such great increases in effectiveness that it is unfortunate that it cannot be regarded more often as a challenge to learning rather than as a challenge to combat. Many of the recent triumphs in molecular biology have in fact been achieved on just such "oversimplified model systems," very much along the analytical lines laid down in the 1958 discussion. They have not fallen to the kind of men who justify themselves by saying "No two cells are alike," regardless of how true that may ultimately be. The triumphs are in fact triumphs of a new way of thinking.
  • the emphasis on strong inference is also partly due to the nature of the fields themselves. Biology, with its vast informational detail and complexity, is a "high-information" field, where years and decades can easily be wasted on the usual type of "low-information" observations or experiments if one does not think carefully in advance about what the most important and conclusive experiments would be. And in high-energy physics, both the "information flux" of particles from the new accelerators and the million-dollar costs of operation have forced a similar analytical approach. It pays to have a top-notch group debate every experiment ahead of time; and the habit spreads throughout the field.
  • Historically, I think, there have been two main contributions to the development of a satisfactory strong-inference method. The first is that of Francis Bacon (13). He wanted a "surer method" of "finding out nature" than either the logic-chopping or all-inclusive theories of the time or the laudable but crude attempts to make inductions "by simple enumeration." He did not merely urge experiments, as some suppose; he showed the fruitfulness of interconnecting theory and experiment so that the one checked the other. Of the many inductive procedures he suggested, the most important, I think, was the conditional inductive tree, which proceeded from alternative hypotheses (possible "causes," as he calls them), through crucial experiments ("Instances of the Fingerpost"), to exclusion of some alternatives and adoption of what is left ("establishing axioms"). His Instances of the Fingerpost are explicitly at the forks in the logical tree, the term being borrowed "from the fingerposts which are set up where roads part, to indicate the several directions."
  • Here was a method that could separate off the empty theories! Bacon said the inductive method could be learned by anybody, just like learning to "draw a straighter line or more perfect circle . . . with the help of a ruler or a pair of compasses." "My way of discovering sciences goes far to level men's wit and leaves but little to individual excellence, because it performs everything by the surest rules and demonstrations." Even occasional mistakes would not be fatal. "Truth will sooner come out from error than from confusion."
  • Nevertheless there is a difficulty with this method. As Bacon emphasizes, it is necessary to make "exclusions." He says, "The induction which is to be available for the discovery and demonstration of sciences and arts, must analyze nature by proper rejections and exclusions, and then, after a sufficient number of negatives, come to a conclusion on the affirmative instances." "[To man] it is granted only to proceed at first by negatives, and at last to end in affirmatives after exclusion has been exhausted." Or, as the philosopher Karl Popper says today, there is no such thing as proof in science - because some later alternative explanation may be as good or better - so that science advances only by disproofs. There is no point in making hypotheses that are not falsifiable, because such hypotheses do not say anything; "it must be possible for an empirical scientific system to be refuted by experience" (14).
  • The difficulty is that disproof is a hard doctrine. If you have a hypothesis and I have another hypothesis, evidently one of them must be eliminated. The scientist seems to have no choice but to be either soft-headed or disputatious. Perhaps this is why so many tend to resist the strong analytical approach and why some great scientists are so disputatious.
  • Fortunately, it seems to me, this difficulty can be removed by the use of a second great intellectual invention, the "method of multiple hypotheses," which is what was needed to round out the Baconian scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the century, who is best known for his contribution to the Chamberlin-Moulton hypothesis of the origin of the solar system.
  • Chamberlin says our trouble is that when we make a single hypothesis, we become attached to it. "The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence, and as the explanation grows into a definite theory his parental affections cluster about his offspring and it grows more and more dear to him. . . . There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory..." "To avoid this grave danger, the method of multiple working hypotheses is urged. It differs from the simple working hypothesis in that it distributes the effort and divides the affections. . . . Each hypothesis suggests its own criteria, its own method of proof, its own method of developing the truth, and if a group of hypotheses encompass the subject on all sides, the total outcome of means and of methods is full and rich."
  • The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas. It becomes much easier then for each of us to aim every day at conclusive disproofs - at strong inference - without either reluctance or combativeness. In fact, when there are multiple hypotheses, which are not anyone's "personal property," and when there are crucial experiments to test them, the daily life in the laboratory takes on an interest and excitement it never had, and the students can hardly wait to get to work to see how the detective story will come out. It seems to me that this is the reason for the development of those distinctive habits of mind and the "complex thought" that Chamberlin described, the reason for the sharpness, the excitement, the zeal, the teamwork - yes, even international teamwork - in molecular biology and high-energy physics today. What else could be so effective?
  • Unfortunately, I think, there are other areas of science today that are sick by comparison, because they have forgotten the necessity for alternative hypotheses and disproof. Each man has only one branch - or none - on the logical tree, and it twists at random without ever coming to the need for a crucial decision at any point. We can see from the external symptoms that there is something scientifically wrong. The Frozen Method, The Eternal Surveyor, The Never Finished, The Great Man With a Single Hypothesis, The Little Club of Dependents, The Vendetta, The All-Encompassing Theory Which Can Never Be Falsified.
  • a "theory" of this sort is not a theory at all, because it does not exclude anything. It predicts everything, and therefore does not predict anything. It becomes simply a verbal formula which the graduate student repeats and believes because the professor has said it so often. This is not science, but faith; not theory, but theology. Whether it is hand-waving or number-waving, or equation-waving, a theory is not a theory unless it can be disproved. That is, unless it can be falsified by some possible experimental outcome.
  • the work methods of a number of scientists have been testimony to the power of strong inference. Is success not due in many cases to systematic use of Bacon's "surest rules and demonstrations" as much as to rare and unattainable intellectual power? Faraday's famous diary (16), or Fermi's notebooks (3, 17), show how these men believed in the effectiveness of daily steps in applying formal inductive methods to one problem after another.
  • Surveys, taxonomy, design of equipment, systematic measurements and tables, theoretical computations - all have their proper and honored place, provided they are parts of a chain of precise induction of how nature works. Unfortunately, all too often they become ends in themselves, mere time-serving from the point of view of real scientific advance, a hypertrophied methodology that justifies itself as a lore of respectability.
  • We speak piously of taking measurements and making small studies that will "add another brick to the temple of science." Most such bricks just lie around the brickyard (20). Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.
  • Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important. Strong inference redirects a man to problem-orientation, but it requires him to be willing repeatedly to put aside his last methods and teach himself new ones.
  • anyone who asks the question about scientific effectiveness will also conclude that much of the mathematizing in physics and chemistry today is irrelevant if not misleading. The great value of mathematical formulation is that when an experiment agrees with a calculation to five decimal places, a great many alternative hypotheses are pretty well excluded (though the Bohr theory and the Schrödinger theory both predict exactly the same Rydberg constant!). But when the fit is only to two decimal places, or one, it may be a trap for the unwary; it may be no better than any rule-of-thumb extrapolation, and some other kind of qualitative exclusion might be more rigorous for testing the assumptions and more important to scientific understanding than the quantitative fit.
  • Today we preach that science is not science unless it is quantitative. We substitute correlations for causal studies, and physical equations for organic reasoning. Measurements and equations are supposed to sharpen thinking, but, in my observation, they more often tend to make the thinking noncausal and fuzzy. They tend to become the object of scientific manipulation instead of auxiliary tests of crucial inferences.
  • Many - perhaps most - of the great issues of science are qualitative, not quantitative, even in physics and chemistry. Equations and measurements are useful when and only when they are related to proof; but proof or disproof comes first and is in fact strongest when it is absolutely convincing without any quantitative measurement.
  • you can catch phenomena in a logical box or in a mathematical box. The logical box is coarse but strong. The mathematical box is fine-grained but flimsy. The mathematical box is a beautiful way of wrapping up a problem, but it will not hold the phenomena unless they have been caught in a logical box to begin with.
  • Of course it is easy - and all too common - for one scientist to call the others unscientific. My point is not that my particular conclusions here are necessarily correct, but that we have long needed some absolute standard of possible scientific effectiveness by which to measure how well we are succeeding in various areas - a standard that many could agree on and one that would be undistorted by the scientific pressures and fashions of the times and the vested interests and busywork that they develop. It is not public evaluation I am interested in so much as a private measure by which to compare one's own scientific performance with what it might be. I believe that strong inference provides this kind of standard of what the maximum possible scientific effectiveness could be - as well as a recipe for reaching it.
  • The strong-inference point of view is so resolutely critical of methods of work and values in science that any attempt to compare specific cases is likely to sound both smug and destructive. Mainly one should try to teach it by example and by exhorting to self-analysis and self-improvement only in general terms
  • one severe but useful private test - a touchstone of strong inference - that removes the necessity for third-person criticism, because it is a test that anyone can learn to carry with him for use as needed. It is our old friend the Baconian "exclusion," but I call it "The Question." Obviously it should be applied as much to one's own thinking as to others'. It consists of asking in your own mind, on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"; or, on hearing a scientific experiment described, "But sir, what hypothesis does your experiment disprove?"
  • It is not true that all science is equal; or that we cannot justly compare the effectiveness of scientists by any method other than a mutual-recommendation system. The man to watch, the man to put your money on, is not the man who wants to make "a survey" or a "more detailed study" but the man with the notebook, the man with the alternative hypotheses and the crucial experiments, the man who knows how to answer your Question of disproof and is already working on it.
  •  
    There is so much bad science and bad statistics in media reports, publications, and everyday conversation that I think it is important to understand facts, proofs, and the associated pitfalls.
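Platt's four steps quoted above are explicit enough to be written as a loop. Below is a minimal sketch under a simplifying assumption that is mine, not Platt's: each hypothesis is reduced to a table of predicted outcomes per experiment, and the "crucial experiment" is the one on which the surviving hypotheses disagree most. All hypothesis and experiment names are hypothetical placeholders.

```python
# Minimal sketch of Platt's strong-inference loop: hold multiple working
# hypotheses, run the experiment that discriminates among them most,
# exclude the hypotheses the outcome refutes, and recycle.

def crucial_experiment(hypotheses, experiments):
    """Step 2: pick the experiment on which the live hypotheses disagree most."""
    return max(experiments,
               key=lambda e: len({h[e] for h in hypotheses.values()}))

def strong_inference(hypotheses, experiments, nature):
    live = dict(hypotheses)                  # step 1: alternative hypotheses
    while len(live) > 1 and experiments:
        exp = crucial_experiment(live, experiments)
        experiments.remove(exp)
        outcome = nature[exp]                # step 3: carry it out, get a clean result
        # step 4 (exclusion): drop every hypothesis the outcome refutes
        live = {name: h for name, h in live.items() if h[exp] == outcome}
    return live

# Toy run: three hypotheses, two available experiments.
hypotheses = {
    "H1": {"exp_a": "rise", "exp_b": "rise"},
    "H2": {"exp_a": "rise", "exp_b": "fall"},
    "H3": {"exp_a": "fall", "exp_b": "fall"},
}
nature = {"exp_a": "rise", "exp_b": "fall"}  # what the experiments actually show
print(strong_inference(hypotheses, ["exp_a", "exp_b"], nature))  # {'H2': ...}
```

The exclusion step is the engine: every outcome deletes hypotheses, and an experiment on which all live hypotheses agree scores lowest for the chooser, which mirrors Platt's complaint about "low-information" experiments.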
Weiye Loh

China used prisoners in lucrative internet gaming work | World news | guardian.co.uk - 0 views

  • "Prison bosses made more money forcing inmates to play games than they do forcing people to do manual labour," Liu told the Guardian. "There were 300 prisoners forced to play games. We worked 12-hour shifts in the camp. I heard them say they could earn 5,000-6,000rmb [£470-570] a day. We didn't see any of the money. The computers were never turned off."
  • "If I couldn't complete my work quota, they would punish me physically. They would make me stand with my hands raised in the air and after I returned to my dormitory they would beat me with plastic pipes. We kept playing until we could barely see things," he said.
  • "gold farming", the practice of building up credits and online value through the monotonous repetition of basic tasks in online games such as World of Warcraft. The trade in virtual assets is very real, and outside the control of the games' makers. Millions of gamers around the world are prepared to pay real money for such online credits, which they can use to progress in the online games.The trading of virtual currencies in multiplayer games has become so rampant in China that it is increasingly difficult to regulate. In April, the Sichuan provincial government in central China launched a court case against a gamer who stole credits online worth about 3000rmb.
  • ...2 more annotations...
  • lack of regulations has meant that even prisoners can be exploited in this virtual world for profit.
  • The emergence of gold farming as a business in China – whether in prisons or sweatshops – could raise new questions over the exporting of goods, real or virtual, from the country. "Prison labour is still very widespread – it's just that goods travel a much more complex route to come to the US these days. And it is not illegal to export prison goods to Europe," said Nicole Kempton from the Laogai foundation, a Washington-based group which opposes the forced labour camp system in China.
Weiye Loh

CultureLab: Thoughts within thoughts make us human - 0 views

  • Corballis reckons instead that the thought processes that made language possible were non-linguistic, but had recursive properties to which language adapted: "Where Chomsky views thought through the lens of language, I prefer to view language through the lens of thought." From this, says Corballis, follows a better understanding of how humans actually think - and a very different perspective on language and its evolution.
  • So how did recursion help ancient humans pull themselves up by their cognitive bootstraps? It allowed us to engage in mental time travel, says Corballis, the recursive operation whereby we recall past episodes into present consciousness and imagine future ones, and sometimes even insert fictions into reality.
  • theory of mind is uniquely highly developed in humans: I may know not only what you are thinking, says Corballis, but also that you know what I am thinking. Most - but not all - language depends on this capability. [a toy code illustration of this kind of nesting follows these excerpts]
  • ...3 more annotations...
  • Corballis's theories also help make sense of apparent anomalies such as linguist and anthropologist Daniel Everett's work on the Pirahã, an Amazonian people who hit the headlines because of debates over whether their language has any words for colours, and, crucially, numbers. Corballis now thinks that the Pirahã language may not be that unusual, and cites the example of other languages from oral cultures, such as the Iatmul language of New Guinea, which is also said to lack recursion.
  • The emerging point is that recursion developed in the mind and need not be expressed in a language. But, as Corballis is at pains to point out, although recursion was critical to the evolution of the human mind, it is not one of those "modules" much beloved of evolutionary psychologists, many of which are said to have evolved in the Pleistocene. Nor did it depend on some genetic mutation or the emergence of some new neuron or brain structure. Instead, he suggests it came of progressive increases in short-term memory and capacity for hierarchical organisation - all dependent in turn on incremental increases in brain size.
  • But as Corballis admits, this brain size increase was especially rapid in the Pleistocene. These incremental changes can lead to sudden, more substantial jumps - think water boiling or balloons popping. In mathematics these shifts are called catastrophes. So, notes Corballis wryly, "we may perhaps conclude that the emergence of the human mind was catastrophic". Let's hope that's not too prescient.
  •  
    His new book, The Recursive Mind: The origins of human language, thought, and civilization, is a fascinating and well-grounded exposition of the nature and power of recursion. In its ultra-reasonable way, this is quite a revolutionary book because it attacks key notions about language and thought. Most notably, it disputes the idea, argued especially by linguist Noam Chomsky, that thought is fundamentally linguistic - in other words, you need language before you can have thoughts.
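Recursion as Corballis uses it is the same notion as in computing: a rule that invokes itself, so that a finite definition yields unboundedly deep embedding. A toy illustration (mine, purely illustrative) of the "I know that you know that I know" nesting that theory of mind requires:

```python
# Toy illustration of recursive embedding: one finite rule generates
# arbitrarily deep theory-of-mind sentences. Names are placeholders.

def nested_belief(agents, depth):
    """Recursively build a depth-level mental-state sentence."""
    me, you = agents
    if depth == 0:
        return "it is raining"                   # base case: a plain fact
    inner = nested_belief((you, me), depth - 1)  # recurse with roles swapped
    return f"{me} knows that {inner}"

print(nested_belief(("Alice", "Bob"), 3))
# Alice knows that Bob knows that Alice knows that it is raining
```

The rule itself never grows; only the call depth does, which fits Corballis's suggestion that the capacity came of progressive increases in short-term memory and hierarchical organisation rather than a new brain module.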
Weiye Loh

The Failure of Liberal Bioethics - NYTimes.com - 1 views

  • There are three broad camps in contemporary debates over bioethics. In the name of human rights and human dignity, “bio-conservatives” tend to support restricting, regulating and stigmatizing the technologies that allow us to create, manipulate and destroy embryonic life. In the name of scientific progress and human freedom, “bio-libertarians” tend to oppose any restrictions on what individuals, doctors and researchers are allowed to do. Then somewhere in between are the anguished liberals, who are uncomfortable with what they see as the absolutism of both sides, and who tend to argue that society needs to decide where to draw its bioethical lines not based on some general ideal (like “life” or “choice”), but rather case by case by case — accepting this kind of abortion but not that kind; this use of embryos but not that use; existing developments in genetic engineering but not, perhaps, the developments that await us in the future.
  • at least in the United States, the liberal effort to (as the Goodman of 1980 put it) “monitor” and “debate” and “control” the development of reproductive technologies has been extraordinarily ineffectual. From embryo experimentation to selective reduction to the eugenic uses of abortion, liberals always promise to draw lines and then never actually manage to draw them. Like Dr. Evans, they find reasons to embrace each new technological leap while promising to resist the next one — and then time passes, science marches on, and they find reasons why the next moral compromise, too, must be accepted for the greater good, or at least tolerated in the name of privacy and choice. You can always count on them to worry, often perceptively, about hypothetical evils, potential slips down the bioethical slope. But they’re either ineffectual or accommodating once an evil actually arrives. Tomorrow, they always say — tomorrow, we’ll draw the line. But tomorrow never comes.
  •  
    The Failure of Liberal Bioethics; http://t.co/6QrUPkl
Meenatchi

Nanotechnology: A risky frontier? - 1 views

http://www.startribune.com/business/67823902.html?page=4&c=y The article discusses the advantages and risks involved in nanotechnology. It also states that companies don't face an i...

online ethics nanotechnology rights

started by Meenatchi on 03 Nov 09 no follow-up yet
Weiye Loh

Rationally Speaking: Don't blame free speech for the murders in Afghanistan - 0 views

  • The most disturbing example of this response came from the head of the U.N. Assistance Mission in Afghanistan, Staffan de Mistura, who said, “I don't think we should be blaming any Afghan. We should be blaming the person who produced the news — the one who burned the Koran. Freedom of speech does not mean freedom of offending culture, religion, traditions.” I was not going to comment on this monumentally inane line of thought, especially since Susan Jacoby, Michael Tomasky, and Mike Labossiere have already done such a marvelous job of it. But then I discovered, to my shock, that several of my liberal, progressive American friends actually agreed that Jones has some sort of legal and moral responsibility for what happened in Afghanistan
  • I believe he has neither. Here is why. Unlike many countries in the Middle East and Europe that punish blasphemy by fine, jail or death, the U.S., via the First Amendment and a history of court decisions, strongly protects freedom of speech and expression as basic and fundamental human rights. These include critiquing and offending other citizens’ culture, religion, and traditions. Such rights are not supposed to be swayed by peoples' subjective feelings, which form an incoherent and arbitrary basis for lawmaking. In a free society, if and when a person is offended by an argument or act, he or she has every right to argue and act back. If a person commits murder, the answer is not to limit the right; the answer is to condemn and punish the murderer for overreacting.
  • Of course, there are exceptions to this rule. Governments have an interest in condemning certain speech that provokes immediate hatred of or violence against people. The canonical example is yelling “fire!” in a packed room when there in fact is no fire, since this creates a clear and imminent danger for those inside the room. But Jones did not create such an environment, nor did he intend to. Jones (more precisely, Wayne Sapp) merely burned a book in a private ceremony in protest of its contents. Indeed, the connection between Jones and the murders requires many links in-between. The mob didn’t kill those accountable, or even Americans.
  • ...3 more annotations...
  • But even if there is no law prohibiting Jones’ action, isn’t he morally to blame for creating the environment that led to the murders? Didn’t he know Muslims would riot, and people might die? It seems ridiculous to assume that Jones could know such a thing, even if parts of the Muslim world have a poor track record in this area. But imagine for a moment that Jones did know Muslims would riot, and people would die. This does not make the act of burning a book and the act of murder morally equivalent, nor does it make the book burner responsible for reactions to his act. In and of itself, burning a book is a morally neutral act. Why would this change because some misguided individuals think book burning is worth the death penalty? And why is it that so many have automatically assumed the reaction to be respectable? To use an example nearer to some of us, recall when PZ Myers desecrated a communion wafer. If some Christian was offended, and went on to murder the closest atheist, would we really blame Myers? Is Myers' offense any different than Jones’?
  • the deep-seated belief among many that blasphemy is wrong. This means any reaction to blasphemy is less wrong, and perhaps even excused, compared to the blasphemous offense. Even President Obama said that, "The desecration of any holy text, including the Koran, is an act of extreme intolerance and bigotry.” To be sure, Obama went on to denounce the murders, and to state that burning a holy book is no excuse for murder. But Obama apparently couldn’t condemn the murders without also condemning Jones’ act of religious defiance.
  • As it turns out, this attitude is exactly what created the environment that led to murders in the first place. The members of the mob believed that religious belief should be free from public critical inquiry, and that a person who offends religious believers should face punishment. In the absence of official prosecution, they took matters into their own hands and sought anyone on the side of the offender. It didn’t help that Afghan leaders stoked the flames of hatred — but they only did so because they agreed with the mob’s sentiment to begin with. Afghan President Hamid Karzai said the U.S. should punish those responsible, and three well-known Afghan mullahs urged their followers to take to the streets and protest to call for the arrest of Jones
Weiye Loh

Alzheimer's Studies Find New Genetic Links - NYTimes.com - 0 views

  • The two largest studies of Alzheimer’s disease have led to the discovery of no fewer than five genes that provide intriguing new clues to why the disease strikes and how it progresses.
  • For years, there have been unproven but persistent hints that cholesterol and inflammation are part of the disease process. People with high cholesterol are more likely to get the disease. Strokes and head injuries, which make Alzheimer’s more likely, also cause brain inflammation. Now, some of the newly discovered genes appear to bolster this line of thought, because some are involved with cholesterol and others are linked to inflammation or the transport of molecules inside cells.
  • By themselves, the genes are not nearly as important a factor as APOE, a gene discovered in 1995 that greatly increases risk for the disease: by 400 percent if a person inherits a copy from one parent, by 1,000 percent if from both parents. [a quick arithmetic check of these figures follows these excerpts]
  • ...7 more annotations...
  • In contrast, each of the new genes increases risk by no more than 10 to 15 percent; for that reason, they will not be used to decide if a person is likely to develop Alzheimer’s. APOE, which is involved in metabolizing cholesterol, “is in a class of its own,” said Dr. Rudolph Tanzi, a neurology professor at Harvard Medical School and an author of one of the papers.
  • But researchers say that even a slight increase in risk helps them in understanding the disease and developing new therapies. And like APOE, some of the newly discovered genes appear to be involved with cholesterol.
  • The other paper is by researchers in Britain, France and other European countries with contributions from the United States. They confirmed the genes found by the American researchers and added one more gene.
  • The American study got started about three years ago when Gerard D. Schellenberg, a pathology professor at the University of Pennsylvania, went to the National Institutes of Health with a complaint and a proposal. Individual research groups had been doing their own genome studies but not having much success, because no one center had enough subjects. In an interview, Dr. Schellenberg said that he had told Dr. Richard J. Hodes, director of the National Institute on Aging, the small genomic studies had to stop, and that Dr. Hodes had agreed. These days, Dr. Hodes said, “the old model in which researchers jealously guarded their data is no longer applicable.”
  • So Dr. Schellenberg set out to gather all the data he could on Alzheimer’s patients and on healthy people of the same ages. The idea was to compare one million positions on each person’s genome to determine whether some genes were more common in those who had Alzheimer’s. “I spent a lot of time being nice to people on the phone,” Dr. Schellenberg said. He got what he wanted: nearly every Alzheimer’s center and Alzheimer’s geneticist in the country cooperated. Dr. Schellenberg and his colleagues used the mass of genetic data to do an analysis and find the genes and then, using two different populations, to confirm that the same genes were conferring the risk. That helped assure the investigators that they were not looking at a chance association. It was a huge effort, Dr. Mayeux said. Many medical centers had Alzheimer’s patients’ tissue sitting in freezers. They had to extract the DNA and do genome scans.
  • “One of my jobs was to make sure the Alzheimer’s cases really were cases — that they had used some reasonable criteria” for diagnosis, Dr. Mayeux said. “And I had to be sure that people who were unaffected really were unaffected.”
  • Meanwhile, the European group, led by Dr. Julie Williams of the School of Medicine at Cardiff University, was engaged in a similar effort. Dr. Schellenberg said the two groups compared their results and were reassured that they were largely finding the same genes. “If there were mistakes, we wouldn’t see the same things,” he added. Now the European and American groups are pooling their data to do an enormous study, looking for genes in the combined samples. “We are upping the sample size,” Dr. Schellenberg said. “We are pretty sure more stuff will pop out.”
  •  
    Gene Study Yields
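The risk figures quoted in these excerpts convert directly into multipliers, which makes the contrast between APOE and the newly found genes concrete. A quick worked check using only the percentages given above:

```python
# Worked check of the quoted risk figures. "Increases risk by X percent"
# means the new risk is (1 + X/100) times baseline; nothing beyond the
# article's percentages is assumed.

def risk_multiplier(percent_increase):
    return 1 + percent_increase / 100

print(risk_multiplier(400))   # 5.0  -> one inherited APOE copy: 5x baseline
print(risk_multiplier(1000))  # 11.0 -> two copies: 11x baseline
print(risk_multiplier(10))    # 1.1  -> a newly found gene, low end
print(risk_multiplier(15))    # 1.15 -> a newly found gene, high end
```

So "increases risk by 400 percent" means five times baseline risk, not four, while each new gene moves risk by at most a factor of 1.15, which is why the article says these genes will not be used to decide whether a person is likely to develop Alzheimer's.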
Weiye Loh

The Science of Why We Don't Believe Science | Mother Jones - 0 views

  • Conservatives are more likely to embrace climate science if it comes to them via a business or religious leader, who can set the issue in the context of different values than those from which environmentalists or scientists often argue. Doing so is, effectively, to signal a détente in what Kahan has called a "culture war of fact." In other words, paradoxically, you don't lead with the facts in order to convince. You lead with the values—so as to give the facts a fighting chance.
  • Kahan's work at Yale. In one study, he and his colleagues packaged the basic science of climate change into fake newspaper articles bearing two very different headlines—"Scientific Panel Recommends Anti-Pollution Solution to Global Warming" and "Scientific Panel Recommends Nuclear Solution to Global Warming"—and then tested how citizens with different values responded. Sure enough, the latter framing made hierarchical individualists much more open to accepting the fact that humans are causing global warming. Kahan infers that the effect occurred because the science had been written into an alternative narrative that appealed to their pro-industry worldview.
  • If you want someone to accept new evidence, make sure to present it to them in a context that doesn't trigger a defensive, emotional reaction.
  • ...1 more annotation...
  • All we can currently bank on is the fact that we all have blinders in some situations. The question then becomes: What can be done to counteract human nature itself?