
New Media Ethics 2009 course / Group items tagged Reliability


joanne ye

TJC Stomp Scandal - 34 views

This is a very interesting topic. Thanks, Weiman! From the replies to this topic, I would say two general questions surfaced. Firstly, is STOMP liable for misinformation? Secondly, is it right for...

Weiye Loh

Rationally Speaking: Should non-experts shut up? The skeptic's catch-22 - 0 views

  • You can read the talk here, but in a nutshell, Massimo was admonishing skeptics who reject the scientific consensus in fields in which they have no technical expertise - the most notable recent example of this being anthropogenic climate change, about which venerable skeptics like James Randi and Michael Shermer have publicly expressed doubts (though Shermer has since changed his mind).
  • I'm totally with Massimo that it seems quite likely that anthropogenic climate change is really happening. But I'm not sure I can get behind Massimo's broader argument that non-experts should defer to the expert consensus in a field.
  • First of all, while there are strong incentives for a researcher to find errors in other work in the field, there are strong disincentives for her to challenge the field's foundational assumptions. It will be extremely difficult for her to get other people to agree with her if she tries, and if she succeeds, she'll still be taking herself down along with the rest of the field.
  • Second of all, fields naturally select for people who accept their foundational assumptions. People who don't accept those assumptions are likely not to have gone into that field in the first place, or to have left it already.
  • Sometimes those foundational assumptions are simple enough that an outsider can evaluate them - for instance, I may not be an expert in astrology or theology, but I can understand their starting premises (stars affect human fates; we should accept the Bible as the truth) well enough to confidently dismiss them, and the fields that rest on them. But when the foundational assumptions get more complex - like the assumption that we can reliably model future temperatures - it becomes much harder for an outsider to judge their soundness.
  • we almost seem to be stuck in a Catch-22: The only people who are qualified to evaluate the validity of a complex field are the ones who have studied that field in depth - in other words, experts. Yet the experts are also the people who have the strongest incentives not to reject the foundational assumptions of the field, and the ones who have self-selected for believing those assumptions. So the closer you are to a field, the more biased you are, which makes you a poor judge of it; the farther away you are, the less relevant knowledge you have, which makes you a poor judge of it. What to do?
  • luckily, the Catch-22 isn't quite as stark as I made it sound. For example, you can often find people who are experts in the particular methodology used by a field without actually being a member of the field, so they can be much more unbiased judges of whether that field is applying the methodology soundly. So for example, a foundational principle underlying a lot of empirical social science research is that linear regression is a valid tool for modeling most phenomena. I strongly recommend asking a statistics professor about that. 
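To make that concrete, here is a toy sketch (all data invented) of the kind of check a statistician might raise: a linear fit can look numerically excellent even when the underlying process is not linear.

```python
# Toy illustration (invented data): least squares happily fits a straight
# line to a quadratic process, and R^2 alone does not flag the wrong model.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 0.5 * x**2 + rng.normal(0, 2, size=x.size)  # true relation is quadratic

slope, intercept = np.polyfit(x, y, 1)          # fit a straight line anyway
pred = slope * x + intercept
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}")  # high despite the mis-specified model
```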
  • there are some general criteria that outsiders can use to evaluate the validity of a technical field, even without “technical scientific expertise” in that field. For example, can the field make testable predictions, and does it have a good track record of predicting things correctly? This seems like a good criterion by which an outsider can judge the field of climate modeling (and "predictions" here includes using your model to predict past data accurately). I don't need to know how the insanely-complicated models work to know that successful prediction is a good sign.
  • And there are other more field-specific criteria outsiders can often use. For example, I've barely studied postmodernism at all, but I don't have to know much about the field to recognize that the fact that they borrow concepts from complex disciplines which they themselves haven't studied is a red flag.
  • the issue with AGW is less the science and all about the political solutions. Most every solution we hear in the public conversation requires some level of sacrifice and uncertainty in the future. Politicians, neither experts in climatology nor economics, craft legislation to solve the problem through the lens of their own political ideology. At TAM8, this was pretty apparent. My honest opinion is that people who are AGW skeptics are mainly skeptics of the political solutions. If AGW was said to increase the GDP of the country by two to three times, I'm guessing you'd see a lot less climate change skeptics.
  • WEDNESDAY, JULY 14, 2010: Should non-experts shut up? The skeptic's catch-22
Weiye Loh

Skepticblog » Further Thoughts on the Ethics of Skepticism - 0 views

  • My recent post “The War Over ‘Nice’” (describing the blogosphere’s reaction to Phil Plait’s “Don’t Be a Dick” speech) has topped out at more than 200 comments.
  • Many readers appear to object (some strenuously) to the very ideas of discussing best practices, seeking evidence of efficacy for skeptical outreach, matching strategies to goals, or encouraging some methods over others. Some seem to express anger that a discussion of best practices would be attempted at all. 
  • No Right or Wrong Way? The milder forms of these objections run along these lines: “Everyone should do their own thing.” “Skepticism needs all kinds of approaches.” “There’s no right or wrong way to do skepticism.” “Why are we wasting time on these abstract meta-conversations?”
  • More critical, in my opinion, is the implication that skeptical research and communication happens in an ethical vacuum. That just isn’t true. Indeed, it is dangerous for a field which promotes and attacks medical treatments, accuses people of crimes, opines about law enforcement practices, offers consumer advice, and undertakes educational projects to pretend that it is free from ethical implications — or obligations.
  • there is no monolithic “one true way to do skepticism.” No, the skeptical world does not break down to nice skeptics who get everything right, and mean skeptics who get everything wrong. (I’m reminded of a quote: “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”) No one has all the answers. Certainly I don’t, and neither does Phil Plait. Nor has anyone actually proposed a uniform, lockstep approach to skepticism. (No one has any ability to enforce such a thing, in any event.)
  • However, none of that implies that all approaches to skepticism are equally valid, useful, or good. As in other fields, various skeptical practices do more or less good, cause greater or lesser harm, or generate various combinations of both at the same time. For that reason, skeptics should strive to find ways to talk seriously about the practices and the ethics of our field. Skepticism has blossomed into something that touches a lot of lives — and yet it is an emerging field, only starting to come into its potential. We need to be able to talk about that potential, and about the pitfalls too.
  • All of the fields from which skepticism borrows (such as medicine, education, psychology, journalism, history, and even arts like stage magic and graphic design) have their own standards of professional ethics. In some cases those ethics are well-explored professional fields in their own right (consider medical ethics, a field with its own academic journals and doctoral programs). In other cases those ethical guidelines are contested, informal, vague, or honored more in the breach. But in every case, there are serious conversations about the ethical implications of professional practice, because those practices impact people’s lives. Why would skepticism be any different?
  • Skeptrack speaker Barbara Drescher (a cognitive psychologist who teaches research methodology) described the complexity of research ethics in her own field. Imagine, she said, that a psychologist were to ask research subjects a question like, “Do your parents like the color red?” Asking this may seem trivial and harmless, but it is nonetheless an ethical trade-off with associated risks (however small) that psychological researchers are ethically obliged to confront. What harm might that question cause if a research subject suffers from erythrophobia, or has a sick parent — or saw their parents stabbed to death?
  • When skeptics undertake scientific, historical, or journalistic research, we should (I argue) consider ourselves bound by some sort of research ethics. For now, we’ll ignore the deeper, detailed question of what exactly that looks like in practical terms (when can skeptics go undercover or lie to get information? how much research does due diligence require? and so on). I’d ask only that we agree on the principle that skeptical research is not an ethical free-for-all.
  • when skeptics communicate with the public, we take on further ethical responsibilities — as do doctors, journalists, and teachers. We all accept that doctors are obliged to follow some sort of ethical code, not only of due diligence and standard of care, but also in their confidentiality, manner, and the factual information they disclose to patients. A sentence that communicates a diagnosis, prescription, or piece of medical advice (“you have cancer” or “undertake this treatment”) is not a contextless statement, but a weighty, risky, ethically serious undertaking that affects people’s lives. It matters what doctors say, and it matters how they say it.
  • Grassroots Ethics: It happens that skepticism is my professional field. It’s natural that I should feel bound by the central concerns of that field. How can we gain reliable knowledge about weird things? How can we communicate that knowledge effectively? And, how can we pursue that practice ethically?
  • At the same time, most active skeptics are not professionals. To what extent should grassroots skeptics feel obligated to consider the ethics of skeptical activism? Consider my own status as a medical amateur. I almost need super-caps-lock to explain how much I am not a doctor. My medical training began and ended with a couple First Aid courses (and those way back in the day). But during those short courses, the instructors drummed into us the ethical considerations of our minimal training. When are we obligated to perform first aid? When are we ethically barred from giving aid? What if the injured party is unconscious or delirious? What if we accidentally kill or injure someone in our effort to give aid? Should we risk exposure to blood-borne illnesses? And so on. In a medical context, ethics are determined less by professional status, and more by the harm we can cause or prevent by our actions.
  • police officers are barred from perjury, and journalists from libel — and so are the lay public. We expect schoolteachers not to discuss age-inappropriate topics with our young children, or to persuade our children to adopt their religion; when we babysit for a neighbor, we consider ourselves bound by similar rules. I would argue that grassroots skeptics take on an ethical burden as soon as they speak out on medical matters, legal matters, or other matters of fact, whether from platforms as large as network television, or as small as a dinner party. The size of that burden must depend somewhat on the scale of the risks: the number of people reached, the certainty expressed, the topics tackled.
  • tu-quoque argument.
  • How much time are skeptics going to waste, arguing in a circular firing squad about each other’s free speech? Like it or not, there will always be confrontational people. You aren’t going to get a group of people as varied as skeptics are, and make them all agree to “be nice”. It’s a pipe dream, and a waste of time.
  • FURTHER THOUGHTS ON THE ETHICS OF SKEPTICISM
Weiye Loh

Epiphenom: Suicide, age and poison - 0 views

  • Since then many studies reinforced this theory, showing that Catholicism, and indeed religion in general, seems to protect against suicide. Unfortunately, almost all these studies have been flawed - most often because they looked at average suicide rates and average religious beliefs across particular societies. They didn't look at the individual characteristics of those people who commit suicide.
  • Three new studies have addressed this problem. Each of them takes advantage of new data to explore in some detail the link between religion and reduced suicide.
  • Matthias Egger, at the University of Bern in Switzerland, has cleverly linked census data to death records - not at all as straightforward as you might imagine. What that gives, for the first time, is a large database with reliable records of individuals' religious affiliation in the last few years before they took their lives.
  • as Durkheim found when looking at Swiss data a century earlier, Catholics had the lowest suicide rate and Protestants higher. What's more, Egger found that the unaffiliated had the highest of all.
  • One thing that jumped out was that the gap was much bigger for older people. At ages 35-44, there was essentially no difference. The gap grows gradually with age: in the oldest group (aged 85-94), Protestants are twice as likely as Catholics to commit suicide, and the unaffiliated four times as likely.
  • Strangely enough, the effect was particularly strong for death by poisoning. That's a perplexing result, until you remember that Switzerland is one of the few countries where assisted suicide is legal (so long as the motive is not selfish). There are several societies in Switzerland that provide assisted dying, with the usual method being an injection of barbiturates. On the death record, that's recorded as a death by poisoning.
  • That's not to say that Durkheim was wrong about religion. Social integration is important in reducing suicide, and that may well have contributed to the differences seen. Egger found that married people, and those living with others, also had lower suicide rates. But these data couldn't show that religion affected social integration.
  • sociologist Émile Durkheim made an important discovery: across Europe, Protestant regions had a higher suicide rate than Catholic regions. This, he said, was because Catholicism created more integrated societies. In today's parlance, Catholicism generates more social capital.
Weiye Loh

The Data-Driven Life - NYTimes.com - 0 views

  • Humans make errors. We make errors of fact and errors of judgment. We have blind spots in our field of vision and gaps in our stream of attention.
  • These weaknesses put us at a disadvantage. We make decisions with partial information. We are forced to steer by guesswork. We go with our gut.
  • Others use data. A timer running on Robin Barooah’s computer tells him that he has been living in the United States for 8 years, 2 months and 10 days. At various times in his life, Barooah — a 38-year-old self-employed software designer from England who now lives in Oakland, Calif. — has also made careful records of his work, his sleep and his diet.
  • A few months ago, Barooah began to wean himself from coffee. His method was precise. He made a large cup of coffee and removed 20 milliliters weekly. This went on for more than four months, until barely a sip remained in the cup. He drank it and called himself cured. Unlike his previous attempts to quit, this time there were no headaches, no extreme cravings. Still, he was tempted, and on Oct. 12 last year, while distracted at his desk, he told himself that he could probably concentrate better if he had a cup. Coffee may have been bad for his health, he thought, but perhaps it was good for his concentration. Barooah wasn’t about to try to answer a question like this with guesswork. He had a good data set that showed how many minutes he spent each day in focused work. With this, he could do an objective analysis. Barooah made a chart with dates on the bottom and his work time along the side. Running down the middle was a big black line labeled “Stopped drinking coffee.” On the left side of the line, low spikes and narrow columns. On the right side, high spikes and thick columns. The data had delivered their verdict, and coffee lost.
  • “People have such very poor sense of time,” Barooah says, and without good time calibration, it is much harder to see the consequences of your actions. If you want to replace the vagaries of intuition with something more reliable, you first need to gather data. Once you know the facts, you can live by them.
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • ...24 more annotations...
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
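For concreteness, here is a minimal sketch of that procedure on invented crop-yield numbers; a two-sample t-test stands in for Fisher's analysis of variance.

```python
# Sketch of Fisher's logic on made-up yield data (all numbers invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control    = rng.normal(50, 5, size=20)   # unfertilized plots
fertilized = rng.normal(54, 5, size=20)   # fertilized plots

t_stat, p = stats.ttest_ind(fertilized, control)
print(f"P = {p:.4f}")
# Fisher's convention: P < .05 means "statistically significant" and the
# no-effect (null) hypothesis is rejected.
print("significant" if p < 0.05 else "not significant")
```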
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What  eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretations? 
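Putting invented numbers on the barking-dog example above makes the error visible; every probability below is an assumption chosen for illustration.

```python
# The barking-dog example with invented numbers: P(bark | well-fed) = 5%
# is not the same thing as P(hungry | bark); the prior matters.
p_bark_given_fed    = 0.05   # the rare result under the "well-fed" null
p_bark_given_hungry = 0.80   # assumed: hungry dogs bark often
p_hungry            = 0.02   # assumed prior: hunger is rare

p_bark = (p_bark_given_hungry * p_hungry
          + p_bark_given_fed * (1 - p_hungry))
print(f"P(hungry | bark) = {p_bark_given_hungry * p_hungry / p_bark:.2f}")
# ~0.25, far from the 95% certainty the misreading suggests
```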
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
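A quick simulation (illustrative numbers only) shows how sample size alone can manufacture "statistical significance" for a meaningless effect.

```python
# Simulated sketch: a trivially small true effect (1% of a standard
# deviation) becomes "statistically significant" once n is huge.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200_000
old_drug = rng.normal(0.00, 1.0, size=n)
new_drug = rng.normal(0.01, 1.0, size=n)   # clinically meaningless difference

t_stat, p = stats.ttest_ind(new_drug, old_drug)
print(f"p = {p:.4f}, observed difference = {new_drug.mean() - old_drug.mean():.4f}")
```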
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
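The arithmetic behind the multiplicity problem is short enough to show directly:

```python
# With 20 independent true-null tests at the .05 level, the chance of at
# least one false positive is already about 64%.
alpha, k = 0.05, 20
print(f"P(>=1 false positive) = {1 - (1 - alpha) ** k:.2f}")  # 0.64
```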
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
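As a rough illustration, a naive test on the raw aggregate counts quoted above (ignoring the trial-level structure a real meta-analysis must handle, so this is illustrative only) shows how unimpressive the unadjusted difference is.

```python
# Naive check of the quoted raw counts: 55 vs 59 heart attacks per 10,000.
from scipy import stats

table = [[55, 10_000 - 55],    # Avandia: events, non-events
         [59, 10_000 - 59]]    # comparison groups
odds_ratio, p = stats.fisher_exact(table)
print(f"p = {p:.2f}")          # large p: no clear difference in raw data
```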
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
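The standard textbook version of that point, with invented numbers:

```python
# Invented numbers: a test that is 95% accurate in both directions still
# yields mostly false positives when the disease has 1% prevalence.
sensitivity = 0.95   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)
prevalence  = 0.01   # prior probability of disease

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
print(f"P(disease | positive) = {sensitivity * prevalence / p_pos:.2f}")  # ~0.16
```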
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  • Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
Weiye Loh

Let's make science metrics more scientific : Article : Nature - 0 views

  • Measuring and assessing academic performance is now a fact of scientific life.
  • Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use1
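For readers unfamiliar with it, the Hirsch index mentioned above is simple to compute; a minimal sketch:

```python
# Hirsch index: h is the largest number such that h of a researcher's
# papers have at least h citations each.
def h_index(citations):
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
```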
  • Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
  • narrow or biased measures of scientific achievement can lead to narrow and biased science.
  • Global demand for, and interest in, metrics should galvanize stakeholders — national funding agencies, scientific research organizations and publishing houses — to combine forces. They can set an agenda and foster research that establishes sound scientific metrics: grounded in theory, built with high-quality data and developed by a community with strong incentives to use them.
  • Scientists are often reticent to see themselves or their institutions labelled, categorized or ranked. Although happy to tag specimens as one species or another, many researchers do not like to see themselves as specimens under a microscope — they feel that their work is too complex to be evaluated in such simplistic terms. Some argue that science is unpredictable, and that any metric used to prioritize research money risks missing out on an important discovery from left field.
    • Weiye Loh
       
      It is ironic that while scientists feel that their work is too complex to be evaluated in simplistic terms or metrics, they nevertheless feel OK evaluating the world in simplistic terms. 
  • It is true that good metrics are difficult to develop, but this is not a reason to abandon them. Rather it should be a spur to basing their development in sound science. If we do not press harder for better metrics, we risk making poor funding decisions or sidelining good scientists.
  • Metrics are data driven, so developing a reliable, joined-up infrastructure is a necessary first step.
  • We need a concerted international effort to combine, augment and institutionalize these databases within a cohesive infrastructure.
  • On an international level, the issue of a unique researcher identification system is one that needs urgent attention. There are various efforts under way in the open-source and publishing communities to create unique researcher identifiers using the same principles as the Digital Object Identifier (DOI) protocol, which has become the international standard for identifying unique documents. The ORCID (Open Researcher and Contributor ID) project, for example, was launched in December 2009 by parties including Thomson Reuters and Nature Publishing Group. The engagement of international funding agencies would help to push this movement towards an international standard.
  • if all funding agencies used a universal template for reporting scientific achievements, it could improve data quality and reduce the burden on investigators.
    • Weiye Loh
       
      So in future, we'll only have one robust metric to evaluate scientific contribution? hmm...
  • Importantly, data collected for use in metrics must be open to the scientific community, so that metric calculations can be reproduced. This also allows the data to be efficiently repurposed.
  • As well as building an open and consistent data infrastructure, there is the added challenge of deciding what data to collect and how to use them. This is not trivial. Knowledge creation is a complex process, so perhaps alternative measures of creativity and productivity should be included in scientific metrics, such as the filing of patents, the creation of prototypes4 and even the production of YouTube videos.
  • Perhaps publications in these different media should be weighted differently in different fields.
  • There needs to be a greater focus on what these data mean, and how they can be best interpreted.
  • This requires the input of social scientists, rather than just those more traditionally involved in data capture, such as computer scientists.
  • An international data platform supported by funding agencies could include a virtual 'collaboratory', in which ideas and potential solutions can be posited and discussed. This would bring social scientists together with working natural scientists to develop metrics and test their validity through wikis, blogs and discussion groups, thus building a community of practice. Such a discussion should be open to all ideas and theories and not restricted to traditional bibliometric approaches.
  • Far-sighted action can ensure that metrics goes beyond identifying 'star' researchers, nations or ideas, to capturing the essence of what it means to be a good scientist.
  • Let's make science metrics more scientific. Abstract: To capture the essence of good science, stakeholders must combine forces to create an open, sound and consistent system for measuring all the activities that make up academic productivity, says Julia Lane.
Weiye Loh

Times Higher Education - Unconventional thinkers or recklessly dangerous minds? - 0 views

  • The origin of Aids denialism lies with one man. Peter Duesberg has spent the whole of his academic career at the University of California, Berkeley. In the 1970s he performed groundbreaking work that helped show how mutated genes cause cancer, an insight that earned him a well-deserved international reputation.
  • in the early 1980s, something changed. Duesberg attempted to refute his own theories, claiming that it was not mutated genes but rather environmental toxins that are cancer's true cause. He dismissed the studies of other researchers who had furthered his original work. Then, in 1987, he published a paper that extended his new train of thought to Aids.
  • Initially many scientists were open to Duesberg's ideas. But as evidence linking HIV to Aids mounted - crucially the observation that ARVs brought Aids sufferers who were on the brink of death back to life - the vast majority concluded that the debate was over. Nonetheless, Duesberg persisted with his arguments, and in doing so attracted a cabal of supporters
  • In 1999, denialism secured its highest-profile advocate: Thabo Mbeki, who was then president of South Africa. Having studied denialist literature, Mbeki decided that the consensus on Aids sounded too much like a "biblical absolute truth" that couldn't be questioned. The following year he set up a panel of advisers, nearly half of whom were Aids denialists, including Duesberg. The resultant health policies cut funding for clinics distributing ARVs, withheld donor medication and blocked international aid grants. Meanwhile, Mbeki's health minister, Manto Tshabalala-Msimang, promoted the use of alternative Aids remedies, such as beetroot and garlic.
  • In 2007, Nicoli Nattrass, an economist and director of the Aids and Society Research Unit at the University of Cape Town, estimated that, between 1999 and 2007, Mbeki's Aids denialist policies led to more than 340,000 premature deaths. Later, scientists Max Essex, Pride Chigwedere and other colleagues at the Harvard School of Public Health arrived at a similar figure.
  • "I don't think it's hyperbole to say the (Mbeki regime's) Aids policies do not fall short of a crime against humanity," says Kalichman. "The science behind these medications was irrefutable, and yet they chose to buy into pseudoscience and withhold life-prolonging, if not life-saving, medications from the population. I just don't think there's any question that it should be looked into and investigated."
  • In fairness, there was a reason to have faint doubts about HIV treatment in the early days of Mbeki's rule.
  • some individual cases had raised questions about their reliability on mass rollout. In 2002, for example, Sarah Hlalele, a South African HIV patient and activist from a settlement background, died from "lactic acidosis", a side-effect of her drug combination. Today doctors know enough about mixing ARVs not to make the same mistake, but at the time her death terrified the medical community.
  • any trial would be futile because of the uncertainties over ARVs that existed during Mbeki's tenure and the fact that others in Mbeki's government went along with his views (although they have since renounced them). "Mbeki was wrong, but propositions we had established then weren't as incontestably established as they are now ... So I think these calls (for genocide charges or criminal trials) are misguided, and I think they're a sideshow, and I don't support them."
  • Regardless of the culpability of politicians, the question remains whether scientists themselves should be allowed to promote views that go wildly against the mainstream consensus. The history of science is littered with offbeat ideas that were ridiculed by the scientific communities of the time. Most of these ideas missed the textbooks and went straight into the waste-paper basket, but a few - continental drift, the germ basis of disease or the Earth's orbit around the Sun, for instance - ultimately proved to be worth more than the paper they were written on. In science, many would argue, freedom of expression is too important to throw away.
  • Such an issue is engulfing the Elsevier journal Medical Hypotheses. Last year the journal, which is not peer reviewed, published a paper by Duesberg and others claiming that the South African Aids death-toll estimates were inflated, while reiterating the argument that there is "no proof that HIV causes Aids". That prompted several Aids scientists to complain to Elsevier, which responded by retracting the paper and asking the journal's editor, Bruce Charlton, to implement a system of peer review. Having refused to change the editorial policy, Charlton faces the sack
  • There are people who would like the journal to keep its current format and continue accepting controversial papers, but for Aids scientists, Duesberg's paper was a step too far. Although it was deleted from both the journal's website and the Medline database, its existence elsewhere on the internet drove Chigwedere and Essex to publish a peer-reviewed rebuttal earlier this year in AIDS and Behavior, lest any readers be "hoodwinked" into thinking there was genuine debate about the causes of Aids.
  • Duesberg believes he is being "censored", although he has found other outlets. In 1991, he helped form "The Group for the Scientific Reappraisal of the HIV/Aids Hypothesis" - now called Rethinking Aids, or simply The Group - to publicise denialist information. Backed by his Berkeley credentials, he regularly promotes his views in media articles and films. Meanwhile, his closest collaborator, David Rasnick, tells "anyone who asks" that "HIV drugs do more harm than good".
  • "Is academic freedom such a precious concept that scientists can hide behind it while betraying the public so blatantly?" asked John Moore, an Aids scientist at Cornell University, on a South African health news website last year. Moore suggested that universities could put in place a "post-tenure review" system to ensure that their researchers act within accepted bounds of scientific practice. "When the facts are so solidly against views that kill people, there must be a price to pay," he added.
  • Now it seems Duesberg may have to pay that price since it emerged last month that his withdrawn paper has led to an investigation at Berkeley for misconduct. Yet for many in the field, chasing fellow scientists comes second to dealing with the Aids pandemic.
  • 6 May 2010: Aids denialism is estimated to have killed many thousands. Jon Cartwright asks if scientists should be held accountable, while overleaf Bruce Charlton defends his decision to publish the work of an Aids sceptic, which sparked a row that has led to his being sacked and his journal abandoning its raison d'être: presenting controversial ideas for scientific debate
Weiye Loh

Rationally Speaking: The problem of replicability in science - 0 views

  • The problem of replicability in science, by Massimo Pigliucci (illustration from xkcd)
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant time of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet from my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which meant to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well known statistical phenomenon of the regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of the regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment to repeat itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated.
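That mechanism is easy to reproduce in a small simulation, assuming a toy model in which performance is skill plus luck:

```python
# Regression to the mean in miniature (simulated): select the top 1% of
# performers in year one; their year-two scores fall back toward average.
import numpy as np

rng = np.random.default_rng(3)
skill = rng.normal(0, 1, size=10_000)
year1 = skill + rng.normal(0, 1, size=10_000)  # performance = skill + luck
year2 = skill + rng.normal(0, 1, size=10_000)  # same skill, fresh luck

top = year1 >= np.percentile(year1, 99)        # "magazine cover" performers
print(f"top group, year 1: {year1[top].mean():.2f}")
print(f"same group, year 2: {year2[top].mean():.2f}")  # markedly lower
```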
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well-known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, “sexy” results. Which creates a powerful filter against negative, or marginally significant results. What you see in science journals, in other words, isn’t a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty-handed.
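The filter is likewise easy to simulate (parameters invented for illustration): run many underpowered studies of a weak true effect, "publish" only the significant ones, and the published effect sizes come out inflated.

```python
# Publication-bias filter in miniature (simulated).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_effect, n = 0.2, 30
published = []
for _ in range(2_000):
    treated = rng.normal(true_effect, 1, size=n)
    control = rng.normal(0.0, 1, size=n)
    t_stat, p = stats.ttest_ind(treated, control)
    if p < 0.05:                          # the journal's filter
        published.append(treated.mean() - control.mean())

print(f"true effect: {true_effect}")
print(f"mean published effect: {np.mean(published):.2f}")  # inflated upward
```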
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPU’s — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, it turns out that about ⅓ of published studies are never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh
       
      Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change)
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh
       
      It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
Weiye Loh

Rationally Speaking | Official Podcast of New York City Skeptics - Current Ep... - 0 views

  • Massimo and Julia sit down in front of a live audience at the Jefferson Market Library in New York City for a conversation about science, non-science, and pseudo-science. Based on Massimo's book "Nonsense on Stilts: How to Tell Science from Bunk," the topics they cover include whether the qualitative sciences are less reliable than quantitative ones, the re-running of the tape of life, and who is smarter: physicists, biologists, or psychologists? Also, why are evolutionary psychologists so fixated on sex?
Weiye Loh

Rationally Speaking: Are Intuitions Good Evidence? - 0 views

  • Is it legitimate to cite one’s intuitions as evidence in a philosophical argument?
  • appeals to intuitions are ubiquitous in philosophy. What are intuitions? Well, that’s part of the controversy, but most philosophers view them as intellectual “seemings.” George Bealer, perhaps the most prominent defender of intuitions-as-evidence, writes, “For you to have an intuition that A is just for it to seem to you that A… Of course, this kind of seeming is intellectual, not sensory or introspective (or imaginative).”2 Other philosophers have characterized them as “noninferential belief due neither to perception nor introspection”3 or alternatively as “applications of our ordinary capacities for judgment.”4
  • Philosophers may not agree on what, exactly, intuition is, but that doesn’t stop them from using it. “Intuitions often play the role that observation does in science – they are data that must be explained, confirmers or the falsifiers of theories,” Brian Talbot says.5 Typically, the way this works is that a philosopher challenges a theory by applying it to a real or hypothetical case and showing that it yields a result which offends his intuitions (and, he presumes, his readers’ as well).
  • ...16 more annotations...
  • For example, John Searle famously appealed to intuition to challenge the notion that a computer could ever understand language: “Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output)… If the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.” Should we take Searle’s intuition that such a system would not constitute “understanding” as good evidence that it would not? Many critics of the Chinese Room argument argue that there is no reason to expect our intuitions about intelligence and understanding to be reliable.
  • Ethics leans especially heavily on appeals to intuition, with a whole school of ethicists (“intuitionists”) maintaining that a person can see the truth of general ethical principles not through reason, but because he “just sees without argument that they are and must be true.”6
  • Intuitions are also called upon to rebut ethical theories such as utilitarianism: maximizing overall utility would require you to kill one innocent person if, in so doing, you could harvest her organs and save five people in need of transplants. Such a conclusion is taken as a reductio ad absurdum, requiring utilitarianism to be either abandoned or radically revised – not because the conclusion is logically wrong, but because it strikes nearly everyone as intuitively wrong.
  • British philosopher G.E. Moore used intuition to argue that the existence of beauty is good irrespective of whether anyone ever gets to see and enjoy that beauty. Imagine two planets, he said, one full of stunning natural wonders – trees, sunsets, rivers, and so on – and the other full of filth. Now suppose that nobody will ever have the opportunity to glimpse either of those two worlds. Moore concluded, “Well, even so, supposing them quite apart from any possible contemplation by human beings; still, is it irrational to hold that it is better that the beautiful world should exist than the one which is ugly? Would it not be well, in any case, to do what we could to produce it rather than the other? Certainly I cannot help thinking that it would."7
  • Although similar appeals to intuition can be found throughout all the philosophical subfields, their validity as evidence has come under increasing scrutiny over the last two decades, from philosophers such as Hilary Kornblith, Robert Cummins, Stephen Stich, Jonathan Weinberg, and Jaakko Hintikka (links go to representative papers from each philosopher on this issue). The severity of their criticisms varies from Weinberg’s warning that “We simply do not know enough about how intuitions work,” to Cummins’ wholesale rejection of philosophical intuition as “epistemologically useless.”
  • One central concern for the critics is that a single question can inspire totally different, and mutually contradictory, intuitions in different people.
  • For example, I disagree with Moore’s intuition that it would be better for a beautiful planet to exist than an ugly one even if there were no one around to see it. I can’t understand what the words “better” and “worse,” let alone “beautiful” and “ugly,” could possibly mean outside the domain of the experiences of conscious beings
  • If we want to take philosophers’ intuitions as reason to believe a proposition, then the existence of opposing intuitions leaves us in the uncomfortable position of having reason to believe both a proposition and its opposite.
  • “I suspect there is overall less agreement than standard philosophical practice presupposes, because having the ‘right’ intuitions is the entry ticket to various subareas of philosophy,” Weinberg says.
  • But the problem that intuitions are often not universally shared is overshadowed by another problem: even if an intuition is universally shared, that doesn’t mean it’s accurate. For in fact there are many universal intuitions that are demonstrably false.
  • People who have not been taught otherwise typically assume that an object dropped out of a moving plane will fall straight down to earth, at exactly the same latitude and longitude from which it was dropped. What will actually happen is that, because the object begins its fall with the same forward momentum it had while it was on the plane, it will continue to travel forward, tracing out a curve as it falls and not a straight line. (A quick kinematic sketch of this appears after this list.) “Considering the inadequacies of ordinary physical intuitions, it is natural to wonder whether ordinary moral intuitions might be similarly inadequate,” Princeton’s Gilbert Harman has argued,9 and the same could be said for our intuitions about consciousness, metaphysics, and so on.
  • We can’t usually “check” the truth of our philosophical intuitions externally, with an experiment or a proof, the way we can in physics or math. But it’s not clear why we should expect intuitions to be true. If we have an innate tendency towards certain intuitive beliefs, it’s likely because they were useful to our ancestors.
  • But there’s no reason to expect that the intuitions which were true in the world of our ancestors would also be true in other, unfamiliar contexts
  • And for some useful intuitions, such as moral ones, “truth” may have been beside the point. It’s not hard to see how moral intuitions in favor of fairness and generosity would have been crucial to the survival of our ancestors’ tribes, as would the intuition to condemn tribe members who betrayed those reciprocal norms. If we can account for the presence of these moral intuitions by the fact that they were useful, is there any reason left to hypothesize that they are also “true”? The same question could be asked of the moral intuitions which Jonathan Haidt has classified as “purity-based” – an aversion to incest, for example, would clearly have been beneficial to our ancestors. Since that fact alone suffices to explain the (widespread) presence of the “incest is morally wrong” intuition, why should we take that intuition as evidence that “incest is morally wrong” is true?
  • The still-young debate over intuition will likely continue to rage, especially since it’s intertwined with a rapidly growing body of cognitive and social psychological research examining where our intuitions come from and how they vary across time and place.
  • Its resolution bears on the work of literally every field of analytic philosophy, except perhaps logic. Can analytic philosophy survive without intuition? (If so, what would it look like?) And can the debate over the legitimacy of appeals to intuition be resolved with an appeal to intuition?
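To make the falling-object example concrete, here is a minimal kinematic sketch (my own illustration, not from the article; the plane's speed and altitude are assumed values):

    # Toy illustration of the falling-object intuition: an object released
    # from a moving plane keeps the plane's forward velocity, so it traces
    # a parabola rather than falling straight down. Numbers are assumed.
    g = 9.81            # gravitational acceleration, m/s^2
    v_forward = 250.0   # assumed cruising speed of the plane, m/s
    altitude = 3000.0   # assumed release altitude, m

    t_fall = (2 * altitude / g) ** 0.5   # from h = (1/2) g t^2, ignoring drag
    x_forward = v_forward * t_fall       # forward drift during the fall

    print(f"fall time ~{t_fall:.0f} s, forward drift ~{x_forward / 1000:.1f} km")
    # ~25 s of fall and ~6.2 km of forward travel -- nowhere near "straight down".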
Weiye Loh

Adventures in Flay-land: Scepticism versus Denialism - Delingpole Part II - 0 views

  • wrote a piece about James Delingpole's unfortunate appearance on the BBC program Horizon on Monday. In that piece I referred to one of his own Telegraph articles in which he criticizes renowned sceptic Dr Ben Goldacre for betraying the principles of scepticism in his regard of the climate change debate. That article turns out to be rather instructive, as it highlights perfectly the difference between real scepticism and the false scepticism commonly described as denialism.
  • It appears that James has tremendous respect for Ben Goldacre, who is a qualified medical doctor and has written a best-selling book about science scepticism called Bad Science and continues to write a popular Guardian science column. Here's what Delingpole has to say about Dr Goldacre: Many of Goldacre’s campaigns I support. I like and admire what he does. But where I don’t respect him one jot is in his views on ‘Climate Change,’ for they jar so very obviously with supposed stance of determined scepticism in the face of establishment lies.
  • Scepticism is not some sort of rebellion against the establishment as Delingpole claims. It is not in itself an ideology. It is merely an approach to evaluating new information. There are varying definitions of scepticism, but Goldacre's variety goes like this: A sceptic does not support or promote any new theory until it is proven to his or her satisfaction that the new theory is the best available. Evidence is examined and accepted or discarded depending on its persuasiveness and reliability. Sceptics like Ben Goldacre have a deep appreciation for the scientific method of testing a hypothesis through experimentation and are generally happy to change their minds when the evidence supports the opposing view. Sceptics are not true believers, but they search for the truth. Far from challenging the established scientific consensus, Goldacre in Bad Science typically defends the scientific consensus against alternative medical views that fall back on untestable positions. In science the consensus is sometimes proven wrong, and while this process is imperfect it eventually results in the old consensus being replaced with a new one.
  • ...11 more annotations...
  • So the question becomes "what is denialism?" Denialism is a mindset that chooses to deny reality in order to avoid an uncomfortable truth. Denialism creates a false sense of truth through the subjective selection of evidence (cherry picking). Unhelpful evidence is rejected and excuses are made, while supporting evidence is accepted uncritically - its meaning and importance exaggerated. It is a common feature of denialism to claim the existence of some sort of powerful conspiracy to suppress the truth. Rejection by the mainstream of some piece of evidence supporting the denialist view, no matter how flawed, is taken as further proof of the supposed conspiracy. In this way the denialist always has a fallback position.
  • Delingpole makes the following claim: Whether Goldacre chooses to ignore it or not, there are many, many hugely talented, intelligent men and women out there – from mining engineer turned Hockey-Stick-breaker Steve McIntyre and economist Ross McKitrick to bloggers Donna LaFramboise and Jo Nova to physicist Richard Lindzen….and I really could go on and on – who have amassed a body of hugely powerful evidence to show that the AGW meme which has spread like a virus around the world these last 20 years is seriously flawed.
  • So he mentions a bunch of people who are intelligent and talented and have amassed evidence to the effect that the consensus of AGW (Anthropogenic Global Warming) is a myth. Should I take his word for it? No. I am a sceptic. I will examine the evidence and the people behind it.
  • McIntyre and McKitrick (MM) claim that global temperatures are not accelerating. The claims have, however, been roundly disproved, as explained here. It is worth noting at this point that neither man is a climate scientist. McKitrick is an economist and McIntyre is a mining industry policy analyst. It is clear from the very detailed rebuttal article that McIntyre and McKitrick have no qualifications to critique the earlier paper and betray fundamental misunderstandings of the methodologies employed in that study.
  • This Wikipedia article explains in plainer layman's terms how the MM claims are faulty.
  • It is difficult for me to find out much about blogger Donna LaFramboise. As far as I can see she runs her own blog at http://nofrakkingconsensus.wordpress.com and is the founder of another site here http://www.noconsensus.org/. It's not very clear to me what her credentials are.
  • She seems to be a critic of the so-called climate bible, a comprehensive report by the UN Intergovernmental Panel on Climate Change (IPCC)
  • I am familiar with some of the criticisms of this panel. Working Group 2 famously overstated the estimated rate of disappearance of the Himalayan glacier in 2007 and was forced to admit the error. Working Group 2 is a panel of biologists and sociologists whose job is to evaluate the impact of climate change. These people are not climate scientists. Their report takes for granted the scientific basis of climate change, which has been delivered by Working Group 1 (the climate scientists). The science revealed by Working Group 1 is regarded as sound (of course this is just a conspiracy, right?). At any rate, I don't know why I should pay attention to this blogger. Anyone can write a blog and anyone with money can own a domain. She may be intelligent, but I don't know anything about her and with all the millions of blogs out there I'm not convinced hers is of any special significance.
  • Richard Lindzen. Okay, there's information about this guy. He has a wiki page, which is more than I can say for the previous two. He is an atmospheric physicist and Professor of Meteorology at MIT.
  • According to Wikipedia, it would seem that Lindzen is well respected in his field and represents the 3% of the climate science community who disagree with the 97% consensus.
  • The second to last paragraph of Delingpole's article asks this: If Goldacre really wants to stick his neck out, why doesn’t he try arguing against a rich, powerful, bullying Climate-Change establishment which includes all three British main political parties, the National Academy of Sciences, the Royal Society, the Prince of Wales, the Prime Minister, the President of the USA, the EU, the UN, most schools and universities, the BBC, most of the print media, the Australian Government, the New Zealand Government, CNBC, ABC, the New York Times, Goldman Sachs, Deutsche Bank, most of the rest of the City, the wind farm industry, all the Big Oil companies, any number of rich charitable foundations, the Church of England and so on? I hope Ben won't mind if I take this one for him (first of all, Big Oil companies? Are you serious?) The answer is a question and the question is "Where is your evidence?"
Weiye Loh

Rationally Speaking: Studying folk morality: philosophy, psychology, or what? - 0 views

  • in the magazine article Joshua mentions several studies of “folk morality,” i.e. of how ordinary people think about moral problems. The results are fascinating. It turns out that people’s views are correlated with personality traits, with subjects who score high on “openness to experience” being reliably more relativist than objectivist about morality (I am not using the latter term in the infamous Randian meaning here, but as Knobe does, to indicate the idea that morality has objective bases).
  • Other studies show that people who are capable of considering multiple options in solving mathematical puzzles also tend to be moral relativists, and — in a study co-authored by Knobe himself — the very same situation (infanticide) was judged along a sliding scale from objectivism to relativism depending on whether the hypothetical scenario involved a fellow American (presumably sharing our same general moral values), a member of an imaginary Amazonian tribe (for which infanticide was acceptable), or an alien from the planet Pentar (belonging to a race whose only goal in life is to turn everything into equilateral pentagons, and for which killing individuals that might get in the way of that lofty objective is a duty). Oh, and related research also shows that young children tend to be objectivists, while young adults are usually relativists — but that later in life one’s primordial objectivism apparently experiences a comeback.
  • This is all very interesting social science, but is it philosophy? Granted, the differences between various disciplines are often not clear cut, and of course whenever people engage in truly inter-disciplinary work we should simply applaud the effort and encourage further work. But I do wonder in what sense, if any, the kinds of results that Joshua and his colleagues find have much to do with moral philosophy.
  • ...6 more annotations...
  • there seems to me the potential danger of confusing various categories of moral discourse. For instance, are the “folks” studied in these cases actually relativist, or perhaps adherents to one of several versions of moral anti-realism? The two are definitely not the same, but I doubt that the subjects in question could tell the difference (and I wouldn’t expect them to, after all they are not philosophers).
  • why do we expect philosophers to learn from “folk morality” when we do not expect, say, physicists to learn from folk physics (which tends to be Aristotelian in nature), or statisticians from people’s understanding of probability theory (which is generally remarkably poor, as casino owners know very well)? Or even, while I’m at it, why not ask literary critics to discuss Shakespeare in light of what common folks think about the bard (making sure, perhaps, that they have at least read his works, and not just watched the movies)?
  • Hence, my other examples of stat (i.e., math) and literary criticism. I conceive of philosophy in general, and moral philosophy in particular, as more akin to a (science-informed, to be sure) mix between logic and criticism. Some moral philosophy consists in engaging an “if ... then” sort of scenario, akin to logical-mathematical thinking, where one begins with certain axioms and attempts to derive the consequences of such axioms. In other respects, moral philosophers exercise reflective criticism concerning those consequences as they might be relevant to practical problems.
  • For instance, we may write philosophically about abortion, and begin our discussion from a comparison of different conceptions of “person.” We might conclude that “if” one adopts conception X of what a person is, “then” abortion is justifiable under such and such conditions; while “if” one adopts conception Y of a person, “then” abortion is justifiable under a different set of conditions, or not justifiable at all. We could, of course, back up even further and engage in a discussion of what “personhood” is, thus moving from moral philosophy to metaphysics.
  • Nowhere in the above are we going to ask “folks” what they think a person is, or how they think their implicit conception of personhood informs their views on abortion. Of course people’s actual views on abortion are crucial — especially for public policy — and they are intrinsically interesting to social scientists. But they don’t seem to me to make much more contact with philosophy than the above mentioned popular opinions on Shakespeare make contact with serious literary criticism. And please, let’s not play the cheap card of “elitism,” unless we are willing to apply the label to just about any intellectual endeavor, in any discipline.
  • There is one area in which experimental philosophy can potentially contribute to philosophy proper (as opposed to social science). Once we have a more empirically grounded understanding of what people’s moral reasoning actually is, then we can analyze the likely consequences of that reasoning for a variety of societal issues. But now we would be doing something more akin to political than moral philosophy.
  •  
    My colleague Joshua Knobe at Yale University recently published an intriguing article in The Philosopher's Magazine about the experimental philosophy of moral decision making. Joshua and I have had a nice chat during a recent Rationally Speaking podcast dedicated to experimental philosophy, but I'm still not convinced about the whole enterprise.
Weiye Loh

Can a group of scientists in California end the war on climate change? | Science | The ... - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • ...20 more annotations...
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups was enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. "You can think of statisticians as the keepers of the scientific method," Brillinger told me. "Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for."
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
  • The wealth of data Rohde has collected so far – and some dates back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, of itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperature records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will instead take temperature records from individual stations and weight them according to how reliable they are (a toy sketch of the contrast appears after this list).
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in celsius instead of fahrenheit. The times at which measurements are taken vary, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station more from wind and sun from one year to the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human interference. When the team publishes its results, this is where the scrutiny will be most intense.
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
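For readers who want the contrast between the two averaging strategies in concrete terms, here is a minimal sketch. The station values and reliability weights are invented for illustration; the real big-three and Berkeley methods are far more elaborate than this:

    import numpy as np

    # Invented records: temperature anomaly (deg C) and an assumed
    # reliability score (0-1) for four stations in one grid square.
    anomaly     = np.array([0.82, 2.40, 0.78, 0.91])
    reliability = np.array([0.90, 0.20, 0.80, 0.70])

    # Big-three style, as described in the article: every record in the
    # square counts equally.
    equal_weight_mean = anomaly.mean()

    # Berkeley style, as described: weight each station's record by an
    # estimate of how reliable it is.
    weighted_mean = np.average(anomaly, weights=reliability)

    print(f"equal weights: {equal_weight_mean:.2f} C")        # ~1.23 C
    print(f"reliability-weighted: {weighted_mean:.2f} C")     # ~0.95 C
    # The suspect outlier (2.40 with weight 0.2) inflates the simple
    # average but contributes far less to the weighted one.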
Weiye Loh

Roger Pielke Jr.'s Blog: It Is Always the Media's Fault - 0 views

  • Last summer NCAR issued a dramatic press release announcing that oil from the Gulf spill would soon be appearing on the beaches of the Atlantic ocean. I discussed it here. Here are the first four paragraphs of that press release: BOULDER—A detailed computer modeling study released today indicates that oil from the massive spill in the Gulf of Mexico might soon extend along thousands of miles of the Atlantic coast and open ocean as early as this summer. The modeling results are captured in a series of dramatic animations produced by the National Center for Atmospheric Research (NCAR) and collaborators. The research was supported in part by the National Science Foundation, NCAR’s sponsor. The results were reviewed by scientists at NCAR and elsewhere, although not yet submitted for peer-review publication. “I’ve had a lot of people ask me, ‘Will the oil reach Florida?’” says NCAR scientist Synte Peacock, who worked on the study. “Actually, our best knowledge says the scope of this environmental disaster is likely to reach far beyond Florida, with impacts that have yet to be understood.” The computer simulations indicate that, once the oil in the uppermost ocean has become entrained in the Gulf of Mexico’s fast-moving Loop Current, it is likely to reach Florida's Atlantic coast within weeks. It can then move north as far as about Cape Hatteras, North Carolina, with the Gulf Stream, before turning east. Whether the oil will be a thin film on the surface or mostly subsurface due to mixing in the uppermost region of the ocean is not known.
  • A few weeks ago NCAR's David Hosansky, who presumably wrote that press release, asked whether NCAR got it wrong. His answer? No, not really: During last year’s crisis involving the massive release of oil into the Gulf of Mexico, NCAR issued a much-watched animation projecting that the oil could reach the Atlantic Ocean. But detectable amounts of oil never made it to the Atlantic, at least not in an easily visible form on the ocean surface. Not surprisingly, we’ve heard from a few people asking whether NCAR got it wrong. These events serve as a healthy reminder of a couple of things: the difference between a projection and an actual forecast, and the challenges of making short-term projections of natural processes that can act chaotically, such as ocean currents. (A toy ensemble sketch after this list illustrates the projection/forecast distinction.)
  • What then went wrong? First, the projection. Scientists from NCAR, the Department of Energy’s Los Alamos National Laboratory, and IFM-GEOMAR in Germany did not make a forecast of where the oil would go. Instead, they issued a projection. While there’s not always a clear distinction between the two, forecasts generally look only days or hours into the future and are built mostly on known elements (such as the current amount of humidity in the atmosphere). Projections tend to look further into the future and deal with a higher number of uncertainties (such as the rate at which oil degrades in open waters and the often chaotic movements of ocean currents). Aware of the uncertainties, the scientific team projected the likely path of the spill with a computer model of a liquid dye. They used dye rather than actual oil, which undergoes bacterial breakdown, because a reliable method to simulate that breakdown was not available. As it turned out, the oil in the Gulf broke down quickly due to exceptionally strong bacterial action and, to some extent, the use of chemical dispersants.
  • ...3 more annotations...
  • Second, the challenges of short-term behavior. The Gulf's Loop Current acts as a conveyor belt, moving from the Yucatan through the Florida Straits into the Atlantic. Usually, the current curves northward near the Louisiana and Mississippi coasts—a configuration that would have put it on track to pick up the oil and transport it into open ocean. However, the current’s short-term movements over a few weeks or even months are chaotic and impossible to predict. Sometimes small eddies, or mini-currents, peel off, shifting the position and strength of the main current. To determine the threat to the Atlantic, the research team studied averages of the Loop Current’s past behavior in order to simulate its likely course after the spill and ran several dozen computer simulations under various scenarios. Fortunately for the East Coast, the Loop Current did not behave in its usual fashion but instead remained farther south than usual, which kept it far from the Louisiana and Mississippi coast during the crucial few months before the oil degraded and/or was dispersed with chemical treatments.
  • The Loop Current typically goes into a southern configuration about every 6 to 19 months, although it rarely remains there for very long. NCAR scientist Synte Peacock, who worked on the projection, explains that part of the reason the current is unpredictable is “no two cycles of the Loop Current are ever exactly the same." She adds that the cycles are influenced by such variables as how large the eddy is, where the current detaches and moves south, and how long it takes for the current to reform. Computer models can simulate the currents realistically, she adds. But they cannot predict when the currents will change over to a new cycle. The scientists were careful to explain that their simulations were a suite of possible trajectories demonstrating what was likely to happen, but not a definitive forecast of what would happen. They reiterated that point in a peer-reviewed study on the simulations that appeared last August in Environmental Research Letters. 
  • So who was at fault? According to Hosansky it was those dummies in the media: These caveats, however, got lost in much of the resulting media coverage. Another perspective is that having some of these caveats in the press release might have been a good idea.
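The projection-versus-forecast distinction is easier to see with a toy ensemble in the spirit of the "several dozen computer simulations under various scenarios" described above. The sketch below is purely illustrative; the scenario probabilities are invented, not NCAR's:

    import random

    random.seed(1)

    def one_run():
        # Scenario draw: does the Loop Current sit in its usual northern
        # configuration (on track to pick up the oil), and does the oil
        # persist long enough to reach the Florida Straits?
        loop_current_north = random.random() < 0.7   # assumed scenario weight
        oil_persists = random.random() < 0.5         # assumed scenario weight
        return loop_current_north and oil_persists

    runs = [one_run() for _ in range(48)]            # "several dozen" simulations
    fraction = sum(runs) / len(runs)
    print(f"{fraction:.0%} of ensemble members carry oil into the Atlantic")
    # A projection reports this spread of possible trajectories; a forecast
    # asserts a single outcome. The press coverage collapsed one into the other.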
Weiye Loh

Freakonomics » How Advancements in Neuroscience Will Influence the Law - 0 views

  • as new technologies emerge to better reveal people’s experiences, the law ought to do more to take these experiences into account. In tort and criminal law, we often ignore or downplay the importance of subjective experience. This is no surprise. During the hundreds of years in which these bodies of law developed, we had very poor methods of making inferences about the experiences of others. As we get better at measuring experiences, however, I make the normative claim that we ought to change fundamental aspects of the law to take better account of people’s experiences.
  • Researchers are trying to develop more accurate methods of detecting deception using brain imaging. While many in the scientific community doubt that current brain-based methods of lie detection are sufficiently accurate and reliable to use in forensic contexts, that has stopped neither companies from marketing fMRI lie detection services to the public, nor litigants from trying to introduce such evidence in court.
  • Given the substantial possibility that we will develop reasonably accurate lie detectors within the next thirty years, our current secretive behaviors have already become harder to hide.
  •  
    A new article published in the Emory Law Journal (full version here) entitled "The Experiential Future of the Law," by Brooklyn Law School professor Adam Kolber, looks at how these advancements will continue over the next 30 years (to the point of near mind-reading), and how they'll inevitably lead to changes in the law.
Weiye Loh

Don't dumb me down | Science | The Guardian - 0 views

  • Science stories usually fall into three families: wacky stories, scare stories and "breakthrough" stories.
  • these stories are invariably written by the science correspondents, and hotly followed, to universal jubilation, with comment pieces, by humanities graduates, on how bonkers and irrelevant scientists are.
  • A close relative of the wacky story is the paradoxical health story. Every Christmas and Easter, regular as clockwork, you can read that chocolate is good for you (www.badscience.net/?p=67), just like red wine is, and with the same monotonous regularity
  • ...19 more annotations...
  • At the other end of the spectrum, scare stories are - of course - a stalwart of media science. Based on minimal evidence and expanded with poor understanding of its significance, they help perform the most crucial function for the media, which is selling you, the reader, to their advertisers. The MMR disaster was a fantasy entirely of the media's making (www.badscience.net/?p=23), which failed to go away. In fact the Daily Mail is still publishing hysterical anti-immunisation stories, including one calling the pneumococcus vaccine a "triple jab", presumably because they misunderstood that the meningitis, pneumonia, and septicaemia it protects against are all caused by the same pneumococcus bacteria (www.badscience.net/?p=118).
  • people periodically come up to me and say, isn't it funny how that Wakefield MMR paper turned out to be Bad Science after all? And I say: no. The paper always was and still remains a perfectly good small case series report, but it was systematically misrepresented as being more than that, by media that are incapable of interpreting and reporting scientific data.
  • Once journalists get their teeth into what they think is a scare story, trivial increases in risk are presented, often out of context, but always using one single way of expressing risk, the "relative risk increase", that makes the danger appear disproportionately large (www.badscience.net/?p=8). A worked example of the two framings appears after this list.
  • The media obsession with "new breakthroughs": a more subtly destructive category of science story. It's quite understandable that newspapers should feel it's their job to write about new stuff. But in the aggregate, these stories sell the idea that science, and indeed the whole empirical world view, is only about tenuous, new, hotly-contested data
  • Articles about robustly-supported emerging themes and ideas would be more stimulating, of course, than most single experimental results, and these themes are, most people would agree, the real developments in science. But they emerge over months and several bits of evidence, not single rejiggable press releases. Often, a front page science story will emerge from a press release alone, and the formal academic paper may never appear, or appear much later, and then not even show what the press reports claimed it would (www.badscience.net/?p=159).
  • there was an interesting essay in the journal PLoS Medicine, about how most brand new research findings will turn out to be false (www.tinyurl.com/ceq33). It predictably generated a small flurry of ecstatic pieces from humanities graduates in the media, along the lines of science is made-up, self-aggrandising, hegemony-maintaining, transient fad nonsense; and this is the perfect example of the parody hypothesis that we'll see later. Scientists know how to read a paper. That's what they do for a living: read papers, pick them apart, pull out what's good and bad.
  • Scientists never said that tenuous small new findings were important headline news - journalists did.
  • there is no useful information in most science stories. A piece in the Independent on Sunday from January 11 2004 suggested that mail-order Viagra is a rip-off because it does not contain the "correct form" of the drug. I don't use the stuff, but there were 1,147 words in that piece. Just tell me: was it a different salt, a different preparation, a different isomer, a related molecule, a completely different drug? No idea. No room for that one bit of information.
  • Remember all those stories about the danger of mobile phones? I was on holiday at the time, and not looking things up obsessively on PubMed; but off in the sunshine I must have read 15 newspaper articles on the subject. Not one told me what the experiment flagging up the danger was. What was the exposure, the measured outcome, was it human or animal data? Figures? Anything? Nothing. I've never bothered to look it up for myself, and so I'm still as much in the dark as you.
  • Because papers think you won't understand the "science bit", all stories involving science must be dumbed down, leaving pieces without enough content to stimulate the only people who are actually going to read them - that is, the people who know a bit about science.
  • Compare this with the book review section, in any newspaper. The more obscure references to Russian novelists and French philosophers you can bang in, the better writer everyone thinks you are. Nobody dumbs down the finance pages.
  • Statistics are what causes the most fear for reporters, and so they are usually just edited out, with interesting consequences. Because science isn't about something being true or not true: that's a humanities graduate parody. It's about the error bar, statistical significance, it's about how reliable and valid the experiment was, it's about coming to a verdict, about a hypothesis, on the back of lots of bits of evidence.
  • science journalists somehow don't understand the difference between the evidence and the hypothesis. The Times's health editor Nigel Hawkes recently covered an experiment which showed that having younger siblings was associated with a lower incidence of multiple sclerosis. MS is caused by the immune system turning on the body. "This is more likely to happen if a child at a key stage of development is not exposed to infections from younger siblings, says the study." That's what Hawkes said. Wrong! That's the "Hygiene Hypothesis", that's not what the study showed: the study just found that having younger siblings seemed to be somewhat protective against MS: it didn't say, couldn't say, what the mechanism was, like whether it happened through greater exposure to infections. He confused evidence with hypothesis (www.badscience.net/?p=112), and he is a "science communicator".
  • how do the media work around their inability to deliver scientific evidence? They use authority figures, the very antithesis of what science is about, as if they were priests, or politicians, or parent figures. "Scientists today said ... scientists revealed ... scientists warned." And if they want balance, you'll get two scientists disagreeing, although with no explanation of why (an approach at its most dangerous with the myth that scientists were "divided" over the safety of MMR). One scientist will "reveal" something, and then another will "challenge" it
  • The danger of authority figure coverage, in the absence of real evidence, is that it leaves the field wide open for questionable authority figures to waltz in. Gillian McKeith, Andrew Wakefield, Kevin Warwick and the rest can all get a whole lot further, in an environment where their authority is taken as read, because their reasoning and evidence is rarely publicly examined.
  • it also reinforces the humanities graduate journalists' parody of science, for which we now have all the ingredients: science is about groundless, incomprehensible, didactic truth statements from scientists, who themselves are socially powerful, arbitrary, unelected authority figures. They are detached from reality: they do work that is either wacky, or dangerous, but either way, everything in science is tenuous, contradictory and, most ridiculously, "hard to understand".
  • This misrepresentation of science is a direct descendant of the reaction, in the Romantic movement, against the birth of science and empiricism more than 200 years ago; it's exactly the same paranoid fantasy as Mary Shelley's Frankenstein, only not as well written. We say descendant, but of course, the humanities haven't really moved forward at all, except to invent cultural relativism, which exists largely as a pooh-pooh reaction against science. And humanities graduates in the media, who suspect themselves to be intellectuals, desperately need to reinforce the idea that science is nonsense: because they've denied themselves access to the most significant developments in the history of western thought for 200 years, and secretly, deep down, they're angry with themselves over that.
  • had a good spirited row with an eminent science journalist, who kept telling me that scientists needed to face up to the fact that they had to get better at communicating to a lay audience. She is a humanities graduate. "Since you describe yourself as a science communicator," I would invariably say, to the sound of derisory laughter: "isn't that your job?" But no, for there is a popular and grand idea about, that scientific ignorance is a useful tool: if even they can understand it, they think to themselves, the reader will. What kind of a communicator does that make you?
  • Science is done by scientists, who write it up. Then a press release is written by a non-scientist, who runs it by their non-scientist boss, who then sends it to journalists without a science education who try to convey difficult new ideas to an audience of either lay people, or more likely - since they'll be the ones interested in reading the stuff - people who know their way around a t-test a lot better than any of these intermediaries. Finally, it's edited by a whole team of people who don't understand it. You can be sure that at least one person in any given "science communication" chain is just juggling words about on a page, without having the first clue what they mean, pretending they've got a proper job, their pens all lined up neatly on the desk.
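To make the relative-risk point concrete, here is a minimal worked example (the numbers are invented for illustration):

    # Same invented data, two framings of the same risk increase.
    baseline_risk = 2 / 10_000   # assumed baseline: 2 cases per 10,000 people
    exposed_risk  = 3 / 10_000   # assumed risk in the exposed group

    relative_increase = (exposed_risk - baseline_risk) / baseline_risk
    absolute_increase = exposed_risk - baseline_risk

    print(f"relative risk increase: {relative_increase:.0%}")   # 50% -- headline-ready
    print(f"absolute risk increase: {absolute_increase:.4%}")   # 0.0100% -- the fuller picture
    # "Risk up 50%" and "one extra case per 10,000 people" describe the
    # identical finding; only the first makes a scare story.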
Weiye Loh

Roger Pielke Jr.'s Blog: Blind Spots in Australian Flood Policies - 0 views

  • better management of flood risks in Australia will depend upon better data on flood risk. However, collecting such data has proven problematic
  • As many Queenslanders affected by January’s floods are realising, riverine flood damage is commonly excluded from household insurance policies. And this is unlikely to change until councils – especially in Queensland – stop dragging their feet and actively assist in developing comprehensive data insurance companies can use.
  • Why? Because there is often little available information that would allow an insurer to adequately price this flood risk. Without this, there is little economic incentive for insurers to accept this risk. It would be irresponsible for insurers to cover riverine flood without quantifying and pricing the risk accordingly.
  • ...8 more annotations...
  • The first step in establishing risk-adjusted premiums is to know the likelihood of the depth of flooding at each address. This information has to be address-specific because the severity of flooding can vary widely over small distances, for example, from one side of a road to the other.
  • A litany of reasons is given for withholding data. At times it seems that refusal stems from a view that insurance is innately evil. This is ironic in view of the gratuitous advice sometimes offered by politicians and commentators in the aftermath of extreme events, exhorting insurers to pay claims even when no legal liability exists and riverine flood is explicitly excluded from policies.
  • Risk Frontiers is involved in jointly developing the National Flood Information Database (NFID) for the Insurance Council of Australia with Willis Re, a reinsurance broking intermediary. NFID is a five year project aiming to integrate flood information from all city councils in a consistent insurance-relevant form. The aim of NFID is to help insurers understand and quantify their risk. Unfortunately, obtaining the base data for NFID from some local councils is difficult and sometimes impossible despite the support of all state governments for the development of NFID. Councils have an obligation to assess their flood risk and to establish rules for safe land development. However, many are antipathetic to the idea of insurance. Some states and councils have been very supportive – in New South Wales and Victoria, particularly. Some states have a central repository – a library of all flood studies and digital terrain models (digital elevation data). Council reluctance to release data is most prevalent in Queensland, where, unfortunately, no central repository exists.
  • Second, models of flood risk are sometimes misused:
  • many councils only undertake flood modelling in order to create a single design flood level, usually the so-called one-in-100 year flood. (For reasons given later, a better term is the flood with a 1% annual likelihood of being exceeded.)
  • Inundation maps showing the extent of the flood with a 1% annual likelihood of exceedance are increasingly common on council websites, even in Queensland. Unfortunately these maps say little about the depth of water at an address or, importantly, how depth varies for less probable floods. Insurance claims usually begin when the ground is flooded and increase rapidly as water rises above the floor level. At Windsor in NSW, for example, the difference in the water depth between the flood with a 1% annual chance of exceedance and the maximum possible flood is nine metres. In other catchments this difference may be as small as ten centimetres. The risk of damage is quite different in both cases and an insurer needs this information if they are to provide coverage in these areas.
  • The ‘one-in-100 year flood’ term is misleading. To many it is something that happens regularly once every 100 years — with the reliability of a bus timetable. It is still possible, though unlikely, that a flood of similar magnitude or even greater flood could happen twice in one year or three times in successive years.
  • The calculations underpinning this are not straightforward, but the probability that an address exposed to a 1-in-100 year flood will experience such an event or greater over the lifetime of the house – 50 years, say – is around 40%. Over the lifetime of a typical home mortgage – 25 years – the probability of occurrence is 22%. These are not good odds. (The arithmetic is sketched after this list.)
  •  
    John McAneney of Risk Frontiers at Macquarie University in Sydney identifies some opportunities for better flood policies in Australia.
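The exceedance arithmetic behind those percentages is the standard one, under the usual simplifying assumption that each year's flood risk is independent; a minimal sketch:

    # If a flood has a 1% chance of being equalled or exceeded in any one
    # year, the chance of at least one such flood in n years is
    # 1 - (1 - 0.01)**n, assuming independent years.
    p_annual = 0.01

    for years in (25, 50):
        p_at_least_once = 1 - (1 - p_annual) ** years
        print(f"{years} years: {p_at_least_once:.0%}")
    # 25 years: 22%   (a typical home mortgage)
    # 50 years: 39%   (the lifetime of a house; the article rounds to ~40%)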
Weiye Loh

Freakonomics » The Economics of Happiness, Part 1: Reassessing the Easterlin ... - 0 views

  • Arguably the most important finding from the emerging economics of happiness has been the Easterlin Paradox. What is this paradox? It is the juxtaposition of three observations: 1) Within a society, rich people tend to be much happier than poor people. 2) But, rich societies tend not to be happier than poor societies (or not by much). 3) As countries get richer, they do not get happier.
  • Easterlin offered an appealing resolution to his paradox, arguing that only relative income matters to happiness. Other explanations suggest a “hedonic treadmill,” in which we must keep consuming more just to stay at the same level of happiness.
  • We have re-analyzed all of the relevant post-war data, and also analyzed the particularly interesting new data from the Gallup World Poll. Last Thursday we presented our research at the latest Brookings Panel on Economic Activity, and we have arrived at a rather surprising conclusion: There is no Easterlin Paradox. The facts about income and happiness turn out to be much simpler than first realized: 1) Rich people are happier than poor people. 2) Richer countries are happier than poorer countries. 3) As countries get richer, they tend to get happier.
  • ...4 more annotations...
  • What explains these new findings? The key turns out to be an accumulation of data over recent decades. Thirty years ago it was difficult to make convincing international comparisons because there were few datasets comparing rich and poor countries. Instead, researchers were forced to make comparisons based on a handful of moderately-rich and very-rich countries. These data just didn’t lend themselves to strong conclusions.
  • Moreover, repeated happiness surveys around the world have allowed us to observe the evolution of G.D.P. and happiness through time — both over a longer period, and for more countries. On balance, G.D.P. and happiness have tended to move together.
  • There is a second issue here that has led to mistaken inferences: a tendency to confuse the absence of evidence for a proposition with evidence of its absence. Thus, when early researchers could not isolate a statistically reliable association between G.D.P. and happiness, they inferred that the two were unrelated, and a paradox was born. (The toy simulation after this list illustrates how easily that inference can go wrong.)
  • Our complete analysis is available here. An excellent summary is available in today’s New York Times, here, with a very cool graphic, and readers’ comments. Other commentary is available in the F.T. (here and here), and Time Magazine.
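The ‘absence of evidence’ trap described above is easy to reproduce. The toy simulation below (Python 3.10+ for statistics.correlation; the income range, slope and noise level are invented for illustration, not taken from the Gallup data) builds in a genuinely positive link between log income and happiness, then shows how often a small sample of similarly rich countries yields a zero-or-negative sample correlation, while a large sample spanning poor to rich countries almost never does:

    import random
    import statistics

    random.seed(0)

    def sample_correlation(n_countries: int, gdp_spread: float) -> float:
        # Simulate countries with a true positive link between log GDP
        # and happiness, and return the sample Pearson correlation.
        log_gdp = [random.uniform(10.0 - gdp_spread, 10.0 + gdp_spread)
                   for _ in range(n_countries)]
        happiness = [0.3 * g + random.gauss(0.0, 0.5) for g in log_gdp]
        return statistics.correlation(log_gdp, happiness)

    # A handful of similarly rich countries, like the early datasets...
    small = [sample_correlation(8, 0.5) for _ in range(1000)]
    # ...versus many countries spanning poor to rich.
    large = [sample_correlation(120, 2.0) for _ in range(1000)]

    print(sum(r <= 0 for r in small) / len(small))  # often: the true link is missed
    print(sum(r <= 0 for r in large) / len(large))  # almost never

With few, similar countries the estimate is noisy, so failing to find a reliable association says little about whether one exists.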
Weiye Loh

The messy business of cleaning up carbon policy (and how to sell it to the electorate) ... - 0 views

  • 1. Putting a price on carbon is not only about the climate. Yes, humans are affecting the climate, and reducing carbon dioxide emissions is a key commitment of this government, and indeed among the stated views of the opposition. But there are other reasons to price carbon, primarily to put Australia at the forefront of a global energy technology revolution that is already underway. In future years and decades the world is going to need vastly more energy that is secure, reliable, clean and affordable. Achieving these outcomes will require an energy technology revolution. The purpose of pricing carbon is to raise the revenues needed to invest in this future, just as we invest in health, agriculture and defence.
  • 2. A price on carbon raises revenues to invest in stimulating that energy technology revolution. Australia emits almost 400 million tonnes of carbon dioxide into the atmosphere every year. In round numbers, then, every dollar of carbon tax per tonne on those emissions would raise about A$400 million (a back-of-envelope sketch follows this list). A significant portion of the proceeds from a carbon tax should be used to invest in energy technology innovation, using today’s energy economy to build a bridge to tomorrow’s. This is exactly the strategy India has adopted with a small levy on coal, and Germany with a tax on nuclear fuel rods, with the proceeds in both instances invested in energy innovation.
  • 3. The purpose of a carbon tax is not to make energy, food, petrol or consumer goods appreciably more expensive. Just as scientists are in broad agreement that humans are affecting the global climate, economists and other experts are in broad agreement that we cannot revolutionise our energy economy through pricing mechanisms alone. Thus, we propose starting with a low carbon tax, one that has broad political support, and then committing to increasing it in a predictable manner over time. The Coalition has proposed a “direct action plan” on carbon policy that would cost A$30 billion over the next 8 years, which is the equivalent of about a $2.50 per tonne carbon tax. The question to be put to the Coalition is not whether we should be investing in a carbon policy, as we agree on that point, but how much and how it should be paid for. The Coalition’s plans leave unanswered how they would pay for their plan. A carbon tax offers a responsible and effective way to raise funds without harming the economy or jobs. In fact, to the extent that investments in energy innovation bear fruit, new markets will be opened and new jobs will be created. The Coalition’s plan is not focused on energy technology innovation. The question for the Coalition should thus be: at what level would you set a carbon tax (or what other taxes would you raise?), and how would you invest the proceeds in a manner that accelerates energy technology innovation?
  • ...1 more annotation...
  • 4. Even a low carbon tax will make some goods cost a bit more, so it is important to help those who are most affected. Our carbon tax proposal is revenue neutral in the sense that we will lower other taxes in direct proportion to the impact, however modest, of a low carbon tax. We will do this with particular attention to those who may be most directly affected by a price on carbon. In addition, some portion of the revenue raised by a carbon tax will be returned to the public. But not all: it is important to invest in tomorrow’s energy technologies today, and a carbon tax provides the mechanism for doing so.
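As a sanity check on the revenue arithmetic in point 2, here is a back-of-envelope sketch (Python; it simply assumes annual emissions stay near the 400-million-tonne figure quoted above, before any behavioural response to the tax):

    # Back-of-envelope carbon tax revenue. The emissions figure is the
    # "almost 400 million tonnes" quoted in the passage above.
    ANNUAL_EMISSIONS_TONNES = 400e6

    def annual_revenue_aud(tax_per_tonne_aud: float) -> float:
        # Revenue scales linearly: tonnes emitted times dollars per tonne.
        return ANNUAL_EMISSIONS_TONNES * tax_per_tonne_aud

    print(f"A${annual_revenue_aud(1.0) / 1e6:.0f} million per year at $1/tonne")  # A$400 million
    print(f"A${annual_revenue_aud(5.0) / 1e9:.1f} billion per year at $5/tonne")  # A$2.0 billion

In practice a tax would change behaviour and emissions would fall, so actual revenue would drift below this linear estimate over time.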