New Media Ethics 2009 course: Group items tagged Reasoning

Weiye Loh

Balderdash: The problem with Liberal Utilitarianism - 0 views

  • Sam Harris's reinvention of Utilitarianism/Consequentialism has charmed many. In my efforts to show people how pure Utilitarianism/Consequentialism fails, "liberal utilitarianism" has been thrown at me as a way to resolve its problems (in the process I have encountered people who seem never to have read anything Harris has written on the subject, since I have been challenged to show where Harris has proposed science as the foundation of our moral system, or that one can derive moral facts from facts about the world).
  • Liberal utilitarianism is not a position that one often encounters. I suspect this is because most philosophers recognise that unless one bites some big bullets, it is incoherent, being beholden to two separate moral theories, which brings many problems when they clash. It is much easier to stick to one foundation of morality.
  • utilitarians typically must claim that ‘the value of liberty ... is wholly dependent on its contribution to utility. But if that is the case’, he asks, ‘how can the “right” to liberty be absolute and indefeasible when the consequences of exercising the right will surely vary with changing social circumstances?’ (1991, p. 213). His answer is that it cannot be, unless external moral considerations are imported into pure maximizing utilitarianism to guarantee the desired Millian result. In his view, the absolute barrier that Mill erects against all forms of coercion really seems to require a non-utilitarian justification, even if ‘utilitarianism’ might somehow be defined or enlarged to subsume the requisite form of reasoning. Thus, ‘Mill is a consistent liberal’, he says, ‘whose view is inconsistent with hedonistic or preference utilitarianism’ (ibid., p. 236)...
  • ...4 more annotations...
  • From Riley's Mill on liberty:
  • ‘Mill’s defence of liberty is not utilitarian’ because it ignores the dislike, disgust and so-called ‘moral’ disapproval which others feel as a result of self-regarding conduct.
  • Why doesn’t liberal utilitarianism consider the possibility that aggregate dislike of the individual’s self-regarding conduct might outweigh the value of his liberty, and justify suppression of his conduct? As we have seen, Mill devotes considerable effort to answering this question (III.1, 10–19, IV.8–12, pp. 260–1, 267–75, 280–4). Among other things, liberty in self-regarding matters is essential to the cultivation of individual character, he says, and is not incompatible with similar cultivation by others, because they remain free to think and do as they please, having directly suffered no perceptible damage against their wishes. When all is said and done, his implicit answer is that a person’s liberty in self-regarding matters is infinitely more valuable than any satisfaction the rest of us might take at suppression of his conduct. The utility of self-regarding liberty is of a higher kind than the utility of suppression based on mere dislike (no perceptible damage to others against their wishes is implicated), in that any amount (however small) of the higher kind outweighs any quantity (however large) of the lower.
  • The problem is that if you are using (implicitly or otherwise) mathematics to sum up the expected utility of different choices, you cannot plug infinity into any expression, or you will get incoherent results, as the expression in question will no longer be well-behaved.
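A quick way to see that point: once any outcome is assigned infinite utility, expected-value comparisons degenerate, because any nonzero probability of that outcome swamps everything else. A minimal sketch in Python (all probabilities and utilities invented for illustration):

```python
# Minimal sketch: infinite utilities break expected-value comparisons.
# All probabilities and utilities below are invented for illustration.
inf = float("inf")

def expected_utility(outcomes):
    """Sum of probability * utility over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Suppressing self-regarding conduct: a large but finite utility.
suppress = expected_utility([(1.0, 1_000_000)])

# Liberty valued infinitely: any nonzero chance of it dominates.
liberty_a = expected_utility([(0.99, inf), (0.01, 0)])
liberty_b = expected_utility([(0.01, inf), (0.99, 0)])

print(suppress, liberty_a, liberty_b)  # 1000000.0 inf inf
print(liberty_a > liberty_b)           # False: a 99% and a 1% chance of liberty tie
print(liberty_a - liberty_b)           # nan: the expression is no longer well-behaved
```

Once infinity enters, the ordering of options collapses: the calculus can no longer distinguish a near-certain protection of liberty from a near-certain loss of it.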
Weiye Loh

Did Mark Zuckerberg Deserve to Be Named Person of the Year? No - 0 views

  • First, Time carried out a reader poll, in which individuals got the chance to vote and rate their favorite nominees. Zuckerberg ended up in 10th place with 18,353 votes and an average rating of 52, behind renowned individuals such as Lady Gaga, Julian Assange, Jon Stewart and Stephen Colbert, Barack Obama, Steve Jobs, et cetera. On the other end of the spectrum, Julian Assange managed to grab first place with a whopping 382,026 votes and an average rating of 92. It turns out that the poll had no point or purpose at all. Time clearly did not take into account its readers’ opinion on the matter.
  • Julian Assange should have been named Person of the Year. His contribution to the world and history — whether you see it as positive or negative — has been more controversial and life-changing than those of Zuckerberg. Assange and his non-profit organization have changed the way we look at various governments around the world. Especially the U.S. government. There’s a reason why hundreds of thousands of individuals voted for Assange and not Zuckerberg.
  • even other nominees deserve the title more than Zuckerberg. For instance, Lady Gaga has become a huge influence in the music scene. She’s also done a lot of charitable work for LGBT [lesbian, gay, bisexual, and transgender] individuals and supports equality rights. Even though I’m not a fan, Apple CEO Steve Jobs has also done more than Zuckerberg. His opinion and mandate at Apple have completely revolutionized the tech industry.
  • ...1 more annotation...
  • Facebook as a company and social network deserves the title more than its CEO
Weiye Loh

Epiphenom: The evolution of dissent - 0 views

  • Genetic evolution in humans occurs in an environment shaped by culture - and culture, in turn, is shaped by genetics.
  • If religion is a virus, then perhaps the spread of religion can be understood through the lens of evolutionary theory. Perhaps cultural evolution can be modelled using the same mathematical tools applied to genetic evolution.
  • Michael Doebeli and Iaroslav Ispolatov at the University of British Columbia
  • ...6 more annotations...
  • What they set out to model was the development of religious schisms. Such schisms are a recurrent feature of religion, especially in the West. The classic example is the fracturing of Christianity that occurred after the Reformation.
  • Their model made two simple assumptions. First, that religions that are highly dominant actually induce some people to want to break away from them. When a religion becomes overcrowded, some individuals will lose their religion and take up another.
  • Second, they assume that every religion has a value to the individual that is composed of its costs and benefits. That value varies between religions, but is the same for all individuals. It's a pretty simplistic assumption, but even so they get some interesting results.
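The post gives no equations, but the two assumptions can be sketched as a toy simulation in which adherents defect toward whichever religion currently offers the best value net of crowding. The payoff function and every parameter below are invented for illustration; this is not Doebeli and Ispolatov's actual model:

```python
import random

# Toy sketch of the two assumptions: (1) an overcrowded religion pushes some
# members to defect; (2) each religion has one value shared by all individuals.
# Payoff function and parameters are invented, not the authors' model.
random.seed(1)

values = {"A": 1.0, "B": 0.8}        # intrinsic value of each religion
members = ["A"] * 950 + ["B"] * 50   # A starts out highly dominant
CROWDING = 0.001                     # cost per co-adherent

def payoff(religion, counts):
    return values[religion] - CROWDING * counts[religion]

for step in range(50):
    counts = {r: members.count(r) for r in values}
    best = max(values, key=lambda r: payoff(r, counts))
    for i, r in enumerate(members):
        if r != best and random.random() < 0.05:  # a few members reconsider
            members[i] = best

print({r: members.count(r) for r in values})  # roughly {'A': 600, 'B': 400}
# The dominant religion bleeds members until payoffs equalize: a schism-like split.
```

Even this crude version reproduces the qualitative result: dominance is self-limiting, so a mixed population splits without any geographical separation.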
  • Now, this is a very simple model, and so the results shouldn't be over-interpreted. But it's a fascinating result for a couple of reasons. It shows how new religious 'species' can come into being in a mixed population - no need for geographical separation. That's such a common feature of religion - from the Judaeo-Christian religions to examples from Papua New Guinea - that it's worth trying to understand what drives it. What's more, this is the first time that anyone has attempted to model the transmission of religious ideas in evolutionary terms. It's a first step, to be sure, but just showing that it can be done is a significant achievement.
  • The value comes because it shifts the focus from thinking about how culture benefits the host, and instead asks how the cultural trait is adaptive in its own right. What is important is not whether or not the human host benefits from the trait, but rather whether the trait can successfully transmit and reproduce itself (see Bible Belter for an example of how this could work).
  • Even more intriguing are the implications for understanding cultural-genetic co-evolution. After all, we know that viruses and their hosts co-evolve in a kind of arms race - sometimes ending up in a relationship that benefits both.
Weiye Loh

takchek (读书 ): How Nature selects manuscripts for publication - 0 views

  • the explanation's pretty weak on the statistics given that it is a scientific journal. Drug Monkey and writedit have more commentary on this particular editorial.
  • Good science, bad science, and whether it will lead to publication or not all rests on the decision of the editor. The gatekeeper.
  • do you know that Watson and Crick's landmark 1953 paper on the structure of DNA in the journal was not sent out for peer review at all? The reasons, as stated by Nature's Emeritus Editor John Maddox, were: First, the Crick and Watson paper could not have been refereed: its correctness is self-evident. No referee working in the field (Linus Pauling?) could have kept his mouth shut once he saw the structure. Second, it would have been entirely consistent with my predecessor L. J. F. Brimble's way of working that Bragg's commendation should have counted as a referee's approval.
  • ...1 more annotation...
  • The whole business of scientific publishing is murky and sometimes who you know counts more than what you know in order to get your foot into the 'club'. Even Maddox alluded to the existence of such an 'exclusive' club: Brimble, who used to "take luncheon" at the Athenaeum in London most days, preferred to carry a bundle of manuscripts with him in the pocket of his greatcoat and pass them round among his chums "taking coffee" in the drawing-room after lunch. I set up a more systematic way of doing the job when I became editor in April 1966.
  •  
    How Nature selects manuscripts for publication: Nature actually devoted an editorial (doi:10.1038/463850a) to explaining its publication process.
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • ...24 more annotations...
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.”Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients who had the syndrome with a group of 650 (matched for sex and age) who didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
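Fisher's procedure is easy to reproduce. The simulation below draws both "fields" from the same distribution, so the null hypothesis is true by construction, and then counts how often a permutation-test P value still falls below .05. The crop numbers are invented; the point is the roughly 5 percent fluke rate:

```python
import random, statistics

# Null experiments: neither group gets a real treatment, so every
# "statistically significant" result here is, by construction, a fluke.
random.seed(0)

def p_value(a, b, n_perm=2000):
    """Permutation test: how often does shuffled data beat the observed gap?"""
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        gap = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if gap >= observed:
            hits += 1
    return hits / n_perm

flukes = 0
trials = 200
for _ in range(trials):
    control = [random.gauss(100, 15) for _ in range(20)]  # unfertilized yields
    treated = [random.gauss(100, 15) for _ in range(20)]  # "fertilized", no real effect
    if p_value(control, treated) < 0.05:
        flukes += 1

print(f"{flukes}/{trials} null experiments reached 'statistical significance'")
# Expect on the order of 10 out of 200, i.e. about 5 percent, with zero real effect.
```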
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretations?
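The comment above can be answered with arithmetic. P < .05 bounds the chance of the data given no effect; it says nothing directly about the chance of an effect given the data, and filling that gap requires a base rate. With invented but conventional numbers (10 percent of tested hypotheses true, 80 percent power), a significant result leaves the hypothesis far short of "95 percent certain":

```python
# Transposed conditional, worked through with Bayes' rule.
# alpha, power and the prior are illustrative assumptions, not data.
alpha = 0.05   # P(significant | no real effect), Fisher's threshold
power = 0.80   # P(significant | real effect)
prior = 0.10   # assumed fraction of tested hypotheses that are actually true

true_positives  = power * prior          # 0.08
false_positives = alpha * (1 - prior)    # 0.045

posterior = true_positives / (true_positives + false_positives)
print(f"P(real effect | significant) = {posterior:.2f}")  # ~0.64, not 0.95
```

So yes: much of the trouble lies in interpretation, and the slide from "significant" to "real" is exactly the transposed conditional the article describes.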
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
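That claim is easy to check with a normal approximation for a two-sample comparison: a difference of one hundredth of a standard deviation, clinically negligible, sails past the 1.96 significance cutoff once each group has a million subjects. The effect size and sample sizes below are invented:

```python
import math

def z_two_sample(effect_sd, n_per_group):
    """z statistic for a two-sample mean difference measured in SD units."""
    return effect_sd / math.sqrt(2.0 / n_per_group)

for n in (100, 10_000, 1_000_000):
    z = z_two_sample(0.01, n)  # a 0.01 SD effect: meaningless in practice
    print(f"n={n:>9,}  z={z:5.2f}  'significant'={z > 1.96}")
# n=      100  z= 0.07  'significant'=False
# n=   10,000  z= 0.71  'significant'=False
# n=1,000,000  z= 7.07  'significant'=True
```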
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes. Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
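How rapidly the chances rise is elementary: with k independent tests at the .05 level, the probability of at least one false positive is 1 - (0.95)^k. The snippet also shows the standard Bonferroni remedy of testing each hypothesis at alpha/k:

```python
# Family-wise error rate for k independent tests at alpha = .05,
# with and without a Bonferroni correction (alpha / k per test).
alpha = 0.05
for k in (1, 5, 20, 100):
    uncorrected = 1 - (1 - alpha) ** k
    corrected   = 1 - (1 - alpha / k) ** k
    print(f"k={k:>3}  P(at least one fluke) = {uncorrected:.2f}  Bonferroni = {corrected:.3f}")
# Twenty tests already carry a 64 percent chance of one spurious "discovery".
```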
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
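The penny-flip analogy is worth quantifying. Any single run of 10 heads has probability (1/2)^10, about one in a thousand, but across thousands of concurrent trials such runs become the expected case. The trial count below is an invented figure:

```python
# Chance that at least one of many trials is badly unbalanced, using
# "10 heads in a row" as a stand-in; n_trials is an invented figure.
p_unbalanced = 0.5 ** 10                    # ~0.001 for any single trial
n_trials = 3000
p_any = 1 - (1 - p_unbalanced) ** n_trials
print(f"P(at least one badly randomized trial) = {p_any:.2f}")  # ~0.95
```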
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
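That diagnostic case can be made concrete. With a disease at 1 percent prevalence and a test with 99 percent sensitivity and 95 percent specificity (invented, textbook-style values), Bayes' theorem says a positive result still leaves the patient more likely healthy than ill:

```python
# Bayes' theorem applied to a diagnostic test.
# Prevalence, sensitivity and specificity are invented textbook-style values.
prevalence  = 0.01   # prior: P(disease)
sensitivity = 0.99   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {posterior:.2f}")  # ~0.17
```

The error rates alone (99 percent, 95 percent) suggest a near-certain diagnosis; only the prior reveals that roughly five in six positives are false alarms.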
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
Weiye Loh

Johann Hari: The Pope, the Prophet, and the religious support for evil - Johann Hari, C... - 0 views

  • What can make tens of millions of people – who are in their daily lives peaceful and compassionate and caring – suddenly want to physically dismember a man for drawing a cartoon, or make excuses for an international criminal conspiracy to protect child-rapists? Not reason. Not evidence. No. But it can happen when people choose their polar opposite – religion.
  • people can begin to behave in bizarre ways when they decide it is a good thing to abandon any commitment to fact and instead act on faith. It has led some to regard people accused of the attempted murders of the Mohamed cartoonists as victims, and to demand "respect" for the Pope, when he should be in a police station being quizzed about his role in covering up and thereby enabling the rape of children.
  • One otherwise liberal newspaper ran an article saying that since the cartoonists had engaged in an "aggressive act" and shown "prejudice... against religion per se", so it stated menacingly that no doubt "someone else is out there waiting for an opportunity to strike again".
  • ...3 more annotations...
  • [These points] – if religion wasn't involved – would be so obvious it would seem ludicrous to have to say them out loud. Drawing a cartoon is not an act of aggression. Trying to kill somebody with an axe is. There is no moral equivalence between peacefully expressing your disagreement with an idea – any idea – and trying to kill somebody for it. Yet we have to say this because we have allowed religious people to claim their ideas belong to a different, exalted category, and it is abusive or violent merely to verbally question them. Nobody says I should "respect" conservatism or communism and keep my opposition to them to myself – but that's exactly what is routinely said about Islam or Christianity or Buddhism. What's the difference?
  • By 1962, it was becoming clear to the Vatican that a significant number of its priests were raping children. Rather than root it out, they issued a secret order called "Crimen Sollicitationis" ordering bishops to swear the victims to secrecy and move the offending priest on to another parish. This of course meant they raped more children there, and on and on, in parish after parish.
  • when Ratzinger was Archbishop of Munich in the 1980s, one of his paedophile priests was "reassigned" in this way. He claims he didn't know. Yet a few years later he was put in charge of the Vatican's response to this kind of abuse and demanded every case had to be referred directly to him for 20 years. What happened on his watch, with every case going to his desk? Precisely this pattern, again and again. The BBC's Panorama studied one of many such cases. Father Tarcisio Spricigo was first accused of child abuse in 1991, in Brazil. He was moved by the Vatican four times, wrecking the lives of children at every stop. He was only caught in 2005 by the police, before he could be moved on once more.
  •  
    This enforced 'respect' is a creeping vine: it soon extends from ideas to institutions
Weiye Loh

Let's make science metrics more scientific : Article : Nature - 0 views

  • Measuring and assessing academic performance is now a fact of scientific life.
  • Yet current systems of measurement are inadequate. Widely used metrics, from the newly fashionable Hirsch index to the 50-year-old citation index, are of limited use.
  • Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
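For concreteness, the Hirsch index mentioned above is defined simply: a researcher has index h if h of their papers have at least h citations each. A minimal implementation, with invented citation counts:

```python
def h_index(citations):
    """Hirsch index: largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    while h < len(ranked) and ranked[h] >= h + 1:
        h += 1
    return h

print(h_index([42, 18, 12, 7, 5, 5, 3, 1, 0]))  # 5: five papers cited >= 5 times
```

The definition's crispness is also its weakness: nothing in it sees mentoring, blogging, software, or any of the other activities the article lists.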
  • ...15 more annotations...
  • narrow or biased measures of scientific achievement can lead to narrow and biased science.
  • Global demand for, and interest in, metrics should galvanize stakeholders — national funding agencies, scientific research organizations and publishing houses — to combine forces. They can set an agenda and foster research that establishes sound scientific metrics: grounded in theory, built with high-quality data and developed by a community with strong incentives to use them.
  • Scientists are often reticent to see themselves or their institutions labelled, categorized or ranked. Although happy to tag specimens as one species or another, many researchers do not like to see themselves as specimens under a microscope — they feel that their work is too complex to be evaluated in such simplistic terms. Some argue that science is unpredictable, and that any metric used to prioritize research money risks missing out on an important discovery from left field.
    • Weiye Loh
       
      It is ironic that while scientists feel that their work is too complex to be evaluated in simplistic terms or metrics, they nevertheless feel OK evaluating the world in simplistic terms.
  • It is true that good metrics are difficult to develop, but this is not a reason to abandon them. Rather it should be a spur to basing their development in sound science. If we do not press harder for better metrics, we risk making poor funding decisions or sidelining good scientists.
  • Metrics are data driven, so developing a reliable, joined-up infrastructure is a necessary first step.
  • We need a concerted international effort to combine, augment and institutionalize these databases within a cohesive infrastructure.
  • On an international level, the issue of a unique researcher identification system is one that needs urgent attention. There are various efforts under way in the open-source and publishing communities to create unique researcher identifiers using the same principles as the Digital Object Identifier (DOI) protocol, which has become the international standard for identifying unique documents. The ORCID (Open Researcher and Contributor ID) project, for example, was launched in December 2009 by parties including Thomson Reuters and Nature Publishing Group. The engagement of international funding agencies would help to push this movement towards an international standard.
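As an aside on how such identifiers work mechanically: an ORCID iD is sixteen characters whose final character is a checksum (ISO 7064 MOD 11-2, per ORCID's documentation), so mistyped identifiers can be rejected automatically. A sketch, checked against a widely cited sample iD:

```python
def orcid_check_digit(base15):
    """ISO 7064 MOD 11-2 check character for the first 15 digits of an ORCID iD."""
    total = 0
    for ch in base15:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

sample = "0000-0002-1825-0097"  # widely used example ORCID iD
digits = sample.replace("-", "")
print(orcid_check_digit(digits[:15]) == digits[15])  # True: checksum verifies
```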
  • if all funding agencies used a universal template for reporting scientific achievements, it could improve data quality and reduce the burden on investigators.
    • Weiye Loh
       
      So in future, we'll only have one robust metric to evaluate scientific contribution? Hmm...
  • Importantly, data collected for use in metrics must be open to the scientific community, so that metric calculations can be reproduced. This also allows the data to be efficiently repurposed.
  • As well as building an open and consistent data infrastructure, there is the added challenge of deciding what data to collect and how to use them. This is not trivial. Knowledge creation is a complex process, so perhaps alternative measures of creativity and productivity should be included in scientific metrics, such as the filing of patents, the creation of prototypes and even the production of YouTube videos.
  • Perhaps publications in these different media should be weighted differently in different fields.
  • There needs to be a greater focus on what these data mean, and how they can be best interpreted.
  • This requires the input of social scientists, rather than just those more traditionally involved in data capture, such as computer scientists.
  • An international data platform supported by funding agencies could include a virtual 'collaboratory', in which ideas and potential solutions can be posited and discussed. This would bring social scientists together with working natural scientists to develop metrics and test their validity through wikis, blogs and discussion groups, thus building a community of practice. Such a discussion should be open to all ideas and theories and not restricted to traditional bibliometric approaches.
  • Far-sighted action can ensure that metrics goes beyond identifying 'star' researchers, nations or ideas, to capturing the essence of what it means to be a good scientist.
  •  
    Let's make science metrics more scientific, by Julia Lane. Abstract: To capture the essence of good science, stakeholders must combine forces to create an open, sound and consistent system for measuring all the activities that make up academic productivity, says Julia Lane.
Weiye Loh

Morality, with limits | Russell Blackford | Comment is free | guardian.co.uk - 0 views

  • What can Darwin teach us about morality? At least to some extent, we are a species with an evolved psychology. Like other animals, we have inherited behavioural tendencies from our ancestors, since these were adaptive for them in the sense that they tended to lead to reproductive success in past environments.
  • But what follows from this?
  • we are not evolution's slaves. All other things being equal, we should act in accordance with the desires that we actually have
  • ...6 more annotations...
  • Generally speaking, it is rational for us to act in ways that accord with our reflectively-endorsed desires or values, rather than in ways that maximise our reproductive chances or in whatever ways we tend to respond without thinking.
  • Admittedly, our evolved nature may affect this, in the sense that any workable system of moral norms must be practical for the needs of beings like us, who are, it seems, naturally inclined to be neither angelically selfless nor utterly uncaring about others.
  • our evolved psychology may impose limits on what real-world moral systems can realistically demand of human beings, perhaps defeating some of the more extreme ambitions of both conservatives and liberals. It may not be realistic to expect each other to be either as self-denying as moral conservatives seem to want or as altruistic as some liberals seem to want.
  • realistic moral systems will allow considerable scope for individuals to act in accordance with whatever they actually value.
  • A rational and realistic approach to morality, based on our actual, reflectively-endorsed desires and values, and how they are best realised in current circumstances, might deflate some expectations. It might also diverge from familiar moral teachings, handed down through religious and cultural traditions. Much that is found in traditional Christian morality
  • But realising all this need not be shocking. If it leads to some deflation of extreme political expectations and to some reason-based correction of traditional morality, we should welcome it.
  •  
    Morality, with limits: We can't expect people to be either as self-denying as conservatives or as altruistic as liberals seem to want
Weiye Loh

Times Higher Education - Unconventional thinkers or recklessly dangerous minds? - 0 views

  • The origin of Aids denialism lies with one man. Peter Duesberg has spent the whole of his academic career at the University of California, Berkeley. In the 1970s he performed groundbreaking work that helped show how mutated genes cause cancer, an insight that earned him a well-deserved international reputation.
  • in the early 1980s, something changed. Duesberg attempted to refute his own theories, claiming that it was not mutated genes but rather environmental toxins that are cancer's true cause. He dismissed the studies of other researchers who had furthered his original work. Then, in 1987, he published a paper that extended his new train of thought to Aids.
  • Initially many scientists were open to Duesberg's ideas. But as evidence linking HIV to Aids mounted - crucially the observation that ARVs brought Aids sufferers who were on the brink of death back to life - the vast majority concluded that the debate was over. Nonetheless, Duesberg persisted with his arguments, and in doing so attracted a cabal of supporters
  • ...12 more annotations...
  • In 1999, denialism secured its highest-profile advocate: Thabo Mbeki, who was then president of South Africa. Having studied denialist literature, Mbeki decided that the consensus on Aids sounded too much like a "biblical absolute truth" that couldn't be questioned. The following year he set up a panel of advisers, nearly half of whom were Aids denialists, including Duesberg. The resultant health policies cut funding for clinics distributing ARVs, withheld donor medication and blocked international aid grants. Meanwhile, Mbeki's health minister, Manto Tshabalala-Msimang, promoted the use of alternative Aids remedies, such as beetroot and garlic.
  • In 2007, Nicoli Nattrass, an economist and director of the Aids and Society Research Unit at the University of Cape Town, estimated that, between 1999 and 2007, Mbeki's Aids denialist policies led to more than 340,000 premature deaths. Later, scientists Max Essex, Pride Chigwedere and other colleagues at the Harvard School of Public Health arrived at a similar figure.
  • "I don't think it's hyperbole to say the (Mbeki regime's) Aids policies do not fall short of a crime against humanity," says Kalichman. "The science behind these medications was irrefutable, and yet they chose to buy into pseudoscience and withhold life-prolonging, if not life-saving, medications from the population. I just don't think there's any question that it should be looked into and investigated."
  • In fairness, there was a reason to have faint doubts about HIV treatment in the early days of Mbeki's rule.
  • some individual cases had raised questions about their reliability on mass rollout. In 2002, for example, Sarah Hlalele, a South African HIV patient and activist from a settlement background, died from "lactic acidosis", a side-effect of her drugs combination. Today doctors know enough about mixing ARVs not to make the same mistake, but at the time her death terrified the medical community.
  • any trial would be futile because of the uncertainties over ARVs that existed during Mbeki's tenure and the fact that others in Mbeki's government went along with his views (although they have since renounced them). "Mbeki was wrong, but propositions we had established then weren't as incontestably established as they are now ... So I think these calls (for genocide charges or criminal trials) are misguided, and I think they're a sideshow, and I don't support them."
  • Regardless of the culpability of politicians, the question remains whether scientists themselves should be allowed to promote views that go wildly against the mainstream consensus. The history of science is littered with offbeat ideas that were ridiculed by the scientific communities of the time. Most of these ideas missed the textbooks and went straight into the waste-paper basket, but a few - continental drift, the germ basis of disease or the Earth's orbit around the Sun, for instance - ultimately proved to be worth more than the paper they were written on. In science, many would argue, freedom of expression is too important to throw away.
  • Such an issue is engulfing the Elsevier journal Medical Hypotheses. Last year the journal, which is not peer reviewed, published a paper by Duesberg and others claiming that the South African Aids death-toll estimates were inflated, while reiterating the argument that there is "no proof that HIV causes Aids". That prompted several Aids scientists to complain to Elsevier, which responded by retracting the paper and asking the journal's editor, Bruce Charlton, to implement a system of peer review. Having refused to change the editorial policy, Charlton faces the sack
  • There are people who would like the journal to keep its current format and continue accepting controversial papers, but for Aids scientists, Duesberg's paper was a step too far. Although it was deleted from both the journal's website and the Medline database, its existence elsewhere on the internet drove Chigwedere and Essex to publish a peer-reviewed rebuttal earlier this year in AIDS and Behavior, lest any readers be "hoodwinked" into thinking there was genuine debate about the causes of Aids.
  • Duesberg believes he is being "censored", although he has found other outlets. In 1991, he helped form "The Group for the Scientific Reappraisal of the HIV/Aids Hypothesis" - now called Rethinking Aids, or simply The Group - to publicise denialist information. Backed by his Berkeley credentials, he regularly promotes his views in media articles and films. Meanwhile, his closest collaborator, David Rasnick, tells "anyone who asks" that "HIV drugs do more harm than good".
  • "Is academic freedom such a precious concept that scientists can hide behind it while betraying the public so blatantly?" asked John Moore, an Aids scientist at Cornell University, on a South African health news website last year. Moore suggested that universities could put in place a "post-tenure review" system to ensure that their researchers act within accepted bounds of scientific practice. "When the facts are so solidly against views that kill people, there must be a price to pay," he added.
  • Now it seems Duesberg may have to pay that price since it emerged last month that his withdrawn paper has led to an investigation at Berkeley for misconduct. Yet for many in the field, chasing fellow scientists comes second to dealing with the Aids pandemic.
  •  
    6 May 2010. Aids denialism is estimated to have killed many thousands. Jon Cartwright asks if scientists should be held accountable, while overleaf Bruce Charlton defends his decision to publish the work of an Aids sceptic, which sparked a row that has led to his being sacked and his journal abandoning its raison d'etre: presenting controversial ideas for scientific debate
Weiye Loh

Science Warriors' Ego Trips - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • By Carlin Romano Standing up for science excites some intellectuals the way beautiful actresses arouse Warren Beatty, or career liberals boil the blood of Glenn Beck and Rush Limbaugh. It's visceral.
  • A brave champion of beleaguered science in the modern age of pseudoscience, this Ayn Rand protagonist sarcastically derides the benighted irrationalists and glows with a self-anointed superiority. Who wouldn't want to feel that sense of power and rightness?
  • You hear the voice regularly—along with far more sensible stuff—in the latest of a now common genre of science patriotism, Nonsense on Stilts: How to Tell Science From Bunk (University of Chicago Press), by Massimo Pigliucci, a philosophy professor at the City University of New York.
  • ...24 more annotations...
  • it mixes eminent common sense and frequent good reporting with a cocksure hubris utterly inappropriate to the practice it apotheosizes.
  • According to Pigliucci, both Freudian psychoanalysis and Marxist theory of history "are too broad, too flexible with regard to observations, to actually tell us anything interesting." (That's right—not one "interesting" thing.) The idea of intelligent design in biology "has made no progress since its last serious articulation by natural theologian William Paley in 1802," and the empirical evidence for evolution is like that for "an open-and-shut murder case."
  • Pigliucci offers more hero sandwiches spiced with derision and certainty. Media coverage of science is "characterized by allegedly serious journalists who behave like comedians." Commenting on the highly publicized Dover, Pa., court case in which U.S. District Judge John E. Jones III ruled that intelligent-design theory is not science, Pigliucci labels the need for that judgment a "bizarre" consequence of the local school board's "inane" resolution. Noting the complaint of intelligent-design advocate William Buckingham that an approved science textbook didn't give creationism a fair shake, Pigliucci writes, "This is like complaining that a textbook in astronomy is too focused on the Copernican theory of the structure of the solar system and unfairly neglects the possibility that the Flying Spaghetti Monster is really pulling each planet's strings, unseen by the deluded scientists."
  • Or is it possible that the alternate view unfairly neglected could be more like that of Harvard scientist Owen Gingerich, who contends in God's Universe (Harvard University Press, 2006) that it is partly statistical arguments—the extraordinary unlikelihood eons ago of the physical conditions necessary for self-conscious life—that support his belief in a universe "congenially designed for the existence of intelligent, self-reflective life"?
  • Even if we agree that capital "I" and "D" intelligent-design of the scriptural sort—what Gingerich himself calls "primitive scriptural literalism"—is not scientifically credible, does that make Gingerich's assertion, "I believe in intelligent design, lowercase i and lowercase d," equivalent to Flying-Spaghetti-Monsterism? Tone matters. And sarcasm is not science.
  • The problem with polemicists like Pigliucci is that a chasm has opened up between two groups that might loosely be distinguished as "philosophers of science" and "science warriors."
  • Philosophers of science, often operating under the aegis of Thomas Kuhn, recognize that science is a diverse, social enterprise that has changed over time, developed different methodologies in different subsciences, and often advanced by taking putative pseudoscience seriously, as in debunking cold fusion
  • The science warriors, by contrast, often write as if our science of the moment is isomorphic with knowledge of an objective world-in-itself—Kant be damned!—and any form of inquiry that doesn't fit the writer's criteria of proper science must be banished as "bunk." Pigliucci, typically, hasn't much sympathy for radical philosophies of science. He calls the work of Paul Feyerabend "lunacy," deems Bruno Latour "a fool," and observes that "the great pronouncements of feminist science have fallen as flat as the similarly empty utterances of supporters of intelligent design."
  • It doesn't have to be this way. The noble enterprise of submitting nonscientific knowledge claims to critical scrutiny—an activity continuous with both philosophy and science—took off in an admirable way in the late 20th century when Paul Kurtz, of the University at Buffalo, established the Committee for the Scientific Investigation of Claims of the Paranormal (Csicop) in May 1976. Csicop soon after launched the marvelous journal Skeptical Inquirer
  • Although Pigliucci himself publishes in Skeptical Inquirer, his contributions there exhibit his signature smugness. For an antidote to Pigliucci's overweening scientism 'tude, it's refreshing to consult Kurtz's curtain-raising essay, "Science and the Public," in Science Under Siege (Prometheus Books, 2009, edited by Frazier)
  • Kurtz's commandment might be stated, "Don't mock or ridicule—investigate and explain." He writes: "We attempted to make it clear that we were interested in fair and impartial inquiry, that we were not dogmatic or closed-minded, and that skepticism did not imply a priori rejection of any reasonable claim. Indeed, I insisted that our skepticism was not totalistic or nihilistic about paranormal claims."
  • Kurtz combines the ethos of both critical investigator and philosopher of science. Describing modern science as a practice in which "hypotheses and theories are based upon rigorous methods of empirical investigation, experimental confirmation, and replication," he notes: "One must be prepared to overthrow an entire theoretical framework—and this has happened often in the history of science ... skeptical doubt is an integral part of the method of science, and scientists should be prepared to question received scientific doctrines and reject them in the light of new evidence."
  • Pigliucci, alas, allows his animus against the nonscientific to pull him away from sensitive distinctions among various sciences to sloppy arguments one didn't see in such earlier works of science patriotism as Carl Sagan's The Demon-Haunted World: Science as a Candle in the Dark (Random House, 1995). Indeed, he probably sets a world record for misuse of the word "fallacy."
  • To his credit, Pigliucci at times acknowledges the nondogmatic spine of science. He concedes that "science is characterized by a fuzzy borderline with other types of inquiry that may or may not one day become sciences." Science, he admits, "actually refers to a rather heterogeneous family of activities, not to a single and universal method." He rightly warns that some pseudoscience—for example, denial of HIV-AIDS causation—is dangerous and terrible.
  • But at other points, Pigliucci ferociously attacks opponents like the most unreflective science fanatic
  • He dismisses Feyerabend's view that "science is a religion" as simply "preposterous," even though he elsewhere admits that "methodological naturalism"—the commitment of all scientists to reject "supernatural" explanations—is itself not an empirically verifiable principle or fact, but rather an almost Kantian precondition of scientific knowledge. An article of faith, some cold-eyed Feyerabend fans might say.
  • He writes, "ID is not a scientific theory at all because there is no empirical observation that can possibly contradict it. Anything we observe in nature could, in principle, be attributed to an unspecified intelligent designer who works in mysterious ways." But earlier in the book, he correctly argues against Karl Popper that susceptibility to falsification cannot be the sole criterion of science, because science also confirms. It is, in principle, possible that an empirical observation could confirm intelligent design—i.e., that magic moment when the ultimate UFO lands with representatives of the intergalactic society that planted early life here, and we accept their evidence that they did it.
  • "As long as we do not venture to make hypotheses about who the designer is and why and how she operates," he writes, "there are no empirical constraints on the 'theory' at all. Anything goes, and therefore nothing holds, because a theory that 'explains' everything really explains nothing."
  • Here, Pigliucci again mixes up what's likely or provable with what's logically possible or rational. The creation stories of traditional religions and scriptures do, in effect, offer hypotheses, or claims, about who the designer is—e.g., see the Bible.
  • Far from explaining nothing because it explains everything, such an explanation explains a lot by explaining everything. It just doesn't explain it convincingly to a scientist with other evidentiary standards.
  • A sensible person can side with scientists on what's true, but not with Pigliucci on what's rational and possible. Pigliucci occasionally recognizes that. Late in his book, he concedes that "nonscientific claims may be true and still not qualify as science." But if that's so, and we care about truth, why exalt science to the degree he does? If there's really a heaven, and science can't (yet?) detect it, so much the worse for science.
  • Pigliucci quotes a line from Aristotle: "It is the mark of an educated mind to be able to entertain a thought without accepting it." Science warriors such as Pigliucci, or Michael Ruse in his recent clash with other philosophers in these pages, should reflect on a related modern sense of "entertain." One does not entertain a guest by mocking, deriding, and abusing the guest. Similarly, one does not entertain a thought or approach to knowledge by ridiculing it.
  • Long live Skeptical Inquirer! But can we deep-six the egomania and unearned arrogance of the science patriots? As Descartes, that immortal hero of scientists and skeptics everywhere, pointed out, true skepticism, like true charity, begins at home.
  • Carlin Romano, critic at large for The Chronicle Review, teaches philosophy and media theory at the University of Pennsylvania.
  •  
    April 25, 2010: Science Warriors' Ego Trips
Weiye Loh

It's Only A Theory: From the 2010 APA in Boston: Neuropsychology and ethics - 0 views

  • Joshua Greene from Harvard, known for his research on "neuroethics," the neurological underpinnings of ethical decision making in humans. The title of Greene's talk was "Beyond point-and-shoot morality: why cognitive neuroscience matters for ethics."
  • What Greene is interested in is finding out which factors moral judgment is sensitive to, and whether it is sensitive to the relevant factors. He presented his dual process theory of morality. In this respect, he proposed an analogy with a camera. Cameras have automatic (point and shoot) settings as well as manual controls. The first mode is good enough for most purposes; the second allows the user to fine-tune the settings more carefully. The two modes allow for a nice combination of efficiency and flexibility.
  • The idea is that the human brain also has two modes, a set of efficient automatic responses and a manual mode that makes us more flexible in response to non standard situations. The non moral example is our response to potential threats. Here the amygdala is very fast and efficient at focusing on potential threats (e.g., the outline of eyes in the dark), even when there actually is no threat (it's a controlled experiment in a lab, no lurking predator around).
  • ...12 more annotations...
  • Delayed gratification illustrates the interaction between the two modes. The brain is attracted by immediate rewards, no matter what kind. However, when larger rewards are eventually going to become available, other parts of the brain come into play to override (sometimes) the immediate urge.
  • Greene's research shows that our automatic setting is "Kantian," meaning that our intuitive responses are deontological, rule driven. The manual setting, on the other hand, tends to be more utilitarian / consequentialist. Accordingly, the first mode involves emotional areas of the brain, the second one involves more cognitive areas.
  • The evidence comes from the (in)famous trolley dilemma and its many variations.
  • when people refuse to intervene in the footbridge (as opposed to the lever) version of the dilemma, they do so because of a strong emotional response, which contradicts the otherwise utilitarian calculus they make when considering the lever version.
  • psychopaths turn out to be more utilitarian than normal subjects - presumably not because consequentialism is inherently pathological, but because their emotional responses are stunted. Mood also affects the results: people exposed to comedy (to enhance mood), for instance, are more likely to say that it is okay to push the guy off the footbridge.
  • In a more recent experiment, subjects were asked to say which action carried the better consequences, which made them feel worse, and which was overall morally acceptable. The idea was to separate the cognitive, emotional and integrative aspects of moral decision making. Predictably, activity in the amygdala correlated with deontological judgment, activity in more cognitive areas was associated with utilitarianism, and different brain regions became involved in integrating the two.
  • Another recent experiment used visual vs. verbal descriptions of moral dilemmas. Turns out that more visual people tend to behave emotionally / deontologically, while more verbal people are more utilitarian.
  • studies show that interfering with moral judgment by engaging subjects with a cognitive task slows down (though it does not reverse) utilitarian judgment, but has no effect on deontological judgment. Again, this agrees with the conclusion that the former is the result of cognition, the latter of emotion.
  • Nice to know, by the way, that when experimenters controlled for "real world expectations" that people have about trolleys, or when they used more realistic scenarios than trolleys and bridges, the results don't vary. In other words, trolley thought experiments are actually informative, contrary to popular criticisms.
  • What factors affect people's decision making in moral judgment? The main one is proximity, with people feeling much stronger obligations if they are present at the event posing the dilemma, or even relatively near (a disaster happens in a nearby country), as opposed to when they are far away (a country on the other side of the world).
  • Greene's general conclusion is that neuroscience matters to ethics because it reveals the hidden mechanisms of human moral decision making. He says this is interesting to philosophers because it may lead them to question ethical theories that are implicitly or explicitly based on such judgments. But neither philosophical deontology nor consequentialism is in fact based on common moral judgments, it seems to me; they are the result of explicit analysis. (Though Greene raises the possibility that some philosophers engage in rationalizing rather than reasoning, as in Kant's famously convoluted idea that masturbation is wrong because one is using oneself as a means to an end...)
  • this is not to say that understanding moral decision making in humans isn't interesting or in fact even helpful in real life cases. An example of the latter is the common moral condemnation of incest, which is an emotional reaction that probably evolved to avoid genetically diseased offspring. It follows that science can tell us that there is nothing morally wrong in cases of incest when precautions have been taken to avoid pregnancy (and assuming psychological reactions are also accounted for). Greene puts this in terms of science helping us to transform difficult ought questions into easier ought questions.
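For readers who think in code, here is a minimal sketch of the camera analogy described above. Every detail (the cues, the numbers, the override flag) is an invented illustration of the dual-process idea, not anything taken from Greene's actual experiments or models.

```python
# Illustrative sketch of a dual-process ("camera") decision rule.
# All cues, numbers, and rules here are invented for illustration;
# they are not drawn from Greene's experiments or models.
from typing import Optional

def automatic_mode(cue: str) -> Optional[str]:
    """Fast 'point-and-shoot' setting: cheap, rule-like, emotionally driven."""
    hardwired_responses = {
        "push man off footbridge": "forbidden",  # strong emotional veto
        "eyes glinting in the dark": "threat",   # amygdala-style alarm
    }
    return hardwired_responses.get(cue)          # None if no rule fires

def manual_mode(people_harmed: int, people_saved: int) -> str:
    """Slow 'manual' setting: effortful, roughly utilitarian cost-benefit."""
    return "permitted" if people_saved > people_harmed else "forbidden"

def moral_judgment(cue: str, harmed: int, saved: int, deliberate: bool) -> str:
    fast_answer = automatic_mode(cue)
    if fast_answer is not None and not deliberate:
        return fast_answer                  # automatic response wins by default
    return manual_mode(harmed, saved)       # deliberation can override it

# Footbridge dilemma: push one man to save five.
print(moral_judgment("push man off footbridge", 1, 5, deliberate=False))  # forbidden
print(moral_judgment("push man off footbridge", 1, 5, deliberate=True))   # permitted
```

The point of the sketch is only the control flow: the automatic rule answers first and cheaply, and the deliberate mode overrides it only when engaged, which mirrors why cognitive load slows utilitarian but not deontological judgments.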
Weiye Loh

After Wakefield: Undoing a decade of damaging debate « Skepticism « Critical ... - 0 views

  • Mass vaccination completely eradicated smallpox, which had been killing one in seven children.  Public health campaigns have also eliminated diphtheria, and reduced the incidence of pertussis, tetanus, measles, rubella and mumps to near zero.
  • when vaccination rates drop, diseases can re-emerge in the population. Measles is currently endemic in the United Kingdom, after vaccination rates dropped below 80%. When diphtheria immunization dropped in Russia and Ukraine in the early 1990s, there were over 100,000 cases with 1,200 deaths.  In Nigeria in 2001, unfounded fears of the polio vaccine led to a drop in vaccinations, a re-emergence of infection, and the spread of polio to ten other countries. (The herd-immunity arithmetic behind these reversals is sketched after this list.)
  • one fear that has experienced a dramatic upsurge over the past decade or so is that vaccines cause autism. The connection between autism and vaccines, in particular the measles, mumps, rubella (MMR) vaccine, has its roots in a paper published by Andrew Wakefield in 1998 in the medical journal The Lancet.  This link has already been completely and thoroughly debunked – there is no evidence to substantiate this connection. But over the past two weeks, the full extent of the deception propagated by Wakefield was revealed. The British Medical Journal has a series of articles from journalist Brian Deer (part 1, part 2), who spent years digging into the facts behind Wakefield, his research, and the Lancet paper.
  • ...3 more annotations...
  • Wakefield’s original paper (now retracted) attempted to link gastrointestinal symptoms and regressive autism in 12 children to the administration of the MMR vaccine. Last year Wakefield was stripped of his medical license for unethical behaviour, including undeclared conflicts of interest.  The most recent revelations demonstrate that it wasn’t just sloppy research – it was fraud.
  • Unbelievably, some groups still hold Wakefield up as some sort of martyr, but now we have the facts: Three of the nine children said to have autism didn't have autism at all. The paper claimed all 12 children were normal before administration of the vaccine; in fact, 5 had developmental delays that were detected prior to the vaccine being administered. Behavioural symptoms in some children were claimed in the paper as being closely related to the vaccine administration, but documentation showed otherwise. What were initially determined to be "unremarkable" colon pathology reports were changed to "non-specific colitis" after a secondary review. Parents were recruited for the "study" by anti-vaccinationists. The study was designed and funded to support future litigation.
  • As Dr. Paul Offit has been quoted as saying, you can’t unring a bell. So what’s going to stop this bell from ringing? Perhaps an awareness of its fraudulent basis will do more to change perceptions than a decade of scientific investigation has been able to achieve. For the sake of population health, we hope so.
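To make the population arithmetic in the first two bullets concrete: a standard epidemiological rule of thumb is that a disease with basic reproduction number R0 stops spreading once more than 1 - 1/R0 of the population is immune. The sketch below uses common textbook R0 estimates, which are rough assumptions for illustration rather than outbreak-specific measurements.

```python
# Herd immunity threshold: the immune fraction above which each case infects,
# on average, fewer than one new person, so outbreaks die out.
# R0 values below are rough textbook estimates, not measured from any outbreak.

r0_estimates = {
    "measles": 15,      # commonly quoted range: 12-18
    "pertussis": 14,    # commonly quoted range: 12-17
    "diphtheria": 6,
    "polio": 6,
    "smallpox": 6,
}

for disease, r0 in r0_estimates.items():
    threshold = 1 - 1 / r0
    print(f"{disease:>10}: R0 ~ {r0:2d} -> ~{threshold:.0%} of population must be immune")
```

On these numbers measles needs roughly 93% immunity, so the UK's drop below 80% coverage leaves it well under the threshold; smallpox's lower R0 is part of why mass vaccination could eradicate it outright.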
Weiye Loh

The Mysterious Decline Effect | Wired Science | Wired.com - 0 views

  • Question #1: Does this mean I don’t have to believe in climate change? Me: I’m afraid not. One of the sad ironies of scientific denialism is that we tend to be skeptical of precisely the wrong kind of scientific claims. In poll after poll, Americans have dismissed two of the most robust and widely tested theories of modern science: evolution by natural selection and climate change. These are theories that have been verified in thousands of different ways by thousands of different scientists working in many different fields. (This doesn’t mean, of course, that such theories won’t change or get modified – the strength of science is that nothing is settled.) Instead of wasting public debate on creationism or the rhetoric of Senator Inhofe, I wish we’d spend more time considering the value of spinal fusion surgery, or second generation antipsychotics, or the verity of the latest gene association study. The larger point is that we need to do a better job of considering the context behind every claim. In 1951, the Harvard philosopher Willard Van Orman Quine published “Two Dogmas of Empiricism.” In the essay, Quine compared the truths of science to a spider’s web, in which the strength of the lattice depends upon its interconnectedness. (Quine: “The unit of empirical significance is the whole of science.”) One of the implications of Quine’s paper is that, when evaluating the power of a given study, we need to also consider the other studies and untested assumptions that it depends upon. Don’t just fixate on the effect size – look at the web. Unfortunately for the denialists, climate change and natural selection have very sturdy webs.
  • biases are not fraud. We sometimes forget that science is a human pursuit, mingled with all of our flaws and failings. (Perhaps that explains why an episode like Climategate gets so much attention.) If there’s a single theme that runs through the article it’s that finding the truth is really hard. It’s hard because reality is complicated, shaped by a surreal excess of variables. But it’s also hard because scientists aren’t robots: the act of observation is simultaneously an act of interpretation.
  • (As Paul Simon sang, “A man sees what he wants to see and disregards the rest.”) Most of the time, these distortions are unconscious – we don’t even know we are misperceiving the data. However, even when the distortion is intentional, it still rarely rises to the level of outright fraud. Consider the story of Mike Rossner. He’s executive director of the Rockefeller University Press, and helps oversee several scientific publications, including The Journal of Cell Biology.  In 2002, while trying to format a scientific image in Photoshop that was going to appear in one of the journals, Rossner noticed that the background of the image contained distinct intensities of pixels. “That’s a hallmark of image manipulation,” Rossner told me. “It means the scientist has gone in and deliberately changed what the data looks like. What’s disturbing is just how easy this is to do.” This led Rossner and his colleagues to begin analyzing every image in every accepted paper. They soon discovered that approximately 25 percent of all papers contained at least one “inappropriately manipulated” picture. Interestingly, the vast, vast majority of these manipulations (~99 percent) didn’t affect the interpretation of the results. Instead, the scientists seemed to be photoshopping the pictures for aesthetic reasons: perhaps a line on a gel was erased, or a background blur was deleted, or the contrast was exaggerated. In other words, they wanted to publish pretty images. That’s a perfectly understandable desire, but it gets problematic when that same basic instinct – we want our data to be neat, our pictures to be clean, our charts to be clear – is transposed across the entire scientific process. (A sketch of the kind of pixel-intensity screening Rossner describes appears after this list.)
  • ...2 more annotations...
  • One of the philosophy papers that I kept on thinking about while writing the article was Nancy Cartwright’s essay “Do the Laws of Physics State the Facts?” Cartwright used numerous examples from modern physics to argue that there is often a basic trade-off between scientific “truth” and experimental validity, so that the laws that are the most true are also the most useless. “Despite their great explanatory power, these laws [such as gravity] do not describe reality,” Cartwright writes. “Instead, fundamental laws describe highly idealized objects in models.”  The problem, of course, is that experiments don’t test models. They test reality.
  • Cartwright’s larger point is that many essential scientific theories – those laws that explain things – are not actually provable, at least in the conventional sense. This doesn’t mean that gravity isn’t true or real. There is, perhaps, no truer idea in all of science. (Feynman famously referred to gravity as the “greatest generalization achieved by the human mind.”) Instead, what the anomalies of physics demonstrate is that there is no single test that can define the truth. Although we often pretend that experiments and peer-review and clinical trials settle the truth for us – that we are mere passive observers, dutifully recording the results – the actuality of science is a lot messier than that. Richard Rorty said it best: “To say that we should drop the idea of truth as out there waiting to be discovered is not to say that we have discovered that, out there, there is no truth.” Of course, the very fact that the facts aren’t obvious, that the truth isn’t “waiting to be discovered,” means that science is intensely human. It requires us to look, to search, to plead with nature for an answer.
Weiye Loh

No Science please, we're Anthropologists « Critical Thinking « Skeptic North - 0 views

  • The debate is between researchers in science-based anthropological disciplines like archaeology, physical anthropology and forensic anthropology, and anthropologists who focus on the more humanities-based issues like race, ethnicity and gender.
  • Those who defend the old mandate, members of the science-based fields, are interested in relying on the scientific method to inform their theories about anthropology, and in ensuring that due diligence is done on new theories and that research is conducted on sound principles. In opposition are members who view themselves as advocates and activists. As they see it, research on culture, race, and gender is only harmed by science, as it represents the cold arm of colonial imperialism.
  • viewing this as more than a simple cosmetic change, he compared the attacks and challenges on anthropology to creationism, in that both are “based on the rejection of rational argument and thought.”
  • ...6 more annotations...
  • the American Anthropological Association attempted to clarify its position by issuing a statement in which it declared: “the Executive Board recognizes and endorses the crucial place of the scientific method in much anthropological research.” To further clarify matters, it went on to describe the discipline this way: “Anthropology is a holistic and expansive discipline that covers the full breadth of human history and culture.”
  • Damon Dozier, the association’s director of public affairs is further quoted saying “We mean holistic in terms of the diversity of the discipline.”
  • Despite the attempts to head off a huge rift, there appears to be lingering doubt as to the direction the American Anthropological Association is going, and even more concern that the field of anthropology is under siege from post-modern attacks on its scientific foundations.
  • One of the most important contributions of science to the world has been a method of inquiry that has proven itself unequalled in explaining the natural world. The scientific method is, and should be, foundational in any field where the goal is to explain the natural world.
  • The so-called “hard sciences” understand this. Where things get muddled is in the “soft sciences” like anthropology, history, and psychology. For some reason these fields have proven especially vulnerable to post-modernism and have fallen prey to the schizophrenic notion that science is “western” and that trying to use science to explain things is another branch of imperialism.
  • The so-called “soft sciences” are occasionally put in the position of making assumptions. When you have a hypothesis you want to test, you unfortunately can’t travel back in time and do an experiment. Therefore, relying on the evidence you already have and employing your critical thinking skills you formulate a rational assumption and await the opportunity to confirm or deny it. It’s not based on a “hunch” or conjured up from the imagination. It’s based on rational skepticism.
Weiye Loh

Arianna Huffington: The Media Gets It Wrong on WikiLeaks: It's About Broken Trust, Not ... - 0 views

  • Too much of the coverage has been meta -- focusing on questions about whether the leaks were justified, while too little has dealt with the details of what has actually been revealed and what those revelations say about the wisdom of our ongoing effort in Afghanistan. There's a reason why the administration is so upset about these leaks.
  • True, there hasn't been one smoking-gun, bombshell revelation -- but that's certainly not to say the cables haven't been revealing. What there has been instead is more of the consistent drip, drip, drip of damning details we keep getting about the war.
  • It's notable that the latest leaks came out the same week President Obama went to Afghanistan for his surprise visit to the troops -- and made a speech about how we are "succeeding" and "making important progress" and bound to "prevail."
  • ...16 more annotations...
  • The WikiLeaks cables present quite a different picture. What emerges is one reality (the real one) colliding with another (the official one). We see smart, good-faith diplomats and foreign service personnel trying to make the truth on the ground match up to the one the administration has proclaimed to the public. The cables show the widening disconnect. It's like a foreign policy Ponzi scheme -- this one fueled not by the public's money, but the public's acquiescence.
  • The second aspect of the story -- the one that was the focus of the symposium -- is the changing relationship to government that technology has made possible.
  • Back in the year 2007, B.W. (Before WikiLeaks), Barack Obama waxed lyrical about government and the internet: "We have to use technology to open up our democracy. It's no coincidence that one of the most secretive administrations in our history has favored special interests and pursued policies that could not stand up to the sunlight."
  • Not long after the election, in announcing his "Transparency and Open Government" policy, the president proclaimed: "Transparency promotes accountability and provides information for citizens about what their Government is doing. Information maintained by the Federal Government is a national asset." Cut to a few years later. Now that he's defending a reality that doesn't match up to, well, reality, he's suddenly not so keen on the people having a chance to access this "national asset."
  • Even more wikironic are the statements by his Secretary of State who, less than a year ago, was lecturing other nations about the value of an unfettered and free internet. Given her description of WikiLeaks as "an attack on America's foreign policy interests" that has put "innocent people" in danger, her comments take on a whole different light. Some highlights: In authoritarian countries, information networks are helping people discover new facts and making governments more accountable... technologies with the potential to open up access to government and promote transparency can also be hijacked by governments to crush dissent and deny human rights... As in the dictatorships of the past, governments are targeting independent thinkers who use these tools. Now "making government accountable" is, as White House spokesman Robert Gibbs put it, a "reckless and dangerous action."
  • Jay Rosen, one of the participants in the symposium, wrote a brilliant essay entitled "From Judith Miller to Julian Assange." He writes: For the portion of the American press that still looks to Watergate and the Pentagon Papers for inspiration, and that considers itself a check on state power, the hour of its greatest humiliation can, I think, be located with some precision: it happened on Sunday, September 8, 2002. That was when the New York Times published Judith Miller and Michael Gordon's breathless, spoon-fed -- and ultimately inaccurate -- account of Iraqi attempts to buy aluminum tubes to produce fuel for a nuclear bomb.
  • Miller's after-the-facts-proved-wrong response, as quoted in a Michael Massing piece in the New York Review of Books, was: "My job isn't to assess the government's information and be an independent intelligence analyst myself. My job is to tell readers of The New York Times what the government thought about Iraq's arsenal." In other words, her job is to tell citizens what their government is saying, not, as Obama called for in his transparency initiative, what their government is doing.
  • As Jay Rosen put it: Today it is recognized at the Times and in the journalism world that Judy Miller was a bad actor who did a lot of damage and had to go. But it has never been recognized that secrecy was itself a bad actor in the events that led to the collapse, that it did a lot of damage, and parts of it might have to go. Our press has never come to terms with the ways in which it got itself on the wrong side of secrecy as the national security state swelled in size after September 11th.
  • And in the WikiLeaks case, much of media has again found itself on the wrong side of secrecy -- and so much of the reporting about WikiLeaks has served to obscure, to conflate, to mislead. For instance, how many stories have you heard or read about all the cables being "dumped" in "indiscriminate" ways with no attempt to "vet" and "redact" the stories first. In truth, only just over 1,200 of the 250,000 cables have been released, and WikiLeaks is now publishing only those cables vetted and redacted by their media partners, which includes the New York Times here and the Guardian in England.
  • The establishment media may be part of the media, but they're also part of the establishment. And they're circling the wagons. One method they're using, as Andrew Rasiej put it after the symposium, is to conflate the secrecy that governments use to operate and the secrecy that is used to hide the truth and allow governments to mislead us.
  • Nobody, including WikiLeaks, is promoting the idea that government should exist in total transparency,
  • Assange himself would not disagree. "Secrecy is important for many things," he told Time's Richard Stengel. "We keep secret the identity of our sources, as an example, take great pains to do it." At the same time, however, secrecy "shouldn't be used to cover up abuses."
  • Decentralizing government power, limiting it, and challenging it was the Founders' intent and these have always been core conservative principles. Conservatives should prefer an explosion of whistleblower groups like WikiLeaks to a federal government powerful enough to take them down. Government officials who now attack WikiLeaks don't fear national endangerment, they fear personal embarrassment. And while scores of conservatives have long promised to undermine or challenge the current monstrosity in Washington, D.C., it is now an organization not recognizably conservative that best undermines the political establishment and challenges its very foundations.
  • It is not, as Simon Jenkins put it in the Guardian, the job of the media to protect the powerful from embarrassment. As I said at the symposium, its job is to play the role of the little boy in The Emperor's New Clothes -- brave enough to point out what nobody else is willing to say.
  • When the press trades truth for access, it is WikiLeaks that acts like the little boy. "Power," wrote Jenkins, "loathes truth revealed. When the public interest is undermined by the lies and paranoia of power, it is disclosure that takes sanity by the scruff of its neck and sets it back on its feet."
  • A final aspect of the story is Julian Assange himself. Is he a visionary? Is he an anarchist? Is he a jerk? This is fun speculation, but why does it have an impact on the value of the WikiLeaks revelations?
Weiye Loh

FT.com / Business education / Soapbox - Popular fads replace relevant teaching - 0 views

  • There is a great divide in business schools, one that few outsiders are aware of. It is the divide between research and teaching. There is little relation between them. What is being taught in management books and classrooms is usually not based on rigorous research and vice-versa; the research published in prestigious academic journals seldom finds its way into the MBA classroom.
  • since none of this research is really intended to be used in the classroom, or to be communicated to managers in some other form, it is not suited to serve that purpose. The goal is publication in a prestigious academic journal, but that does not make it useful or even offer a guarantee that the research findings provide much insight into the workings of business reality.
  • This is not a new problem. In 1994, Don Hambrick, then the president of the Academy of Management, said: “We read each others’ papers in our journals and write our own papers so that we may, in turn, have an audience . . . an incestuous, closed loop”. Management research is not required to be relevant. Consequently much of it is not.
  • ...6 more annotations...
  • But business education clearly also suffers. What is being taught in management courses is usually not based on solid scientific evidence. Instead, it concerns the generalisation of individual business cases or the lessons from popular management books. Such books are often based on the appealing formula of looking at several successful companies, seeing what they have in common, and concluding that other companies should strive to do the same thing.
  • how do you know that the advice provided is reasonable, or if it comes from tomorrow’s Enrons, RBSs, Lehmans and WorldComs? How do you know that today’s advice and cases will not later be heralded as the epitome of mismanagement?
  • In the 1990s, ISO9000 (a quality management systems standard) spread through many industries. But research by professors Mary Benner and Mike Tushman showed that its adoption could, in time, lead to a fall in innovation (because ISO9000 does not allow for deviations from a set standard, which innovation requires), making the adopter worse off. This research was overlooked by practitioners, many business schools continued to applaud the benefits of ISO9000 in their courses, while firms continued – and still do – to implement the practice, ignorant of its potential pitfalls. Yet this research offers a clear example of the possible benefits of scientific research methods: rigorous research that reveals unintended consequences to expose the true nature of a business practice.
  • such research with important practical implications unfortunately is the exception rather than the rule. Moreover, even relevant research is largely ignored in business education – as happened to the findings by Benner and Tushman.
  • Of course one should not make the mistake of thinking that business cases and business books based on personal observation and opinion are without value. They potentially offer a great source of practical experience. Similarly, it would be naive to assume that scientific research can provide custom-made answers. Rigorous management research could and should provide the basis for skilled managers to make better decisions, but managers cannot do that without in-depth knowledge of their specific organisation and circumstances.
  • at present, business schools largely fail in providing rigorous, evidence-based teaching.
Weiye Loh

Official Google Blog: Microsoft's Bing uses Google search results-and denies it - 0 views

  • By now, you may have read Danny Sullivan’s recent post: “Google: Bing is Cheating, Copying Our Search Results” and heard Microsoft’s response, “We do not copy Google's results.” However you define copying, the bottom line is, these Bing results came directly from Google
  • We created about 100 “synthetic queries”—queries that you would never expect a user to type, such as [hiybbprqag]. As a one-time experiment, for each synthetic query we inserted as Google’s top result a unique (real) webpage which had nothing to do with the query.
  • To be clear, the synthetic query had no relationship with the inserted result we chose—the query didn’t appear on the webpage, and there were no links to the webpage with that query phrase. In other words, there was absolutely no reason for any search engine to return that webpage for that synthetic query. You can think of the synthetic queries with inserted results as the search engine equivalent of marked bills in a bank. (A sketch of one way to generate such honeypot queries appears after this list.)
  • ...1 more annotation...
  • We gave 20 of our engineers laptops with a fresh install of Microsoft Windows running Internet Explorer 8 with Bing Toolbar installed. As part of the install process, we opted in to the “Suggested Sites” feature of IE8, and we accepted the default options for the Bing Toolbar. We asked these engineers to enter the synthetic queries into the search box on the Google home page, and click on the results, i.e., the results we inserted. We were surprised that within a couple weeks of starting this experiment, our inserted results started appearing in Bing. Below is an example: a search for [hiybbprqag] on Bing returned a page about seating at a theater in Los Angeles. As far as we know, the only connection between the query and result is Google’s result page (shown above).
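As a rough illustration of the "marked bills" technique, here is one way such honeypot queries might be generated. The construction is an assumption on my part; Google has not said how strings like [hiybbprqag] were actually produced.

```python
# Sketch of generating synthetic honeypot queries, in the spirit of Google's
# experiment. This generation scheme is assumed for illustration; Google has
# not published how queries like "hiybbprqag" were actually constructed.
import random
import string

def synthetic_query(length: int = 10, seed: int = 0) -> str:
    """A random lowercase string, vanishingly unlikely to be a real user query."""
    rng = random.Random(seed)
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

# One hundred marked bills, each to be paired with an unrelated planted result.
queries = [synthetic_query(seed=i) for i in range(100)]
print(queries[:3])
```

If another engine later returns the planted page for one of these strings, the only plausible route is that it observed the first engine's results, which is exactly the inference Google drew.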
Weiye Loh

Breakthrough Europe: Towards a Social Theory of Climate Change - 0 views

  • Lever-Tracy confronted sociologists head on about their worrisome silence on the issue. Why have sociologists failed to address the greatest and most overwhelming challenge facing modern society? Why have the figureheads of the discipline, such as Anthony Giddens and Ulrich Beck, so far refused to apply their seminal notions of structuration and the risk society to the issue?
  • Earlier, we re-published an important contribution by Ulrich Beck, the world-renowned German sociologist and a Breakthrough Senior Fellow. More recently, Current Sociology published a powerful response by Reiner Grundmann of Aston University and Nico Stehr of Zeppelin University.
  • sociologists should not rush into the discursive arena without asking some critical questions in advance, questions such as: What exactly could sociology contribute to the debate? And, is there something we urgently need that is not addressed by other disciplines or by political proposals?
  • ...12 more annotations...
  • The authors disagree with Lever-Tracy's observation that the lack of interest in climate change among sociologists is driven by a widespread suspicion of naturalistic explanations, teleological arguments and environmental determinism.
  • While conceding that Lever-Tracy's observation may be partially true, the authors argue that more important processes are at play, including cautiousness on the part of sociologists to step into a heavily politicized debate; methodological differences with the natural sciences; and sensitivity about locating climate change in the longue durée.
  • Secondly, while Lever-Tracy argues that "natural and social change are now in lockstep with each other, operating on the same scales," and that therefore a multidisciplinary approach is needed, Grundmann and Stehr suggest that the true challenge is interdisciplinarity, as opposed to multidisciplinarity.
  • Thirdly, and this is possibly the most striking observation of the article, Grundmann and Stehr challenge Lever-Tracy's argument that natural scientists have successfully made the case for anthropogenic climate change, and that therefore social scientists should cease to endlessly question this scientific consensus on the basis of a skeptical postmodern 'deconstructionism'.
  • As opposed to both Lever-Tracy's positivist view and the radical postmodern deconstructionist view, Grundmann and Stehr take the social constructivist view, which argues that every idea is socially constructed and therefore the product of human interpretation and communication. This raises the 'intractable' specters of discourse and framing, to which we will return in a second.
  • Finally, Lever-Tracy holds that climate change needs to be posited "firmly at the heart of the discipline." Grundmann and Stehr, however, emphasize that "if this is going to [be] more than wishful thinking, we need to carefully consider the prospects of such an enterprise."
  • The importance of framing climate change in a way that allows it to resonate with the concerns of the average citizen is an issue that the Breakthrough Institute has long emphasized. The apocalyptic politics of fear often associated with climate change, in particular, tends to have a counterproductive effect on public opinion. Realizing this, Grundmann and Stehr issue an important warning to sociologists: "the inherent alarmism in many social science contributions on climate change merely repeats the central message provided by mainstream media." In other words, it fails to provide the kind of distantiated observation needed to approach the issue with at least a mild degree of objectivity or impartiality.
  • While this tension is symptomatic of many social scientific attempts to get involved, we propose to study these very underlying assumptions. For example, we should ask: Does the dramatization of events lead to effective political responses? Do we need a politics of fear? Is scientific consensus instrumental for sound policies? And more generally, what are the relations between a changing technological infrastructure, social shifts and belief systems? What contribution can bottom-up initiatives have in fighting climate change? What roles are there for markets, hierarchies and voluntary action? How was it possible that the 'fight against climate change' rose from a marginal discourse to a hegemonic one (from heresy to dogma)? And will the discourse remain hegemonic or will too much public debate about climate change lead to 'climate change fatigue'?
  • In this respect, Grundmann and Stehr make another crucial observation: "the severity of a problem does not mean that we as sociologists should forget about our analytical apparatus." Bringing the analytical apparatus of sociology back in, the hunting season for positivist approaches to knowledge and nature is opened. Grundmann and Stehr consequently criticize not only Lever-Tracy's unspoken adherence to a positivist nature-society duality, taking instead a more dialectical Marxian approach to the relationship between man and his environment, but they also criticize her idea that incremental increases in our scientific knowledge of climate change and its impacts will automatically coalesce into successful and meaningful policy responses.
  • Political decisions about climate change are made on the basis of scientific research and a host of other (economic, political, cultural) considerations. Regarding the scientific dimension, it is a common perception (one that Lever-Tracy seems to share) that the more knowledge we have, the better the political response will be. This is the assumption of the linear model of policy-making that has been dominant in the past but debunked time and again (Godin, 2006). What we increasingly realize is that knowledge creation leads to an excess of information and 'objectivity' (Sarewitz, 2000). Even the consensual mechanisms of the IPCC lead to an increase in options because knowledge about climate change increases.
  • Instead, Grundmann and Stehr propose to look carefully at how we frame climate change socially and whether the hegemonic climate discourse is actually contributing to successful political action or hampering it. Defending this social constructivist approach from the unfounded allegation that it would play into the hands of the climate skeptics, the authors note that defining climate change as a social construction ... is not to diminish its importance, relevance, or reality. It simply means that sociologists study the process whereby something (like anthropogenic climate change) is transformed from a conjecture into an accepted fact. With regard to policy, we observe a near exclusive focus on carbon dioxide emissions. This framing has proven counterproductive, as the Hartwell paper and other sources demonstrate (see Eastin et al., 2010; Prins et al., 2010). Reducing carbon emissions in the short term is among the most difficult tasks. More progress could be made by a re-framing of the issue, not as an issue of human sinfulness, but of human dignity. [emphasis added]
  • These observations allow the authors to come full circle, arriving right back at their first observation about the real reasons why sociologists have so far kept silent on climate change. Somehow, "there seems to be the curious conviction that lest you want to be accused of helping the fossil fuel lobbies and the climate skeptics, you better keep quiet."
  •  
    Towards a Social Theory of Climate Change
Weiye Loh

Rationally Speaking: A new eugenics? - 0 views

  • an interesting article I read recently, penned by Julian Savulescu for the Practical Ethics blog.
  • Savulescu discusses an ongoing controversy in Germany about genetic testing of human embryos. The Leopoldina, Germany’s equivalent of the National Academy of Sciences, has recommended genetic testing of pre-implant embryos, to screen for serious and incurable defects. The German Chancellor, Angela Merkel, has agreed to allow a parliamentary vote on this issue, but also said that she personally supports a ban on this type of testing. Her fear is that the testing would quickly lead to “designer babies,” i.e. to parents making choices about their unborn offspring based not on knowledge about serious disease, but simply because they happen to prefer a particular height or eye color.
  • He infers from Merkel’s comments (and many similar others) that people tend to think of selecting traits like eye color as eugenics, while acting to avoid incurable disease is not considered eugenics. He argues that this is exactly wrong: eugenics, as he points out, means “well born,” so eugenicists have historically been concerned with eliminating traits that would harm society (Wendell Holmes’ “three generations of imbeciles”), not with simple aesthetic choices. As Savulescu puts it: “[eugenics] is selecting embryos which are better, in this context, have better lives. Being healthy rather than sick is ‘better.’ Having blond hair and blue eyes is not in any plausible sense ‘better,’ even if people mistakenly think so.”
  • ...9 more annotations...
  • And there is another, related aspect of discussions about eugenics that should be at the forefront of our consideration: what was particularly objectionable about American and Nazi early 20th century eugenics is that the state, not individuals, were to make decisions about who could reproduce and who couldn’t. Savulescu continues: “to grant procreative liberty is the only way to avoid the objectionable form of eugenics that the Nazis practiced.” In other words, it makes all the difference in the world if it is an individual couple who decides to have or not have a baby, or if it is the state that imposes a particular reproductive choice on its citizenry.
  • but then Savulescu expands his argument to a point where I begin to feel somewhat uncomfortable. He says: “[procreative liberty] involves the freedom to choose a child with red hair or blond hair or no hair.”
  • Savulescu has suddenly sneaked into his argument for procreative liberty the assumption that all choices in this area are on the same level. But while it is hard to object to action aimed at avoiding devastating diseases, it is not quite so obvious to me what arguments favor the idea of designer babies. The first intervention can be justified, for instance, on consequentialist grounds because it reduces the pain and suffering of both the child and the parents. The second intervention is analogous to shopping for a new bag, or a new car, which means that it commodifies the act of conceiving a baby, thus degrading its importance. I’m not saying that that in itself is sufficient to make it illegal, but the ethics of it is different, and that difference cannot simply be swept under the broad rug of “procreative liberty.”
  • designing babies is to treat them as objects, not as human beings, and there are a couple of strong philosophical traditions in ethics that go squarely against that (I’m thinking, obviously, of Kant’s categorical imperative, as well as of virtue ethics; not sure what a consequentialist would say about this, probably she would remain neutral on the issue).
  • Commodification of human beings has historically produced all sorts of bad stuff, from slavery to exploitative prostitution, and arguably to war (after all, we are using our soldiers as means to gain access to power, resources, territory, etc.)
  • And of course, there is the issue of access. Across-the-board “procreative liberty” of the type envisioned by Savulescu will cost money because it requires considerable resources.
  • imagine that these parents decide to purchase the ability to produce babies that have the type of characteristics that will make them more successful in society: taller, more handsome, blue eyed, blonde, more symmetrical, whatever. We have just created yet another way for the privileged to augment and pass their privileges to the next generation — in this case literally through their genes, not just as real estate or bank accounts. That would quickly lead to an even further divide between the haves and the have-nots, more inequality, more injustice, possibly, in the long run, even two different species (why not design your babies so that they can’t breed with certain types of undesirables, for instance?). Is that the sort of society that Savulescu is willing to envision in the name of his total procreative liberty? That begins to sound like the libertarian version of the eugenic ideal, something potentially only slightly less nightmarish than the early 20th century original.
  • Rich people already have better choices when it comes to their babies. Taller and richer men can choose between more attractive and physically fit women, and attractive women can choose between more physically fit and rich men. So it is reasonable to conclude that on average rich and attractive people already have more options when it comes to their offspring. Moreover, no one is questioning their right to do so, and this is based on a respect for a basic instinct which we all have, and which is exactly why these people would choose to have a DB (designer baby). Is it fair for someone to be tall because his daddy was rich and married a supermodel but not because his daddy was rich and had his DNA resequenced? Is the former good because it's natural and the latter bad because it's not? This isn't at all obvious to me.
  • Not to mention that rich people can provide better health care, education and nutrition to their children and again no one is questioning their right to do so. Wouldn't a couple of inches be pretty negligible compared to getting into a good school? Aren't we applying double standards by objecting to this issue alone? Do we really live in a society that values equal opportunities? People (may) be equal before the law but they are not equal to each other and each one of us is tacitly accepting that fact when we acknowledge the social hierarchy (in other words, every time we interact with someone who is our superior). I am not crazy about this fact but that's just how people are and this has to be taken into account when discussing this.
Weiye Loh

Adventures in Flay-land: Dealing with Denialists - Delingpole Part III - 0 views

  • This post is about how one should deal with a denialist of Delingpole's ilk.
  • I saw someone I follow on Twitter retweet an update from another Twitter user called @AGW_IS_A_HOAX, which was this: "NZ #Climate Scientists Admit Faking Temperatures http://bit.ly/fHbdPI RT @admrich #AGW #Climategate #Cop16 #ClimateChange #GlobalWarming".
  • So I click on it. And this is how you deal with a denialist claim. You actually look into it. Here is the text of that article reproduced in full: New Zealand Climate Scientists Admit To Faking Temperatures: The Actual Temps Show Little Warming Over Last 50 Years. Read here and here. Climate "scientists" across the world have been blatantly fabricating temperatures in hopes of convincing the public and politicians that modern global warming is unprecedented and accelerating. The scientists doing the fabrication are usually employed by the government agencies or universities, which thrive and exist on taxpayer research dollars dedicated to global warming research. A classic example of this is the New Zealand climate agency, which is now admitting their scientists produced bogus "warming" temperatures for New Zealand. "NIWA makes the huge admission that New Zealand has experienced hardly any warming during the last half-century. For all their talk about warming, for all their rushed invention of the “Eleven-Station Series” to prove warming, this new series shows that no warming has occurred here since about 1960. Almost all the warming took place from 1940-60, when the IPCC says that the effect of CO2 concentrations was trivial. Indeed, global temperatures were falling during that period. ... Almost all of the 34 adjustments made by Dr Jim Salinger to the 7SS have been abandoned, along with his version of the comparative station methodology." A collection of temperature-fabrication charts.
  • ...10 more annotations...
  • I check out the first link, the first "here" where the article says "Read here and here". I can see that there's been some sort of dispute between two New Zealand groups associated with climate change. One is New Zealand’s Climate Science Coalition (NZCSC) and the other is New Zealand’s National Institute of Water and Atmospheric Research (NIWA), but it doesn't tell me a whole lot more than I already got from the other article.
  • I check the second source behind that article. The second article, I now realize, is published on the website of a person called Andrew Montford with whom I've been speaking recently and who is the author of a book titled The Hockey Stick Illusion. I would not label Andrew a denialist. He makes some good points and seems to be a decent guy and genuine sceptic (this is not to suggest all denialists are outwardly dishonest; however, they do tend to be hard to reason with). Again, this article doesn't give me anything that I haven't already seen, except a link to another background source. I go there.
  • From this piece written up on Scoop NZNEWSUK I discover that a coalition group consisting of the NZCSC and the Climate Conversation Group (CCG) has pressured the NIWA into abandoning a set of temperature record adjustments of which the coalition dispute the validity. This was the culmination of a court proceeding in December 2010, last month. In dispute were 34 adjustments that had been made by Dr Jim Salinger to the 7SS temperature series, though I don't know what that is exactly. I also discover that there is a guy called Richard Treadgold, Convenor of the CCG, who is quoted several times. Some of the statements he makes are quoted in the articles I've already seen. They are of a somewhat snide tenor. The CSC object to the methodology used by the NIWA to adjust temperature measurements (one developed as part of a PhD thesis), which they critique in a paper in November 2009 with the title "Are we feeling warmer yet?", and are concerned about how this public agency is spending its money. I'm going to have to dig a bit deeper if I want to find out more. There is a section with links under the heading "Related Stories on Scoop". I click on a few of those.
  • One of these leads me to more. Of particular interest is a fairly neutral article outlining the progress of the court action. I get some more background: For the last ten years, visitors to NIWA’s official website have been greeted by a graph of the “seven-station series” (7SS), under the bold heading “New Zealand Temperature Record”. The graph covers the period from 1853 to the present, and is adorned by a prominent trend-line sloping sharply upwards. Accompanying text informs the world that “New Zealand has experienced a warming trend of approximately 0.9°C over the past 100 years.” The 7SS has been updated and used in every monthly issue of NIWA’s “Climate Digest” since January 1993. Its 0.9°C (sometimes 1.0°C) of warming has appeared in the Australia/NZ Chapter of the IPCC’s 2001 and 2007 Assessment Reports. It has been offered as sworn evidence in countless tribunals and judicial enquiries, and provides the historical base for all of NIWA’s reports to both Central and Local Governments on climate science issues and future projections.
  • now I can see why this is so important. The temperature record informs the conclusions of the IPCC assessment reports and provides crucial evidence for global warming. (The simple trend computation behind such headline figures is sketched after this list.)
  • Further down we get: NIWA announces that it has now completed a full internal examination of the Salinger adjustments in the 7SS, and has forwarded its “review papers” to its Australian counterpart, the Bureau of Meteorology (BOM) for peer review.and: So the old 7SS has already been repudiated. A replacement NZTR [New Zealand Temperature Record] is being prepared by NIWA – presumably the best effort they are capable of producing. NZCSC is about to receive what it asked for. On the face of it, there’s nothing much left for the Court to adjudicate.
  • NIWA has been forced to withdraw its earlier temperature record and replace it with a new one. Treadgold quite clearly states that "NIWA makes the huge admission that New Zealand has experienced hardly any warming during the last half-century" and that "the new temperature record shows no evidence of a connection with global warming." Earlier in the article he also stresses the role of the CSC in achieving these revisions, saying "after 12 months of futile attempts to persuade the public, misleading answers to questions in the Parliament from ACT and reluctant but gradual capitulation from NIWA, their relentless defence of the old temperature series has simply evaporated. They’ve finally given in, but without our efforts the faulty graph would still be there."
  • All this leads me to believe that if I look at the website of NIWA I will see a retraction of the earlier position and a new position that New Zealand has experienced no unusual warming. This is easy enough to check. I go there. Actually, I search for it to find the exact page. Here is the 7SS page on the NIWA site. Am I surprised that NIWA have retracted nothing and that in fact their revised graph shows similar results? Not really. However, I am somewhat surprised by this page on the Climate Conversation Group website which claims that the 7SS temperature record is as dead as the parrot in the Monty Python sketch. It says "On the eve of Christmas, when nobody was looking, NIWA declared that New Zealand had a new official temperature record (the NZT7) and whipped the 7SS off its website." However, I've already seen that this is not true. Perhaps there was once a 7SS graph and information about the temperature record on the site's homepage that can no longer be seen. I don't know. I can only speculate. I know that there is a section on the NIWA site about the 7SS temperature record that contains a number of graphs and figures and discusses recent revisions. It has been updated as recently as December 2010, last month. The NIWA page talks all about the 7SS series and has a heading that reads "Our new analysis confirms the warming trend".
  • The CCG page claims that the new NZT7 is not in fact a revision but rather a replacement. Although it results in a similar curve, the adjustments that were made are very different. Frankly I can't see how that matters at the end of the day. Now, I don't really know whether I can believe that the NIWA analysis is true, but what I am in no doubt of whatsoever is that the statements made by Richard Treadgold that were quoted in so many places are at best misleading. The NIWA has not changed its position in the slightest. The assertion that the NIWA have admitted that New Zealand has not warmed much since 1960 is a politician's careful argument. Both analyses showed the same result. This is a fact that NIWA have not disputed; however, they still maintain a connection to global warming. A document explaining the revisions talks about why the warming has slowed after 1960: The unusually steep warming in the 1940-1960 period is paralleled by an unusually large increase in northerly flow* during this same period. On a longer timeframe, there has been a trend towards less northerly flow (more southerly) since about 1960. However, New Zealand temperatures have continued to increase over this time, albeit at a reduced rate compared with earlier in the 20th century. This is consistent with a warming of the whole region of the southwest Pacific within which New Zealand is situated.
  • Denialists have taken Treadgold's misleading mantra and spread it far and wide including on Twitter and fringe websites, but it is faulty as I've just demonstrated. Why do people do this? Perhaps they are hoping that others won't check the sources. Most people don't. I hope this serves as a lesson for why you always should.
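For anyone wanting to replicate the arithmetic behind headline figures like NIWA's "approximately 0.9°C over the past 100 years": the trend is just a least-squares slope fitted to an annual temperature series. The sketch below runs on fabricated numbers, not the 7SS data; it only shows the computation that the adjustment dispute feeds into.

```python
# Sketch of the least-squares trend calculation behind headline figures like
# "0.9 deg C of warming per century". The data here are fabricated for
# illustration and are NOT the NIWA 7SS series.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1909, 2009)                   # a 100-year annual record
temps = 12.0 + 0.009 * (years - years[0]) + rng.normal(0, 0.3, years.size)

slope, intercept = np.polyfit(years, temps, 1)  # least-squares fit, deg C per year
print(f"trend: {slope * 100:.2f} deg C per century")
```

Since the slope is computed from whatever adjusted series you feed it, the fight over Salinger's 34 adjustments is really a fight over the input to this one line of arithmetic.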