
New Media Ethics 2009 course / Group items tagged: Review


Weiye Loh

Hermits and Cranks: Lessons from Martin Gardner on Recognizing Pseudoscientists: Scient...

  • In 1950 Martin Gardner published an article in the Antioch Review entitled "The Hermit Scientist," about what we would today call pseudoscientists.
  • there has been some progress since Gardner offered his first criticisms of pseudoscience. Now largely antiquated are his chapters on believers in a flat Earth, a hollow Earth, Atlantis and Lemuria, Alfred William Lawson, Roger Babson, Trofim Lysenko, Wilhelm Reich and Alfred Korzybski. But disturbingly, a good two thirds of the book's contents are relevant today, including Gardner's discussions of homeopathy, naturopathy, osteopathy, iridiagnosis (reading the iris of the eye to determine bodily malfunctions), food faddists, cancer cures and other forms of medical quackery, Edgar Cayce, the Great Pyramid's alleged mystical powers, handwriting analysis, ESP and PK (psychokinesis), reincarnation, dowsing rods, eccentric sexual theories, and theories of group racial differences.
  • The "hermit scientist," a youthful Gardner wrote, works alone and is ignored by mainstream scientists. "Such neglect, of course, only strengthens the convictions of the self-declared genius."
  • Even then Gardner was bemoaning that some beliefs never seem to go out of vogue, as he recalled an H. L. Mencken quip from the 1920s: "Heave an egg out of a Pullman window, and you will hit a Fundamentalist almost anywhere in the U.S. today." Gardner cautions that when religious superstition should be on the wane, it is easy "to forget that thousands of high school teachers of biology, in many of our southern states, are still afraid to teach the theory of evolution for fear of losing their jobs." Today creationism has spread northward and mutated into the oxymoronic form of "creation science."
  • the differences between science and pseudoscience. On the one extreme we have ideas that are most certainly false, "such as the dianetic view that a one-day-old embryo can make sound recordings of its mother's conversation." In the borderlands between the two "are theories advanced as working hypotheses, but highly debatable because of the lack of sufficient data." Of these Gardner selects a most propitious example: "the theory that the universe is expanding." That theory would now fall at the other extreme end of the spectrum, where lie "theories almost certainly true, such as the belief that the Earth is round or that men and beasts are distant cousins."
  • How can we tell if someone is a scientific crank? Gardner offers this advice: (1) "First and most important of these traits is that cranks work in almost total isolation from their colleagues." Cranks typically do not understand how the scientific process operates—that they need to try out their ideas on colleagues, attend conferences and publish their hypotheses in peer-reviewed journals before announcing to the world their startling discovery. Of course, when you explain this to them they say that their ideas are too radical for the conservative scientific establishment to accept.
  • (2) "A second characteristic of the pseudo-scientist, which greatly strengthens his isolation, is a tendency toward paranoia," which manifests itself in several ways: (1) He considers himself a genius. (2) He regards his colleagues, without exception, as ignorant blockheads....(3) He believes himself unjustly persecuted and discriminated against. The recognized societies refuse to let him lecture. The journals reject his papers and either ignore his books or assign them to "enemies" for review. It is all part of a dastardly plot. It never occurs to the crank that this opposition may be due to error in his work....(4) He has strong compulsions to focus his attacks on the greatest scientists and the best-established theories. When Newton was the outstanding name in physics, eccentric works in that science were violently anti-Newton. Today, with Einstein the father-symbol of authority, a crank theory of physics is likely to attack Einstein....(5) He often has a tendency to write in a complex jargon, in many cases making use of terms and phrases he himself has coined.
  • "If the present trend continues," Gardner concludes, "we can expect a wide variety of these men, with theories yet unimaginable, to put in their appearance in the years immediately ahead. They will write impressive books, give inspiring lectures, organize exciting cults. They may achieve a following of one—or one million. In any case, it will be well for ourselves and for society if we are on our guard against them."
    May 23, 2010 | 31 comments. Hermits and Cranks: Lessons from Martin Gardner on Recognizing Pseudoscientists. Fifty years ago Gardner launched the modern skeptical movement; unfortunately, much of what he wrote about is still current today. By Michael Shermer
Weiye Loh

Skepticblog » ClimateGate Follow Up

  • Recently the third of three independent reviews of the Climatic Research Unit (CRU) e-mail scandal has been completed. All three reviews concluded that the CRU was not hiding, destroying, or manipulating data.
  • At the time there were those who believed the e-mails to be the innocent chatter of scientists and others who thought it was the smoking gun of scientific fraud. At the time I wrote: I don’t know what the lessons of climategate are yet – we need to see what actually happened first. But how people deal with climategate says a lot about their process. Those who are making bold claims based upon ambiguous, circumstantial, and out-of-context evidence, are not doing themselves or their side any favors.
  • after a thorough review there is no evidence of any actual scientific fraud, but the scientists were not adequately complying with FOI requests. It seems the climate scientists at the CRU had developed a bit of a bunker mentality and felt justified in frustrating what they felt were frivolous and harassing FOI requests.
  • This, in turn, seems to be a symptom of an obscure scientific discipline (climate science) being thrust in recent years into the middle of a raging world-wide political controversy. There was not a culture among these scientists of dealing with the politically controversial aspects of their science.
  • This episode reminds us that scientists are human, and therefore science itself is a human endeavor and subject to all the foibles that plague any human activity.
  • there were charges that the CRU did not have backups of data they relied upon for their conclusions. But the CRU was never the primary source of this data – they simply aggregated and analyzed it. The primary data has always been available from the sources. As the BBC reports: “We find that CRU was not in a position to withhold access to such data or tamper with it,” it says. “We demonstrated that any independent researcher can download station data directly from primary sources and undertake their own temperature trend analysis”.
    ClimateGate Follow Up, by Steven Novella, Jul 12 2010
Weiye Loh

Research integrity: Sabotage! : Nature News

  • University of Michigan in Ann Arbor
  • Vipul Bhrigu, a former postdoc at the university's Comprehensive Cancer Center, wears a dark-blue three-buttoned suit and a pinched expression as he cups his pregnant wife's hand in both of his. When Pollard Hines calls Bhrigu's case to order, she has stern words for him: "I was inclined to send you to jail when I came out here this morning."
  • Bhrigu, over the course of several months at Michigan, had meticulously and systematically sabotaged the work of Heather Ames, a graduate student in his lab, by tampering with her experiments and poisoning her cell-culture media. Captured on hidden camera, Bhrigu confessed to university police in April and pleaded guilty to malicious destruction of personal property, a misdemeanour that apparently usually involves cars: in the spaces for make and model on the police report, the arresting officer wrote "lab research" and "cells". Bhrigu has said on multiple occasions that he was compelled by "internal pressure" and had hoped to slow down Ames's work. Speaking earlier this month, he was contrite. "It was a complete lack of moral judgement on my part," he said.
  • Bhrigu's actions are surprising, but probably not unique. There are few firm numbers showing the prevalence of research sabotage, but conversations with graduate students, postdocs and research-misconduct experts suggest that such misdeeds occur elsewhere, and that most go unreported or unpoliced. In this case, the episode set back research, wasted potentially tens of thousands of dollars and terrorized a young student. More broadly, acts such as Bhrigu's — along with more subtle actions to hold back or derail colleagues' work — have a toxic effect on science and scientists. They are an affront to the implicit trust between scientists that is necessary for research endeavours to exist and thrive.
  • Despite all this, there is little to prevent perpetrators re-entering science.
  • federal bodies that provide research funding have limited ability and inclination to take action in sabotage cases because they aren't interpreted as fitting the federal definition of research misconduct, which is limited to plagiarism, fabrication and falsification of research data.
  • In Bhrigu's case, administrators at the University of Michigan worked with police to investigate, thanks in part to the persistence of Ames and her supervisor, Theo Ross. "The question is, how many universities have such procedures in place that scientists can go and get that kind of support?" says Christine Boesz, former inspector-general for the US National Science Foundation in Arlington, Virginia, and now a consultant on scientific accountability. "Most universities I was familiar with would not necessarily be so responsive."
  • Some labs are known to be hyper-competitive, with principal investigators pitting postdocs against each other. But Ross's lab is a small, collegial place. At the time that Ames was noticing problems, it housed just one other graduate student, a few undergraduates doing projects, and the lab manager, Katherine Oravecz-Wilson, a nine-year veteran of the lab whom Ross calls her "eyes and ears". And then there was Bhrigu, an amiable postdoc who had joined the lab in April 2009.
  • Some people whom Ross consulted with tried to convince her that Ames was hitting a rough patch in her work and looking for someone else to blame. But Ames was persistent, so Ross took the matter to the university's office of regulatory affairs, which advises on a wide variety of rules and regulations pertaining to research and clinical care. Ray Hutchinson, associate dean of the office, and Patricia Ward, its director, had never dealt with anything like it before. After several meetings and two more instances of alcohol in the media, Ward contacted the department of public safety — the university's police force — on 9 March. They immediately launched an investigation — into Ames herself. She endured two interrogations and a lie-detector test before investigators decided to look elsewhere.
  • At 4:00 a.m. on Sunday 18 April, officers installed two cameras in the lab: one in the cold room where Ames's blots had been contaminated, and one above the refrigerator where she stored her media. Ames came in that day and worked until 5:00 p.m. On Monday morning at around 10:15, she found that her medium had been spiked again. When Ross reviewed the tapes of the intervening hours with Richard Zavala, the officer assigned to the case, she says that her heart sank. Bhrigu entered the lab at 9:00 a.m. on Monday and pulled out the culture media that he would use for the day. He then returned to the fridge with a spray bottle of ethanol, usually used to sterilize lab benches. With his back to the camera, he rummaged through the fridge for 46 seconds. Ross couldn't be sure what he was doing, but it didn't look good. Zavala escorted Bhrigu to the campus police department for questioning. When he told Bhrigu about the cameras in the lab, the postdoc asked for a drink of water and then confessed. He said that he had been sabotaging Ames's work since February. (He denies involvement in the December and January incidents.)
  • Misbehaviour in science is nothing new — but its frequency is difficult to measure. Daniele Fanelli at the University of Edinburgh, UK, who studies research misconduct, says that overtly malicious offences such as Bhrigu's are probably infrequent, but other forms of indecency and sabotage are likely to be more common. "A lot more would be the kind of thing you couldn't capture on camera," he says. Vindictive peer review, dishonest reference letters and withholding key aspects of protocols from colleagues or competitors can do just as much to derail a career or a research project as vandalizing experiments. These are just a few of the questionable practices that seem quite widespread in science, but are not technically considered misconduct. In a meta-analysis of misconduct surveys, published last year (D. Fanelli PLoS ONE 4, e5738; 2009), Fanelli found that up to one-third of scientists admit to offences that fall into this grey area, and up to 70% say that they have observed them.
  • Some say that the structure of the scientific enterprise is to blame. The big rewards — tenured positions, grants, papers in stellar journals — are won through competition. To get ahead, researchers need only be better than those they are competing with. That ethos, says Brian Martinson, a sociologist at HealthPartners Research Foundation in Minneapolis, Minnesota, can lead to sabotage. He and others have suggested that universities and funders need to acknowledge the pressures in the research system and try to ease them by means of education and rehabilitation, rather than simply punishing perpetrators after the fact.
  • Bhrigu says that he felt pressure in moving from the small college at Toledo to the much bigger one in Michigan. He says that some criticisms he received from Ross about his incomplete training and his work habits frustrated him, but he doesn't blame his actions on that. "In any kind of workplace there is bound to be some pressure," he says. "I just got jealous of others moving ahead and I wanted to slow them down."
  • At Washtenaw County Courthouse in July, having reviewed the case files, Pollard Hines delivered Bhrigu's sentence. She ordered him to pay around US$8,800 for reagents and experimental materials, plus $600 in court fees and fines — and to serve six months' probation, perform 40 hours of community service and undergo a psychiatric evaluation.
  • But the threat of a worse sentence hung over Bhrigu's head. At the request of the prosecutor, Ross had prepared a more detailed list of damages, including Bhrigu's entire salary, half of Ames's, six months' salary for a technician to help Ames get back up to speed, and a quarter of the lab's reagents. The court arrived at a possible figure of $72,000, with the final amount to be decided upon at a restitution hearing in September.
  • Ross, though, is happy that the ordeal is largely over. For the month-and-a-half of the investigation, she became reluctant to take on new students or to hire personnel. She says she considered packing up her research programme. She even questioned her own sanity, worrying that she was the one sabotaging Ames's work via "an alternate personality". Ross now wonders if she was too trusting, and urges other lab heads to "realize that the whole spectrum of humanity is in your lab. So, when someone complains to you, take it seriously."
  • She also urges others to speak up when wrongdoing is discovered. After Bhrigu pleaded guilty in June, Ross called Trempe at the University of Toledo. He was shocked, of course, and for more than one reason. His department at Toledo had actually re-hired Bhrigu. Bhrigu says that he lied about the reason he left Michigan, blaming it on disagreements with Ross. Toledo let Bhrigu go in July, not long after Ross's call.
  • Now that Bhrigu is in India, there is little to prevent him from getting back into science. And even if he were in the United States, there wouldn't be much to stop him. The National Institutes of Health in Bethesda, Maryland, through its Office of Research Integrity, will sometimes bar an individual from receiving federal research funds for a time if they are found guilty of misconduct. But Bhrigu probably won't face that prospect because his actions don't fit the federal definition of misconduct, a situation Ross finds strange. "All scientists will tell you that it's scientific misconduct because it's tampering with data," she says.
  • Ames says that the experience shook her trust in her chosen profession. "I did have doubts about continuing with science. It hurt my idea of science as a community that works together, builds upon each other's work and collaborates."
    Research integrity: Sabotage! Postdoc Vipul Bhrigu destroyed the experiments of a colleague in order to get ahead.
Weiye Loh

Breakthroughs from the Second Tier - The Scientist - Magazine of the Life Sciences

  • a commonly voiced criticism of traditional peer review is that it discourages truly innovative ideas, rejecting field-changing papers while publishing ideas that fall into the status quo and the “hot” fields of the day
  • Another is that it is nearly impossible to immediately spot the importance of a paper—to truly evaluate a paper, one needs months, if not years, to see the impact it has on its field.
  • we present some papers that suggest these two criticisms are correct, at least in part. These studies were published in lower-profile journals (all with current impact factors of 6 or below), suggesting they should have had less of an impact. But these papers eventually accumulated at least 1,000 citations. Many were rejected from higher-tier journals. All changed their fields forever.
    Breakthroughs from the Second Tier. Peer review isn't perfect: meet 5 high-impact papers that should have ended up in bigger journals.
Weiye Loh

Roger Pielke Jr.'s Blog: Fabrications in Science

  • You don't expect to pick up Science magazine and read an article that is chock full of fabrications and errors.  Yet, that is exactly what you'll find in Kevin Trenberth's review of The Climate Fix, which appears in this week's issue.
  • that reviewer choice by Science goes with the territory. It says a lot about Science. Trenberth's rambling and unhinged review is also not unexpected. What is absolutely unacceptable is that Trenberth makes a large number of factual mistakes in the piece, misrepresenting the book.
Weiye Loh

Referees' quotes - 2010 - Environmental Microbiology - Wiley Online Library

  • This paper is desperate. Please reject it completely and then block the author's email ID so they can't use the online system in future.
  • The type of lava vs. diversity has no meaning if only one of each sample is analyzed; multiple samples are required for generality. This controls provenance (e.g. maybe some beetle took a pee on one or the other of the samples, seriously skewing relevance to lava composition).
  • Merry X-mas! First, my recommendation was reject with new submission, because it is necessary to investigate further, but reading a well written manuscript before X-mas makes me feel like Santa Claus.
  • Season's Greetings! I apologise for my slow response but a roast goose prevented me from answering emails for a few days.
  • I started to review this but could not get much past the abstract.
  • Stating that the study is confirmative is not a good start for the Discussion. Rephrasing the first sentence of the Discussion would seem to be a good idea.
  • Reject – More holes than my grandad's string vest!
  • The writing and data presentation are so bad that I had to leave work and go home early and then spend time to wonder what life is about.
  • Sorry for the overdue, it seems to me that ‘overdue’ is my constant, persistent and chronic EMI status. Good that the reviewers are not getting red cards! The editors could create, in addition to the referees quotes, a ranking for ‘on-time’ referees. I would get the bottom place. But fast is not equal to good (I am consoling myself!)
  • It hurts me a little to have so little criticism of a manuscript.
  • Based on titles seen in journals, many authors seem to be more fascinated these days by their methods than by their science. The authors should be encouraged to abstract the main scientific (i.e., novel) finding into the title.
Weiye Loh

Rationally Speaking: The problem of replicability in science

  • The problem of replicability in science, by Massimo Pigliucci (illustration from xkcd)
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant time of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet by my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which was meant to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well known statistical phenomenon of the regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of the regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment to repeat itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated. (A short simulation sketch of this point appears at the end of this annotation list.)
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, “sexy” results. Which creates a powerful filter against negative, or marginally significant results. What you see in science journals, in other words, isn’t a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty handed.
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPU’s — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, it turns out that about ⅓ of published studies is never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh
       
      Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change)
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh
       
      It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
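A minimal simulation makes the regression-to-the-mean and publication-bias points from the annotations above concrete. The sketch below is an illustration added here, not taken from Pigliucci's post; the true effect size, noise level, and publication threshold are arbitrary assumptions. It keeps only the "studies" whose first noisy measurement cleared an impressive-looking threshold, then re-runs them:

    import random

    random.seed(1)

    TRUE_EFFECT = 0.3   # assumed underlying effect, identical for every study
    NOISE = 1.0         # assumed measurement noise (standard deviation)
    THRESHOLD = 1.5     # only "impressive" first results get published

    def measure():
        # one noisy estimate of the true effect
        return random.gauss(TRUE_EFFECT, NOISE)

    first_results = [measure() for _ in range(100_000)]
    published = [x for x in first_results if x > THRESHOLD]   # publication filter
    replications = [measure() for _ in published]             # same studies, run again

    print(f"mean published first result: {sum(published) / len(published):.2f}")
    print(f"mean replication:            {sum(replications) / len(replications):.2f}")
    print(f"true effect:                 {TRUE_EFFECT:.2f}")

The replications regress toward the true effect even though nothing about the phenomenon changed; only the selection of which first measurements got reported differs, which is one mundane route to the "decline effect" discussed above.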
Weiye Loh

FT.com / FT Magazine - A disastrous truth

  • Every time a disaster strikes, some environmentalists blame it on climate change. “It’s been such a part of the narrative of the public and political debate, particularly after Hurricane Katrina,” Roger Pielke Jr, an expert on the politics of climate change at the University of Colorado, told me.
  • But nothing in the scientific literature indicates that this is true. A host of recent peer-reviewed studies agree: there’s no evidence that climate change has increased the damage from natural disasters. Most likely, climate change will make disasters worse some day, but not yet.
  • Laurens Bouwer, of Amsterdam’s Vrije Universiteit, has recently reviewed 22 “disaster loss studies” and concludes: “Anthropogenic climate change so far has not had a significant impact on losses from natural disasters.”
  • Eric Neumayer and Fabian Barthel of the London School of Economics found likewise in their recent “global analysis” of natural disasters.
  • in his book The Climate Fix: What Scientists and Politicians Won’t Tell You About Global Warming, Pielke writes that there’s no upward trend in the landfalls of tropical cyclones. Even floods in Brisbane aren’t getting worse – just check out the city’s 19th-century floods. Pielke says the consensus of peer-reviewed research on this point – that climate change is not yet worsening disasters – is as strong as any consensus in climate science.
  • It’s true that floods and hurricanes do more damage every decade. However, that’s because ever more people, owning ever more “stuff”, live in vulnerable spots.
  • When it comes to preventing today’s disasters, the squabble about climate change is just a distraction. The media usually has room for only one environmental argument: is climate change happening? This pits virtually all climate scientists against a band of self-taught freelance sceptics, many of whom think the “global warming hoax” is a ruse got up by 1960s radicals as a trick to bring in socialism. (I know, I get the sceptics’ e-mails.) Sometimes in this squabble, climate scientists are tempted to overstate their case, and to say that the latest disaster proves that the climate is changing. This is bad science. It also gives the sceptics something dubious to attack. Better to ignore the sceptics, and have more useful debates about disasters and climate change – which, for now, are two separate problems.
Weiye Loh

Book Review: Future Babble by Dan Gardner « Critical Thinking « Skeptic North

  • I predict that you will find this review informative. If you do, you will congratulate my foresight. If you don’t, you’ll forget I was wrong.
  • My playful intro summarizes the main thesis of Gardner’s excellent book, Future Babble: Why Expert Predictions Fail – and Why We Believe Them Anyway.
  • In Future Babble, the research area explored is the validity of expert predictions, and the primary researcher examined is Philip Tetlock. In the early 1980s, Tetlock set out to better understand the accuracy of predictions made by experts by conducting a methodologically sound large-scale experiment.
  • Gardner presents Tetlock’s experimental design in an excellent way, making it accessible to the lay person. Concisely, Tetlock examined 27,450 judgments in which 284 experts were presented with clear questions whose answers could later be shown to be true or false (e.g., “Will the official unemployment rate be higher, lower or the same a year from now?”). For each prediction, the expert must answer clearly and express their degree of certainty as a percentage (e.g., dead certain = 100%). The usage of precise numbers adds increased statistical options and removes the complications of vague or ambiguous language. (An illustrative scoring sketch appears at the end of this annotation list.)
  • Tetlock found the surprising and disturbing truth “that experts’ predictions were no more accurate than random guesses.” (p. 26) An important caveat is that there was a wide range of capability, with some experts being completely out of touch, and others able to make successful predictions.
  • “What distinguishes the impressive few from the borderline delusional is not whether they’re liberal or conservative. Tetlock’s data showed political beliefs made no difference to an expert’s accuracy. The same is true of optimists and pessimists. It also made no difference if experts had a doctorate, extensive experience, or access to classified information. Nor did it make a difference if experts were political scientists, historians, journalists, or economists.” (p. 26)
  • The experts who did poorly were not comfortable with complexity and uncertainty, and tended to reduce most problems to some core theoretical theme. It was as if they saw the world through one lens or had one big idea that everything else had to fit into. Alternatively, the experts who did decently were self-critical, used multiple sources of information and were more comfortable with uncertainty and correcting their errors. Their thinking style almost results in a paradox: “The experts who were more accurate than others tended to be less confident they were right.” (p.27)
  • Gardner then introduces the terms ‘Hedgehog’ and ‘Fox’ to refer to bad and good predictors respectively. Hedgehogs are the ones you see pushing the same idea, while Foxes are likely in the background questioning the ability of prediction itself while making cautious proposals. Foxes are more likely to be correct. Unfortunately, it is Hedgehogs that we see on the news.
  • one of Tetlock’s findings was that “the bigger the media profile of an expert, the less accurate his predictions.” (p.28)
  • Chapter 2 – The Unpredictable World: an exploration into how many events in the world are simply unpredictable. Gardner discusses chaos theory and necessary and sufficient conditions for events to occur. He supports the idea of actually saying “I don’t know,” which many experts are reluctant to do.
  • Chapter 3 – In the Minds of Experts: a more detailed examination of Hedgehogs and Foxes. Gardner discusses randomness and the illusion of control while using narratives to illustrate his points à la Gladwell. This chapter provides a lot of context and background information that should be very useful to those less initiated.
  • Chapter 6 – Everyone Loves a Hedgehog: more about predictions and how the media picks up hedgehog stories and talking points without much investigation into their underlying source or concern for accuracy. It is a good demolition of the absurdity of so many news “discussion shows.” Gardner demonstrates how the media prefer a show where Hedgehogs square off against each other, and it is important that these commentators not be challenged lest they become exposed and, by association, implicate the flawed structure of the program/network. Gardner really singles out certain people, like Paul Ehrlich, and shows how they have been wrong many times and yet can still get an audience.
  • “An assertion that cannot be falsified by any conceivable evidence is nothing more than dogma. It can’t be debated. It can’t be proven or disproven. It’s just something people choose to believe or not for reasons that have nothing to do with fact and logic. And dogma is what predictions become when experts and their followers go to ridiculous lengths to dismiss clear evidence that they failed.”
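Because Tetlock's experts had to attach explicit probabilities to clear-cut questions, their accuracy can be scored numerically rather than argued about. The review above does not spell out the scoring rule, so the snippet below is only an illustrative sketch of one standard rule for probabilistic forecasts, the Brier score (lower is better), applied to invented forecasts for a confident "Hedgehog" and a hedging "Fox":

    def brier_score(forecasts, outcomes):
        # mean squared gap between the stated probability and what happened (1 or 0)
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    outcomes = [1, 0, 0, 1, 0]            # what actually happened (hypothetical)
    hedgehog = [1.0, 1.0, 0.9, 1.0, 0.8]  # "dead certain" on every question
    fox      = [0.7, 0.3, 0.4, 0.6, 0.2]  # cautious, self-critical probabilities

    print(f"Hedgehog Brier score: {brier_score(hedgehog, outcomes):.3f}")
    print(f"Fox Brier score:      {brier_score(fox, outcomes):.3f}")

With these made-up numbers the cautious forecaster scores far better, which mirrors the paradox noted above: the experts who were more accurate tended to be less confident that they were right.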
Weiye Loh

Rationally Speaking: Response to Jonathan Haidt's response, on the academy's liberal bias

  • Dear Prof. Haidt, You understandably got upset by my harsh criticism of your recent claims about the mechanisms behind the alleged anti-conservative bias that apparently so permeates the modern academy. I find it amusing that you simply assumed I had not looked at your talk and was therefore speaking without reason. Yet, I have indeed looked at it (it is currently published at Edge, a non-peer reviewed webzine), and found that it simply doesn’t add much to the substance (such as it is) of Tierney’s summary.
  • Yes, you do acknowledge that there may be multiple reasons for the imbalance between the number of conservative and liberal leaning academics, but then you go on to characterize the academy, at least in your field, as a tribe having a serious identity issue, with no data whatsoever to back up your preferred subset of causal explanations for the purported problem.
  • your talk is simply an extended op-ed piece, which starts out with a summary of your findings about the different moral outlooks of conservatives and liberals (which I have criticized elsewhere on this blog), and then proceeds to build a flimsy case based on a couple of anecdotes and some badly flawed data.
  • For instance, slide 23 shows a Google search for “liberal social psychologist,” highlighting the fact that one gets a whopping 2,740 results (which, actually, by Google standards is puny; a search under my own name yields 145,000, and I ain’t no Lady Gaga). You then compared this search to one for “conservative social psychologist” and get only three entries.
  • First of all, if Google searches are the main tool of social psychology these days, I fear for the entire field. Second, I actually re-did your searches — at the prompting of one of my readers — and came up with quite different results. As the photo here shows, if you actually bother to scroll through the initial Google search for “liberal social psychologist” you will find that there are in fact only 24 results, to be compared to 10 (not 3) if you search for “conservative social psychologist.” Oops. From this scant data I would simply conclude that political orientation isn’t a big deal in social psychology.
  • Your talk continues with some pretty vigorous hand-waving: “We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.” Right, except that I would like to see a systematic survey of exactly how the lack of conservative peer review has affected the quality of academic publications. Oh, wait, it hasn’t, at least according to what you yourself say in the next sentence: “The great majority of work in social psychology is excellent, and is unaffected by these problems.” I wonder how you know this, and why — if true — you then think that there is a problem. Philosophers call this an inherent contradiction, it’s a common example of bad argument.
  • Finally, let me get to your outrage at the fact that I have allegedly accused you of academic misconduct and lying. I have done no such thing, and you really ought (in the ethical sense) to be careful when throwing those words around. I have simply raised the logical possibility that you (and Tierney) have an agenda, a possibility based on reading several of the things both you and Tierney have written of late. As a psychologist, I’m sure you are aware that biases can be unconscious, and therefore need not imply that the person in question is lying or engaging in any form of purposeful misconduct. Or were you implying in your own talk that your colleagues’ bias was conscious? Because if so, you have just accused an entire profession of misconduct.
Weiye Loh

Roger Pielke Jr.'s Blog: Breakthrough Report on Rebound

  • Whatever one thinks about the so-called "rebound effect" or the role of efficiency in contributing to emissions reductions goals, the Breakthrough Institute (where I am a Senior Fellow) has done a great service to the discussion by publishing a new literature review on the subject.  You can find the review here in PDF and a PowerPoint overview here in PPT.  They discuss the new report on their blog here. This massive effort represents think tanks at their very best, and is likely to be the definitive literature review for years to come.  Whatever your views or level of expertise, if you want to dive into the subject, I can think of no better place to start.
Weiye Loh

School children publish science project in peer reviewed academic journal « E...

  • A group of school children aged between 8 and 10 years old have had their school science project accepted for publication in an internationally recognised peer-reviewed journal. The paper, which reports novel findings in how bumblebees perceive colour, is published in the Royal Society journal Biology Letters.
Weiye Loh

Why do we care where we publish?

  • being both a working scientist and a science writer gives me a unique perspective on science, scientific publications, and the significance of scientific work. The final disclosure should be that I have never published in any of the top rank physics journals or in Science, Nature, or PNAS. I don't believe I have an axe to grind about that, but I am also sure that you can ascribe some of my opinions to PNAS envy.
  • If you asked most scientists what their goals were, the answer would boil down to the generation of new knowledge. But, at some point, science and scientists have to interact with money and administrators, which has significant consequences for science. For instance, when trying to employ someone to do a job, you try to objectively decide if the skills set of the prospective employee matches that required to do the job. In science, the same question has to be asked—instead of being asked once per job interview, however, this question gets asked all the time.
  • Because science requires funding, and no one gets a lifetime dollop-o-cash to explore their favorite corner of the universe. So, the question gets broken down to "how competent is the scientist?" "Is the question they want to answer interesting?" "Do they have the resources to do what they say they will?" We will ignore the last question and focus on the first two.
  • How can we assess the competence of a scientist? Past performance is, realistically, the only way to judge future performance. Past performance can only be assessed by looking at their publications. Were they in a similar area? Are they considered significant? Are they numerous? Curiously, though, the second question is also answered by looking at publications—if a topic is considered significant, then there will be lots of publications in that area, and those publications will be of more general interest, and so end up in higher ranking journals.
  • So we end up in the situation that the editors of major journals are in the position to influence the direction of scientific funding, meaning that there is a huge incentive for everyone to make damn sure that their work ends up in Science or Nature. But why are Science, Nature, and PNAS considered the place to put significant work? Why isn't a new optical phenomenon, published in Optics Express, as important as a new optical phenomenon published in Science?
  • The big three try to be general; they will, in principle, publish reports from any discipline, and they anticipate readership from a range of disciplines. This explicit generality means that the scientific results must not only be of general interest, but also highly significant. The remaining journals become more specialized, covering perhaps only physics, or optics, or even just optical networking. However, they all claim to only publish work that is highly original in nature.
  • Are standards really so different? Naturally, the more specialized a journal is, the fewer people it appeals to. However, the major difference in determining originality is one of degree and referee. A more specialized journal has more detailed articles, so the differences between experiments stand out more obviously, while appealing to general interest changes the emphasis of the article away from details toward broad conclusions.
  • as the audience becomes broader, more technical details get left by the wayside. Note that none of the gene sequences published in Science have the actual experimental and analysis details. What ends up published is really a broad-brush description of the work, with the important details either languishing as supplemental information, or even published elsewhere, in a more suitable journal. Yet, the high profile paper will get all the citations, while the more detailed—the unkind would say accurate—description of the work gets no attention.
  • And that is how journals are ranked. Count the number of citations for each journal per volume, run it through a magic number generator, and the impact factor jumps out (make your checks out to ISI Thomson please). That leaves us with the following formula: grants require high impact publications, high impact publications need citations, and that means putting research in a journal that gets lots of citations. Grants follow the concepts that appear to be currently significant, and that's decided by work that is published in high impact journals.
  • This system would be fine if it did not ignore the fact that performing science and reporting scientific results are two very different skills, and not everyone has both in equal quantity. The difference between a Nature-worthy finding and a not-Nature-worthy finding is often in the quality of the writing. How skillfully can I relate this bit of research back to general or topical interests? It really is this simple. Over the years, I have seen quite a few physics papers with exaggerated claims of significance (or even results) make it into top flight journals, and the only differences I can see between those works and similar works published elsewhere is that the presentation and level of detail are different.
  • articles from the big three are much easier to cover on Nobel Intent than articles from, say Physical Review D. Nevertheless, when we do cover them, sometimes the researchers suddenly realize that they could have gotten a lot more mileage out of their work. It changes their approach to reporting their results, which I see as evidence that writing skill counts for as much as scientific quality.
  • If that observation is generally true, then it raises questions about the whole process of evaluating a researcher's competence and a field's significance, because good writers corrupt the process by publishing less significant work in journals that only publish significant findings. In fact, I think it goes further than that, because Science, Nature, and PNAS actively promote themselves as scientific compasses. Want to find the most interesting and significant research? Read PNAS.
  • The publishers do this by extensively publicizing science that appears in their own journals. Their news sections primarily summarize work published in the same issue of the same magazine. This lets them create a double-whammy of scientific significance—not only was the work published in Nature, they also summarized it in their News and Views section.
  • Furthermore, the top three work very hard at getting other journalists to cover their articles. This is easy to see by simply looking at Nobel Intent's coverage. Most of the work we discuss comes from Science and Nature. Is this because we only read those two publications? No, but they tell us ahead of time what is interesting in their upcoming issue. They even provide short summaries of many papers that practically guide people through writing the story, meaning reporter Jim at the local daily doesn't need a science degree to cover the science beat.
  • Very few of the other journals do this. I don't get early access to the Physical Review series, even though I love reporting from them. In fact, until this year, they didn't even highlight interesting papers for their own readers. This makes it incredibly hard for a science reporter to cover science outside of the major journals. The knock-on effect is that Applied Physics Letters never appears in the news, which means you can't evaluate recent news coverage to figure out what's of general interest, leaving you with... well, the big three journals again, which mostly report on themselves. On the other hand, if a particular scientific topic does start to receive some press attention, it is much more likely that similar work will suddenly be acceptable in the big three journals.
  • That said, I should point out that judging the significance of scientific work is a process fraught with difficulty. Why do you think it takes around 10 years from the publication of first results through to obtaining a Nobel Prize? Because it can take that long for the implications of the results to sink in—or, more commonly, sink without trace.
  • I don't think that we can reasonably expect journal editors and peer reviewers to accurately assess the significance (general or otherwise) of a new piece of research. There are, of course, exceptions: the first genome sequences, the first observation that the rate of the expansion of the universe is changing. But the point is that these are exceptions, and most work's significance is far more ambiguous, and even goes unrecognized (or over-celebrated) by scientists in the field.
  • The conclusion is that the top three journals are significantly gamed by scientists who are trying to get ahead in their careers—citations always lag a few years behind, so a PNAS paper with less than ten citations can look good for quite a few years, even compared to an Optics Letters with 50 citations. The top three journals overtly encourage this, because it is to their advantage if everyone agrees that they are the source of the most interesting science. Consequently, scientists who are more honest in self-assessing their work, or who simply aren't word-smiths, end up losing out.
  • scientific competence should not be judged by how many citations the author's work has received or where it was published. Instead, we should consider using a mathematical graph analysis to look at the networks of publications and citations, which should help us judge how central to a field a particular researcher is. This would have the positive influence of a publication mattering less than who thought it was important. (A toy sketch of such a citation-network analysis appears at the end of this annotation list.)
  • Science and Nature should either eliminate their News and Views section, or implement a policy of not reporting on their own articles. This would open up one of the major sources of "science news for scientists" to stories originating in other journals.
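The suggestion above, to judge researchers by the structure of the citation network rather than by raw counts or venue, can be made concrete with a toy example. The snippet below is an added illustration, not from the article: it assumes the networkx package is available and uses an invented six-paper citation graph, with PageRank standing in for "how central" a paper is:

    import networkx as nx

    # Invented citation graph: an edge A -> B means "A cites B".
    G = nx.DiGraph()
    G.add_edges_from([
        ("paper_A", "paper_C"), ("paper_B", "paper_C"), ("paper_D", "paper_C"),
        ("paper_C", "paper_E"), ("paper_D", "paper_E"), ("paper_F", "paper_A"),
    ])

    raw_counts = dict(G.in_degree())          # citations as a simple tally
    centrality = nx.pagerank(G, alpha=0.85)   # a citation from a central paper counts for more

    for paper in sorted(centrality, key=centrality.get, reverse=True):
        print(f"{paper}: {raw_counts[paper]} citations, centrality {centrality[paper]:.3f}")

In this made-up graph, paper_E comes out more central than paper_C despite having fewer raw citations, because its citations come from papers that are themselves central; that is the kind of distinction a network measure captures and a bare citation count (or the impact factor of the venue) does not.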
Weiye Loh

Roger Pielke Jr.'s Blog: It Is Always the Media's Fault

  • Last summer NCAR issued a dramatic press release announcing that oil from the Gulf spill would soon be appearing on the beaches of the Atlantic ocean.  I discussed it here. Here are the first four paragraphs of that press release: BOULDER—A detailed computer modeling study released today indicates that oil from the massive spill in the Gulf of Mexico might soon extend along thousands of miles of the Atlantic coast and open ocean as early as this summer. The modeling results are captured in a series of dramatic animations produced by the National Center for Atmospheric Research (NCAR) and collaborators. The research was supported in part by the National Science Foundation, NCAR’s sponsor. The results were reviewed by scientists at NCAR and elsewhere, although not yet submitted for peer-review publication. “I’ve had a lot of people ask me, ‘Will the oil reach Florida?’” says NCAR scientist Synte Peacock, who worked on the study. “Actually, our best knowledge says the scope of this environmental disaster is likely to reach far beyond Florida, with impacts that have yet to be understood.” The computer simulations indicate that, once the oil in the uppermost ocean has become entrained in the Gulf of Mexico’s fast-moving Loop Current, it is likely to reach Florida's Atlantic coast within weeks. It can then move north as far as about Cape Hatteras, North Carolina, with the Gulf Stream, before turning east. Whether the oil will be a thin film on the surface or mostly subsurface due to mixing in the uppermost region of the ocean is not known.
  • A few weeks ago NCAR's David Hosansky, who presumably wrote that press release, asks whether NCAR got it wrong.  His answer?  No, not really: During last year’s crisis involving the massive release of oil into the Gulf of Mexico, NCAR issued a much-watched animation projecting that the oil could reach the Atlantic Ocean. But detectable amounts of oil never made it to the Atlantic, at least not in an easily visible form on the ocean surface. Not surprisingly, we’ve heard from a few people asking whether NCAR got it wrong. These events serve as a healthy reminder of a couple of things: the difference between a projection and an actual forecast, and the challenges of making short-term projections of natural processes that can act chaotically, such as ocean currents.
  • What then went wrong? First, the projection. Scientists from NCAR, the Department of Energy’s Los Alamos National Laboratory, and IFM-GEOMAR in Germany did not make a forecast of where the oil would go. Instead, they issued a projection. While there’s not always a clear distinction between the two, forecasts generally look only days or hours into the future and are built mostly on known elements (such as the current amount of humidity in the atmosphere). Projections tend to look further into the future and deal with a higher number of uncertainties (such as the rate at which oil degrades in open waters and the often chaotic movements of ocean currents). Aware of the uncertainties, the scientific team projected the likely path of the spill with a computer model of a liquid dye. They used dye rather than actual oil, which undergoes bacterial breakdown, because a reliable method to simulate that breakdown was not available. As it turned out, the oil in the Gulf broke down quickly due to exceptionally strong bacterial action and, to some extent, the use of chemical dispersants.
  • ...3 more annotations...
  • Second, the challenges of short-term behavior. The Gulf's Loop Current acts as a conveyor belt, moving from the Yucatan through the Florida Straits into the Atlantic. Usually, the current curves northward near the Louisiana and Mississippi coasts—a configuration that would have put it on track to pick up the oil and transport it into open ocean. However, the current’s short-term movements over a few weeks or even months are chaotic and impossible to predict. Sometimes small eddies, or mini-currents, peel off, shifting the position and strength of the main current. To determine the threat to the Atlantic, the research team studied averages of the Loop Current’s past behavior in order to simulate its likely course after the spill and ran several dozen computer simulations under various scenarios. Fortunately for the East Coast, the Loop Current did not behave in its usual fashion but instead remained farther south than usual, which kept it far from the Louisiana and Mississippi coast during the crucial few months before the oil degraded and/or was dispersed with chemical treatments.
  • The Loop Current typically goes into a southern configuration about every 6 to 19 months, although it rarely remains there for very long. NCAR scientist Synte Peacock, who worked on the projection, explains that part of the reason the current is unpredictable is “no two cycles of the Loop Current are ever exactly the same." She adds that the cycles are influenced by such variables as how large the eddy is, where the current detaches and moves south, and how long it takes for the current to reform. Computer models can simulate the currents realistically, she adds. But they cannot predict when the currents will change over to a new cycle. The scientists were careful to explain that their simulations were a suite of possible trajectories demonstrating what was likely to happen, but not a definitive forecast of what would happen. They reiterated that point in a peer-reviewed study on the simulations that appeared last August in Environmental Research Letters. 
  • So who was at fault?  According to Hosansky, it was those dummies in the media: "These caveats, however, got lost in much of the resulting media coverage." Another perspective is that having some of these caveats in the press release might have been a good idea.
Weiye Loh

Major reform for climate body : Nature News - 0 views

  • The first major test of these changes will be towards the end of this year, with the release of a report assessing whether climate change is increasing the likelihood of extreme weather events. Despite much speculation, there is scant scientific evidence for such a link — particularly between climate warming, storm frequency and economic losses — and the report is expected to spark renewed controversy. "It'll be interesting to see how the IPCC will handle this hot potato where stakes are high but solid peer-reviewed results are few," says Silke Beck, a policy expert at the Helmholtz Centre for Environmental Research in Leipzig, Germany.
  • A new conflict-of-interest policy will require all IPCC officials and authors to disclose financial and other interests relevant to their work (Pachauri had been harshly criticized in 2009 for alleged conflicts of interest). The meeting also adopted a detailed protocol for addressing errors in existing and future IPCC reports, along with guidelines to ensure that descriptions of scientific uncertainties remain consistent across reports. "This is a heartening and encouraging outcome of the review we started one year ago," Pachauri told Nature. "It will strengthen the IPCC and help restore public trust in the climate sciences."
Weiye Loh

Roger Pielke Jr.'s Blog: Richard Muller on NPR: Don't Play With the Peer Review System - 0 views

  • CONAN: Do you find that, though, there is a lot of ideology in this business? Prof. MULLER: Well, I think what's happened is that many scientists have gotten so concerned about global warming, correctly concerned I mean they look at it and they draw a conclusion, and then they're worried that the public has not been concerned, and so they become advocates. And at that point, it's unfortunate, I feel that they're not trusting the public. They're not presenting the science to the public. They're presenting only that aspect to the science that will convince the public. That's not the way science works. And because they don't trust the public, in the end the public doesn't trust them. And the saddest thing from this, I think, is a loss of credibility of scientists because so many of them have become advocates.
  • CONAN: And that's, you would say, would be at the heart of the so-called Climategate story, where emails from some scientists seemed to be working to prevent the work of other scientists from appearing in peer-reviewed journals. Prof. MULLER: That really shook me up when I learned about that. I think that Climategate is a very unfortunate thing that happened, that the scientists who were involved in that, from what I've read, didn't trust the public, didn't even trust the scientific public. They were not showing the discordant data. That's something that - as a scientist I was trained you always have to show the negative data, the data that disagrees with you, and then make the case that your case is stronger. And they were hiding the data, and a whole discussion of suppressing publications, I thought, was really unfortunate. It was not at a high point for science.
  • And I really get even more upset when some other people say, oh, science is just a human activity. This is the way it happens. You have to recognize, these are people. No, no, no, no. These are not scientific standards. You don't hide the data. You don't play with the peer review system.
Weiye Loh

BBC News - Facebook v academia: The gloves are off - 0 views

  • "But this latest story once again sparked headlines around the world, even if articles often made the point that the research was not peer-reviewed. What was different, however, was Facebook's reaction. Previously, its PR team has gone into overdrive behind the scenes to rubbish this kind of research but said nothing in public. This time they used a new tactic, humour, to undermine the story. Mike Develin, a data scientist for the social network, published a note on Facebook mocking the Princeton team's "innovative use of Google search trends". He went on to use the same techniques to analyse the university's own prospects, concluding that a decline in searches over recent years "suggests that Princeton will have only half its current enrollment by 2018, and by 2021 it will have no students at all". Now, who knows, Facebook may well face an uncertain future. But academics looking to predict its demise have been put on notice - the company employs some pretty smart scientists who may take your research apart and fire back. The gloves are off."
Weiye Loh

The Real Hoax Was Climategate | Media Matters Action Network - 0 views

  • Sen. Jim Inhofe's (R-OK) biggest claim to fame has been his oft-repeated line that global warming is "the greatest hoax ever perpetrated on the American people."
  • In 2003, he conceded that the earth was warming, but denied it was caused by human activity and suggested that "increases in global temperatures may have a beneficial effect on how we live our lives."
  • In 2009, however, he appeared on Fox News to declare that the earth was actually cooling, claiming "everyone understands that's the case" (they don't, because it isn't).
  • ...7 more annotations...
  • Inhofe's battle against climate science kicked into overdrive when a series of illegally obtained emails surfaced from the Climatic Research Unit at the University of East Anglia.
  • When the dubious reports surfaced about flawed science, manipulated data, and unsubstantiated studies, Inhofe was ecstatic.  In March, he viciously attacked former Vice President Al Gore for defending the science behind climate change.
  • Unfortunately for Senator Inhofe, none of those things are true.  One by one, the pillars of evidence supporting the alleged "scandals" have shattered, causing the entire "Climategate" storyline to come crashing down. 
  • a panel established by the University of East Anglia to investigate the integrity of the research of the Climatic Research Unit wrote: "We saw no evidence of any deliberate scientific malpractice in any of the work of the Climatic Research Unit and had it been there we believe that it is likely that we would have detected it."
  • Responding to allegations that Dr. Michael Mann tampered with scientific evidence, Pennsylvania State University conducted a thorough investigation. It concluded: "The Investigatory Committee, after careful review of all available evidence, determined that there is no substance to the allegation against Dr. Michael E. Mann, Professor, Department of Meteorology, The Pennsylvania State University.  More specifically, the Investigatory Committee determined that Dr. Michael E. Mann did not engage in, nor did he participate in, directly or indirectly, any actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research, or other scholarly activities."
  • London's Sunday Times retracted its story, echoed by dozens of outlets, that the IPCC issued an unsubstantiated report claiming 40% of the Amazon rainforest was endangered due to changing rainfall patterns.  The Times wrote: "In fact, the IPCC's Amazon statement is supported by peer-reviewed scientific evidence. In the case of the WWF report, the figure had, in error, not been referenced, but was based on research by the respected Amazon Environmental Research Institute (IPAM) which did relate to the impact of climate change."
  • The Times also admitted it misrepresented the views of Dr. Simon Lewis, a Royal Society research fellow at the University of Leeds, implying he agreed with the article's false premise and believed the IPCC should not utilize reports issued by outside organizations.  In its retraction, the Times was forced to admit: "Dr Lewis does not dispute the scientific basis for both the IPCC and the WWF reports," and, "We accept that Dr Lewis holds no such view... A version of our article that had been checked with Dr Lewis underwent significant late editing and so did not give a fair or accurate account of his views on these points. We apologise for this."
  • The Real Hoax Was Climategate, by Chris Harris, July 02, 2010 1:44 pm ET
Weiye Loh

Essay - The End of Tenure? - NYTimes.com - 0 views

  • The cost of a college education has risen, in real dollars, by 250 to 300 percent over the past three decades, far above the rate of inflation. Elite private colleges can cost more than $200,000 over four years. Total student-loan debt, at nearly $830 billion, recently surpassed total national credit card debt. Meanwhile, university presidents, who can make upward of $1 million annually, gravely intone that the $50,000 price tag doesn’t even cover the full cost of a year’s education.
  • Then your daughter reports that her history prof is a part-time adjunct, who might be making $1,500 for a semester’s work. There’s something wrong with this picture.
  • The higher-ed jeremiads of the last generation came mainly from the right. But this time, it’s the tenured radicals — or at least the tenured liberals — who are leading the charge. Hacker is a longtime contributor to The New York Review of Books and the author of the acclaimed study “Two Nations: Black and White, Separate, Hostile, Unequal,”
  • ...6 more annotations...
  • And these two books arrive at a time, unlike the early 1990s, when universities are, like many students, backed into a fiscal corner. Taylor writes of walking into a meeting one day and learning that Columbia’s endowment had dropped by “at least” 30 percent. Simply brushing off calls for reform, however strident and scattershot, may no longer be an option.
  • The labor system, for one thing, is clearly unjust. Tenured and tenure-track professors earn most of the money and benefits, but they’re a minority at the top of a pyramid. Nearly two-thirds of all college teachers are non-tenure-track adjuncts like Matt Williams, who told Hacker and Dreifus he had taught a dozen courses at two colleges in the Akron area the previous year, earning the equivalent of about $8.50 an hour by his reckoning. It is foolish that graduate programs are pumping new Ph.D.’s into a world without decent jobs for them. If some programs were phased out, teaching loads might be raised for some on the tenure track, to the benefit of undergraduate education.
  • it might well be time to think about vetoing Olympic-quality athletic ­facilities and trimming the ranks of administrators. At Williams, a small liberal arts college renowned for teaching, 70 percent of employees do something other than teach.
  • But Hacker and Dreifus go much further, all but calling for an end to the role of universities in the production of knowledge. Spin off the med schools and research institutes, they say. University presidents “should be musing about education, not angling for another center on antiterrorist technologies.” As for the humanities, let professors do research after-hours, on top of much heavier teaching schedules. “In other occupations, when people feel there is something they want to write, they do it on their own time and at their own expense,” the authors declare. But it seems doubtful that, say, “Battle Cry of Freedom,” the acclaimed Civil War history by Princeton’s James McPherson, could have been written on the weekends, or without the advance spadework of countless obscure monographs. If it is false that research invariably leads to better teaching, it is equally false to say that it never does.
  • Hacker’s home institution, the public Queens College, which has a spartan budget, commuter students and a three-or-four-course teaching load per semester. Taylor, by contrast, has spent his career on the elite end of higher education, but he is no less disillusioned. He shares Hacker and Dreifus’s concerns about overspecialized research and the unintended effects of tenure, which he believes blocks the way to fresh ideas. Taylor has backed away from some of the most incendiary proposals he made last year in a New York Times Op-Ed article, cheekily headlined “End the University as We Know It” — an article, he reports, that drew near-universal condemnation from academics and near-universal praise from everyone else. Back then, he called for the flat-out abolition of traditional departments, to be replaced by temporary, “problem-centered” programs focusing on issues like Mind, Space, Time, Life and Water. Now, he more realistically suggests the creation of cross-­disciplinary “Emerging Zones.” He thinks professors need to get over their fear of corporate partnerships and embrace efficiency-enhancing technologies.
  • It is not news that America is a land of haves and have-nots. It is news that colleges are themselves dividing into haves and have-nots; they are becoming engines of inequality. And that — not whether some professors can afford to wear Marc Jacobs — is the real scandal.
  • The End of Tenure? By Christopher Shea. Published: September 3, 2010
Weiye Loh

Rationally Speaking: Human, know thy place! - 0 views

  • I kicked off a recent episode of the Rationally Speaking podcast on the topic of transhumanism by defining it as “the idea that we should be pursuing science and technology to improve the human condition, modifying our bodies and our minds to make us smarter, healthier, happier, and potentially longer-lived.”
  • Massimo understandably expressed some skepticism about why there needs to be a transhumanist movement at all, given how incontestable their mission statement seems to be. As he rhetorically asked, “Is transhumanism more than just the idea that we should be using technologies to improve the human condition? Because that seems a pretty uncontroversial point.” Later in the episode, referring to things such as radical life extension and modifications of our minds and genomes, Massimo said, “I don't think these are things that one can necessarily have objections to in principle.”
  • There are a surprising number of people whose reaction, when they are presented with the possibility of making humanity much healthier, smarter and longer-lived, is not “That would be great,” nor “That would be great, but it's infeasible,” nor even “That would be great, but it's too risky.” Their reaction is, “That would be terrible.”
  • ...14 more annotations...
  • The people with this attitude aren't just fringe fundamentalists who are fearful of messing with God's Plan. Many of them are prestigious professors and authors whose arguments make no mention of religion. One of the most prominent examples is political theorist Francis Fukuyama, author of End of History, who published a book in 2003 called “Our Posthuman Future: Consequences of the Biotechnology Revolution.” In it he argues that we will lose our “essential” humanity by enhancing ourselves, and that the result will be a loss of respect for “human dignity” and a collapse of morality.
  • Fukuyama's reasoning represents a prominent strain of thought about human enhancement, and one that I find doubly fallacious. (Fukuyama is aware of the following criticisms, but neither I nor other reviewers were impressed by his attempt to defend himself against them.) The idea that the status quo represents some “essential” quality of humanity collapses when you zoom out and look at the steady change in the human condition over previous millennia. Our ancestors were less knowledgable, more tribalistic, less healthy, shorter-lived; would Fukuyama have argued for the preservation of all those qualities on the grounds that, in their respective time, they constituted an “essential human nature”? And even if there were such a thing as a persistent “human nature,” why is it necessarily worth preserving? In other words, I would argue that Fukuyama is committing both the fallacy of essentialism (there exists a distinct thing that is “human nature”) and the appeal to nature (the way things naturally are is how they ought to be).
  • Writer Bill McKibben, who was called “probably the nation's leading environmentalist” by the Boston Globe this year, and “the world's best green journalist” by Time magazine, published a book in 2003 called “Enough: Staying Human in an Engineered Age.” In it he writes, “That is the choice... one that no human should have to make... To be launched into a future without bounds, where meaning may evaporate.” McKibben concludes that it is likely that “meaning and pain, meaning and transience are inextricably intertwined.” Or as one blogger tartly paraphrased: “If we all live long healthy happy lives, Bill’s favorite poetry will become obsolete.”
  • President George W. Bush's Council on Bioethics, which advised him from 2001-2009, was steeped in it. Harvard professor of political philosophy Michael J. Sandel served on the Council from 2002-2005 and penned an article in the Atlantic Monthly called “The Case Against Perfection,” in which he objected to genetic engineering on the grounds that, basically, it’s uppity. He argues that genetic engineering is “the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature.” Better we should be bowing in submission than standing in mastery, Sandel feels. Mastery “threatens to banish our appreciation of life as a gift,” he warns, and submitting to forces outside our control “restrains our tendency toward hubris.”
  • If you like Sandel's “It's uppity” argument against human enhancement, you'll love his fellow Councilmember Dr. William Hurlbut's argument against life extension: “It's unmanly.” Hurlbut's exact words, delivered in a 2007 debate with Aubrey de Grey: “I actually find a preoccupation with anti-aging technologies to be, I think, somewhat spiritually immature and unmanly... I’m inclined to think that there’s something profound about aging and death.”
  • And Council chairman Dr. Leon Kass, a professor of bioethics from the University of Chicago who served from 2001-2005, was arguably the worst of all. Like McKibben, Kass has frequently argued against radical life extension on the grounds that life's transience is central to its meaningfulness. “Could the beauty of flowers depend on the fact that they will soon wither?” he once asked. “How deeply could one deathless ‘human’ being love another?”
  • Kass has also argued against human enhancements on the same grounds as Fukuyama, that we shouldn't deviate from our proper nature as human beings. “To turn a man into a cockroach— as we don’t need Kafka to show us —would be dehumanizing. To try to turn a man into more than a man might be so as well,” he said. And Kass completes the anti-transhumanist triad (it robs life of meaning; it's dehumanizing; it's hubris) by echoing Sandel's call for humility and gratitude, urging, “We need a particular regard and respect for the special gift that is our own given nature.”
  • By now you may have noticed a familiar ring to a lot of this language. The idea that it's virtuous to suffer, and to humbly surrender control of your own fate, is a cornerstone of Christian morality.
  • it's fairly representative of standard Christian tropes: surrendering to God, submitting to God, trusting that God has good reasons for your suffering.
  • I suppose I can understand that if you believe in an all-powerful entity who will become irate if he thinks you are ungrateful for anything, then this kind of groveling might seem like a smart strategic move. But what I can't understand is adopting these same attitudes in the absence of any religious context. When secular people chastise each other for the “hubris” of trying to improve the “gift” of life they've received, I want to ask them: just who, exactly, are you groveling to? Who, exactly, are you afraid of affronting if you dare to reach for better things?
  • This is why transhumanism is most needed, from my perspective – to counter the astoundingly widespread attitude that suffering and 80-year-lifespans are good things that are worth preserving. That attitude may make sense conditional on certain peculiarly masochistic theologies, but the rest of us have no need to defer to it. It also may have been a comforting thing to tell ourselves back when we had no hope of remedying our situation, but that's not necessarily the case anymore.
  • I think there is a separation of Transhumanism and what Massimo is referring to. Things like robotic arms and the like come from trying to deal with a specific defect and thus separate it from Transhumanism. I would define transhumanism the same way you would (the achievement of a better human), but I would exclude the inventions of many life-altering devices as transhumanism. If we could invent a device that just made you smarter, then indeed that would be transhumanism, but if we invented a device that could make someone who was mentally challenged able to be normal, I would define this as modern medicine. I just want to make sure we separate advances in modern medicine from transhumanism. Modern medicine being the one that advances to deal with specific medical issues to improve quality of life (usually to restore it to normal conditions) and transhumanism being the one that can advance every single human (perhaps equally?).
    • Weiye Loh: Assumes that "normal conditions" exist.
  • I agree with all your points about why the arguments against transhumanism and for suffering are ridiculous. That being said, when I first heard about the ideas of Transhumanism, after the initial excitement wore off (since I'm a big tech nerd), my reaction was more or less the same as Massimo's. I don't particularly see the need for a philosophical movement for this.
  • if people believe that suffering is something God ordained for us, you're not going to convince them otherwise with philosophical arguments any more than you'll convince them there's no God at all. If the technologies do develop, acceptance of them will come as their use becomes more prevalent, not with arguments.
  • Human, know thy place!