Home/ New Media Ethics 2009 course/ Group items tagged Academic Research

Weiye Loh

TODAYonline | Commentary | Trust us, we're academics ... or should you? - 0 views

  • the 2011 Edelman Trust Barometer, published by research firm StrategyOne, which surveyed 5,075 "informed publics" in 23 countries on their trust in business, government, institutions and individuals. One of the questions asked of respondents was: "If you heard information about a company from one of these people, how credible would that information be?". Of the eight groups of individuals - academic/expert, technical expert in company, financial/industry analyst, CEO, non-governmental organisation representative, government official, person like myself, and regular employee - academic/expert came out tops with a score of 70 per cent, followed by technical expert at 64 per cent.
  • the film on the global financial crisis Inside Job, which won the 2011 Academy Award for best documentary. One of the documentary's themes is the role a number of renowned academics, particularly academic economists, played in the global crisis. It highlighted potentially serious conflicts of interests related to significant compensation derived by these academics serving on boards of financial services firms and advising such firms.
  • Often, these academics also played key roles in shaping government policies relating to deregulation - most appear allergic to regulation of the financial services industry. The documentary argued that these academics from Ivy League universities had basically become advocates for financial services firms, which blinded them to firms' excesses. It noted that few academic economists saw the financial crisis coming, and suggested this might be because they were too busy making money from the industry.
  • ...12 more annotations...
  • It is difficult to say if the "failure" of the academics was due to an unstinting belief in free markets or conflicts of interest. Parts of the movie did appear to be trying too hard to prove the point. However, the threat posed by academics earning consulting fees that dwarf their academic compensation, and which might therefore impair their independence, is a real one.
  • One of the worst cases was the Ivy League university economics professor engaged by the Icelandic Chamber of Commerce to co-author a report on the Icelandic financial system. He concluded that the system was sound even though there were numerous warning signs. When he was asked how he arrived at his conclusions, he said he had talked to people and was misled by them. One wonders how many of his conclusions were actually based on rigorous analysis.
  • it is troubling if academics merely become mouthpieces for vested interests. The impression one gets from watching the movie certainly does not fit with the high level of trust in academics shown by the Edelman Trust Barometer.
  • As an academic, I have often been told that I can be independent and objective - that I should have no axe to grind and no wheels to grease. However, I worry about an erosion of trust in academics. This may be especially true in certain disciplines like business (which is mine, incidentally).
  • too many business school professors are serving on US corporate boards and have lost their willingness to be critical about unethical business practices. In corporate scandals such as Enron and Satyam, academics from top business schools have not particularly covered themselves in glory.
  • It is more and more common for universities - in the US and here - to invite business people to serve on their boards.
  • universities and academics may lose their independence and objectivity in commenting on business issues critically, for fear of offending those who ultimately have an oversight role over the varsity's senior management.
  • Universities might also have business leaders serving on boards as potential donors, which would also confuse the role of board members and lead to conflicts of interest. In the Satyam scandal in India, the founder of Satyam sat on the board of the Indian School of Business, while the Dean of the Indian School of Business sat on Satyam's board. Satyam also made a significant donation to the Indian School of Business.
  • Universities are increasingly dependent on funding from industry and wealthy individuals as well as other sources, sometimes even dubious ones. The recent scandal at the London School of Economics involving its affiliation with Libya is an example.
  • It is important for universities to have robust gift policies as part of the risk management to protect their reputation, which can be easily tainted if a donation comes from a questionable source. It is especially important that donations do not cause universities to be captured by vested interests.
  • From time to time, people in industry ask me if I have been pressured by the university to tone down my outspokenness on corporate governance issues. Thankfully, while there have been instances where varsity colleagues and friends in industry have conveyed messages from others to "tone down", I have felt relatively free to express my views. Of course, were I trying to earn more money from external consulting, I guess I would be less vocal.
  • I do worry about the loss of independence and, therefore, trust in academics and academic institutions if we are not careful about it.
Jude John

What's so Original in Academic Research? - 26 views

Thanks for your comments. I may have appeared to be contradictory, but what I really meant was that ownership of IP should not be a motivating factor to innovate. I realise that in our capitalistic...

Weiye Loh

Research integrity: Sabotage! : Nature News - 0 views

  • University of Michigan in Ann Arbor
  • Vipul Bhrigu, a former postdoc at the university's Comprehensive Cancer Center, wears a dark-blue three-buttoned suit and a pinched expression as he cups his pregnant wife's hand in both of his. When Pollard Hines calls Bhrigu's case to order, she has stern words for him: "I was inclined to send you to jail when I came out here this morning."
  • Bhrigu, over the course of several months at Michigan, had meticulously and systematically sabotaged the work of Heather Ames, a graduate student in his lab, by tampering with her experiments and poisoning her cell-culture media. Captured on hidden camera, Bhrigu confessed to university police in April and pleaded guilty to malicious destruction of personal property, a misdemeanour that apparently usually involves cars: in the spaces for make and model on the police report, the arresting officer wrote "lab research" and "cells". Bhrigu has said on multiple occasions that he was compelled by "internal pressure" and had hoped to slow down Ames's work. Speaking earlier this month, he was contrite. "It was a complete lack of moral judgement on my part," he said.
  • ...16 more annotations...
  • Bhrigu's actions are surprising, but probably not unique. There are few firm numbers showing the prevalence of research sabotage, but conversations with graduate students, postdocs and research-misconduct experts suggest that such misdeeds occur elsewhere, and that most go unreported or unpoliced. In this case, the episode set back research, wasted potentially tens of thousands of dollars and terrorized a young student. More broadly, acts such as Bhrigu's — along with more subtle actions to hold back or derail colleagues' work — have a toxic effect on science and scientists. They are an affront to the implicit trust between scientists that is necessary for research endeavours to exist and thrive.
  • Despite all this, there is little to prevent perpetrators re-entering science.
  • federal bodies that provide research funding have limited ability and inclination to take action in sabotage cases because they aren't interpreted as fitting the federal definition of research misconduct, which is limited to plagiarism, fabrication and falsification of research data.
  • In Bhrigu's case, administrators at the University of Michigan worked with police to investigate, thanks in part to the persistence of Ames and her supervisor, Theo Ross. "The question is, how many universities have such procedures in place that scientists can go and get that kind of support?" says Christine Boesz, former inspector-general for the US National Science Foundation in Arlington, Virginia, and now a consultant on scientific accountability. "Most universities I was familiar with would not necessarily be so responsive."
  • Some labs are known to be hyper-competitive, with principal investigators pitting postdocs against each other. But Ross's lab is a small, collegial place. At the time that Ames was noticing problems, it housed just one other graduate student, a few undergraduates doing projects, and the lab manager, Katherine Oravecz-Wilson, a nine-year veteran of the lab whom Ross calls her "eyes and ears". And then there was Bhrigu, an amiable postdoc who had joined the lab in April 2009.
  • Some people whom Ross consulted with tried to convince her that Ames was hitting a rough patch in her work and looking for someone else to blame. But Ames was persistent, so Ross took the matter to the university's office of regulatory affairs, which advises on a wide variety of rules and regulations pertaining to research and clinical care. Ray Hutchinson, associate dean of the office, and Patricia Ward, its director, had never dealt with anything like it before. After several meetings and two more instances of alcohol in the media, Ward contacted the department of public safety — the university's police force — on 9 March. They immediately launched an investigation — into Ames herself. She endured two interrogations and a lie-detector test before investigators decided to look elsewhere.
  • At 4:00 a.m. on Sunday 18 April, officers installed two cameras in the lab: one in the cold room where Ames's blots had been contaminated, and one above the refrigerator where she stored her media. Ames came in that day and worked until 5:00 p.m. On Monday morning at around 10:15, she found that her medium had been spiked again. When Ross reviewed the tapes of the intervening hours with Richard Zavala, the officer assigned to the case, she says that her heart sank. Bhrigu entered the lab at 9:00 a.m. on Monday and pulled out the culture media that he would use for the day. He then returned to the fridge with a spray bottle of ethanol, usually used to sterilize lab benches. With his back to the camera, he rummaged through the fridge for 46 seconds. Ross couldn't be sure what he was doing, but it didn't look good. Zavala escorted Bhrigu to the campus police department for questioning. When he told Bhrigu about the cameras in the lab, the postdoc asked for a drink of water and then confessed. He said that he had been sabotaging Ames's work since February. (He denies involvement in the December and January incidents.)
  • Misbehaviour in science is nothing new — but its frequency is difficult to measure. Daniele Fanelli at the University of Edinburgh, UK, who studies research misconduct, says that overtly malicious offences such as Bhrigu's are probably infrequent, but other forms of indecency and sabotage are likely to be more common. "A lot more would be the kind of thing you couldn't capture on camera," he says. Vindictive peer review, dishonest reference letters and withholding key aspects of protocols from colleagues or competitors can do just as much to derail a career or a research project as vandalizing experiments. These are just a few of the questionable practices that seem quite widespread in science, but are not technically considered misconduct. In a meta-analysis of misconduct surveys, published last year (D. Fanelli PLoS ONE 4, e5738; 2009), Fanelli found that up to one-third of scientists admit to offences that fall into this grey area, and up to 70% say that they have observed them.
  • Some say that the structure of the scientific enterprise is to blame. The big rewards — tenured positions, grants, papers in stellar journals — are won through competition. To get ahead, researchers need only be better than those they are competing with. That ethos, says Brian Martinson, a sociologist at HealthPartners Research Foundation in Minneapolis, Minnesota, can lead to sabotage. He and others have suggested that universities and funders need to acknowledge the pressures in the research system and try to ease them by means of education and rehabilitation, rather than simply punishing perpetrators after the fact.
  • Bhrigu says that he felt pressure in moving from the small college at Toledo to the much bigger one in Michigan. He says that some criticisms he received from Ross about his incomplete training and his work habits frustrated him, but he doesn't blame his actions on that. "In any kind of workplace there is bound to be some pressure," he says. "I just got jealous of others moving ahead and I wanted to slow them down."
  • At Washtenaw County Courthouse in July, having reviewed the case files, Pollard Hines delivered Bhrigu's sentence. She ordered him to pay around US$8,800 for reagents and experimental materials, plus $600 in court fees and fines — and to serve six months' probation, perform 40 hours of community service and undergo a psychiatric evaluation.
  • But the threat of a worse sentence hung over Bhrigu's head. At the request of the prosecutor, Ross had prepared a more detailed list of damages, including Bhrigu's entire salary, half of Ames's, six months' salary for a technician to help Ames get back up to speed, and a quarter of the lab's reagents. The court arrived at a possible figure of $72,000, with the final amount to be decided upon at a restitution hearing in September.
  • Ross, though, is happy that the ordeal is largely over. For the month-and-a-half of the investigation, she became reluctant to take on new students or to hire personnel. She says she considered packing up her research programme. She even questioned her own sanity, worrying that she was the one sabotaging Ames's work via "an alternate personality". Ross now wonders if she was too trusting, and urges other lab heads to "realize that the whole spectrum of humanity is in your lab. So, when someone complains to you, take it seriously."
  • She also urges others to speak up when wrongdoing is discovered. After Bhrigu pleaded guilty in June, Ross called Trempe at the University of Toledo. He was shocked, of course, and for more than one reason. His department at Toledo had actually re-hired Bhrigu. Bhrigu says that he lied about the reason he left Michigan, blaming it on disagreements with Ross. Toledo let Bhrigu go in July, not long after Ross's call.
  • Now that Bhrigu is in India, there is little to prevent him from getting back into science. And even if he were in the United States, there wouldn't be much to stop him. The National Institutes of Health in Bethesda, Maryland, through its Office of Research Integrity, will sometimes bar an individual from receiving federal research funds for a time if they are found guilty of misconduct. But Bhrigu probably won't face that prospect because his actions don't fit the federal definition of misconduct, a situation Ross finds strange. "All scientists will tell you that it's scientific misconduct because it's tampering with data," she says.
  • Ames says that the experience shook her trust in her chosen profession. "I did have doubts about continuing with science. It hurt my idea of science as a community that works together, builds upon each other's work and collaborates."
  • Research integrity: Sabotage! Postdoc Vipul Bhrigu destroyed the experiments of a colleague in order to get ahead.
Weiye Loh

FT.com / Business education / Soapbox - Popular fads replace relevant teaching - 0 views

  • There is a great divide in business schools, one that few outsiders are aware of. It is the divide between research and teaching. There is little relation between them. What is being taught in management books and classrooms is usually not based on rigorous research, and vice versa; the research published in prestigious academic journals seldom finds its way into the MBA classroom.
  • Since none of this research is really intended to be used in the classroom, or to be communicated to managers in some other form, it is not suited to serve that purpose. The goal is publication in a prestigious academic journal, but that does not make it useful or even offer a guarantee that the research findings provide much insight into the workings of business reality.
  • This is not a new problem. In 1994, Don Hambrick, then the president of the Academy of Management, said: “We read each others’ papers in our journals and write our own papers so that we may, in turn, have an audience . . . an incestuous, closed loop”. Management research is not required to be relevant. Consequently much of it is not.
  • ...6 more annotations...
  • But business education clearly also suffers. What is being taught in management courses is usually not based on solid scientific evidence. Instead, it concerns the generalisation of individual business cases or the lessons from popular management books. Such books often are based on the appealing formula that they look at several successful companies, see what they have in common and conclude that other companies should strive to do the same thing.
  • how do you know that the advice provided is reasonable, or if it comes from tomorrow’s Enrons, RBSs, Lehmans and WorldComs? How do you know that today’s advice and cases will not later be heralded as the epitome of mismanagement?
  • In the 1990s, ISO9000 (a quality management systems standard) spread through many industries. But research by professors Mary Benner and Mike Tushman showed that its adoption could, in time, lead to a fall in innovation (because ISO9000 does not allow for deviations from a set standard, which innovation requires), making the adopter worse off. This research was overlooked by practitioners, many business schools continued to applaud the benefits of ISO9000 in their courses, while firms continued – and still do – to implement the practice, ignorant of its potential pitfalls. Yet this research offers a clear example of the possible benefits of scientific research methods: rigorous research that reveals unintended consequences to expose the true nature of a business practice.
  • such research with important practical implications unfortunately is the exception rather than the rule. Moreover, even relevant research is largely ignored in business education – as happened to the findings by Benner and Tushman.
  • Of course one should not make the mistake of assuming that business cases and business books based on personal observation and opinion are without value. They potentially offer a great source of practical experience. Similarly, it would be naive to assume that scientific research can provide custom-made answers. Rigorous management research could and should provide the basis for skilled managers to make better decisions. However, managers cannot do that without in-depth knowledge of their specific organisation and circumstances.
  • at present, business schools largely fail in providing rigorous, evidence-based teaching.
Weiye Loh

Religion: Faith in science : Nature News - 0 views

  • The Templeton Foundation claims to be a friend of science. So why does it make so many researchers uneasy?
  • With a current endowment estimated at US$2.1 billion, the organization continues to pursue Templeton's goal of building bridges between science and religion. Each year, it doles out some $70 million in grants, more than $40 million of which goes to research in fields such as cosmology, evolutionary biology and psychology.
  • however, many scientists find it troubling — and some see it as a threat. Jerry Coyne, an evolutionary biologist at the University of Chicago, Illinois, calls the foundation "sneakier than the creationists". Through its grants to researchers, Coyne alleges, the foundation is trying to insinuate religious values into science. "It claims to be on the side of science, but wants to make faith a virtue," he says.
  • ...25 more annotations...
  • But other researchers, both with and without Templeton grants, say that they find the foundation remarkably open and non-dogmatic. "The Templeton Foundation has never in my experience pressured, suggested or hinted at any kind of ideological slant," says Michael Shermer, editor of Skeptic, a magazine that debunks pseudoscience, who was hired by the foundation to edit an essay series entitled 'Does science make belief in God obsolete?'
  • The debate highlights some of the challenges facing the Templeton Foundation after the death of its founder in July 2008, at the age of 95.
  • With the help of a $528-million bequest from Templeton, the foundation has been radically reframing its research programme. As part of that effort, it is reducing its emphasis on religion to make its programmes more palatable to the broader scientific community. Like many of his generation, Templeton was a great believer in progress, learning, initiative and the power of human imagination — not to mention the free-enterprise system that allowed him, a middle-class boy from Winchester, Tennessee, to earn billions of dollars on Wall Street. The foundation accordingly allocates 40% of its annual grants to programmes with names such as 'character development', 'freedom and free enterprise' and 'exceptional cognitive talent and genius'.
  • Unlike most of his peers, however, Templeton thought that the principles of progress should also apply to religion. He described himself as "an enthusiastic Christian" — but was also open to learning from Hinduism, Islam and other religious traditions. Why, he wondered, couldn't religious ideas be open to the type of constructive competition that had produced so many advances in science and the free market?
  • That question sparked Templeton's mission to make religion "just as progressive as medicine or astronomy".
  • Early Templeton prizes had nothing to do with science: the first went to the Catholic missionary Mother Teresa of Calcutta in 1973.
  • By the 1980s, however, Templeton had begun to realize that fields such as neuroscience, psychology and physics could advance understanding of topics that are usually considered spiritual matters — among them forgiveness, morality and even the nature of reality. So he started to appoint scientists to the prize panel, and in 1985 the award went to a research scientist for the first time: Alister Hardy, a marine biologist who also investigated religious experience. Since then, scientists have won with increasing frequency.
  • "There's a distinct feeling in the research community that Templeton just gives the award to the most senior scientist they can find who's willing to say something nice about religion," says Harold Kroto, a chemist at Florida State University in Tallahassee, who was co-recipient of the 1996 Nobel Prize in Chemistry and describes himself as a devout atheist.
  • Yet Templeton saw scientists as allies. They had what he called "the humble approach" to knowledge, as opposed to the dogmatic approach. "Almost every scientist will agree that they know so little and they need to learn," he once said.
  • Templeton wasn't interested in funding mainstream research, says Barnaby Marsh, the foundation's executive vice-president. Templeton wanted to explore areas — such as kindness and hatred — that were not well known and did not attract major funding agencies. Marsh says Templeton wondered, "Why is it that some conflicts go on for centuries, yet some groups are able to move on?"
  • Templeton's interests gave the resulting list of grants a certain New Age quality (See Table 1). For example, in 1999 the foundation gave $4.6 million for forgiveness research at the Virginia Commonwealth University in Richmond, and in 2001 it donated $8.2 million to create an Institute for Research on Unlimited Love (that is, altruism and compassion) at Case Western Reserve University in Cleveland, Ohio. "A lot of money wasted on nonsensical ideas," says Kroto. Worse, says Coyne, these projects are profoundly corrupting to science, because the money tempts researchers into wasting time and effort on topics that aren't worth it. If someone is willing to sell out for a million dollars, he says, "Templeton is there to oblige him".
  • At the same time, says Marsh, the 'dean of value investing', as Templeton was known on Wall Street, had no intention of wasting his money on junk science or unanswerables such as whether God exists. So before pursuing a scientific topic he would ask his staff to get an assessment from appropriate scholars — a practice that soon evolved into a peer-review process drawing on experts from across the scientific community.
  • Because Templeton didn't like bureaucracy, adds Marsh, the foundation outsourced much of its peer review and grant giving. In 1996, for example, it gave $5.3 million to the American Association for the Advancement of Science (AAAS) in Washington DC, to fund efforts that work with evangelical groups to find common ground on issues such as the environment, and to get more science into seminary curricula. In 2006, Templeton gave $8.8 million towards the creation of the Foundational Questions Institute (FQXi), which funds research on the origins of the Universe and other fundamental issues in physics, under the leadership of Anthony Aguirre, an astrophysicist at the University of California, Santa Cruz, and Max Tegmark, a cosmologist at the Massachusetts Institute of Technology in Cambridge.
  • But external peer review hasn't always kept the foundation out of trouble. In the 1990s, for example, Templeton-funded organizations gave book-writing grants to Guillermo Gonzalez, an astrophysicist now at Grove City College in Pennsylvania, and William Dembski, a philosopher now at the Southwestern Baptist Theological Seminary in Fort Worth, Texas. After obtaining the grants, both later joined the Discovery Institute — a think-tank based in Seattle, Washington, that promotes intelligent design. Other Templeton grants supported a number of college courses in which intelligent design was discussed. Then, in 1999, the foundation funded a conference at Concordia University in Mequon, Wisconsin, in which intelligent-design proponents confronted critics. Those awards became a major embarrassment in late 2005, during a highly publicized court fight over the teaching of intelligent design in schools in Dover, Pennsylvania. A number of media accounts of the intelligent design movement described the Templeton Foundation as a major supporter — a charge that Charles Harper, then senior vice-president, was at pains to deny.
  • Some foundation officials were initially intrigued by intelligent design, Harper told The New York Times. But disillusionment set in — and Templeton funding stopped — when it became clear that the theory was part of a political movement from the Christian right wing, not science. Today, the foundation website explicitly warns intelligent-design researchers not to bother submitting proposals: they will not be considered.
  • Avowedly antireligious scientists such as Coyne and Kroto see the intelligent-design imbroglio as a symptom of their fundamental complaint that religion and science should not mix at all. "Religion is based on dogma and belief, whereas science is based on doubt and questioning," says Coyne, echoing an argument made by many others. "In religion, faith is a virtue. In science, faith is a vice." The purpose of the Templeton Foundation is to break down that wall, he says — to reconcile the irreconcilable and give religion scholarly legitimacy.
  • Foundation officials insist that this is backwards: questioning is their reason for being. Religious dogma is what they are fighting. That does seem to be the experience of many scientists who have taken Templeton money. During the launch of FQXi, says Aguirre, "Max and I were very suspicious at first. So we said, 'We'll try this out, and the minute something smells, we'll cut and run.' It never happened. The grants we've given have not been connected with religion in any way, and they seem perfectly happy about that."
  • John Cacioppo, a psychologist at the University of Chicago, also had concerns when he started a Templeton-funded project in 2007. He had just published a paper with survey data showing that religious affiliation had a negative correlation with health among African-Americans — the opposite of what he assumed the foundation wanted to hear. He was bracing for a protest when someone told him to look at the foundation's website. They had displayed his finding on the front page. "That made me relax a bit," says Cacioppo.
  • Yet, even scientists who give the foundation high marks for openness often find it hard to shake their unease. Sean Carroll, a physicist at the California Institute of Technology in Pasadena, is willing to participate in Templeton-funded events — but worries about the foundation's emphasis on research into 'spiritual' matters. "The act of doing science means that you accept a purely material explanation of the Universe, that no spiritual dimension is required," he says.
  • It hasn't helped that Jack Templeton is much more politically and religiously conservative than his father was. The foundation shows no obvious rightwards trend in its grant-giving and other activities since John Templeton's death — and it is barred from supporting political activities by its legal status as a not-for-profit corporation. Still, many scientists find it hard to trust an organization whose president has used his personal fortune to support right-leaning candidates and causes such as the 2008 ballot initiative that outlawed gay marriage in California.
  • Scientists' discomfort with the foundation is probably inevitable in the current political climate, says Scott Atran, an anthropologist at the University of Michigan in Ann Arbor. The past 30 years have seen the growing power of the Christian religious right in the United States, the rise of radical Islam around the world, and religiously motivated terrorist attacks such as those in the United States on 11 September 2001. Given all that, says Atran, many scientists find it almost impossible to think of religion as anything but fundamentalism at war with reason.
  • the foundation has embraced the theme of 'science and the big questions' — an open-ended list that includes topics such as 'Does the Universe have a purpose?'
  • Towards the end of Templeton's life, says Marsh, he became increasingly concerned that this reaction was getting in the way of the foundation's mission: that the word 'religion' was alienating too many good scientists.
  • The peer-review and grant-making system has also been revamped: whereas in the past the foundation ran an informal mix of projects generated by Templeton and outside grant seekers, the system is now organized around an annual list of explicit funding priorities.
  • The foundation is still a work in progress, says Jack Templeton — and it always will be. "My father believed," he says, "we were all called to be part of an ongoing creative process. He was always trying to make people think differently." "And he always said, 'If you're still doing today what you tried to do two years ago, then you're not making progress.'" 
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
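Regression to the mean can be sketched in a few lines of Python. This is a toy simulation, not from the article: the true effect is fixed at zero, and only early statistical flukes get "published"; all numbers are illustrative.

```python
import random

random.seed(42)

def run_study(true_effect=0.0, noise=1.0):
    # One study's estimated effect: the true effect plus sampling noise.
    return true_effect + random.gauss(0, noise)

# 10,000 initial studies of an effect whose true size is zero.
initial = [run_study() for _ in range(10_000)]

# The "publishable" ones: statistical flukes that landed far above zero.
published = [e for e in initial if e > 2.0]

# Replications of the same hypotheses draw fresh, independent noise.
replications = [run_study() for _ in published]

mean_published = sum(published) / len(published)
mean_replication = sum(replications) / len(replications)

print(round(mean_published, 2))    # above 2.0 by construction
print(round(mean_replication, 2))  # near zero: the fluke "declines" away
```

The replication average collapses toward the true value even though nothing about the phenomenon changed, which is exactly the pattern a decline effect driven by early flukes would produce.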
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
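Sterling's observation can be reproduced in miniature. The sketch below (a toy model, not from the article; the 2.05 cutoff is an approximation of the two-sided 5% t threshold for n = 30) runs thousands of experiments on a population with no real effect and counts how many pass the significance test anyway:

```python
import random
import statistics

random.seed(0)

def null_experiment(n=30):
    # Sample n values from a population with no real effect (mean zero)
    # and return a one-sample t statistic.
    xs = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(xs) / (statistics.stdev(xs) / n ** 0.5)

t_values = [null_experiment() for _ in range(5_000)]

# Fisher-style 5% two-sided cutoff for n=30 is roughly |t| > 2.05.
significant = [t for t in t_values if abs(t) > 2.05]

false_positive_rate = len(significant) / len(t_values)
print(round(false_positive_rate, 3))  # close to 0.05

# If journals print only the "significant" rows, every published study
# reports an effect -- Sterling's ninety-seven per cent in miniature.
```

Chance alone supplies a steady 5% stream of "positive" null results; publication bias is what happens when the literature keeps that stream and discards the rest.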
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
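The skew Palmer saw can be simulated directly. In this toy model (all thresholds and sample sizes are illustrative, not Palmer's), the true effect is zero, but small studies are "reported" only when their estimate looks impressive, while large studies are reported regardless:

```python
import random

random.seed(1)

def study(n, true_effect=0.0):
    # A study's estimated effect; sampling noise shrinks with sqrt(n).
    return true_effect + random.gauss(0, 1.0 / n ** 0.5)

sizes = [random.choice([10, 20, 50, 100, 400]) for _ in range(20_000)]
estimates = [(n, study(n)) for n in sizes]

# Selective reporting: small studies surface only when the effect looks
# impressive; large, expensive studies are reported either way.
reported = [(n, e) for n, e in estimates if n >= 100 or e > 0.3]

small = [e for n, e in reported if n < 100]
large = [e for n, e in reported if n >= 100]
small_mean = sum(small) / len(small)
large_mean = sum(large) / len(large)

print(round(small_mean, 2))  # skewed well above the true value of zero
print(round(large_mean, 2))  # clusters near zero, the funnel's wide mouth
```

Plotted as effect size against sample size, the large studies would trace the funnel's narrow tip around the true value, while the censored small studies pile up on the positive side, which is precisely the asymmetry a funnel graph is designed to expose.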
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
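"Significance chasing" is easy to demonstrate with a toy multiple-comparisons simulation (again illustrative, not Ioannidis's method): slice one dataset with no real effect into many subgroup analyses and ask whether any slice passes the 5% test.

```python
import random
import statistics

random.seed(3)

def passes_5pct(xs):
    # Crude two-sided t-test at the five-per-cent level (|t| > ~2.05 for n=30).
    t = statistics.mean(xs) / (statistics.stdev(xs) / len(xs) ** 0.5)
    return abs(t) > 2.05

def chase(n_slices=20):
    # Slice one null dataset into 20 subgroup analyses and ask whether
    # ANY slice happens to pass the significance test.
    return any(passes_5pct([random.gauss(0, 1) for _ in range(30)])
               for _ in range(n_slices))

rate = sum(chase() for _ in range(500)) / 500
print(round(rate, 2))  # far above 0.05 -- roughly 1 - 0.95**20, i.e. ~0.64
```

With twenty ways to cut the data, an analyst "eager to pass this magical test" will find something that clears it in roughly two out of three null datasets.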
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
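The Crabbe result can be caricatured as a hidden random lab effect. In the sketch below (lab names from the article; every number is invented for illustration), each lab follows an identical protocol, but an unmeasured per-lab shift drives the averages apart:

```python
import random

random.seed(7)

# Hidden per-lab effects (handlers, water, acoustics, anything unmeasured)
# shift every reading in that lab despite identical protocols.
LABS = ["Portland", "Albany", "Edmonton"]
lab_effect = {lab: random.gauss(0, 300) for lab in LABS}

TRUE_EFFECT = 650  # illustrative: extra centimetres moved after injection

def lab_mean(lab, n_mice=20):
    # Observed lab average = true effect + hidden lab effect + mouse noise.
    vals = [TRUE_EFFECT + lab_effect[lab] + random.gauss(0, 100)
            for _ in range(n_mice)]
    return round(sum(vals) / len(vals))

results = {lab: lab_mean(lab) for lab in LABS}
print(results)

spread = max(results.values()) - min(results.values())
print(spread)  # identical protocols, noticeably different answers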
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.

The Origins of "Basic Research" - 0 views

  • For many scientists, "basic research" means "fundamental" or "pure" research conducted without consideration of practical applications. At the same time, policy makers see "basic research" as that which leads to societal benefits including economic growth and jobs.
  • The mechanism that has allowed such divergent views to coexist is of course the so-called "linear model" of innovation, which holds that investments in "basic research" are but the first step in a sequence that progresses through applied research, development, and application. As recently explained in a major report of the US National Academy of Sciences: "[B]asic research ... has the potential to be transformational to maintain the flow of new ideas that fuel the economy, provide security, and enhance the quality of life" (Rising Above the Gathering Storm).
  • A closer look at the actual history of Google reveals how history becomes mythology. The 1994 NSF project that funded the scientific work underpinning the search engine that became Google (as we know it today) was conducted from the start with commercialization in mind: "The technology developed in this project will provide the 'glue' that will make this worldwide collection usable as a unified entity, in a scalable and economically viable fashion." In this case, the scientist following his curiosity had at least one eye simultaneously on commercialization.
  • In their appeal for more funding for scientific research, Leshner and Cooper argued that: "Across society, we don't have to look far for examples of basic research that paid off." They cite the creation of Google as a prime example of such payoffs: "Larry Page and Sergey Brin, then a National Science Foundation [NSF] fellow, did not intend to invent the Google search engine. Originally, they were intrigued by a mathematical challenge ..." The appealing imagery of a scientist who simply follows his curiosity and then makes a discovery with a large societal payoff is part of the core mythology of post-World War II science policies. The mythology shapes how governments around the world organize, account for, and fund research. A large body of scholarship has critiqued postwar science policies and found that, despite many notable successes, the science policies that may have made sense in the middle of the last century may need updating in the 21st century. In short, investments in "basic research" are not enough. Benoit Godin has asserted (PDF) that: "The problem is that the academic lobby has successfully claimed a monopoly on the creation of new knowledge, and that policy makers have been persuaded to confuse the necessary with the sufficient condition that investment in basic research would by itself necessarily lead to successful applications." Or as Leshner and Cooper declare in The Washington Post: "Federal investments in R&D have fueled half of the nation's economic growth since World War II."

Political - or politicized? - psychology » Scienceline - 0 views

  • The idea that your personal characteristics could be linked to your political ideology has intrigued political psychologists for decades. Numerous studies suggest that liberals and conservatives differ not only in their views toward government and society, but also in their behavior, their personality, and even how they travel, decorate, clean and spend their leisure time. In today’s heated political climate, understanding people on the “other side” — whether that side is left or right — takes on new urgency. But as researchers study the personal side of politics, could they be influenced by political biases of their own?
  • Consider the following 2006 study by the late California psychologists Jeanne and Jack Block, which compared the personalities of nursery school children to their political leanings as 23-year olds. Preschoolers who went on to identify as liberal were described by the authors as self-reliant, energetic, somewhat dominating and resilient. The children who later identified as conservative were described as easily offended, indecisive, fearful, rigid, inhibited and vulnerable. The negative descriptions of conservatives in this study strike Jacob Vigil, a psychologist at the University of New Mexico, as morally loaded. Studies like this one, he said, use language that suggests the researchers are “motivated to present liberals with more ideal descriptions as compared to conservatives.”
  • Most of the researchers in this field are, in fact, liberal. In 2007 UCLA’s Higher Education Research Institute conducted a survey of faculty at four-year colleges and universities in the United States. About 68 percent of the faculty in history, political science and social science departments characterized themselves as liberal, 22 percent characterized themselves as moderate, and only 10 percent as conservative. Some social psychologists, like Jonathan Haidt of the University of Virginia, have charged that this liberal majority distorts the research in political psychology.
  • It’s a charge that John Jost, a social psychologist at New York University, flatly denies. Findings in political psychology bear upon deeply held personal beliefs and attitudes, he said, so they are bound to spark controversy. Research showing that conservatives score higher on measures of “intolerance of ambiguity” or the “need for cognitive closure” might bother some people, said Jost, but that does not make it biased.
  • “The job of the behavioral scientist is not to try to find something to say that couldn’t possibly be offensive,” said Jost. “Our job is to say what we think is true, and why.
  • Jost and his colleagues in 2003 compiled a meta-analysis of 88 studies from 12 different countries conducted over a 40-year period. They found strong evidence that conservatives tend to have higher needs to reduce uncertainty and threat. Conservatives also share psychological factors like fear, aggression, dogmatism, and the need for order, structure and closure. Political conservatism, they explained, could serve as a defense against anxieties and threats that arise out of everyday uncertainty, by justifying the status quo and preserving conditions that are comfortable and familiar.
  • The study triggered quite a public reaction, particularly within the conservative blogosphere. But the criticisms, according to Jost, were mistakenly focused on the researchers themselves; the findings were not disputed by the scientific community and have since been replicated. For example, a 2009 study followed college students over the span of their undergraduate experience and found that higher perceptions of threat did indeed predict political conservatism. Another 2009 study found that when confronted with a threat, liberals actually become more psychologically and politically conservative. Some studies even suggest that physiological traits like sensitivity to sudden noises or threatening images are associated with conservative political attitudes.
  • “The debate should always be about the data and its proper interpretation,” said Jost, “and never about the characteristics or motives of the researchers.” Phillip Tetlock, a psychologist at the University of California, Berkeley, agrees. However, Tetlock thinks that identifying the proper interpretation can be tricky, since personality measures can be described in many ways. “One observer’s ‘dogmatism’ can be another’s ‘principled,’ and one observer’s ‘open-mindedness’ can be another’s ‘flaccid and vacillating,’” Tetlock explained.
  • Richard Redding, a professor of law and psychology at Chapman University in Orange, California, points to a more general, indirect bias in political psychology. “It’s not the case that researchers are intentionally skewing the data,” which rarely happens, Redding said. Rather, the problem may lie in what sorts of questions are or are not asked.
  • For example, a conservative might be more inclined to undertake research on affirmative action in a way that would identify any negative outcomes, whereas a liberal probably wouldn’t, said Redding. Likewise, there may be aspects of personality that liberals simply haven’t considered. Redding is currently conducting a large-scale study on self-righteousness, which he suspects may be associated more highly with liberals than conservatives.
  • “The way you frame a problem is to some extent dictated by what you think the problem is,” said David Sears, a political psychologist at the University of California, Los Angeles. People’s strong feelings about issues like prejudice, sexism, authoritarianism, aggression, and nationalism — the bread and butter of political psychology — may influence how they design a study or present a problem.
  • The indirect bias that Sears and Redding identify is a far cry from the liberal groupthink others warn against. But given that psychology departments are predominantly left leaning, it’s important to seek out alternative viewpoints and explanations, said Jesse Graham, a social psychologist at the University of Southern California. A self-avowed liberal, Graham thinks it would be absurd to say he couldn’t do fair science because of his political preferences. “But,” he said, “it is something that I try to keep in mind.”

Roger Pielke Jr.'s Blog: Science Impact - 0 views

  • The Guardian has a blog post up by three neuroscientists decrying the state of hype in the media related to their field, which is fueled in part by their colleagues seeking "impact." 
  • Anyone who has followed recent media reports that electrical brain stimulation "sparks bright ideas" or "unshackles the genius within" could be forgiven for believing that we stand on the frontier of a brave new world. As James Gallagher of the BBC put it, "Are we entering the era of the thinking cap – a device to supercharge our brains?" The answer, we would suggest, is a categorical no. Such speculations begin and end in the colourful realm of science fiction. But we are also in danger of entering the era of the "neuro-myth", where neuroscientists sensationalise and distort their own findings in the name of publicity. The tendency for scientists to over-egg the cake when dealing with the media is nothing new, but recent examples are striking in their disregard for accurate reporting to the public. We believe the media and academic community share a collective responsibility to prevent pseudoscience from masquerading as neuroscience.
  • They identify an "unacceptable gulf between, on the one hand, the evidence-bound conclusions reached in peer-reviewed scientific journals, and on the other, the heavy spin applied by scientists to achieve publicity in the media", and ask: "Are we as neuroscientists so unskilled at communicating with the public, or so low in our estimation of the public's intelligence, that we see no alternative but to mislead and exaggerate?"
  • Somewhere down the line, achieving an impact in the media seems to have become the goal in itself, rather than what it should be: a way to inform and engage the public with clarity and objectivity, without bias or prejudice. Our obsession with impact is not one-sided. The craving of scientists for publicity is fuelled by a hurried and unquestioning media, an academic community that disproportionately rewards publication in "high impact" journals such as Nature, and by research councils that emphasise the importance of achieving "impact" while at the same time delivering funding cuts. Academics are now pushed to attend media training courses, instructed about "pathways to impact", required to include detailed "impact summaries" when applying for grant funding, and constantly reminded about the importance of media engagement to further their careers. Yet where in all of this strategising and careerism is it made clear why public engagement is important? Where is it emphasised that the most crucial consideration in our interactions with the media is that we are accurate, honest and open about the limitations of our research?
Weiye Loh

Open science: a future shaped by shared experience | Education | The Observer - 0 views

  • one day he took one of these – finding a mathematical proof about the properties of multidimensional objects – and put his thoughts on his blog. How would other people go about solving this conundrum? Would somebody else have any useful insights? Would mathematicians, notoriously competitive, be prepared to collaborate? "It was an experiment," he admits. "I thought it would be interesting to try." He called it the Polymath Project and it rapidly took on a life of its own. Within days, readers, including high-ranking academics, had chipped in vital pieces of information or new ideas. In just a few weeks, the number of contributors had reached more than 40 and a result was on the horizon. Since then, the joint effort has led to several papers published in journals under the collective pseudonym DHJ Polymath. It was an astonishing and unexpected result.
  • "If you set out to solve a problem, there's no guarantee you will succeed," says Gowers. "But different people have different aptitudes and they know different tricks… it turned out their combined efforts can be much quicker."
  • There are many interpretations of what open science means, with different motivations across different disciplines. Some are driven by the backlash against corporate-funded science, with its profit-driven research agenda. Others are internet radicals who take the "information wants to be free" slogan literally. Others want to make important discoveries more likely to happen. But for all their differences, the ambition remains roughly the same: to try and revolutionise the way research is performed by unlocking it and making it more public.
  • Jackson is a young bioscientist who, like many others, has discovered that the technologies used in genetics and molecular biology, once the preserve of only the most well-funded labs, are now cheap enough to allow experimental work to take place in their garages. For many, this means that they can conduct genetic experiments in a new way, adopting the so-called "hacker ethic" – the desire to tinker, deconstruct, rebuild.
  • The rise of this group is entertainingly documented in a new book by science writer Marcus Wohlsen, Biopunk (Current £18.99), which describes the parallels between today's generation of biological innovators and the rise of computer software pioneers of the 1980s and 1990s. Indeed, Bill Gates has said that if he were a teenager today, he would be working on biotechnology, not computer software.
  • open scientists suggest that it doesn't have to be that way. Their arguments are propelled by a number of different factors that are making transparency more viable than ever. The first and most powerful change has been the use of the web to connect people and collect information. The internet, now an indelible part of our lives, allows like-minded individuals to seek one another out and share vast amounts of raw data. Researchers can lay claim to an idea not by publishing first in a journal (a process that can take many months) but by sharing their work online in an instant. And while the rapidly decreasing cost of previously expensive technical procedures has opened up new directions for research, there is also increasing pressure for researchers to cut costs and deliver results. The economic crisis left many budgets in tatters and governments around the world are cutting back on investment in science as they try to balance the books. Open science can, sometimes, make the process faster and cheaper, showing what one advocate, Cameron Neylon, calls "an obligation and responsibility to the public purse".
  • "The litmus test of openness is whether you can have access to the data," says Dr Rufus Pollock, a co-founder of the Open Knowledge Foundation, a group that promotes broader access to information and data. "If you have access to the data, then anyone can get it, use it, reuse it and redistribute it… we've always built on the work of others, stood on the shoulders of giants and learned from those who have gone before."
  • moves are afoot to disrupt the closed world of academic journals and make high-level teaching materials available to the public. The Public Library of Science, based in San Francisco, is working to make journals more freely accessible
  • it's more than just politics at stake – it's also a fundamental right to share knowledge, rather than hide it. The best example of open science in action, he suggests, is the Human Genome Project, which successfully mapped our DNA and then made the data public. In doing so, it outflanked J Craig Venter's proprietary attempt to patent the human genome, opening up the very essence of human life for science, rather than handing our biological information over to corporate interests.
  • the rise of open science does not please everyone. Critics have argued that while it benefits those at either end of the scientific chain – the well-established at the top of the academic tree or the outsiders who have nothing to lose – it hurts those in the middle. Most professional scientists rely on the current system for funding and reputation. Others suggest it is throwing out some of the most important elements of science and making deep, long-term research more difficult.
  • Open science proponents say that they do not want to make the current system a thing of the past, but that it shouldn't be seen as immutable either. In fact, they say, the way most people conceive of science – as a highly specialised academic discipline conducted by white-coated professionals in universities or commercial laboratories – is a very modern construction. It is only over the last century that scientific disciplines became industrialised and compartmentalised.
  • open scientists say they don't want to throw scientists to the wolves: they just want to help answer questions that, in many cases, are seen as insurmountable.
  • "Some people, very straightforwardly, said that they didn't like the idea because it undermined the concept of the romantic, lone genius." Even the most dedicated open scientists understand that appeal. "I do plan to keep going at them," he says of collaborative projects. "But I haven't given up on solitary thinking about problems entirely."
Weiye Loh

What Is Academic Work? - NYTimes.com - 0 views

  • After it was all over, everyone pronounced the occasion a great success; not because any substantive problems had been solved, but because a set of intellectual problems had been tossed around and teased out by men and women at the top of their game.
  • academic work is distinctive — something and not everything — and that a part of its distinctiveness is its distance from political agendas. This does not mean that political agendas can’t be the subject of academic work — one should inquire into their structure, history, etc. — but that the point of introducing them into the classroom should never be to urge them or to warn against them.
  • The conference format reflected its academic (not policy) imperatives. A presenter summarized his or her paper. A designated commentator posed sharp questions. The presenter responded and then the floor was opened to the other participants, who posed their own sharp questions to both the presenter and the commentator. The exchanges were swift and spirited. The room took on some of the aspects of an athletic competition — parry, thrust, soft balls, hard balls, palpable hits, ingenious defenses and a series of “well dones” said by everyone to everyone else at the end of each round.
  • The kind of questions asked also marked the occasion as an academic one. Not “Won’t the economy implode if we do this?” or “Wouldn’t free expression rights be eroded if we went down that path?”, but “Would you be willing to follow your argument to its logical conclusion?” or “Doesn’t that amount to just making up the law as you go along?” These questions were continuations of a philosophical conversation that stretches back at least to the beginning of the republic; and while they were illustrated by real-world topics (the pardon power, habeas corpus, the electoral college), the focus was always on the theoretical puzzles of which those topics were disposable examples; they were never the main show.
Weiye Loh

Oxford academic wins right to read UEA climate data | Environment | guardian.co.uk - 0 views

  • Jonathan Jones, physics professor at Oxford University and self-confessed "climate change agnostic", used freedom of information law to demand the data that is the life's work of the head of the University of East Anglia's Climatic Research Unit, Phil Jones. UEA resisted the requests to disclose the data, but this week it was compelled to do so.
  • Graham gave the UEA one month to deliver the data, which includes more than 4m individual thermometer readings taken from 4,000 weather stations over the past 160 years. The commissioner's office said this was his first ruling on demands for climate data made in the wake of the climategate affair.
  • an archive of world temperature records collected jointly with the Met Office.
  • Critics of the UEA's scientists say an independent analysis of the temperature data may reveal that Phil Jones and his colleagues have misinterpreted the evidence of global warming. They may have failed to allow for local temperature influences, such as the growth of cities close to many of the thermometers.
  • when Jonathan Jones and others asked for the data in the summer of 2009, the UEA said legal exemptions applied. It said variously that the temperature data were the property of foreign meteorological offices; were intellectual property that might be valuable if sold to other researchers; and were in any case often publicly available.
  • Jonathan Jones said this week that he took up the cause of data freedom after Steve McIntyre, a Canadian mathematician, had requests for the data turned down. He thought this was an unreasonable response when Phil Jones had already shared the data with academic collaborators, including Prof Peter Webster of the Georgia Institute of Technology in the US. He asked to be given the data already sent to Webster, and was also turned down.
  •  
    An Oxford academic has won the right to read previously secret data on climate change held by the University of East Anglia (UEA). The decision, by the government's information commissioner, Christopher Graham, is being hailed as a landmark ruling that will mean that thousands of British researchers are required to share their data with the public.
Weiye Loh

Let's make science metrics more scientific : Article : Nature - 0 views

  • Measuring and assessing academic performance is now a fact of scientific life.
  • Yet current systems of measurement are inadequate. Widely used metrics, from the newly fashionable Hirsch index to the 50-year-old citation index, are of limited use [1]
  • Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
  • narrow or biased measures of scientific achievement can lead to narrow and biased science.
  • Global demand for, and interest in, metrics should galvanize stakeholders — national funding agencies, scientific research organizations and publishing houses — to combine forces. They can set an agenda and foster research that establishes sound scientific metrics: grounded in theory, built with high-quality data and developed by a community with strong incentives to use them.
  • Scientists are often reticent to see themselves or their institutions labelled, categorized or ranked. Although happy to tag specimens as one species or another, many researchers do not like to see themselves as specimens under a microscope — they feel that their work is too complex to be evaluated in such simplistic terms. Some argue that science is unpredictable, and that any metric used to prioritize research money risks missing out on an important discovery from left field.
    • Weiye Loh
       
      It is ironic that while scientists feel that their work is too complex to be evaluated in simplistic terms or metrics, they nevertheless feel it is OK to evaluate the world in simplistic terms. 
  • It is true that good metrics are difficult to develop, but this is not a reason to abandon them. Rather it should be a spur to basing their development in sound science. If we do not press harder for better metrics, we risk making poor funding decisions or sidelining good scientists.
  • Metrics are data driven, so developing a reliable, joined-up infrastructure is a necessary first step.
  • We need a concerted international effort to combine, augment and institutionalize these databases within a cohesive infrastructure.
  • On an international level, the issue of a unique researcher identification system is one that needs urgent attention. There are various efforts under way in the open-source and publishing communities to create unique researcher identifiers using the same principles as the Digital Object Identifier (DOI) protocol, which has become the international standard for identifying unique documents. The ORCID (Open Researcher and Contributor ID) project, for example, was launched in December 2009 by parties including Thomson Reuters and Nature Publishing Group. The engagement of international funding agencies would help to push this movement towards an international standard.
  • if all funding agencies used a universal template for reporting scientific achievements, it could improve data quality and reduce the burden on investigators.
    • Weiye Loh
       
      So in future, we'll only have one robust metric to evaluate scientific contribution? hmm...
  • Importantly, data collected for use in metrics must be open to the scientific community, so that metric calculations can be reproduced. This also allows the data to be efficiently repurposed.
  • As well as building an open and consistent data infrastructure, there is the added challenge of deciding what data to collect and how to use them. This is not trivial. Knowledge creation is a complex process, so perhaps alternative measures of creativity and productivity should be included in scientific metrics, such as the filing of patents, the creation of prototypes [4] and even the production of YouTube videos.
  • Perhaps publications in these different media should be weighted differently in different fields.
  • There needs to be a greater focus on what these data mean, and how they can be best interpreted.
  • This requires the input of social scientists, rather than just those more traditionally involved in data capture, such as computer scientists.
  • An international data platform supported by funding agencies could include a virtual 'collaboratory', in which ideas and potential solutions can be posited and discussed. This would bring social scientists together with working natural scientists to develop metrics and test their validity through wikis, blogs and discussion groups, thus building a community of practice. Such a discussion should be open to all ideas and theories and not restricted to traditional bibliometric approaches.
  • Far-sighted action can ensure that metrics goes beyond identifying 'star' researchers, nations or ideas, to capturing the essence of what it means to be a good scientist.
  •  
    Let's make science metrics more scientific. By Julia Lane. Abstract: To capture the essence of good science, stakeholders must combine forces to create an open, sound and consistent system for measuring all the activities that make up academic productivity, says Julia Lane.
Weiye Loh
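The Hirsch index mentioned in the highlights above is simple enough to sketch in a few lines: an author has index h if h of their papers each have at least h citations. A minimal illustration in Python (the citation counts are invented for the example, not taken from the article):

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts:
    the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with ≥4 citations)
print(h_index([25, 8, 5, 3, 3]))  # → 3
```

Its very simplicity is the article's complaint: a single number like this captures none of the mentoring, blogging, data sharing or prototype building that the authors argue should also count as scientific contribution.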

Roger Pielke Jr.'s Blog: Innovation in Drug Development: An Inverse Moore's Law? - 0 views

  • Today's FT has this interesting graph and an accompanying story, showing a sort of inverse Moore's Law of drug development.  Over almost 60 years the number of new drugs developed per unit of investment has declined in a fairly constant manner, and some drug companies are now slashing their R&D budgets.
  • why this trend has occurred.  The FT points to a combination of low-hanging fruit that has been plucked and increasing costs of drug development. To some observers, that reflects the end of the mid to late 20th century golden era for drug discovery, when first-generation medicines such as antibiotics and beta-blockers to treat high blood pressure transformed healthcare. At the same time, regulatory demands to prove safety and efficacy have grown firmer. The result is larger and more costly clinical trials, and high failure rates for experimental drugs.
  • Others point to flawed innovation policies in industry and governments: “The markets treat drug companies as though research and development spending destroys value,” says Jack Scannell, an analyst at Bernstein Research. “People have stopped distinguishing the good from the bad. All those which performed well returned cash to shareholders. Unless the industry can articulate what the problem is, I don’t expect that to change.”
  • Mr [Andrew] Baum [of Morgan Stanley] argues that the solution for drug companies is to share the risks of research with others. That means reducing in-house investment in research, and instead partnering and licensing experimental medicines from smaller companies after some of the early failures have been eliminated.
  • Chas Bountra of Oxford university calls for a more radical partnership combining industry and academic research. “What we are trying to do is just too difficult,” he says. “No one organisation can do it, so we have to pool resources and expertise.” He suggests removing intellectual property rights until a drug is in mid-stage testing in humans, which would make academics more willing to co-operate because they could publish their results freely. The sharing of data would enable companies to avoid duplicating work.
  • The challenge is for academia and biotech companies to fill the research gap. Mr Ratcliffe argues that after a lull in 2009 and 2010, private capital is returning to the sector – as demonstrated by a particular buzz at JPMorgan’s new year biotech conference in California.
  • Patrick Vallance, senior vice-president for discovery at GSK, is cautious about deferring patents until so late, arguing that drug companies need to be able to protect their intellectual property in order to fund expensive late-stage development. But he too is experimenting with ways to co-operate more closely with academics over longer periods. He is also championing the “externalisation” of the company’s pipeline, with biotech and university partners accounting for half the total. GSK has earmarked £50m to support fledgling British companies, many “wrapped around” the group’s sites. One such example is Convergence, a spin-out from a GSK lab researching pain relief.
  • Big pharmaceutical companies are scrambling to find ways to overcome the loss of tens of billions of dollars in revenue as patents on top-selling drugs run out. Many sound similar notes about encouraging entrepreneurialism in their ranks, making smart deals and capitalizing on emerging-market growth. But their actual plans are often quite different—and each carries significant risks. Novartis AG, for instance, is so convinced that diversification is the best course that the company has a considerable business selling low-priced generics. Meantime, Bristol-Myers Squibb Co. has decided to concentrate on innovative medicines, shedding so many nonpharmaceutical units that it has become midsize. GlaxoSmithKline PLC is still investing in research, but like Pfizer it has narrowed the range of disease areas in which it's seeking new treatments. Underlying the divergence is a deep-seated philosophical dispute over the merits of the heavy investment that companies must make to discover new drugs. By most estimates, bringing a new molecule to market costs drug makers more than $1 billion. Industry officials have been engaged in a vigorous debate over whether the investment is worth it, or whether they should leave it to others whose work they can acquire or license after a demonstration of strong potential.
  • To what extent can approaches to innovation influence the trend line in the graph above?  I don't think that anyone really knows the answer.  The different approaches being taken by Merck and Pfizer, for instance, represent a real-world policy experiment: The contrast between Merck and Pfizer reflects the very different personal approaches of their CEOs. An accountant by training, Mr. Read has held various business positions during a three-decade career at Pfizer. The 57-year-old cited torcetrapib, a cholesterol medicine that the company spent more than $800 million developing but then pulled due to safety concerns, as an example of the kind of wasteful spending Pfizer would avoid. "We're going to have metrics," Mr. Read said. He wants Pfizer to stop "always investing on hope rather than strong signals and the quality of the science, the quality of the medicine." Mr. Frazier, 56, a Harvard-educated lawyer who joined Merck in 1994 from private practice, said the company was sticking by its own troubled heart drug, vorapaxar. Mr. Frazier said he wanted to see all of the data from the trials before rushing to judgment. "We believe in the innovation approach," he said.
Weiye Loh

Red-Wine Researcher Charged With 'Photoshop' Fraud - 0 views

  •  
    A University of Connecticut researcher known for touting the health benefits of red wine is guilty of 145 counts of fabricating and falsifying data with image-editing software, according to a 3-year university investigation made public Wednesday. The researcher, Dipak K. Das, PhD, is a director of the university's Cardiovascular Research Center (CRC) and a professor in the Department of Surgery. The university stated in a press release that it has frozen all externally funded research in Dr. Das's lab and turned down $890,000 in federal research grants awarded to him. The process to dismiss Dr. Das from the university is already underway, the university added.
Weiye Loh

The Matthew Effect § SEEDMAGAZINE.COM - 0 views

  • For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away. —Matthew 25:29
  • Sociologist Robert K. Merton was the first to publish a paper on the similarity between this phrase in the Gospel of Matthew and the realities of how scientific research is rewarded
  • Even if two researchers do similar work, the most eminent of the pair will get more acclaim, Merton observed—more praise within the community, more or better job offers, better opportunities. And it goes without saying that even if a graduate student publishes stellar work in a prestigious journal, their well-known advisor is likely to get more of the credit. 
  • Merton published his theory, called the “Matthew Effect,” in 1968. At that time, the average age of a biomedical researcher in the US receiving his or her first significant funding was 35 or younger. That meant that researchers who had little in terms of fame (at 35, they would have completed a PhD and a post-doc and would be just starting out on their own) could still get funded if they wrote interesting proposals. So Merton’s observation about getting credit for one’s work, however true in terms of prestige, wasn’t adversely affecting the funding of new ideas.
  • Over the last 40 years, the importance of fame in science has increased. The effect has compounded because famous researchers have gathered the smartest and most ambitious graduate students and post-docs around them, so that each notable paper from a high-wattage group bootstraps their collective power. The famous grow more famous, and the younger researchers in their coterie are able to use that fame to their benefit. The effect of this concentration of power has finally trickled down to the level of funding: The average age on first receipt of the most common “starter” grants at the NIH is now almost 42. This means younger researchers without the strength of a fame-based community are cut out of the funding process, and their ideas, separate from an older researcher’s sphere of influence, don’t get pursued. This causes a founder effect in modern science, where the prestigious few dictate the direction of research. It’s not only unfair—it’s also actively dangerous to science’s progress.
  • How can we fund science in a way that is fair? By judging researchers independently of their fame—in other words, not by how many times their papers have been cited. By judging them instead via new measures, measures that until recently have been too ephemeral to use.
  • Right now, the gold standard worldwide for measuring a scientist’s worth is the number of times his or her papers are cited, along with the importance of the journal where the papers were published. Decisions of funding, faculty positions, and eminence in the field all derive from a scientist’s citation history. But relying on these measures entrenches the Matthew Effect: Even when the lead author is a graduate student, the majority of the credit accrues to the much older principal investigator. And an influential lab can inflate its citations by referring to its own work in papers that themselves go on to be heavy-hitters.
  • what is most profoundly unbalanced about relying on citations is that the paper-based metric distorts the reality of the scientific enterprise. Scientists make data points, narratives, research tools, inventions, pictures, sounds, videos, and more. Journal articles are a compressed and heavily edited version of what happens in the lab.
  • We have the capacity to measure the quality of a scientist across multiple dimensions, not just in terms of papers and citations. Was the scientist’s data online? Was it comprehensible? Can I replicate the results? Run the code? Access the research tools? Use them to write a new paper? What ideas were examined and discarded along the way, so that I might know the reality of the research? What is the impact of the scientist as an individual, rather than the impact of the paper he or she wrote? When we can see the scientist as a whole, we’re less prone to relying on reputation alone to assess merit.
  • Multidimensionality is one of the only counters to the Matthew Effect we have available. In forums where this kind of meritocracy prevails over seniority, like Linux or Wikipedia, the Matthew Effect is much less pronounced. And we have the capacity to measure each of these individual factors of a scientist’s work, using the basic discourse of the Web: the blog, the wiki, the comment, the trackback. We can find out who is talented in a lab, not just who was smart enough to hire that talent. As we develop the ability to measure multiple dimensions of scientific knowledge creation, dissemination, and re-use, we open up a new way to recognize excellence. What we can measure, we can value.
  •  
    WHEN IT COMES TO SCIENTIFIC PUBLISHING AND FAME, THE RICH GET RICHER AND THE POOR GET POORER. HOW CAN WE BREAK THIS FEEDBACK LOOP?
Weiye Loh
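The cumulative-advantage dynamic Merton described is often modelled as preferential attachment: each new citation goes to a paper with probability proportional to the citations it already has, plus a small baseline so uncited papers can still be picked. A toy sketch of that feedback loop in Python — the parameters (100 papers, 5,000 citations, baseline of 1) are arbitrary choices for illustration, not figures from the article:

```python
import random

def simulate_citations(n_papers=100, n_citations=5000, baseline=1.0, seed=42):
    """Allocate citations one at a time; each new citation picks paper i
    with probability proportional to (current citations of i + baseline)."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    for _ in range(n_citations):
        weights = [c + baseline for c in counts]
        winner = rng.choices(range(n_papers), weights=weights)[0]
        counts[winner] += 1
    return sorted(counts, reverse=True)

counts = simulate_citations()
top10_share = sum(counts[:10]) / sum(counts)
print(f"top 10 of 100 papers hold {top10_share:.0%} of all citations")
```

Even in this crude model the sorted citation counts come out top-heavy: early random luck compounds, and a handful of "papers" end up holding a share of citations far beyond their numbers — the rich-get-richer loop the article describes.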

An insider's view of academic censorship in Singapore | Asian Correspondent - 0 views

  • Mark, who is now assistant professor of history at the University of Hong Kong, talks candidly about the censorship, both self-imposed and external, that guided his research and writing.
  • During my 6 years in the city, I definitely became ever more acutely aware of "political sensitivities". Thus, there were comments that came up in interviews with some of Singapore's former political detainees (interviews which are cited in the book) that were not included because they would have possibly resulted in libel actions. There were other things, such as the deviousness of LKY's political negotiations with the British in the late 50s and early 60s, which we could have gone into further (the details have been published) rather than just pointing to them in the footnotes. Was this the result of a subconscious self-censorship or a desire to move the story on? I'm still thinking about that one. But I do recall that, as a foreign academic working at the National Univ. of Singapore, you inevitably became careful about what sort of public criticism you directed at your paymasters. No doubt, this carefulness ultimately seeps into you (though I think good work can be done in Singapore, nevertheless, and many people in academia there continue to do it).
  • The decision to halt Singapore: a Biography in 1965, and in that sense narrow the narrative, was a very conscious one. I am still not comfortable tackling Singapore's political history after 1965, given the current political constraints in the Republic, and the official control of the archive. I have told publishers who have enquired about us extending the story or writing a sequel that this would involve a narrative far more critical of the ruling party. Repressive political measures that might have garnered a degree of popular support in the turbulent early-60s became, I believe, for many Singaporeans, less justifiable and more reprehensible in the 70s and 80s (culminating with the disgust that many people felt over the treatment of Catholic agitators involved in the so-called "Marxist conspiracy" of 1987).
  • As for the rise of the PAP, my personal view is that in the late 1950s the PAP was the only viable alternative to colonial rule, once Marshall had bailed - that is, in terms of getting Singapore out of its postwar social and economic predicament. As much as my heart is with the idealists who founded the Barisan, I'm not sure they would have achieved the same practical results as the PAP did in its first 5 years, had they got into power. There were already rifts in the Barisan prior to Operation Cold Store in 1963, and the more one looks into the party at this time, the more chaotic it appears. (Undoubtedly, this chaos was also a result of the pressures exerted upon it by the PAP.)
  • when the Barisan was systematically destroyed, hopeless though its leaders might have proved as technocrats, Singapore turned a corner. From 1963, economic success and political stability were won at the expense of freedom of expression and 'responsible dissent', generating a conformity, an intellectual sterility and a deep loss of historical identity that I hope the Epilogue to the book conveys. That's basically my take on the rise of the PAP. The party became something very different from 1963.
  •  
    An insider's view of academic censorship in Singapore
Weiye Loh

BBC News - Facebook v academia: The gloves are off - 0 views

  •  
    "But this latest story once again sparked headlines around the world, even if articles often made the point that the research was not peer-reviewed. What was different, however, was Facebook's reaction. Previously, its PR team has gone into overdrive behind the scenes to rubbish this kind of research but said nothing in public. This time they used a new tactic, humour, to undermine the story. Mike Develin, a data scientist for the social network, published a note on Facebook mocking the Princeton team's "innovative use of Google search trends". He went on to use the same techniques to analyse the university's own prospects, concluding that a decline in searches over recent years "suggests that Princeton will have only half its current enrollment by 2018, and by 2021 it will have no students at all". Now, who knows, Facebook may well face an uncertain future. But academics looking to predict its demise have been put on notice - the company employs some pretty smart scientists who may take your research apart and fire back. The gloves are off."
Weiye Loh

Roger Pielke Jr.'s Blog: New Bridges Column: The Origins of "Basic Research" - 0 views

  •  
    "The appealing imagery of a scientist who simply follows his curiosity and then makes a discovery with a large societal payoff is part of the core mythology of post-World War II science policies. The mythology shapes how governments around the world organize, account for, and fund research. A large body of scholarship has critiqued postwar science policies and found that, despite many notable successes, the science policies that may have made sense in the middle of the last century may need updating in the 21st century. In short, investments in "basic research" are not enough. Benoit Godin has asserted (PDF) that: "The problem is that the academic lobby has successfully claimed a monopoly on the creation of new knowledge, and that policy makers have been persuaded to confuse the necessary with the sufficient condition that investment in basic research would by itself necessarily lead to successful applications." Or as Leshner and Cooper declare in The Washington Post: "Federal investments in R&D have fueled half of the nation's economic growth since World War II." A closer look at the actual history of Google reveals how history becomes mythology. The 1994 NSF project that funded the scientific work underpinning the search engine that became Google (as we know it today) was conducted from the start with commercialization in mind: "The technology developed in this project will provide the 'glue' that will make this worldwide collection usable as a unified entity, in a scalable and economically viable fashion." In this case, the scientist following his curiosity had at least one eye simultaneously on commercialization."
Weiye Loh

Skepticblog » Further Thoughts on the Ethics of Skepticism - 0 views

  • My recent post “The War Over ‘Nice’” (describing the blogosphere’s reaction to Phil Plait’s “Don’t Be a Dick” speech) has topped out at more than 200 comments.
  • Many readers appear to object (some strenuously) to the very ideas of discussing best practices, seeking evidence of efficacy for skeptical outreach, matching strategies to goals, or encouraging some methods over others. Some seem to express anger that a discussion of best practices would be attempted at all. 
  • No Right or Wrong Way? The milder forms of these objections run along these lines: “Everyone should do their own thing.” “Skepticism needs all kinds of approaches.” “There’s no right or wrong way to do skepticism.” “Why are we wasting time on these abstract meta-conversations?”
  • More critical, in my opinion, is the implication that skeptical research and communication happens in an ethical vacuum. That just isn’t true. Indeed, it is dangerous for a field which promotes and attacks medical treatments, accuses people of crimes, opines about law enforcement practices, offers consumer advice, and undertakes educational projects to pretend that it is free from ethical implications — or obligations.
  • there is no monolithic “one true way to do skepticism.” No, the skeptical world does not break down to nice skeptics who get everything right, and mean skeptics who get everything wrong. (I’m reminded of a quote: “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”) No one has all the answers. Certainly I don’t, and neither does Phil Plait. Nor has anyone actually proposed a uniform, lockstep approach to skepticism. (No one has any ability to enforce such a thing, in any event.)
  • However, none of that implies that all approaches to skepticism are equally valid, useful, or good. As in other fields, various skeptical practices do more or less good, cause greater or lesser harm, or generate various combinations of both at the same time. For that reason, skeptics should strive to find ways to talk seriously about the practices and the ethics of our field. Skepticism has blossomed into something that touches a lot of lives — and yet it is an emerging field, only starting to come into its potential. We need to be able to talk about that potential, and about the pitfalls too.
  • All of the fields from which skepticism borrows (such as medicine, education, psychology, journalism, history, and even arts like stage magic and graphic design) have their own standards of professional ethics. In some cases those ethics are well-explored professional fields in their own right (consider medical ethics, a field with its own academic journals and doctoral programs). In other cases those ethical guidelines are contested, informal, vague, or honored more in the breach. But in every case, there are serious conversations about the ethical implications of professional practice, because those practices impact people’s lives. Why would skepticism be any different?
  • Skeptrack speaker Barbara Drescher (a cognitive psychologist who teaches research methodology) described the complexity of research ethics in her own field. Imagine, she said, that a psychologist were to ask research subjects a question like, "Do your parents like the color red?" Asking this may seem trivial and harmless, but it is nonetheless an ethical trade-off with associated risks (however small) that psychological researchers are ethically obliged to confront. What harm might that question cause if a research subject suffers from erythrophobia, or has a sick parent — or saw their parents stabbed to death?
  • When skeptics undertake scientific, historical, or journalistic research, we should (I argue) consider ourselves bound by some sort of research ethics. For now, we’ll ignore the deeper, detailed question of what exactly that looks like in practical terms (when can skeptics go undercover or lie to get information? how much research does due diligence require? and so on). I’d ask only that we agree on the principle that skeptical research is not an ethical free-for-all.
  • when skeptics communicate with the public, we take on further ethical responsibilities — as do doctors, journalists, and teachers. We all accept that doctors are obliged to follow some sort of ethical code, not only of due diligence and standard of care, but also in their confidentiality, manner, and the factual information they disclose to patients. A sentence that communicates a diagnosis, prescription, or piece of medical advice (“you have cancer” or “undertake this treatment”) is not a contextless statement, but a weighty, risky, ethically serious undertaking that affects people’s lives. It matters what doctors say, and it matters how they say it.
  • Grassroots Ethics: It happens that skepticism is my professional field. It's natural that I should feel bound by the central concerns of that field. How can we gain reliable knowledge about weird things? How can we communicate that knowledge effectively? And, how can we pursue that practice ethically?
  • At the same time, most active skeptics are not professionals. To what extent should grassroots skeptics feel obligated to consider the ethics of skeptical activism? Consider my own status as a medical amateur. I almost need super-caps-lock to explain how much I am not a doctor. My medical training began and ended with a couple First Aid courses (and those way back in the day). But during those short courses, the instructors drummed into us the ethical considerations of our minimal training. When are we obligated to perform first aid? When are we ethically barred from giving aid? What if the injured party is unconscious or delirious? What if we accidentally kill or injure someone in our effort to give aid? Should we risk exposure to blood-borne illnesses? And so on. In a medical context, ethics are determined less by professional status, and more by the harm we can cause or prevent by our actions.
  • police officers are barred from perjury, and journalists from libel — and so are the lay public. We expect schoolteachers not to discuss age-inappropriate topics with our young children, or to persuade our children to adopt their religion; when we babysit for a neighbor, we consider ourselves bound by similar rules. I would argue that grassroots skeptics take on an ethical burden as soon as they speak out on medical matters, legal matters, or other matters of fact, whether from platforms as large as network television, or as small as a dinner party. The size of that burden must depend somewhat on the scale of the risks: the number of people reached, the certainty expressed, the topics tackled.
  • tu-quoque argument.
  • How much time are skeptics going to waste, arguing in a circular firing squad about each other’s free speech? Like it or not, there will always be confrontational people. You aren’t going to get a group of people as varied as skeptics are, and make them all agree to “be nice”. It’s a pipe dream, and a waste of time.
  •  
    FURTHER THOUGHTS ON THE ETHICS OF SKEPTICISM