
Home/ New Media Ethics 2009 course/ Group items tagged Deliberation


Weiye Loh

Roger Pielke Jr.'s Blog: A Democracy Paradox in Studies of Science and Technology - 0 views

  • I am a co-author along with Eva Lövbrand and Silke Beck on a paper published in the current issue of Science, Technology and Human Values (which I also mentioned on this site last fall).  The paper is titled "A Democracy Paradox in Studies of Science and Technology," (it can also be found here in PDF) and it takes a close look at claims made by scholars who study science and technology that the governance of science and technology ought to be grounded in deliberation among experts and the general public. Political legitimacy, it is argued, derives from such deliberation. However, such claims are themselves almost universally grounded not in deliberation, but authority.  Hence the "democracy paradox."
  • Only when specifying and adhering to internally consistent criteria of legitimacy will students of science and technology be able to make a convincing case for more deliberative governance of science and technology. For my part (not speaking for my co-authors), appeals to deliberative democracy by science studies scholars cannot evade the paradox. Instead, we must look to other conceptions of democracy to understand the legitimate roles of science and expertise in governance.
Weiye Loh

Do peer reviewers get worse with experience? Plus a poll « Retraction Watch - 0 views

  • We’re not here to defend peer review against its many critics. We have the same feelings about it that Churchill did about democracy, aka the worst form of government except for all those others that have been tried. Of course, a good number of the retractions we write about are due to misconduct, and it’s not clear how peer review, no matter how good, would detect out-and-out fraud.
  • With that in mind, a paper published last week in the Annals of Emergency Medicine caught our eye. Over 14 years, 84 editors at the journal rated close to 15,000 reviews by about 1,500 reviewers. Highlights of their findings: …92% of peer reviewers deteriorated during 14 years of study in the quality and usefulness of their reviews (as judged by editors at the time of decision), at rates unrelated to the length of their service (but moderately correlated with their mean quality score, with better-than-average reviewers decreasing at about half the rate of those below average). Only 8% improved, and those by a very small amount.
  • The average reviewer in our study would have taken 12.5 years to reach this threshold; only 3% of reviewers whose quality decreased would have reached it in less than 5 years, and even the worst would take 3.2 years. Another 35% of all reviewers would reach the threshold in 5 to 10 years, 28% in 10 to 15 years, 12% in 15 to 20 years, and 22% in 20 years or more. So the decline was slow. Still, the results, note the authors, were surprising: Such a negative overall trend is contrary to most editors’ and reviewers’ intuitive expectations and beliefs about reviewer skills and the benefits of experience.
  • What could account for this decline? The study’s authors say it might be the same sort of decline you generally see as people get older. This is well-documented in doctors, so why shouldn’t it be true of doctors — and others — who peer review?
  • Other than the well-documented cognitive decline of humans as they age, there are other important possible causes of deterioration of performance that may play a role among scientific reviewers. Examples include premature closure of decisionmaking, less compliance with formal structural review requirements, and decay of knowledge base with time (ie, with aging more of the original knowledge base acquired in training becomes out of date). Most peer reviewers say their reviews have changed with experience, becoming shorter and focusing more on methods and larger issues; only 25% think they have improved.
  • Decreased cognitive performance capability may not be the only or even chief explanation. Competing career activities and loss of motivation as tasks become too familiar may contribute as well, by decreasing the time and effort spent on the task. Some research has concluded that the decreased productivity of scientists as they age is due not to different attributes or access to resources but to “investment motivation.” This is another way of saying that competition for the reviewer’s time (which is usually uncompensated) increases with seniority, as they develop (more enticing) opportunities for additional peer review, research, administrative, and leadership responsibilities and rewards. However, from the standpoint of editors and authors (or patients), whether the cause of the decrease is decreasing intrinsic cognitive ability or diminished motivation and effort does not matter. The result is the same: a less rigorous review by which to judge articles
  • What can be done? The authors recommend “deliberate practice,” which involves assessing one’s skills, accurately identifying areas of relative weakness, performing specific exercises designed to improve and extend those weaker skills, and investing high levels of concentration and hundreds or thousands of hours in the process. A key component of deliberate practice is immediate feedback on one’s performance. There’s a problem: But acting on prompt feedback (to guide deliberate practice) would be almost impossible for peer reviewers, who typically get no feedback (and qualitative research reveals this is one of their chief complaints).
Weiye Loh

Sunita Narain: Indian scientists: missing in action - 0 views

  • Since then there has been dead silence among the powerful scientific leaders of the country, with one exception. Kiran Karnik, a former employee of ISRO and board member of Devas.
  • when the scientists who understand the issue are not prepared to engage with the public, there can be little informed discussion. The cynical public, which sees scams tumble out each day, believes easily that everybody is a crook. But, as I said, the country’s top scientists have withdrawn further into their comfort holes, their opinion frozen in contempt that Indian society is scientifically illiterate. I can assure you in the future there will be even less conversation between scientists and all of us in the public sphere.
  • This is not good. Science is about everyday policy. It needs to be understood and for this it needs to be discussed and deliberated openly and strenuously. But how will this happen if one side — the one with information, knowledge and power — will not engage in public discourse?
  • I suspect Indian scientists have retired hurt to the pavilion. They were exposed to some nasty public scrutiny on a deal made by a premier science research establishment, Indian Space Research Organisation (ISRO), with Devas, a private company, on the allocation of spectrum. The public verdict was that the arrangement was a scandal; public resources had been given away for a song. The government, already scam-bruised, hastily scrapped the contract.
  • Take the issue of genetically-modified (GM) crops. For long this matter has been decided inside closed-door committee rooms, where scientists are comforted by the fact that their decisions will not be challenged. Their defence is “sound science” and “superior knowledge”. It is interesting that the same scientists will accept data produced by private companies pushing the product. Issues of conflict of interest will be brushed aside as integrity cannot be questioned behind closed doors. Silence is the best insurance. This is what happened inside a stuffy committee room, where scientists sat to give permission to Mahyco-Monsanto to grow genetically-modified brinjal.
  • This case involved a vegetable we all eat. This was a matter of science we had the right to know about and to decide upon. The issue made headlines. The reaction of the scientific fraternity was predictable and obnoxious. They resented the questions. They did not want a public debate.
  • As the controversy raged and more people got involved, the scientists ran for cover. They wanted none of this messy street fight. They were meant to advise prime ministers and the likes, not to answer simple questions of people. Finally, when environment minister Jairam Ramesh took the decision on the side of the ordinary vegetable eater, unconvinced by the validity of the scientific data to justify no-harm, scientists were missing in their public reactions. Instead, they whispered about lack of “sound science” in the decision inside committees.
  • The matter did not end there. The minister commissioned an inter-academy inquiry — six top scientific institutions looked into GM crops and Bt-brinjal — expecting a rigorous examination of the technical issues and data gaps. The report released by this committee was shoddy to say the least. It contained no references or attributions and not a single citation. It made sweeping statements and lifted passages from a government newsletter and even from the global biotech industry. The report was thrashed. Scientists again withdrew into offended silence.
  • The final report of this apex-science group is marginally better in that it includes citations but it reeks of scientific arrogance cloaked in jargon. The committee did not find it fit to review the matter, which had reached public scrutiny. The report is only a cover for their established opinion about the ‘truth’ of Bt-brinjal. Science for them is certainly not a matter of enquiry, critique or even dissent.
  • the world has changed. No longer is this report meant only for top political and policy leaders, who would be overwhelmed by the weight of the matter, the language and the expert knowledge of the writer. The report will be subjected to public scrutiny. Its lack of rigour will be deliberated, its unquestioned assertion challenged.
  • This is the difference between the manufactured comfortable world of science behind closed doors and the noisy messy world outside. It is clear to me that Indian scientists need confidence to creatively engage in public concerns. The task to build scientific literacy will not happen without their engagement and their tolerance for dissent. The role of science in Indian democracy is being revisited with a new intensity. The only problem is that the key players are missing in action.
Weiye Loh

Cut secrecy down to a minimum - 0 views

  • This is not an anarchist call for the ransacking of government files, in the manner of Julian Assange. WikiLeaks has raised the issue of whether the unauthorised and anarchic acquisition and leaking of government records is legally or ethically defensible. I don't wish to embark on that debate. I believe it is a distraction from a much more important debate about how to enhance the quality of political and public deliberation while drastically reducing secrecy.
  • If public policy is sound, it must be possible for the grounds of such policy to be made public without caveat and to withstand public scrutiny. We should not be left guessing, as we too commonly are; and deploring the evasions of politicians and their minions.
Weiye Loh

The Real Hoax Was Climategate | Media Matters Action Network - 0 views

  • Sen. Jim Inhofe's (R-OK) biggest claim to fame has been his oft-repeated line that global warming is "the greatest hoax ever perpetrated on the American people."
  • In 2003, he conceded that the earth was warming, but denied it was caused by human activity and suggested that "increases in global temperatures may have a beneficial effect on how we live our lives."
  • In 2009, however, he appeared on Fox News to declare that the earth was actually cooling, claiming "everyone understands that's the case" (they don't, because it isn't).
  • Inhofe's battle against climate science kicked into overdrive when a series of illegally obtained emails surfaced from the Climatic Research Unit at the University of East Anglia.
  • When the dubious reports surfaced about flawed science, manipulated data, and unsubstantiated studies, Inhofe was ecstatic.  In March, he viciously attacked former Vice President Al Gore for defending the science behind climate change
  • Unfortunately for Senator Inhofe, none of those things are true.  One by one, the pillars of evidence supporting the alleged "scandals" have shattered, causing the entire "Climategate" storyline to come crashing down. 
  • a panel established by the University of East Anglia to investigate the integrity of the research of the Climatic Research Unit wrote: "We saw no evidence of any deliberate scientific malpractice in any of the work of the Climatic Research Unit and had it been there we believe that it is likely that we would have detected it."
  • Responding to allegations that Dr. Michael Mann tampered with scientific evidence, Pennsylvania State University conducted a thorough investigation. It concluded: "The Investigatory Committee, after careful review of all available evidence, determined that there is no substance to the allegation against Dr. Michael E. Mann, Professor, Department of Meteorology, The Pennsylvania State University.  More specifically, the Investigatory Committee determined that Dr. Michael E. Mann did not engage in, nor did he participate in, directly or indirectly, any actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research, or other scholarly activities."
  • London's Sunday Times retracted its story, echoed by dozens of outlets, that the IPCC issued an unsubstantiated report claiming 40% of the Amazon rainforest was endangered due to changing rainfall patterns.  The Times wrote: "In fact, the IPCC's Amazon statement is supported by peer-reviewed scientific evidence. In the case of the WWF report, the figure had, in error, not been referenced, but was based on research by the respected Amazon Environmental Research Institute (IPAM) which did relate to the impact of climate change."
  • The Times also admitted it misrepresented the views of Dr. Simon Lewis, a Royal Society research fellow at the University of Leeds, implying he agreed with the article's false premise and believed the IPCC should not utilize reports issued by outside organizations.  In its retraction, the Times was forced to admit: "Dr Lewis does not dispute the scientific basis for both the IPCC and the WWF reports," and, "We accept that Dr Lewis holds no such view... A version of our article that had been checked with Dr Lewis underwent significant late editing and so did not give a fair or accurate account of his views on these points. We apologise for this."
  •  
    The Real Hoax Was Climategate July 02, 2010 1:44 pm ET by Chris Harris
juliet huang

Go slow with Net law - 4 views

Article : Go slow with tech law Published : 23 Aug 2009 Source: Straits Times Background : When Singapore signed a free trade agreement with the USA in 2003, intellectual property rights was a ...

sim lim square

started by juliet huang on 26 Aug 09 no follow-up yet
Weiye Loh

Op-Ed Columnist - The Moral Naturalists - NYTimes.com - 0 views

  • Moral naturalists, on the other hand, believe that we have moral sentiments that have emerged from a long history of relationships. To learn about morality, you don’t rely upon revelation or metaphysics; you observe people as they live.
  • By the time humans came around, evolution had forged a pretty firm foundation for a moral sense. Jonathan Haidt of the University of Virginia argues that this moral sense is like our sense of taste. We have natural receptors that help us pick up sweetness and saltiness. In the same way, we have natural receptors that help us recognize fairness and cruelty. Just as a few universal tastes can grow into many different cuisines, a few moral senses can grow into many different moral cultures.
  • Paul Bloom of Yale noted that this moral sense can be observed early in life. Bloom and his colleagues conducted an experiment in which they showed babies a scene featuring one figure struggling to climb a hill, another figure trying to help it, and a third trying to hinder it. At as early as six months, the babies showed a preference for the helper over the hinderer. In some plays, there is a second act. The hindering figure is either punished or rewarded. In this case, 8-month-olds preferred a character who was punishing the hinderer over ones being nice to it.
  • This illustrates, Bloom says, that people have a rudimentary sense of justice from a very early age. This doesn’t make people naturally good. If you give a 3-year-old two pieces of candy and ask him if he wants to share one of them, he will almost certainly say no. It’s not until age 7 or 8 that even half the children are willing to share. But it does mean that social norms fall upon prepared ground. We come equipped to learn fairness and other virtues.
  • If you ask for donations with the photo and name of one sick child, you are likely to get twice as much money as if you had asked for donations with a photo and the names of eight children. Our minds respond more powerfully to the plight of an individual than the plight of a group.
  • If you are in a bad mood you will make harsher moral judgments than if you’re in a good mood or have just seen a comedy. As Elizabeth Phelps of New York University points out, feelings of disgust will evoke a desire to expel things, even those things unrelated to your original mood. General fear makes people risk-averse. Anger makes them risk-seeking.
  • People who behave morally don’t generally do it because they have greater knowledge; they do it because they have a greater sensitivity to other people’s points of view.
  • The moral naturalists differ over what role reason plays in moral judgments. Some, like Haidt, believe that we make moral judgments intuitively and then construct justifications after the fact. Others, like Joshua Greene of Harvard, liken moral thinking to a camera. Most of the time we rely on the automatic point-and-shoot process, but occasionally we use deliberation to override the quick and easy method.
  • For people wary of abstract theorizing, it’s nice to see people investigating morality in ways that are concrete and empirical. But their approach does have certain implicit tendencies. They emphasize group cohesion over individual dissent. They emphasize the cooperative virtues, like empathy, over the competitive virtues, like the thirst for recognition and superiority. At this conference, they barely mentioned the yearning for transcendence and the sacred, which plays such a major role in every human society. Their implied description of the moral life is gentle, fair and grounded. But it is all lower case. So far, at least, it might not satisfy those who want their morality to be awesome, formidable, transcendent or great.
  •  
    The Moral Naturalists By DAVID BROOKS Published: July 22, 2010
Weiye Loh

The American Spectator : Can't Live With Them… - 1 views

  • Commentators have repeatedly told us in recent years that the gap between rich and poor has been widening. It is true, if you compare the income of those in the top fifth of earners with the income of those in the bottom fifth, that the spread between them increased between 1996 and 2005. But, as Sowell points out, this frequently cited figure is not counting the same people. If you look at individual taxpayers, Sowell notes, those who happened to be in the bottom fifth in 1996 saw their incomes nearly double over the decade, while those who happened to be in the top fifth in 1996 saw gains of only 10 percent on average and those in the top 5 percent actually experienced a decline in their incomes. Similar distortions are perpetrated by those bewailing "stagnation" in average household incomes -- without taking into account that households have been getting smaller, as rising wealth allows people to move out of large family homes.
  • Sometimes the distortion seems to be deliberate. Sowell gives the example of an ABC news report in the 1980s focusing on five states where "unemployment is most severe" -- without mentioning that unemployment was actually declining in all the other 45 states. Sometimes there seems to be willful incomprehension. Journalists have earnestly reported that "prisons are ineffective" because two-thirds of prisoners are rearrested within three years of their release. As Sowell comments: "By this kind of reasoning, food is ineffective as a response to hunger because it is only a matter of time after eating before you get hungry again. Like many other things, incarceration only works when it is done."
  • why do intellectuals often seem so lacking in common sense? Sowell thinks it goes with the job, literally: he defines "intellectuals" as "an occupational category [Sowell's emphasis], people whose occupations deal primarily with ideas -- writers, academics and the like." Medical researchers or engineers or even "financial wizards" may apply specialized knowledge in ways that require great intellectual skill, but that does not make them "intellectuals," in Sowell's view: "An intellectual's work begins and ends with ideas [Sowell's emphasis]." So an engineer "is ruined" if his bridges or buildings collapse and so with a financier who "goes broke… the proof of the pudding is ultimately in the eating…. but the ultimate test of a [literary] deconstructionist's ideas is whether other deconstructionists find those ideas interesting, original, persuasive, elegant or ingenious. There is no external test." The ideas dispensed by intellectuals aren't subject to "external" checks or exposed to the test of "verifiability" (apart from what "like-minded individuals" find "plausible") and so intellectuals are not really "accountable" in the same way as people in other occupations.
  • it is not quite true, even among tenured professors in the humanities, that idea-mongers can entirely ignore "external" checks. Even academics want to be respectable, which means they can't entirely ignore the realities that others notice. There were lots of academics talking about the achievements of socialism in the 1970s (I can remember them) but very few talking that way after China and Russia repudiated these fantasies.
  • THE MOST DISTORTING ASPECT of Sowell's account is that, in focusing so much on the delusions of intellectuals, he leaves us more confused about what motivates the rest of society. In a characteristic passage, Sowell protests that "intellectuals...have sought to replace the groups into which people have sorted themselves with groupings created and imposed by the intelligentsia. Ties of family, religion, and patriotism, for example, have long been rated as suspect or detrimental by the intelligentsia, and new ties that intellectuals have created, such as class -- and more recently 'gender' -- have been projected as either more real or more important."
  • There's no disputing the claim that most "intellectuals" -- surely most professors in the humanities-are down on "patriotism" and "religion" and probably even "family." But how did people get to be patriotic and religious in the first place? In Sowell's account, they just "sorted themselves" -- as if by the invisible hand of the market.
  • Let's put aside all the violence and intimidation that went into building so many nations and so many faiths in the past. What is it, even today, that makes people revere this country (or some other); what makes people adhere to a particular faith or church? Don't inspiring words often move people? And those who arrange these words -- aren't they doing something similar to what Sowell says intellectuals do? Is it really true, when it comes to embracing national or religious loyalties, that "the proof of the pudding is in the eating"?
  • Even when it comes to commercial products, people don't always want to be guided by mundane considerations of reliable performance. People like glamour, prestige, associations between the product and things they otherwise admire. That's why companies spend so much on advertising. And that's part of the reason people are willing to pay more for brand names -- to enjoy the associations generated by advertising. Even advertising plays on assumptions about what is admirable and enticing, assumptions that may change from decade to decade, as background opinions change. How many products now flaunt themselves as "green" -- and how many did so 20 years ago?
  • If we closed down universities and stopped subsidizing intellectual publications, would people really judge every proposed policy by external results? Intellectuals tend to see what they expect to see, as Sowell's examples show -- but that's true of almost everyone. We have background notions about how the world works that help us make sense of what we experience. We might have distorted and confused notions, but we don't just perceive isolated facts. People can improve in their understanding, developing background understandings that are more defined or more reliable. That's part of what makes people interested in the ideas of intellectuals -- the hope of improving their own understanding.
  • On Sowell's account, we wouldn't need the contributions of a Friedrich Hayek -- or a Thomas Sowell -- if we didn't have so many intellectuals peddling so many wrong-headed ideas. But the wealthier the society, the more it liberates individuals to make different choices and the more it can afford to indulge even wasteful or foolish choices. I'd say that means not that we have less need of intellectuals, but more need of better ones. 
Weiye Loh

Rationally Speaking: On Utilitarianism and Consequentialism - 0 views

  • Utilitarianism and consequentialism are different, yet closely related philosophical positions. Utilitarians are usually consequentialists, and the two views mesh in many areas, but each rests on a different claim
  • Utilitarianism's starting point is that we all attempt to seek happiness and avoid pain, and therefore our moral focus ought to center on maximizing happiness (or, human flourishing generally) and minimizing pain for the greatest number of people. This is both about what our goals should be and how to achieve them.
  • Consequentialism asserts that determining the greatest good for the greatest number of people (the utilitarian goal) is a matter of measuring outcome, and so decisions about what is moral should depend on the potential or realized costs and benefits of a moral belief or action.
  • first question we can reasonably ask is whether all moral systems are indeed focused on benefiting human happiness and decreasing pain.
  • Jeremy Bentham, the founder of utilitarianism, wrote the following in his Introduction to the Principles of Morals and Legislation: “When a man attempts to combat the principle of utility, it is with reasons drawn, without his being aware of it, from that very principle itself.”
  • Michael Sandel discusses this line of thought in his excellent book, Justice: What’s the Right Thing to Do?, and sums up Bentham’s argument as such: “All moral quarrels, properly understood, are [for Bentham] disagreements about how to apply the utilitarian principle of maximizing pleasure and minimizing pain, not about the principle itself.”
  • But Bentham’s definition of utilitarianism is perhaps too broad: are fundamentalist Christians or Muslims really utilitarians, just with different ideas about how to facilitate human flourishing?
  • one wonders whether this makes the word so all-encompassing in meaning as to render it useless.
  • Yet, even if pain and happiness are the objects of moral concern, so what? As philosopher Simon Blackburn recently pointed out, “Every moral philosopher knows that moral philosophy is functionally about reducing suffering and increasing human flourishing.” But is that the central and sole focus of all moral philosophies? Don’t moral systems vary in their core focuses?
  • Consider the observation that religious belief makes humans happier, on average
  • Secularists would rightly resist the idea that religious belief is moral if it makes people happier. They would reject the very idea because deep down, they value truth – a value that is non-negotiable. Utilitarians would assert that truth is just another utility, for people can only value truth if they take it to be beneficial to human happiness and flourishing.
  • We might all agree that morality is “functionally about reducing suffering and increasing human flourishing,” as Blackburn says, but how do we achieve that? Consequentialism posits that we can get there by weighing the consequences of beliefs and actions as they relate to human happiness and pain. Sam Harris recently wrote: “It is true that many people believe that ‘there are non-consequentialist ways of approaching morality,’ but I think that they are wrong. In my experience, when you scratch the surface on any deontologist, you find a consequentialist just waiting to get out. For instance, I think that Kant's Categorical Imperative only qualifies as a rational standard of morality given the assumption that it will be generally beneficial (as J.S. Mill pointed out at the beginning of Utilitarianism). Ditto for religious morality.”
  • we might wonder about the elasticity of words, in this case consequentialism. Do fundamentalist Christians and Muslims count as consequentialists? Is consequentialism so empty of content that to be a consequentialist one need only think he or she is benefiting humanity in some way?
  • Harris’ argument is that one cannot adhere to a certain conception of morality without believing it is beneficial to society
  • This still seems somewhat obvious to me as a general statement about morality, but is it really the point of consequentialism? Not really. Consequentialism is much more focused than that. Consider the issue of corporal punishment in schools. Harris has stated that we would be forced to admit that corporal punishment is moral if studies showed that “subjecting children to ‘pain, violence, and public humiliation’ leads to ‘healthy emotional development and good behavior’ (i.e., it conduces to their general well-being and to the well-being of society). If it did, well then yes, I would admit that it was moral. In fact, it would appear moral to more or less everyone.” Harris is being rhetorical – he does not believe corporal punishment is moral – but the point stands.
  • An immediate pitfall of this approach is that it does not qualify corporal punishment as the best way to raise emotionally healthy children who behave well.
  • The virtue ethicists inside us would argue that we ought not to foster a society in which people beat and humiliate children, never mind the consequences. There is also a reasonable and powerful argument based on personal freedom. Don’t children have the right to be free from violence in the public classroom? Don’t children have the right not to suffer intentional harm without consent? Isn’t that part of their “moral well-being”?
  • If consequences were really at the heart of all our moral deliberations, we might live in a very different society.
  • what if economies based on slavery lead to an increase in general happiness and flourishing for their respective societies? Would we admit slavery was moral? I hope not, because we value certain ideas about human rights and freedom. Or, what if the death penalty truly deterred crime? And what if we knew everyone we killed was guilty as charged, meaning no need for The Innocence Project? I would still object, on the grounds that it is morally wrong for us to kill people, even if they have committed the crime of which they are accused. Certain things hold, no matter the consequences.
  • We all do care about increasing human happiness and flourishing, and decreasing pain and suffering, and we all do care about the consequences of our beliefs and actions. But we focus on those criteria to differing degrees, and we have differing conceptions of how to achieve the respective goals – making us perhaps utilitarians and consequentialists in part, but not in whole.
  •  
    Is everyone a utilitarian and/or consequentialist, whether or not they know it? That is what some people - from Jeremy Bentham and John Stuart Mill to Sam Harris - would have you believe. But there are good reasons to be skeptical of such claims.
Weiye Loh

Nature's choices : Article : Nature - 0 views

  • Another long-standing myth is that we allow one negative referee to determine the rejection of a paper. On the contrary, there were several occasions last year when all the referees were underwhelmed by a paper, yet we published it on the basis of our own estimation of its worth. That internal assessment has always been central to our role; Nature has never had an editorial board. Our editors spend several weeks a year in scientific meetings and labs, and are constantly reading the literature. Papers selected for review are seen by two or more referees. The number of referees is greater for multidisciplinary papers. We act on any technical concerns and we value the referees' opinions about a paper's potential significance or lack thereof. But we make the final call on the basis of criteria such as the paper's depth of mechanistic insight, or its value as a data resource or in enabling applications of an innovative technique.
    • Weiye Loh
       
      So even when scientists disagree with the research, the journal may still choose to publish it based on their non-scientifically trained insights? hmm...
  • controversies over scientific conclusions in fields such as climate change can have the effect — deliberate or otherwise — of undermining the public's faith in science.
  • One myth that never seems to die is that Nature's editors seek to inflate the journal's impact factor by sifting through submitted papers (some 16,000 last year) in search of those that promise a high citation rate. We don't. Not only is it difficult to predict what a paper's citation performance will be, but citations are an unreliable measure of importance. Take two papers in synthetic organic chemistry, both published in June 2006. One, 'Control of four stereocentres in a triple cascade organocatalytic reaction' (D. Enders et al. Nature 441, 861–863; 2006), had acquired 182 citations by late 2009, and was the fourth most cited chemistry paper that we published that year. Another, 'Synthesis and structural analysis of 2-quinuclidonium tetrafluoroborate' (K. Tani and B. M. Stoltz Nature 441, 731–734; 2006), had acquired 13 citations over the same period. Yet the latter paper was highlighted as an outstanding achievement in Chemical and Engineering News, the magazine of the American Chemical Society.
  • we operate on the strict principle that our decisions are not influenced by the identity or location of any author. Almost all our papers have multiple authors, often from several countries. And we commonly reject papers whose authors happen to include distinguished or 'hot' scientists.
  • Yet another myth is that we rely on a small number of privileged referees in any given discipline. In fact, we used nearly 5,400 referees last year, and are constantly recruiting more — especially younger researchers with hands-on expertise in newer techniques. We use referees from around the scientifically developed world, whether or not they have published papers with us, and avoid those with a track record of slow response. And in highly competitive areas, we will usually follow authors' requests and our own judgement in avoiding referees with known conflicts of interest.
  •  
    Editorial, Nature 463, 850 (18 February 2010) | doi:10.1038/463850a; published online 17 February 2010. "Nature's choices". Abstract: Exploding the myths surrounding how and why we select our research papers.
Weiye Loh

Review: What Rawls Hath Wrought | The National Interest - 0 views

  • Almost never used in English before the 1940s, “human rights” were mentioned in the New York Times five times as often in 1977 as in any prior year of the newspaper’s history. By the nineties, human rights had become central to the thinking not only of liberals but also of neoconservatives, who urged military intervention and regime change in the faith that these freedoms would blossom once tyranny was toppled. From being almost peripheral, the human-rights agenda found itself at the heart of politics and international relations.
  • In fact, it has become entrenched in extremis: nowadays, anyone who is skeptical about human rights is angrily challenged
  • The contemporary human-rights movement is demonstrably not the product of a revulsion against the worst crimes of Nazism. For one thing, the Holocaust did not figure in the deliberations that led up to the Universal Declaration of Human Rights adopted by the UN in 1948.
  • Contrary to received history, the rise of human rights had very little to do with the worst crime against humanity ever committed.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
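The statistical story running through these annotations (the five-per-cent significance filter, publication bias, regression to the mean) can be sketched in a short simulation. Everything below is an illustrative assumption rather than data from any study cited above: the true effect size, the sample sizes, and the simplified z-test are all invented for the sketch.

```python
# Illustrative sketch: how the p < 0.05 publication filter plus regression
# to the mean can produce a "decline effect" on its own. All numbers here
# are invented for the demonstration.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed true standardized mean difference
N = 30              # subjects per group in each study
STUDIES = 2000      # how many initial studies get run

def run_study():
    """Run one two-group study; return (observed effect, significant?)."""
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (2.0 / N) ** 0.5                 # known sd = 1, so a simple z-test
    significant = abs(diff / se) > 1.96   # the five-per-cent boundary
    return diff, significant

published_initial = []   # effects from initial studies that got published
replications = []        # one replication per published study
for _ in range(STUDIES):
    effect, significant = run_study()
    if significant:      # journals prefer significant results...
        published_initial.append(effect)
        # ...but a replication of an already-published claim is
        # reported either way.
        replications.append(run_study()[0])

print(f"true effect:            {TRUE_EFFECT:.2f}")
print(f"mean published initial: {statistics.mean(published_initial):.2f}")
print(f"mean of replications:   {statistics.mean(replications):.2f}")
```

Because only the initial studies that clear the significance bar get "published", the first published estimates cluster well above the true effect; unbiased replications then drift back down, producing an apparent decline with no change at all in the underlying phenomenon.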
Weiye Loh

The Mysterious Decline Effect | Wired Science | Wired.com - 0 views

  • Question #1: Does this mean I don’t have to believe in climate change? Me: I’m afraid not. One of the sad ironies of scientific denialism is that we tend to be skeptical of precisely the wrong kind of scientific claims. In poll after poll, Americans have dismissed two of the most robust and widely tested theories of modern science: evolution by natural selection and climate change. These are theories that have been verified in thousands of different ways by thousands of different scientists working in many different fields. (This doesn’t mean, of course, that such theories won’t change or get modified – the strength of science is that nothing is settled.) Instead of wasting public debate on creationism or the rhetoric of Senator Inhofe, I wish we’d spend more time considering the value of spinal fusion surgery, or second generation antipsychotics, or the verity of the latest gene association study. The larger point is that we need to do a better job of considering the context behind every claim. In 1951, the Harvard philosopher Willard Van Orman Quine published “Two Dogmas of Empiricism.” In the essay, Quine compared the truths of science to a spider’s web, in which the strength of the lattice depends upon its interconnectedness. (Quine: “The unit of empirical significance is the whole of science.”) One of the implications of Quine’s paper is that, when evaluating the power of a given study, we need to also consider the other studies and untested assumptions that it depends upon. Don’t just fixate on the effect size – look at the web. Unfortunately for the denialists, climate change and natural selection have very sturdy webs.
  • biases are not fraud. We sometimes forget that science is a human pursuit, mingled with all of our flaws and failings. (Perhaps that explains why an episode like Climategate gets so much attention.) If there’s a single theme that runs through the article it’s that finding the truth is really hard. It’s hard because reality is complicated, shaped by a surreal excess of variables. But it’s also hard because scientists aren’t robots: the act of observation is simultaneously an act of interpretation.
  • (As Paul Simon sang, “A man sees what he wants to see and disregards the rest.”) Most of the time, these distortions are unconscious – we don’t even know we are misperceiving the data. However, even when the distortion is intentional it still rarely rises to the level of outright fraud. Consider the story of Mike Rossner. He’s executive director of the Rockefeller University Press, and helps oversee several scientific publications, including The Journal of Cell Biology. In 2002, while trying to format a scientific image in Photoshop that was going to appear in one of the journals, Rossner noticed that the background of the image contained distinct intensities of pixels. “That’s a hallmark of image manipulation,” Rossner told me. “It means the scientist has gone in and deliberately changed what the data looks like. What’s disturbing is just how easy this is to do.” This led Rossner and his colleagues to begin analyzing every image in every accepted paper. They soon discovered that approximately 25 percent of all papers contained at least one “inappropriately manipulated” picture. Interestingly, the vast, vast majority of these manipulations (~99 percent) didn’t affect the interpretation of the results. Instead, the scientists seemed to be photoshopping the pictures for aesthetic reasons: perhaps a line on a gel was erased, or a background blur was deleted, or the contrast was exaggerated. In other words, they wanted to publish pretty images. That’s a perfectly understandable desire, but it gets problematic when that same basic instinct – we want our data to be neat, our pictures to be clean, our charts to be clear – is transposed across the entire scientific process.
  • One of the philosophy papers that I kept on thinking about while writing the article was Nancy Cartwright’s essay “Do the Laws of Physics State the Facts?” Cartwright used numerous examples from modern physics to argue that there is often a basic trade-off between scientific “truth” and experimental validity, so that the laws that are the most true are also the most useless. “Despite their great explanatory power, these laws [such as gravity] do not describe reality,” Cartwright writes. “Instead, fundamental laws describe highly idealized objects in models.”  The problem, of course, is that experiments don’t test models. They test reality.
  • Cartwright’s larger point is that many essential scientific theories – those laws that explain things – are not actually provable, at least in the conventional sense. This doesn’t mean that gravity isn’t true or real. There is, perhaps, no truer idea in all of science. (Feynman famously referred to gravity as the “greatest generalization achieved by the human mind.”) Instead, what the anomalies of physics demonstrate is that there is no single test that can define the truth. Although we often pretend that experiments and peer-review and clinical trials settle the truth for us – that we are mere passive observers, dutifully recording the results – the actuality of science is a lot messier than that. Richard Rorty said it best: “To say that we should drop the idea of truth as out there waiting to be discovered is not to say that we have discovered that, out there, there is no truth.” Of course, the very fact that the facts aren’t obvious, that the truth isn’t “waiting to be discovered,” means that science is intensely human. It requires us to look, to search, to plead with nature for an answer.
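Rossner's observation about “distinct intensities of pixels” lends itself to a small sketch of the underlying idea: a real photographic background carries sensor noise, so a patch of pixels with identical values suggests an erase or flood-fill edit. To be clear, the heuristic, function name, and threshold below are assumptions invented for illustration, not the screening procedure his team actually used.

```python
# Toy illustration (assumed heuristic, not Rossner's actual method):
# flag image patches whose pixel intensities are suspiciously uniform,
# since a flood-filled region lacks the noise of a real photograph.
import statistics

def patch_looks_filled(patch, min_stdev=0.5):
    """Flag a patch whose pixel intensities are suspiciously uniform."""
    values = [v for row in patch for v in row]
    return statistics.pstdev(values) < min_stdev

# A noisy, camera-like background patch...
natural = [[100, 102, 99], [101, 100, 103], [98, 100, 101]]
# ...versus one that was flood-filled in an image editor.
filled = [[100, 100, 100], [100, 100, 100], [100, 100, 100]]

print(patch_looks_filled(natural))  # noisy background: not flagged
print(patch_looks_filled(filled))   # uniform background: flagged
```

A flood-filled region has zero variance, while even a visually flat photographic background does not, which is why a simple spread statistic can flag suspicious patches.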
Weiye Loh

BBC News - Graduates - the new measure of power - 0 views

  • There are more universities operating in other countries, recruiting students from overseas, setting up partnerships, providing online degrees and teaching in other languages than ever before. Capturing the moment: South Korea has turned itself into a global player in higher education Chinese students are taking degrees taught in English in Finnish universities; the Sorbonne is awarding French degrees in Abu Dhabi; US universities are opening in China and South Korean universities are switching teaching to English so they can compete with everyone else. It's like one of those board games where all the players are trying to move on to everyone else's squares. It's not simply a case of western universities looking for new markets. Many countries in the Middle East and Asia are deliberately seeking overseas universities, as a way of fast-forwarding a research base.
  • "There's a world view that universities, and the most talented people in universities, will operate beyond sovereignty. "Much like in the renaissance in Europe, when the talent class and the creative class travelled among the great idea capitals, so in the 21st century, the people who carry the ideas that will shape the future will travel among the capitals.
  • "But instead of old European names it will be names like Shanghai and Abu Dhabi and London and New York. Those universities will be populated by those high-talent people." New York University, one of the biggest private universities in the US, has campuses in New York and Abu Dhabi, with plans for another in Shanghai. It also has a further 16 academic centres around the world. Mr Sexton sets out a different kind of map of the world, in which universities, with bases in several cities, become the hubs for the economies of the future, "magnetising talent" and providing the ideas and energy to drive economic innovation.
  • Universities are also being used as flag carriers for national economic ambitions - driving forward modernisation plans. For some it's been a spectacularly fast rise. According to the OECD, in the 1960s South Korea had a similar national wealth to Afghanistan. Now it tops international education league tables and has some of the highest-rated universities in the world. The Pohang University of Science and Technology in South Korea was only founded in 1986 - and is now in the top 30 of the Times Higher's global league table, elbowing past many ancient and venerable institutions. It also wants to compete on an international stage so the university has decided that all its graduate programmes should be taught in English rather than Korean.
  • governments want to use universities to upgrade their workforce and develop hi-tech industries.
  • "Universities are being seen as a key to the new economies, they're trying to grow the knowledge economy by building a base in universities," says Professor Altbach. Families, from rural China to eastern Europe, are also seeing university as a way of helping their children to get higher-paid jobs. A growing middle-class in India is pushing an expansion in places. Universities also stand to gain from recruiting overseas. "Universities in the rich countries are making big bucks," he says. This international trade is worth at least $50 billion a year, he estimates, the lion's share currently being claimed by the US.
  • Technology, much of it hatched on university campuses, is also changing higher education and blurring national boundaries.
  • It raises many questions too. What are the expectations of this Facebook generation? They might have degrees and be able to see what is happening on the other side of the world, but will there be enough jobs to match their ambitions? Who is going to pay for such an expanded university system? And what about those who will struggle to afford a place?
  • The success of the US system is not just about funding, says Professor Altbach. It's also because it's well run and research is effectively organised. "Of course there are lots of lousy institutions in the US, but overall the system works well." "Developed economies are already highly dependent on universities and if anything that reliance will increase," says David Willetts, UK universities minister. The status of the US system has been bolstered by the link between its university research and developing hi-tech industries. Icons of the internet age such as Google and Facebook grew out of US campuses.
Weiye Loh

Real Climate faces libel suit | Environment | guardian.co.uk - 0 views

  • Gavin Schmidt, a climate modeller and Real Climate member based at Nasa's Goddard Institute for Space Studies in New York, has claimed that Energy & Environment (E&E) has "effectively dispensed with substantive peer review for any papers that follow the editor's political line." The journal denies the claim, and, according to Schmidt, has threatened to take further action unless he retracts it.
  • Every paper that is submitted to the journal is vetted by a number of experts, she said. But she did not deny that she allows her political agenda to influence which papers are published in the journal. "I'm not ashamed to say that I deliberately encourage the publication of papers that are sceptical of climate change," said Boehmer-Christiansen, who does not believe in man-made climate change.
  • Simon Singh, a science writer who last year won a major libel battle with the British Chiropractic Association (BCA), said: "A libel threat is potentially catastrophic. It can lead to a journalist going bankrupt or a blogger losing his house. A lot of journalists and scientists will understandably react to the threat of libel by retracting their articles, even if they are confident they are correct. So I'm delighted that Gavin Schmidt is going to stand up for what he has written." During the case with the BCA, Singh also received a libel threat in response to an article he had written about climate change, but Singh stood by what he had written and the threat was not carried through.
  • Schmidt has refused to retract his comments and maintains that the majority of papers published in the journal are "dross". "I would personally not credit any article that was published there with any useful contribution to the science," he told the Guardian. "Saying a paper was published in E&E has become akin to immediately discrediting it." He also describes the journal as a "backwater" of poorly presented and incoherent contributions that "anyone who has done any science can see are fundamentally flawed from the get-go."
  • Schmidt points to an E&E paper that claimed that the Sun is made of iron. "The editor sent it out for review, where it got trashed (as it should have been), and [Boehmer-Christiansen] published it anyway," he says.
  • The journal also published a much-maligned analysis suggesting that levels of the greenhouse gas carbon dioxide could go up and down by 100 parts per million in a year or two, prompting marine biologist Ralph Keeling at the Scripps Institution of Oceanography in La Jolla, California to write a response to the journal, in which he asked: "Is it really the intent of E&E to provide a forum for laundering pseudo-science?"
  • Schmidt and Keeling are not alone in their criticisms. Roger Pielke Jr, a professor of environmental studies at the University of Colorado, said he regrets publishing a paper in the journal in 2000 – one year after it was established and before he had time to realise that it was about to become a fringe platform for climate sceptics. "[E&E] has published a number of low-quality papers, and the editor's political agenda has clearly undermined the legitimacy of the outlet," Pielke says. "If I had a time machine I'd go back and submit our paper elsewhere."
  • Any paper published in E&E is now ignored by the broader scientific community, according to Pielke. "In some cases perhaps that is justified, but I would argue that it provided a convenient excuse to ignore our paper on that basis alone, and not on the merits of its analysis," he said. In the long run, Pielke is confident that good ideas will win out over bad ideas. "But without care to the legitimacy of our science institutions – including journals and peer review – that long run will be a little longer," he says.
  • she has no intention of changing the way she runs E&E – which is not listed on the ISI Journal Master list, an official list of academic journals – in response to his latest criticisms.
  • Schmidt is unsurprised. "You would need a new editor, new board of advisors, and a scrupulous adherence to real peer review, perhaps ... using an open review process," he said. "But this is very unlikely to happen since their entire raison d'être is political, not scientific."
Weiye Loh

DenialDepot: A word of caution to the BEST project team - 0 views

  • 1) Any errors, however inconsequential, will be taken Very Seriously and accusations of fraud will be made.
  • 2) If you adjust the raw data we will accuse you of fraudulently fiddling the figures whilst cooking the books.3) If you don't adjust the raw data we will accuse you of fraudulently failing to account for station biases and UHI.
  • 7) By all means publish all your source code, but we will still accuse you of hiding the methodology for your adjustments.
  • 8) If you publish results to your website and errors are found, we will accuse you of a Very Serious Error irregardless of severity (see point #1) and bemoan the press release you made about your results even though you won't remember making any press release about your results.
  • 9) With regard to point #8 above, at extra cost and time to yourself you must employ someone to thoroughly check each monthly update before it is published online, even if this delays publication of the results till the end of the month. You might be surprised at this because no-one actually relies on such freshly published data anyway and aren't the many eyes of blog audit better than a single pair of eyes? Well that's irrelevant. See points #1 and #8.
  • 10) If you don't publish results promptly at the start of the month on the public website, but instead say publish the results to a private site for checks to be performed before release, we will accuse you of engaging in unscientific-like secrecy and massaging the data behind closed doors.
  • 14) If any region/station shows a warming trend that doesn't match the raw data, and we can't understand why, we will accuse you of fraud and dismiss the entire record. Don't expect us to have to read anything to understand results.
  • 15) You must provide all input datasets on your website. It's no good referencing NOAAs site and saying they "own" the GHCN data for example. I don't want their GHCN raw temperatures file, I want the one on your hard drive which you used for the analysis, even if you claim they are the same. If you don't do this we will accuse you of hiding the data and preventing us checking your results.
  • 24. In the event that you comply with all of the above, we will point out that a mere hundred-odd years of data is irrelevant next to the 4.5 billion year history of Earth. So why do you even bother?
  • 23) In the unlikely event that I haven't wasted enough of your time forcing you to comply with the above rules, I also demand to see all emails you have sent or will send during the period 1950 to 2050 that contain any of these keywords
  • 22) We don't need any scrutiny because our role isn't important.
  • 17) We will treat your record as if no alternative exists. As if your record is the make or break of Something Really Important (see point #1) and we just can't check the results in any other way.
  • 16) You are to blame for any station data your team uses. If we find out that a station you use is next to an AC Unit, we will conclude you personally planted the thermometer there to deliberately get warming.
  • an article today by Roger Pielke Nr. (no relation) that posited the fascinating concept that thermometers are just as capricious and unreliable proxies for temperature as tree rings. In fact probably more so, and re-computing global temperature by gristlecone pines would reveal the true trend of global cooling, which will be in all our best interests and definitely NOT just those of well paying corporate entities.
  •  
    Dear Professor Muller and Team, If you want your Berkley Earth Surface Temperature project to succeed and become the center of attention you need to learn from the vast number of mistakes Hansen and Jones have made with their temperature records. To aid this task I created a point by point list for you.
Weiye Loh

The Science of Why We Don't Believe Science | Mother Jones - 0 views

  • "A MAN WITH A CONVICTION is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point." So wrote the celebrated Stanford University psychologist Leon Festinger (PDF)
  • How would people so emotionally invested in a belief system react, now that it had been soundly refuted? At first, the group struggled for an explanation. But then rationalization set in. A new message arrived, announcing that they'd all been spared at the last minute. Festinger summarized the extraterrestrials' new pronouncement: "The little group, sitting all night long, had spread so much light that God had saved the world from destruction." Their willingness to believe in the prophecy had saved Earth from the prophecy!
  • This tendency toward so-called "motivated reasoning" helps explain why we find groups so polarized over matters where the evidence is so unequivocal: climate change, vaccines, "death panels," the birthplace and religion of the president (PDF), and much else. It would seem that expecting people to be convinced by the facts flies in the face of, you know, the facts.
  • The theory of motivated reasoning builds on a key insight of modern neuroscience (PDF): Reasoning is actually suffused with emotion (or what researchers often call "affect"). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we're aware of it. That shouldn't be surprising: Evolution required us to react very quickly to stimuli in our environment. It's a "basic human survival skill," explains political scientist Arthur Lupia of the University of Michigan. We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.
  • We're not driven only by emotions, of course—we also reason, deliberate. But reasoning comes later, works slower—and even then, it doesn't take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that's highly biased, especially on topics we care a great deal about.
  • Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. "They retrieve thoughts that are consistent with their previous beliefs," says Taber, "and that will lead them to build an argument and challenge what they're hearing."
  • when we think we're reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we're being scientists, but we're actually being lawyers (PDF). Our "reasoning" is a means to a predetermined end—winning our "case"—and is shot through with biases. They include "confirmation bias," in which we give greater heed to evidence and arguments that bolster our beliefs, and "disconfirmation bias," in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.
Weiye Loh

FleetStreetBlues: Independent columnist Johann Hari admits copying and pasting intervie... - 0 views

  • this isn't just a case of referencing something the interviewee has written previously - 'As XXX has written before...', or such like. No, Hari adds dramatic context to quotes which were never said - the following paragraph, for instance, is one of the quotes from the Levy interview which seems to have appeared elsewhere before. After saying this, he falls silent, and we stare at each other for a while. Then he says, in a quieter voice: “The facts are clear. Israel has no real intention of quitting the territories or allowing the Palestinian people to exercise their rights. No change will come to pass in the complacent, belligerent, and condescending Israel of today. This is the time to come up with a rehabilitation programme for Israel.”
  • So how does Hari justify it? Well, his post on 'Interview etiquette', as he calls it, is so stunningly brazen about playing fast-and-loose with quotes
  • When I’ve interviewed a writer, it’s quite common that they will express an idea or sentiment to me that they have expressed before in their writing – and, almost always, they’ve said it more clearly in writing than in speech. (I know I write much more clearly than I speak – whenever I read a transcript of what I’ve said, it always seems less clear and more clotted. I think we’ve all had that sensation in one form or another). So occasionally, at the point in the interview where the subject has expressed an idea, I’ve quoted the idea as they expressed it in writing, rather than how they expressed it in speech. It’s a way of making sure the reader understands the point that (say) Gideon Levy wants to make as clearly as possible, while retaining the directness of the interview. Since my interviews are intellectual portraits that I hope explain how a person thinks, it seemed the most thorough way of doing it...
  • ...I’m a bit bemused to find one blogger considers this “plagiarism”. Who’s being plagiarized? Plagiarism is passing off somebody else’s intellectual work as your own – whereas I’m always making it clear that (say) Gideon Levy’s thought is Gideon Levy’s thought. I’m also a bit bemused to find that some people consider this “churnalism”. Churnalism is a journalist taking a press release and mindlessly recycling it – not a journalist carefully reading over all a writer’s books and selecting parts of it to accurately quote at certain key moments to best reflect how they think.
  • I called round a few other interviewers for British newspapers and they said what I did was normal practice and they had done it themselves from time to time. My test for journalism is always – would the readers mind you did this, or prefer it? Would they rather I quoted an unclear sentence expressing a thought, or a clear sentence expressing the same thought by the same person very recently? Both give an accurate sense of what a person is like, but one makes their ideas as accessible as possible for the reader while also being an accurate portrait of the person.
  • The Independent's top columnist and interviewer has just admitted that he routinely adds things his interviewees have written at some point in the past to their quotes, and then deliberately passes these statements off as though they were said to him in the course of an interview. The main art of being an interviewer is to be skilled at eliciting the right quotes from your subject. If Johann Hari wants to write 'intellectual portraits', he should go and write fiction. Do his editors really know that the copy they're printing ('we stare at each other for a while. Then he says in a quieter voice...') is essentially made up? What would Jayson Blair make of it all? Astonishing.
  •  
    In the last few days, a couple of blogs have been scrutinising the work of Johann Hari, the multiple award-winning Independent columnist and interviewer. A week ago on Friday the political DSG blog pointed out an eerie series of similarities between the quotes in Hari's interview with Toni Negri in 2004, and quotes in the book Negri on Negri, published in 2003. Brian Whelan, an editor with Yahoo! Ireland and a regular FleetStreetBlues contributor, spotted this and got in touch to suggest perhaps this wasn't the only time quotes in Hari's interviews had appeared elsewhere before. We ummed and ahhed slightly about running the piece based on one analysis from a self-proclaimed leftist blog - so Brian went away and did some analysis of his own. And found that a number of quotes in Hari's interview with Gideon Levy in the Independent last year had also been copied from elsewhere. So far, so scurrilous. But what's really astonishing is that Johann Hari has now responded to the blog accusations. And cheerfully admitted that he regularly includes in interviews quotes which the interviewee never actually said to him.