Group items matching "Reasoning" in title, tags, annotations or url
Weiye Loh

The American Spectator : Can't Live With Them… - 1 views

  • Commentators have repeatedly told us in recent years that the gap between rich and poor has been widening. It is true, if you compare the income of those in the top fifth of earners with the income of those in the bottom fifth, that the spread between them increased between 1996 and 2005. But, as Sowell points out, this frequently cited figure is not counting the same people. If you look at individual taxpayers, Sowell notes, those who happened to be in the bottom fifth in 1996 saw their incomes nearly double over the decade, while those who happened to be in the top fifth in 1996 saw gains of only 10 percent on average and those in the top 5 percent actually experienced a decline in their incomes. Similar distortions are perpetrated by those bewailing "stagnation" in average household incomes -- without taking into account that households have been getting smaller, as rising wealth allows people to move out of large family homes.
  • Sometimes the distortion seems to be deliberate. Sowell gives the example of an ABC news report in the 1980s focusing on five states where "unemployment is most severe" -- without mentioning that unemployment was actually declining in all the other 45 states. Sometimes there seems to be willful incomprehension. Journalists have earnestly reported that "prisons are ineffective" because two-thirds of prisoners are rearrested within three years of their release. As Sowell comments: "By this kind of reasoning, food is ineffective as a response to hunger because it is only a matter of time after eating before you get hungry again. Like many other things, incarceration only works when it is done."
  • why do intellectuals often seem so lacking in common sense? Sowell thinks it goes with the job, literally: He defines "intellectuals" as "an occupational category [Sowell's emphasis], people whose occupations deal primarily with ideas -- writers, academics and the like." Medical researchers or engineers or even "financial wizards" may apply specialized knowledge in ways that require great intellectual skill, but that does not make them "intellectuals," in Sowell's view: "An intellectual's work begins and ends with ideas [Sowell's emphasis]." So an engineer "is ruined" if his bridges or buildings collapse, and so is a financier who "goes broke… the proof of the pudding is ultimately in the eating…. but the ultimate test of a [literary] deconstructionist's ideas is whether other deconstructionists find those ideas interesting, original, persuasive, elegant or ingenious. There is no external test." The ideas dispensed by intellectuals aren't subject to "external" checks or exposed to the test of "verifiability" (apart from what "like-minded individuals" find "plausible") and so intellectuals are not really "accountable" in the same way as people in other occupations.
  • it is not quite true, even among tenured professors in the humanities, that idea-mongers can entirely ignore "external" checks. Even academics want to be respectable, which means they can't entirely ignore the realities that others notice. There were lots of academics talking about the achievements of socialism in the 1970s (I can remember them) but very few talking that way after China and Russia repudiated these fantasies.
  • THE MOST DISTORTING ASPECT of Sowell's account is that, in focusing so much on the delusions of intellectuals, he leaves us more confused about what motivates the rest of society. In a characteristic passage, Sowell protests that "intellectuals...have sought to replace the groups into which people have sorted themselves with groupings created and imposed by the intelligentsia. Ties of family, religion, and patriotism, for example, have long been rated as suspect or detrimental by the intelligentsia, and new ties that intellectuals have created, such as class -- and more recently 'gender' -- have been projected as either more real or more important."
  • There's no disputing the claim that most "intellectuals" -- surely most professors in the humanities -- are down on "patriotism" and "religion" and probably even "family." But how did people get to be patriotic and religious in the first place? In Sowell's account, they just "sorted themselves" -- as if by the invisible hand of the market.
  • Let's put aside all the violence and intimidation that went into building so many nations and so many faiths in the past. What is it, even today, that makes people revere this country (or some other); what makes people adhere to a particular faith or church? Don't inspiring words often move people? And those who arrange these words -- aren't they doing something similar to what Sowell says intellectuals do? Is it really true, when it comes to embracing national or religious loyalties, that "the proof of the pudding is in the eating"?
  • Even when it comes to commercial products, people don't always want to be guided by mundane considerations of reliable performance. People like glamour, prestige, associations between the product and things they otherwise admire. That's why companies spend so much on advertising. And that's part of the reason people are willing to pay more for brand names -- to enjoy the associations generated by advertising. Even advertising plays on assumptions about what is admirable and enticing -- assumptions that may change from decade to decade, as background opinions change. How many products now flaunt themselves as "green" -- and how many did so 20 years ago?
  • If we closed down universities and stopped subsidizing intellectual publications, would people really judge every proposed policy by external results? Intellectuals tend to see what they expect to see, as Sowell's examples show -- but that's true of almost everyone. We have background notions about how the world works that help us make sense of what we experience. We might have distorted and confused notions, but we don't just perceive isolated facts. People can improve in their understanding, developing background understandings that are more defined or more reliable. That's part of what makes people interested in the ideas of intellectuals -- the hope of improving their own understanding.
  • On Sowell's account, we wouldn't need the contributions of a Friedrich Hayek -- or a Thomas Sowell -- if we didn't have so many intellectuals peddling so many wrong-headed ideas. But the wealthier the society, the more it liberates individuals to make different choices and the more it can afford to indulge even wasteful or foolish choices. I'd say that means not that we have less need of intellectuals, but more need of better ones. 
Weiye Loh

Kevin Kelly and Steven Johnson on Where Ideas Come From | Magazine - 0 views

  • Say the word “inventor” and most people think of a solitary genius toiling in a basement. But two ambitious new books on the history of innovation—by Steven Johnson and Kevin Kelly, both longtime Wired contributors—argue that great discoveries typically spring not from individual minds but from the hive mind. In Where Good Ideas Come From: The Natural History of Innovation, Johnson draws on seven centuries of scientific and technological progress, from Gutenberg to GPS, to show what sorts of environments nurture ingenuity. He finds that great creative milieus, whether MIT or Los Alamos, New York City or the World Wide Web, are like coral reefs—teeming, diverse colonies of creators who interact with and influence one another.
  • Seven centuries are an eyeblink in the scope of Kelly’s book, What Technology Wants, which looks back over some 50,000 years of history and peers nearly that far into the future. His argument is similarly sweeping: Technology, Kelly believes, can be seen as a sort of autonomous life-form, with intrinsic goals toward which it gropes over the course of its long development. Those goals, he says, are much like the tendencies of biological life, which over time diversifies, specializes, and (eventually) becomes more sentient.
  • We share a fascination with the long history of simultaneous invention: cases where several people come up with the same idea at almost exactly the same time. Calculus, the electrical battery, the telephone, the steam engine, the radio—all these groundbreaking innovations were hit upon by multiple inventors working in parallel with no knowledge of one another.
  • It’s amazing that the myth of the lone genius has persisted for so long, since simultaneous invention has always been the norm, not the exception. Anthropologists have shown that the same inventions tended to crop up in prehistory at roughly similar times, in roughly the same order, among cultures on different continents that couldn’t possibly have contacted one another.
  • Also, there’s a related myth—that innovation comes primarily from the profit motive, from the competitive pressures of a market society. If you look at history, innovation doesn’t come just from giving people incentives; it comes from creating environments where their ideas can connect.
  • The musician Brian Eno invented a wonderful word to describe this phenomenon: scenius. We normally think of innovators as independent geniuses, but Eno’s point is that innovation comes from social scenes, from passionate and connected groups of people.
  • It turns out that the lone genius entrepreneur has always been a rarity—there’s far more innovation coming out of open, nonmarket networks than we tend to assume.
  • Really, we should think of ideas as connections, in our brains and among people. Ideas aren’t self-contained things; they’re more like ecologies and networks. They travel in clusters.
  • ideas are networks
  • In part, that’s because ideas that leap too far ahead are almost never implemented—they aren’t even valuable. People can absorb only one advance, one small hop, at a time. Gregor Mendel’s ideas about genetics, for example: He formulated them in 1865, but they were ignored for 35 years because they were too advanced. Nobody could incorporate them. Then, when the collective mind was ready and his idea was only one hop away, three different scientists independently rediscovered his work within roughly a year of one another.
  • Charles Babbage is another great case study. His “analytical engine,” which he started designing in the 1830s, was an incredibly detailed vision of what would become the modern computer, with a CPU, RAM, and so on. But it couldn’t possibly have been built at the time, and his ideas had to be rediscovered a hundred years later.
  • I think there are a lot of ideas today that are ahead of their time. Human cloning, autopilot cars, patent-free law—all are close technically but too many steps ahead culturally. Innovating is about more than just having the idea yourself; you also have to bring everyone else to where your idea is. And that becomes really difficult if you’re too many steps ahead.
  • The scientist Stuart Kauffman calls this the “adjacent possible.” At any given moment in evolution—of life, of natural systems, or of cultural systems—there’s a space of possibility that surrounds any current configuration of things. Change happens when you take that configuration and arrange it in a new way. But there are limits to how much you can change in a single move.
  • Which is why the great inventions are usually those that take the smallest possible step to unleash the most change. That was the difference between Tim Berners-Lee’s successful HTML code and Ted Nelson’s abortive Xanadu project. Both tried to jump into the same general space—a networked hypertext—but Tim’s approach did it with a dumb half-step, while Ted’s earlier, more elegant design required that everyone take five steps all at once.
  • Also, the steps have to be taken in the right order. You can’t invent the Internet and then the digital computer. This is true of life as well. The building blocks of DNA had to be in place before evolution could build more complex things. One of the key ideas I’ve gotten from you, by the way—when I read your book Out of Control in grad school—is this continuity between biological and technological systems.
  • technology is something that can give meaning to our lives, particularly in a secular world.
  • He had this bleak, soul-sucking vision of technology as an autonomous force for evil. You also present technology as a sort of autonomous force—as wanting something, over the long course of its evolution—but it’s a more balanced and ultimately positive vision, which I find much more appealing than the alternative.
  • As I started thinking about the history of technology, there did seem to be a sense in which, during any given period, lots of innovations were in the air, as it were. They came simultaneously. It appeared as if they wanted to happen. I should hasten to add that it’s not a conscious agency; it’s a lower form, something like the way an organism or bacterium can be said to have certain tendencies, certain trends, certain urges. But it’s an agency nevertheless.
  • technology wants increasing diversity—which is what I think also happens in biological systems, as the adjacent possible becomes larger with each innovation. As tech critics, I think we have to keep this in mind, because when you expand the diversity of a system, that leads to an increase in great things and an increase in crap.
  • the idea that the most creative environments allow for repeated failure.
  • And for wastes of time and resources. If you knew nothing about the Internet and were trying to figure it out from the data, you would reasonably conclude that it was designed for the transmission of spam and porn. And yet at the same time, there’s more amazing stuff available to us than ever before, thanks to the Internet.
  • To create something great, you need the means to make a lot of really bad crap. Another example is spectrum. One reason we have this great explosion of innovation in wireless right now is that the US deregulated spectrum. Before that, spectrum was something too precious to be wasted on silliness. But when you deregulate—and say, OK, now waste it—then you get Wi-Fi.
  • If we didn’t have genetic mutations, we wouldn’t have us. You need error to open the door to the adjacent possible.
  • image of the coral reef as a metaphor for where innovation comes from. So what, today, are some of the most reeflike places in the technological realm?
  • Twitter—not to see what people are having for breakfast, of course, but to see what people are talking about, the links to articles and posts that they’re passing along.
  • second example of an information coral reef, and maybe the less predictable one, is the university system. As much as we sometimes roll our eyes at the ivory-tower isolation of universities, they continue to serve as remarkable engines of innovation.
  • Life seems to gravitate toward these complex states where there’s just enough disorder to create new things. There’s a rate of mutation just high enough to let interesting new innovations happen, but not so many mutations that every new generation dies off immediately.
  • technology is an extension of life. Both life and technology are faces of the same larger system.
  • Kevin Kelly and Steven Johnson on Where Ideas Come From, Wired, September 27, 2010 (Wired, October 2010 issue)
Weiye Loh

Rationally Speaking: Human, know thy place! - 0 views

  • I kicked off a recent episode of the Rationally Speaking podcast on the topic of transhumanism by defining it as “the idea that we should be pursuing science and technology to improve the human condition, modifying our bodies and our minds to make us smarter, healthier, happier, and potentially longer-lived.”
  • Massimo understandably expressed some skepticism about why there needs to be a transhumanist movement at all, given how incontestable their mission statement seems to be. As he rhetorically asked, “Is transhumanism more than just the idea that we should be using technologies to improve the human condition? Because that seems a pretty uncontroversial point.” Later in the episode, referring to things such as radical life extension and modifications of our minds and genomes, Massimo said, “I don't think these are things that one can necessarily have objections to in principle.”
  • There are a surprising number of people whose reaction, when they are presented with the possibility of making humanity much healthier, smarter and longer-lived, is not “That would be great,” nor “That would be great, but it's infeasible,” nor even “That would be great, but it's too risky.” Their reaction is, “That would be terrible.”
  • The people with this attitude aren't just fringe fundamentalists who are fearful of messing with God's Plan. Many of them are prestigious professors and authors whose arguments make no mention of religion. One of the most prominent examples is political theorist Francis Fukuyama, author of End of History, who published a book in 2003 called “Our Posthuman Future: Consequences of the Biotechnology Revolution.” In it he argues that we will lose our “essential” humanity by enhancing ourselves, and that the result will be a loss of respect for “human dignity” and a collapse of morality.
  • Fukuyama's reasoning represents a prominent strain of thought about human enhancement, and one that I find doubly fallacious. (Fukuyama is aware of the following criticisms, but neither I nor other reviewers were impressed by his attempt to defend himself against them.) The idea that the status quo represents some “essential” quality of humanity collapses when you zoom out and look at the steady change in the human condition over previous millennia. Our ancestors were less knowledgeable, more tribalistic, less healthy, shorter-lived; would Fukuyama have argued for the preservation of all those qualities on the grounds that, in their respective time, they constituted an “essential human nature”? And even if there were such a thing as a persistent “human nature,” why is it necessarily worth preserving? In other words, I would argue that Fukuyama is committing both the fallacy of essentialism (there exists a distinct thing that is “human nature”) and the appeal to nature (the way things naturally are is how they ought to be).
  • Writer Bill McKibben, who was called “probably the nation's leading environmentalist” by the Boston Globe this year, and “the world's best green journalist” by Time magazine, published a book in 2003 called “Enough: Staying Human in an Engineered Age.” In it he writes, “That is the choice... one that no human should have to make... To be launched into a future without bounds, where meaning may evaporate.” McKibben concludes that it is likely that “meaning and pain, meaning and transience are inextricably intertwined.” Or as one blogger tartly paraphrased: “If we all live long healthy happy lives, Bill’s favorite poetry will become obsolete.”
  • President George W. Bush's Council on Bioethics, which advised him from 2001-2009, was steeped in it. Harvard professor of political philosophy Michael J. Sandel served on the Council from 2002-2005 and penned an article in the Atlantic Monthly called “The Case Against Perfection,” in which he objected to genetic engineering on the grounds that, basically, it’s uppity. He argues that genetic engineering is “the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature.” Better we should be bowing in submission than standing in mastery, Sandel feels. Mastery “threatens to banish our appreciation of life as a gift,” he warns, and submitting to forces outside our control “restrains our tendency toward hubris.”
  • If you like Sandel's “It's uppity” argument against human enhancement, you'll love his fellow Councilmember Dr. William Hurlbut's argument against life extension: “It's unmanly.” Hurlbut's exact words, delivered in a 2007 debate with Aubrey de Grey: “I actually find a preoccupation with anti-aging technologies to be, I think, somewhat spiritually immature and unmanly... I’m inclined to think that there’s something profound about aging and death.”
  • And Council chairman Dr. Leon Kass, a professor of bioethics from the University of Chicago who served from 2001-2005, was arguably the worst of all. Like McKibben, Kass has frequently argued against radical life extension on the grounds that life's transience is central to its meaningfulness. “Could the beauty of flowers depend on the fact that they will soon wither?” he once asked. “How deeply could one deathless ‘human’ being love another?”
  • Kass has also argued against human enhancements on the same grounds as Fukuyama, that we shouldn’t deviate from our proper nature as human beings. “To turn a man into a cockroach—as we don’t need Kafka to show us—would be dehumanizing. To try to turn a man into more than a man might be so as well,” he said. And Kass completes the anti-transhumanist triad (it robs life of meaning; it's dehumanizing; it's hubris) by echoing Sandel's call for humility and gratitude, urging, “We need a particular regard and respect for the special gift that is our own given nature.”
  • By now you may have noticed a familiar ring to a lot of this language. The idea that it's virtuous to suffer, and to humbly surrender control of your own fate, is a cornerstone of Christian morality.
  • it's fairly representative of standard Christian tropes: surrendering to God, submitting to God, trusting that God has good reasons for your suffering.
  • I suppose I can understand that if you believe in an all-powerful entity who will become irate if he thinks you are ungrateful for anything, then this kind of groveling might seem like a smart strategic move. But what I can't understand is adopting these same attitudes in the absence of any religious context. When secular people chastise each other for the “hubris” of trying to improve the “gift” of life they've received, I want to ask them: just who, exactly, are you groveling to? Who, exactly, are you afraid of affronting if you dare to reach for better things?
  • This is why transhumanism is most needed, from my perspective – to counter the astoundingly widespread attitude that suffering and 80-year-lifespans are good things that are worth preserving. That attitude may make sense conditional on certain peculiarly masochistic theologies, but the rest of us have no need to defer to it. It also may have been a comforting thing to tell ourselves back when we had no hope of remedying our situation, but that's not necessarily the case anymore.
  • I think there is a separation between Transhumanism and what Massimo is referring to. Things like robotic arms and the like come from trying to deal with a specific defect, and that separates them from Transhumanism. I would define transhumanism the same way you would (the achievement of a better human), but I would exclude the invention of many life-altering devices from transhumanism. If we could invent a device that just made you smarter, then indeed that would be transhumanism, but if we invented a device that could allow someone who was mentally challenged to function normally, I would call that modern medicine. I just want to make sure we separate advances in modern medicine from transhumanism. Modern medicine being the one that advances to deal with specific medical issues to improve quality of life (usually to restore it to normal conditions) and transhumanism being the one that can advance every single human (perhaps equally?).
    • Weiye Loh
       
      Assumes that "normal conditions" exist. 
  • I agree with all your points about why the arguments against transhumanism and for suffering are ridiculous. That being said, when I first heard about the ideas of Transhumanism, after the initial excitement wore off (since I'm a big tech nerd), my reaction was more or less the same as Massimo's. I don't particularly see the need for a philosophical movement for this.
  • if people believe that suffering is something God ordained for us, you're not going to convince them otherwise with philosophical arguments any more than you'll convince them there's no God at all. If the technologies do develop, acceptance of them will come as their use becomes more prevalent, not with arguments.
  • Human, know thy place!
Weiye Loh

Research integrity: Sabotage! : Nature News - 0 views

  • University of Michigan in Ann Arbor
  • Vipul Bhrigu, a former postdoc at the university's Comprehensive Cancer Center, wears a dark-blue three-buttoned suit and a pinched expression as he cups his pregnant wife's hand in both of his. When Pollard Hines calls Bhrigu's case to order, she has stern words for him: "I was inclined to send you to jail when I came out here this morning."
  • Bhrigu, over the course of several months at Michigan, had meticulously and systematically sabotaged the work of Heather Ames, a graduate student in his lab, by tampering with her experiments and poisoning her cell-culture media. Captured on hidden camera, Bhrigu confessed to university police in April and pleaded guilty to malicious destruction of personal property, a misdemeanour that apparently usually involves cars: in the spaces for make and model on the police report, the arresting officer wrote "lab research" and "cells". Bhrigu has said on multiple occasions that he was compelled by "internal pressure" and had hoped to slow down Ames's work. Speaking earlier this month, he was contrite. "It was a complete lack of moral judgement on my part," he said.
  • Bhrigu's actions are surprising, but probably not unique. There are few firm numbers showing the prevalence of research sabotage, but conversations with graduate students, postdocs and research-misconduct experts suggest that such misdeeds occur elsewhere, and that most go unreported or unpoliced. In this case, the episode set back research, wasted potentially tens of thousands of dollars and terrorized a young student. More broadly, acts such as Bhrigu's — along with more subtle actions to hold back or derail colleagues' work — have a toxic effect on science and scientists. They are an affront to the implicit trust between scientists that is necessary for research endeavours to exist and thrive.
  • Despite all this, there is little to prevent perpetrators re-entering science.
  • federal bodies that provide research funding have limited ability and inclination to take action in sabotage cases because they aren't interpreted as fitting the federal definition of research misconduct, which is limited to plagiarism, fabrication and falsification of research data.
  • In Bhrigu's case, administrators at the University of Michigan worked with police to investigate, thanks in part to the persistence of Ames and her supervisor, Theo Ross. "The question is, how many universities have such procedures in place that scientists can go and get that kind of support?" says Christine Boesz, former inspector-general for the US National Science Foundation in Arlington, Virginia, and now a consultant on scientific accountability. "Most universities I was familiar with would not necessarily be so responsive."
  • Some labs are known to be hyper-competitive, with principal investigators pitting postdocs against each other. But Ross's lab is a small, collegial place. At the time that Ames was noticing problems, it housed just one other graduate student, a few undergraduates doing projects, and the lab manager, Katherine Oravecz-Wilson, a nine-year veteran of the lab whom Ross calls her "eyes and ears". And then there was Bhrigu, an amiable postdoc who had joined the lab in April 2009.
  • Some people whom Ross consulted with tried to convince her that Ames was hitting a rough patch in her work and looking for someone else to blame. But Ames was persistent, so Ross took the matter to the university's office of regulatory affairs, which advises on a wide variety of rules and regulations pertaining to research and clinical care. Ray Hutchinson, associate dean of the office, and Patricia Ward, its director, had never dealt with anything like it before. After several meetings and two more instances of alcohol in the media, Ward contacted the department of public safety — the university's police force — on 9 March. They immediately launched an investigation — into Ames herself. She endured two interrogations and a lie-detector test before investigators decided to look elsewhere.
  • At 4:00 a.m. on Sunday 18 April, officers installed two cameras in the lab: one in the cold room where Ames's blots had been contaminated, and one above the refrigerator where she stored her media. Ames came in that day and worked until 5:00 p.m. On Monday morning at around 10:15, she found that her medium had been spiked again. When Ross reviewed the tapes of the intervening hours with Richard Zavala, the officer assigned to the case, she says that her heart sank. Bhrigu entered the lab at 9:00 a.m. on Monday and pulled out the culture media that he would use for the day. He then returned to the fridge with a spray bottle of ethanol, usually used to sterilize lab benches. With his back to the camera, he rummaged through the fridge for 46 seconds. Ross couldn't be sure what he was doing, but it didn't look good. Zavala escorted Bhrigu to the campus police department for questioning. When he told Bhrigu about the cameras in the lab, the postdoc asked for a drink of water and then confessed. He said that he had been sabotaging Ames's work since February. (He denies involvement in the December and January incidents.)
  • Misbehaviour in science is nothing new — but its frequency is difficult to measure. Daniele Fanelli at the University of Edinburgh, UK, who studies research misconduct, says that overtly malicious offences such as Bhrigu's are probably infrequent, but other forms of indecency and sabotage are likely to be more common. "A lot more would be the kind of thing you couldn't capture on camera," he says. Vindictive peer review, dishonest reference letters and withholding key aspects of protocols from colleagues or competitors can do just as much to derail a career or a research project as vandalizing experiments. These are just a few of the questionable practices that seem quite widespread in science, but are not technically considered misconduct. In a meta-analysis of misconduct surveys, published last year (D. Fanelli PLoS ONE 4, e5738; 2009), Fanelli found that up to one-third of scientists admit to offences that fall into this grey area, and up to 70% say that they have observed them.
  • Some say that the structure of the scientific enterprise is to blame. The big rewards — tenured positions, grants, papers in stellar journals — are won through competition. To get ahead, researchers need only be better than those they are competing with. That ethos, says Brian Martinson, a sociologist at HealthPartners Research Foundation in Minneapolis, Minnesota, can lead to sabotage. He and others have suggested that universities and funders need to acknowledge the pressures in the research system and try to ease them by means of education and rehabilitation, rather than simply punishing perpetrators after the fact.
  • Bhrigu says that he felt pressure in moving from the small college at Toledo to the much bigger one in Michigan. He says that some criticisms he received from Ross about his incomplete training and his work habits frustrated him, but he doesn't blame his actions on that. "In any kind of workplace there is bound to be some pressure," he says. "I just got jealous of others moving ahead and I wanted to slow them down."
  • At Washtenaw County Courthouse in July, having reviewed the case files, Pollard Hines delivered Bhrigu's sentence. She ordered him to pay around US$8,800 for reagents and experimental materials, plus $600 in court fees and fines — and to serve six months' probation, perform 40 hours of community service and undergo a psychiatric evaluation.
  • But the threat of a worse sentence hung over Bhrigu's head. At the request of the prosecutor, Ross had prepared a more detailed list of damages, including Bhrigu's entire salary, half of Ames's, six months' salary for a technician to help Ames get back up to speed, and a quarter of the lab's reagents. The court arrived at a possible figure of $72,000, with the final amount to be decided upon at a restitution hearing in September.
  • Ross, though, is happy that the ordeal is largely over. For the month-and-a-half of the investigation, she became reluctant to take on new students or to hire personnel. She says she considered packing up her research programme. She even questioned her own sanity, worrying that she was the one sabotaging Ames's work via "an alternate personality". Ross now wonders if she was too trusting, and urges other lab heads to "realize that the whole spectrum of humanity is in your lab. So, when someone complains to you, take it seriously."
  • She also urges others to speak up when wrongdoing is discovered. After Bhrigu pleaded guilty in June, Ross called Trempe at the University of Toledo. He was shocked, of course, and for more than one reason. His department at Toledo had actually re-hired Bhrigu. Bhrigu says that he lied about the reason he left Michigan, blaming it on disagreements with Ross. Toledo let Bhrigu go in July, not long after Ross's call.
  • Now that Bhrigu is in India, there is little to prevent him from getting back into science. And even if he were in the United States, there wouldn't be much to stop him. The National Institutes of Health in Bethesda, Maryland, through its Office of Research Integrity, will sometimes bar an individual from receiving federal research funds for a time if they are found guilty of misconduct. But Bhrigu probably won't face that prospect because his actions don't fit the federal definition of misconduct, a situation Ross finds strange. "All scientists will tell you that it's scientific misconduct because it's tampering with data," she says.
  • Ames says that the experience shook her trust in her chosen profession. "I did have doubts about continuing with science. It hurt my idea of science as a community that works together, builds upon each other's work and collaborates."
  • Research integrity: Sabotage! Postdoc Vipul Bhrigu destroyed the experiments of a colleague in order to get ahead.
Weiye Loh

Why Did 17 Million Students Go to College? - Innovations - The Chronicle of Higher Education - 0 views

  • Over 317,000 waiters and waitresses have college degrees (over 8,000 of them have doctoral or professional degrees), along with over 80,000 bartenders, and over 18,000 parking lot attendants. All told, some 17,000,000 Americans with college degrees are doing jobs that the BLS says require less than the skill levels associated with a bachelor’s degree.
  • Charles Murray’s thesis that an increasing number of people attending college do not have the cognitive abilities or other attributes usually necessary for success at higher levels of learning. As more and more try to attend colleges, either college degrees will be watered down (something already happening I suspect) or drop-out rates will rise.
  • interesting new study was posted on the Web site of America’s most prestigious economic-research organization, the National Bureau of Economic Research. Three highly regarded economists (one of whom has won the Nobel Prize in Economic Science) have produced “Estimating Marginal Returns to Education,” Working Paper 16474 of the NBER. After very sophisticated and elaborate analysis, the authors conclude “In general, marginal and average returns to college are not the same.” (p. 28)
  • even if on average, an investment in higher education yields a good, say 10 percent, rate of return, it does not follow that adding to existing investments will yield that return, partly for reasons outlined above. The authors (Pedro Carneiro, James Heckman, and Edward Vytlacil) make that point explicitly, stating “Some marginal expansions of schooling produce gains that are well below average returns, in general agreement with the analysis of Charles Murray.”  (p.29)
  • Once the economy improves, and history tells us it will improve within our lifetimes, those who already have a college degree under their belts will be better equipped to take advantage of new employment opportunities than those who don’t. Perhaps not because of the actual knowledge obtained through their degrees, but definitely as an offset to the social stigma that still exists for those who do not attend college. A college degree may not help a young person secure professional work immediately – so new graduates spend a few years waiting tables until the right opportunity comes along. So what? It’s probably good for them. But they have 40-50 years in the workforce ahead of them and need to be forward-thinking if they don’t want to wait tables for that entire time. If we stop encouraging all young people to view college as both a goal and a possibility, and start weeding out those whose “prior academic records suggest little likelihood of academic success” which, let’s face it, will happen in larger proportions in poorer schools, then in 20 years we’ll find that efforts to reduce socioeconomic gaps between minorities and non-minorities have been seriously undermined.
  • Bet you a lot of those janitors with PhDs are from the humanities, in particular ethnic studies, film studies,…basket weaving courses… or non-economics social sciences, e.g., sociology, anthropology of some never-heard-of country….There should be a buyer-beware warning on all those non-quantitative majors that make people into sophisticated malcontent complainers!
  • This article also presumes that the purpose of higher education is merely to train one for a career path and enhance future income. This devalues the university, turning it into a vocational training institution. There’s nothing in this data that suggests that they are “sophisticated complainers”; that’s an unwarranted inference.
  • it was mentioned that the Bill and Melinda Gates Foundation would like 80% of American youth to attend and graduate from college. It is a nice thought in many ways. As a teacher and professor, intellectually I am all for it (if the university experience is a serious one, which these days, I don’t know).
  • students’ expectations in attending college are not just intellectual; they are careerist (probably far more so)
  • This employment issue has more to do with levels of training and subsequent levels of expectation. When a Korean student emerges from 20 years of intense study with a university degree, he or she reasonably expects a “good” job — which is to say, a well-paying professional or managerial job with good forward prospects. But here’s the problem. There does not exist, nor will there ever exist, a society in which 80% of the available jobs are professional, managerial, comfortable, and well-paid. No way.
  • Korea has a number of other jobs, but some are low-paid service work, and many others — in factories, farming, fishing — are scorned as 3-D jobs (difficult, dirty, and dangerous). Educated Koreans don’t want them. So the country is importing labor in droves — from China, Vietnam, Cambodia, the Philippines, even Uzbekistan. In the countryside, rural Korean men are having such a difficult time finding prospective wives to share their agricultural lifestyle that fully 40% of rural marriages are to poor women from those other Asian countries, who are brought in by match-makers and marriage brokers.
  • Why Did 17 Million Students Go to College? (A toy numeric sketch of the marginal-versus-average-returns point follows.)
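To make the gap between marginal and average returns concrete, here is a hypothetical back-of-the-envelope sketch in Python. The cohort figures are invented for illustration and are not taken from the NBER paper; the only point is that an average computed over all graduates can look healthy while the return to the marginal entrant is poor.

```python
# Hypothetical illustration of marginal vs. average returns to college.
# The numbers below are invented; strongest applicants are listed first.
cohort_returns = [0.15, 0.12, 0.10, 0.06, 0.02]

average_return = sum(cohort_returns) / len(cohort_returns)  # over everyone
marginal_return = cohort_returns[-1]                        # last cohort admitted

print(f"average: {average_return:.0%}, marginal: {marginal_return:.0%}")
# -> average: 9%, marginal: 2%
```

On these made-up numbers the average return is a respectable 9 percent, yet expanding enrollment adds entrants earning only 2 percent, which is the distinction Carneiro, Heckman, and Vytlacil draw.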
Weiye Loh

How wise are crowds? - 0 views

  • In the past, economists trying to model the propagation of information through a population would allow any given member of the population to observe the decisions of all the other members, or of a random sampling of them. That made the models easier to deal with mathematically, but it also made them less representative of the real world.
    • Weiye Loh
       
      Random sampling is not representative
  • this paper does is add the important component that this process is typically happening in a social network where you can’t observe what everyone has done, nor can you randomly sample the population to find out what a random sample has done, but rather you see what your particular friends in the network have done,” says Jon Kleinberg, Tisch University Professor in the Cornell University Department of Computer Science, who was not involved in the research. “That introduces a much more complex structure to the problem, but arguably one that’s representative of what typically happens in real settings.”
    • Weiye Loh
       
      So random sampling is actually more accurate?
  • Earlier models, Kleinberg explains, indicated the danger of what economists call information cascades. “If you have a few crucial ingredients — namely, that people are making decisions in order, that they can observe the past actions of other people but they can’t know what those people actually knew — then you have the potential for information cascades to occur, in which large groups of people abandon whatever private information they have and actually, for perfectly rational reasons, follow the crowd,”
  • The MIT researchers’ paper, however, suggests that the danger of information cascades may not be as dire as it previously seemed.
  • a mathematical model that describes attempts by members of a social network to make binary decisions — such as which of two brands of cell phone to buy — on the basis of decisions made by their neighbors. The model assumes that for all members of the population, there is a single right decision: one of the cell phones is intrinsically better than the other. But some members of the network have bad information about which is which.
  • The MIT researchers analyzed the propagation of information under two different conditions. In one case, there’s a cap on how much any one person can know about the state of the world: even if one cell phone is intrinsically better than the other, no one can determine that with 100 percent certainty. In the other case, there’s no such cap. There’s debate among economists and information theorists about which of these two conditions better reflects reality, and Kleinberg suggests that the answer may vary depending on the type of information propagating through the network. But previous models had suggested that, if there is a cap, information cascades are almost inevitable.
  • if there’s no cap on certainty, an expanding social network will eventually converge on an accurate representation of the state of the world; that wasn’t a big surprise. But they also showed that in many common types of networks, even if there is a cap on certainty, convergence will still occur.
  • people in the past have looked at it using more myopic models,” says Acemoglu. “They would be averaging type of models: so my opinion is an average of the opinions of my neighbors’.” In such a model, Acemoglu says, the views of people who are “oversampled” — who are connected with a large enough number of other people — will end up distorting the conclusions of the group as a whole.
  • What we’re doing is looking at it in a much more game-theoretic manner, where individuals are realizing where the information comes from. So there will be some correction factor,” Acemoglu says. “If I’m seeing you, your action, and I’m seeing Munzer’s action, and I also know that there is some probability that you might have observed Munzer, then I discount his opinion appropriately, because I know that I don’t want to overweight it. And that’s the reason why, even though you have these influential agents — it might be that Munzer is everywhere, and everybody observes him — that still doesn’t create a herd on his opinion.”
  • the new paper leaves a few salient questions unanswered, such as how quickly the network will converge on the correct answer, and what happens when the model of agents’ knowledge becomes more complex.
  • the MIT researchers begin to address both questions. One paper examines rate of convergence, although Dahleh and Acemoglu note that its results are “somewhat weaker” than those about the conditions for convergence. Another paper examines cases in which different agents make different decisions given the same information: some people might prefer one type of cell phone, others another. In such cases, “if you know the percentage of people that are of one type, it’s enough — at least in certain networks — to guarantee learning,” Dahleh says. “I don’t need to know, for every individual, whether they’re for it or against it; I just need to know that one-third of the people are for it, and two-thirds are against it.” For instance, he says, if you notice that a Chinese restaurant in your neighborhood is always half-empty, and a nearby Indian restaurant is always crowded, then information about what percentages of people prefer Chinese or Indian food will tell you which restaurant, if either, is of above-average or below-average quality.
  • By melding economics and engineering, researchers show that as social networks get larger, they usually get better at sorting fact from fiction. (A toy simulation of the cascade mechanism follows.)
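The cascade mechanism Kleinberg describes (people deciding in sequence, seeing predecessors' actions but never their private signals) is easy to simulate. Below is a minimal sketch of the classic sequential-choice setup, not the MIT researchers' more elaborate game-theoretic network model; the majority-rule threshold, the function name run_cascade, and the parameter p_correct (which plays the role of the "cap on certainty") are all illustrative assumptions.

```python
import random

def run_cascade(n_agents=100, p_correct=0.7, seed=None):
    """Toy sequential-choice simulation: agents pick option 1 or 0 in order.

    Each agent gets a private signal that matches the true best option
    (here, option 1) with probability p_correct, the 'cap on certainty'.
    Agents see every predecessor's *action*, never their signal, and follow
    the crowd once its net lead exceeds one vote; otherwise they follow
    their own signal.
    """
    rng = random.Random(seed)
    truth = 1
    actions = []
    for _ in range(n_agents):
        signal = truth if rng.random() < p_correct else 1 - truth
        lead = sum(1 if a == 1 else -1 for a in actions)  # net support for option 1
        if lead > 1:
            choice = 1        # cascade on option 1: the private signal is ignored
        elif lead < -1:
            choice = 0        # cascade on option 0, possibly the wrong one
        else:
            choice = signal   # no cascade yet: the private signal decides
        actions.append(choice)
    return actions

if __name__ == "__main__":
    trials = 10_000
    wrong = sum(
        1 for t in range(trials)
        if sum(run_cascade(seed=t)[-20:]) < 10  # tail settled on option 0
    )
    print(f"{wrong}/{trials} runs cascaded onto the wrong option")
```

Even with fairly accurate private signals, a noticeable fraction of runs locks onto the wrong option for perfectly rational reasons; the MIT result described above is that this danger softens once agents reason about where their neighbors' information actually comes from.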
Weiye Loh

Before Assange there was Jayakumar: Context, realpolitik, and the public interest « Singapore 2025 - 0 views

  • Singapore Ministry of Foreign Affairs spokesman’s remarks in the Wall Street Journal Asia piece, “Leaked cable spooks some U.S. sources” dated 3 Dec 2010. The paragraph in question went like this: “Others laid blame not on working U.S. diplomats, but on Wikileaks. Singapore’s Ministry of Foreign Affairs said it had “deep concerns about the damaging action of Wikileaks.” It added, ‘it is critical to protect the confidentiality of diplomatic and official correspondence.’” (emphasis my own)
  • on 25 Jan 2003, the then Singapore Minister of Foreign Affairs and current Senior Minister without portfolio, Professor S Jayakumar, in an unprecedented move, unilaterally released all diplomatic and official correspondence relating to confidential discussions on water negotiations between Singapore and Malaysia from the year 2000. In a parliamentary speech that would have had Julian Assange smiling from ear to ear, Jayakumar said, “We therefore have no choice but to set the record straight by releasing these documents for people to judge for themselves the truth of the matter.” The parliamentary reason for the unprecedented release of information was the misrepresentations made by Malaysia over the price of water, amongst others.
  • The then Malaysian Prime Minister, Mahathir’s response to Singapore’s pre-Wikileak wikileak was equally quote-worthy, “I don’t feel nice. You write a letter to your girlfriend. And your girlfriend circulates it to all her boyfriends. I don’t think I’ll get involved with that girl.”
  • Mahathir did not leave it at that. He foreshadowed the Wikileak-chastised countries of today saying what William, the Singapore Ministry of Foreign Affairs, the US and Iran today, amongst others, must agree with, “It’s very difficult now for us to write letters at all because we might as well negotiate through the media.”
  • I proceeded to the Ministry of Foreign Affairs homepage to search for the full press release. As I anticipated, there was a caveat. This is the press release in full: In response to media queries on the WikiLeaks release of confidential and secret-graded US diplomatic correspondence, the MFA Spokesman expressed deep concerns about the damaging action of WikiLeaks. It is critical to protect the confidentiality of diplomatic and official correspondence, which is why Singapore has the Official Secrets Act. In particular, the selective release of documents, especially when taken out of context, will only serve to sow confusion and fail to provide a complete picture of the important issues that were being discussed amongst leaders in the strictest of confidentiality.
  • The sentence in red seems to posit that the selective release of documents can be legitimised if released documents are not taken out of context. If this interpretation is true, then one can account for the political decision to release confidential correspondence covering the Singapore and Malaysia water talks referred to above. In parallel, one can imagine Assange or his supporters arguing that lies of weapons of mass destruction in Iraq and the advent of abject two-faced politics today to be sufficient grounds to justify the actions of Wikileaks. As for the arguments about confidentiality and official correspondence, the events in parliament in 2003 tell us no one should underestimate the ability of nation-states to do an Assange if it befits their purpose – be it directly, as Jayakumar did, or indirectly, through the media or some other medium of influence.
  • Timothy Garton Ash put out the dilemma perfectly when he said, “There is a public interest in understanding how the world works and what is done in our name. There is a public interest in the confidential conduct of foreign policy. The two public interests conflict.”
  • the advent of technology will only further blur the lines between these two public interests, if it has not already. Quite apart from technology, the absence of transparent and accountable institutions may also serve to guarantee the prospect of more of such embarrassing leaks in future.
  • In August 2009, there was considerable interest in Singapore about the circumstances behind the departure of Chip Goodyear, former CEO of the Australian mining giant BHP Billiton, from the national sovereign wealth fund, Temasek Holdings. Before that, all the public knew was – in the name of leadership renewal – Chip Goodyear had been carefully chosen and apparently hand-picked to replace Ho Ching as CEO of Temasek Holdings. In response to Chip’s untimely departure, Finance Minister Tharman Shanmugaratnam was quoted, “People do want to know, there is curiosity, it is a matter of public interest. That is not sufficient reason to disclose information. It is not sufficient that there be curiosity and interest that you want to disclose information.”
  • Overly secretive and furtive politicians operating in a parliamentary democracy are unlikely to inspire confidence among an educated citizenry either, only serving to paradoxically fuel public cynicism and conspiracy theories.
  • I believe that government officials and politicians who perform their jobs honourably have nothing to fear from Wikileaks. I would admit that there is an inherent naivety and idealism in this position. But if the lesson from the Wikileaks episode portends a higher standard of ethical conduct, encourages transparency and accountability – all of which promote good governance, realpolitik notwithstanding – then it is perhaps a lesson all politicians and government officials should pay keen attention to.
  • Post-script: “These disclosures are largely of analysis and high-grade gossip. Insofar as they are sensational, it is in showing the corruption and mendacity of those in power, and the mismatch between what they claim and what they do….If American spies are breaking United Nations rules by seeking the DNA biometrics of the UN director general, he is entitled to hear of it. British voters should know what Afghan leaders thought of British troops. American (and British) taxpayers might question, too, how most of the billions of dollars going in aid to Afghanistan simply exits the country at Kabul airport.” –Simon Jenkins, Guardian
Weiye Loh

The Men Who Stole the World -TimeFrames- Printout - TIME - 0 views

  • For Johansen as for all of the pirate kings, it was always about writing good code, and what good code does is give power to the people who use it. That's the real reason the pirate apocalypse never happened. The pirates never wanted music and movies and all the rest of it to be free - at least, not in the financial sense. They wanted it to be free as in freedom.
Weiye Loh

MacIntyre on money « Prospect Magazine - 0 views

  • MacIntyre has often given the impression of a robe-ripping Savonarola. He has lambasted the heirs to the principal western ethical schools: John Locke’s social contract, Immanuel Kant’s categorical imperative, Jeremy Bentham’s utilitarian “the greatest happiness for the greatest number.” Yet his is not a lone voice in the wilderness. He can claim connections with a trio of 20th-century intellectual heavyweights: the late Elizabeth Anscombe, her surviving husband, Peter Geach, and the Canadian philosopher Charles Taylor, winner in 2007 of the Templeton prize. What all four have in common is their Catholic faith, enthusiasm for Aristotle’s telos (life goals), and promotion of Thomism, the philosophy of St Thomas Aquinas who married Christianity and Aristotle. Leo XIII (pope from 1878 to 1903), who revived Thomism while condemning communism and unfettered capitalism, is also an influence.
  • MacIntyre’s key moral and political idea is that to be human is to be an Aristotelian goal-driven, social animal. Being good, according to Aristotle, consists in a creature (whether plant, animal, or human) acting according to its nature—its telos, or purpose. The telos for human beings is to generate a communal life with others; and the good society is composed of many independent, self-reliant groups.
  • MacIntyre differs from all these influences and alliances, from Leo XIII onwards, in his residual respect for Marx’s critique of capitalism.
  • MacIntyre begins his Cambridge talk by asserting that the 2008 economic crisis was not due to a failure of business ethics.
  • he has argued that moral behaviour begins with the good practice of a profession, trade, or art: playing the violin, cutting hair, brick-laying, teaching philosophy.
  • In other words, the virtues necessary for human flourishing are not a result of the top-down application of abstract ethical principles, but the development of good character in everyday life.
  • After Virtue, which is in essence an attack on the failings of the Enlightenment, has in its sights a catalogue of modern assumptions of beneficence: liberalism, humanism, individualism, capitalism. MacIntyre yearns for a single, shared view of the good life as opposed to modern pluralism’s assumption that there can be many competing views of how to live well.
  • In philosophy he attacks consequentialism, the view that what matters about an action is its consequences, which is usually coupled with utilitarianism’s “greatest happiness” principle. He also rejects Kantianism—the identification of universal ethical maxims based on reason and applied to circumstances top down. MacIntyre’s critique routinely cites the contradictory moral principles adopted by the allies in the second world war. Britain invoked a Kantian reason for declaring war on Germany: that Hitler could not be allowed to invade his neighbours. But the bombing of Dresden (which for a Kantian involved the treatment of people as a means to an end, something that should never be countenanced) was justified under consequentialist or utilitarian arguments: to bring the war to a swift end.
  • MacIntyre seeks to oppose utilitarianism on the grounds that people are called on by their very nature to be good, not merely to perform acts that can be interpreted as good. The most damaging consequence of the Enlightenment, for MacIntyre, is the decline of the idea of a tradition within which an individual’s desires are disciplined by virtue. And that means being guided by internal rather than external “goods.” So the point of being a good footballer is the internal good of playing beautifully and scoring lots of goals, not the external good of earning a lot of money. The trend away from an Aristotelian perspective has been inexorable: from the empiricism of David Hume, to Darwin’s account of nature driven forward without a purpose, to the sterile analytical philosophy of AJ Ayer and the “demolition of metaphysics” in his 1936 book Language, Truth and Logic.
  • The influential moral philosopher Alasdair MacIntyre has long stood outside the mainstream. Has the financial crisis finally vindicated his critique of global capitalism?
Weiye Loh

Genome Biology | Full text | A Faustian bargain - 0 views

  • on October 1st, you announced that the departments of French, Italian, Classics, Russian and Theater Arts were being eliminated. You gave several reasons for your decision, including that 'there are comparatively fewer students enrolled in these degree programs.' Of course, your decision was also, perhaps chiefly, a cost-cutting measure - in fact, you stated that this decision might not have been necessary had the state legislature passed a bill that would have allowed your university to set its own tuition rates. Finally, you asserted that the humanities were a drain on the institution financially, as opposed to the sciences, which bring in money in the form of grants and contracts.
  • I'm sure that relatively few students take classes in these subjects nowadays, just as you say. There wouldn't have been many in my day, either, if universities hadn't required students to take a distribution of courses in many different parts of the academy: humanities, social sciences, the fine arts, the physical and natural sciences, and to attain minimal proficiency in at least one foreign language. You see, the reason that humanities classes have low enrollment is not because students these days are clamoring for more relevant courses; it's because administrators like you, and spineless faculty, have stopped setting distribution requirements and started allowing students to choose their own academic programs - something I feel is a complete abrogation of the duty of university faculty as teachers and mentors. You could fix the enrollment problem tomorrow by instituting a mandatory core curriculum that included a wide range of courses.
  • the vast majority of humanity cannot handle freedom. In giving humans the freedom to choose, Christ has doomed humanity to a life of suffering.
  • in Dostoyevsky's parable of the Grand Inquisitor, which is told in Chapter Five of his great novel, The Brothers Karamazov. In the parable, Christ comes back to earth in Seville at the time of the Spanish Inquisition. He performs several miracles but is arrested by Inquisition leaders and sentenced to be burned at the stake. The Grand Inquisitor visits Him in his cell to tell Him that the Church no longer needs Him. The main portion of the text is the Inquisitor explaining why. The Inquisitor says that Jesus rejected the three temptations of Satan in the desert in favor of freedom, but he believes that Jesus has misjudged human nature.
  • I'm sure the budgetary problems you have to deal with are serious. They certainly are at Brandeis University, where I work. And we, too, faced critical strategic decisions because our income was no longer enough to meet our expenses. But we eschewed your draconian - and authoritarian - solution, and a team of faculty, with input from all parts of the university, came up with a plan to do more with fewer resources. I'm not saying that all the specifics of our solution would fit your institution, but the process sure would have. You did call a town meeting, but it was to discuss your plan, not let the university craft its own. And you called that meeting for Friday afternoon on October 1st, when few of your students or faculty would be around to attend. In your defense, you called the timing 'unfortunate', but pleaded that there was a 'limited availability of appropriate large venue options.' I find that rather surprising. If the President of Brandeis needed a lecture hall on short notice, he would get one. I guess you don't have much clout at your university.
  • As for the argument that the humanities don't pay their own way, well, I guess that's true, but it seems to me that there's a fallacy in assuming that a university should be run like a business. I'm not saying it shouldn't be managed prudently, but the notion that every part of it needs to be self-supporting is simply at variance with what a university is all about.
  • You seem to value entrepreneurial programs and practical subjects that might generate intellectual property more than you do 'old-fashioned' courses of study. But universities aren't just about discovering and capitalizing on new knowledge; they are also about preserving knowledge from being lost over time, and that requires a financial investment.
  • what seems to be archaic today can become vital in the future. I'll give you two examples of that. The first is the science of virology, which in the 1970s was dying out because people felt that infectious diseases were no longer a serious health problem in the developed world and other subjects, such as molecular biology, were much sexier. Then, in the early 1990s, a little problem called AIDS became the world's number 1 health concern. The virus that causes AIDS was first isolated and characterized at the National Institutes of Health in the USA and the Institut Pasteur in France, because these were among the few institutions that still had thriving virology programs. My second example you will probably be more familiar with. Middle Eastern Studies, including the study of foreign languages such as Arabic and Persian, was hardly a hot subject on most campuses in the 1990s. Then came September 11, 2001. Suddenly we realized that we needed a lot more people who understood something about that part of the world, especially its Muslim culture. Those universities that had preserved their Middle Eastern Studies departments, even in the face of declining enrollment, suddenly became very important places. Those that hadn't - well, I'm sure you get the picture.
  • one of your arguments is that not every place should try to do everything. Let other institutions have great programs in classics or theater arts, you say; we will focus on preparing students for jobs in the real world. Well, I hope I've just shown you that the real world is pretty fickle about what it wants. The best way for people to be prepared for the inevitable shock of change is to be as broadly educated as possible, because today's backwater is often tomorrow's hot field. And interdisciplinary research, which is all the rage these days, is only possible if people aren't too narrowly trained. If none of that convinces you, then I'm willing to let you turn your institution into a place that focuses on the practical, but only if you stop calling it a university and yourself the President of one. You see, the word 'university' derives from the Latin 'universitas', meaning 'the whole'. You can't be a university without having a thriving humanities program. You will need to call SUNY Albany a trade school, or perhaps a vocational college, but not a university. Not anymore.
  • I started out as a classics major. I'm now Professor of Biochemistry and Chemistry. Of all the courses I took in college and graduate school, the ones that have benefited me the most in my career as a scientist are the courses in classics, art history, sociology, and English literature. These courses didn't just give me a much better appreciation for my own culture; they taught me how to think, to analyze, and to write clearly. None of my science courses did any of that.
Weiye Loh

Arsenic bacteria - a post-mortem, a review, and some navel-gazing | Not Exactly Rocket Science | Discover Magazine - 0 views

  • It was the big news that wasn't. Hyperbolic claims about the possible discovery of alien life, or a second branch of life on Earth, turned out to be nothing more than bacteria that can thrive on arsenic, using it in place of phosphorus in their DNA and other molecules. But after the initial layers of hype were peeled away, even this extraordinary claim began to crumble under scrutiny.
  • This is a chronological roundup of the criticism against the science in the paper itself, ending with some personal reflections on my own handling of the story (skip to Friday, December 10th for that bit).
  • Thursday, December 2nd: Felisa Wolfe-Simon published a paper in Science, claiming to have found bacteria in California’s Mono Lake that can grow using arsenic instead of phosphorus. Given that phosphorus is meant to be one of six irreplaceable elements, this would have been a big deal, not least because the bacteria apparently used arsenic to build the backbones of their DNA molecules.
  • In my post, I mentioned some caveats. Wolfe-Simon isolated the arsenic-loving strain, known as GFAJ-1, by growing Mono Lake bacteria in ever-increasing concentrations of arsenic while diluting out the phosphorus. It is possible that the bacteria’s arsenic molecules were an adaptation to the harsh environments within the experiment, rather than Mono Lake itself. More importantly, there were still detectable levels of phosphorus left in the cells at the end of the experiment, although Wolfe-Simon claimed that the bacteria shouldn’t have been able to grow on such small amounts.
  • signs emerged that NASA weren’t going to engage with the criticisms. Dwayne Brown, their senior public affairs officer, highlighted the fact that the paper was published in one of the “most prestigious scientific journals” and deemed it inappropriate to debate the science using the same media and bloggers who they relied on for press coverage of the science. Wolfe-Simon herself tweeted that “discussion about scientific details MUST be within a scientific venue so that we can come back to the public with a unified understanding.”
  • Jonathan Eisen says that “they carried out science by press release and press conference” and “are now hypocritical if they say that the only response should be in the scientific literature.” David Dobbs calls the attitude “a return to pre-Enlightenment thinking”, and rightly noted that “Rosie Redfield is a peer, and her blog is peer review”.
  • Chris Rowan agreed, saying that what happens after publication is what he considers to be “real peer review”. Rowan said, “The pre-publication stuff is just a quality filter, a check that the paper is not obviously wrong – and an imperfect filter at that. The real test is what happens in the months and years after publication.” Grant Jacobs and others post similar thoughts, while Nature and the Columbia Journalism Review both cover the fracas.
  • Jack Gilbert at the University of Chicago said that impatient though he is, peer-reviewed journals are the proper forum for criticism. Others were not so kind. At the Guardian, Martin Robbins says that “at almost every stage of this story the actors involved were collapsing under the weight of their own slavish obedience to a fundamentally broken… well… ’system’” And Ivan Oransky noted that NASA failed to follow its own code of conduct when announcing the study.
  • Dr Isis said, “If question remains about the voracity of these authors findings, then the only thing that is going to answer that doubt is data.  Data cannot be generated by blog discussion… Talking about digging a ditch never got it dug.”
  • it is astonishing how quickly these events unfolded and the sheer number of bloggers and media outlets that became involved in the criticism. This is indeed a brave new world, and one in which we are all the infamous Third Reviewer.
  • I tried to quell the hype around the study as best I could. I had the paper and I think that what I wrote was a fair representation of it. But, of course, that’s not necessarily enough. I’ve argued before that journalists should not be merely messengers – we should make the best possible efforts to cut through what’s being said in an attempt to uncover what’s actually true. Arguably, that didn’t happen, although to clarify, I am not saying that the paper is rubbish or untrue. Despite the criticisms, I want to see the authors respond in a thorough way, or to see another lab attempt to replicate the experiments, before jumping to conclusions.
  • the sheer amount of negative comment indicates that I could have been more critical of the paper in my piece. Others have been supportive in suggesting that this was more egg on the face of the peer reviewers, and indeed, several practicing scientists took the findings at face value, speculating about everything from the implications for chemotherapy to whether the bacteria have special viruses. The counter-argument, which I have no good retort to, is that peer review is no guarantee of quality, and that writers should be able to see through the fog of whatever topic they write about.
  • my response was that we should expect people to make reasonable efforts to uncover truth and be skeptical, while appreciating that people can and will make mistakes.
  • it comes down to this: did I do enough? I was certainly cautious. I said that “there is room for doubt” and I brought up the fact that the arsenic-loving bacteria still contain measurable levels of phosphorus. But I didn’t run the paper past other sources for comment, which I typically do for stories that contain extraordinary claims. There was certainly plenty of time to do so here, and while there were various reasons that I didn’t, the bottom line is that I could have done more. That doesn’t always help, of course, but it was an important missed step. A lesson for next time.
  • I do believe that if you’re going to try to hold your profession to a higher standard, you have to be honest and open when you’ve made mistakes yourself. I also think that if you cover a story that turns out to be a bit dodgy, you have a certain responsibility in covering the follow-up.
  • A basic problem here is the embargo. Specifically, journalists get early access, while peers – other specialists in the field – do not. It means that the journalist, like yourself, can rely only on the original authors, with no way of getting other views on the findings. And it means that peers can’t write about the paper when the journalists (who, inevitably, produce positive-only coverage due to the lack of other viewpoints) do, but can voice their views only after they’ve been able to digest the paper and formulate a response.
  • No, that’s not true. The embargo doesn’t preclude journalists from sending papers out to other authors for review and comment. I do this a lot and I have been critical about new papers as a result, but that’s the step that I missed for this story.
Weiye Loh

Skepticblog » Further Thoughts on Atheism - 0 views

  • Even before I started writing Evolution: How We and All Living Things Came to Be I knew that it would very briefly mention religion, make a mild assertion that religious questions are out of scope for science, and move on. I knew this was likely to provoke blow-back from some in the atheist community, and I knew mentioning that blow-back in my recent post “The Standard Pablum — Science and Atheism” would generate more.
  • Still, I was surprised by the quantity of the responses to the blog post (208 comments as of this moment, many of them substantial letters), and also by the fierceness of some of those responses. For example, according to one poster, “you not only pandered, you lied. And even if you weren’t lying, you lied.” (Several took up this “lying” theme.) Another, disappointed that my children’s book does not tell a general youth audience to look to “secular humanism for guidance,” declared that “I’d have to tear out that page if I bought the book.”
  • I don’t mean to suggest that there are not points of legitimate disagreement in the mix — there are, many of them stated powerfully. There are also statements of support, vigorous debate, and (for me at least) a good deal of food for thought. I invite anyone to browse the thread, although I’d urge you to skim some of it. (The internet is after all a hyperbole-generating machine.)
  • I lack any belief in any deity. More than that, I am persuaded (by philosophical argument, not scientific evidence) to a high degree of confidence that gods and an afterlife do not exist.
  • I do try to distinguish between my work as a science writer and skeptical activist on the one hand, and my personal opinions about religion and humanism on the other.
  • Atheism is a practical handicap for science outreach. I’m not naive about this, but I’m not cynical either. I’m a writer. I’m in the business of communicating ideas about science, not throwing up roadblocks and distractions. It’s good communication to keep things as clear, focused, and on-topic as possible.
  • Atheism is divisive for the skeptical community, and it distracts us from our core mandate. I was blunt about this in my 2007 essay “Where Do We Go From Here?”, writing, I’m both an atheist and a secular humanist, but it is clear to me that atheism is an albatross for the skeptical movement. It divides us, it distracts us, and it marginalizes us. Frankly, we can’t afford that. We need all the help we can get.
  • In What Do I Do Next? I urged skeptics to remember that there are many other skeptics who do hold or identify with some religion. Indeed, the modern skeptical movement is built partly on the work of people of faith (including giants like Harry Houdini and Martin Gardner). You don’t, after all, have to be against god to be against fraud.
  • In my Skeptical Inquirer article “The Paradoxical Future of Skepticism” I argued that skeptics must set aside the conceit that our goal is a cultural revolution or the dawning of a new Enlightenment. … When we focus on that distant, receding, and perhaps illusory goal, we fail to see the practical good we can do, the harm-reduction opportunities right in front of us. The long view subverts our understanding of the scale and hazard of paranormal beliefs, leading to sentiments that the paranormal is “trivial” or “played out.” By contrast, the immediate, local, human view — the view that asks “Will this help someone?” — sees obvious opportunities for every local group and grassroots skeptic to make a meaningful difference.
  • This practical argument, that skepticism can get more done if we keep our mandate tight and avoid alienating our best friends, seems to me an important one. Even so, it is not my main reason for arguing that atheism and skepticism are different projects.
  • In my opinion, metaphysics and ethics are out of scope for science — and therefore out of scope for skepticism. This is by far the most important reason I set aside my own atheism when I put on my “skeptic” hat. It’s not that I don’t think atheism is rational — I do. That’s why I’m an atheist. But I know that I cannot claim scientific authority for a conclusion that science cannot test, confirm, or disprove. And so, I restrict myself as much as possible, in my role as a skeptic and science writer, to investigable claims. I’ve become a cheerleader for this “testable claims” criterion (and I’ll discuss it further in future posts) but it’s not a new or radical constriction of the scope of skepticism. It’s the traditional position occupied by skeptical organizations for decades.
  • In much of the commentary, I see an assumption that I must not really believe that testable paranormal and pseudoscientific claims (“I can read minds”) are different in kind from the untestable claims we often find at the core of religion (“god exists”). I acknowledge that many smart people disagree on this point, but I assure you that this is indeed what I think.
  • I’d like to call out one blogger’s response to my “Standard Pablum” post. The author certainly disagrees with me (we’ve discussed the topic often on Twitter), but I thank him for describing my position fairly: From what I’ve read of Daniel’s writings before, this seems to be a very consistent position that he has always maintained, not a new one he adopted for the book release. It appears to me that when Daniel says that science has nothing to say about religion, he really means it. I have nothing to say to that. It also appears to me that when he says skepticism is a “different project than atheism” he also means it.
  • FURTHER THOUGHTS ON ATHEISM by DANIEL LOXTON, Mar 05 2010
Weiye Loh

Meet the Ethical Placebo: A Story that Heals | NeuroTribes - 0 views

  • In modern medicine, placebos are associated with another form of deception — a kind that has long been thought essential for conducting randomized clinical trials of new drugs, the statistical rock upon which the global pharmaceutical industry was built. One group of volunteers in an RCT gets the novel medication; another group (the “control” group) gets pills or capsules that look identical to the allegedly active drug, but contain only an inert substance like milk sugar. These faux drugs are called placebos.
  • Inevitably, the health of some people in both groups improves, while the health of others grows worse. Symptoms of illness fluctuate for all sorts of reasons, including regression to the mean.
  • Since the goal of an RCT, from Big Pharma’s perspective, is to demonstrate the effectiveness of a new drug, the return to robust health of a volunteer in the control group is considered a statistical distraction. If too many people in the trial get better after downing sugar pills, the real drug will look worse by comparison — sometimes fatally so for the purpose of earning approval from the Food and Drug Administration. (A toy simulation after this list shows how regression to the mean alone can manufacture such control-group “improvement”.)
  • For a complex and somewhat mysterious set of reasons, it is becoming increasingly difficult for experimental drugs to prove their superiority to sugar pills in RCTs
  • Only in recent years, however, has it become obvious that the abatement of symptoms in control-group volunteers — the so-called placebo effect — is worthy of study outside the context of drug trials, and is in fact profoundly good news to anyone but investors in Pfizer, Roche, and GlaxoSmithKline.
  • The emerging field of placebo research has revealed that the body’s repertoire of resilience contains a powerful self-healing network that can help reduce pain and inflammation, lower the production of stress chemicals like cortisol, and even tame high blood pressure and the tremors of Parkinson’s disease.
  • more and more studies each year — by researchers like Fabrizio Benedetti at the University of Turin, author of a superb new book called The Patient’s Brain, and neuroscientist Tor Wager at the University of Colorado — demonstrate that the placebo effect might be useful in treating a wide range of ills. Then why aren’t doctors supposed to use it?
  • The medical establishment’s ethical problem with placebo treatment boils down to the notion that for fake drugs to be effective, doctors must lie to their patients. It has been widely assumed that if a patient discovers that he or she is taking a placebo, the mind/body password will no longer unlock the network, and the magic pills will cease to do their job.
  • For “Placebos Without Deception,” the researchers tracked the health of 80 volunteers with irritable bowel syndrome for three weeks as half of them took placebos and the other half didn’t.
  • In a previous study published in the British Medical Journal in 2008, Kaptchuk and Kirsch demonstrated that placebo treatment can be highly effective for alleviating the symptoms of IBS. This time, however, instead of the trial being “blinded,” it was “open.” That is, the volunteers in the placebo group knew that they were getting only inert pills — which they were instructed to take religiously, twice a day. They were also informed that, just as Ivan Pavlov trained his dogs to drool at the sound of a bell, the body could be trained to activate its own built-in healing network by the act of swallowing a pill.
  • In other words, in addition to the bogus medication, the volunteers were given a true story — the story of the placebo effect. They also received the care and attention of clinicians, which have been found in many other studies to be crucial for eliciting placebo effects. The combination of the story and a supportive clinical environment were enough to prevail over the knowledge that there was really nothing in the pills. People in the placebo arm of the trial got better — clinically, measurably, significantly better — on standard scales of symptom severity and overall quality of life. In fact, the volunteers in the placebo group experienced improvement comparable to patients taking a drug called alosetron, the standard of care for IBS. Meet the ethical placebo: a powerfully effective faux medication that meets all the standards of informed consent.
  • The study is hardly the last word on the subject, but more like one of the first. Its modest sample size and brief duration leave plenty of room for followup research. (What if “ethical” placebos wear off more quickly than deceptive ones? Does the fact that most of the volunteers in this study were women have any bearing on the outcome? Were any of the volunteers skeptical that the placebo effect is real, and did that affect their response to treatment?) Before some eager editor out there composes a tweet-baiting headline suggesting that placebos are about to drive Big Pharma out of business, he or she should appreciate the fact that the advent of AMA-approved placebo treatments would open numerous cans of fascinatingly tangled worms. For example, since the precise nature of placebo effects is shaped largely by patients’ expectations, would the advertised potency and side effects of theoretical products like Placebex and Therastim be subject to change by Internet rumors, requiring perpetual updating?
  • It’s common to use the word “placebo” as a synonym for “scam.” Economists talk about placebo solutions to our economic catastrophe (tax cuts for the rich, anyone?). Online skeptics mock the billion-dollar herbal-medicine industry by calling it Big Placebo. The fact that our brains and bodies respond vigorously to placebos given in warm and supportive clinical environments, however, turns out to be very real.
  • We’re also discovering that the power of narrative is embedded deeply in our physiology.
  • in the real world of doctoring, many physicians prescribe medications at dosages too low to have an effect on their own, hoping to tap into the body’s own healing resources — though this is mostly acknowledged only in whispers, as a kind of trade secret.
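A quick illustration of why control groups "improve": regression to the mean alone can do it. The sketch below is a minimal Python simulation under invented assumptions (every patient shares a stable baseline of 50 on a hypothetical symptom scale, with random day-to-day noise, and people enroll only when a bad day pushes them over a threshold). It models the statistical artifact only, not any real trial or treatment.

    import random
    import statistics

    random.seed(1)

    def symptom_score():
        # hypothetical scale: a stable baseline of 50 plus day-to-day noise
        return random.gauss(50, 10)

    enrolled, followup = [], []
    for _ in range(100_000):
        day1 = symptom_score()
        if day1 > 65:                         # only a flare-up prompts enrollment
            enrolled.append(day1)
            followup.append(symptom_score())  # later, untreated re-measurement

    print(f"mean score at enrollment: {statistics.mean(enrolled):.1f}")  # roughly 69
    print(f"mean score at follow-up:  {statistics.mean(followup):.1f}")  # roughly 50

With no intervention at all, the enrolled group's average score falls from about 69 back to its long-run mean of 50, which is exactly the kind of spontaneous "improvement" an RCT's control arm has to soak up.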
Weiye Loh

On Forgiveness - NYTimes.com - 0 views

  • What is forgiveness? When is it appropriate? Why is it considered to be commendable?  Some claim that forgiveness is merely about ridding oneself of vengeful anger; do that, and you have forgiven.  But if you were able to banish anger from your soul simply by taking a pill, would the result really be forgiveness?
  • The timing of forgiveness is also disputed. Some say that it should wait for the offender to take responsibility and suffer due punishment, others hold that the victim must first overcome anger altogether, and still others that forgiveness should be unilaterally bestowed at the earliest possible moment.  But what if you have every good reason to be angry and even to take your sweet revenge as well?  Is forgiveness then really to be commended? Some object that it lets the offender off the hook, confesses to one’s own weakness and vulnerability, and papers over the legitimate demands of vengeful anger.  And yet, legions praise forgiveness and think of it as an indispensable virtue
  • Many people assume that the notion of forgiveness is Christian in origin, at least in the West, and that the contemporary understanding of interpersonal forgiveness has always been the core Christian teaching on the subject.  These contestable assumptions are explored by David Konstan in “Before Forgiveness: The Origins of a Moral Idea.”  Religious origins of the notion would not invalidate a secular philosophical approach to the topic, any more than a secular origin of some idea precludes a religious appropriation of it.  While religious and secular perspectives on forgiveness are not necessarily consistent with each other, however, they agree in their attempt to address the painful fact of the pervasiveness of moral wrong in human life. They also agree on this: few of us are altogether innocent of the need for forgiveness.
  • It’s not simply a matter of lifting the burden of toxic resentment or of immobilizing guilt, however beneficial that may be ethically and psychologically.  It is not a merely therapeutic matter, as though this were just about you.  Rather, when the requisite conditions are met, forgiveness is what a good person would seek because it expresses fundamental moral ideals.  These include ideals of spiritual growth and renewal; truth-telling; mutual respectful address; responsibility and respect; reconciliation and peace.
  • Are any wrongdoers unforgivable?  People who have committed heinous acts such as torture or child molestation are often cited as examples.  The question is not primarily about the psychological ability of the victim to forswear anger, but whether a wrongdoer can rightly be judged not-to-be-forgiven no matter what offender and victim say or do.  I do not see that a persuasive argument for that thesis can be made; there is no such thing as the unconditionally unforgivable.  For else we would be faced with the bizarre situation of declaring illegitimate the forgiveness reached by victim and perpetrator after each has taken every step one could possibly wish for.  The implication may distress you: Osama bin Laden, for example, is not unconditionally unforgivable for his role in the attacks of 9/11.  That being said, given the extent of the injury done by grave wrongs, their author may be rightly unforgiven for an appropriate period even if he or she has taken all reasonable steps.  There is no mathematically precise formula for determining when it is appropriate to forgive.
Weiye Loh

Rationally Speaking: The problem of replicability in science - 0 views

  • The problem of replicability in science, by Massimo Pigliucci
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant time of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet from my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which meant to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well-known statistical phenomenon of the regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of the regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment repeating itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated.
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well-known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, “sexy” results. This creates a powerful filter against negative or marginally significant results. What you see in science journals, in other words, isn’t a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty-handed. (A toy simulation of this significance filter, after this list, shows how it also produces the decline effect in miniature.)
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPUs — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure-track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, about ⅓ of published studies are never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh: Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change)
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh: It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate-funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
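The publication-bias mechanism Pigliucci describes is easy to simulate. The sketch below is a minimal Python toy with made-up parameters: it runs many small studies of a modest true effect, "publishes" only those that clear statistical significance, and compares the published average with the unfiltered one. None of the numbers refer to any real literature.

    import math
    import random
    import statistics

    random.seed(1)

    TRUE_EFFECT = 0.2   # assumed true standardized effect size
    N = 30              # participants per arm in each underpowered study
    STUDIES = 2000

    def one_study():
        treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
        control = [random.gauss(0.0, 1) for _ in range(N)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = math.sqrt(statistics.variance(treated) / N
                       + statistics.variance(control) / N)
        return diff, diff / se

    results = [one_study() for _ in range(STUDIES)]
    published = [d for d, z in results if z > 1.96]  # the significance filter

    print(f"true effect:              {TRUE_EFFECT}")
    print(f"mean 'published' effect:  {statistics.mean(published):.2f}")               # inflated
    print(f"mean effect, all studies: {statistics.mean(d for d, _ in results):.2f}")   # near 0.2

Only the lucky overestimates pass the filter, so the published record overstates the effect; replications, drawn from the unfiltered distribution, then seem to show the effect "wearing off". That is the decline effect in miniature.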
Weiye Loh

Rationally Speaking: Does the Academy discriminate against conservatives? - 0 views

  • The latest from University of Virginia cognitive scientist Jonathan Haidt is that people holding to conservative values may be discriminated against in academia. The New York Times’ John Tierney — who is usually a bit more discriminating in his columns than this — reports of a talk that Haidt had given at the conference of the Society for Personality and Social Psychology (this is the same Society whose journal recently published a new study “demonstrating” people’s clairvoyance when it comes to erotic images, so there). Haidt polled his audience and discovered the absolutely unastounding fact that 80% were liberal, with only a scatter of centrists and libertarians, and very, very few conservatives.
  • “This is a statistically impossible lack of diversity,” said Haidt, noting that according to polls, 40% of Americans are conservative and only 20% liberal. He then went on to make the (truly astounding) suggestion that this is just the same as discrimination against women or minorities, and that the poor conservative academics are forced to live in closets just like gays “used to” in the 1980s (because as we all know, that problem has been solved since).
  • I have criticized Haidt before for his contention that progressives and conservatives have a different set of moral criteria, implying that because progressives don’t include criteria of “purity,” in-group loyalty and respect for authority, their moral spectrum is more limited than that of conservatives. My point there was that Haidt simply confuses character traits (respect for authority) with moral values (fairness, or avoidance of harm).
  • suppose that — as I think is highly probable — the overwhelming majority of people with high positions in Wall Street hold to libertarian or conservative views. Would Haidt therefore claim that liberals are being discriminated against in the financial sector? I think not, because the obvious and far more parsimonious explanation is that if your politics are really to the left of the spectrum, the last thing you want to do is work for Wall Street in helping make the few outrageously rich at the expense of the many.
  • Similarly, I suspect the obvious reason for the “imbalance” of political views in academia is that the low pay, long time before one gets to tenure (if ever), frequent rejection rates from journals and funding agencies, and the necessity to constantly engage one’s critical thinking skills naturally select against conservatives. (Okay, the last bit about critical thinking was a conscious slip that got in there just for fun.)
  • A serious social scientist doesn’t go around crying out discrimination just on the basis of unequal numbers. If that were the case, the NBA would be sued for discriminating against short people, dance companies against people without spatial coordination, and newspapers against dyslexics. Claims of discrimination are sensibly made only if one has a reasonable and detailed understanding of the causal factors behind the numbers. We claim that women and minorities are discriminated against in their access to certain jobs because we can investigate and demonstrate the discriminating practices that result in those numbers. Haidt hasn’t done any such thing. He simply got numbers and then ran wild with speculation about closeted libertarians. It was pretty silly of him, and downright irresponsible of Tierney to republish that garbage without critical comment. Then again, the New York Times is a known bastion of liberal journalism...
Weiye Loh

Learn to love uncertainty and failure, say leading thinkers | Edge question | Science | The Guardian - 0 views

  • Being comfortable with uncertainty, knowing the limits of what science can tell us, and understanding the worth of failure are all valuable tools that would improve people's lives, according to some of the world's leading thinkers.
  • The ideas were submitted as part of an annual exercise by the web magazine Edge, which invites scientists, philosophers and artists to opine on a major question of the moment. This year it was, "What scientific concept would improve everybody's cognitive toolkit?"
  • the public often misunderstands the scientific process and the nature of scientific doubt. This can fuel public rows over the significance of disagreements between scientists about controversial issues such as climate change and vaccine safety.
  • Carlo Rovelli, a physicist at the University of Aix-Marseille, emphasised the uselessness of certainty. He said that the idea of something being "scientifically proven" was practically an oxymoron and that the very foundation of science is to keep the door open to doubt.
  • "A good scientist is never 'certain'. Lack of certainty is precisely what makes conclusions more reliable than the conclusions of those who are certain: because the good scientist will be ready to shift to a different point of view if better elements of evidence, or novel arguments emerge. Therefore certainty is not only something of no use, but is in fact damaging, if we value reliability."
  • physicist Lawrence Krauss of Arizona State University agreed. "In the public parlance, uncertainty is a bad thing, implying a lack of rigour and predictability. The fact that global warming estimates are uncertain, for example, has been used by many to argue against any action at the present time," he said.
  • "However, uncertainty is a central component of what makes science successful. Being able to quantify uncertainty, and incorporate it into models, is what makes science quantitative, rather than qualitative. Indeed, no number, no measurement, no observable in science is exact. Quoting numbers without attaching an uncertainty to them implies they have, in essence, no meaning."
  • Neil Gershenfeld, director of the Massachusetts Institute of Technology's Centre for Bits and Atoms wants everyone to know that "truth" is just a model. "The most common misunderstanding about science is that scientists seek and find truth. They don't – they make and test models," he said.
  • "Building models is very different from proclaiming truths. It's a never-ending process of discovery and refinement, not a war to win or destination to reach. Uncertainty is intrinsic to the process of finding out what you don't know, not a weakness to avoid. Bugs are features – violations of expectations are opportunities to refine them. And decisions are made by evaluating what works better, not by invoking received wisdom."
  • writer and web commentator Clay Shirky suggested that people should think more carefully about how they see the world. His suggestion was the Pareto principle, a pattern whereby the top 1% of the population control 35% of the wealth or, on Twitter, the top 2% of users send 60% of the messages. Sometimes known as the "80/20 rule", the Pareto principle means that the average is far from the middle. It is applicable to many complex systems. "And yet, despite a century of scientific familiarity, samples drawn from Pareto distributions are routinely presented to the public as anomalies, which prevents us from thinking clearly about the world," said Shirky. "We should stop thinking that average family income and the income of the median family have anything to do with one another, or that enthusiastic and normal users of communications tools are doing similar things, or that extroverts should be only moderately more connected than normal people. We should stop thinking that the largest future earthquake or market panic will be as large as the largest historical one; the longer a system persists, the likelier it is that an event twice as large as all previous ones is coming." (A toy sample from a Pareto distribution, after this list, makes the mean-median gap concrete.)
  • Kevin Kelly, editor-at-large of Wired, pointed to the value of negative results. "We can learn nearly as much from an experiment that does not work as from one that does. Failure is not something to be avoided but rather something to be cultivated. That's a lesson from science that benefits not only laboratory research, but design, sport, engineering, art, entrepreneurship, and even daily life itself. All creative avenues yield the maximum when failures are embraced."
  • Michael Shermer, publisher of the Skeptic Magazine, wrote about the importance of thinking "bottom up not top down", since almost everything in nature and society happens this way.
  • But most people don't see things that way, said Shermer. "Bottom up reasoning is counterintuitive. This is why so many people believe that life was designed from the top down, and why so many think that economies must be designed and that countries should be ruled from the top down."
  • Roger Schank, a psychologist and computer scientist, proposed that we should all know the true meaning of "experimentation", which he said had been ruined by bad schooling, where pupils learn that scientists conduct experiments and if we copy exactly what they did in our high school labs we will get the results they got. "In effect we learn that experimentation is boring, is something done by scientists and has nothing to do with our daily lives." Instead, he said, proper experiments are all about assessing and gathering evidence. "In other words, the scientific activity that surrounds experimentation is about thinking clearly in the face of evidence obtained as the result of an experiment. But people who don't see their actions as experiments, and those who don't know how to reason carefully from data, will continue to learn less well from their own experiences than those who do."
  • Lisa Randall, a physicist at Harvard University, argued that perhaps "science" itself would be a useful concept for wider appreciation. "The idea that we can systematically understand certain aspects of the world and make predictions based on what we've learned – while appreciating and categorising the extent and limitations of what we know – plays a big role in how we think.
  • "Many words that summarise the nature of science such as 'cause and effect', 'predictions', and 'experiments', as well as words that describe probabilistic results such as 'mean', 'median', 'standard deviation', and the notion of 'probability' itself help us understand more specifically what this means and how to interpret the world and behaviour within it."
Weiye Loh

In the Dock, in Paris « EJIL: Talk! - 0 views

  • My entire professional life has been in the law, but nothing had prepared me for this. I have been a tenured faculty member at the finest institutions, most recently Harvard and NYU. I have held visiting appointments from Florence to Singapore, from Melbourne to Jerusalem. I have acted as legal counsel to governments on four continents, handled cases before the highest jurisdictions and arbitrated the most complex disputes among economic ‘super powers.’
  • Last week, for the first time I found myself in the dock, as a criminal defendant. The French Republic v Weiler on a charge of Criminal Defamation.
  • As Editor-in-Chief of the European Journal of International Law and its associated Book Reviewing website, I commissioned and then published a review of a book on the International Criminal Court. It was not a particularly favorable review. You may see all details here.  The author of the book, claiming defamation, demanded I remove it. I examined carefully the claim and concluded that the accusation was fanciful. Unflattering? Yes. Defamatory, by no stretch of imagination. It was my ‘Voltairian’ moment. I refused the request. I did offer to publish a reply by the author. This offer was declined.
  • Three months later I was summoned to appear before an Examining Magistrate in Paris based on a complaint of criminal defamation lodged by the author. Why Paris you might ask? Indeed. The author of the book was an Israeli academic. The book was in English. The publisher was Dutch. The reviewer was a distinguished German professor. The review was published on a New York website.
  • Beyond doubt, once a text or image goes online, it becomes available worldwide, including in France. But should that alone give jurisdiction to French courts in circumstances such as this? Does the fact that the author of the book, it turned out, retained her French nationality before going to live and work in Israel make a difference? Libel tourism — libel terrorism to some — is typically associated with London, where notoriously high legal fees and punitive damages coerce many to throw in the towel even before going to trial. Paris, as we would expect, is more egalitarian and less materialist. It is very plaintiff-friendly.
  • In France an attack on one’s honor is taken as seriously as a bodily attack. Substantively, if someone is defamed, the bad faith of the defamer is presumed, just as in our system, if someone slaps you in the face, it will be assumed that he intended to do so. Procedurally, it is open to anyone who feels defamed to avoid the costly civil route and simply lodge a criminal complaint. At this point the machinery of the State swings into action. For the defendant it is not without cost, I discovered. Even if I win I will not recover my considerable legal expenses, and conviction results in a fine the size of which may depend on one’s income (the egalitarian reflex at its best). But money is not the principal currency here. It is honor and shame. If I lose, I will stand convicted of a crime, branded a criminal. The complainant will not enjoy a windfall as in London, but considerable moral satisfaction. The chilling effect on book reviewing well beyond France will be considerable.
  • The case was otiose for two reasons: It was in our view an egregious instance of ‘forum shopping,’ legalese for libel tourism. We wanted it thrown out. But if successful, the Court would never get to the merits – and it was important to challenge this hugely dangerous attack on academic freedom and liberty of expression. Reversing custom, we specifically asked the Court not to examine our jurisdictional challenge as a preliminary matter but to join it to the case on the merits so that it would have the possibility to pronounce on both issues.
  • The trial was impeccable by any standard with which I am familiar. The Court comprised three judges specialized in defamation, plus the Public Prosecutor. Being a criminal case within the Inquisitorial System, the case began with my interrogation by the President of the Court. I was essentially asked to explain the reasons for refusing to remove the article. The President was patient with my French – fluent but bad! I was then interrogated by the other judges, the Public Prosecutor and the lawyers for the complainant. The complainant was then subjected to the same procedure, after which the lawyers made their (passionate) legal arguments. The Public Prosecutor then expressed her Opinion to the Court. I was allowed the last word. It was a strange mélange of the criminal and civil virtually unknown in the Common Law world. The procedure was less formal, aimed at establishing the truth, and far less hemmed in by rules of evidence and procedure. Due process was definitely served. It was a fair trial.
  • we steadfastly refused to engage the complainant’s challenges to the veracity of the critical statements made by the reviewer. The thrust of our argument was that absent bad faith and malice, so long as the review in question addressed the book and did not make false statements about the author, such as allegations of plagiarism, it should be shielded from libel claims, let alone criminal libel. Sorting out the truth should be left to academic discourse, even if academic discourse has its own biases and imperfections.
Weiye Loh

Adventures in Flay-land: James Delingpole and the "Science" of Denialism - 0 views

  • Perhaps, like me, you watched the BBC Two Horizon program on Monday night, presented by Sir Paul Nurse, president of the Royal Society, who won a Nobel Prize for his discovery of the genes of cell division.
  • James. He really believes there's some kind of mainstream science "warmist" conspiracy against the brave outliers who dare to challenge the consensus. He really believes that "climategate" is a real scandal. He fails to understand that it is a common practice in statistics to splice together two or more datasets where you know that the quality of data is patchy. In the case of "climategate", researchers found that indirect temperature measurements based on tree ring widths (the tree ring temperature proxy) are consistent with other proxy methods of recording temperature from before the start of the instrumental temperature record (around 1950) but begin to show a decline in temperature after that for reasons which are unclear. Actual temperature measurements, however, show the opposite. The researcher at the head of the climategate affair, Phil Jones, created a graph of the temperature record to include on the cover of a report for policy makers and journalists. For this graph he simply spliced together the tree ring proxy data up until 1950 with the recorded data after that, using statistical techniques to bring them into agreement (a toy version of this mean-matching-and-splicing operation appears after this list). What made this seem particularly dodgy was an email intercepted by a hacker in which Jones referred to this practice as a "Mike's Nature trick", referring to a paper published by his colleague Michael Mann in the journal Nature. It is, however, nothing out of the ordinary. Delingpole and others have talked about how this "trick" was used here to "hide the decline" revealed by the other dataset, as though this was some sort of deception. The fact that all parties were found to have behaved ethically is, to Delingpole, simply further evidence of the global warmist conspiracy. He takes it further and casts aspersions on scientific consensus and the entire peer review process.
  • When Nurse asked Delingpole the very straightforward question of whether he would be willing to trust a scientific consensus if he required treatment for cancer, he could have said "Gee, that's an interesting question. Let me think about that and why it's different."
  • ...7 more annotations...
  • Instead, he became defensive and lost his focus. Eventually he would make such regrettable statements as this one: "It is not my job to sit down and read peer-reviewed papers because I simply haven’t got the time, I haven’t got the scientific expertise… I am an interpreter of interpretation."
  • In a parallel universe where James Delingpole is not the "penis" that Ben Goldacre describes him to be, he might have said the following: Gee, that's an interesting question. Let me think about why it's different. (Thinks) Well, it seems to me that when evaluating a scientifically agreed treatment for a disease such as cancer, we have not only all the theory to peruse and the randomized and blinded trials, but also thousands if not millions of case studies where people have undergone the intervention. We have enough data to estimate a person's chances of recovery and to know that on average they will do better. When discussing climate change, we really only have the one case study. Just the one earth. And it's a patient that has not undergone any intervention. The scientific consensus is therefore entirely theoretical and intangible. This makes it more difficult for a lay person such as myself to trust it.
  • Sir Paul ended the program saying "Scientists have got to get out there… if we do not do that it will be filled by others who don’t understand the science, and who may be driven by politics and ideology."
  • If the proxy tracks the instrumental record from 1850 to 1960 but then diverges for unknown reasons, how do we know that the proxy is valid for reconstructing temperatures in periods prior to 1850?
  • This is a good question and one I'm not sure I can answer to anyone's satisfaction. We seem to have good agreement among several forms of temperature proxy going back centuries, and with direct measurements back to 1880. There is divergence in more recent years, and there are several theories as to why that might be. Some possible explanations are given here: http://www.skepticalscience.com/Tree-ring-proxies-divergence-problem.htm
  • In the physical world we can never be absolutely certain of anything. René Descartes showed it was impossible to prove that everything he sensed wasn't being manipulated by some invisible demon.
  • It is necessary to first make certain assumptions about the universe that we observe. After that, we can only go with the best theories available that allow us to make scientific progress.
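  • The "splice" described in the annotations above is easy to make concrete. What follows is a minimal sketch in Python with invented decadal anomaly values standing in for real data; the series, the numbers, and the 1880-1960 overlap window are illustrative assumptions, not the CRU's data or Phil Jones's actual procedure.

    # Calibrate a (synthetic) tree-ring proxy against a (synthetic)
    # instrumental record, report the post-1960 divergence, and splice.
    import numpy as np

    proxy_years = np.arange(1850, 1991, 10)                # decadal proxy record
    proxy = np.array([-0.32, -0.28, -0.30, -0.25, -0.22,
                      -0.18, -0.15, -0.10, -0.08, -0.05,
                      -0.02, -0.05, -0.15, -0.25, -0.35])  # declines after ~1960
    inst_years = np.arange(1880, 1991, 10)                 # instrumental record
    inst = np.array([-0.24, -0.20, -0.16, -0.12, -0.08,
                     -0.02,  0.02,  0.08,  0.12,  0.25,
                      0.35,  0.45])                        # keeps rising

    # 1. Calibrate: linearly rescale the proxy to match the instrumental
    #    record over the overlap where the two agree (1880-1960 here).
    p_mask = (proxy_years >= 1880) & (proxy_years <= 1960)
    i_mask = inst_years <= 1960
    slope, intercept = np.polyfit(proxy[p_mask], inst[i_mask], 1)
    proxy_cal = slope * proxy + intercept

    # 2. Quantify the post-1960 divergence instead of hiding it.
    for y in proxy_years[proxy_years > 1960]:
        p = proxy_cal[proxy_years == y][0]
        t = inst[inst_years == y][0]
        print(f"{y}: proxy {p:+.2f}, instrumental {t:+.2f}, gap {t - p:+.2f}")

    # 3. The "splice": calibrated proxy up to 1960, instrumental after.
    spliced_years = np.concatenate([proxy_years[proxy_years <= 1960],
                                    inst_years[inst_years > 1960]])
    spliced = np.concatenate([proxy_cal[proxy_years <= 1960],
                              inst[inst_years > 1960]])

    Splicing after calibration over an overlap window is routine; the point of step 2 is the transparency argument made above, namely that a legitimate graph flags the join and reports the divergence rather than concealing it.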
Weiye Loh

BBC News - Should victims have a say in sentencing criminals? - 0 views

  • If someone does you wrong, should you have a say in their punishment?
  • Should victims have a say in sentencing criminals? That partly depends on what you mean by "have a say". A weak form of involvement would have a judge listen to a statement from victims, but ensure the judge alone does the sentencing. A slightly stronger form would consider the impact on victims as part of assessing the moral seriousness of the crime. The strongest form would give victims a direct say in the type of sentence. So which is the most just?
  • A utilitarian approach, which seeks people's greatest happiness and is associated with the British philosopher Jeremy Bentham, can provide one reason why victims should, in part, play judge. It can be called the therapeutic argument.
  • ...6 more annotations...
  • However, this might backfire. Given the choice, many victims might desire longer sentences than the judiciary would allow. When that desire is not satisfied, their anguish might be exacerbated. The therapeutic argument has also been called the "Oprahisation" of sentencing.
  • The second, Kantian, approach emphasises reason and rights.
  • It stresses the law should be rational, and that includes keeping careful tabs on the irrational feelings that are inevitably present during legal proceedings. This would be harder to do, the more the voice of victims is heard.
  • More seriously still, strong forms of victim sentencing would reflect the capabilities of the victim. A victim who could powerfully express their feelings might win a longer sentence. That would be irrational because it would suggest that a crime is more serious if the victim is more articulate.
  • Taking considerations of moral seriousness into account would fit within a third approach, one that stresses the common good and virtue and is associated with Aristotle. Understanding the moral seriousness of a crime is important because it helps the criminal to take responsibility for what they've done. Victim feelings are also a crucial component in so-called restorative justice, in which the criminal is confronted with their crime, perhaps by meeting the victim. Would you want to meet the person who did this to you?
  • A virtue ethics approach would be concerned with the moral state of the victim too. Victims may need to forgive those who have wronged them in order to flourish in the future. An impersonal legal system that does not allow victims a say might actually help with that, as it ensures objectivity.