
Home/ New Media Ethics 2009 course/ Group items tagged conference


Weiye Loh

Google's Next Mission: Fighting Violent Extremism | Fast Company - 0 views

  • Technology, of course, is playing a role both in recruiting members to extremist groups, as well as fueling pro-democracy and other movements--and that’s where Google’s interest lies. "Technology is a part of every challenge in the world, and a part of every solution,” Cohen tells Fast Company. "To the extent that we can bring that technology expertise, and mesh it with the Council on Foreign Relations’ academic expertise--and mesh all of that with the expertise of those who have had these experiences--that's a valuable network to explore these questions."
  • Cohen is the former State Department staffer who is best known for his efforts to bring technology into the country’s diplomatic efforts. But he was originally hired by Condoleezza Rice back in 2006 for a different--though related--purpose: to help Foggy Bottom better understand Middle Eastern youths (many of whom were big technology adopters) and how they could best be "deradicalized." Last fall, Cohen joined Google as head of its nascent Google Ideas, which the company is labeling a "think/do tank."
  • This summer’s conference, "Summit Against Violent Extremism," takes place June 26-29 and will bring together about 50 former members of extremist groups--including former neo-Nazis, Muslim fundamentalists, and U.S. gang members--along with another 200 representatives from civil society organizations, academia, private corporations, and victims groups. The hope is to identify some common factors that cause young people to join violent organizations, and to form a network of people working on the issue who can collaborate going forward.
  • One of the places where extremism is playing out these days is in Google’s own backyard. While citizen empowerment movements have made use of YouTube to broadcast their messages, so have terrorist and other groups. Just this week, anti-Hamas extremists kidnapped an Italian peace activist and posted their hostage video to YouTube before eventually murdering him. YouTube has been criticized in the past for not removing violent videos quickly enough. But Cohen says the conference is looking at the root causes that prompt a young person to join one of the groups in the first place. "There are a lot of different dimensions to this challenge," he says. "It’s important not to conflate everything."
  •  
    Neo-Nazi groups and al Qaeda might not seem to have much in common, but they do in one key respect: their recruits tend to be very young. The head of Google's new think tank, Jared Cohen, believes there might be some common reasons why young people are drawn to violent extremist groups, no matter their ideological or philosophical bent. So this summer, Cohen is spearheading a conference, in Dublin, Ireland, to explore what it is that draws young people to these groups and what can be done to redirect them.
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients who had the syndrome with a group of 650 (matched for sex and age) who didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
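Fisher's procedure is easy to sketch numerically. The following is a minimal, illustrative simulation (the sample size, yield variance, and observed difference are all made-up numbers, not from the article): it estimates a one-sided P value as the fraction of null-hypothesis worlds that produce a difference at least as large as the one observed.

```python
import random

random.seed(1)

def p_value(observed_diff, n=20, sd=1.0, trials=10_000):
    """Estimate the one-sided P value: the chance of seeing a mean
    yield difference of at least observed_diff between two groups of
    n plots if fertilizer has no effect (the null hypothesis)."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, sd) for _ in range(n)]  # unfertilized plots
        b = [random.gauss(0, sd) for _ in range(n)]  # "fertilized" plots; null says no effect
        if sum(b) / n - sum(a) / n >= observed_diff:
            hits += 1
    return hits / trials

small_p = p_value(0.8)  # a large observed difference yields P < .05 here
```

Even when small_p falls below .05, the two possibilities described above remain: a real effect, or an improbable fluke. The calculation itself cannot distinguish them.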
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretation?
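The dog analogy above can be put in numbers. This sketch uses assumed values (a 5 percent bark rate when well-fed, a 50 percent bark rate when hungry, and a dog that is well-fed 99 percent of the time) purely to illustrate the transposed conditional:

```python
# Assumed numbers, chosen only to illustrate the fallacy.
p_bark_given_wellfed = 0.05   # plays the role of a P value: P(result | null)
p_bark_given_hungry = 0.50    # hungry dogs bark often
p_wellfed = 0.99              # the dog is almost always well-fed

# Total probability of hearing a bark (law of total probability).
p_bark = (p_bark_given_wellfed * p_wellfed
          + p_bark_given_hungry * (1 - p_wellfed))

# The quantity people actually want: P(well-fed | bark). It is not 1 - 0.05.
p_wellfed_given_bark = p_bark_given_wellfed * p_wellfed / p_bark
```

Here p_wellfed_given_bark comes out near 0.91: even after observing the "rare" bark, the dog is almost certainly still well-fed, because nearly all barks come from well-fed dogs in the first place. Treating the 5 percent as if it were 1 minus this number is exactly the error the article describes.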
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
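The large-sample point can be made concrete. In this sketch all numbers are hypothetical: a drug that adds about two cures per thousand patients becomes "statistically significant" simply because the trial is enormous.

```python
import math

p_old, p_new = 0.300, 0.302   # cure rates: ~2 extra cures per 1,000 treated
n = 2_000_000                 # patients per arm; the huge sample is the point

# Standard two-proportion z statistic with a pooled rate.
p_pool = (p_old + p_new) / 2
se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
z = (p_new - p_old) / se      # about 4.4, far beyond the 1.96 cutoff for P < .05
```

A z of roughly 4.4 is "significant" at any conventional threshold, yet the clinical benefit is negligible, which is the distinction the excerpt says papers routinely blur.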
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
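A quick simulation shows how fast multiplicity inflates flukes. The 100 hypothetical null tests are an assumption for illustration; the 85-variant figure is the one quoted earlier from the gene studies.

```python
import random

random.seed(0)

# 100 hypothetical tests of drugs that all truly do nothing;
# each has a 5% chance of clearing the P < .05 bar by luck.
n_tests = 100
false_positives = sum(random.random() < 0.05 for _ in range(n_tests))

# Chance that at least one of the 100 null tests looks "significant".
p_at_least_one = 1 - 0.95 ** n_tests   # about 0.994

# The same arithmetic applied to the 85 genetic variants mentioned earlier:
expected_hits = 85 * 0.05              # ~4 false "susceptibility factors" expected
```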
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
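The kinship between the two procedures can be shown directly. With a hypothetical effect estimate and standard error (both assumed numbers), the 95 percent interval excludes zero exactly when the z test clears the .05 bar:

```python
# Hypothetical study result: effect estimate 2.0 with standard error 0.9.
estimate, se = 2.0, 0.9

ci_low = estimate - 1.96 * se    # 95% confidence interval, normal approximation
ci_high = estimate + 1.96 * se

excludes_zero = ci_low > 0 or ci_high < 0
significant_at_05 = abs(estimate / se) > 1.96
# The two checks always agree: the interval is the test, rearranged.
```

Because the interval is built from the same machinery as the P value, it conveys precision better but inherits the same interpretive problems.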
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
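The coin-flip remark scales up in exactly the way the passage warns. Assuming 5,000 trials in progress (an illustrative count, not a figure from the article):

```python
# A run of 10 straight heads is rare for any single penny...
p_ten_heads = 0.5 ** 10                           # about 1 in 1,024

# ...but across thousands of independent tries it becomes expected.
n_trials = 5_000                                  # assumed number of ongoing trials
p_somewhere = 1 - (1 - p_ten_heads) ** n_trials   # about 0.99
```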
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
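The raw Avandia figures quoted above make the controversy easy to see in one line of arithmetic:

```python
# Rates per 10,000 patients, as quoted from the combined trials.
avandia_rate = 55 / 10_000
comparison_rate = 59 / 10_000

raw_risk_ratio = avandia_rate / comparison_rate   # about 0.93
# The unadjusted numbers alone show no excess risk; the reported
# increase emerged only after the meta-analytic adjustments.
```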
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
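Why replication raises confidence so quickly is simple arithmetic. Suppose, purely for illustration, that any single positive result has a 30 percent chance of being a fluke (an assumed figure, not one from the article):

```python
p_fluke = 0.30                   # assumed chance that one positive study is a fluke

# Independent replications must all be flukes together for the
# conclusion to be wrong:
p_two_flukes = p_fluke ** 2      # 9% after one successful replication
p_three_flukes = p_fluke ** 3    # 2.7% after two
```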
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
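The diagnostic example can be sketched with assumed numbers: a disease with 1 percent prevalence (the Bayesian prior), a test that detects it 99 percent of the time, and a 5 percent false-positive rate.

```python
prior = 0.01          # prevalence: P(disease), the Bayesian prior probability
sensitivity = 0.99    # P(positive test | disease)
false_pos = 0.05      # P(positive test | no disease)

# Bayes' theorem: update the prior with the observed test result.
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive   # P(disease | positive), about 0.17
```

Even after a positive result, the disease is still unlikely, because the prior prevalence is so low; ignoring that prevalence is precisely the omission the excerpt attributes to frequentist significance testing.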
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
Weiye Loh

Too Hot for TED: Income Inequality - Jim Tankersley - NationalJournal.com - 0 views

  • TED organizers invited a multimillionaire Seattle venture capitalist named Nick Hanauer – the first nonfamily investor in Amazon.com – to give a speech on March 1 at their TED University conference. Inequality was the topic – specifically, Hanauer’s contention that the middle class, and not wealthy innovators like himself, are America’s true “job creators.”
  • You can’t find that speech online. TED officials told Hanauer initially they were eager to distribute it. “I want to put this talk out into the world!” one of them wrote him in an e-mail in late April. But early this month they changed course, telling Hanauer that his remarks were too “political” and too controversial for posting.
  • "Many of the talks given at the conference or at TED-U are not released,” Anderson wrote. “We only release one a day on TED.com and there's a backlog of amazing talks from all over the world. We do not comment publicly on reasons to release or not release [a] talk. It's unfair on the speakers concerned. But we have a general policy to avoid talks that are overtly partisan, and to avoid talks that have received mediocre audience ratings."
  •  
    There's one idea, though, that TED's organizers recently decided was too controversial to spread: the notion that widening income inequality is a bad thing for America, and that as a result, the rich should pay more in taxes.
Weiye Loh

Edge 324 - 0 views

  •  
    THE NEW SCIENCE OF MORALITY An Edge Conference
Weiye Loh

RealClimate: Feedback on Cloud Feedback - 0 views

  • I have a paper in this week’s issue of Science on the cloud feedback
  • clouds are important regulators of the amount of energy in and out of the climate system. Clouds both reflect sunlight back to space and trap infrared radiation and keep it from escaping to space. Changes in clouds can therefore have profound impacts on our climate.
  • A positive cloud feedback loop posits a scenario whereby an initial warming of the planet, caused, for example, by increases in greenhouse gases, causes clouds to trap more energy and lead to further warming. Such a process amplifies the direct heating by greenhouse gases. Models have long predicted this, but testing the models has proved difficult.
  • Making the issue even more contentious, some of the more credible skeptics out there (e.g., Lindzen, Spencer) have been arguing that clouds behave quite differently from that predicted by models. In fact, they argue, clouds will stabilize the climate and prevent climate change from occurring (i.e., clouds will provide a negative feedback).
  • In my new paper, I calculate the energy trapped by clouds and observe how it varies as the climate warms and cools during El Nino-Southern Oscillation (ENSO) cycles. I find that, as the climate warms, clouds trap an additional 0.54±0.74 W/m2 for every degree of warming. Thus, the cloud feedback is likely positive, but I cannot rule out a slight negative feedback.
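As a rough check on the "likely positive, but a slight negative cannot be ruled out" wording, one can ask how much probability a Gaussian estimate leaves below zero. This sketch assumes the quoted ±0.74 W/m2 is a 2-sigma (95 percent) range, which is an interpretive assumption, not something stated in the excerpt:

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X < x) for a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mu = 0.54           # best estimate, W/m2 per degree of warming (from the excerpt)
sigma = 0.74 / 2    # assumption: the +/-0.74 read as a 2-sigma range

p_negative = normal_cdf(0.0, mu, sigma)   # roughly 0.07
```

Under that reading, a negative feedback gets only a few percent of the probability, consistent with the text; reading ±0.74 as a 1-sigma range instead would raise it to roughly a quarter.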
  • while a slight negative feedback cannot be ruled out, the data do not support a negative feedback large enough to substantially cancel the well-established positive feedbacks, such as water vapor, as Lindzen and Spencer would argue.
  • I have also compared the results to climate models. Taken as a group, the models substantially reproduce the observations. This increases my confidence that the models are accurately simulating the variations of clouds with climate change.
  • Dr. Spencer is arguing that clouds are causing ENSO cycles, so the direction of causality in my analysis is incorrect and my conclusions are in error. After reading this, I initiated a cordial and useful exchange of e-mails with Dr. Spencer (you can read the full e-mail exchange here). We ultimately agreed that the fundamental disagreement between us is over what causes ENSO. Short paraphrase: Spencer: ENSO is caused by clouds. You cannot infer the response of clouds to surface temperature in such a situation. Dessler: ENSO is not caused by clouds, but is driven by internal dynamics of the ocean-atmosphere system. Clouds may amplify the warming, and that’s the cloud feedback I’m trying to measure.
  • My position is the mainstream one, backed up by decades of research. This mainstream theory is quite successful at simulating almost all of the aspects of ENSO. Dr. Spencer, on the other hand, is as far out of the mainstream when it comes to ENSO as he is when it comes to climate change. He is advancing here a completely new and untested theory of ENSO — based on just one figure in one of his papers (and, as I told him in one of our e-mails, there are other interpretations of those data that do not agree with his interpretation). Thus, the burden of proof is on Dr. Spencer to show that his theory of causality during ENSO is correct. He is, at present, far from meeting that burden. And until Dr. Spencer satisfies this burden, I don’t think anyone can take his criticisms seriously.
  • It’s also worth noting that the picture I’m painting of our disagreement (and backed up by the e-mail exchange linked above) is quite different from the picture provided by Dr. Spencer on his blog. His blog is full of conspiracies and purposeful suppression of the truth. In particular, he accuses me of ignoring his work. But as you can see, I have not ignored it — I have dismissed it because I think it has no merit. That’s quite different. I would also like to respond to his accusation that the timing of the paper is somehow connected to the IPCC’s meeting in Cancun. I can assure everyone that no one pressured me in any aspect of the publication of this paper. As Dr. Spencer knows well, authors have no control over when a paper ultimately gets published. And as far as my interest in influencing the policy debate goes, I’ll just say that I’m in College Station this week, while Dr. Spencer is in Cancun. In fact, Dr. Spencer had a press conference in Cancun — about my paper. I didn’t have a press conference about my paper. Draw your own conclusion.
  • This is but another example of how climate scientists are being played by the denialists. You attempted to discuss the issue with Spencer as if he were only doing science. But he is not. He is doing science and politics, and he has no compunction about sandbagging you. There is no gain to you in trying to deal with people like Spencer and Lindzen as colleagues. They are not trustworthy.
Weiye Loh

Meta-analysis - PsychWiki - A Collaborative Psychology Wiki - 0 views

  • A meta-analysis is only informative if it adequately summarizes the existing literature, so a thorough literature search is critical to retrieve every relevant study, using methods such as database searches, the ancestry approach, the descendancy approach, hand searching, and the “invisible college” (i.e., the network of researchers who know about unpublished studies, conference proceedings, etc.). For more information, see Johnson & Eagly (2000), in the Handbook of Research Methods in Social and Personality Psychology, which details five general ways to retrieve relevant articles.
    • Weiye Loh
       
      How is one able to know that one has exhausted the "invisible college?" Perhaps we need an official record or a database of unpublished studies, conference proceedings, etc. 
Weiye Loh

Talking Philosophy | Ethicists, Courtesy & Morals - 0 views

  • research raises questions about the extent to which studying ethics improves moral behavior. To the extent that practical effect is among one’s aims in studying (or as an administrator, in requiring) philosophy, I think there is reason for concern. I’m inclined to think that either philosophy should be justified differently, or we should work harder to try to figure out whether there is a *way* of studying philosophy that is more effective in changing moral behavior than the ordinary (21st century, Anglophone) way of studying philosophy is.”
  • I think it’s fairly common that professionals in any field are skeptical about it. Professional politicians are much more skeptical or even cynical about politics than your average informed citizen. Most of the doctors whom I’ve talked to off the record are fairly skeptical about the merits of medical care. Those who specialize in giving investment “advice” will generally admit that they have no idea about the future of markets with the inevitable comment: “if I really knew how the market will react, I’d be on my yacht, not advising you”.
  •  
    For all their pondering on matters moral, ethicists are no better mannered than other philosophers, and they behave no better morally than other philosophers or other academics either. Or such, at least, are the conclusions suggested by the research of philosophers Eric Schwitzgebel (of the University of California, Riverside) and Joshua Rust (of Stetson University, Florida). In "Ethicists' Courtesy at Philosophy Conferences," recently published in Philosophical Psychology, Schwitzgebel and Rust report on a study suggesting that audiences in ethics sessions do not behave any better than those attending seminars on other areas of philosophy. Not when it comes to talking audibly whilst a speaker is addressing the room, and not when it comes to 'allowing the door to slam shut while entering or exiting mid-session'. And though, appropriately enough, "audiences in environmental ethics sessions … appear to leave behind less trash," generally speaking the ethicists are just as likely to leave a mess as the epistemologists and metaphysicians.
Weiye Loh

Climate Researchers Urged To Use 'Plain Language' - Science News - redOrbit - 0 views

  • James White of the University of Colorado at Boulder told fellow researchers to use plain language when describing their research to a general audience. Focusing on the report’s technical details could obscure the basic science. To put it bluntly, “if you put more greenhouse gases in the atmosphere, it will get warmer,” he said. US climate scientist Robert Corell said it was pertinent to try to reach out to all members of society to spread awareness of Arctic melt and the impact it has on the whole world. “Stop speaking in code. Rather than ‘anthropogenic,’ you could say ‘human caused,’” Corell said at the conference of nearly 400 scientists.
Weiye Loh

Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Refl... - 0 views

  • The ideal that these nerdy revolutionaries are pursuing is not, as with previous generations, justice, freedom, or democracy; rather, it is “openness,” as in Open Data, Open Information, Open Government. Precisely what is meant by “openness” is never (at least certainly not in the context of this conference) really defined in a form that an outsider could grapple with (and perhaps critique).
  • the “open data/open government” movement begins from a profoundly political perspective that government is largely ineffective and inefficient (and possibly corrupt) and that it hides that ineffectiveness and inefficiency (and possible corruption) from public scrutiny through lack of transparency in its operations and particularly in denying to the public access to information (data) about its operations.
  • further that this access once available would give citizens the means to hold bureaucrats (and their political masters) accountable for their actions. In doing so it would give these self-same citizens a platform on which to undertake (or at least collaborate with) these bureaucrats in certain key and significant activities—planning, analyzing, budgeting that sort of thing. Moreover through the implementation of processes of crowdsourcing this would also provide the bureaucrats with the overwhelming benefits of having access to and input from the knowledge and wisdom of the broader interested public.
  • ...3 more annotations...
  • A lot of the conference took place in specialized workshops covering the technical details: how to link various sets of this newly available data with other sets, how to structure the data so that it could serve various purposes, and, perhaps most importantly, how to design the architecture and ontology (ultimately the management policies and procedures) of the data itself within government so that it is “born open” rather than liberated only after the fact. Data that is born open is much more useful in the larger world of open and universally accessible data.
  • It’s the taxpayer’s money and they have the right to participate in overseeing how it is spent. Having “open” access to government data/information gives citizens the tools to exercise that right. And (it is argued) solutions are available for putting into the hands of these citizens the means/technical tools for sifting, sorting, and making critical analyses of government activities, if only the key could be turned and government data made “accessible” (“open”).
  • it matters very much who the (anticipated) user is, since what is being put in place are the frameworks for the data environment of the future. These will, for the most part, embed assumptions about who the ultimate user is or will be, and about whether a new “data divide” will emerge, written more deeply into the fabric of the Information Society than even the earlier “digital (access) divide.”
Weiye Loh

Writing and Speaking - 0 views

  • As you decrease the intelligence of the audience, being a good speaker is increasingly a matter of being a good bullshitter. That's true in writing too of course, but the descent is steeper with talks. Any given person is dumber as a member of an audience than as a reader. Just as a speaker ad libbing can only spend as long thinking about each sentence as it takes to say it, a person hearing a talk can only spend as long thinking about each sentence as it takes to hear it. Plus people in an audience are always affected by the reactions of those around them, and the reactions that spread from person to person in an audience are disproportionately the more brutish sort, just as low notes travel through walls better than high ones. Every audience is an incipient mob, and a good speaker uses that. Part of the reason I laughed so much at the talk by the good speaker at that conference was that everyone else did.
  •  
    I'm not a very good speaker. I say "um" a lot. Sometimes I have to pause when I lose my train of thought. I wish I were a better speaker. But I don't wish I were a better speaker like I wish I were a better writer. What I really want is to have good ideas, and that's a much bigger part of being a good writer than being a good speaker. Having good ideas is most of writing well. If you know what you're talking about, you can say it in the plainest words and you'll be perceived as having a good style. With speaking it's the opposite: having good ideas is an alarmingly small component of being a good speaker.
Weiye Loh

Hermits and Cranks: Lessons from Martin Gardner on Recognizing Pseudoscientists: Scient... - 0 views

  • In 1950 Martin Gardner published an article in the Antioch Review entitled "The Hermit Scientist," about what we would today call pseudoscientists.
  • there has been some progress since Gardner offered his first criticisms of pseudoscience. Now largely antiquated are his chapters on believers in a flat Earth, a hollow Earth, Atlantis and Lemuria, Alfred William Lawson, Roger Babson, Trofim Lysenko, Wilhelm Reich and Alfred Korzybski. But disturbingly, a good two thirds of the book's contents are relevant today, including Gardner's discussions of homeopathy, naturopathy, osteopathy, iridiagnosis (reading the iris of the eye to determine bodily malfunctions), food faddists, cancer cures and other forms of medical quackery, Edgar Cayce, the Great Pyramid's alleged mystical powers, handwriting analysis, ESP and PK (psychokinesis), reincarnation, dowsing rods, eccentric sexual theories, and theories of group racial differences.
  • The "hermit scientist," a youthful Gardner wrote, works alone and is ignored by mainstream scientists. "Such neglect, of course, only strengthens the convictions of the self-declared genius."
  • ...5 more annotations...
  • Even then Gardner was bemoaning that some beliefs never seem to go out of vogue, as he recalled an H. L. Mencken quip from the 1920s: "Heave an egg out of a Pullman window, and you will hit a Fundamentalist almost anywhere in the U.S. today." Gardner cautions that when religious superstition should be on the wane, it is easy "to forget that thousands of high school teachers of biology, in many of our southern states, are still afraid to teach the theory of evolution for fear of losing their jobs." Today creationism has spread northward and mutated into the oxymoronic form of "creation science."
  • the differences between science and pseudoscience. On the one extreme we have ideas that are most certainly false, "such as the dianetic view that a one-day-old embryo can make sound recordings of its mother's conversation." In the borderlands between the two "are theories advanced as working hypotheses, but highly debatable because of the lack of sufficient data." Of these Gardner selects a most propitious example: "the theory that the universe is expanding." That theory would now fall at the other extreme end of the spectrum, where lie "theories almost certainly true, such as the belief that the Earth is round or that men and beasts are distant cousins."
  • How can we tell if someone is a scientific crank? Gardner offers this advice: (1) "First and most important of these traits is that cranks work in almost total isolation from their colleagues." Cranks typically do not understand how the scientific process operates—that they need to try out their ideas on colleagues, attend conferences and publish their hypotheses in peer-reviewed journals before announcing to the world their startling discovery. Of course, when you explain this to them they say that their ideas are too radical for the conservative scientific establishment to accept.
  • (2) "A second characteristic of the pseudo-scientist, which greatly strengthens his isolation, is a tendency toward paranoia," which manifests itself in several ways: (1) He considers himself a genius. (2) He regards his colleagues, without exception, as ignorant blockheads....(3) He believes himself unjustly persecuted and discriminated against. The recognized societies refuse to let him lecture. The journals reject his papers and either ignore his books or assign them to "enemies" for review. It is all part of a dastardly plot. It never occurs to the crank that this opposition may be due to error in his work....(4) He has strong compulsions to focus his attacks on the greatest scientists and the best-established theories. When Newton was the outstanding name in physics, eccentric works in that science were violently anti-Newton. Today, with Einstein the father-symbol of authority, a crank theory of physics is likely to attack Einstein....(5) He often has a tendency to write in a complex jargon, in many cases making use of terms and phrases he himself has coined.
  • "If the present trend continues," Gardner concludes, "we can expect a wide variety of these men, with theories yet unimaginable, to put in their appearance in the years immediately ahead. They will write impressive books, give inspiring lectures, organize exciting cults. They may achieve a following of one—or one million. In any case, it will be well for ourselves and for society if we are on our guard against them."
  •  
    May 23, 2010 | Hermits and Cranks: Lessons from Martin Gardner on Recognizing Pseudoscientists | Fifty years ago Gardner launched the modern skeptical movement. Unfortunately, much of what he wrote about is still current today. By Michael Shermer
Weiye Loh

Liberal Democrat conference | Libel laws silence scientists | Richard Dawkins | Comment... - 0 views

  • Scientists often disagree with one another, sometimes passionately. But they don't go to court to sort out their differences, they go into the lab, repeat the experiments, carefully examine the controls and the statistical analysis. We care about whether something is true, supported by the evidence. We are not interested in whether somebody sincerely believes he is right.
    • Weiye Loh
       
      Exactly the reason why appeals to faith cannot work in secularism!!! Unfortunately, people who are unable to prove their point usually resort to underhand straw-in-nose methods; throw enough shit and hopefully some will stay.
  • Why doesn't it submit its case to the higher court of scientific test? I think we all know the answer.
Magdaleine

Workplace Surveillance - 5 views

Link: http://news.cnet.com/Judges-protest-workplace-surveillance/2100-1023_3-271457.html Summary: A panel of influential judges are taking a closer look at the issue of electronic monitoring at ...

workplace surveillance

Elaine Ong

Turning dolls into babies - 6 views

http://www.chroniclelive.co.uk/north-east-news/todays-evening-chronicle/2007/09/11/when-does-a-doll-become-a-baby-72703-19770082/ Just to share an interesting article about how dolls nowadays are ...

started by Elaine Ong on 25 Aug 09 no follow-up yet
Weiye Loh

Op-Ed Columnist - The Moral Naturalists - NYTimes.com - 0 views

  • Moral naturalists, on the other hand, believe that we have moral sentiments that have emerged from a long history of relationships. To learn about morality, you don’t rely upon revelation or metaphysics; you observe people as they live.
  • By the time humans came around, evolution had forged a pretty firm foundation for a moral sense. Jonathan Haidt of the University of Virginia argues that this moral sense is like our sense of taste. We have natural receptors that help us pick up sweetness and saltiness. In the same way, we have natural receptors that help us recognize fairness and cruelty. Just as a few universal tastes can grow into many different cuisines, a few moral senses can grow into many different moral cultures.
  • Paul Bloom of Yale noted that this moral sense can be observed early in life. Bloom and his colleagues conducted an experiment in which they showed babies a scene featuring one figure struggling to climb a hill, another figure trying to help it, and a third trying to hinder it. At as early as six months, the babies showed a preference for the helper over the hinderer. In some plays, there is a second act. The hindering figure is either punished or rewarded. In this case, 8-month-olds preferred a character who was punishing the hinderer over ones being nice to it.
  • ...6 more annotations...
  • This illustrates, Bloom says, that people have a rudimentary sense of justice from a very early age. This doesn’t make people naturally good. If you give a 3-year-old two pieces of candy and ask him if he wants to share one of them, he will almost certainly say no. It’s not until age 7 or 8 that even half the children are willing to share. But it does mean that social norms fall upon prepared ground. We come equipped to learn fairness and other virtues.
  • If you ask for donations with the photo and name of one sick child, you are likely to get twice as much money as if you had asked for donations with a photo and the names of eight children. Our minds respond more powerfully to the plight of an individual than to the plight of a group.
  • If you are in a bad mood you will make harsher moral judgments than if you’re in a good mood or have just seen a comedy. As Elizabeth Phelps of New York University points out, feelings of disgust will evoke a desire to expel things, even those things unrelated to your original mood. General fear makes people risk-averse. Anger makes them risk-seeking.
  • People who behave morally don’t generally do it because they have greater knowledge; they do it because they have a greater sensitivity to other people’s points of view.
  • The moral naturalists differ over what role reason plays in moral judgments. Some, like Haidt, believe that we make moral judgments intuitively and then construct justifications after the fact. Others, like Joshua Greene of Harvard, liken moral thinking to a camera. Most of the time we rely on the automatic point-and-shoot process, but occasionally we use deliberation to override the quick and easy method.
  • For people wary of abstract theorizing, it’s nice to see people investigating morality in ways that are concrete and empirical. But their approach does have certain implicit tendencies. They emphasize group cohesion over individual dissent. They emphasize the cooperative virtues, like empathy, over the competitive virtues, like the thirst for recognition and superiority. At this conference, they barely mentioned the yearning for transcendence and the sacred, which plays such a major role in every human society. Their implied description of the moral life is gentle, fair and grounded. But it is all lower case. So far, at least, it might not satisfy those who want their morality to be awesome, formidable, transcendent or great.
  •  
    The Moral Naturalists By DAVID BROOKS Published: July 22, 2010
Weiye Loh

CancerGuide: The Median Isn't the Message - 0 views

  • Statistics recognizes different measures of an "average," or central tendency. The mean is our usual concept of an overall average - add up the items and divide them by the number of sharers
  • The median, a different measure of central tendency, is the half-way point.
  • A politician in power might say with pride, "The mean income of our citizens is $15,000 per year." The leader of the opposition might retort, "But half our citizens make less than $10,000 per year." Both are right, but neither cites a statistic with impassive objectivity. The first invokes a mean, the second a median. (Means are higher than medians in such cases because one millionaire may outweigh hundreds of poor people in setting a mean; but he can balance only one mendicant in calculating a median).
  • ...7 more annotations...
  • The larger issue that creates a common distrust or contempt for statistics is more troubling. Many people make an unfortunate and invalid separation between heart and mind, or feeling and intellect. In some contemporary traditions, abetted by attitudes stereotypically centered on Southern California, feelings are exalted as more "real" and the only proper basis for action - if it feels good, do it - while intellect gets short shrift as a hang-up of outmoded elitism. Statistics, in this absurd dichotomy, often become the symbol of the enemy. As Hilaire Belloc wrote, "Statistics are the triumph of the quantitative method, and the quantitative method is the victory of sterility and death."
  • This is a personal story of statistics, properly interpreted, as profoundly nurturant and life-giving. It declares holy war on the downgrading of intellect by telling a small story about the utility of dry, academic knowledge about science. Heart and head are focal points of one body, one personality.
  • We still carry the historical baggage of a Platonic heritage that seeks sharp essences and definite boundaries. (Thus we hope to find an unambiguous "beginning of life" or "definition of death," although nature often comes to us as irreducible continua.) This Platonic heritage, with its emphasis on clear distinctions and separated immutable entities, leads us to view statistical measures of central tendency wrongly, indeed opposite to the appropriate interpretation in our actual world of variation, shadings, and continua. In short, we view means and medians as the hard "realities," and the variation that permits their calculation as a set of transient and imperfect measurements of this hidden essence. If the median is the reality and variation around the median just a device for its calculation, then "I will probably be dead in eight months" may pass as a reasonable interpretation.
  • But all evolutionary biologists know that variation itself is nature's only irreducible essence. Variation is the hard reality, not a set of imperfect measures for a central tendency. Means and medians are the abstractions. Therefore, I looked at the mesothelioma statistics quite differently - and not only because I am an optimist who tends to see the doughnut instead of the hole, but primarily because I know that variation itself is the reality. I had to place myself amidst the variation. When I learned about the eight-month median, my first intellectual reaction was: fine, half the people will live longer; now what are my chances of being in that half. I read for a furious and nervous hour and concluded, with relief: damned good. I possessed every one of the characteristics conferring a probability of longer life: I was young; my disease had been recognized in a relatively early stage; I would receive the nation's best medical treatment; I had the world to live for; I knew how to read the data properly and not despair.
  • Another technical point then added even more solace. I immediately recognized that the distribution of variation about the eight-month median would almost surely be what statisticians call "right skewed." (In a symmetrical distribution, the profile of variation to the left of the central tendency is a mirror image of variation to the right. In skewed distributions, variation to one side of the central tendency is more stretched out - left skewed if extended to the left, right skewed if stretched out to the right.) The distribution of variation had to be right skewed, I reasoned. After all, the left of the distribution contains an irrevocable lower boundary of zero (since mesothelioma can only be identified at death or before). Thus, there isn't much room for the distribution's lower (or left) half - it must be scrunched up between zero and eight months. But the upper (or right) half can extend out for years and years, even if nobody ultimately survives. The distribution must be right skewed, and I needed to know how long the extended tail ran - for I had already concluded that my favorable profile made me a good candidate for that part of the curve.
  • The distribution was, indeed, strongly right skewed, with a long tail (however small) that extended for several years above the eight-month median. I saw no reason why I shouldn't be in that small tail, and I breathed a very long sigh of relief. My technical knowledge had helped. I had read the graph correctly. I had asked the right question and found the answers. I had obtained, in all probability, the most precious of all possible gifts in the circumstances - substantial time.
  • One final point about statistical distributions. They apply only to a prescribed set of circumstances - in this case to survival with mesothelioma under conventional modes of treatment. If circumstances change, the distribution may alter. I was placed on an experimental protocol of treatment and, if fortune holds, will be in the first cohort of a new distribution with high median and a right tail extending to death by natural causes at advanced old age.
  •  
    The Median Isn't the Message by Stephen Jay Gould
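
    Gould's reasoning about means, medians, and right-skewed tails can be sketched numerically. The snippet below is a minimal illustration assuming a lognormal survival model; the distribution, the median of 8 months, and the 24-month cutoff are illustrative assumptions, not Gould's actual mesothelioma data.

    ```python
    import math
    import random
    import statistics

    random.seed(42)

    # Lognormal samples are right skewed: the median is exp(mu), but the
    # long right tail drags the mean above it -- exactly the asymmetry
    # Gould describes. Pin the median at 8 "months" for the sketch.
    mu, sigma = math.log(8), 0.9
    times = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

    median = statistics.median(times)
    mean = statistics.mean(times)
    beyond_2yr = sum(t > 24 for t in times) / len(times)

    print(f"median survival: {median:.1f} months")    # ~8: half live longer
    print(f"mean survival:   {mean:.1f} months")      # pulled up by the tail
    print(f"past 24 months:  {beyond_2yr:.1%}")       # the tail Gould hoped for
    ```

    The mean lands well above the median (for a lognormal, mean = exp(mu + sigma²/2)), and a non-trivial fraction of cases sits far out in the right tail even though "the median is eight months" sounds uniformly grim — which is Gould's point about variation being the reality.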
Weiye Loh

A geophysiologist's thoughts on geoengineering - Philosophical Transactions A - 0 views

  • The Earth is now recognized as a self-regulating system that includes a reactive biosphere; the system maintains a long-term steady-state climate and surface chemical composition favourable for life. We are perturbing the steady state by changing the land surface from mainly forests to farm land and by adding greenhouse gases and aerosol pollutants to the air. We appear to have exceeded the natural capacity to counter our perturbation and consequently the system is changing to a new and as yet unknown but probably adverse state. I suggest here that we regard the Earth as a physiological system and consider amelioration techniques, geoengineering, as comparable to nineteenth century medicine.
  • Organisms change their world locally for purely personal selfish reasons; if the advantage conferred by the ‘engineering’ is sufficiently favourable, it allows them and their environment to expand until dominant on a planetary scale.
  • Our use of fires as a biocide to clear land of natural forests and replace them with farmland was our second act of geoengineering; together these acts have led the Earth to evolve to its current state. As a consequence, most of us are now urban and our environment is an artefact of engineering.
  • ...7 more annotations...
  • Physical means of amelioration, such as changing the planetary albedo, are the subject of other papers of this theme issue and I thought it would be useful here to describe physiological methods for geoengineering. These include tree planting, the fertilization of ocean algal ecosystems with iron, the direct synthesis of food from inorganic raw materials and the production of biofuels.
  • Tree planting would seem to be a sensible way to remove CO2 naturally from the air, at least for the time it takes for the tree to reach maturity. But in practice the clearance of forests for farm land and biofuels is now proceeding so rapidly that there is little chance that tree planting could keep pace.
  • Oceans cover over 70 per cent of the Earth's surface and are uninhabited by humans. In addition, most of the ocean surface waters carry only a sparse population of photosynthetic organisms, mainly because the mineral and other nutrients in the water below the thermocline do not readily mix with the warmer surface layer. Some essential nutrients such as iron are present in suboptimal abundance even where other nutrients are present and this led to the suggestion by John Martin in a lecture in 1991 that fertilization with the trace nutrient iron would allow algal blooms to develop that would cool the Earth by removing CO2
  • The Earth system is dynamically stable but with strong feedbacks. Its behaviour resembles more the physiology of a living organism than that of the equilibrium box models of the last century
  • For almost all other ailments, there was nothing available but nostrums and comforting words. At that time, despite a well-founded science of physiology, we were still ignorant about the human body or the host–parasite relationship it had with other organisms. Wise physicians knew that letting nature take its course without intervention would often allow natural self-regulation to make the cure. They were not averse to claiming credit for their skill when this happened.
  • The alternative is the acceptance of a massive natural cull of humanity and a return to an Earth that freely regulates itself but in the hot state.
  • Global heating would not have happened but for the rapid expansion in numbers and wealth of humanity. Had we heeded Malthus's warning and kept the human population to less than one billion, we would not now be facing a torrid future. Whether or not we go for Bali or use geoengineering, the planet is likely, massively and cruelly, to cull us, in the same merciless way that we have eliminated so many species by changing their environment into one where survival is difficult.
  •  
    A geophysiologist's thoughts on geoengineering
Weiye Loh

Julian Baggini: If science has not actually killed God, it has rendered Him unrecognisa... - 0 views

  • If top scientists such as John Polkinghorne and Bernard d'Espagnat believe in God, that challenges the simplistic claim that science and religion are completely incompatible. It doesn't hurt that this message is being pushed with the help of the enormous wealth of the Templeton Foundation, which funds innumerable research programmes, conferences, seminars and prizes as a kind of marriage guidance service to religion and science.
  • why on earth should physicists hold this exalted place in the theological firmament?
  • it can almost be reduced to a linguistic mistake: thinking that because both physicists and theologians study fundamental forces of some kind, they must study fundamental forces of the same kind.
  • ...9 more annotations...
  • If, as Sacks argues, science is about the how and religion the why, then scientists are not authorities on religion at all. Hawking's opinions about God would carry no more weight than his taxi driver's. Believers and atheists should remove physicists from the front line and send in the philosophers and theologians as cannon fodder once again.
  • But is Sacks right? Science certainly trails a destructive path through a lot of what has traditionally passed for religion. People accuse Richard Dawkins of attacking a baby version of religion, but the fact is that there are still millions of people who do believe in the literal truth of Genesis, Noah's Ark and all. Clearly science does destroy this kind of religious faith, totally and mercilessly. Scientists are authorities on religion when they declare the earth is considerably more than 6,000 years old.
  • But they insist that religion is no longer, if it ever was, in the business of trying to come up with proto-scientific explanations of how the universe works. If that is accepted, science and religion can make their peace and both rule over their different magisteria, as the biologist Stephen Jay Gould put it.
  • People have been making a lot in the past few days of Hawking's famous sentence in A Brief History of Time: "If we discover a complete theory, it would be a triumph of human reason – for then we should know the mind of God."
  • Hawking's "mind of God" was never anything more than a metaphor for an understanding of the universe which is complete and objective. Indeed, it has been evident for some time that Hawking does not believe in anything like the traditional God of religion. "You can call the laws of science 'God' if you like," he told Channel 4 earlier this year, "but it wouldn't be a personal God that you could meet, and ask questions."
  • there is no room in the universe of Hawking or most other scientists for the activist God of the Bible. That's why so few leading scientists are religious in any traditional sense.
  • This point is often overlooked by apologists who grasp at any straw science will hold out for them. Such desperate clinging happened, disgracefully, in the last years of the philosopher Antony Flew's life. A famous atheist, Flew was said to have changed his mind, persuaded that the best explanation for the "fine-tuning" of the universe – the very precise way that its conditions make life possible – was some kind of intentional design. But what was glossed over was that he was very clear that this designer was nothing like the traditional God of the Abrahamic faiths. It was, he clearly said, rather the deist God, or the God of Aristotle, one who might set the ball rolling but then did no more than watch it trundle off over the horizon. This is no mere quibble. The deist God does not occupy some halfway house between atheism and theism. Replace Yahweh with the deist God and the Bible would make less sense than if you'd substituted Brian for Jesus.
  • it is not true that science challenges only the most primitive, literal forms of religion. It is probably going too far to say that science makes the God of Christianity, Judaism and Islam impossible, but it certainly makes him very unlikely indeed.
  • to think that their findings, and those of other scientists, have nothing to say about the credibility of religious faith is just wishful thinking. In the scientific universe, God is squeezed until his pips squeak. If he survives, then he can't do so without changing his form. Only faith makes it possible to look at such a distorted, scientifically respectable deity and claim to recognise the same chap depicted on the ceiling of the Sistine Chapel. For those without faith, that God is clearly dead, and, yes, science helped to kill him.
  •  
    Julian Baggini: If science has not actually killed God, it has rendered Him unrecognisable There is no room in the universe of Hawking or most other scientists for the activist God of the Bible
Weiye Loh

Arsenic bacteria - a post-mortem, a review, and some navel-gazing | Not Exactly Rocket ... - 0 views

  • It was the big news that wasn’t. Hyperbolic claims about the possible discovery of alien life, or a second branch of life on Earth, turned out to be nothing more than bacteria that can thrive on arsenic, using it in place of phosphorus in their DNA and other molecules. But after the initial layers of hype were peeled away, even this extraordinar…
  • This is a chronological roundup of the criticism against the science in the paper itself, ending with some personal reflections on my own handling of the story (skip to Friday, December 10th for that bit).
  • Thursday, December 2nd: Felisa Wolfe-Simon published a paper in Science, claiming to have found bacteria in California’s Mono Lake that can grow using arsenic instead of phosphorus. Given that phosphorus is meant to be one of six irreplaceable elements, this would have been a big deal, not least because the bacteria apparently used arsenic to build the backbones of their DNA molecules.
  • ...14 more annotations...
  • In my post, I mentioned some caveats. Wolfe-Simon isolated the arsenic-loving strain, known as GFAJ-1, by growing Mono Lake bacteria in ever-increasing concentrations of arsenic while diluting out the phosphorus. It is possible that the bacteria’s arsenic molecules were an adaptation to the harsh environments within the experiment, rather than Mono Lake itself. More importantly, there were still detectable levels of phosphorus left in the cells at the end of the experiment, although Wolfe-Simon claimed that the bacteria shouldn’t have been able to grow on such small amounts.
  • signs emerged that NASA weren’t going to engage with the criticisms. Dwayne Brown, their senior public affairs officer, highlighted the fact that the paper was published in one of the “most prestigious scientific journals” and deemed it inappropriate to debate the science using the same media and bloggers who they relied on for press coverage of the science. Wolfe-Simon herself tweeted that “discussion about scientific details MUST be within a scientific venue so that we can come back to the public with a unified understanding.”
  • Jonathan Eisen says that “they carried out science by press release and press conference” and “are now hypocritical if they say that the only response should be in the scientific literature.” David Dobbs calls the attitude “a return to pre-Enlightenment thinking”, and rightly noted that “Rosie Redfield is a peer, and her blog is peer review”.
  • Chris Rowan agreed, saying that what happens after publication is what he considers to be “real peer review”. Rowan said, “The pre-publication stuff is just a quality filter, a check that the paper is not obviously wrong – and an imperfect filter at that. The real test is what happens in the months and years after publication.”Grant Jacobs and others post similar thoughts, while Nature and the Columbia Journalism Review both cover the fracas.
  • Jack Gilbert at the University of Chicago said that impatient though he is, peer-reviewed journals are the proper forum for criticism. Others were not so kind. At the Guardian, Martin Robbins says that “at almost every stage of this story the actors involved were collapsing under the weight of their own slavish obedience to a fundamentally broken… well… ’system’” And Ivan Oransky noted that NASA failed to follow its own code of conduct when announcing the study.
  • Dr Isis said, “If question remains about the veracity of these authors' findings, then the only thing that is going to answer that doubt is data.  Data cannot be generated by blog discussion… Talking about digging a ditch never got it dug.”
  • it is astonishing how quickly these events unfolded and the sheer number of bloggers and media outlets that became involved in the criticism. This is indeed a brave new world, and one in which we are all the infamous Third Reviewer.
  • I tried to quell the hype around the study as best I could. I had the paper, and I think that what I wrote was a fair representation of it. But, of course, that's not necessarily enough. I've argued before that journalists should not be merely messengers – we should make the best possible efforts to cut through what's being said in an attempt to uncover what's actually true. Arguably, that didn't happen here, although to clarify, I am not saying that the paper is rubbish or untrue. Despite the criticisms, I want to see the authors respond in a thorough way, or to see another lab attempt to replicate the experiments, before jumping to conclusions.
  • the sheer amount of negative comment indicates that I could have been more critical of the paper in my piece. Others have been supportive in suggesting that this was more egg on the face of the peer reviewers; indeed, several practicing scientists took the findings at face value, speculating about everything from the implications for chemotherapy to whether the bacteria have special viruses. The counter-argument, which I have no good retort to, is that peer review is no guarantee of quality, and that writers should be able to see through the fog of whatever topic they write about.
  • my response was that we should expect people to make reasonable efforts to uncover the truth and be skeptical, while appreciating that people can and will make mistakes.
  • it comes down to this: did I do enough? I was certainly cautious. I said that "there is room for doubt" and I brought up the fact that the arsenic-loving bacteria still contain measurable levels of phosphorus. But I didn't run the paper past other sources for comment, which I typically do for stories that contain extraordinary claims. There was certainly plenty of time to do so here, and while there were various reasons that I didn't, the bottom line is that I could have done more. That doesn't always help, of course, but it was an important missed step. A lesson for next time.
  • I do believe that if you're going to try to hold your profession to a higher standard, you have to be honest and open when you've made mistakes yourself. I also think that if you cover a story that turns out to be a bit dodgy, you have a certain responsibility in covering the follow-up.
  • A basic problem here is the embargo. Specifically, journalists get early access, while peers – other specialists in the field – do not. It means that the journalist, like yourself, can rely only on the original authors, with no way of getting other views on the findings. And it means that peers can't write about the paper when the journalists do (who, inevitably, produce positive-only coverage due to the lack of other viewpoints), but can voice their views only after they've been able to digest the paper and formulate a response.
  • No, that's not true. The embargo doesn't preclude journalists from sending papers out to other researchers for review and comment. I do this a lot, and I have been critical of new papers as a result, but that's the step that I missed for this story.