
Home/ New Media Ethics 2009 course/ Group items matching "England" in title, tags, annotations or url

Jianwei Tan

Banksy, Vandalism & Copyright - 3 views

http://peteashton.com/2006/10/infringing_the_bankster/ I came across this story some time back when Banksy's works were more popular. Banksy is the handle used by an anonymous graffiti artist in E...

Banksy Graffiti Vandalism Copyright Art England

started by Jianwei Tan on 25 Aug 09 no follow-up yet
Weiye Loh

BBC News - Muslim challenge to tuition fee interest charges - 0 views

  • Repayments will be structured so that higher-earning graduates pay higher interest rates, up to 3% above inflation. Only those earning below £21,000 will continue to pay an effective zero rate of interest.
  • There are concerns that such interest charges are against Muslim teaching on finance and will prevent young Muslims from getting the finance needed to go to university.
  • "Many Muslim students are averse to interest due to teachings in the Islamic faith - such interest derails accessibility to higher education," says Nabil Ahmed, president of the FOSIS student group.
  • Mr Ahmed says there is a wider principle about the raising of interest rates and increasing debt for students, which he describes as "unethical". "People are already drowning in debt," he says. "We don't want people to be priced out of university."
  • Mr Ahmed highlighted how this debt would stretch across generations. Many students will be in their fifties when they finish paying for their degree courses - at which point they might then be expected to support their own children at university.
  • Muslim student leaders say changes to tuition fees in England could breach Islamic rules on finance, which do not permit interest charges.
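The repayment structure described in the annotations above can be sketched as a small function. This is a hypothetical illustration only: the article gives the £21,000 zero-interest threshold and the 3%-above-inflation ceiling, but the upper taper point and the linear shape between the two are assumptions, not details from the article.

```python
LOWER = 21_000          # below this: effective zero real interest (from the article)
UPPER = 41_000          # assumed taper point (hypothetical, not in the article)
MAX_REAL_RATE = 0.03    # up to 3% above inflation (from the article)

def real_interest_rate(income):
    """Real (above-inflation) interest rate as a function of income."""
    if income <= LOWER:
        return 0.0
    if income >= UPPER:
        return MAX_REAL_RATE
    # linear taper between the two thresholds (assumed shape)
    return MAX_REAL_RATE * (income - LOWER) / (UPPER - LOWER)
```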
Weiye Loh

Climate change and extreme flooding linked by new evidence | George Monbiot | Environment | guardian.co.uk - 0 views

  • Two studies suggest for the first time a clear link between global warming and extreme precipitation
  • There's a sound rule for reporting weather events that may be related to climate change. You can't say that a particular heatwave or a particular downpour – or even a particular freeze – was definitely caused by human emissions of greenhouse gases. But you can say whether these events are consistent with predictions, or that their likelihood rises or falls in a warming world.
  • Weather is a complex system. Long-running trends, natural fluctuations and random patterns are fed into the global weather machine, and it spews out a series of events. All these events will be influenced to some degree by global temperatures, but it's impossible to say with certainty that any of them would not have happened in the absence of man-made global warming.
  • over time, as the data build up, we begin to see trends which suggest that rising temperatures are making a particular kind of weather more likely to occur. One such trend has now become clearer. Two new papers, published by Nature, should make us sit up, as they suggest for the first time a clear link between global warming and extreme precipitation (precipitation means water falling out of the sky in any form: rain, hail or snow).
  • We still can't say that any given weather event is definitely caused by man-made global warming. But we can say, with an even higher degree of confidence than before, that climate change makes extreme events more likely to happen.
  • One paper, by Seung-Ki Min and others, shows that rising concentrations of greenhouse gases in the atmosphere have caused an intensification of heavy rainfall events over some two-thirds of the weather stations on land in the northern hemisphere. The climate models appear to have underestimated the contribution of global warming on extreme rainfall: it's worse than we thought it would be.
  • The other paper, by Pardeep Pall and others, shows that man-made global warming is very likely to have increased the probability of severe flooding in England and Wales, and could well have been behind the extreme events in 2000. The researchers ran thousands of simulations of the weather in autumn 2000 (using idle time on computers made available by a network of volunteers) with and without the temperature rises caused by man-made global warming. They found that, in nine out of 10 cases, man-made greenhouse gases increased the risks of flooding. This is probably as solid a signal as simulations can produce, and it gives us a clear warning that more global heating is likely to cause more floods here.
  • As Richard Allan points out, also in Nature, the warmer the atmosphere is, the more water vapour it can carry. There's even a formula which quantifies this: 6-7% more moisture in the air for every degree of warming near the Earth's surface. But both models and observations also show changes in the distribution of rainfall, with moisture concentrating in some parts of the world and fleeing from others: climate change is likely to produce both more floods and more droughts.
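The Clausius-Clapeyron figure quoted by Richard Allan (6-7% more moisture per degree of near-surface warming) can be compounded over larger temperature rises. A minimal sketch, assuming the 7% upper end of the quoted range:

```python
def moisture_increase(warming_deg_c, rate_per_deg=0.07):
    """Fractional increase in atmospheric water vapour for a given warming,
    compounding the quoted per-degree rate."""
    return (1 + rate_per_deg) ** warming_deg_c - 1

# 2 degrees of warming at 7%/degree compounds to about 14.5% more moisture
print(f"{moisture_increase(2):.1%}")
```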
Weiye Loh

New voting methods and fair elections : The New Yorker - 0 views

  • history of voting math comes mainly in two chunks: the period of the French Revolution, when some members of France’s Academy of Sciences tried to deduce a rational way of conducting elections, and the nineteen-fifties onward, when economists and game theorists set out to show that this was impossible
  • The first mathematical account of vote-splitting was given by Jean-Charles de Borda, a French mathematician and a naval hero of the American Revolutionary War. Borda concocted examples in which one knows the order in which each voter would rank the candidates in an election, and then showed how easily the will of the majority could be frustrated in an ordinary vote. Borda’s main suggestion was to require voters to rank candidates, rather than just choose one favorite, so that a winner could be calculated by counting points awarded according to the rankings. The key idea was to find a way of taking lower preferences, as well as first preferences, into account. Unfortunately, this method may fail to elect the majority’s favorite—it could, in theory, elect someone who was nobody’s favorite. It is also easy to manipulate by strategic voting.
  • If the candidate who is your second preference is a strong challenger to your first preference, you may be able to help your favorite by putting the challenger last. Borda’s response was to say that his system was intended only for honest men.
  • After the Academy dropped Borda’s method, it plumped for a simple suggestion by the astronomer and mathematician Pierre-Simon Laplace, who was an important contributor to the theory of probability. Laplace’s rule insisted on an over-all majority: at least half the votes plus one. If no candidate achieved this, nobody was elected to the Academy.
  • Another early advocate of proportional representation was John Stuart Mill, who, in 1861, wrote about the critical distinction between “government of the whole people by the whole people, equally represented,” which was the ideal, and “government of the whole people by a mere majority of the people exclusively represented,” which is what winner-takes-all elections produce. (The minority that Mill was most concerned to protect was the “superior intellects and characters,” who he feared would be swamped as more citizens got the vote.)
  • The key to proportional representation is to enlarge constituencies so that more than one winner is elected in each, and then try to align the share of seats won by a party with the share of votes it receives. These days, a few small countries, including Israel and the Netherlands, treat their entire populations as single constituencies, and thereby get almost perfectly proportional representation. Some places require a party to cross a certain threshold of votes before it gets any seats, in order to filter out extremists.
  • The main criticisms of proportional representation are that it can lead to unstable coalition governments, because more parties are successful in elections, and that it can weaken the local ties between electors and their representatives. Conveniently for its critics, and for its defenders, there are so many flavors of proportional representation around the globe that you can usually find an example of whatever point you want to make. Still, more than three-quarters of the world’s rich countries seem to manage with such schemes.
  • The alternative voting method that will be put to a referendum in Britain is not proportional representation: it would elect a single winner in each constituency, and thus steer clear of what foreigners put up with. Known in the United States as instant-runoff voting, the method was developed around 1870 by William Ware
  • In instant-runoff elections, voters rank all or some of the candidates in order of preference, and votes may be transferred between candidates. The idea is that your vote may count even if your favorite loses. If any candidate gets more than half of all the first-preference votes, he or she wins, and the game is over. But, if there is no majority winner, the candidate with the fewest first-preference votes is eliminated. Then the second-preference votes of his or her supporters are distributed to the other candidates. If there is still nobody with more than half the votes, another candidate is eliminated, and the process is repeated until either someone has a majority or there are only two candidates left, in which case the one with the most votes wins. Third, fourth, and lower preferences will be redistributed if a voter’s higher preferences have already been transferred to candidates who were eliminated earlier.
  • At first glance, this is an appealing approach: it is guaranteed to produce a clear winner, and more voters will have a say in the election’s outcome. Look more closely, though, and you start to see how peculiar the logic behind it is. Although more people’s votes contribute to the result, they do so in strange ways. Some people’s second, third, or even lower preferences count for as much as other people’s first preferences. If you back the loser of the first tally, then in the subsequent tallies your second (and maybe lower) preferences will be added to that candidate’s first preferences. The winner’s pile of votes may well be a jumble of first, second, and third preferences.
  • Such transferrable-vote elections can behave in topsy-turvy ways: they are what mathematicians call “non-monotonic,” which means that something can go up when it should go down, or vice versa. Whether a candidate who gets through the first round of counting will ultimately be elected may depend on which of his rivals he has to face in subsequent rounds, and some votes for a weaker challenger may do a candidate more good than a vote for that candidate himself. In short, a candidate may lose if certain voters back him, and would have won if they hadn’t. Supporters of instant-runoff voting say that the problem is much too rare to worry about in real elections, but recent work by Robert Norman, a mathematician at Dartmouth, suggests otherwise. By Norman’s calculations, it would happen in one in five close contests among three candidates who each have between twenty-five and forty per cent of first-preference votes. With larger numbers of candidates, it would happen even more often. It’s rarely possible to tell whether past instant-runoff elections have gone topsy-turvy in this way, because full ballot data aren’t usually published. But, in Burlington’s 2006 and 2009 mayoral elections, the data were published, and the 2009 election did go topsy-turvy.
  • Kenneth Arrow, an economist at Stanford, examined a set of requirements that you’d think any reasonable voting system could satisfy, and proved that nothing can meet them all when there are more than two candidates. So designing elections is always a matter of choosing a lesser evil. When the Royal Swedish Academy of Sciences awarded Arrow a Nobel Prize, in 1972, it called his result “a rather discouraging one, as regards the dream of a perfect democracy.” Szpiro goes so far as to write that “the democratic world would never be the same again,
  • There is something of a loophole in Arrow’s demonstration. His proof applies only when voters rank candidates; it would not apply if, instead, they rated candidates by giving them grades. First-past-the-post voting is, in effect, a crude ranking method in which voters put one candidate in first place and everyone else last. Similarly, in the standard forms of proportional representation voters rank one party or group of candidates first, and all other parties and candidates last. With rating methods, on the other hand, voters would give all or some candidates a score, to say how much they like them. They would not have to say which is their favorite—though they could in effect do so, by giving only him or her their highest score—and they would not have to decide on an order of preference for the other candidates.
  • One such method is widely used on the Internet—to rate restaurants, movies, books, or other people’s comments or reviews, for example. You give numbers of stars or points to mark how much you like something. To convert this into an election method, count each candidate’s stars or points, and the winner is the one with the highest average score (or the highest total score, if voters are allowed to leave some candidates unrated). This is known as range voting, and it goes back to an idea considered by Laplace at the start of the nineteenth century. It also resembles ancient forms of acclamation in Sparta. The more you like something, the louder you bash your shield with your spear, and the biggest noise wins. A recent variant, developed by two mathematicians in Paris, Michel Balinski and Rida Laraki, uses familiar language rather than numbers for its rating scale. Voters are asked to grade each candidate as, for example, “Excellent,” “Very Good,” “Good,” “Insufficient,” or “Bad.” Judging politicians thus becomes like judging wines, except that you can drive afterward.
  • Range and approval voting deal neatly with the problem of vote-splitting: if a voter likes Nader best, and would rather have Gore than Bush, he or she can approve Nader and Gore but not Bush. Above all, their advocates say, both schemes give voters more options, and would elect the candidate with the most over-all support, rather than the one preferred by the largest minority. Both can be modified to deliver forms of proportional representation.
  • Whether such ideas can work depends on how people use them. If enough people are carelessly generous with their approval votes, for example, there could be some nasty surprises. In an unlikely set of circumstances, the candidate who is the favorite of more than half the voters could lose. Parties in an approval election might spend less time attacking their opponents, in order to pick up positive ratings from rivals’ supporters, and critics worry that it would favor bland politicians who don’t stand for anything much. Defenders insist that such a strategy would backfire in subsequent elections, if not before, and the case of Ronald Reagan suggests that broad appeal and strong views aren’t mutually exclusive.
  • Why are the effects of an unfamiliar electoral system so hard to puzzle out in advance? One reason is that political parties will change their campaign strategies, and voters the way they vote, to adapt to the new rules, and such variables put us in the realm of behavior and culture. Meanwhile, the technical debate about electoral systems generally takes place in a vacuum from which voters’ capriciousness and local circumstances have been pumped out. Although almost any alternative voting scheme now on offer is likely to be better than first past the post, it’s unrealistic to think that one voting method would work equally well for, say, the legislature of a young African republic, the Presidency of an island in Oceania, the school board of a New England town, and the assembly of a country still scarred by civil war. If winner takes all is a poor electoral system, one size fits all is a poor way to pick its replacements.
  • Mathematics can suggest what approaches are worth trying, but it can’t reveal what will suit a particular place, and best deliver what we want from a democratic voting system: to create a government that feels legitimate to people—to reconcile people to being governed, and give them reason to feel that, win or lose (especially lose), the game is fair.
  • WIN OR LOSE: No voting system is flawless. But some are less democratic than others. By Anthony Gottlieb
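The instant-runoff procedure walked through in the annotations above (count first preferences, eliminate the weakest candidate, transfer ballots, repeat) can be sketched directly. This is a minimal illustration of the counting rule as the article describes it; real election rules add tie-breaking and exhausted-ballot provisions not modeled here.

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff voting: if no candidate holds a majority of first
    preferences, eliminate the candidate with the fewest first-preference
    votes and transfer those ballots to each voter's next surviving
    preference, until someone has a majority or only two candidates remain.
    Each ballot is a list of candidates, most preferred first."""
    ballots = [list(b) for b in ballots if b]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(tallies) <= 2:
            return leader
        loser = min(tallies, key=tallies.get)  # fewest first preferences
        ballots = [[c for c in b if c != loser] for b in ballots]
```

For example, with 8 ballots A>B>C, 6 ballots B>A>C and 5 ballots C>B>A, no one starts with a majority; C is eliminated, C's ballots transfer to B, and B wins with 11 of 19 votes even though A led the first count — the kind of transfer behavior the article describes.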
Weiye Loh

The Data-Driven Life - NYTimes.com - 0 views

  • Humans make errors. We make errors of fact and errors of judgment. We have blind spots in our field of vision and gaps in our stream of attention.
  • These weaknesses put us at a disadvantage. We make decisions with partial information. We are forced to steer by guesswork. We go with our gut.
  • Others use data.
  • Others use data. A timer running on Robin Barooah’s computer tells him that he has been living in the United States for 8 years, 2 months and 10 days. At various times in his life, Barooah — a 38-year-old self-employed software designer from England who now lives in Oakland, Calif. — has also made careful records of his work, his sleep and his diet.
  • A few months ago, Barooah began to wean himself from coffee. His method was precise. He made a large cup of coffee and removed 20 milliliters weekly. This went on for more than four months, until barely a sip remained in the cup. He drank it and called himself cured. Unlike his previous attempts to quit, this time there were no headaches, no extreme cravings. Still, he was tempted, and on Oct. 12 last year, while distracted at his desk, he told himself that he could probably concentrate better if he had a cup. Coffee may have been bad for his health, he thought, but perhaps it was good for his concentration. Barooah wasn’t about to try to answer a question like this with guesswork. He had a good data set that showed how many minutes he spent each day in focused work. With this, he could do an objective analysis. Barooah made a chart with dates on the bottom and his work time along the side. Running down the middle was a big black line labeled “Stopped drinking coffee.” On the left side of the line, low spikes and narrow columns. On the right side, high spikes and thick columns. The data had delivered their verdict, and coffee lost.
  • “People have such very poor sense of time,” Barooah says, and without good time calibration, it is much harder to see the consequences of your actions. If you want to replace the vagaries of intuition with something more reliable, you first need to gather data. Once you know the facts, you can live by them.
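The kind of before/after analysis Barooah ran on his focused-work log is easy to reproduce. The numbers and the cutoff date below are invented for illustration; only the method (compare daily focused minutes on either side of the "stopped drinking coffee" line) comes from the article.

```python
from datetime import date
from statistics import mean

cutoff = date(2009, 10, 12)  # hypothetical cutoff date, not from the article

log = {  # day -> minutes of focused work (invented numbers)
    date(2009, 10, 8): 95, date(2009, 10, 9): 110, date(2009, 10, 10): 80,
    date(2009, 10, 13): 150, date(2009, 10, 14): 170, date(2009, 10, 15): 160,
}

before = mean(m for d, m in log.items() if d < cutoff)
after = mean(m for d, m in log.items() if d >= cutoff)
print(f"before: {before:.0f} min/day, after: {after:.0f} min/day")
```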
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients who had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretations?
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • The prior basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.”
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
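The medical-diagnosis point above is easy to make concrete. Below is a minimal sketch of Bayes' theorem applied to a screening test; the prevalence, sensitivity and specificity figures are hypothetical, chosen only to show how strongly the prior (the disease's prevalence in the population) drives the answer:

```python
# Bayes' theorem for a diagnostic test. All numbers are hypothetical:
# a 1% prevalence (the prior), 99% sensitivity, 95% specificity.
def posterior(prior, sensitivity, specificity):
    # P(positive) = true positives + false positives
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    # P(disease | positive test) by Bayes' theorem
    return sensitivity * prior / p_positive

print(posterior(0.01, 0.99, 0.95))  # ~0.17: most positives are false alarms
```

Even with a 99% sensitive test, a positive result here means only about a one-in-six chance of disease, because the low prior dominates; this prior information is exactly what standard significance tests leave out.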
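The Avandia figures quoted above (55 vs. 59 heart attacks per 10,000) can be checked with a standard two-proportion z-test. This is a sketch only: the equal group sizes of 10,000 are an assumption made for illustration, since the pooled trials actually varied in size:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 55 vs. 59 events per 10,000 (group sizes assumed equal for illustration)
z = two_proportion_z(55, 10_000, 59, 10_000)
print(round(z, 2))  # well inside the +/-1.96 threshold for p < 0.05
```

On the raw rates the difference is nowhere near statistically significant, which is the article's point: the apparent increase in risk emerged only after further statistical manipulation.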
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
Weiye Loh

When Insurers Put Profits Before People - NYTimes.com - 0 views

  • Late in 2007
  • A 17-year-old girl named Nataline Sarkisyan was in desperate need of a transplant after receiving aggressive treatment that cured her recurrent leukemia but caused her liver to fail. Without a new organ, she would die in a matter of days; with one, she had a 65 percent chance of surviving. Her doctors placed her on the liver transplant waiting list.
  • She was critically ill, as close to death as one could possibly be while technically still alive, and her fate was inextricably linked to another’s. Somewhere, someone with a compatible organ had to die in time for Nataline to live.
  • ...9 more annotations...
  • But even when the perfect liver became available a few days after she was put on the list, doctors could not operate. What made Nataline different from most transplant patients, and what eventually brought her case to the attention of much of the country, was that her survival did not depend on the availability of an organ or her clinicians or even the quality of care she received. It rested on her health insurance company.
  • Cigna had denied the initial request to cover the costs of the liver transplant. And the insurer persisted in its refusal, claiming that the treatment was “experimental” and unproven, and despite numerous pleas from Nataline’s physicians to the contrary.
  • But as relatives and friends organized campaigns to draw public attention to Nataline’s plight, the insurance conglomerate found itself embroiled in a public relations nightmare, one that could jeopardize its very existence. The company reversed its decision. But the change came too late. Nataline died just a few hours after Cigna authorized the transplant.
  • Mr. Potter was the head of corporate communications at two major insurers, first at Humana and then at Cigna. Now Mr. Potter has written a fascinating book that details the methods he and his colleagues used to manipulate public opinion
  • Mr. Potter goes on to describe the myth-making he did, interspersing descriptions of front groups, paid spies and jiggered studies with a deft retelling of the convoluted (and usually eye-glazing) history of health care insurance policies.
  • We learn that executives at Cigna worried that Nataline’s situation would only add fire to the growing public discontent with a health care system anchored by private insurance. As the case drew more national attention, the threat of a legislative overhaul that would ban for-profit insurers became real, and Mr. Potter found himself working on the biggest P.R. campaign of his career.
  • Cigna hired a large international law firm and a P.R. firm already well known to them from previous work aimed at discrediting Michael Moore and his film “Sicko.” Together with Cigna, these outside firms waged a campaign that would eventually include the aggressive placement of articles with friendly “third party” reporters, editors and producers who would “disabuse the media, politicians and the public of the notion that Nataline would have gotten the transplant if she had lived in Canada or France or England or any other developed country.” A “spy” was dispatched to Nataline’s funeral; and when the Sarkisyan family filed a lawsuit against the insurer, a team of lawyers was assigned to keep track of actions and comments by the family’s lawyer.
  • In the end, however, Nataline’s death proved to be the final straw for Mr. Potter. “It became clearer to me than ever that I was part of an industry that would do whatever it took to perpetuate its extraordinarily profitable existence,” he writes. “I had sold my soul.” He left corporate public relations for good less than six months after her death.
  • “I don’t mean to imply that all people who work for health insurance companies are greedier or more evil than other Americans,” he writes. “In fact, many of them feel — and justifiably so — that they are helping millions of people get the care they need.” The real problem, he says, lies in the fact that the United States “has entrusted one of the most important societal functions, providing health care, to private health insurance companies.” Therefore, the top executives of these companies become beholden not to the patients they have pledged to cover, but to the shareholders who hold them responsible for the bottom line.
Weiye Loh

Arianna Huffington: The Media Gets It Wrong on WikiLeaks: It's About Broken Trust, Not Broken Condoms - 0 views

  • Too much of the coverage has been meta -- focusing on questions about whether the leaks were justified, while too little has dealt with the details of what has actually been revealed and what those revelations say about the wisdom of our ongoing effort in Afghanistan. There's a reason why the administration is so upset about these leaks.
  • True, there hasn't been one smoking-gun, bombshell revelation -- but that's certainly not to say the cables haven't been revealing. What there has been instead is more of the consistent drip, drip, drip of damning details we keep getting about the war.
  • It's notable that the latest leaks came out the same week President Obama went to Afghanistan for his surprise visit to the troops -- and made a speech about how we are "succeeding" and "making important progress" and bound to "prevail."
  • ...16 more annotations...
  • The WikiLeaks cables present quite a different picture. What emerges is one reality (the real one) colliding with another (the official one). We see smart, good-faith diplomats and foreign service personnel trying to make the truth on the ground match up to the one the administration has proclaimed to the public. The cables show the widening disconnect. It's like a foreign policy Ponzi scheme -- this one fueled not by the public's money, but the public's acquiescence.
  • The second aspect of the story -- the one that was the focus of the symposium -- is the changing relationship to government that technology has made possible.
  • Back in the year 2007, B.W. (Before WikiLeaks), Barack Obama waxed lyrical about government and the internet: "We have to use technology to open up our democracy. It's no coincidence that one of the most secretive administrations in our history has favored special interests and pursued policies that could not stand up to the sunlight."
  • Not long after the election, in announcing his "Transparency and Open Government" policy, the president proclaimed: "Transparency promotes accountability and provides information for citizens about what their Government is doing. Information maintained by the Federal Government is a national asset." Cut to a few years later. Now that he's defending a reality that doesn't match up to, well, reality, he's suddenly not so keen on the people having a chance to access this "national asset."
  • Even more wikironic are the statements by his Secretary of State who, less than a year ago, was lecturing other nations about the value of an unfettered and free internet. Given her description of the WikiLeaks as "an attack on America's foreign policy interests" that have put in danger "innocent people," her comments take on a whole different light. Some highlights: In authoritarian countries, information networks are helping people discover new facts and making governments more accountable... technologies with the potential to open up access to government and promote transparency can also be hijacked by governments to crush dissent and deny human rights... As in the dictatorships of the past, governments are targeting independent thinkers who use these tools. Now "making government accountable" is, as White House spokesman Robert Gibbs put it, a "reckless and dangerous action."
  • Jay Rosen, one of the participants in the symposium, wrote a brilliant essay entitled "From Judith Miller to Julian Assange." He writes: For the portion of the American press that still looks to Watergate and the Pentagon Papers for inspiration, and that considers itself a check on state power, the hour of its greatest humiliation can, I think, be located with some precision: it happened on Sunday, September 8, 2002. That was when the New York Times published Judith Miller and Michael Gordon's breathless, spoon-fed -- and ultimately inaccurate -- account of Iraqi attempts to buy aluminum tubes to produce fuel for a nuclear bomb.
  • Miller's after-the-facts-proved-wrong response, as quoted in a Michael Massing piece in the New York Review of Books, was: "My job isn't to assess the government's information and be an independent intelligence analyst myself. My job is to tell readers of The New York Times what the government thought about Iraq's arsenal." In other words, her job is to tell citizens what their government is saying, not, as Obama called for in his transparency initiative, what their government is doing.
  • As Jay Rosen put it: Today it is recognized at the Times and in the journalism world that Judy Miller was a bad actor who did a lot of damage and had to go. But it has never been recognized that secrecy was itself a bad actor in the events that led to the collapse, that it did a lot of damage, and parts of it might have to go. Our press has never come to terms with the ways in which it got itself on the wrong side of secrecy as the national security state swelled in size after September 11th.
  • And in the WikiLeaks case, much of media has again found itself on the wrong side of secrecy -- and so much of the reporting about WikiLeaks has served to obscure, to conflate, to mislead. For instance, how many stories have you heard or read about all the cables being "dumped" in "indiscriminate" ways with no attempt to "vet" and "redact" the stories first. In truth, only just over 1,200 of the 250,000 cables have been released, and WikiLeaks is now publishing only those cables vetted and redacted by their media partners, which includes the New York Times here and the Guardian in England.
  • The establishment media may be part of the media, but they're also part of the establishment. And they're circling the wagons. One method they're using, as Andrew Rasiej put it after the symposium, is to conflate the secrecy that governments use to operate and the secrecy that is used to hide the truth and allow governments to mislead us.
  • Nobody, including WikiLeaks, is promoting the idea that government should exist in total transparency,
  • Assange himself would not disagree. "Secrecy is important for many things," he told Time's Richard Stengel. "We keep secret the identity of our sources, as an example, take great pains to do it." At the same time, however, secrecy "shouldn't be used to cover up abuses."
  • Decentralizing government power, limiting it, and challenging it was the Founders' intent and these have always been core conservative principles. Conservatives should prefer an explosion of whistleblower groups like WikiLeaks to a federal government powerful enough to take them down. Government officials who now attack WikiLeaks don't fear national endangerment, they fear personal embarrassment. And while scores of conservatives have long promised to undermine or challenge the current monstrosity in Washington, D.C., it is now an organization not recognizably conservative that best undermines the political establishment and challenges its very foundations.
  • It is not, as Simon Jenkins put it in the Guardian, the job of the media to protect the powerful from embarrassment. As I said at the symposium, its job is to play the role of the little boy in The Emperor's New Clothes -- brave enough to point out what nobody else is willing to say.
  • When the press trades truth for access, it is WikiLeaks that acts like the little boy. "Power," wrote Jenkins, "loathes truth revealed. When the public interest is undermined by the lies and paranoia of power, it is disclosure that takes sanity by the scruff of its neck and sets it back on its feet."
  • A final aspect of the story is Julian Assange himself. Is he a visionary? Is he an anarchist? Is he a jerk? This is fun speculation, but why does it have an impact on the value of the WikiLeaks revelations?
Weiye Loh

Adventures in Flay-land: Scepticism versus Denialism - Delingpole Part II - 0 views

  • I wrote a piece about James Delingpole's unfortunate appearance on the BBC program Horizon on Monday. In that piece I referred to one of his own Telegraph articles in which he criticizes renowned sceptic Dr Ben Goldacre for betraying the principles of scepticism in his regard of the climate change debate. That article turns out to be rather instructional as it highlights perfectly the difference between real scepticism and the false scepticism commonly described as denialism.
  • It appears that James has tremendous respect for Ben Goldacre, who is a qualified medical doctor and has written a best-selling book about science scepticism called Bad Science and continues to write a popular Guardian science column. Here's what Delingpole has to say about Dr Goldacre: Many of Goldacre’s campaigns I support. I like and admire what he does. But where I don’t respect him one jot is in his views on ‘Climate Change,’ for they jar so very obviously with his supposed stance of determined scepticism in the face of establishment lies.
  • Scepticism is not some sort of rebellion against the establishment as Delingpole claims. It is not in itself an ideology. It is merely an approach to evaluating new information. There are varying definitions of scepticism, but Goldacre's variety goes like this: A sceptic does not support or promote any new theory until it is proven to his or her satisfaction that the new theory is the best available. Evidence is examined and accepted or discarded depending on its persuasiveness and reliability. Sceptics like Ben Goldacre have a deep appreciation for the scientific method of testing a hypothesis through experimentation and are generally happy to change their minds when the evidence supports the opposing view. Sceptics are not true believers, but they search for the truth. Far from challenging the established scientific consensus, Goldacre in Bad Science typically defends the scientific consensus against alternative medical views that fall back on untestable positions. In science the consensus is sometimes proven wrong, and while this process is imperfect it eventually results in the old consensus being replaced with a new one.
  • ...11 more annotations...
  • So the question becomes "what is denialism?" Denialism is a mindset that chooses to deny reality in order to avoid an uncomfortable truth. Denialism creates a false sense of truth through the subjective selection of evidence (cherry picking). Unhelpful evidence is rejected and excuses are made, while supporting evidence is accepted uncritically - its meaning and importance exaggerated. It is a common feature of denialism to claim the existence of some sort of powerful conspiracy to suppress the truth. Rejection by the mainstream of some piece of evidence supporting the denialist view, no matter how flawed, is taken as further proof of the supposed conspiracy. In this way the denialist always has a fallback position.
  • Delingpole makes the following claim: Whether Goldacre chooses to ignore it or not, there are many, many hugely talented, intelligent men and women out there – from mining engineer turned Hockey-Stick-breaker Steve McIntyre and economist Ross McKitrick to bloggers Donna LaFramboise and Jo Nova to physicist Richard Lindzen….and I really could go on and on – who have amassed a body of hugely powerful evidence to show that the AGW meme which has spread like a virus around the world these last 20 years is seriously flawed.
  • So he mentions a bunch of people who are intelligent and talented and have amassed evidence to the effect that the consensus of AGW (Anthropogenic Global Warming) is a myth. Should I take his word for it? No. I am a sceptic. I will examine the evidence and the people behind it.
  • McIntyre and McKitrick (MM) claim that global temperatures are not accelerating. The claims have, however, been roundly disproved as explained here. It is worth noting at this point that neither man is a climate scientist. McKitrick is an economist and McIntyre is a mining industry policy analyst. It is clear from the very detailed rebuttal article that McIntyre and McKitrick have no qualifications to critique the earlier paper and betray fundamental misunderstandings of the methodologies employed in that study.
  • This Wikipedia article explains in layman's terms how the MM claims are faulty.
  • It is difficult for me to find out much about blogger Donna LaFramboise. As far as I can see she runs her own blog at http://nofrakkingconsensus.wordpress.com and is the founder of another site here http://www.noconsensus.org/. It's not very clear to me what her credentials are.
  • She seems to be a critic of the so-called climate bible, a comprehensive report by the UN Intergovernmental Panel on Climate Change (IPCC)
  • I am familiar with some of the criticisms of this panel. Working Group 2 famously overstated the estimated rate of disappearance of the Himalayan glacier in 2007 and was forced to admit the error. Working Group 2 is a panel of biologists and sociologists whose job is to evaluate the impact of climate change. These people are not climate scientists. Their report takes for granted the scientific basis of climate change, which has been delivered by Working Group 1 (the climate scientists). The science revealed by Working Group 1 is regarded as sound (of course this is just a conspiracy, right?). At any rate, I don't know why I should pay attention to this blogger. Anyone can write a blog and anyone with money can own a domain. She may be intelligent, but I don't know anything about her and with all the millions of blogs out there I'm not convinced hers is of any special significance.
  • Richard Lindzen. Okay, there's information about this guy. He has a wiki page, which is more than I can say for the previous two. He is an atmospheric physicist and Professor of Meteorology at MIT.
  • According to Wikipedia, it would seem that Lindzen is well respected in his field and represents the 3% of the climate science community who disagree with the 97% consensus.
  • The second to last paragraph of Delingpole's article asks this: If Goldacre really wants to stick his neck out, why doesn’t he try arguing against a rich, powerful, bullying Climate-Change establishment which includes all three British main political parties, the National Academy of Sciences, the Royal Society, the Prince of Wales, the Prime Minister, the President of the USA, the EU, the UN, most schools and universities, the BBC, most of the print media, the Australian Government, the New Zealand Government, CNBC, ABC, the New York Times, Goldman Sachs, Deutsche Bank, most of the rest of the City, the wind farm industry, all the Big Oil companies, any number of rich charitable foundations, the Church of England and so on? I hope Ben won't mind if I take this one for him (first of all, Big Oil companies? Are you serious?) The answer is a question and the question is "Where is your evidence?"
Weiye Loh

News Clips: Pinning down acupuncture: It's a placebo - 0 views

  • Some doctors seem to have embraced even disproven remedies. Take, for instance, a review of acupuncture research that appeared last July in the New England Journal of Medicine. This highly respected journal is one of the most widely read by doctors across specialities. In Acupuncture For Chronic Low Back Pain, the authors reviewed clinical trials done to assess if acupuncture actually helps in chronic low back pain. The most important meta-analysis available was a 2008 study involving 6,359 patients, which 'showed that real acupuncture treatments were no more effective than sham acupuncture treatments'.
  • The authors then editorialised: 'There was nevertheless evidence that both real acupuncture and sham acupuncture were more effective than no treatment and that acupuncture can be a useful supplement to other forms of conventional therapy for low back pain.'
  • First, they admit that pooled clinical trials of the best sort show that real acupuncture does no better than sham acupuncture. This should mean that acupuncture does not work - full stop. But then they say that both sham and real acupuncture work as well as the other and thus is useful. Translation: Please use acupuncture as a placebo on your patients; just don't let them know it is a placebo.
  • ...6 more annotations...
  • I should add that I am not criticising TCM per se. Only acupuncture, a facet of TCM, albeit its most dramatic, is being scrutinised here. Chinese herbology must be analysed on its own merits. Interestingly, although acupuncture may be TCM's poster boy today, the Chinese physician in days of yore would have looked askance at it. Instead, his practice and prestige were based upon his grasp of the Chinese pharmacopoeia.
  • Acupuncture was left to the shamans and bloodletters. After all, it was grounded not in the knowledge of which herbs were best for what conditions, but in astrology.
  • In Giovanni Maciocia's 2005 book, The Foundations Of Chinese Medicine: A Comprehensive Text For Acupuncturists And Herbalists, there is a chart showing the astrological provenance of acupuncture. The chart shows how the 12 main acupuncture meridians and the 12 main body segments correspond to the 12 Houses of the Chinese zodiac.
  • In Chinese cosmology, all life is animated by a numinous force called qi, the flow of which mirrors the sun's apparent 'movement' during the year through the ecliptic. (The ecliptic is the imaginary plane of the earth's orbit around the sun). Moreover, everything in the Chinese zodiac is mirrored on Earth and in Man. This was taught even in the earliest systematised TCM text, the Yellow Emperor's Canon Of Medicine, thus: 'Heaven is covered with constellations, Earth with waterways, and man with channels.' This 'as above, so below' doctrine means that if there is qi flowing around in the imaginary closed loop of the zodiac, there is qi flowing correspondingly in the body's closed loop of imaginary meridians as well.
  • Note that not only is acupuncture astrological in origin but also the astrology is based on a model of the universe which has the earth at its centre. This geocentric model was an erroneous idea widely accepted before the Copernican revolution.
  • So should doctors check the daily horoscopes of their patients?
Weiye Loh

Roger Pielke Jr.'s Blog: Flood Disasters and Human-Caused Climate Change - 0 views

  • [UPDATE: Gavin Schmidt at Real Climate has a post on this subject that  -- surprise, surprise -- is perfectly consonant with what I write below.] [UPDATE 2: Andy Revkin has a great post on the representations of the precipitation paper discussed below by scientists and related coverage by the media.]  
  • Nature published two papers yesterday that discuss increasing precipitation trends and a 2000 flood in the UK.  I have been asked by many people whether these papers mean that we can now attribute some fraction of the global trend in disaster losses to greenhouse gas emissions, or even recent disasters such as in Pakistan and Australia.
  • I hate to pour cold water on a really good media frenzy, but the answer is "no."  Neither paper actually discusses global trends in disasters (one doesn't even discuss floods) or even individual events beyond a single flood event in the UK in 2000.  But still, can't we just connect the dots?  Isn't it just obvious?  And only deniers deny the obvious, right?
  • ...12 more annotations...
  • What seems obvious is sometimes just wrong.  This of course is why we actually do research.  So why is it that we shouldn't make what seems to be an obvious connection between these papers and recent disasters, as so many have already done?
  • First, the Min et al. paper seeks to identify a GHG signal in global precipitation over the period 1950-1999.  They focus on one-day and five-day measures of precipitation.  They do not discuss streamflow or damage.  For many years, an upwards trend in precipitation has been documented, and attributed to GHGs, even back to the 1990s (I co-authored a paper on precipitation and floods in 1999 that assumed a human influence on precipitation, PDF), so I am unsure what is actually new in this paper's conclusions.
  • However, accepting that precipitation has increased and can be attributed in some part to GHG emissions, there have not been shown corresponding increases in streamflow (floods)  or damage. How can this be?  Think of it like this -- Precipitation is to flood damage as wind is to windstorm damage.  It is not enough to say that it has become windier to make a connection to increased windstorm damage -- you need to show a specific increase in those specific wind events that actually cause damage. There are a lot of days that could be windier with no increase in damage; the same goes for precipitation.
  • My understanding of the literature on streamflow is that increases in peak streamflow commensurate with increases in precipitation have not been shown, and this is a robust finding across the literature.  For instance, one recent review concludes: Floods are of great concern in many areas of the world, with the last decade seeing major fluvial events in, for example, Asia, Europe and North America. This has focused attention on whether or not these are a result of a changing climate. River flows calculated from outputs from global models often suggest that high river flows will increase in a warmer, future climate. However, the future projections are not necessarily in tune with the records collected so far – the observational evidence is more ambiguous. A recent study of trends in long time series of annual maximum river flows at 195 gauging stations worldwide suggests that the majority of these flow records (70%) do not exhibit any statistically significant trends. Trends in the remaining records are almost evenly split between having a positive and a negative direction.
  • Absent an increase in peak streamflows, it is impossible to connect the dots between increasing precipitation and increasing floods.  There are of course good reasons why a linkage between increasing precipitation and peak streamflow would be difficult to make, such as the seasonality of the increase in rain or snow, the large variability of flooding and the human influence on river systems.  Those difficulties of course translate directly to a difficulty in connecting the effects of increasing GHGs to flood disasters.
  • Second, the Pall et al. paper seeks to quantify the increased risk of a specific flood event in the UK in 2000 due to greenhouse gas emissions.  It applies a methodology that was previously used with respect to the 2003 European heatwave. Taking the paper at face value, it clearly states that in England and Wales, there has not been an increasing trend in precipitation or floods.  Thus, floods in this region are not a contributor to the global increase in disaster costs.  Further, there has been no increase in Europe in normalized flood losses (PDF).  Thus, the Pall et al. paper is focused on attribution of a single event, not trend detection in the region it examines, much less any broader context.
  • More generally, the paper utilizes a seasonal forecast model to assess risk probabilities.  Given the performance of seasonal forecast models in actual prediction mode, I would expect many scientists to remain skeptical of this approach to attribution. Of course, if this group can show an improvement in the skill of actual seasonal forecasts by using greenhouse gas emissions as a predictor, they will have a very convincing case.  That is a high hurdle.
  • In short, the new studies are interesting and add to our knowledge.  But they do not change the state of knowledge related to trends in global disasters and how they might be related to greenhouse gases.  But even so, I expect that many will still want to connect the dots between greenhouse gas emissions and recent floods.  Connecting the dots is fun, but it is not science.
  • Jessica Weinkle said...
  • The thing about the nature articles is that Nature itself made the leap from the science findings to damages in the News piece by Q. Schiermeier through the decision to bring up the topic of insurance. (Not to mention that which is symbolically represented merely by the journal’s cover this week). With what I (maybe, naively) believe to be a particularly ballsy move, the article quoted Muir-Wood, an industry scientist. However, what he is quoted as saying is admirably clever. Initially it is stated that Dr. Muir-Wood backs the notion that one cannot put the blame of increased losses on climate change. Then, the article ends with a quote from him, “If there’s evidence that risk is changing, then this is something we need to incorporate in our models.”
  • This is a very slippery slope and a brilliant double-dog dare. Without doing anything but sitting back and watching the headlines, one can form the argument that “science” supports the remodeling of the hazard risk above the climatological average and is more important than the risks stemming from socioeconomic factors. The reinsurance industry itself has published that socioeconomic factors far outweigh changes in the hazard in concern of losses. The point (and that which has particularly gotten my knickers in a knot) is that Nature, et al. may wish to consider what it is that they want to accomplish. Is it greater involvement of federal governments in the insurance/reinsurance industry on the premise that climate change is too great a loss risk for private industry alone regardless of the financial burden it imposes? The move of insurance mechanisms into all corners of the earth under the auspices of climate change adaptation? Or simply a move to bolster prominence, regardless of whose back it breaks- including their own, if any of them are proud owners of a home mortgage? How much faith does one have in their own model when they are told that hundreds of millions of dollars in the global economy is being bet against the odds that their models produce?
  • What Nature says matters to the world; what scientists say matters to the world- whether they care for the responsibility or not. That is after all, the game of fame and fortune (aka prestige).
Weiye Loh

Humanist census posters banned from railway stations | UK news | The Guardian - 0 views

  • The posters, bearing the slogan "If you're not religious, for God's sake say so", have been refused by the companies that own the advertising space, which say they are likely to cause offence.
  • The British Humanist Association (BHA), which published the posters, said it was astonished that such an everyday phrase should be deemed too contentious for public display. "It is a little tongue-in-cheek," said the BHA chief executive, Andrew Copson, "but in the same way that saying 'bless you' has no religious implication for many, 'for God's sake' is used to express urgency and not to invoke a deity.
  • "This censorship of a legitimate advert is frustrating and ridiculous: the blasphemy laws in England have been abolished but we are seeing the same principle being enforced nonetheless."
  • The posters ask those who are not religious to tick the "no religion" box when they fill in forms for the 2011 census. "We used to tick 'Christian' but we're not really religious. We'll tick 'No Religion' this time. We're sick of hearing politicians say this is a religious country and giving millions to religious organisations and the pope's state visit. Money like that should go where it is needed," says one of the banned posters.
  • The ban followed advice from the Advertising Standards Authority's committee of advertising practice that the advert had the potential to cause widespread and serious offence. The poster display company involved also said it did not want to take adverts relating to religion.
  • The British Humanist Association has amended the campaign slogan on the adverts to read simply: "Not religious? In this year's census say so." The posters are being displayed from this weekend on 200 buses in London, Manchester, Leeds, Newcastle, Birmingham, Cardiff and Exeter.
  •  
    The posters, which encourage people to tick the 'no religion' box if they do not believe in God, were judged too likely to offend
Weiye Loh

RealClimate: E&E threatens a libel suit - 0 views

  • From: Bill Hughes Cc: Sonja Boehmer-Christiansen Subject:: E&E libel Date: 02/18/11 10:48:01 Gavin, your comment about Energy & Environment which you made on RealClimate has been brought to my attention: “The evidence for this is in precisely what happens in venues like E&E that have effectively dispensed with substantive peer review for any papers that follow the editor’s political line. ” To assert, without knowing, as you cannot possibly know, not being connected with the journal yourself, that an academic journal does not bother with peer review, is a terribly damaging charge, and one I’m really quite surprised that you’re prepared to make. And to further assert that peer review is abandoned precisely in order to let the editor publish papers which support her political position, is even more damaging, not to mention being completely ridiculous. At the moment, I’m prepared to settle merely for a retraction posted on RealClimate. I’m quite happy to work with you to find a mutually satisfactory form of words: I appreciate you might find it difficult. I look forward to hearing from you. With best wishes Bill Hughes Director Multi-Science Publsihing [sic] Co Ltd
  • The comment in question was made in the post “From blog to Science”
  • The point being that if the ‘peer-review’ bar gets lowered, the result is worse submissions, less impact and a declining reputation. Something that fits E&E in spades. This conclusion is based on multiple years of evidence of shoddy peer-review at E&E and, obviously, on the statements of the editor, Sonja Boehmer-Christiansen. She was quoted by Richard Monastersky in the Chronicle of Higher Education (3 Sep 2003) in the wake of the Soon and Baliunas fiasco: The journal’s editor, Sonja Boehmer-Christiansen, a reader in geography at the University of Hull, in England, says she sometimes publishes scientific papers challenging the view that global warming is a problem, because that position is often stifled in other outlets. “I’m following my political agenda — a bit, anyway,” she says. “But isn’t that the right of the editor?”
  • the claim that the ‘an editor publishes papers based on her political position’ while certainly ‘terribly damaging’ to the journal’s reputation is, unfortunately, far from ridiculous.
  • Other people have investigated the peer-review practices of E&E and found them wanting. Greenfyre, dissecting a list of supposedly ‘peer-reviewed’ papers from E&E found that: A given paper in E&E may have been peer reviewed (but unlikely). If it was, the review process might have been up to the normal standards for science (but unlikely). Hence E&E’s exclusion from the ISI Journal Master list, and why many (including Scopus) do not consider E&E a peer reviewed journal at all. Further, even the editor states that it is not a science journal and that it is politically motivated/influenced. Finally, at least some of what it publishes is just plain loony.
  • Also, see comments from John Hunter and John Lynch. Nexus6 claimed to have found the worst climate paper ever published in its pages, and that one doesn’t even appear to have been proof-read (a little like Bill’s email). A one-time author, Roger Pielke Jr, said “…had we known then how that outlet would evolve beyond 1999 we certainly wouldn’t have published there.”, and Ralph Keeling once asked, “Is it really the intent of E&E to provide a forum for laundering pseudo-science?”. We report, you decide.
  • We are not surprised to find that Bill Hughes (the publisher) is concerned about his journal’s evidently appalling reputation. However, perhaps the way to fix that is to start applying a higher level of quality control rather than by threatening libel suits against people who publicly point out the problems?
Weiye Loh

Can a group of scientists in California end the war on climate change? | Science | The Guardian - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups was enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. “You can think of statisticians as the keepers of the scientific method,” Brillinger told me. “Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for.”
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
  • The wealth of data Rohde has collected so far – and some dates back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, of itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperature records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will take temperature records from individual stations and weight them according to how reliable they are.
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in Celsius instead of Fahrenheit. The times at which measurements are taken vary, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station more from wind and sun from one year to the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human interference. When the team publishes its results, this is where the scrutiny will be most intense.
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
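The gridded averaging described in the article (an imaginary grid laid over the planet, with station records averaged within each square) can be sketched as a toy computation. This is only an illustration of the idea, not any agency's actual pipeline; the station coordinates and anomaly values below are invented:

```python
import math
from collections import defaultdict

# Toy station records: (latitude, longitude, temperature anomaly in degrees C).
# All coordinates and values are invented for illustration.
stations = [
    (51.5, -0.1, 0.8),   # two stations falling in one English grid cell
    (52.2, -0.3, 0.6),
    (40.7, -74.0, 1.1),  # New York area
    (-33.9, 151.2, 0.4), # Sydney area
]

CELL = 5.0  # a 5 x 5 degree grid, similar in spirit to the agencies' gridding

# Step 1: average all stations that fall in the same grid cell.
cells = defaultdict(list)
for lat, lon, temp in stations:
    key = (math.floor(lat / CELL), math.floor(lon / CELL))
    cells[key].append(temp)

# Step 2: combine cell averages, weighting each cell by cos(latitude)
# so the smaller area of high-latitude cells does not bias the mean.
num = den = 0.0
for (ilat, _), temps in cells.items():
    cell_mean = sum(temps) / len(temps)
    lat_center = (ilat + 0.5) * CELL
    weight = math.cos(math.radians(lat_center))
    num += weight * cell_mean
    den += weight

global_mean = num / den
print(round(global_mean, 3))
```

A real analysis must also handle the systematic biases the article describes — station moves, instrument changes, varying observation times — before any such averaging is meaningful.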
Weiye Loh

Scientists Are Cleared of Misuse of Data - NYTimes.com - 0 views

  • The inquiry, by the Commerce Department’s inspector general, focused on e-mail messages between climate scientists that were stolen and circulated on the Internet in late 2009 (NOAA is part of the Commerce Department). Some of the e-mails involved scientists from NOAA.
  • Climate change skeptics contended that the correspondence showed that scientists were manipulating or withholding information to advance the theory that the earth is warming as a result of human activity.
  • In a report dated Feb. 18 and circulated by the Obama administration on Thursday, the inspector general said, “We did not find any evidence that NOAA inappropriately manipulated data.”
  • The finding comes at a critical moment for NOAA as some newly empowered Republican House members seek to rein in the Environmental Protection Agency’s plans to regulate greenhouse gas emissions, often contending that the science underpinning global warming is flawed. NOAA is the federal agency tasked with monitoring climate data.
  • The inquiry into NOAA’s conduct was requested last May by Senator James M. Inhofe, Republican of Oklahoma, who has challenged the science underlying human-induced climate change. Mr. Inhofe was acting in response to the controversy over the e-mail messages, which were stolen from the Climatic Research Unit at the University of East Anglia in England, a major hub of climate research. Mr. Inhofe asked the inspector general of the Commerce Department to investigate how NOAA scientists responded internally to the leaked e-mails. Of 1,073 messages, 289 were exchanges with NOAA scientists.
  • The inspector general reviewed the 1,073 e-mails, and interviewed Dr. Lubchenco and staff members about their exchanges. The report did not find scientific misconduct; it did, however, challenge the agency over its handling of some Freedom of Information Act requests in 2007. And it noted the inappropriateness of a collage cartoon, e-mailed between two NOAA scientists, depicting Senator Inhofe and five other climate skeptics marooned on a melting iceberg.
  • The report was not a review of the climate data itself. It joins a series of investigations by the British House of Commons, Pennsylvania State University, the InterAcademy Council and the National Research Council into the leaked e-mails that have exonerated the scientists involved of scientific wrongdoing.
  • But Mr. Inhofe said the report was far from a clean bill of health for the agency and that contrary to its executive summary, showed that the scientists “engaged in data manipulation.”
  • “It also appears that one senior NOAA employee possibly thwarted the release of important federal scientific information for the public to assess and analyze,” he said, referring to an employee’s failure to provide material related to work for the Intergovernmental Panel on Climate Change, a different body that compiles research, in response to a Freedom of Information request.
Weiye Loh

Science-Based Medicine » Skepticism versus nihilism about cancer and science-based medicine - 0 views

  • I’m a John Ioannidis convert, and I accept that there is a lot of medical literature that is erroneous. (Just search for Dr. Ioannidis’ last name on this blog, and you’ll find copious posts praising him and discussing his work.) In fact, as I’ve pointed out, most medical researchers instinctively know that most new scientific findings will not hold up to scrutiny, which is why we rarely accept the results of a single study, except in unusual circumstances, as being enough to change practice. I also have pointed out many times that this is not necessarily a bad thing. Replication is key to verification of scientific findings, and more often than not provocative scientific findings are not replicated. Does that mean they shouldn’t be published?
  • As for pseudoscience, I’m half tempted to agree with Dr. Spector, but just not in the way he thinks. Unfortunately, over the last 20 years or so, there has been an increasing amount of pseudoscience in the medical literature in the form of “complementary and alternative medicine” (CAM) studies of highly improbable remedies or even virtually impossible ones (i.e., homeopathy). However, that does not appear to be what Dr. Spector is talking about, which is why I looked up his references. The second reference is to an SI article from 2009 entitled Science and Pseudoscience in Adult Nutrition Research and Practice. There, and only there, did I find out just what it is that Dr. Spector apparently means by “pseudoscience”: By pseudoscience, I mean the use of inappropriate methods that frequently yield wrong or misleading answers for the type of question asked. In nutrition research, such methods also often misuse statistical evaluations.
  • Dr. Spector doesn’t really know the difference between inadequately rigorous science and pseudoscience! Now, don’t get me wrong. I know that it’s not always easy to distinguish science from pseudoscience, especially at the fringes, but in general bad science has to go a lot further than Dr. Spector thinks to merit the term “pseudoscience.” It is clear (to me, at least) from his articles that Dr. Spector throws the term “pseudoscience” around rather more loosely than he should, using it as a pejorative for any clinical science less rigorous than a randomized, double-blind, placebo-controlled trial that meets FDA standards for approval of a drug (his pharma background coming to the fore, no doubt). Pseudoscience, Dr. Spector. You keep using that word. I do not think it means what you think it means. Indeed, I almost get the impression from his articles that Dr. Spector views any study that doesn’t reach FDA-level standards for drug approval to be pseudoscience.
  • Medical science, when it works well, tends to progress from basic science, to small pilot studies, to larger randomized studies, and then–only then–to those big, rigorous, insanely expensive randomized, double-blind, placebo-controlled trials. Dr. Spector mentions hierarchies of evidence, but he seems to fall into a false dichotomy, namely that if it’s not Level I evidence, it’s crap. The problem is, as Mark pointed out, in medicine we often don’t have Level I evidence for many questions. Indeed, for some questions, we will never have Level I evidence. Clinical medicine involves making decisions in the midst of uncertainty, sometimes extreme uncertainty.
  • Dr. Spector then proceeds to paint a picture of reckless physicians proceeding on crappy studies to pump women full of hormones. Actually, it was rather more complicated than that. That was the time when I was in my medical training, and I remember the discussions we had regarding the strength (or lack thereof) of the epidemiological data and the lack of good RCTs looking at HRT. I also remember that nothing works as well to relieve menopausal symptoms as HRT, an observation we have been reminded of again since 2003, which is the year when the first big study came out implicating HRT in increasing the risk of breast cancer (more later).
  • I found a rather fascinating editorial in the New England Journal of Medicine from more than 20 years ago that discussed the state of the evidence back then with regard to estrogen and breast cancer: Evidence that estrogen increases the risk of breast cancer has been surprisingly difficult to obtain. Clinical and epidemiologic studies and studies in animals strongly suggest that endogenous estrogen plays a part in causing breast cancer. If so, exogenous estrogen should be a potent promoter of breast cancer. Although more than 20 case–control and prospective studies of the relation of breast cancer and noncontraceptive estrogen use have failed to demonstrate the expected association, relatively few women in these studies used estrogen for extended periods. Studies of the use of diethylstilbestrol and oral contraceptives suggest that a long exposure or latency may be necessary to show any association between hormone use and breast cancer. In the Swedish study, only six years of follow-up was needed to demonstrate an increased risk of breast cancer with the postmenopausal use of estradiol. It should be noted, however, that half the women in the subgroup that provided detailed data on the duration of hormone use had taken estrogen for many years before their base-line prescription status was defined. The duration of estrogen exposure in these women before the diagnosis of breast cancer was probably seriously underestimated; a short latency cannot be attributed to estradiol on the basis of these data. Other recent studies of the use of noncontraceptive estrogen suggest a slightly increased risk of breast cancer after 15 to 20 years’ use.
  • even now, the evidence is conflicting regarding HRT and breast cancer, with the preponderance of evidence suggesting that mixed HRT (estrogen and progestin) significantly increases the risk of breast cancer, while estrogen-alone HRT very well might not increase the risk of breast cancer at all or (more likely) only very little. Indeed, I was just at a conference all day Saturday where data demonstrating this very point were discussed by one of the speakers. None of this stops Dr. Spector from categorically labeling estrogen as a “carcinogen that causes breast cancers that kill women.” Maybe. Maybe not. It’s actually not that clear. The problem, of course, is that, consistent with the first primary reports of WHI results, the preponderance of evidence finding health risks due to HRT has indicted the combined progestin/estrogen combinations as unsafe.
Weiye Loh

Epiphenom: People: not as nice as they think they are - 0 views

  • Just how far divorced from reality we are was shown recently in an elegant study by Oriel Feldmanhall, a PhD candidate at the MRC Cognition and Brain Sciences Unit at Cambridge University, England. She's just presented the research at the Annual Meeting of the Cognitive Neuroscience Society in San Francisco, California.
  • She studied two groups of people. She asked the first group to imagine a scenario in which they would get paid a small sum to deliver painful but harmless electric shocks. 64% said they would never deliver a shock, and on average the participants would only deliver enough shocks to earn a paltry £4. The second group got the real deal. They actually administered the shocks, and saw the response on video (they were in an MRI scanner at the time). This time, a massive 96% of participants administered shocks. Those who saw video of the grimacing faces of their victims pocketed £11.55. Those who were spared that and only saw the hands walked away with a cool £15.77.
  • Brains scans vividly illuminated the emotional turmoil going on in the subjects who participated in the real experiment. They had a lot of activity in their insula, a deep, primitive part of the brain thought to be linked to moral intuition. People who did the pen-and-paper, hypothetical version had no such turmoil.
  • ...1 more annotation...
  • So, does this mean that we should throw away all those pen-and-paper and survey-based studies of religion? Well, no: they still tell us something. It's just not entirely clear what they are telling us!
Weiye Loh

The Mechanic Muse - What Is Distant Reading? - NYTimes.com - 0 views

  • Lit Lab tackles literary problems by scientific means: hypothesis-testing, computational modeling, quantitative analysis. Similar efforts are currently proliferating under the broad rubric of “digital humanities,” but Moretti’s approach is among the more radical. He advocates what he terms “distant reading”: understanding literature not by studying particular texts, but by aggregating and analyzing massive amounts of data.
  • People recognize, say, Gothic literature based on castles, revenants, brooding atmospheres, and the greater frequency of words like “tremble” and “ruin.” Computers recognize Gothic literature based on the greater frequency of words like . . . “the.” Now, that’s interesting. It suggests that genres “possess distinctive features at every possible scale of analysis.” More important for the Lit Lab, it suggests that there are formal aspects of literature that people, unaided, cannot detect.
  • Distant reading might prove to be a powerful tool for studying literature, and I’m intrigued by some of the lab’s other projects, from analyzing the evolution of chapter breaks to quantifying the difference between Irish and English prose styles. But whatever’s happening in this paper is neither powerful nor distant. (The plot networks were assembled by hand; try doing that without reading Hamlet.) By the end, even Moretti concedes that things didn’t unfold as planned. Somewhere along the line, he writes, he “drifted from quantification to the qualitative analysis of plot.”
  • ...5 more annotations...
  • most scholars, whatever their disciplinary background, do not publish negative results.
  • I would admire it more if he didn’t elsewhere dismiss qualitative literary analysis as “a theological exercise.” (Moretti does not subscribe to literary-analytic pluralism: he has suggested that distant reading should supplant, not supplement, close reading.) The counterpoint to theology is science, and reading Moretti, it’s impossible not to notice him jockeying for scientific status. He appears now as literature’s Linnaeus (taxonomizing a vast new trove of data), now as Vesalius (exposing its essential skeleton), now as Galileo (revealing and reordering the universe of books), now as Darwin (seeking “a law of literary ­evolution”).
  • Literature is an artificial universe, and the written word, unlike the natural world, can’t be counted on to obey a set of laws. Indeed, Moretti often mistakes metaphor for fact. Those “skeletons” he perceives inside stories are as imposed as exposed; and literary evolution, unlike the biological kind, is largely an analogy. (As the author and critic Elif Batuman pointed out in an n+1 essay on Moretti’s earlier work, books actually are the result of intelligent design.)
  • Literature, he argues, is “a collective system that should be grasped as such.” But this, too, is a theology of sorts — if not the claim that literature is a system, at least the conviction that we can find meaning only in its totality.
  • The idea that truth can best be revealed through quantitative models dates back to the development of statistics (and boasts a less-than-benign legacy). And the idea that data is gold waiting to be mined; that all entities (including people) are best understood as nodes in a network; that things are at their clearest when they are least particular, most interchangeable, most aggregated — well, perhaps that is not the theology of the average lit department (yet). But it is surely the theology of the 21st century.
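The word-frequency signal the Lit Lab exploits (genres differing even in the rate of a function word like "the") can be sketched in a few lines of Python. The two snippets below are invented for illustration, not the lab's corpus or code:

```python
from collections import Counter
import re

def relative_freq(text: str, word: str) -> float:
    """Fraction of tokens in `text` equal to `word` (case-insensitive)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return Counter(tokens)[word] / len(tokens)

# Invented sample snippets, purely illustrative of the method.
gothic = "The castle loomed over the moor, and the wind moaned through the ruin."
realist = "Anna poured tea and asked about his day at the office."

for name, text in [("gothic", gothic), ("realist", realist)]:
    print(name, round(relative_freq(text, "the"), 3))
```

Scaled up from toy snippets to thousands of novels, differences in such rates become stable enough for a classifier to separate genres without any human reading at all, which is the core claim of distant reading.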
Weiye Loh

Ian Burrell: 'Hackgate' is a story that refuses to go away - Commentators, Opinion - The Independent - 0 views

  • Mr Murdoch's close henchman Les Hinton assured MPs that the affair had been dealt with and when, two years later, Mr Coulson – by now director of communications for David Cameron – appeared before a renewed parliamentary inquiry he seemed confident of being fireproof. "We did not use subterfuge of any kind unless there was a clear public interest in doing so," he told MPs. When Scotland Yard concluded that, despite more allegations of hacking, there was nothing new to investigate, Wapping and Mr Coulson must again have concluded the affair was over.
  • But after an election campaign in which the Conservatives were roundly supported by Mr Murdoch's papers, a succession of further claimants against the News of the World has come forward. Sienna Miller, among others, seems determined to take her case to court, compelling Mulcaire to reveal his handlers and naming in court documents Ian Edmondson, once one of Coulson's executives. Mr Edmondson is now suspended. But the story is unlikely to end there.
  • When Rupert Murdoch came to England last October to deliver a lecture, there were some in the audience who raised eyebrows when the media mogul broke off from a paean to Baroness Thatcher to say of his journalists: "We will vigorously pursue the truth – and we will not tolerate wrongdoing." The latter comment seemed to refer to the long-running phone-hacking scandal involving the News of the World, the tabloid he has owned for 41 years. Mr Murdoch's executives at his British headquarters in Wapping, east London, tried to draw a veil over the paper's own dirty secrets in 2007 and had no doubt assured him that the matter was history. Yet here was the boss, four years later, having to vouch for his organisation's honesty.
  • The news agenda changes fast in tabloid journalism, but Hackgate has been a story that refuses to go away. When the private investigator Glenn Mulcaire and the News of the World journalist Clive Goodman were jailed for conspiring to intercept the voicemails of members of the royal household, Wapping quickly closed ranks. The editor Andy Coulson was obliged to fall on his sword - while denying knowledge of illegality - and Goodman was condemned as a rogue operator.
Weiye Loh

American Medical Association Officially Condemns Photoshopping - Health - GOOD - 0 views

  • The AMA this week formally denounced retouching pictures and asked ad agencies to consider setting stricter guidelines for how photos are manipulated before becoming advertisements.
  • Last year in France, members of parliament advocated attaching warning labels to imagery that had been digitally enhanced; lawmakers in England have also dabbled with the idea. Perhaps the AMA's new stance will be the nudge America needs to follow our European friends' lead. Unfortunately, our staggering eating-disorder statistics seem not to be enough.