New Media Ethics 2009 course / Group items tagged: groups

Weiye Loh

Open Letter to Richard Dawkins: Why Are You Still In Denial About Group Selection? : Ev... - 0 views

  • Dear Richard, I do not agree with the cynical adage "science progresses--funeral by funeral", but I fear that it might be true in your case for the subject of group selection.
  • Edward Wilson was misunderstanding kin selection as far back as Sociobiology, where he treated it as a subset of group selection ... Kin selection is not a subset of group selection, it is a logical consequence of gene selection. And gene selection is (everything that Nowak et al ought to mean by) 'standard natural selection' theory: has been ever since the neo-Darwinian synthesis of the 1930s.
  • I do not agree with the Nowak et al. article in every respect and will articulate some of my disagreements in subsequent posts. For the moment, I want to stress how alone you are in your statement about group selection. Your view is essentially pre-1975, a date that is notable not only for the publication of Sociobiology but also for a paper by W.D. Hamilton, one of your heroes, who correctly saw the relationship between kin selection and group selection thanks to the work of George Price. Ever since, knowledgeable theoretical biologists have known that inclusive fitness theory includes the logic of multilevel selection, which means that altruism is selectively disadvantageous within kin groups and evolves only by virtue of groups with more altruists contributing more to the gene pool than groups with fewer altruists. The significance of relatedness is that it clusters the genes coding for altruistic and selfish behaviors into different groups. [A toy simulation of this within-group/between-group logic follows these annotations.]
  • Even the contemporary theoretical biologists most critical of multilevel selection, such as Stuart West and Andy Gardner, acknowledge what you still deny. In an earlier feature on group selection published in Nature, Andy Gardner is quoted as saying "Everyone agrees that group selection occurs"--everyone except you, that is.
  • You correctly say that gene selection is standard natural selection theory. Essentially, it is a popularization of the concept of average effects in population genetics theory, which averages the fitness of alternative genes across all contexts to calculate what evolves in the total population. For that reason, it is an elementary mistake to regard gene selection as an alternative to group selection. Whenever a gene evolves in the total population on the strength of group selection, despite being selectively disadvantageous within groups, it has the highest average effect compared to the genes that it replaced. Please consult the installment of my "Truth and Reconciliation for Group Selection" series titled "Naïve Gene Selectionism" for a refresher course. While you're at it, check out the installment titled "Dawkins Protests--Too Much".
  • The Nowak et al. article includes several critiques of inclusive fitness theory that need to be distinguished from each other. One issue is whether inclusive fitness theory is truly equivalent to explicit models of evolution in multi-group populations, or whether it makes so many simplifying assumptions that it restricts itself to a small region of the parameter space. A second issue is whether benefiting collateral kin is required for the evolution of eusociality and other forms of prosociality. A third issue is whether inclusive fitness theory, as understood by the average evolutionary biologist and the general public, bears any resemblance to inclusive fitness theory as understood by the cognoscenti.
  •  
    Open Letter to Richard Dawkins: Why Are You Still In Denial About Group Selection?
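
The within-group/between-group logic quoted above can be made concrete with a toy simulation (a minimal sketch with invented numbers, not a model from the letter or from Nowak et al.). Selfish types out-reproduce altruists inside every group, yet the global frequency of altruists rises because altruist-rich groups contribute more offspring, which is also what a gene-level "average effect" calculation registers.

```python
# Toy multilevel-selection model (illustrative numbers only).
# Within every group, selfish members out-reproduce altruists; between
# groups, altruist-rich groups out-produce altruist-poor ones.

groups = [
    (8, 2),  # (altruists, selfish) in an altruist-rich group
    (2, 8),  # an altruist-poor group
]
b, c = 5.0, 1.0   # group benefit per altruist, cost paid by each altruist
base = 10.0       # baseline fecundity

total_a = total_s = 0.0
for a, s in groups:
    n = a + s
    benefit = b * a / n           # public good shared by all group members
    w_altruist = base + benefit - c
    w_selfish = base + benefit    # always higher: altruism loses WITHIN groups
    total_a += a * w_altruist
    total_s += s * w_selfish

before = sum(a for a, _ in groups) / sum(a + s for a, s in groups)
after = total_a / (total_a + total_s)
print(f"global altruist frequency: before={before:.3f}, after={after:.3f}")
# Output: before=0.500, after=0.517 -- altruism declines inside each group
# yet rises overall, because the altruist-rich group produces more offspring.
```
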
Weiye Loh

Can a group of scientists in California end the war on climate change? | Science | The ... - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups was enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. "You can think of statisticians as the keepers of the scientific method," Brillinger told me. "Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for."
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
  • The wealth of data Rohde has collected so far – and some dates back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, of itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperature records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will take temperature records from individual stations and weight them according to how reliable they are. [A toy sketch of grid-averaging follows these annotations.]
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in Celsius instead of Fahrenheit. The times at which measurements are taken vary, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station more from wind and sun one year to the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human interference. When the team publishes its results, this is where the scrutiny will be most intense.
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
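
The gridding procedure described in the annotations can be sketched in a few lines. This is an illustrative toy with invented station readings, not the actual GISS, NOAA, Met Office, or Berkeley Earth code.

```python
import math
from collections import defaultdict

# Toy grid-averaging: bin station anomalies into lat/lon cells, average
# within each cell, then weight cells by cos(latitude) since cells shrink
# toward the poles. Stations and readings are invented for illustration.
stations = [
    # (latitude, longitude, temperature anomaly in C for one month)
    (51.5, -0.1, 0.42), (52.2, -0.5, 0.38),  # two stations sharing a cell
    (40.7, -74.0, 0.55),
    (-33.9, 151.2, 0.20),
]

cell_size = 5.0  # degrees
cells = defaultdict(list)
for lat, lon, anom in stations:
    key = (math.floor(lat / cell_size), math.floor(lon / cell_size))
    cells[key].append((lat, anom))

num = den = 0.0
for readings in cells.values():
    mean_lat = sum(lat for lat, _ in readings) / len(readings)
    cell_mean = sum(anom for _, anom in readings) / len(readings)
    weight = math.cos(math.radians(mean_lat))
    num += weight * cell_mean
    den += weight

print(f"area-weighted mean anomaly: {num / den:.2f} C")  # ~0.38 C here
```
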
Weiye Loh

Google's Next Mission: Fighting Violent Extremism | Fast Company - 0 views

  • Technology, of course, is playing a role both in recruiting members to extremist groups and in fueling pro-democracy and other movements--and that’s where Google’s interest lies. "Technology is a part of every challenge in the world, and a part of every solution,” Cohen tells Fast Company. "To the extent that we can bring that technology expertise, and mesh it with the Council on Foreign Relations’ academic expertise--and mesh all of that with the expertise of those who have had these experiences--that's a valuable network to explore these questions."
  • Cohen is the former State Department staffer who is best known for his efforts to bring technology into the country’s diplomatic efforts. But he was originally hired by Condoleezza Rice back in 2006 for a different--though related--purpose: to help Foggy Bottom better understand Middle Eastern youths (many of whom were big technology adopters) and how they could best be "deradicalized." Last fall, Cohen joined Google as head of its nascent Google Ideas, which the company is labeling a "think/do tank."
  • This summer’s conference, "Summit Against Violent Extremism," takes place June 26-29 and will bring together about 50 former members of extremist groups--including former neo-Nazis, Muslim fundamentalists, and U.S. gang members--along with another 200 representatives from civil society organizations, academia, private corporations, and victims groups. The hope is to identify some common factors that cause young people to join violent organizations, and to form a network of people working on the issue who can collaborate going forward.
  • One of the arenas where extremism is playing out these days is Google’s own backyard. While citizen empowerment movements have made use of YouTube to broadcast their messages, so have terrorist and other violent groups. Just this week, anti-Hamas extremists kidnapped an Italian peace activist and posted their hostage video to YouTube before eventually murdering him. YouTube has been criticized in the past for not removing violent videos quickly enough. But Cohen says the conference is looking at the root causes that prompt a young person to join one of the groups in the first place. "There are a lot of different dimensions to this challenge," he says. "It’s important not to conflate everything."
  •  
    Neo-Nazi groups and al Qaeda might not seem to have much in common, but they do in one key respect: their recruits tend to be very young. The head of Google's new think tank, Jared Cohen, believes there might be some common reasons why young people are drawn to violent extremist groups, no matter their ideological or philosophical bent. So this summer, Cohen is spearheading a conference, in Dublin, Ireland, to explore what it is that draws young people to these groups and what can be done to redirect them.
Weiye Loh

Crashing Into Stereotypes, Bryan Caplan | EconLog | Library of Economics and Liberty - 0 views

  • The trite official theme of the movie - the evils of narrow-minded prejudice - could have sunk the whole project. But as in a lot of compelling fiction, the official theme of Crash contradicts the details of the story. If you are paying attention, it soon becomes obvious that virtually none of the characters suffer from "narrow-minded prejudice." No one makes up their grievances out of thin air. Instead, the characters mostly engage in statistical discrimination. They generalize from their experience to form stereotypes about the members of different ethnic groups (including their own!), and act on those stereotypes when it is costly to make case-by-case judgments (as it usually is). In the story, moreover, stereotypes are almost invariably depicted as statistically accurate. Young black men are more likely to be car thieves; white cops are more likely to abuse black suspects; and Persians have bad tempers. Of course, the story also makes the point that some members of these groups violate the stereotype. But that "insight" is basic to all statistical reasoning.
  • The rule in Crash is that busy people see others as average members of their groups until proven otherwise. [A minimal Bayesian sketch of this rule follows these annotations.]
  • It is particularly interesting that Crash illustrates one of the deep truths of models of statistical discrimination: The real social conflict is not between groups, but within groups. People who are below-average for their group make life worse for people who are above-average for their group. Women who get job training and then quit to have children hurt the careers of single-minded career women, because they reduce the profitability of the average woman. This lesson is beautifully expressed in the scene where the successful black TV producer (Terrence Howard) chews out the black teenager (Chris "Ludacris" Bridges) who unsuccessfully tried to car-jack him: "You embarrass me. You embarrass yourself."
  •  
    If you really want to improve your group's image, telling other groups to stop stereotyping won't work. The stereotype is based on the underlying distribution of fact. It is far more realistic to turn your complaining inward, and pressure the bad apples in your group to stop pulling down the average.
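
Caplan's "average member until proven otherwise" rule is, in effect, Bayesian updating: with no individual information the best estimate of a person is the group mean, and each costly case-by-case observation pulls the estimate toward the individual. A minimal sketch, with all numbers invented:

```python
# Statistical discrimination as a normal-normal Bayesian update: the group
# mean serves as the prior, and individual observations override it.
# All numbers are invented for illustration.

def posterior_mean(group_mean, group_var, signals, noise_var):
    """Start at the group mean; shrink toward the individual's own
    noisy signals as more of them are observed."""
    precision = 1.0 / group_var
    mean = group_mean
    for x in signals:
        s_precision = 1.0 / noise_var
        mean = (precision * mean + s_precision * x) / (precision + s_precision)
        precision += s_precision
    return mean

# No signals: the estimate IS the group average ("average member").
print(posterior_mean(50.0, 100.0, [], 25.0))            # 50.0
# Each observation moves the estimate toward the individual:
print(posterior_mean(50.0, 100.0, [80.0], 25.0))        # 74.0
print(posterior_mean(50.0, 100.0, [80.0, 80.0], 25.0))  # ~76.7
```
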
Weiye Loh

Meet the Ethical Placebo: A Story that Heals | NeuroTribes - 0 views

  • In modern medicine, placebos are associated with another form of deception — a kind that has long been thought essential for conducting randomized clinical trials of new drugs, the statistical rock upon which the global pharmaceutical industry was built. One group of volunteers in an RCT gets the novel medication; another group (the “control” group) gets pills or capsules that look identical to the allegedly active drug, but contain only an inert substance like milk sugar. These faux drugs are called placebos.
  • Inevitably, the health of some people in both groups improves, while the health of others grows worse. Symptoms of illness fluctuate for all sorts of reasons, including regression to the mean. [A small simulation of regression to the mean follows these annotations.]
  • Since the goal of an RCT, from Big Pharma’s perspective, is to demonstrate the effectiveness of a new drug, the return to robust health of a volunteer in the control group is considered a statistical distraction. If too many people in the trial get better after downing sugar pills, the real drug will look worse by comparison — sometimes fatally so for the purpose of earning approval from the Food and Drug Administration.
  • For a complex and somewhat mysterious set of reasons, it is becoming increasingly difficult for experimental drugs to prove their superiority to sugar pills in RCTs.
  • Only in recent years, however, has it become obvious that the abatement of symptoms in control-group volunteers — the so-called placebo effect — is worthy of study outside the context of drug trials, and is in fact profoundly good news to anyone but investors in Pfizer, Roche, and GlaxoSmithKline.
  • The emerging field of placebo research has revealed that the body’s repertoire of resilience contains a powerful self-healing network that can help reduce pain and inflammation, lower the production of stress chemicals like cortisol, and even tame high blood pressure and the tremors of Parkinson’s disease.
  • more and more studies each year — by researchers like Fabrizio Benedetti at the University of Turin, author of a superb new book called The Patient’s Brain, and neuroscientist Tor Wager at the University of Colorado — demonstrate that the placebo effect might be potentially useful in treating a wide range of ills. Then why aren’t doctors supposed to use it?
  • The medical establishment’s ethical problem with placebo treatment boils down to the notion that for fake drugs to be effective, doctors must lie to their patients. It has been widely assumed that if a patient discovers that he or she is taking a placebo, the mind/body password will no longer unlock the network, and the magic pills will cease to do their job.
  • For “Placebos Without Deception,” the researchers tracked the health of 80 volunteers with irritable bowel syndrome for three weeks as half of them took placebos and the other half didn’t.
  • In a previous study published in the British Medical Journal in 2008, Kaptchuk and Kirsch demonstrated that placebo treatment can be highly effective for alleviating the symptoms of IBS. This time, however, instead of the trial being “blinded,” it was “open.” That is, the volunteers in the placebo group knew that they were getting only inert pills — which they were instructed to take religiously, twice a day. They were also informed that, just as Ivan Pavlov trained his dogs to drool at the sound of a bell, the body could be trained to activate its own built-in healing network by the act of swallowing a pill.
  • In other words, in addition to the bogus medication, the volunteers were given a true story — the story of the placebo effect. They also received the care and attention of clinicians, which have been found in many other studies to be crucial for eliciting placebo effects. The combination of the story and a supportive clinical environment were enough to prevail over the knowledge that there was really nothing in the pills. People in the placebo arm of the trial got better — clinically, measurably, significantly better — on standard scales of symptom severity and overall quality of life. In fact, the volunteers in the placebo group experienced improvement comparable to patients taking a drug called alosetron, the standard of care for IBS. Meet the ethical placebo: a powerfully effective faux medication that meets all the standards of informed consent.
  • The study is hardly the last word on the subject, but more like one of the first. Its modest sample size and brief duration leave plenty of room for followup research. (What if “ethical” placebos wear off more quickly than deceptive ones? Does the fact that most of the volunteers in this study were women have any bearing on the outcome? Were any of the volunteers skeptical that the placebo effect is real, and did that affect their response to treatment?) Before some eager editor out there composes a tweet-baiting headline suggesting that placebos are about to drive Big Pharma out of business, he or she should appreciate the fact that the advent of AMA-approved placebo treatments would open numerous cans of fascinatingly tangled worms. For example, since the precise nature of placebo effects is shaped largely by patients’ expectations, would the advertised potency and side effects of theoretical products like Placebex and Therastim be subject to change by Internet rumors, requiring perpetual updating?
  • It’s common to use the word “placebo” as a synonym for “scam.” Economists talk about placebo solutions to our economic catastrophe (tax cuts for the rich, anyone?). Online skeptics mock the billion-dollar herbal-medicine industry by calling it Big Placebo. The fact that our brains and bodies respond vigorously to placebos given in warm and supportive clinical environments, however, turns out to be very real.
  • We’re also discovering that the power of narrative is embedded deeply in our physiology.
  • in the real world of doctoring, many physicians prescribe medications at dosages too low to have an effect on their own, hoping to tap into the body’s own healing resources — though this is mostly acknowledged only in whispers, as a kind of trade secret.
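
Regression to the mean, mentioned in the second annotation above, is easy to demonstrate by simulation: enroll people whose noisy symptom score is unusually bad, re-measure them, and the group "improves" with no treatment at all. A minimal sketch with invented parameters:

```python
import random

# Regression to the mean: select on a noisy baseline score and the group
# average falls at follow-up even though nobody was treated.
# Parameters are invented for illustration.
random.seed(0)

N = 100_000
baselines, followups = [], []
for _ in range(N):
    true_severity = random.gauss(50, 10)              # stable illness level
    baseline = true_severity + random.gauss(0, 10)    # noisy measurement
    if baseline > 70:                                 # enroll the "worst" cases
        baselines.append(baseline)
        followups.append(true_severity + random.gauss(0, 10))  # fresh noise

avg = lambda xs: sum(xs) / len(xs)
print(f"enrolled baseline mean:  {avg(baselines):.1f}")   # ~77
print(f"enrolled follow-up mean: {avg(followups):.1f}")   # ~63, untreated
# The apparent "improvement" comes purely from selecting on a noisy score.
```
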
Weiye Loh

Facebook groups hijacked - 1 views

  • ACTIVISTS claimed on Tuesday to have seized control of nearly 300 Facebook community groups in a self-proclaimed effort to expose how vulnerable online reputations are to tampering.
  • CYI claimed its motives were pure and that the move was more of a 'take-over' than a computer hack of Facebook groups.
    • Weiye Loh
       
       Sure, the end/ purpose is good... but the means? Questionable. Yet, it may be the only way to get people to formally recognize a flaw that everyone is (sub)consciously aware of but refuses to do anything about. Freedom of expression perhaps? We're back to the issue of what is right and what is wrong.
  • 'Facebook Groups suffer from a major flaw,' said a message on the CYI blog. 'If an administrator of a group leaves, anyone can register as a new admin. So, in order to take control of a Facebook group, all you really have to do is a quick search on Google.' Once CYI accessed groups as administrators it had authority to change anything, including pictures, descriptions and settings.
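
The flaw CYI describes amounts to a missing authorization check: an adminless group lets the first comer claim the admin role. Below is a hypothetical sketch of that logic and one possible fix; this is not Facebook's actual code, just the shape of the bug.

```python
# Hypothetical reconstruction of the flaw CYI describes (not Facebook code).

class Group:
    def __init__(self, name):
        self.name = name
        self.admins = set()

def claim_admin_broken(group, user):
    # BROKEN: an orphaned group treats "first comer" as a valid admin.
    if not group.admins:
        group.admins.add(user)
        return True
    return False

def claim_admin_safer(group, user, verified=False):
    # Safer: an orphaned group requires out-of-band verification,
    # e.g. platform review or a vote of existing members.
    if not group.admins and verified:
        group.admins.add(user)
        return True
    return False

g = Group("community-group")
print(claim_admin_broken(g, "attacker"))  # True  -- takeover succeeds
g = Group("community-group")
print(claim_admin_safer(g, "attacker"))   # False -- blocked by default
```
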
Weiye Loh

Politics and self-confidence trump education on climate change - 0 views

  • One set of polls, conducted by the University of New Hampshire, focused on a set of rural areas, including Alaska, the Gulf Coast, and Appalachia. These probably don't reflect the US as a whole, but the pollsters had about 9,500 respondents. The second, published in The Sociological Quarterly, took advantage of a decade's worth of Earth Day polls conducted by Gallup.
  • Both surveys asked similar questions, however, including whether climate change has occurred and whether humans were likely to be the primary cause. The scientific community, including all the major scientific organizations that have issued statements on the matter, has said yes to both of these questions, and the authors interpret their findings in light of that.
  • The UNH poll shows that a strong majority—in the 80-90 percent range—accepts that climate change is happening. The Gallup polls explicitly asked about global warming and got lower percentages, although it still found that a majority of the US thinks the climate is changing. Those who label themselves conservatives, however, are notably less likely to even accept that basic point; less than half of them do, while the majority of liberals and independents do.
  • Although there was widespread acceptance that climate change was occurring, Democrats were much more likely to ascribe it to human causes (margins ranged from 20 to 50 percent). Independents were somewhere in the middle. Among those who claimed to understand the topic well, the gap actually increased.
  • Republicans with a high degree of confidence in their knowledge of the climate were more likely to dismiss the scientific community's opinion; the highly confident Democrats were more likely to embrace it. The authors caution, however, that "The survey answers thus reflect self-confidence, which has an untested relation to knowledge."
  • The people working with Gallup data performed the same analysis, and found precisely the same thing: the more registered Republicans and those who describe themselves as conservatives thought they knew about anthropogenic climate change, the less likely they were to accept the evidence for it. For Democrats and independents, the opposite was true (same for self-styled moderates and liberals). This group also did a slightly different check, and broke out opinions on global warming based on education and political leanings. For Democrats and independents, increased education boosted their readiness to accept the scientific community's conclusions. For self-styled conservatives, education had almost no effect (it gave a slight boost among registered Republicans).
  • Because this group had temporal data, they could track the progression of this liberal/conservative gap. It existed back in the first year they had data, 2001, but the gap was relatively stable until about 2008. At that point, acceptance among conservatives plunged, leading to the current gap of over 40 percentage points (up from less than 20) between these groups.
  • Both groups also come to similar conclusions about why this gap has developed. The piece in The Sociological Quarterly is appropriately sociological, suggesting that modernizing forces have compelled most societies to deal with the "negative consequences of industrial capitalism," such as pollution. Climate change, for these authors, is a case where the elites of conservative politics have convinced their followers to protect capitalism from any negative associations.
  • The UNH group takes a more nuanced, psychological view of matters. "'Biased assimilation' has been demonstrated in experiments that find people reject information about the existence of a problem if they object to its possible solutions," they note, before later stating that many appear to be "basing their beliefs about science and physical reality on what they thought would be the political implications if human-caused climate change were true."
  • Neither group offers a satisfying solution. The sociologists simply warn that the culture wars have reached potentially dangerous proportions when it comes to climate science, while the group from New Hampshire suggests we might have to wait for an unambiguous consequence, like the loss of Arctic ice in the summer, before some segments of society come around.
  •  
    when it comes to climate change, politics dominates, eclipsing self-assessed knowledge and general education. In fact, it appears that your political persuasion might determine whether an education will make you more or less likely to believe the scientific community.
Weiye Loh

Roger Pielke Jr.'s Blog: IPCC and COI: Flashback 2004 - 0 views

  • In this case the NGOs and other groups represent environmental and humanitarian groups that have put together a report (in PDF) on what they see as needed and unnecessary policy actions related to climate change. They put together a nice glossy report with findings and recommendations such as: *Limit global temperature rise to 2 degrees (Celsius, p. 4) *Extracting the World Bank from fossil fuels (p. 15) *Opposing the inclusion of carbon sinks in the [Kyoto] Protocol (p. 22)
  • It is troubling that the Chair of the IPCC would lend his name and organizational affiliation to a set of groups with members engaged actively in political advocacy on climate change. Even if Dr. Pachauri feels strongly about the merit of the political agenda proposed by these groups, at a minimum his endorsement creates a potential perception that the IPCC has an unstated political agenda. This is compounded by the fact that the report Dr. Pachauri tacitly endorses contains statements that are scientifically at odds with those of the IPCC.
  • perhaps most troubling is that by endorsing this group’s agenda he has opened the door for those who would seek to discredit the IPCC by alleging exactly such a bias. (And don’t be surprised to see such statements forthcoming.) If the IPCC’s role is indeed to act as an honest broker, then it would seem to make sense that its leadership ought not blur that role by endorsing, tacitly or otherwise, the agendas of particular groups. There are plenty of appropriate places for political advocacy on climate change, but the IPCC does not seem to me to be among those places.
  • Organized by the New Economics Foundation and the Working Group on Climate and Development, the report (in PDF) is actually pretty good and contains much valuable information on climate change and development (that is, once you get past the hype of the press release and its lack of precision in disaggregating climate and vulnerability as sources of climate-related impacts). The participating organizations have done a nice job integrating considerations of climate change and development, a perspective that is certainly needed. More generally, the IPCC suffers because it no longer considers “policy options” under its mandate. Since its First Assessment Report, when it did consider policy options, the IPCC has eschewed responsibility for developing and evaluating a wide range of possible policy options on climate change. By deciding to keep policy outside of its mandate since 1992, the IPCC, ironically, leaves itself more open to charges of political bias. It is time for the IPCC to bring policy back in, both because we need new and innovative options on climate and because the IPCC has great potential to serve as an honest broker. But until it does, its leadership would be well served to avoid either the perception or the reality of endorsing particular political perspectives.
  •  
    Consider the following imaginary scenario. NGOs and a few other representatives of the oil and gas industry decide to band together to produce a report on what they see as needed and unnecessary policy actions related to climate change. They put together a nice glossy report with findings and recommendations such as: *Coal is the fuel of the future, we must mine more. *CO2 regulations are too costly. *Climate change will be good for agriculture. In addition, the report contains some questionable scientific statements and associations. Imagine further that the report contains a preface authored by a prominent scientist who though unpaid for his work lends his name and credibility to the report. How might that scientist be viewed by the larger community? Answers that come to mind include: "A tool of industry," "Discredited," "Biased," "Political Advocate." It is likely that in such a scenario that connection of the scientist to the political advocacy efforts of the oil and gas industry would provide considerable grist for opponents of the oil and gas industry, and specifically a basis for highlighting the appearance or reality of a compromised position of the scientist. Fair enough?
Weiye Loh

Do avatars have digital rights? - 20 views

hi weiye, i agree with you that this brings in the topic of representation. maybe you should try taking media and representation by Dr. Ingrid to discuss more on this. Going back to your questio...

avatars

Low Yunying

Private educational provider threatens online forum with defamation - 23 views

The case study can be found here: http://theonlinecitizen.com/2009/06/report-private-educational-provider-threatens-online-forum-with-defamation/ Summary: Private educational provider (Harriet...

defamation education

started by Low Yunying on 15 Aug 09 no follow-up yet
Weiye Loh

Epiphenom: Religion and suicide - a patchy global picture - 0 views

  • The main objective of this study is to understand the factors that contribute to suicide in different countries, and what can be done to reduce them. In each country, people who have attempted suicide are brought into the study and given a questionnaire to fill out. Another group of people, randomly chosen, are given the same questionnaire. That allows the team to compare religious affiliation, involvement in organised religion, and individual religiosity in suicide attempters and the general population. When they looked at the data, and adjusted them for a host of factors known to affect suicide risk (age, gender, marital status, employment, and education), a complex picture emerged. [A sketch of this case-control comparison follows these annotations.]
  • In Iran, religion was highly protective, whether religion was measured as the rate of mosque attendance or as whether the individual thought of themselves as a religious person. In Brazil, going to religious services and personal religiosity were both highly protective. Bizarrely, however, religious affiliation was not. That might be because being Protestant was linked to greater risk, and Catholicism to lower risk. Put the two together, and it may balance out. In Estonia, suicides were lower in those who were affiliated to a religion, and those who said they were religious. They were also a bit lower in those who... In India, there wasn't much effect of religion at all - a bit lower in those who go to religious services at least occasionally. Vietnam was similar. Those who went to religious services yearly were less likely to have attempted suicide, but no other measure of religion had any effect. In Sri Lanka, going to religious services had no protective effect, but subjective religiosity did. In South Africa, those who go to Church were no less likely to attempt suicide. In fact, those who said they were religious were actually nearly three times more likely to attempt suicide, and those who were affiliated to a religion were an incredible six times more likely!
  • In Brazil, religious people are six times less likely to commit suicide than the non religious. In South Africa, they are three times more likely. How to explain these national differences?
  • Part of it might be differences in the predominant religion. The protective effect of religion seems to be higher in monotheistic countries, and it's particularly high in the most fervently monotheistic country, Iran. In India, Sri Lanka, and Vietnam, the protective effect is smaller or non-existent.
  • But that doesn't explain South Africa. South Africa is unusual in that it is a highly diverse country, fractured by ethnic, social and religious boundaries. The researchers think that this might be a factor: South Africa has been described as ‘‘The Rainbow Nation’’ because of its cultural diversity. There are a variety of ethnic groups and a greater variety of cultures within each of these groups. While cultural diversity is seen as a national asset, the interaction of cultures results in the blurring of cultural norms and boundaries at the individual, family and cultural group levels. Subsequently, there is a large diversity of religious denominations and this does not seem favorable in terms of providing protection against attempted suicide.
  • earlier studies have shown that religious homogeneity is linked to lower suicide rates, and they suggest that the reverse might well be happening in South Africa.
  • this also could explain why, in Brazil, Protestants have a higher suicide rate than the unaffiliated. That too could be linked to their status as a religious minority.
  • we've got a study showing the double-edged nature of religion. For those inside the group, it provides support and comfort. But once fractures appear, religion just seems to turn up the heat!
  •  
     Religion and suicide
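
The design described in the first annotation is a case-control comparison: the odds of exposure (here, religiosity) among suicide attempters are compared with the odds among randomly chosen controls. A minimal sketch of the unadjusted odds ratio, with invented counts; the published analysis would additionally adjust for age, gender, marital status, employment, and education (for example, via logistic regression).

```python
# Case-control logic: compare the odds of being religious among suicide
# attempters vs. controls. Counts are invented for illustration.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# A country where religion looks protective (odds ratio < 1):
print(odds_ratio(30, 70, 60, 40))  # 0.29 -- attempters less often religious
# A country where it looks like a risk factor (odds ratio > 1):
print(odds_ratio(75, 25, 50, 50))  # 3.0  -- attempters more often religious
```
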
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts. [A toy permutation version of such a test follows these annotations.]
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
       Does the problem, then, lie not in statistics, but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretation? 
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly. [The arithmetic is sketched following these annotations.]
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • It basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
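  • Here is the barking-dog calculation spelled out, with invented numbers (the article gives none): say the dog is hungry 10% of the time, barks 80% of the time when hungry, and barks 20% of the time when well-fed.

```python
p_hungry = 0.10           # prior: how often the dog is hungry (assumed)
p_bark_if_hungry = 0.80   # likelihood of barking when hungry (assumed)
p_bark_if_fed = 0.20      # likelihood of barking when well-fed (assumed)

# total probability of hearing a bark
p_bark = p_bark_if_hungry * p_hungry + p_bark_if_fed * (1 - p_hungry)

# Bayes' theorem: P(hungry | bark)
posterior = p_bark_if_hungry * p_hungry / p_bark
print(f"P(hungry | barking) = {posterior:.2f}")   # 0.31 with these numbers
```

Even though the dog usually barks when hungry, the low prior keeps the posterior below one in three.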
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
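  • To see why prevalence matters, take a hypothetical screening test that is 99% sensitive and 95% specific for a disease affecting 1% of the population (numbers chosen purely for illustration). The test sounds accurate, yet most positives turn out to be false:

```python
prevalence = 0.01    # P(disease) in the population (assumed)
sensitivity = 0.99   # P(test positive | disease) (assumed)
specificity = 0.95   # P(test negative | no disease) (assumed)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_if_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_if_positive:.2f}")   # ~0.17
```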
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
Weiye Loh

The importance of culture change in open government | Government In The Lab - 0 views

  • Open government cannot succeed through technology alone. Open data, ideation platforms, cloud solutions, and social media are great tools, but when they are used to deliver government services through existing models they can deliver only partial value: value that cannot be measured and that is unclear to anyone but the technology practitioners delivering the services.
  • It is this thinking that has led a small group of us to launch a new group on GovLoop called Culture Change and Open Government. Bill Brantley wrote a great overview of the group, which notes that “The purpose of this group is to create an international community of practice devoted to discussing how to use cultural change to bring about open government and to use this site to plan and stage unconferences devoted to cultural change”.
  • “Open government is a citizen-centric philosophy and strategy that believes the best results are usually driven by partnerships between citizens and government, at all levels. It is focused entirely on achieving goals through increased efficiency, better management, information transparency, and citizen engagement and most often leverages newer technologies to achieve the desired outcomes. This is bringing business approaches, business technologies, to government”.
  •  
    Open government has primarily been the domain of the technologist. Other parts of the organization have not been considered, educated, or organized around a new way of thinking and a new way of delivering value. The organizational model, the culture itself, has not been addressed; the value of open government is not understood, is not measurable, and is not an approach that the majority of those in and around government have bought into.
Weiye Loh

Read Aubrey McClendon's response to "misleading" New York Times article (1) - 0 views

  • Since the shale gas revolution and resulting confirmation of enormous domestic gas reserves, there has been a relatively small group of analysts and geologists who have doubted the future of shale gas.  Their doubts have become very convenient to the environmental activists I mentioned earlier. This particular NYT reporter has apparently sought out a few of the doubters to fashion together a negative view of the U.S. natural gas industry. We also believe certain media outlets, especially the once venerable NYT, are being manipulated by those whose environmental or economic interests are being threatened by abundant natural gas supplies. We have seen for example today an email from a leader of a group called the Environmental Working Group who claimed today’s articles as this NYT reporter’s "second great story" (the first one declaring that produced water disposal from shale gas wells was unsafe) and that “we've been working with him for over 8 months. Much more to come. . .”
  • This reporter’s claim of impending scarcity of natural gas supply contradicts the facts and the scientific extrapolation of those facts by the most sophisticated reservoir engineers and geoscientists in the world. Not just at Chesapeake, but by experts at many of the world’s leading energy companies that have made multi-billion-dollar, long-term investments in U.S. shale gas plays, with us and many other companies. Notable examples of these companies, besides the leading independents such as Chesapeake, Devon, Anadarko, EOG, EnCana, Talisman and others, include these leading global energy giants: Exxon, Shell, BP, Chevron, Conoco, Statoil, BHP, Total, CNOOC, Marathon, BG, KNOC, Reliance, PetroChina, Mitsui, Mitsubishi and ENI, among others. Is it really possible that all of these companies, with a combined market cap of almost $2 trillion, know less about shale gas than a NYT reporter, a few environmental activists and a handful of shale gas doubters?
  •  
    Administrator's Note: This email was sent to all Chesapeake employees from CEO Aubrey McClendon, in response to a Sunday New York Times piece by Ian Urbina entitled "Insiders Sound an Alarm Amid a Natural Gas Rush."
    FW: CHK's response to 6.26.11 NYT article on shale gas
    From: Aubrey McClendon
    Sent: Sunday, June 26, 2011 8:37 PM
    To: All Employees
    Dear CHK Employees: By now many of you may have read or heard about a story in today's New York Times (NYT) that questioned the productive capacity and economic quality of U.S. natural gas shale reserves, as well as energy reserve accounting practices used by E&P companies, including Chesapeake. The story is misleading, at best, and is the latest in a series of articles produced by this publication that obviously have an anti-industry bias. We know for a fact that today's NYT story is the handiwork of the same group of environmental activists who have been the driving force behind the NYT's ongoing series of negative articles about the use of fracking and its importance to the US natural gas supply growth revolution - which is changing the future of our nation for the better in multiple areas. It is not clear to me exactly what these environmental activists are seeking to offer as their alternative energy plan, but most that I have talked to continue to naively presume that our great country need only rely on wind and solar energy to meet our current and future energy needs. They always seem to forget that wind and solar produce less than 2% of America's electricity today and are completely non-economic without ongoing government and ratepayer subsidies.
Weiye Loh

**Happy Angel (快乐天使)** - 0 views

  •  
    The reporter Mr. Chua Eng Wee of Lianhe Zaobao told me by phone on the evening of 19 May 2012 that someone from your group had called SPH and requested that one sentence be added to their news report. At 4pm on 19 May 2012, PM Lee and his team were with your group for the visit. Thus, if you do not investigate and clarify this clearly, people may think that it was PM Lee's team who did it.
Weiye Loh

Skepticblog » The Decline Effect - 0 views

  • The first group are those with an overly simplistic or naive sense of how science functions. This is a view of science similar to those films created in the 1950s and meant to be watched by students, with the jaunty music playing in the background. This view generally respects science, but has a significant underappreciation for the flaws and complexity of science as a human endeavor. Those with this view are easily scandalized by revelations of the messiness of science.
  • The second cluster is what I would call scientific skepticism – which combines a respect for science and empiricism as a method (really “the” method) for understanding the natural world, with a deep appreciation for all the myriad ways in which the endeavor of science can go wrong. Scientific skeptics, in fact, seek to formally understand the process of science as a human endeavor with all its flaws. It is therefore often skeptics pointing out phenomena such as publication bias, the placebo effect, the need for rigorous controls and blinding, and the many vagaries of statistical analysis. But at the end of the day, as complex and messy as the process of science is, a reliable picture of reality is slowly ground out.
  • The third group, often frustrating to scientific skeptics, are the science-deniers (for lack of a better term). They may take a postmodernist approach to science – science is just one narrative with no special relationship to the truth. Whatever you call it, what the science-deniers in essence do is describe all of the features of science that the skeptics do (sometimes annoyingly pretending that they are pointing these features out to skeptics) but then come to a different conclusion at the end – that science (essentially) does not work.
  • ...13 more annotations...
  • This third group – the science deniers – started out in the naive group, and then were so scandalized by the realization that science is a messy human endeavor that they leapt right to the nihilistic conclusion that science must therefore be bunk.
  • The article by Lehrer falls generally into this third category. He is discussing what has been called “the decline effect” – the fact that effect sizes in scientific studies tend to decrease over time, sometimes to nothing.
  • This term was first applied to the parapsychological literature, and was in fact proposed as a real phenomenon of ESP – that ESP effects literally decline over time. Skeptics have criticized this view as magical thinking and hopelessly naive – Occam’s razor favors the conclusion that it is the flawed measurement of ESP, not ESP itself, that is declining over time.
  • Lehrer, however, applies this idea to all of science, not just parapsychology. He writes: And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
  • Lehrer is ultimately referring to aspects of science that skeptics have been pointing out for years (as a way of discerning science from pseudoscience), but Lehrer takes it to the nihilistic conclusion that it is difficult to prove anything, and that ultimately “we still have to choose what to believe.” Bollocks!
  • Lehrer is describing the cutting edge or the fringe of science, and then acting as if it applies all the way down to the core. I think the problem is that there is so much scientific knowledge that we take for granted – so much so that we forget it is knowledge derived from the scientific method, and was at one point unknown.
  • It is telling that Lehrer uses as his primary examples of the decline effect studies from medicine, psychology, and ecology – areas where the signal to noise ratio is lowest in the sciences, because of the highly variable and complex human element. We don’t see as much of a decline effect in physics, for example, where phenomena are more objective and concrete.
  • If the truth itself does not “wear off”, as the headline of Lehrer’s article provocatively states, then what is responsible for this decline effect?
  • It is no surprise that effect sizes in preliminary studies tend to be positive. This can be explained on the basis of experimenter bias – scientists want to find positive results, and initial experiments are often flawed or less than rigorous. It takes time to figure out how to rigorously study a question, and so early studies will tend not to control for all the necessary variables. There is further publication bias, in which positive studies tend to be published more than negative studies.
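  • A small simulation shows how publication bias alone inflates early effect sizes. The setup is invented (a true effect of 0.2, noisy studies, and only statistically significant results getting published), and no fraud is involved anywhere in it.

```python
import random
import statistics

random.seed(7)
TRUE_EFFECT, SE, N_STUDIES = 0.2, 0.3, 2000

published = []
for _ in range(N_STUDIES):
    estimate = random.gauss(TRUE_EFFECT, SE)   # one noisy study
    if estimate / SE > 1.96:                   # only "significant" results get published
        published.append(estimate)

print(f"true effect: {TRUE_EFFECT}")
print(f"mean published effect: {statistics.mean(published):.2f} "
      f"({len(published)} of {N_STUDIES} studies published)")
```

With these numbers, the published literature reports an average effect several times the true one, simply because the unlucky-low studies never appear.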
  • Further, some preliminary research may be based upon chance observations – a false pattern based upon a quirky cluster of events. If these initial observations are used in the preliminary studies, then the statistical fluke will be carried forward. Later studies are then likely to exhibit a regression to the mean, or a return to more statistically likely results (which is exactly why you shouldn’t use initial data when replicating a result, but should use entirely fresh data – a mistake for which astrologers are infamous).
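  • Regression to the mean is just as easy to demonstrate (again with invented numbers): select subjects because their first noisy measurement looked extreme, re-measure them with fresh noise, and the same subjects land much closer to average even though nothing about them changed.

```python
import random
import statistics

random.seed(42)
N = 10_000

true_score = [random.gauss(0, 1) for _ in range(N)]
first = [t + random.gauss(0, 1) for t in true_score]    # first noisy measurement
second = [t + random.gauss(0, 1) for t in true_score]   # independent re-measurement

# select subjects whose *first* measurement looked extreme
top = [i for i in range(N) if first[i] > 2.0]

print(f"selected group, first measurement: {statistics.mean(first[i] for i in top):.2f}")
print(f"same group, fresh measurement:     {statistics.mean(second[i] for i in top):.2f}")
```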
  • Skeptics frequently caution against putting too much weight on new or preliminary scientific research. Don’t get excited by every new study touted in the lay press, or even by a university’s press release. Most new findings turn out to be wrong. In science, replication is king. Consensus and reliable conclusions are built upon multiple independent lines of evidence, replicated over time, all converging on one conclusion.
  • Lehrer does make some good points in his article, but they are points that skeptics are fond of making. In order to have a mature and functional appreciation for the process and findings of science, it is necessary to understand how science works in the real world, as practiced by flawed scientists and scientific institutions. This is the skeptical message.
  • But at the same time reliable findings in science are possible, and happen frequently – when results can be replicated and when they fit into the expanding intricate weave of the picture of the natural world being generated by scientific investigation.
Weiye Loh

Climate Emails Stoke Debate - WSJ.com - 0 views

  • The scientific community is buzzing over thousands of emails and documents -- posted on the Internet last week after being hacked from a prominent climate-change research center -- that some say raise ethical questions about a group of scientists who contend humans are responsible for global warming.
  • Some emails also refer to efforts by scientists who believe man is causing global warming to exclude contrary views from important scientific publications.
  • "This is what everyone feared. Over the years, it has become increasingly difficult for anyone who does not view global warming as an end-of-the-world issue to publish papers. This isn't questionable practice, this is unethical."
  • ...4 more annotations...
  • "The selective publication of some stolen emails and other papers taken out of context is mischievous and cannot be considered a genuine attempt to engage with this issue in a responsible way," the university said.
  • A partial review of the hacked material suggests there was an effort at East Anglia, which houses an important center of global climate research, to shut out dissenters and their points of view. In the emails, which date to 1996, researchers in the U.S. and the U.K. repeatedly take issue with climate research at odds with their own findings. In some cases, they discuss ways to rebut what they call "disinformation" using new articles in scientific journals or popular Web sites. The emails include discussions of apparent efforts to make sure that reports from the Intergovernmental Panel on Climate Change, a United Nations group that monitors climate science, include their own views and exclude others. In addition, emails show that climate scientists declined to make their data available to scientists whose views they disagreed with.
  • Phil Jones, the director of the East Anglia climate center, suggested to climate scientist Michael Mann of Penn State University that skeptics' research was unwelcome: We "will keep them out somehow -- even if we have to redefine what the peer-review literature is!"
  • John Christy, a scientist at the University of Alabama in Huntsville who was attacked in the emails for asking that an IPCC report include dissenting viewpoints, said, "It's disconcerting to realize that legislative actions this nation is preparing to take, and which will cost trillions of dollars, are based upon a view of climate that has not been completely scientifically tested."
Weiye Loh

Adventures in Flay-land: Scepticism versus Denialism - Delingpole Part II - 0 views

  • I wrote a piece about James Delingpole's unfortunate appearance on the BBC program Horizon on Monday. In that piece I referred to one of his own Telegraph articles in which he criticizes renowned sceptic Dr Ben Goldacre for betraying the principles of scepticism with regard to the climate change debate. That article turns out to be rather instructive, as it highlights perfectly the difference between real scepticism and the false scepticism commonly described as denialism.
  • It appears that James has tremendous respect for Ben Goldacre, who is a qualified medical doctor and has written a best-selling book about science scepticism called Bad Science and continues to write a popular Guardian science column. Here's what Delingpole has to say about Dr Goldacre: Many of Goldacre’s campaigns I support. I like and admire what he does. But where I don’t respect him one jot is in his views on ‘Climate Change,’ for they jar so very obviously with supposed stance of determined scepticism in the face of establishment lies.
  • Scepticism is not some sort of rebellion against the establishment as Delingpole claims. It is not in itself an ideology. It is merely an approach to evaluating new information. There are varying definitions of scepticism, but Goldacre's variety goes like this: A sceptic does not support or promote any new theory until it is proven to his or her satisfaction that the new theory is the best available. Evidence is examined and accepted or discarded depending on its persuasiveness and reliability. Sceptics like Ben Goldacre have a deep appreciation for the scientific method of testing a hypothesis through experimentation and are generally happy to change their minds when the evidence supports the opposing view. Sceptics are not true believers, but they search for the truth. Far from challenging the established scientific consensus, Goldacre in Bad Science typically defends the scientific consensus against alternative medical views that fall back on untestable positions. In science the consensus is sometimes proven wrong, and while this process is imperfect it eventually results in the old consensus being replaced with a new one.
  • ...11 more annotations...
  • So the question becomes "what is denialism?" Denialism is a mindset that chooses to deny reality in order to avoid an uncomfortable truth. Denialism creates a false sense of truth through the subjective selection of evidence (cherry picking). Unhelpful evidence is rejected and excuses are made, while supporting evidence is accepted uncritically - its meaning and importance exaggerated. It is a common feature of denialism to claim the existence of some sort of powerful conspiracy to suppress the truth. Rejection by the mainstream of some piece of evidence supporting the denialist view, no matter how flawed, is taken as further proof of the supposed conspiracy. In this way the denialist always has a fallback position.
  • Delingpole makes the following claim: Whether Goldacre chooses to ignore it or not, there are many, many hugely talented, intelligent men and women out there – from mining engineer turned Hockey-Stick-breaker Steve McIntyre and economist Ross McKitrick to bloggers Donna LaFramboise and Jo Nova to physicist Richard Lindzen….and I really could go on and on – who have amassed a body of hugely powerful evidence to show that the AGW meme which has spread like a virus around the world these last 20 years is seriously flawed.
  • So he mentions a bunch of people who are intelligent and talented and have amassed evidence to the effect that the consensus of AGW (Anthropogenic Global Warming) is a myth. Should I take his word for it? No. I am a sceptic. I will examine the evidence and the people behind it.
  • McIntyre and McKitrick (MM) claim that global temperatures are not accelerating. The claims have, however, been roundly disproved, as explained here. It is worth noting at this point that neither man is a climate scientist. McKitrick is an economist and McIntyre is a mining industry policy analyst. It is clear from the very detailed rebuttal article that McIntyre and McKitrick have no qualifications to critique the earlier paper and betray fundamental misunderstandings of the methodologies employed in that study.
  • This Wikipedia article explains in better layman's terms how the MM claims are faulty.
  • It is difficult for me to find out much about blogger Donna LaFramboise. As far as I can see she runs her own blog at http://nofrakkingconsensus.wordpress.com and is the founder of another site here http://www.noconsensus.org/. It's not very clear to me what her credentials are.
  • She seems to be a critic of the so-called climate bible, a comprehensive report by the UN Intergovernmental Panel on Climate Change (IPCC).
  • I am familiar with some of the criticisms of this panel. Working Group 2 famously overstated the estimated rate of disappearance of the Himalayan glaciers in 2007 and was forced to admit the error. Working Group 2 is a panel of biologists and sociologists whose job is to evaluate the impact of climate change. These people are not climate scientists. Their report takes for granted the scientific basis of climate change, which has been delivered by Working Group 1 (the climate scientists). The science revealed by Working Group 1 is regarded as sound (of course this is just a conspiracy, right?). At any rate, I don't know why I should pay attention to this blogger. Anyone can write a blog and anyone with money can own a domain. She may be intelligent, but I don't know anything about her and with all the millions of blogs out there I'm not convinced hers is of any special significance.
  • Richard Lindzen. Okay, there's information about this guy. He has a wiki page, which is more than I can say for the previous two. He is an atmospheric physicist and Professor of Meteorology at MIT.
  • According to Wikipedia, it would seem that Lindzen is well respected in his field and represents the 3% of the climate science community who disagree with the 97% consensus.
  • The second to last paragraph of Delingpole's article asks this: If Goldacre really wants to stick his neck out, why doesn’t he try arguing against a rich, powerful, bullying Climate-Change establishment which includes all three British main political parties, the National Academy of Sciences, the Royal Society, the Prince of Wales, the Prime Minister, the President of the USA, the EU, the UN, most schools and universities, the BBC, most of the print media, the Australian Government, the New Zealand Government, CNBC, ABC, the New York Times, Goldman Sachs, Deutsche Bank, most of the rest of the City, the wind farm industry, all the Big Oil companies, any number of rich charitable foundations, the Church of England and so on? I hope Ben won't mind if I take this one for him (first of all, Big Oil companies? Are you serious?) The answer is a question and the question is "Where is your evidence?"
Weiye Loh

Adventures in Flay-land: Dealing with Denialists - Delingpole Part III - 0 views

  • This post is about how one should deal with a denialist of Delingpole's ilk.
  • I saw someone I follow on Twitter retweet an update from another Twitter user called @AGW_IS_A_HOAX, which was this: "NZ #Climate Scientists Admit Faking Temperatures http://bit.ly/fHbdPI RT @admrich #AGW #Climategate #Cop16 #ClimateChange #GlobalWarming".
  • So I click on it. And this is how you deal with a denialist claim. You actually look into it. Here is the text of that article reproduced in full: New Zealand Climate Scientists Admit To Faking Temperatures: The Actual Temps Show Little Warming Over Last 50 Years. Read here and here. Climate "scientists" across the world have been blatantly fabricating temperatures in hopes of convincing the public and politicians that modern global warming is unprecedented and accelerating. The scientists doing the fabrication are usually employed by the government agencies or universities, which thrive and exist on taxpayer research dollars dedicated to global warming research. A classic example of this is the New Zealand climate agency, which is now admitting their scientists produced bogus "warming" temperatures for New Zealand. "NIWA makes the huge admission that New Zealand has experienced hardly any warming during the last half-century. For all their talk about warming, for all their rushed invention of the “Eleven-Station Series” to prove warming, this new series shows that no warming has occurred here since about 1960. Almost all the warming took place from 1940-60, when the IPCC says that the effect of CO2 concentrations was trivial. Indeed, global temperatures were falling during that period.....Almost all of the 34 adjustments made by Dr Jim Salinger to the 7SS have been abandoned, along with his version of the comparative station methodology." A collection of temperature-fabrication charts.
  • ...10 more annotations...
  • I check out the first link, the first "here" where the article says "Read here and here". I can see that there's been some sort of dispute between two New Zealand groups associated with climate change. One is New Zealand’s Climate Science Coalition (NZCSC) and the other is New Zealand’s National Institute of Water and Atmospheric Research (NIWA), but it doesn't tell me a whole lot more than I already got from the other article.
  • I check the second source behind that article. The second article, I now realize, is published on the website of a person called Andrew Montford with whom I've been speaking recently and who is the author of a book titled The Hockey Stick Illusion. I would not label Andrew a denialist. He makes some good points and seems to be a decent guy and genuine sceptic (This is not to suggest all denialists are outwardly dishonest; however, they do tend to be hard to reason with). Again, this article doesn't give me anything that I haven't already seen, except a link to another background source. I go there.
  • From this piece written up on Scoop NZNEWSUK I discover that a coalition group consisting of the NZCSC and the Climate Conversation Group (CCG) has pressured the NIWA into abandoning a set of temperature record adjustments of which the coalition dispute the validity. This was the culmination of a court proceeding in December 2010, last month. In dispute were 34 adjustments that had been made by Dr Jim Salinger to the 7SS temperature series, though I don't know what that is exactly. I also discover that there is a guy called Richard Treadgold, Convenor of the CCG, who is quoted several times. Some of the statements he makes are quoted in the articles I've already seen. They are of a somewhat snide tenor. The CSC object to the methodology used by the NIWA to adjust temperature measurements (one developed as part of a PhD thesis), which they critique in a paper in November 2009 with the title "Are we feeling warmer yet?", and are concerned about how this public agency is spending its money. I'm going to have to dig a bit deeper if I want to find out more. There is a section with links under the heading "Related Stories on Scoop". I click on a few of those.
  • One of these leads me to more. Of particular interest is a fairly neutral article outlining the progress of the court action. I get some more background: For the last ten years, visitors to NIWA’s official website have been greeted by a graph of the “seven-station series” (7SS), under the bold heading “New Zealand Temperature Record”. The graph covers the period from 1853 to the present, and is adorned by a prominent trend-line sloping sharply upwards. Accompanying text informs the world that “New Zealand has experienced a warming trend of approximately 0.9°C over the past 100 years.” The 7SS has been updated and used in every monthly issue of NIWA’s “Climate Digest” since January 1993. Its 0.9°C (sometimes 1.0°C) of warming has appeared in the Australia/NZ Chapter of the IPCC’s 2001 and 2007 Assessment Reports. It has been offered as sworn evidence in countless tribunals and judicial enquiries, and provides the historical base for all of NIWA’s reports to both Central and Local Governments on climate science issues and future projections.
  • Now I can see why this is so important. The temperature record informs the conclusions of the IPCC assessment reports and provides crucial evidence for global warming.
  • Further down we get: NIWA announces that it has now completed a full internal examination of the Salinger adjustments in the 7SS, and has forwarded its “review papers” to its Australian counterpart, the Bureau of Meteorology (BOM) for peer review. And: So the old 7SS has already been repudiated. A replacement NZTR [New Zealand Temperature Record] is being prepared by NIWA – presumably the best effort they are capable of producing. NZCSC is about to receive what it asked for. On the face of it, there’s nothing much left for the Court to adjudicate.
  • NIWA has been forced to withdraw its earlier temperature record and replace it with a new one. Treadgold quite clearly states that "NIWA makes the huge admission that New Zealand has experienced hardly any warming during the last half-century" and that "the new temperature record shows no evidence of a connection with global warming." Earlier in the article he also stresses the role of the CSC in achieving these revisions, saying "after 12 months of futile attempts to persuade the public, misleading answers to questions in the Parliament from ACT and reluctant but gradual capitulation from NIWA, their relentless defence of the old temperature series has simply evaporated. They’ve finally given in, but without our efforts the faulty graph would still be there."
  • All this leads me to believe that if I look at the website of NIWA I will see a retraction of the earlier position and a new position that New Zealand has experienced no unusual warming. This is easy enough to check. I go there. Actually, I search for it to find the exact page. Here is the 7SS page on the NIWA site. Am I surprised that NIWA have retracted nothing and that in fact their revised graph shows similar results? Not really. However, I am somewhat surprised by this page on the Climate Conversation Group website which claims that the 7SS temperature record is as dead as the parrot in the Monty Python sketch. It says "On the eve of Christmas, when nobody was looking, NIWA declared that New Zealand had a new official temperature record (the NZT7) and whipped the 7SS off its website." However, I've already seen that this is not true. Perhaps there was once a 7SS graph and information about the temperature record on the site's homepage that can no longer be seen. I don't know. I can only speculate. I know that there is a section on the NIWA site about the 7SS temperature record that contains a number of graphs and figures and discusses recent revisions. It has been updated as recently as December 2010, last month. The NIWA page talks all about the 7SS series and has a heading that reads "Our new analysis confirms the warming trend".
  • The CCG page claims that the new NZT7 is not in fact a revision but rather a replacement. Although it results in a similar curve, the adjustments that were made are very different. Frankly I can't see how that matters at the end of the day. Now, I don't really know whether I can believe that the NIWA analysis is true, but what I am in no doubt of whatsoever is that the statements made by Richard Treadgold that were quoted in so many places are at best misleading. The NIWA has not changed its position in the slightest. The assertion that the NIWA have admitted that New Zealand has not warmed much since 1960 is a politician's careful argument. Both analyses showed the same result. This is a fact that NIWA have not disputed; however, they still maintain a connection to global warming. A document explaining the revisions talks about why the warming has slowed after 1960: The unusually steep warming in the 1940-1960 period is paralleled by an unusually large increase in northerly flow* during this same period. On a longer timeframe, there has been a trend towards less northerly flow (more southerly) since about 1960. However, New Zealand temperatures have continued to increase over this time, albeit at a reduced rate compared with earlier in the 20th century. This is consistent with a warming of the whole region of the southwest Pacific within which New Zealand is situated.
  • Denialists have taken Treadgold's misleading mantra and spread it far and wide including on Twitter and fringe websites, but it is faulty as I've just demonstrated. Why do people do this? Perhaps they are hoping that others won't check the sources. Most people don't. I hope this serves as a lesson for why you always should.
Weiye Loh

A lesson in citing irrelevant statistics | The Online Citizen - 0 views

  • Statistics that are quoted, by themselves, may be quite meaningless, unless they are on a comparative basis. To illustrate this, if we want to say that Group A (poorer kids) is not significantly worse off than Group B (richer kids), then it may be pointless to just cite the statistics for Group A, without Group B’s.
  • “How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE” “One in six scores in the top one-third at PSLE” What we need to know for comparative purposes, is the percentage of richer kids who scores in the top two-thirds too.
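  • One baseline is in fact available by construction: in any cohort, exactly two-thirds of pupils score in the top two-thirds and one-third score in the top one-third, so the population shares can stand in for the missing figures. A minimal check (the poorer-kids shares come from the excerpt above; the parity framing is mine):

```python
# shares of poorer kids reaching each band, from the quoted statistics
poor_top_two_thirds = 1 / 2
poor_top_third = 1 / 6

# shares of the whole cohort in each band, true by definition
base_top_two_thirds = 2 / 3
base_top_third = 1 / 3

print(f"top two-thirds: {poor_top_two_thirds / base_top_two_thirds:.2f} of parity")  # 0.75
print(f"top one-third:  {poor_top_third / base_top_third:.2f} of parity")            # 0.50
```

On both measures the poorer group sits well below parity, which is the comparison the excerpt says is missing.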
  • “… one in five scores in the top 30% at O and A levels… One in five goes to university and polys” What’s the data for richer kids? Since the proportion of the entire population going to university and polys has increased substantially, this clearly shows that poorer kids are worse off!
  • ...4 more annotations...
  • The Minister was quoted as saying: “My parents had six children. My first home as a young boy was a rental flat in Zion Road. We shared it as tenants with other families” Citing individuals who made it may be of no “statistical” relevance; what we need are statistics on the proportion of poorer kids versus richer kids who get scholarships, relative to their representation in the population.
  • “More spent on primary and secondary/JC schools. This means having significantly more and better teachers, and having more programmes to meet children’s specific needs” What has spending more money, which is what most countries do, got to do with the argument about whether poorer kids are disadvantaged?
  • Straits Times journalist, Li XueYing put the crux of the debate in the right perspective: “Dr Ng had noted that ensuring social mobility “cannot mean equal outcomes, because students are inherently different”. But can it be that those from low-income families are consistently “inherently different” to such an extent?”
  • Relevant statistics: Perhaps the most damning statistic showing that poorer kids are disadvantaged is the chart from the Ministry of Education (provided by the Straits Times), which showed that the percentage of Primary 1 pupils who lived in 1 to 3-room HDB flats and subsequently progressed to University and/or Polytechnic has been declining since around 1986.