
Home/ New Media Ethics 2009 course/ Group items tagged Interpretation


Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients who had the syndrome with a group of 650 (matched for sex and age) who didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
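Fisher's recipe can be traced end to end in a few lines. This is a sketch with invented crop yields (not data from the article), using a permutation test as a computational stand-in for his significance calculation: under the null hypothesis the group labels are interchangeable, so the P value is the fraction of random relabelings that produce a difference at least as large as the observed one.

```python
import random

random.seed(0)

# Hypothetical crop yields (tons/ha) for fertilized and control plots;
# the numbers are illustrative, not from any real trial.
fertilized = [5.1, 5.4, 5.0, 5.6, 5.3, 5.5]
control    = [4.8, 4.9, 5.0, 4.7, 5.1, 4.9]

observed = sum(fertilized) / len(fertilized) - sum(control) / len(control)

# Permutation test: if "no effect" is true, the labels are arbitrary, so
# shuffle them and count how often a difference at least as large as the
# observed one arises purely by chance.
pooled = fertilized + control
n = len(fertilized)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}, P value: {p_value:.4f}")
```

With these made-up numbers the P value comes out well under .05, so Fisher's rule would declare the fertilizer effect significant; as the article goes on to argue, that still does not say how likely the effect is to be real.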
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • Experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
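The dog analogy can be made quantitative. In this sketch the barking and hunger rates are all invented for illustration; the point is that P(bark | well-fed) = 5 percent by itself says nothing about P(well-fed | bark), which also depends on how often the dog is hungry.

```python
# Illustrative rates (assumed, not from the article): the dog barks 5%
# of the time when well-fed, 60% of the time when hungry, and is hungry
# on 10% of occasions.
p_bark_given_fed = 0.05
p_bark_given_hungry = 0.60
p_hungry = 0.10

# The transposed conditional confuses P(bark | fed) with P(fed | bark).
# Bayes' rule computes the probability we actually care about:
p_bark = (p_bark_given_hungry * p_hungry
          + p_bark_given_fed * (1 - p_hungry))
p_hungry_given_bark = p_bark_given_hungry * p_hungry / p_bark
print(f"P(hungry | bark) = {p_hungry_given_bark:.2f}")
```

Under these assumed numbers a bark leaves it only slightly better than a coin flip that the dog is hungry, even though a well-fed dog barks just 5 percent of the time.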
    • Weiye Loh
       
      Does the problem, then, lie not in statistics, but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretations? 
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
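That distinction is easy to reproduce numerically. A sketch with invented cure rates: with half a million patients per arm, a 0.2-percentage-point improvement passes the .05 test comfortably, yet it buys only about two extra cures per thousand people treated.

```python
import math

# Hypothetical trial: two cure rates differing by 0.2 percentage points,
# each measured on a very large sample.
n = 500_000
p_old, p_new = 0.100, 0.102

# Standard two-proportion z-test (normal approximation).
p_pool = (p_old + p_new) / 2
se = math.sqrt(p_pool * (1 - p_pool) * 2 / n)
z = (p_new - p_old) / se
p_value = math.erfc(z / math.sqrt(2))  # two-sided tail probability

print(f"z = {z:.2f}, P = {p_value:.4f}  (statistically significant)")
print(f"extra cures per 1000 treated: {(p_new - p_old) * 1000:.0f}")
```

The P value is far below .05, but the practical payoff is roughly two patients per thousand, which is the gap between statistical and clinical significance the article describes.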
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
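The multiplicity arithmetic is simple to show. Assuming independent tests of hypotheses that are all truly null, each run at the .05 level, the chance of at least one spurious hit grows as 1 - (1 - 0.05)**k:

```python
# Family-wise error rate: probability of at least one false positive
# among k independent tests of true null hypotheses at the 0.05 level.
alpha = 0.05
for k in (1, 10, 20, 100):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:3d} tests -> {fwer:.0%} chance of at least one fluke 'discovery'")
```

By 20 simultaneous tests the chance of a spurious "significant" result is already around 64 percent, which is why corrections such as the false discovery rate mentioned below exist.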
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
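A minimal sketch of such an interval (measurements invented): the interval is mean ± 1.96 standard errors, the same normal-approximation cutoff that defines the .05 P value, which is why the two share the same weaknesses.

```python
import math

# Hypothetical sample of eight measurements.
data = [4.8, 5.2, 5.0, 5.4, 4.9, 5.1, 5.3, 5.0]
n = len(data)
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance
se = math.sqrt(var / n)                              # standard error

# 95% confidence interval via the 1.96 normal cutoff.
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```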
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
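The penny analogy can be simulated directly (illustrative, fixed-seed run): ten heads in a row has probability (1/2)**10, about 1 in 1,024, so among ten thousand ten-flip trials a handful of perfectly lopsided ones are expected.

```python
import random

random.seed(42)

# Count ten-flip "trials" in which every flip lands heads; the expected
# count is 10_000 / 1024, roughly ten such runs.
trials = 10_000
all_heads = sum(
    all(random.random() < 0.5 for _ in range(10))
    for _ in range(trials)
)
print(f"perfectly one-sided 10-flip trials: {all_heads} of {trials}")
```

The same logic applies to randomized trials: with thousands in progress, a few badly unbalanced allocations are statistically inevitable.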
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • It basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
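A worked version of that diagnostic point, with assumed test characteristics (99 percent sensitivity, 95 percent specificity, 1 percent prevalence; none of these figures are from the article):

```python
# Assumed illustrative figures for a diagnostic test.
sensitivity = 0.99   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)
prevalence  = 0.01   # P(disease) in the tested population

# Bayes' rule: fold the prevalence (the prior) into the test result.
p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_pos = sensitivity * prevalence / p_pos
print(f"P(disease | positive test) = {p_disease_given_pos:.1%}")
```

Under these assumptions five of every six positives are false, because true cases are so rare; ignoring the prevalence, as a naive reading of the test's accuracy would, misses this entirely.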
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
Weiye Loh

The Ashtray: The Ultimatum (Part 1) - NYTimes.com - 0 views

  • “Under no circumstances are you to go to those lectures. Do you hear me?” Kuhn, the head of the Program in the History and Philosophy of Science at Princeton where I was a graduate student, had issued an ultimatum. It concerned the philosopher Saul Kripke’s lectures — later to be called “Naming and Necessity” — which he had originally given at Princeton in 1970 and planned to give again in the Fall, 1972.
  • Whiggishness — in history of science, the tendency to evaluate and interpret past scientific theories not on their own terms, but in the context of current knowledge. The term comes from Herbert Butterfield’s “The Whig Interpretation of History,” written when Butterfield, a future Regius professor of history at Cambridge, was only 31 years old. Butterfield had complained about Whiggishness, describing it as “…the study of the past with direct and perpetual reference to the present” – the tendency to see all history as progressive, and in an extreme form, as an inexorable march to greater liberty and enlightenment. [3] For Butterfield, on the other hand, “…real historical understanding” can be achieved only by “attempting to see life with the eyes of another century than our own.” [4][5].
  • Kuhn had attacked my Whiggish use of the term “displacement current.” [6] I had failed, in his view, to put myself in the mindset of Maxwell’s first attempts at creating a theory of electricity and magnetism. I felt that Kuhn had misinterpreted my paper, and that he — not me — had provided a Whiggish interpretation of Maxwell. I said, “You refuse to look through my telescope.” And he said, “It’s not a telescope, Errol. It’s a kaleidoscope.” (In this respect, he was probably right.) [7].
  • I asked him, “If paradigms are really incommensurable, how is history of science possible? Wouldn’t we be merely interpreting the past in the light of the present? Wouldn’t the past be inaccessible to us? Wouldn’t it be ‘incommensurable?’ ” [8] He started moaning. He put his head in his hands and was muttering, “He’s trying to kill me. He’s trying to kill me.” And then I added, “…except for someone who imagines himself to be God.” It was at this point that Kuhn threw the ashtray at me.
  • I call Kuhn’s reply “The Ashtray Argument.” If someone says something you don’t like, you throw something at him. Preferably something large, heavy, and with sharp edges. Perhaps we were engaged in a debate on the nature of language, meaning and truth. But maybe we just wanted to kill each other.
  • That's the problem with relativism: Who's to say who's right and who's wrong? Somehow I'm not surprised to hear Kuhn was an ashtray-hurler. In the end, what other argument could he make?
  • For us to have a conversation and come to an agreement about the meaning of some word without having to refer to some outside authority like a dictionary, we would of necessity have to be satisfied that our agreement was genuine and not just a polite acknowledgement of each other's right to their opinion, can you agree with that? If so, then let's see if we can agree on the meaning of the word 'know' because that may be the crux of the matter. When I use the word 'know' I mean more than the capacity to apprehend some aspect of the world through language or some other representational symbolism. Included in the word 'know' is the direct sensorial perception of some aspect of the world. For example, I sense the floor that my feet are now resting upon. I 'know' the floor is really there, I can sense it. Perhaps I don't 'know' what the floor is made of, who put it there, and other incidental facts one could know through the usual symbolism such as language as in a story someone tells me. Nevertheless, the reality I need to 'know' is that the floor, or whatever you may wish to call the solid - relative to my body - flat and level surface supported by more structure than the earth, is really there and reliably capable of supporting me. This is true and useful knowledge that goes directly from the floor itself to my knowing about it - via sensation - that has nothing to do with my interpretive system.
  • Now I am interested in 'knowing' my feet in the same way that my feet and the whole body they are connected to 'know' the floor. I sense my feet sensing the floor. My feet are as real as the floor and I know they are there, sensing the floor, because I can sense them. Furthermore, now I 'know' that it is 'I' sensing my feet, sensing the floor. Do you see where I am going with this line of thought? I am including in the word 'know' more meaning than it is commonly given by everyday language. Perhaps it sounds as if I want to expand on the Cartesian formula of cogito ergo sum, and in truth I prefer to say I sense therefore I am. It is through my sensations of the world, first and foremost, that my awareness, such as it is, is actively engaged with reality. Now, any healthy normal animal senses the world, but we can't 'know' if they experience reality as we do since we can't have a conversation with them to arrive at agreement. But we humans can have this conversation and possibly agree that we can 'know' the world through sensation. We can even know what is 'I' through sensation. In fact, there is no other way to know 'I' except through sensation. Thought is symbolic representation, not direct sensing, so even though the thoughtful modality of regarding the world may be a far more reliable modality than sensation in predicting what might happen next, its very capacity for such accurate prediction is its biggest weakness, which is its capacity for error.
  • Sensation cannot be 'wrong' unless it is used to predict outcomes. Thought can be wrong for both predicting outcomes and for 'knowing' reality. Sensation alone can 'know' reality even though it is relatively unreliable, useless even, for making predictions.
  • If we prioritize our interests by placing predictability over pure knowing through sensation, then of course we will not value the 'knowledge' to be gained through sensation. But if we can switch the priorities - out of sheer curiosity perhaps - then we can enter a realm of knowledge through sensation that is unbelievably spectacular. Our bodies are 'made of' reality, and by methodically exercising our nascent capacity for self sensing, we can connect our knowing 'I' to reality directly. We will not be able to 'know' what it is that we are experiencing in the way we might wish, which is to be able to predict what will happen next or to represent to ourselves symbolically what we might experience when we turn our attention to that sensation. But we can arrive at a depth and breadth of 'knowing' that is utterly unprecedented in our lives by operating that modality.
  • One of the impressions that comes from a sustained practice of self sensing is a clearer feeling for what "I" is and why we have a word for that self-referential phenomenon, seemingly located somewhere behind our eyes and between our ears. The thing we call "I" or "me," depending on the context, turns out to be a moving point, a convergence vector for a variety of images, feelings and sensations. It is a reference point into which certain impressions flow and out of which certain impulses to act diverge and which may or may not animate certain muscle groups into action. Following this tricky exercise in attention and sensation, we can quickly see for ourselves that attention is more like a focused beam and awareness is more like a diffuse cloud, but both are composed of energy, and like all energy they vibrate, they oscillate with a certain frequency. That's it for now.
  • I loved the writer's efforts to find a fixed definition of “Incommensurability;” there was of course never a concrete meaning behind the word. Smoke and mirrors.
Weiye Loh

Online "Toon porn" - 20 views

I must correct that never in my arguments did I mention that the interpreter is the problem. I was merely answering YZ's question if cartoon characters can be deemed as representative of human be...

online cartoon anime pornography ethics

Weiye Loh

CancerGuide: The Median Isn't the Message - 0 views

  • Statistics recognizes different measures of an "average," or central tendency. The mean is our usual concept of an overall average - add up the items and divide them by the number of sharers.
  • The median, a different measure of central tendency, is the half-way point.
  • A politician in power might say with pride, "The mean income of our citizens is $15,000 per year." The leader of the opposition might retort, "But half our citizens make less than $10,000 per year." Both are right, but neither cites a statistic with impassive objectivity. The first invokes a mean, the second a median. (Means are higher than medians in such cases because one millionaire may outweigh hundreds of poor people in setting a mean; but he can balance only one mendicant in calculating a median).
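Gould's politician example in miniature, with made-up incomes: one large value drags the mean far above the median.

```python
from statistics import mean, median

# Nine modest incomes and one millionaire (hypothetical figures, chosen
# only to show the millionaire's pull on the mean).
incomes = [8_000, 9_000, 9_500, 10_000, 10_000,
           11_000, 12_000, 13_000, 14_000, 1_000_000]
print(f"mean:   ${mean(incomes):,.0f}")
print(f"median: ${median(incomes):,.0f}")
```

The single millionaire lifts the mean above $100,000 while the median stays near $10,000, so both politicians in Gould's example can quote a true "average."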
  • The larger issue that creates a common distrust or contempt for statistics is more troubling. Many people make an unfortunate and invalid separation between heart and mind, or feeling and intellect. In some contemporary traditions, abetted by attitudes stereotypically centered on Southern California, feelings are exalted as more "real" and the only proper basis for action - if it feels good, do it - while intellect gets short shrift as a hang-up of outmoded elitism. Statistics, in this absurd dichotomy, often become the symbol of the enemy. As Hilaire Belloc wrote, "Statistics are the triumph of the quantitative method, and the quantitative method is the victory of sterility and death."
  • This is a personal story of statistics, properly interpreted, as profoundly nurturant and life-giving. It declares holy war on the downgrading of intellect by telling a small story about the utility of dry, academic knowledge about science. Heart and head are focal points of one body, one personality.
  • We still carry the historical baggage of a Platonic heritage that seeks sharp essences and definite boundaries. (Thus we hope to find an unambiguous "beginning of life" or "definition of death," although nature often comes to us as irreducible continua.) This Platonic heritage, with its emphasis on clear distinctions and separated immutable entities, leads us to view statistical measures of central tendency wrongly, indeed opposite to the appropriate interpretation in our actual world of variation, shadings, and continua. In short, we view means and medians as the hard "realities," and the variation that permits their calculation as a set of transient and imperfect measurements of this hidden essence. If the median is the reality and variation around the median just a device for its calculation, then "I will probably be dead in eight months" may pass as a reasonable interpretation.
  • But all evolutionary biologists know that variation itself is nature's only irreducible essence. Variation is the hard reality, not a set of imperfect measures for a central tendency. Means and medians are the abstractions. Therefore, I looked at the mesothelioma statistics quite differently - and not only because I am an optimist who tends to see the doughnut instead of the hole, but primarily because I know that variation itself is the reality. I had to place myself amidst the variation. When I learned about the eight-month median, my first intellectual reaction was: fine, half the people will live longer; now what are my chances of being in that half. I read for a furious and nervous hour and concluded, with relief: damned good. I possessed every one of the characteristics conferring a probability of longer life: I was young; my disease had been recognized in a relatively early stage; I would receive the nation's best medical treatment; I had the world to live for; I knew how to read the data properly and not despair.
  • Another technical point then added even more solace. I immediately recognized that the distribution of variation about the eight-month median would almost surely be what statisticians call "right skewed." (In a symmetrical distribution, the profile of variation to the left of the central tendency is a mirror image of variation to the right. In skewed distributions, variation to one side of the central tendency is more stretched out - left skewed if extended to the left, right skewed if stretched out to the right.) The distribution of variation had to be right skewed, I reasoned. After all, the left of the distribution contains an irrevocable lower boundary of zero (since mesothelioma can only be identified at death or before). Thus, there isn't much room for the distribution's lower (or left) half - it must be scrunched up between zero and eight months. But the upper (or right) half can extend out for years and years, even if nobody ultimately survives. The distribution must be right skewed, and I needed to know how long the extended tail ran - for I had already concluded that my favorable profile made me a good candidate for that part of the curve.
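Gould's reasoning can be illustrated with a simulated right-skewed survival distribution; here a lognormal with median eight months, whose shape parameter is an arbitrary illustrative choice rather than anything fitted to mesothelioma data.

```python
import math
import random

random.seed(0)

# Lognormal survival times: bounded below by zero, long tail to the
# right. Median is exp(mu) = 8 months; sigma = 0.9 is assumed.
mu, sigma = math.log(8), 0.9
times = sorted(random.lognormvariate(mu, sigma) for _ in range(100_000))

median_t = times[len(times) // 2]
mean_t = sum(times) / len(times)
beyond_2yr = sum(t > 24 for t in times) / len(times)
print(f"median {median_t:.1f} mo, mean {mean_t:.1f} mo, "
      f"{beyond_2yr:.1%} of simulated cases beyond 24 months")
```

Under these assumed parameters the sample median sits near eight months, yet the mean is substantially higher and a meaningful fraction of simulated patients survive past two years: exactly the long right tail Gould counted on.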
  • The distribution was, indeed, strongly right skewed, with a long tail (however small) that extended for several years above the eight-month median. I saw no reason why I shouldn't be in that small tail, and I breathed a very long sigh of relief. My technical knowledge had helped. I had read the graph correctly. I had asked the right question and found the answers. I had obtained, in all probability, the most precious of all possible gifts in the circumstances - substantial time.
  • One final point about statistical distributions. They apply only to a prescribed set of circumstances - in this case to survival with mesothelioma under conventional modes of treatment. If circumstances change, the distribution may alter. I was placed on an experimental protocol of treatment and, if fortune holds, will be in the first cohort of a new distribution with high median and a right tail extending to death by natural causes at advanced old age.
    The Median Isn't the Message by Stephen Jay Gould
Weiye Loh

Leong Sze Hian stands corrected? | The Online Citizen - 0 views

  • In your article, you make the argument that “Straits Times Forum Editor, was merely amending his (my) letter to cite the correct statistics. For example, the Education Minister said ‘How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE’ - But, Mr Samuel Wee wrote ‘His statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination’.” Kind sir, the statistics state that 1 in 2 are in the top 66.6% (which, incidentally, includes the top fifth of the bottom 50%!). Does it not stand to reason, then, that if 50% are in the top 66.6%, the remaining 50% are in the bottom 33.3%, as I stated in my letter?
  • Also, perhaps you were not aware of the existence of this resource, but here is a graph from the Straits Times illustrating the fact that only 10% of children from one-to-three room flats make it to university–which is to say, 90% of them don’t. http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf I look forward to your reply, Mr Leong. Thank you for taking the time to read this message.
  • we should, wherever possible, try to agree to disagree, as it is healthy to have and to encourage different viewpoints.
    • Weiye Loh
       
      Does that mean that every viewpoint can and should be accepted as correct to encourage differences? 
  • ...4 more annotations...
  • If I say I think it is fair in Singapore, because half of the bottom one-third of the people make it to the top two-thirds, it does not mean that someone can quote me and say that I said what I said because half the bottom one-third of people did not make it. I think it is alright to say that I do not agree entirely with what was said, because does it also mean on the flip side that half of the bottom one-third of the people did not make it? This is what I mean by quoting one out of context, by using statistics that I did not say, and implying that I did, or by innuendo.
  • Moreover, depending on the methodology, definition, sampling, etc, half of the bottom one-third of the people making it, does not necessarily mean that half did not make it, because some may not be in the population because of various reasons, like emigration, not turning up, transfer, whether adjustments are made for the mobility of people up or down the social strata over time, etc. If I did not use a particular statistic to state my case, for example, I don’t think it is appropriate to quote me and say that you agree with me by citing statistics from a third party source, like the MOE chart in the Straits Times article, instead of quoting the statistics that I said.
  • I cannot find anything in any of the media reports to say with certainty that the Minister backed up his remarks with direct reference to the MOE chart. There is also nothing in the narrative to say that only 10 per cent of children from one-to-three room flats make it to university – which is to say, 90 per cent of them don’t. The ’90 per cent’ cannot be attributed to what the minister said, as at best it is the writer’s interpretation of the MOE chart.
  • Interesting exchange of letters. Samuel’s interpretation of the statistics provided by Ng Eng Hen and ST is correct. There is little doubt about it. While I can see where Leong Sze Hian is coming from, I don’t totally agree with him. Specifically, Samuel’s first statement (only ~10% of students living in 1-3 room flat make it to university) is directed at ST’s report that education is a good social leveller but not at Ng. It is therefore a valid point to make.
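The disputed arithmetic, and Leong's attrition caveat, can both be made concrete. A minimal sketch with an invented cohort of 300 bottom-third-SES pupils (the numbers are illustrative, not MOE data):

```python
# Invented cohort: 300 pupils from the bottom socio-economic third.
cohort = 300
top_two_thirds = cohort // 2  # "1 in 2 scores in the top two-thirds"

# Wee's inference: everyone not in the top two-thirds is in the bottom third.
bottom_third = cohort - top_two_thirds
assert bottom_third / cohort == 0.5  # the complement argument is valid...

# ...but only if every pupil is observed. Leong's caveat: with attrition
# (emigration, absence, transfers), the complement no longer follows.
observed = cohort - 30            # say 30 pupils missing from the results
unaccounted = observed - top_two_thirds
print(unaccounted / cohort)       # 0.4, not 0.5
```

The complement step itself is plain arithmetic; the dispute is really about whether the denominator covers the whole cohort, which is what the second half of the sketch illustrates.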
Weiye Loh

Effect of alcohol on risk of coronary heart diseas... [Vasc Health Risk Manag. 2006] - ... - 0 views

  • Studies of the effects of alcohol consumption on health outcomes should recognise the methodological biases they are likely to face, and design, analyse and interpret their studies accordingly. While regular moderate alcohol consumption during middle-age probably does reduce vascular risk, care should be taken when making general recommendations about safe levels of alcohol intake. In particular, it is likely that any promotion of alcohol for health reasons would do substantially more harm than good.
  • The consistency in the vascular benefit associated with moderate drinking (compared with non-drinking) observed across different studies, together with the existence of credible biological pathways, strongly suggests that at least some of this benefit is real.
  • However, because of biases introduced by: choice of reference categories; reverse causality bias; variations in alcohol intake over time; and confounding, some of it is likely to be an artefact. For heavy drinking, different study biases have the potential to act in opposing directions, and as such, the true effects of heavy drinking on vascular risk are uncertain. However, because of the known harmful effects of heavy drinking on non-vascular mortality, the problem is an academic one.
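One of the biases named above, reverse causality (people in poor health stop drinking, making abstainers look worse), can be sketched in a few lines. A toy simulation in which alcohol has zero true effect on mortality; all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# True health status; alcohol has NO causal effect in this toy world.
frail = rng.random(n) < 0.10  # 10% in poor health

# Reverse causality: the frail are far more likely to abstain.
abstains = np.where(frail, rng.random(n) < 0.8, rng.random(n) < 0.3)

# Five-year mortality depends only on frailty, never on drinking.
dies = np.where(frail, rng.random(n) < 0.30, rng.random(n) < 0.02)

mort_drinkers = dies[~abstains].mean()
mort_abstainers = dies[abstains].mean()
print(f"drinkers {mort_drinkers:.3f} vs abstainers {mort_abstainers:.3f}")
```

Drinkers come out looking markedly safer even though drinking does nothing here; this is the kind of artefact the authors warn must be designed and analysed out before recommending alcohol for health reasons.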
Weiye Loh

From Abstract to News Release to Story, a Tilt to the 'Front-Page Thought' - NYTimes.com - 0 views

  •  
    "In the post on research on extreme rainfall and warming, Gavin Schmidt, the NASA climate scientist and Real Climate blogger, described the misinterpretation of some paper abstracts as mainly reflecting a cultural divide: "Here we show" statements are required by Nature and Science to clearly lay out the point of the paper. If you don't include it, they will write it in. The caveats/uncertainties/issues all come later. I think the confusion is more cultural than anything. No one at Nature or Science or any of the authors in any subject think that uncertainties are zero, but they require a clear statement of the point of the paper within their house style. I think that conclusion misses the reality that, particularly in the world of online communication of science, abstracts are not merely for colleagues who know the shorthand, but have different audiences who'll have different ways of interpreting phrases such as "here we show.""
Weiye Loh

Hacktivists as Gadflies - NYTimes.com - 0 views

  •  
    "Consider the case of Andrew Auernheimer, better known as "Weev." When Weev discovered in 2010 that AT&T had left private information about its customers vulnerable on the Internet, he and a colleague wrote a script to access it. Technically, he did not "hack" anything; he merely executed a simple version of what Google Web crawlers do every second of every day - sequentially walk through public URLs and extract the content. When he got the information (the e-mail addresses of 114,000 iPad users, including Mayor Michael Bloomberg and Rahm Emanuel, then the White House chief of staff), Weev did not try to profit from it; he notified the blog Gawker of the security hole. For this service Weev might have asked for free dinners for life, but instead he was recently sentenced to 41 months in prison and ordered to pay a fine of more than $73,000 in damages to AT&T to cover the cost of notifying its customers of its own security failure. When the federal judge Susan Wigenton sentenced Weev on March 18, she described him with prose that could have been lifted from the prosecutor Meletus in Plato's "Apology." "You consider yourself a hero of sorts," she said, and noted that Weev's "special skills" in computer coding called for a more draconian sentence. I was reminded of a line from an essay written in 1986 by a hacker called the Mentor: "My crime is that of outsmarting you, something that you will never forgive me for." When offered the chance to speak, Weev, like Socrates, did not back down: "I don't come here today to ask for forgiveness. I'm here to tell this court, if it has any foresight at all, that it should be thinking about what it can do to make amends to me for the harm and the violence that has been inflicted upon my life." He then went on to heap scorn upon the law being used to put him away - the Computer Fraud and Abuse Act, the same law that prosecutors used to go after the 26-year-old Internet activist Aaron Swartz
Jody Poh

Subtitles, Lip Synching and Covers on YouTube - 13 views

I think that companies' concern over this issue, due to the loss of potential income, constitutes egoism. They mainly want to defend their interests without considering the beneficial impact of the ...

copyright youtube parody

Weiye Loh

Understanding the universe: Order of creation | The Economist - 0 views

  • In their “The Grand Design”, the authors discuss “M-theory”, a composite of various versions of cosmological “string” theory that was developed in the mid-1990s, and announce that, if it is confirmed by observation, “we will have found the grand design.” Yet this is another tease. Despite much talk of the universe appearing to be “fine-tuned” for human existence, the authors do not in fact think that it was in any sense designed. And once more we are told that we are on the brink of understanding everything.
  • The authors rather fancy themselves as philosophers, though they would presumably balk at the description, since they confidently assert on their first page that “philosophy is dead.” It is, allegedly, now the exclusive right of scientists to answer the three fundamental why-questions with which the authors purport to deal in their book. Why is there something rather than nothing? Why do we exist? And why this particular set of laws and not some other?
  • It is hard to evaluate their case against recent philosophy, because the only subsequent mention of it, after the announcement of its death, is, rather oddly, an approving reference to a philosopher’s analysis of the concept of a law of nature, which, they say, “is a more subtle question than one may at first think.” There are actually rather a lot of questions that are more subtle than the authors think. It soon becomes evident that Professor Hawking and Mr Mlodinow regard a philosophical problem as something you knock off over a quick cup of tea after you have run out of Sudoku puzzles.
  • ...2 more annotations...
  • The main novelty in “The Grand Design” is the authors’ application of a way of interpreting quantum mechanics, derived from the ideas of the late Richard Feynman, to the universe as a whole. According to this way of thinking, “the universe does not have just a single existence or history, but rather every possible version of the universe exists simultaneously.” The authors also assert that the world’s past did not unfold of its own accord, but that “we create history by our observation, rather than history creating us.” They say that these surprising ideas have passed every experimental test to which they have been put, but that is misleading in a way that is unfortunately typical of the authors. It is the bare bones of quantum mechanics that have proved to be consistent with what is presently known of the subatomic world. The authors’ interpretations and extrapolations of it have not been subjected to any decisive tests, and it is not clear that they ever could be.
  • Once upon a time it was the province of philosophy to propose ambitious and outlandish theories in advance of any concrete evidence for them. Perhaps science, as Professor Hawking and Mr Mlodinow practice it in their airier moments, has indeed changed places with philosophy, though probably not quite in the way that they think.
  •  
    Order of creation: Even Stephen Hawking doesn't quite manage to explain why we are here
Weiye Loh

RealClimate: Feedback on Cloud Feedback - 0 views

  • I have a paper in this week’s issue of Science on the cloud feedback
  • clouds are important regulators of the amount of energy in and out of the climate system. Clouds both reflect sunlight back to space and trap infrared radiation and keep it from escaping to space. Changes in clouds can therefore have profound impacts on our climate.
  • A positive cloud feedback loop posits a scenario whereby an initial warming of the planet, caused, for example, by increases in greenhouse gases, causes clouds to trap more energy and lead to further warming. Such a process amplifies the direct heating by greenhouse gases. Models have long predicted this, but testing the models has proved difficult.
  • ...8 more annotations...
  • Making the issue even more contentious, some of the more credible skeptics out there (e.g., Lindzen, Spencer) have been arguing that clouds behave quite differently from what models predict. In fact, they argue, clouds will stabilize the climate and prevent climate change from occurring (i.e., clouds will provide a negative feedback).
  • In my new paper, I calculate the energy trapped by clouds and observe how it varies as the climate warms and cools during El Nino-Southern Oscillation (ENSO) cycles. I find that, as the climate warms, clouds trap an additional 0.54±0.74W/m2 for every degree of warming. Thus, the cloud feedback is likely positive, but I cannot rule out a slight negative feedback.
  • while a slight negative feedback cannot be ruled out, the data do not support a negative feedback large enough to substantially cancel the well-established positive feedbacks, such as water vapor, as Lindzen and Spencer would argue.
  • I have also compared the results to climate models. Taken as a group, the models substantially reproduce the observations. This increases my confidence that the models are accurately simulating the variations of clouds with climate change.
  • Dr. Spencer is arguing that clouds are causing ENSO cycles, so the direction of causality in my analysis is incorrect and my conclusions are in error. After reading this, I initiated a cordial and useful exchange of e-mails with Dr. Spencer (you can read the full e-mail exchange here). We ultimately agreed that the fundamental disagreement between us is over what causes ENSO. Short paraphrase: Spencer: ENSO is caused by clouds. You cannot infer the response of clouds to surface temperature in such a situation. Dessler: ENSO is not caused by clouds, but is driven by internal dynamics of the ocean-atmosphere system. Clouds may amplify the warming, and that’s the cloud feedback I’m trying to measure.
  • My position is the mainstream one, backed up by decades of research. This mainstream theory is quite successful at simulating almost all of the aspects of ENSO. Dr. Spencer, on the other hand, is as far out of the mainstream when it comes to ENSO as he is when it comes to climate change. He is advancing here a completely new and untested theory of ENSO — based on just one figure in one of his papers (and, as I told him in one of our e-mails, there are other interpretations of those data that do not agree with his interpretation). Thus, the burden of proof is on Dr. Spencer to show that his theory of causality during ENSO is correct. He is, at present, far from meeting that burden. And until Dr. Spencer satisfies this burden, I don’t think anyone can take his criticisms seriously.
  • It’s also worth noting that the picture I’m painting of our disagreement (and backed up by the e-mail exchange linked above) is quite different from the picture provided by Dr. Spencer on his blog. His blog is full of conspiracies and purposeful suppression of the truth. In particular, he accuses me of ignoring his work. But as you can see, I have not ignored it — I have dismissed it because I think it has no merit. That’s quite different. I would also like to respond to his accusation that the timing of the paper is somehow connected to the IPCC’s meeting in Cancun. I can assure everyone that no one pressured me in any aspect of the publication of this paper. As Dr. Spencer knows well, authors have no control over when a paper ultimately gets published. And as far as my interest in influencing the policy debate goes, I’ll just say that I’m in College Station this week, while Dr. Spencer is in Cancun. In fact, Dr. Spencer had a press conference in Cancun — about my paper. I didn’t have a press conference about my paper. Draw your own conclusion.
  • This is but another example of how climate scientists are being played by the denialists. You attempted to discuss the issue with Spencer as if he were only doing science. But he is not. He is doing science and politics, and he has no compunction about sandbagging you. There is no gain to you in trying to deal with people like Spencer and Lindzen as colleagues. They are not trustworthy.
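Dessler's headline number (0.54 ± 0.74 W/m2 per degree) can be turned into a rough probability that the feedback is positive. A minimal sketch, assuming the quoted ±0.74 is a 2-sigma half-width and the estimate is normally distributed; both are assumptions, not stated in the excerpt:

```python
import math

estimate = 0.54    # W/m^2 per degree of warming (from the paper)
half_width = 0.74  # assumed here to be a 2-sigma half-width
sigma = half_width / 2

# P(feedback > 0) under a normal model: Phi(estimate / sigma),
# where Phi is the standard normal CDF, written via erf.
z = estimate / sigma
p_positive = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(f"P(positive feedback) = {p_positive:.2f}")
```

Under these assumptions the feedback is positive with probability around 0.93, which matches the wording in the excerpt: likely positive, but a slight negative feedback cannot be ruled out.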
Weiye Loh

The Mysterious Decline Effect | Wired Science | Wired.com - 0 views

  • Question #1: Does this mean I don’t have to believe in climate change? Me: I’m afraid not. One of the sad ironies of scientific denialism is that we tend to be skeptical of precisely the wrong kind of scientific claims. In poll after poll, Americans have dismissed two of the most robust and widely tested theories of modern science: evolution by natural selection and climate change. These are theories that have been verified in thousands of different ways by thousands of different scientists working in many different fields. (This doesn’t mean, of course, that such theories won’t change or get modified – the strength of science is that nothing is settled.) Instead of wasting public debate on creationism or the rhetoric of Senator Inhofe, I wish we’d spend more time considering the value of spinal fusion surgery, or second generation antipsychotics, or the verity of the latest gene association study. The larger point is that we need to do a better job of considering the context behind every claim. In 1952, the Harvard philosopher Willard Van Orman Quine published “Two Dogmas of Empiricism.” In the essay, Quine compared the truths of science to a spider’s web, in which the strength of the lattice depends upon its interconnectedness. (Quine: “The unit of empirical significance is the whole of science.”) One of the implications of Quine’s paper is that, when evaluating the power of a given study, we need to also consider the other studies and untested assumptions that it depends upon. Don’t just fixate on the effect size – look at the web. Unfortunately for the denialists, climate change and natural selection have very sturdy webs.
  • biases are not fraud. We sometimes forget that science is a human pursuit, mingled with all of our flaws and failings. (Perhaps that explains why an episode like Climategate gets so much attention.) If there’s a single theme that runs through the article it’s that finding the truth is really hard. It’s hard because reality is complicated, shaped by a surreal excess of variables. But it’s also hard because scientists aren’t robots: the act of observation is simultaneously an act of interpretation.
  • (As Paul Simon sang, “A man sees what he wants to see and disregards the rest.”) Most of the time, these distortions are unconscious – we don’t even know we are misperceiving the data. However, even when the distortion is intentional, it still rarely rises to the level of outright fraud. Consider the story of Mike Rossner. He’s executive director of the Rockefeller University Press, and helps oversee several scientific publications, including The Journal of Cell Biology.  In 2002, while trying to format a scientific image in Photoshop that was going to appear in one of the journals, Rossner noticed that the background of the image contained distinct intensities of pixels. “That’s a hallmark of image manipulation,” Rossner told me. “It means the scientist has gone in and deliberately changed what the data looks like. What’s disturbing is just how easy this is to do.” This led Rossner and his colleagues to begin analyzing every image in every accepted paper. They soon discovered that approximately 25 percent of all papers contained at least one “inappropriately manipulated” picture. Interestingly, the vast, vast majority of these manipulations (~99 percent) didn’t affect the interpretation of the results. Instead, the scientists seemed to be photoshopping the pictures for aesthetic reasons: perhaps a line on a gel was erased, or a background blur was deleted, or the contrast was exaggerated. In other words, they wanted to publish pretty images. That’s a perfectly understandable desire, but it gets problematic when that same basic instinct – we want our data to be neat, our pictures to be clean, our charts to be clear – is transposed across the entire scientific process.
  • ...2 more annotations...
  • One of the philosophy papers that I kept on thinking about while writing the article was Nancy Cartwright’s essay “Do the Laws of Physics State the Facts?” Cartwright used numerous examples from modern physics to argue that there is often a basic trade-off between scientific “truth” and experimental validity, so that the laws that are the most true are also the most useless. “Despite their great explanatory power, these laws [such as gravity] do not describe reality,” Cartwright writes. “Instead, fundamental laws describe highly idealized objects in models.”  The problem, of course, is that experiments don’t test models. They test reality.
  • Cartwright’s larger point is that many essential scientific theories – those laws that explain things – are not actually provable, at least in the conventional sense. This doesn’t mean that gravity isn’t true or real. There is, perhaps, no truer idea in all of science. (Feynman famously referred to gravity as the “greatest generalization achieved by the human mind.”) Instead, what the anomalies of physics demonstrate is that there is no single test that can define the truth. Although we often pretend that experiments and peer-review and clinical trials settle the truth for us – that we are mere passive observers, dutifully recording the results – the actuality of science is a lot messier than that. Richard Rorty said it best: “To say that we should drop the idea of truth as out there waiting to be discovered is not to say that we have discovered that, out there, there is no truth.” Of course, the very fact that the facts aren’t obvious, that the truth isn’t “waiting to be discovered,” means that science is intensely human. It requires us to look, to search, to plead with nature for an answer.
Weiye Loh

"Asian Values": a credible alternative to a universal conception of human rig... - 0 views

  • Singapore has not ratified the International Covenant on Civil and Political Rights, but as a member state of the United Nations is bound to respect “fundamental human rights”. But who decides these rights? Many commentators will argue that they are those enshrined in the Universal Declaration on Human Rights, in which Freedom of Expression is guaranteed by Article 19.
  • The United Nations Human Rights Committee has stressed that freedom of expression ensures the free political debate essential to democracy[ii] and has expressed concern that overbearing government controls of the media are incompatible with Freedom of Expression.
  • The Singapore government’s view is different. They have long asserted that human rights principles and conceptions are dominated by Western perceptions and argue for an “Asian Values” interpretation of human rights. This has been characterised as the assertion of the primacy of duty to the community over individual rights and the expectation of trust in authority and dominance of the state leaders.
  • ...4 more annotations...
  • The “Asian Values” hypothesis is equally suspect. The UDHR recognises the universal applicability of human rights and any nation party to this treaty is not permitted to restrict rights purely on cultural, religious or political grounds.
  • “Asian governments are justified in restricting civil and political rights in some circumstances in favour of social stability and economic growth. Civil and political rights are immaterial when people are destitute and society is unstable.  Accordingly, as luxuries to be enjoyed once there is social order, civil and political liberties must be temporarily suspended so as to not inhibit the government’s delivery of economic and social necessities and so as to not threaten or destroy future development plans.” Whilst this argument may have been slightly more palatable if Singapore’s citizens were, in fact, destitute, the reality is that Singapore is ranked as one of the world’s wealthiest countries and boasts a high life expectancy. Thus in Singapore’s case, arguments made in favour of a “liberty trade-off” are rendered completely untenable.
  • these cultural and religious justifications for violating rights are as unacceptable as Singapore’s purported assertion of an “Asian Values” conception of human rights. Even though the Singapore government’s language is more subtle, their arguments amount to the same basic tenet: the purported justification of the denial of fundamental human rights, by reference to cultural, religious or political specific norms. Speaking recently in New York, the UN Secretary-General, Ban Ki Moon warned against such an interpretation of human rights:
  • “Yes, we recognize that social attitudes run deep.  Yes, social change often comes only with time.  Yet, let there be no confusion: where there is tension between cultural attitudes and universal human rights, universal human rights must carry the day. ” The universal and fundamental nature of human rights is the founding principle on which the United Nations was built: the right to freedom of expression must be guaranteed, “Asian Values” notwithstanding.
Weiye Loh

Adventures in Flay-land: James Delingpole and the "Science" of Denialism - 0 views

  • Perhaps like me, you watched the BBC Two Horizons program Monday night presented by Sir Paul Nurse, president of the Royal Society and Nobel Prize winning geneticist for his discovery of the genes of cell division.
  • James. He really believes there's some kind of mainstream science "warmist" conspiracy against the brave outliers who dare to challenge the consensus. He really believes that "climategate" is a real scandal. He fails to understand that it is a common practice in statistics to splice together two or more datasets where you know that the quality of data is patchy. In the case of "climategate", researchers found that indirect temperature measurements based on tree ring widths (the tree ring temperature proxy) is consistent with other proxy methods of recording temperature from before the start of the instrumental temperature record (around 1950) but begins to show a decline in temperature after that for reasons which are unclear. Actual temperature measurements however show the opposite. The researcher at the head of the climategate affair, Phil Jones, created a graph of the temperature record to include on the cover of a report for policy makers and journalists. For this graph he simply spliced together the tree ring proxy data up until 1950 with the recorded data after that using statistical techniques to bring them into agreement. What made this seem particularly dodgy was an email intercepted by a hacker in which Jones referred to this practice as a "Mike's Nature trick", referring to a paper published by his colleague Michael Mann in the journal Nature. It is however nothing out of the ordinary. Delingpole and others have talked about how this "trick" was used here to "hide the decline" revealed by the other dataset, as though this was some sort of deception. The fact that all parties were found to have behaved ethically is simply further evidence of the global warmist conspiracy. Delingpole takes it further and casts aspersions on scientific consensus and the entire peer review process.
  • When Nurse asked Delingpole the very straightforward question of whether he would be willing to trust a scientific consensus if he required treatment for cancer, he could have said "Gee, that's an interesting question. Let me think about that and why it's different."
  • ...7 more annotations...
  • Instead, he became defensive and lost his focus. Eventually he would make such regrettable statements as this one: "It is not my job to sit down and read peer-reviewed papers because I simply haven’t got the time, I haven’t got the scientific expertise… I am an interpreter of interpretation."
  • In a parallel universe where James Delingpole is not the "penis" that Ben Goldacre describes him to be, he might have said the following: Gee, that's an interesting question. Let me think about why it's different. (Thinks) Well, it seems to me that when evaluating a scientifically agreed treatment for a disease such as cancer, we have not only all the theory to peruse and the randomized and blinded trials, but also thousands if not millions of case studies where people have undergone the intervention. We have enough data to estimate a person's chances of recovery and know that on average they will do better. When discussing climate change, we really only have the one case study. Just the one earth. And it's a patient that has not undergone any intervention. The scientific consensus is therefore entirely theoretical and intangible. This makes it more difficult for the lay person such as myself to trust it.
  • Sir Paul ended the program saying "Scientists have got to get out there… if we do not do that it will be filled by others who don’t understand the science, and who may be driven by politics and ideology."
  • If proxy tracks instrumental from 1850 to 1960 but then diverges for unknown reasons, how do we know that the proxy is valid for reconstructing temperatures in periods prior to 1850?
  • This is a good question and one I'm not sure I can answer to anyone's satisfaction. We seem to have good agreement among several forms of temperature proxy going back centuries and with direct measurements back to 1880. There is divergence in more recent years and there are several theories as to why that might be. Some possible explanations here: http://www.skepticalscience.com/Tree-ring-proxies-divergence-problem.htm
  • In the physical world we can never be absolutely certain of anything. René Descartes showed it was impossible to prove that everything he sensed wasn't manipulated by some invisible demon.
  • It is necessary to first make certain assumptions about the universe that we observe. After that, we can only go with the best theories available that allow us to make scientific progress.
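The splicing practice discussed above, using proxy data where instruments are absent and instrumental data where the proxy diverges, can be sketched as a simple merge. The numbers below are invented toy values, not the actual CRU or tree-ring series:

```python
# Toy annual series: {year: temperature anomaly in °C}. Invented values.
proxy = {1900: -0.3, 1920: -0.2, 1940: -0.1, 1960: 0.0, 1980: -0.2}
instrumental = {1940: -0.1, 1960: 0.0, 1980: 0.2, 2000: 0.5}

CUTOFF = 1960  # proxy is trusted only before the divergence

# Splice: proxy values up to the cutoff, instrumental values after it.
spliced = {yr: t for yr, t in proxy.items() if yr <= CUTOFF}
spliced.update({yr: t for yr, t in instrumental.items() if yr > CUTOFF})

print(sorted(spliced.items()))
```

Note how the post-cutoff proxy value (the "decline") is dropped in favour of the instrumental one. Both inputs are real measurements; the controversy was over presenting the spliced curve without flagging the substitution.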
Weiye Loh

Political - or politicized? - psychology » Scienceline - 0 views

  • The idea that your personal characteristics could be linked to your political ideology has intrigued political psychologists for decades. Numerous studies suggest that liberals and conservatives differ not only in their views toward government and society, but also in their behavior, their personality, and even how they travel, decorate, clean and spend their leisure time. In today’s heated political climate, understanding people on the “other side” — whether that side is left or right — takes on new urgency. But as researchers study the personal side of politics, could they be influenced by political biases of their own?
  • Consider the following 2006 study by the late California psychologists Jeanne and Jack Block, which compared the personalities of nursery school children to their political leanings as 23-year-olds. Preschoolers who went on to identify as liberal were described by the authors as self-reliant, energetic, somewhat dominating and resilient. The children who later identified as conservative were described as easily offended, indecisive, fearful, rigid, inhibited and vulnerable. The negative descriptions of conservatives in this study strike Jacob Vigil, a psychologist at the University of New Mexico, as morally loaded. Studies like this one, he said, use language that suggests the researchers are “motivated to present liberals with more ideal descriptions as compared to conservatives.”
  • Most of the researchers in this field are, in fact, liberal. In 2007 UCLA’s Higher Education Research Institute conducted a survey of faculty at four-year colleges and universities in the United States. About 68 percent of the faculty in history, political science and social science departments characterized themselves as liberal, 22 percent characterized themselves as moderate, and only 10 percent as conservative. Some social psychologists, like Jonathan Haidt of the University of Virginia, have charged that this liberal majority distorts the research in political psychology.
  • ...9 more annotations...
  • It’s a charge that John Jost, a social psychologist at New York University, flatly denies. Findings in political psychology bear upon deeply held personal beliefs and attitudes, he said, so they are bound to spark controversy. Research showing that conservatives score higher on measures of “intolerance of ambiguity” or the “need for cognitive closure” might bother some people, said Jost, but that does not make it biased.
  • “The job of the behavioral scientist is not to try to find something to say that couldn’t possibly be offensive,” said Jost. “Our job is to say what we think is true, and why.”
  • Jost and his colleagues in 2003 compiled a meta-analysis of 88 studies from 12 different countries conducted over a 40-year period. They found strong evidence that conservatives tend to have higher needs to reduce uncertainty and threat. Conservatives also share psychological factors like fear, aggression, dogmatism, and the need for order, structure and closure. Political conservatism, they explained, could serve as a defense against anxieties and threats that arise out of everyday uncertainty, by justifying the status quo and preserving conditions that are comfortable and familiar.
  • The study triggered quite a public reaction, particularly within the conservative blogosphere. But the criticisms, according to Jost, were mistakenly focused on the researchers themselves; the findings were not disputed by the scientific community and have since been replicated. For example, a 2009 study followed college students over the span of their undergraduate experience and found that higher perceptions of threat did indeed predict political conservatism. Another 2009 study found that when confronted with a threat, liberals actually become more psychologically and politically conservative. Some studies even suggest that physiological traits like sensitivity to sudden noises or threatening images are associated with conservative political attitudes.
  • “The debate should always be about the data and its proper interpretation,” said Jost, “and never about the characteristics or motives of the researchers.” Phillip Tetlock, a psychologist at the University of California, Berkeley, agrees. However, Tetlock thinks that identifying the proper interpretation can be tricky, since personality measures can be described in many ways. “One observer’s ‘dogmatism’ can be another’s ‘principled,’ and one observer’s ‘open-mindedness’ can be another’s ‘flaccid and vacillating,’” Tetlock explained.
  • Richard Redding, a professor of law and psychology at Chapman University in Orange, California, points to a more general, indirect bias in political psychology. “It’s not the case that researchers are intentionally skewing the data,” which rarely happens, Redding said. Rather, the problem may lie in what sorts of questions are or are not asked.
  • For example, a conservative might be more inclined to undertake research on affirmative action in a way that would identify any negative outcomes, whereas a liberal probably wouldn’t, said Redding. Likewise, there may be aspects of personality that liberals simply haven’t considered. Redding is currently conducting a large-scale study on self-righteousness, which he suspects may be associated more highly with liberals than conservatives.
  • “The way you frame a problem is to some extent dictated by what you think the problem is,” said David Sears, a political psychologist at the University of California, Los Angeles. People’s strong feelings about issues like prejudice, sexism, authoritarianism, aggression, and nationalism — the bread and butter of political psychology — may influence how they design a study or present a problem.
  • The indirect bias that Sears and Redding identify is a far cry from the liberal groupthink others warn against. But given that psychology departments are predominantly left-leaning, it’s important to seek out alternative viewpoints and explanations, said Jesse Graham, a social psychologist at the University of Southern California. A self-avowed liberal, Graham thinks it would be absurd to say he couldn’t do fair science because of his political preferences. “But,” he said, “it is something that I try to keep in mind.”
Weiye Loh

God is not the Creator, claims academic - Telegraph - 1 views

  • Professor Ellen van Wolde, a respected Old Testament scholar and author, claims the first sentence of Genesis "in the beginning God created the Heaven and the Earth" is not a true translation of the Hebrew.
  • She said she eventually concluded the Hebrew verb "bara", which is used in the first sentence of the book of Genesis, does not mean "to create" but to "spatially separate". The first sentence should now read "in the beginning God separated the Heaven and the Earth".
  • She said: "It meant to say that God did create humans and animals, but not the Earth itself."
  • ...1 more annotation...
  • She said she hoped that her conclusions would spark "a robust debate", since her finds are not only new, but would also touch the hearts of many religious people. She said: "Maybe I am even hurting myself. I consider myself to be religious and the Creator used to be very special, as a notion of trust. I want to keep that trust." A spokesman for the Radboud University said: "The new interpretation is a complete shake up of the story of the Creation as we know it." Prof Van Wolde added: "The traditional view of God the Creator is untenable now."
Weiye Loh

A Data Divide? Data "Haves" and "Have Nots" and Open (Government) Data « Gurs... - 0 views

  • Researchers have extensively explored the range of social, economic, geographical and other barriers which underlie and to a considerable degree “explain” (cause) the Digital Divide.  My own contribution has been to argue that “access is not enough”: what matters is whether opportunities and pre-conditions are in place for the “effective use” of the technology, particularly by those at the grassroots.
  • The idea of a possible parallel “Data Divide” between those who have access and the opportunity to make effective use of data and particularly “open data” and those who do not, began to occur to me.  I was attending several planning/recruitment events for the Open Data “movement” here in Vancouver and the socio-demographics and some of the underlying political assumptions seemed to be somewhat at odds with the expressed advocacy position of “data for all”.
  • Thus the “open data” being argued for would not likely be accessible to, or usable by, the groups and individuals with which Community Informatics has largely been concerned – the grassroots, the poor and marginalized, indigenous people, rural people and slum dwellers in less developed countries. Given the explanations provided to date, it is hard to see how these folks could use this data in any effective way to respond to the opportunities for advancement and social betterment which open data advocates have been indicating as the outcome of their efforts.
  • ...5 more annotations...
  • many involved in “open data” saw their interests and activities being confined to making data ‘legally” and “technically” accessible — what happened to it after that was somebody else’s responsibility.
  • while the Digital Divide deals with, for the most part “infrastructure” issues, the Data Divide is concerned with “content” issues.
  • where a Digital Divide might exist, for example, as a result of geographical or policy considerations, and thus have uniform effects on all those on the wrong side of the “divide” whatever their socio-demographic situation, a Data Divide (and particularly one concerning the most significant current component of the Open Data movement, i.e. OGD) would have especially damaging effects, and result in especially significant lost opportunities, for the most vulnerable groups and individuals in society and globally. (I’ve discussed some examples at length in a previous blogpost.)
  • The Data Divide thus would be the gap between those who have access to and are able to use Open (Government) Data and those who are not so enabled.
  • 1. infrastructure—being on the wrong side of the “Digital Divide” and thus not having access to the basic infrastructure supporting the availability of OGD.
    2. devices—OGD that is not universally accessible and device-independent (OGD that only runs on iPhones, for example).
    3. software—“accessible” OGD that requires specialized technical software/training to become “usable”.
    4. content—OGD not designed for use by those with handicaps, non-English speakers, or those with low levels of functional literacy, for example.
    5. interpretation/sense-making—OGD that is only accessible for use through a technical intermediary and/or is useful only if “interpreted” by a professional intermediary.
    6. advocacy—whether the OGD is in a form and context that is supportive for use in advocacy (or other purposes) on behalf of marginalized and other groups and individuals.
    7. governance—whether the OGD process includes representation from the broad public in its overall policy development and governance (not just lawyers, techies and public servants).
Weiye Loh

"Cancer by the Numbers" by John Allen Paulos | Project Syndicate - 0 views

  • The USPSTF recently issued an even sharper warning about the prostate-specific antigen test for prostate cancer, after concluding that the test’s harms outweigh its benefits. Chest X-rays for lung cancer and Pap tests for cervical cancer have received similar, albeit less definitive, criticism.
    The next step in the reevaluation of cancer screening was taken last year, when researchers at the Dartmouth Institute for Health Policy announced that the costs of screening for breast cancer were often minimized, and that the benefits were much exaggerated. Indeed, even a mammogram (almost 40 million are given annually in the US) that detects a cancer does not necessarily save a life.
    The Dartmouth researchers found that, of the estimated 138,000 breast cancers detected annually in the US, the test did not help 120,000-134,000 of the afflicted women. The cancers either were growing so slowly that they did not pose a problem, or they would have been treated successfully if discovered clinically later (or they were so aggressive that little could be done).
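    The scale of those Dartmouth figures is easier to grasp as a proportion. A minimal sketch (in Python; the variable names are my own, and only the numbers quoted above are used) of the share of screen-detected breast cancers for which detection made no difference:

    ```python
    # Figures quoted from the Dartmouth analysis above.
    detected = 138_000          # breast cancers detected by US screening per year
    not_helped_low = 120_000    # lower bound: women whose outcome the test didn't change
    not_helped_high = 134_000   # upper bound

    share_low = not_helped_low / detected
    share_high = not_helped_high / detected

    # → 87% to 97% of detected cancers: detection did not help
    print(f"{share_low:.0%} to {share_high:.0%} of detected cancers: detection did not help")
    ```

    In other words, on these numbers only roughly 3–13% of screen-detected cancers are cases where the mammogram plausibly changed the outcome.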
Weiye Loh

Religion's regressive hold on animal rights issues | Peter Singer | Comment is free | g... - 0 views

  • The chief minister of Malacca, Mohamad Ali Rustam, was quoted in the Guardian as saying that God created monkeys and rats for experiments to benefit humans.
  • Here is the head of a Malaysian state justifying the establishment of a scientific enterprise with a comment that flies in the face of everything science tells us.
  • Though the chief minister is, presumably, a Muslim, there is nothing specifically Islamic about the claim that God created animals for our sake. Similar remarks have been made repeatedly by Christian religious figures through the millennia, although today some Christian theologians offer a kinder, more compassionate interpretation of the idea of our God-given dominion over the animals. They regard the grant of dominion as a kind of stewardship, with God wanting us to take care of his creatures and treat them well.
  • ...2 more annotations...
  • What are we to say of the Indian company, Vivo Biosciences Inc, which takes advantage of such religious naivety – in which presumably its scientists do not for one moment believe – in order to gain approval for its £97m joint venture with a state-owned Malaysian biotech company?
    • Weiye Loh
       
      Isn't it ironic that scientists rely on religious rhetoric to justify their sciences? 
  • The chief minister's comment is yet another illustration of the generally regressive influence that religion has on ethical issues – whether they are concerned with the status of women, with sexuality, with end-of-life decisions in medicine, with the environment, or with animals.
  •  
    Religion's regressive hold on animal rights issues How are we to promote the need for improved animal welfare when battling religious views formed centuries ago? Peter Singerguardian.co.uk, Tuesday 8 June 2010 14.03 BSTArticle history
Weiye Loh

Rationally Speaking: Why do libertarians deny climate change? - 0 views

  • the trend is hard to miss. The libertarian think tank CATO Institute has been waging a media war against the very notion for years, and even prominent skeptics with libertarian leanings have pronounced themselves negatively on the matter (most famously Penn & Teller, and initially even Michael Shermer, though both — I count P&T as one — lately have taken a few steps back from their initial positions).
  • whether climate change is real or not. It is, according to the best science available. Yes, even the best science can be wrong, but frankly the only people who can tell with any degree of reasonability are those belonging to the relevant community of experts, in this case climate scientists
  • The question is particularly pertinent to libertarians and the ideologically allied group of “objectivists,” i.e. followers of Ayn Rand (though there are significant differences between the two groups, as I mentioned before). These people often claim to be friends of science (as opposed to many radical conservatives like Senator James Inhofe (R-Okla), who called global warming the “greatest hoax ever perpetrated on the American people” (perpetrated by whom? And to what end?)), and in the case of objectivists, whose whole approach to politics is allegedly based on rational considerations of the facts.
  • ...6 more annotations...
  • one would think that libertarians could make a distinction between evidence-based interpretation of reality (global warming is happening) and whatever policies we might want to enact to avoid catastrophe. Qua libertarians, they would obviously resist any government-led effort at cleanup, especially if internationally coordinated, preferring instead a coalition of the willing within the private sector
  • there certainly is plenty of room for reasonable discussions and disagreements about how best to proceed in confronting the problem. On the other hand, there doesn’t seem to be much room for reasonable disagreement about the very existence of the problem itself. So, what gives, my dear libertarians?
  • In the case of major libertarian outlets, like the CATO Institute think tank, the rather unglamorous answer may simply be that they are in the pockets of the oil industry. A large amount of the funding for CATO comes from private corporations with obvious political agendas including, you guessed it, Exxon-Mobil (remember the Valdez?). No wonder CATO people trumpet the party line on this one.
  • The second reason, however, is more personal and widespread: libertarianism is committed to the high moral value of private enterprise
  • it follows naturally (if irrationally) that libertarians cannot admit to themselves, and even less to the world at large, that the much vaunted private sector may be responsible — out of both greed and downright incompetence — for a major environmental catastrophe of planetary proportions. The industry is the good guy in their movie, how then could they possibly have done something so horrible?
  • That’s the problem with ideology in general (be it left, right, or libertarian): it provides us with thick blinders that very effectively shield us from reality. Of course, no one is actually free of bias, yours truly included. But a core principle of skepticism and critical thinking is that we do our best to be aware of (and minimize) our own biases, and that we ought to open ourselves to honest criticism from different parties, in pursuit of the best approximation to the truth that we can muster.
  •  
    Why do libertarians deny climate change?