
New Media Ethics 2009 course / Group items tagged: Truths


Weiye Loh

Rationally Speaking: Truth from fiction: truth or fiction? - 0 views

  • Literature teaches us about life. Literature helps us understand the world.
  • this belief in truth-from-fiction is the party line for those who champion the merits of literature. Eminent English professor and critic Harold Bloom proclaims, in his bestselling How to Read and Why, that one of the main reasons to read literature is because "we require knowledge, not just of self and others, but of the way things are."
  • why would we expect literature to be a reliable source of knowledge about "the way things are"? After all, the narratives which are the most gripping and satisfying to read are not the most representative of how the world actually works. They have dramatic resolutions, foreshadowing, conflict, climax, and surprise. People tend to get their comeuppance after they misbehave. People who pursue their dream passionately tend to succeed. Disaster tends to strike when you least expect it. These narratives are over-represented in literature because they're more gratifying to read; why would we expect to learn from them about "the way things are"?
  • even if authors were all trying to faithfully represent the world as they perceived it, why would we expect their perceptions to be any more universally true than anyone else's?
  • I can't see any reason to give any more weight to the implicit arguments of a novel than we would give to the explicit arguments of any individual person. And yet when we read a novel or study it in school, especially if it's a hallowed classic, we tend to treat its arguments as truths.
  • FRIDAY, JUNE 18, 2010 – Truth from fiction: truth or fiction?
Weiye Loh

Does Anything Matter? by Peter Singer - Project Syndicate - 0 views

  • Although this view of ethics has often been challenged, many of the objections have come from religious thinkers who appealed to God’s commands. Such arguments have limited appeal in the largely secular world of Western philosophy. Other defenses of objective truth in ethics made no appeal to religion, but could make little headway against the prevailing philosophical mood.
  • Many people assume that rationality is always instrumental: reason can tell us only how to get what we want, but our basic wants and desires are beyond the scope of reasoning. Not so, Parfit argues. Just as we can grasp the truth that 1 + 1 = 2, so we can see that I have a reason to avoid suffering agony at some future time, regardless of whether I now care about, or have desires about, whether I will suffer agony at that time. We can also have reasons (though not always conclusive reasons) to prevent others from suffering agony. Such self-evident normative truths provide the basis for Parfit’s defense of objectivity in ethics.
  • One major argument against objectivism in ethics is that people disagree deeply about right and wrong, and this disagreement extends to philosophers who cannot be accused of being ignorant or confused. If great thinkers like Immanuel Kant and Jeremy Bentham disagree about what we ought to do, can there really be an objectively true answer to that question? Parfit’s response to this line of argument leads him to make a claim that is perhaps even bolder than his defense of objectivism in ethics. He considers three leading theories about what we ought to do – one deriving from Kant, one from the social-contract tradition of Hobbes, Locke, Rousseau, and the contemporary philosophers John Rawls and T.M. Scanlon, and one from Bentham’s utilitarianism – and argues that the Kantian and social-contract theories must be revised in order to be defensible.
  • he argues that these revised theories coincide with a particular form of consequentialism, which is a theory in the same broad family as utilitarianism. If Parfit is right, there is much less disagreement between apparently conflicting moral theories than we all thought. The defenders of each of these theories are, in Parfit’s vivid phrase, “climbing the same mountain on different sides.”
  • Parfit’s real interest is in combating subjectivism and nihilism. Unless he can show that objectivism is true, he believes, nothing matters.
  • When Parfit does come to the question of “what matters,” his answer might seem surprisingly obvious. He tells us, for example, that what matters most now is that “we rich people give up some of our luxuries, ceasing to overheat the Earth’s atmosphere, and taking care of this planet in other ways, so that it continues to support intelligent life.” Many of us had already reached that conclusion. What we gain from Parfit’s work is the possibility of defending these and other moral claims as objective truths.
  • Can moral judgments be true or false? Or is ethics, at bottom, a purely subjective matter, for individuals to choose, or perhaps relative to the culture of the society in which one lives? We might have just found out the answer. Among philosophers, the view that moral judgments state objective truths has been out of fashion since the 1930's, when logical positivists asserted that, because there seems to be no way of verifying the truth of moral judgments, they cannot be anything other than expressions of our feelings or attitudes. So, for example, when we say, "You ought not to hit that child," all we are really doing is expressing our disapproval of your hitting the child, or encouraging you to stop hitting the child. There is no truth to the matter of whether or not it is wrong for you to hit the child.
Weiye Loh

Roger Pielke Jr.'s Blog: Mike Daisey and Higher Truths - 0 views

  • Real life is messy. And as a general rule, the more theatrical the story you hear, and the more it divides the world into goodies vs baddies, the less reliable that story is going to be.
  • some people do feel that certain issues are so important that there should be cause in political debates to overlook lies or misrepresentations in service of a "larger truth" (Yellow cake, anyone?). I have seen this attitude for years in the climate change debate (hey look, just today), and often condoned by scientists and journalists alike.
  • the "global warming: yes or no?" debate has become an obstacle to effective policy action related to climate. Several of these colleagues suggested that I should downplay the policy implications of my work showing that for a range of phenomena and places, future climate impacts depend much more on growing human vulnerability to climate than on projected changes in climate itself (under the assumptions of the Intergovernmental Panel on Climate Change). One colleague wrote, "I think we have a professional (or moral?) obligation to be very careful what we say and how we say it when the stakes are so high." In effect, some of these colleagues were intimating that ends justify means or, in other words, doing the "right thing" for the wrong reasons is OK.
  • When science is used (and misused) in political advocacy, there are frequent opportunities for such situations to arise.
  • I don't think you're being fair to Mike Lemonick. In the article by him that you cite, Mike's provocative question was framed in the context of an analogy he was making to the risks of smoking. For example, in that article, he also says: "So should the overall message be that nobody knows anything? I don’t think so. We would never want to pretend the uncertainty isn’t there, since that would be dishonest. But featuring it prominently is dishonest, too, just as trumpeting uncertainty in the smoking-cancer connection would have been." Thus, I think you're reading way too much into Mike's piece. That said, I do agree with you that there are implications of the Daisey case for climate communicators and climate journalism. My own related post is here: http://www.collide-a-scape.com/2012/03/19/the-seduction-of-narrative/
  • I don't want journalists shading the truth in a desire to be "effective" in some way. That is Daisey's tradeoff too.
  • Recall that in the aftermath of initial revelations about Peter Gleick's phishing of the Heartland Institute, we heard defenses of his action that ranged from claims that he was only doing the same thing that journalists do, to the importance of looking beyond Gleick's misdeeds at the "larger truth." Consider also what was described in the UEA emails as "pressure to present a nice tidy story" related to climate science, as well as the IPCC's outright falsification related to disasters and climate change. Such shenanigans are so endemic in the climate change debate that when a journalist openly asks whether the media should tell the whole truth about climate change, no one even bats an eye.
Magdaleine

The Moment of Truth - 5 views

The idea of Truth (vis-a-vis truth or Truths and truths) is very interesting and keeps recurring in all ethical debates. My question is, is Truth discovered? Or is Truth invented? As some of us h...

Ethics

Weiye Loh

The Mysterious Decline Effect | Wired Science | Wired.com - 0 views

  • Question #1: Does this mean I don’t have to believe in climate change? Me: I’m afraid not. One of the sad ironies of scientific denialism is that we tend to be skeptical of precisely the wrong kind of scientific claims. In poll after poll, Americans have dismissed two of the most robust and widely tested theories of modern science: evolution by natural selection and climate change. These are theories that have been verified in thousands of different ways by thousands of different scientists working in many different fields. (This doesn’t mean, of course, that such theories won’t change or get modified – the strength of science is that nothing is settled.) Instead of wasting public debate on creationism or the rhetoric of Senator Inhofe, I wish we’d spend more time considering the value of spinal fusion surgery, or second generation antipsychotics, or the verity of the latest gene association study. The larger point is that we need to do a better job of considering the context behind every claim. In 1951, the Harvard philosopher Willard Van Orman Quine published “Two Dogmas of Empiricism.” In the essay, Quine compared the truths of science to a spider’s web, in which the strength of the lattice depends upon its interconnectedness. (Quine: “The unit of empirical significance is the whole of science.”) One of the implications of Quine’s paper is that, when evaluating the power of a given study, we need to also consider the other studies and untested assumptions that it depends upon. Don’t just fixate on the effect size – look at the web. Unfortunately for the denialists, climate change and natural selection have very sturdy webs.
  • biases are not fraud. We sometimes forget that science is a human pursuit, mingled with all of our flaws and failings. (Perhaps that explains why an episode like Climategate gets so much attention.) If there’s a single theme that runs through the article it’s that finding the truth is really hard. It’s hard because reality is complicated, shaped by a surreal excess of variables. But it’s also hard because scientists aren’t robots: the act of observation is simultaneously an act of interpretation.
  • (As Paul Simon sang, “A man sees what he wants to see and disregards the rest.”) Most of the time, these distortions are unconscious – we don’t even know we are misperceiving the data. However, even when the distortion is intentional, it still rarely rises to the level of outright fraud. Consider the story of Mike Rossner. He’s executive director of the Rockefeller University Press, and helps oversee several scientific publications, including The Journal of Cell Biology. In 2002, while trying to format a scientific image in Photoshop that was going to appear in one of the journals, Rossner noticed that the background of the image contained distinct intensities of pixels. “That’s a hallmark of image manipulation,” Rossner told me. “It means the scientist has gone in and deliberately changed what the data looks like. What’s disturbing is just how easy this is to do.” This led Rossner and his colleagues to begin analyzing every image in every accepted paper. They soon discovered that approximately 25 percent of all papers contained at least one “inappropriately manipulated” picture. Interestingly, the vast, vast majority of these manipulations (~99 percent) didn’t affect the interpretation of the results. Instead, the scientists seemed to be photoshopping the pictures for aesthetic reasons: perhaps a line on a gel was erased, or a background blur was deleted, or the contrast was exaggerated. In other words, they wanted to publish pretty images. That’s a perfectly understandable desire, but it gets problematic when that same basic instinct – we want our data to be neat, our pictures to be clean, our charts to be clear – is transposed across the entire scientific process.
  • One of the philosophy papers that I kept on thinking about while writing the article was Nancy Cartwright’s essay “Do the Laws of Physics State the Facts?” Cartwright used numerous examples from modern physics to argue that there is often a basic trade-off between scientific “truth” and experimental validity, so that the laws that are the most true are also the most useless. “Despite their great explanatory power, these laws [such as gravity] do not describe reality,” Cartwright writes. “Instead, fundamental laws describe highly idealized objects in models.”  The problem, of course, is that experiments don’t test models. They test reality.
  • Cartwright’s larger point is that many essential scientific theories – those laws that explain things – are not actually provable, at least in the conventional sense. This doesn’t mean that gravity isn’t true or real. There is, perhaps, no truer idea in all of science. (Feynman famously referred to gravity as the “greatest generalization achieved by the human mind.”) Instead, what the anomalies of physics demonstrate is that there is no single test that can define the truth. Although we often pretend that experiments and peer-review and clinical trials settle the truth for us – that we are mere passive observers, dutifully recording the results – the actuality of science is a lot messier than that. Richard Rorty said it best: “To say that we should drop the idea of truth as out there waiting to be discovered is not to say that we have discovered that, out there, there is no truth.” Of course, the very fact that the facts aren’t obvious, that the truth isn’t “waiting to be discovered,” means that science is intensely human. It requires us to look, to search, to plead with nature for an answer.
Weiye Loh

Johann Hari and the tyranny of the 'good lie' – Telegraph Blogs - 0 views

  • Hari admits to substituting his interviewees’ written words for their spoken words, quoting from their books and pretending that they actually said those words to him over coffee. But that is okay, he says, because his only aim was to reveal “what the subject thinks in the most comprehensible possible words” and to make sure that the reader “understood the point”.
  • The nub of Hari’s argument is that reality and truth are two different things, that what happens in the real world – in this case a chat between a journalist and some famous author or activist – can be twisted in the name of handing to people a neat, presumably preordained “truth”. It is a cause for concern that more journalists have not been taken aback by such a casual disassociation of truth from fact.
  • the sad fact is that the BS notion that it is okay to manipulate facts in order to present a Greater Truth is now widespread in the decadent British media. Mark Lawson once wrote a column titled "The government has lied and I am glad", in which he said it was right for the British authorities and media to exaggerate the threat of AIDS because this "good lie" (his words) helped to improve Britons' moral conduct. When Piers Morgan was sacked from the Mirror for publishing faked photos of British soldiers urinating on Iraqi prisoners he said it was his "moral duty" to publish the pictures because they spoke to an ugly reality in Iraq. When this month it was discovered that the Syrian lesbian blogger was a fake, some in the media who had fallen for "her" made-up reports said the good thing about the blog is that it helped to "draw attention to a nation's woes". And now Hari says it doesn't matter if he invents a conversation because it helps to express a "vital message" in the "clearest possible words".
  • The idea of a “good lie” is a dramatically Orwellian device, designed to deceive and to patronise. A lie is a lie, whether your intention is to convince people that Saddam is evil and must be bombed or that Gideon Levy is a brainy and decent bloke. Lying to communicate a “vital message”, a liberal and profound “truth”, is no better than lying in order to justify a war or a law’n’order crackdown or whatever.
Weiye Loh

Ellsberg: "EVERY attack now made on WikiLeaks and Julian Assange was made against me an... - 0 views

  • Ex-Intelligence Officers, Others See Plusses in WikiLeaks Disclosures
  • The following statement was released today, signed by Daniel Ellsberg, Frank Grevil, Katharine Gun, David MacMichael, Ray McGovern, Craig Murray, Coleen Rowley and Larry Wilkerson; all are associated with Sam Adams Associates for Integrity in Intelligence.
  • How far down the U.S. has slid can be seen, ironically enough, in a recent commentary in Pravda (that’s right, Russia’s Pravda): “What WikiLeaks has done is make people understand why so many Americans are politically apathetic … After all, the evils committed by those in power can be suffocating, and the sense of powerlessness that erupts can be paralyzing, especially when … government evildoers almost always get away with their crimes. …”
  • shame on Barack Obama, Eric Holder, and all those who spew platitudes about integrity, justice and accountability while allowing war criminals and torturers to walk freely upon the earth. … the American people should be outraged that their government has transformed a nation with a reputation for freedom, justice, tolerance and respect for human rights into a backwater that revels in its criminality, cover-ups, injustices and hypocrisies.
  • As part of their attempt to blacken WikiLeaks and Assange, pundit commentary over the weekend has tried to portray Assange’s exposure of classified materials as very different from — and far less laudable than — what Daniel Ellsberg did in releasing the Pentagon Papers in 1971. Ellsberg strongly rejects the mantra “Pentagon Papers good; WikiLeaks material bad.” He continues: “That’s just a cover for people who don’t want to admit that they oppose any and all exposure of even the most misguided, secretive foreign policy. The truth is that EVERY attack now made on WikiLeaks and Julian Assange was made against me and the release of the Pentagon Papers at the time.”
  • WikiLeaks’ reported source, Army Pvt. Bradley Manning, having watched Iraqi police abuses, and having read of similar and worse incidents in official messages, reportedly concluded, “I was actively involved in something that I was completely against.” Rather than simply go with the flow, Manning wrote: “I want people to see the truth … because without information you cannot make informed decisions as a public,” adding that he hoped to provoke worldwide discussion, debates, and reform.
  • The media: again, the media is key. No one has said it better than Monseñor Romero of El Salvador, who just before he was assassinated 25 years ago warned, “The corruption of the press is part of our sad reality, and it reveals the complicity of the oligarchy.” Sadly, that is also true of the media situation in America today.
  • The big question is not whether Americans can “handle the truth.” We believe they can. The challenge is to make the truth available to them in a straightforward way so they can draw their own conclusions — an uphill battle given the dominance of the mainstream media, most of which have mounted a hateful campaign to discredit Assange and WikiLeaks.
  • So far, the question of whether Americans can “handle the truth” has been an academic rather than an experience-based one, because Americans have had very little access to the truth. Now, however, with the WikiLeaks disclosures, they do. Indeed, the classified messages from the Army and the State Department released by WikiLeaks are, quite literally, “ground truth.”
Weiye Loh

Arianna Huffington: The Media Gets It Wrong on WikiLeaks: It's About Broken Trust, Not ... - 0 views

  • Too much of the coverage has been meta -- focusing on questions about whether the leaks were justified, while too little has dealt with the details of what has actually been revealed and what those revelations say about the wisdom of our ongoing effort in Afghanistan. There's a reason why the administration is so upset about these leaks.
  • True, there hasn't been one smoking-gun, bombshell revelation -- but that's certainly not to say the cables haven't been revealing. What there has been instead is more of the consistent drip, drip, drip of damning details we keep getting about the war.
  • It's notable that the latest leaks came out the same week President Obama went to Afghanistan for his surprise visit to the troops -- and made a speech about how we are "succeeding" and "making important progress" and bound to "prevail."
  • The WikiLeaks cables present quite a different picture. What emerges is one reality (the real one) colliding with another (the official one). We see smart, good-faith diplomats and foreign service personnel trying to make the truth on the ground match up to the one the administration has proclaimed to the public. The cables show the widening disconnect. It's like a foreign policy Ponzi scheme -- this one fueled not by the public's money, but the public's acquiescence.
  • The second aspect of the story -- the one that was the focus of the symposium -- is the changing relationship to government that technology has made possible.
  • Back in the year 2007, B.W. (Before WikiLeaks), Barack Obama waxed lyrical about government and the internet: "We have to use technology to open up our democracy. It's no coincidence that one of the most secretive administrations in our history has favored special interests and pursued policies that could not stand up to the sunlight."
  • Not long after the election, in announcing his "Transparency and Open Government" policy, the president proclaimed: "Transparency promotes accountability and provides information for citizens about what their Government is doing. Information maintained by the Federal Government is a national asset." Cut to a few years later. Now that he's defending a reality that doesn't match up to, well, reality, he's suddenly not so keen on the people having a chance to access this "national asset."
  • Even more wikironic are the statements by his Secretary of State who, less than a year ago, was lecturing other nations about the value of an unfettered and free internet. Given her description of the WikiLeaks as "an attack on America's foreign policy interests" that have put in danger "innocent people," her comments take on a whole different light. Some highlights: In authoritarian countries, information networks are helping people discover new facts and making governments more accountable... technologies with the potential to open up access to government and promote transparency can also be hijacked by governments to crush dissent and deny human rights... As in the dictatorships of the past, governments are targeting independent thinkers who use these tools. Now "making government accountable" is, as White House spokesman Robert Gibbs put it, a "reckless and dangerous action."
  • Jay Rosen, one of the participants in the symposium, wrote a brilliant essay entitled "From Judith Miller to Julian Assange." He writes: For the portion of the American press that still looks to Watergate and the Pentagon Papers for inspiration, and that considers itself a check on state power, the hour of its greatest humiliation can, I think, be located with some precision: it happened on Sunday, September 8, 2002. That was when the New York Times published Judith Miller and Michael Gordon's breathless, spoon-fed -- and ultimately inaccurate -- account of Iraqi attempts to buy aluminum tubes to produce fuel for a nuclear bomb.
  • Miller's after-the-facts-proved-wrong response, as quoted in a Michael Massing piece in the New York Review of Books, was: "My job isn't to assess the government's information and be an independent intelligence analyst myself. My job is to tell readers of The New York Times what the government thought about Iraq's arsenal." In other words, her job is to tell citizens what their government is saying, not, as Obama called for in his transparency initiative, what their government is doing.
  • As Jay Rosen put it: Today it is recognized at the Times and in the journalism world that Judy Miller was a bad actor who did a lot of damage and had to go. But it has never been recognized that secrecy was itself a bad actor in the events that led to the collapse, that it did a lot of damage, and parts of it might have to go. Our press has never come to terms with the ways in which it got itself on the wrong side of secrecy as the national security state swelled in size after September 11th.
  • And in the WikiLeaks case, much of media has again found itself on the wrong side of secrecy -- and so much of the reporting about WikiLeaks has served to obscure, to conflate, to mislead. For instance, how many stories have you heard or read about all the cables being "dumped" in "indiscriminate" ways with no attempt to "vet" and "redact" the stories first. In truth, only just over 1,200 of the 250,000 cables have been released, and WikiLeaks is now publishing only those cables vetted and redacted by their media partners, which includes the New York Times here and the Guardian in England.
  • The establishment media may be part of the media, but they're also part of the establishment. And they're circling the wagons. One method they're using, as Andrew Rasiej put it after the symposium, is to conflate the secrecy that governments use to operate and the secrecy that is used to hide the truth and allow governments to mislead us.
  • Nobody, including WikiLeaks, is promoting the idea that government should exist in total transparency,
  • Assange himself would not disagree. "Secrecy is important for many things," he told Time's Richard Stengel. "We keep secret the identity of our sources, as an example, take great pains to do it." At the same time, however, secrecy "shouldn't be used to cover up abuses."
  • Decentralizing government power, limiting it, and challenging it was the Founders' intent and these have always been core conservative principles. Conservatives should prefer an explosion of whistleblower groups like WikiLeaks to a federal government powerful enough to take them down. Government officials who now attack WikiLeaks don't fear national endangerment, they fear personal embarrassment. And while scores of conservatives have long promised to undermine or challenge the current monstrosity in Washington, D.C., it is now an organization not recognizably conservative that best undermines the political establishment and challenges its very foundations.
  • It is not, as Simon Jenkins put it in the Guardian, the job of the media to protect the powerful from embarrassment. As I said at the symposium, its job is to play the role of the little boy in The Emperor's New Clothes -- brave enough to point out what nobody else is willing to say.
  • When the press trades truth for access, it is WikiLeaks that acts like the little boy. "Power," wrote Jenkins, "loathes truth revealed. When the public interest is undermined by the lies and paranoia of power, it is disclosure that takes sanity by the scruff of its neck and sets it back on its feet."
  • A final aspect of the story is Julian Assange himself. Is he a visionary? Is he an anarchist? Is he a jerk? This is fun speculation, but why does it have an impact on the value of the WikiLeaks revelations?
Weiye Loh

Free Speech under Siege - Robert Skidelsky - Project Syndicate - 0 views

  • Breaking the cultural code damages a person’s reputation, and perhaps one’s career. Britain’s Justice Secretary Kenneth Clarke recently had to apologize for saying that some rapes were less serious than others, implying the need for legal discrimination. The parade of gaffes and subsequent groveling apologies has become a regular feature of public life. In his classic essay On Liberty, John Stuart Mill defended free speech on the ground that free inquiry was necessary to advance knowledge. Restrictions on certain areas of historical inquiry are based on the opposite premise: the truth is known, and it is impious to question it. This is absurd; every historian knows that there is no such thing as final historical truth.
  • It is not the task of history to defend public order or morals, but to establish what happened. Legally protected history ensures that historians will play safe. To be sure, living by Mill’s principle often requires protecting the rights of unsavory characters. David Irving writes mendacious history, but his prosecution and imprisonment in Austria for “Holocaust denial” would have horrified Mill.
  • the pressure for “political correctness” rests on the argument that the truth is unknowable. Statements about the human condition are essentially matters of opinion.  Because a statement of opinion by some individuals is almost certain to offend others, and since such statements make no contribution to the discovery of truth, their degree of offensiveness becomes the sole criterion for judging their admissibility. Hence the taboo on certain words, phrases, and arguments that imply that certain individuals, groups, or practices are superior or inferior, normal or abnormal; hence the search for ever more neutral ways to label social phenomena, thereby draining language of its vigor and interest.
  • A classic example is the way that “family” has replaced “marriage” in public discourse, with the implication that all “lifestyles” are equally valuable, despite the fact that most people persist in wanting to get married. It has become taboo to describe homosexuality as a “perversion,” though this was precisely the word used in the 1960’s by the radical philosopher Herbert Marcuse (who was praising homosexuality as an expression of dissent). In today’s atmosphere of what Marcuse would call “repressive tolerance,” such language would be considered “stigmatizing.”
  • The sociological imperative behind the spread of “political correctness” is the fact that we no longer live in patriarchal, hierarchical, mono-cultural societies, which exhibit general, if unreflective, agreement on basic values. The pathetic efforts to inculcate a common sense of “Britishness” or “Dutchness” in multi-cultural societies, however well-intentioned, attest to the breakdown of a common identity.
  • The defense of free speech is made no easier by the abuses of the popular press. We need free media to expose abuses of power. But investigative journalism becomes discredited when it is suborned to “expose” the private lives of the famous when no issue of public interest is involved. Entertaining gossip has mutated into an assault on privacy, with newspapers claiming that any attempt to keep them out of people’s bedrooms is an assault on free speech. You know that a doctrine is in trouble when not even those claiming to defend it understand what it means. By that standard, the classic doctrine of free speech is in crisis. We had better sort it out quickly – legally, morally, and culturally – if we are to retain a proper sense of what it means to live in a free society.
  • Yet freedom of speech in the West is under strain. Traditionally, British law imposed two main limitations on the "right to free speech." The first prohibited the use of words or expressions likely to disrupt public order; the second was the law against libel. There are good grounds for both - to preserve the peace, and to protect individuals' reputations from lies. Most free societies accept such limits as reasonable. But the law has recently become more restrictive. "Incitement to religious and racial hatred" and "incitement to hatred on the basis of sexual orientation" are now illegal in most European countries, independent of any threat to public order. The law has shifted from proscribing language likely to cause violence to prohibiting language intended to give offense. A blatant example of this is the law against Holocaust denial. To deny or minimize the Holocaust is a crime in 15 European countries and Israel. It may be argued that the Holocaust was a crime so uniquely abhorrent as to qualify as a special case. But special cases have a habit of multiplying.
Weiye Loh

You can't handle the truth - The Boston Globe - 0 views

  • You can't handle the truth: A respected scientist set out to determine which drugs are actually the most dangerous -- and discovered that the answers are, well, awkward
Weiye Loh

Book Review: "Merchants of Doubt: How a Handful of Scientists Obscured the Tr... - 0 views

  • Merchants of Doubt is exactly what its subtitle says: a historical view of how a handful of scientists have obscured the truth on matters of scientific fact.
  • it was a very small group who were responsible for creating a great deal of doubt on a variety of issues. The book opens in 1953, when the tobacco industry began to take action to obscure the truth about smoking’s harmful effects, as its relationship to cancer first received widespread media attention.
  • The tobacco industry exploited scientific tendency to be conservative in drawing conclusions, to throw up a handful of cherry-picked data and misleading statistics and to “spin unreasonable doubt.” This tactic, combined with the media’s adherence to the “fairness doctrine” which was interpreted as giving equal time “to both sides [of an issue], rather than giving accurate weight to both sides” allowed the tobacco industry to delay regulation for decades.
  • The natural scientific doubt was this: scientists could not say with absolute certainty that smoking caused cancer, because there wasn’t an invariable effect. “Smoking does not kill everyone who smokes, it only kills about half of them.” All scientists could say was that there was an extremely strong association between smoking and serious health risks
  • the “Tobacco Strategy” was created, and had two tactics: To “use normal scientific doubt to undermine the status of actual scientific knowledge” and To exploit the media’s adherence to the fairness doctrine, which would give equal weight to each side of a debate, regardless of any disparity in the supporting scientific evidence
  • Fred Seitz was a scientist who learned the Tobacco Strategy first-hand. He had an impressive resume. An actual rocket scientist, he helped build the atomic bomb in the 1940s, worked for NATO in the 1950s, was president of the U.S. National Academy of Sciences in the 1960s, and of Rockefeller University in the 1970s.
  • After his retirement in 1979, Seitz took on a job for the tobacco industry. Over the next 6 years, he doled out $45 million of R.J. Reynolds’ money to fund biomedical research to create “an extensive body of scientifically well-grounded data useful in defending the industry against attacks” by such means as focussing on alternative “causes or development mechanisms of chronic degenerative diseases imputed to cigarettes.” He was joined by, most notably, two other physicists: William Nierenberg, who also worked on the atom bomb in the 1940s, submarine warfare, NATO, and was appointed director of the Scripps Institution of Oceanography in 1965; and Robert Jastrow, who founded NASA’s Goddard Institute for Space Studies, which he directed until he retired in 1981 to teach at Dartmouth College.
  • In 1984, these three founded the think tank, the George C. Marshall Institute
  • None of these men were experts in environmental and health issues, but they all “used their scientific credentials to present themselves as authorities, and they used their authority to discredit any science they didn’t like.” They turned out to be wrong, in terms of the science, on every issue they weighed in on. But they turned out to be highly successful in preventing or limiting regulation that the scientific evidence would warrant.
  • The bulk of the book details how these men and others applied the Tobacco Strategy to create doubt on the following issues: the unfeasibility of the Strategic Defense Initiative (Ronald Reagan’s “Star Wars”), and the resultant threat of nuclear winter that Carl Sagan, among others, pointed out; acid rain; depletion of the ozone layer; second-hand smoke; and, most recently and significantly, global warming.
  • Having pointed out the dangers the doubt-mongers pose, Oreskes and Conway propose a remedy: an emphasis on scientific literacy, not in the sense of memorizing scientific facts, but in being able to assess which scientists to trust.
Weiye Loh

Adventures in Flay-land: Scepticism versus Denialism - Delingpole Part II - 0 views

  • wrote a piece about James Delingpole's unfortunate appearance on the BBC program Horizon on Monday. In that piece I referred to one of his own Telegraph articles in which he criticizes renowned sceptic Dr Ben Goldacre for betraying the principles of scepticism in his regard of the climate change debate. That article turns out to be rather instructional as it highlights perfectly the difference between real scepticism and the false scepticism commonly described as denialism.
  • It appears that James has tremendous respect for Ben Goldacre, who is a qualified medical doctor and has written a best-selling book about science scepticism called Bad Science and continues to write a popular Guardian science column. Here's what Delingpole has to say about Dr Goldacre: Many of Goldacre’s campaigns I support. I like and admire what he does. But where I don’t respect him one jot is in his views on ‘Climate Change,’ for they jar so very obviously with his supposed stance of determined scepticism in the face of establishment lies.
  • Scepticism is not some sort of rebellion against the establishment as Delingpole claims. It is not in itself an ideology. It is merely an approach to evaluating new information. There are varying definitions of scepticism, but Goldacre's variety goes like this: A sceptic does not support or promote any new theory until it is proven to his or her satisfaction that the new theory is the best available. Evidence is examined and accepted or discarded depending on its persuasiveness and reliability. Sceptics like Ben Goldacre have a deep appreciation for the scientific method of testing a hypothesis through experimentation and are generally happy to change their minds when the evidence supports the opposing view. Sceptics are not true believers, but they search for the truth. Far from challenging the established scientific consensus, Goldacre in Bad Science typically defends the scientific consensus against alternative medical views that fall back on untestable positions. In science the consensus is sometimes proven wrong, and while this process is imperfect it eventually results in the old consensus being replaced with a new one.
  • So the question becomes "what is denialism?" Denialism is a mindset that chooses to deny reality in order to avoid an uncomfortable truth. Denialism creates a false sense of truth through the subjective selection of evidence (cherry picking). Unhelpful evidence is rejected and excuses are made, while supporting evidence is accepted uncritically - its meaning and importance exaggerated. It is a common feature of denialism to claim the existence of some sort of powerful conspiracy to suppress the truth. Rejection by the mainstream of some piece of evidence supporting the denialist view, no matter how flawed, is taken as further proof of the supposed conspiracy. In this way the denialist always has a fallback position.
  • Delingpole makes the following claim: Whether Goldacre chooses to ignore it or not, there are many, many hugely talented, intelligent men and women out there – from mining engineer turned Hockey-Stick-breaker Steve McIntyre and economist Ross McKitrick to bloggers Donna LaFramboise and Jo Nova to physicist Richard Lindzen….and I really could go on and on – who have amassed a body of hugely powerful evidence to show that the AGW meme which has spread like a virus around the world these last 20 years is seriously flawed.
  • So he mentions a bunch of people who are intelligent and talented and have amassed evidence to the effect that the consensus of AGW (Anthropogenic Global Warming) is a myth. Should I take his word for it? No. I am a sceptic. I will examine the evidence and the people behind it.
  • MM claims that global temperatures are not accelerating. The claims have however been roundly disproved as explained here. It is worth noting at this point that neither man is a climate scientist. McKitrick is an economist and McIntyre is a mining industry policy analyst. It is clear from the very detailed rebuttal article that McIntyre and McKitrick have no qualifications to critique the earlier paper and betray fundamental misunderstandings of methodologies employed in that study.
  • This Wikipedia article explains in better layman's terms how the MM claims are faulty.
  • It is difficult for me to find out much about blogger Donna LaFramboise. As far as I can see she runs her own blog at http://nofrakkingconsensus.wordpress.com and is the founder of another site here http://www.noconsensus.org/. It's not very clear to me what her credentials are.
  • She seems to be a critic of the so-called climate bible, a comprehensive report by the UN Intergovernmental Panel on Climate Change (IPCC)
  • I am familiar with some of the criticisms of this panel. Working Group 2 famously overstated the estimated rate of disappearance of the Himalayan glacier in 2007 and was forced to admit the error. Working Group 2 is a panel of biologists and sociologists whose job is to evaluate the impact of climate change. These people are not climate scientists. Their report takes for granted the scientific basis of climate change, which has been delivered by Working Group 1 (the climate scientists). The science revealed by Working Group 1 is regarded as sound (of course this is just a conspiracy, right?) At any rate, I don't know why I should pay attention to this blogger. Anyone can write a blog and anyone with money can own a domain. She may be intelligent, but I don't know anything about her and with all the millions of blogs out there I'm not convinced hers is of any special significance.
  • Richard Lindzen. Okay, there's information about this guy. He has a wiki page, which is more than I can say for the previous two. He is an atmospheric physicist and Professor of Meteorology at MIT.
  • According to Wikipedia, it would seem that Lindzen is well respected in his field and represents the 3% of the climate science community who disagree with the 97% consensus.
  • The second to last paragraph of Delingpole's article asks this: If Goldacre really wants to stick his neck out, why doesn’t he try arguing against a rich, powerful, bullying Climate-Change establishment which includes all three British main political parties, the National Academy of Sciences, the Royal Society, the Prince of Wales, the Prime Minister, the President of the USA, the EU, the UN, most schools and universities, the BBC, most of the print media, the Australian Government, the New Zealand Government, CNBC, ABC, the New York Times, Goldman Sachs, Deutsche Bank, most of the rest of the City, the wind farm industry, all the Big Oil companies, any number of rich charitable foundations, the Church of England and so on? I hope Ben won't mind if I take this one for him (first of all, Big Oil companies? Are you serious?). The answer is a question and the question is "Where is your evidence?"
Weiye Loh

Meet the man who broke the vaccine-autism scandal - The Globe and Mail - 0 views

  • Brian Deer radiates a remarkably bland persona for someone who stunned the global medical community and unravelled what he calls “one of those Aristotelian stories where you have both pity and fear.” This is the journalist behind the series of stories that completely discredited the research linking the measles mumps rubella (MMR) vaccine to autism. First published in The Lancet in 1998, it unleashed a worldwide public health scare and gave distressed parents of autistic children a place to lay blame for the devastation of the diagnosis.
  • Seven years ago, Mr. Deer, a freelance journalist who works mostly for The Sunday Times in London, began an investigation into research conducted in the 1990s, which had spawned a worldwide debate about the safety and well-being of children. The published research showed a link between the MMR vaccine, routinely given to children in the first years of life, to the onset of autism, a developmental disorder that appears in the first three years, and affects a child’s social behaviour and communication skills. Out of fear, many parents refused to immunize their children.The final outcome of Mr. Deer’s investigation came last month, when Andrew Wakefield, the lead researcher, as well as two of his colleagues, saw their reputations torn to shreds in a medical misconduct inquiry, the longest in history, by the General Medical Council in the United Kingdom. More than 30 charges, including four counts of dishonesty in regard to money, research and public statements, were proven against Dr. Wakefield. The Lancet retracted the paper in 2010.
  • The MMR research paper, which triggered a high-profile anti-vaccine campaign, led by such celebrities as actress Jenny McCarthy, involved 12 children between the ages of three and nine. All had brain disorders. The parents of eight of them reported that signs of autism arose within days of the children receiving the MMR vaccine.“It was just too cute,” Mr. Deer says of the findings. Through the Freedom of Information Act, he discovered that Dr. Wakefield’s research had been funded by the British Legal Aid fund, and that the children had been recruited through lawyers and anti-vaccine groups.
  • Dr. Wakefield sued him and The Sunday Times for libel, but later withdrew the charges and was forced to pay Mr. Deer’s legal costs, which amounted to £1.4 million (almost $3-million). In the subsequent medical inquiry, Dr. Wakefield was shown to have had “a callous disregard” for the “distress and pain” of the developmentally challenged children, some of whom were subjected to invasive “high risk” procedures, including lumbar punctures, without clinical reasons.After the first story ran in 2004, Mr. Deer, who is unmarried and has no children, also revealed that Dr. Wakefield had patented a single measles vaccine after creating fear about the standard MMR shot.
  • To this day, Dr. Wakefield remains unrepentant. He boycotted the legal inquiry just as he has avoided any interview with Mr. Deer. A father of four children, he has a large ranch in Austin, Texas. Some parents in the anti-vaccine community, enabled by the Internet, have falsely accused Mr. Deer of mounting a kangaroo court against Dr. Wakefield.
  • While the consequences of Dr. Wakefield’s research were serious – immunization rates in Britain dropped dramatically and measles outbreaks ensued – it also gave parents of autistic children a purpose (however ill-founded) in which to find solace. How does he feel about taking that away?“I can’t think through the consequences of trying to tell the truth,” he stutters, seemingly surprised by the question. After a thoughtful pause he adds: “I think those parents are freer for having the truth than being caught in denial and deception.”
    • Weiye Loh
       
      Truth hurts. That's why people prefer to live in denial. 
Weiye Loh

Balderdash - 0 views

  • Addendum: People have notified me that after almost 2 1/2 years, many of the pictures are now missing. I have created galleries with the pictures and hosted them on my homepage:
  • I have no problem at all with people who have plastic surgery. Unlike those who believe that while it is great if you are born pretty, having a surgically constructed or enhanced face is a big no-no (i.e. a version of the naturalistic fallacy), I have no problems with people getting tummy tucks, chin lifts, boob jobs or any other form of physical sculpting or enhancement. After all, she seems to have gotten quite a reception on Hottest Blogger.
  • Denying that you have gone under the knife and feigning, with a note of irritation, tired resignation about the accusations, however, is a very different matter. Considering that many sources know the truth about her plastic surgery, this is a most perilous assertion to make and I was riled enough to come up with this blog post. [Addendum: She also goes around online squashing accusations and allegations of surgery.]
  • Two wrongs and two rights.
  • Not exactly the most recent case, but still worth revisiting the ethical concerns behind it. It is easy to find more than one ethical question and problem in this case and it involves more than one technology. The dichotomies of lies versus truths, nature versus man-made, wrongs versus rights, beautiful versus ugly, and so on... So who is right and who is wrong in this case? Whose and what rights are invoked and/or violated? Can a right be wrong? Can a wrong be right? Do two wrongs make one right? What parts do the technologies play in this case?
  • On a side note, given the internet's capability to dig up past issues and rehash them, is it ethical for us to open up old wounds in the name of academic freedom? Beyond research, with IRB and such, what about daily academic discourses and processes? What are the ethical concerns?
Weiye Loh

Balderdash - 0 views

  • A letter Paul wrote to complain about the "The Dead Sea Scrolls" exhibition at the Arts House:
    To Ms. Amira Osman (Marketing and Communications Manager)
    cc. Colin Goh, General Manager; Florence Lee, Deputy General Manager
    Dear Ms. Osman,
    I visited the Dead Sea Scrolls "exhibition" today with my wife. Thinking that it was from a legitimate scholarly institute or (how naïve of me!) the Israel Antiquities Authority, I was looking forward to a day of education and entertainment.
    Yet when I got in, much of the exhibition (and booklets) merely espouses an evangelical (fundamentalist) view of the Bible – there are booklets on the inerrancy of the Bible, on how archaeology has proven the Bible to be true, etc.
    Apart from these there are many blatant misrepresentations of the state of archaeology and mainstream biblical scholarship:
    a) There was an initial screening upon entry of a 5-10 minute pseudo-documentary on the Dead Sea Scrolls. A presenter (can't remember the name) was described as a "biblical archaeologist" – a term that no serious archaeologist working in the Levant would apply to him or herself. (Some prefer the term "Syro-Palestinian archaeologist" but almost all reject the term "biblical archaeologist".) See the book by Thomas W. Davis, "Shifting Sands: The Rise and Fall of Biblical Archaeology", Oxford, New York 2004. Davis is an actual archaeologist working in the field and the book tells why the term "biblical archaeologist" is not considered a legitimate term by serious archaeologists.
    b) In the same presentation, the presenter made the erroneous statement that the entire Old Testament was translated into Greek in the third century BCE. This is a mistake – only the Pentateuch (the first five books of the Old Testament) was translated during that time. Note that this 'error' is not inadvertent but is a familiar claim by evangelical apologists who try to argue for an early date for all the books of the Old Testament: if all the books had been translated by the third century BCE, obviously these books must all have been written before then! This flies against modern scholarship, which shows that some books in the Old Testament, such as the Book of Daniel, were written only in the second century BCE. The actual state of scholarship on the Septuagint [the Greek translation of the Bible] is accurately given in the book by Ernst Würthwein, "The Text of the Old Testament", Eerdmans 1988, pp. 52-54.
    c) Perhaps the most blatant error was one which claimed that the "Magdalene fragments" – which contain the 26th chapter of the Gospel of Matthew – are dated to 50 AD!!! Scholars are unanimous in dating these fragments to 200 AD. The only 'scholar' cited who dated these fragments to 50 AD was the German papyrologist Carsten Thiede – a well-known fundamentalist. This is what Burton Mack (a critical – legitimate – NT scholar) has to say about Thiede's eccentric dating: "From a critical scholar's point of view, Thiede's proposal is an example of just how desperate the Christian imagination can become in the quest to argue for the literal facticity of the Christian gospels" [Mack, Burton L., "Who Wrote the New Testament?: The Making of the Christian Myth", HarperCollins, San Francisco 1995]. Yet the dating of 50 AD is presented as though it is a scholarly consensus position!
    In fact the last point was so blatant that I confronted the exhibitors. (Tak Boleh Tahan!!) One American exhibitor told me that "Yes, it could have been worded differently, but then we would have to change the whole display" (!!). When I told him that this was not a typo but a blatant attempt to deceive, he mentioned that Thiede's views are supported by "The Dallas Theological Seminary" – another well-known evangelical institute!
    I have no issue with the religious strengthening their faith by having their own internal exhibitions on historical artifacts etc. But when it is presented to the public as a scholarly exhibition – this is quite close to being dishonest.
    I felt cheated of the $36 I paid for the tickets and of the hour that I spent there before realizing what type of exhibition it was.
    I am disappointed with The Arts House for showcasing this without warning potential visitors of its clear religious bias.
    Yours sincerely,
    Paul Tobin
    To their credit, the Arts House speedily replied.
    • Weiye Loh
       
      The issue of truth is indeed so maddening. Certainly, the 'production' of truth has been widely researched and debated by scholars. Spivak, for example, argued for deconstruction by means of questioning the privilege of identity so that someone is believed to have the truth. And along the same line, albeit somewhat misunderstood I feel, it was mentioned in class that somehow people who are oppressed know better.
jaime yeo

Spread of HIV misinformation online - 6 views

"HIV denialists" - people who deny that HIV will cause AIDS, have been spreading information about their beliefs on the Internet. There is a cause for concern that such false information will deter...

misinformation online rights truth

started by jaime yeo on 21 Aug 09 no follow-up yet
Weiye Loh

Rationally Speaking: On Utilitarianism and Consequentialism - 0 views

  • Utilitarianism and consequentialism are different, yet closely related philosophical positions. Utilitarians are usually consequentialists, and the two views mesh in many areas, but each rests on a different claim
  • Utilitarianism's starting point is that we all attempt to seek happiness and avoid pain, and therefore our moral focus ought to center on maximizing happiness (or, human flourishing generally) and minimizing pain for the greatest number of people. This is both about what our goals should be and how to achieve them.
  • Consequentialism asserts that determining the greatest good for the greatest number of people (the utilitarian goal) is a matter of measuring outcome, and so decisions about what is moral should depend on the potential or realized costs and benefits of a moral belief or action.
  • first question we can reasonably ask is whether all moral systems are indeed focused on benefiting human happiness and decreasing pain.
  • Jeremy Bentham, the founder of utilitarianism, wrote the following in his Introduction to the Principles of Morals and Legislation: “When a man attempts to combat the principle of utility, it is with reasons drawn, without his being aware of it, from that very principle itself.”
  • Michael Sandel discusses this line of thought in his excellent book, Justice: What’s the Right Thing to Do?, and sums up Bentham’s argument as such: “All moral quarrels, properly understood, are [for Bentham] disagreements about how to apply the utilitarian principle of maximizing pleasure and minimizing pain, not about the principle itself.”
  • But Bentham’s definition of utilitarianism is perhaps too broad: are fundamentalist Christians or Muslims really utilitarians, just with different ideas about how to facilitate human flourishing?
  • one wonders whether this makes the word so all-encompassing in meaning as to render it useless.
  • Yet, even if pain and happiness are the objects of moral concern, so what? As philosopher Simon Blackburn recently pointed out, “Every moral philosopher knows that moral philosophy is functionally about reducing suffering and increasing human flourishing.” But is that the central and sole focus of all moral philosophies? Don’t moral systems vary in their core focuses?
  • Consider the observation that religious belief makes humans happier, on average
  • Secularists would rightly resist the idea that religious belief is moral if it makes people happier. They would reject the very idea because deep down, they value truth – a value that is non-negotiable. Utilitarians would assert that truth is just another utility, for people can only value truth if they take it to be beneficial to human happiness and flourishing.
  • We might all agree that morality is “functionally about reducing suffering and increasing human flourishing,” as Blackburn says, but how do we achieve that? Consequentialism posits that we can get there by weighing the consequences of beliefs and actions as they relate to human happiness and pain. Sam Harris recently wrote: “It is true that many people believe that ‘there are non-consequentialist ways of approaching morality,’ but I think that they are wrong. In my experience, when you scratch the surface on any deontologist, you find a consequentialist just waiting to get out. For instance, I think that Kant's Categorical Imperative only qualifies as a rational standard of morality given the assumption that it will be generally beneficial (as J.S. Mill pointed out at the beginning of Utilitarianism). Ditto for religious morality.”
  • we might wonder about the elasticity of words, in this case consequentialism. Do fundamentalist Christians and Muslims count as consequentialists? Is consequentialism so empty of content that to be a consequentialist one need only think he or she is benefiting humanity in some way?
  • Harris’ argument is that one cannot adhere to a certain conception of morality without believing it is beneficial to society
  • This still seems somewhat obvious to me as a general statement about morality, but is it really the point of consequentialism? Not really. Consequentialism is much more focused than that. Consider the issue of corporal punishment in schools. Harris has stated that we would be forced to admit that corporal punishment is moral if studies showed that “subjecting children to ‘pain, violence, and public humiliation’ leads to ‘healthy emotional development and good behavior’ (i.e., it conduces to their general well-being and to the well-being of society). If it did, well then yes, I would admit that it was moral. In fact, it would appear moral to more or less everyone.” Harris is being rhetorical – he does not believe corporal punishment is moral – but the point stands.
  • An immediate pitfall of this approach is that it does not qualify corporal punishment as the best way to raise emotionally healthy children who behave well.
  • The virtue ethicists inside us would argue that we ought not to foster a society in which people beat and humiliate children, never mind the consequences. There is also a reasonable and powerful argument based on personal freedom. Don’t children have the right to be free from violence in the public classroom? Don’t children have the right not to suffer intentional harm without consent? Isn’t that part of their “moral well-being”?
  • If consequences were really at the heart of all our moral deliberations, we might live in a very different society.
  • what if economies based on slavery lead to an increase in general happiness and flourishing for their respective societies? Would we admit slavery was moral? I hope not, because we value certain ideas about human rights and freedom. Or, what if the death penalty truly deterred crime? And what if we knew everyone we killed was guilty as charged, meaning no need for The Innocence Project? I would still object, on the grounds that it is morally wrong for us to kill people, even if they have committed the crime of which they are accused. Certain things hold, no matter the consequences.
  • We all do care about increasing human happiness and flourishing, and decreasing pain and suffering, and we all do care about the consequences of our beliefs and actions. But we focus on those criteria to differing degrees, and we have differing conceptions of how to achieve the respective goals – making us perhaps utilitarians and consequentialists in part, but not in whole.
    Is everyone a utilitarian and/or consequentialist, whether or not they know it? That is what some people - from Jeremy Bentham and John Stuart Mill to Sam Harris - would have you believe. But there are good reasons to be skeptical of such claims.
Weiye Loh

Skepticblog » The Decline Effect - 0 views

  • The first group are those with an overly simplistic or naive sense of how science functions. This is a view of science similar to those films created in the 1950s and meant to be watched by students, with the jaunty music playing in the background. This view generally respects science, but has a significant underappreciation for the flaws and complexity of science as a human endeavor. Those with this view are easily scandalized by revelations of the messiness of science.
  • The second cluster is what I would call scientific skepticism – which combines a respect for science and empiricism as a method (really “the” method) for understanding the natural world, with a deep appreciation for all the myriad ways in which the endeavor of science can go wrong. Scientific skeptics, in fact, seek to formally understand the process of science as a human endeavor with all its flaws. It is therefore often skeptics pointing out phenomena such as publication bias, the placebo effect, the need for rigorous controls and blinding, and the many vagaries of statistical analysis. But at the end of the day, as complex and messy the process of science is, a reliable picture of reality is slowly ground out.
  • The third group, often frustrating to scientific skeptics, are the science-deniers (for lack of a better term). They may take a postmodernist approach to science – science is just one narrative with no special relationship to the truth. Whatever you call it, what the science-deniers in essence do is describe all of the features of science that the skeptics do (sometimes annoyingly pretending that they are pointing these features out to skeptics) but then come to a different conclusion at the end – that science (essentially) does not work.
  • ...13 more annotations...
  • this third group – the science deniers – started out in the naive group, and then were so scandalized by the realization that science is a messy human endeavor that they leapt right to the nihilistic conclusion that science must therefore be bunk.
  • The article by Lehrer falls generally into this third category. He is discussing what has been called “the decline effect” – the fact that effect sizes in scientific studies tend to decrease over time, sometimes to nothing.
  • This term was first applied to the parapsychological literature, and was in fact proposed as a real phenomenon of ESP – that ESP effects literally decline over time. Skeptics have criticized this view as magical thinking and hopelessly naive – Occam’s razor favors the conclusion that it is the flawed measurement of ESP, not ESP itself, that is declining over time.
  • Lehrer, however, applies this idea to all of science, not just parapsychology. He writes: And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
  • Lehrer is ultimately referring to aspects of science that skeptics have been pointing out for years (as a way of discerning science from pseudoscience), but Lehrer takes it to the nihilistic conclusion that it is difficult to prove anything, and that ultimately “we still have to choose what to believe.” Bollocks!
  • Lehrer is describing the cutting edge or the fringe of science, and then acting as if it applies all the way down to the core. I think the problem is that there is so much scientific knowledge that we take for granted – so much so that we forget it is knowledge that derived from the scientific method, and at one point was not known.
  • It is telling that Lehrer uses as his primary examples of the decline effect studies from medicine, psychology, and ecology – areas where the signal to noise ratio is lowest in the sciences, because of the highly variable and complex human element. We don’t see as much of a decline effect in physics, for example, where phenomena are more objective and concrete.
  • If the truth itself does not “wear off”, as the headline of Lehrer’s article provocatively states, then what is responsible for this decline effect?
  • it is no surprise that effect sizes in preliminary studies tend to be positive. This can be explained on the basis of experimenter bias – scientists want to find positive results, and initial experiments are often flawed or less than rigorous. It takes time to figure out how to rigorously study a question, and so early studies will tend not to control for all the necessary variables. There is further publication bias in which positive studies tend to be published more than negative studies.
  • Further, some preliminary research may be based upon chance observations – a false pattern based upon a quirky cluster of events. If these initial observations are used in the preliminary studies, then the statistical fluke will be carried forward. Later studies are then likely to exhibit a regression to the mean, or a return to more statistically likely results (which is exactly why you shouldn’t use initial data when replicating a result, but should use entirely fresh data – a mistake for which astrologers are infamous). [A short simulation illustrating this appears after this list.]
  • skeptics are frequently cautioning against new or preliminary scientific research. Don’t get excited by every new study touted in the lay press, or even by a university’s press release. Most new findings turn out to be wrong. In science, replication is king. Consensus and reliable conclusions are built upon multiple independent lines of evidence, replicated over time, all converging on one conclusion.
  • Lehrer does make some good points in his article, but they are points that skeptics are fond of making. In order to have a mature and functional appreciation for the process and findings of science, it is necessary to understand how science works in the real world, as practiced by flawed scientists and scientific institutions. This is the skeptical message.
  • But at the same time reliable findings in science are possible, and happen frequently – when results can be replicated and when they fit into the expanding intricate weave of the picture of the natural world being generated by scientific investigation.
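The regression-to-the-mean point in the annotations above can be illustrated with a short simulation. This is a minimal sketch, not anything from the Skepticblog post itself: it assumes a true effect of zero, a large number of noisy preliminary studies, follow-up only of the most impressive initial results, and replications run on entirely fresh data. All names and numbers below are illustrative.

import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.0      # assume the real effect is nil
NOISE = 1.0            # measurement noise per study
N_STUDIES = 10_000     # preliminary studies

# Each preliminary study reports the true effect plus noise.
initial = TRUE_EFFECT + rng.normal(0, NOISE, N_STUDIES)

# Only "exciting" initial results (the top 5%) get published and followed up.
threshold = np.quantile(initial, 0.95)
followed_up = initial[initial >= threshold]

# Replications draw entirely fresh data, so their noise is independent.
replication = TRUE_EFFECT + rng.normal(0, NOISE, followed_up.size)

print(f"mean of published initial results: {followed_up.mean():.2f}")
print(f"mean of their replications:        {replication.mean():.2f}")

The first number comes out large and the second hovers near zero: the "effect" declines not because the truth wears off, but because selection on a noisy fluke cannot survive fresh data. Reusing the original observations in the replication, the mistake attributed to astrologers above, would carry the fluke forward instead of cancelling it.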
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • ...30 more annotations...
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. [A small simulated funnel comparison appears after this list.]
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
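Palmer's funnel-graph argument in the annotations above can likewise be sketched in a few lines. This is an illustrative simulation, not Palmer's data: it assumes a small true effect, studies of widely varying sample size, and selective reporting in which a small study is published only if its result clears the conventional significance threshold, while large studies are published regardless. Every constant below is an assumption chosen for illustration.

import numpy as np

rng = np.random.default_rng(1)

TRUE_EFFECT = 0.1                          # assumed small standardized effect
sample_sizes = rng.integers(10, 500, size=2000)

# Each study's estimate has standard error roughly 1/sqrt(n).
se = 1 / np.sqrt(sample_sizes)
estimates = TRUE_EFFECT + rng.normal(0, se)

# Selective reporting: publish if "significant" (z > 1.96), or if the study
# is large enough to be hard to ignore (n >= 300).
z = estimates / se
published = (z > 1.96) | (sample_sizes >= 300)

small = published & (sample_sizes < 100)
large = published & (sample_sizes >= 300)

print(f"mean published estimate, small studies: {estimates[small].mean():.2f}")
print(f"mean published estimate, large studies: {estimates[large].mean():.2f}")
print(f"true effect:                            {TRUE_EFFECT:.2f}")

In a funnel plot the large studies cluster near the true value while the published small studies sit well above it, producing the asymmetric funnel that Palmer reads as the signature of selective reporting.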
Weiye Loh

Learn to love uncertainty and failure, say leading thinkers | Edge question | Science |... - 0 views

  • Being comfortable with uncertainty, knowing the limits of what science can tell us, and understanding the worth of failure are all valuable tools that would improve people's lives, according to some of the world's leading thinkers.
  • The ideas were submitted as part of an annual exercise by the web magazine Edge, which invites scientists, philosophers and artists to opine on a major question of the moment. This year it was, "What scientific concept would improve everybody's cognitive toolkit?"
  • the public often misunderstands the scientific process and the nature of scientific doubt. This can fuel public rows over the significance of disagreements between scientists about controversial issues such as climate change and vaccine safety.
  • ...13 more annotations...
  • Carlo Rovelli, a physicist at the University of Aix-Marseille, emphasised the uselessness of certainty. He said that the idea of something being "scientifically proven" was practically an oxymoron and that the very foundation of science is to keep the door open to doubt.
  • "A good scientist is never 'certain'. Lack of certainty is precisely what makes conclusions more reliable than the conclusions of those who are certain: because the good scientist will be ready to shift to a different point of view if better elements of evidence, or novel arguments emerge. Therefore certainty is not only something of no use, but is in fact damaging, if we value reliability."
  • physicist Lawrence Krauss of Arizona State University agreed. "In the public parlance, uncertainty is a bad thing, implying a lack of rigour and predictability. The fact that global warming estimates are uncertain, for example, has been used by many to argue against any action at the present time," he said.
  • "However, uncertainty is a central component of what makes science successful. Being able to quantify uncertainty, and incorporate it into models, is what makes science quantitative, rather than qualitative. Indeed, no number, no measurement, no observable in science is exact. Quoting numbers without attaching an uncertainty to them implies they have, in essence, no meaning." [A minimal numerical example appears after this list.]
  • Neil Gershenfeld, director of the Massachusetts Institute of Technology's Centre for Bits and Atoms wants everyone to know that "truth" is just a model. "The most common misunderstanding about science is that scientists seek and find truth. They don't – they make and test models," he said.
  • Building models is very different from proclaiming truths. It's a never-ending process of discovery and refinement, not a war to win or destination to reach. Uncertainty is intrinsic to the process of finding out what you don't know, not a weakness to avoid. Bugs are features – violations of expectations are opportunities to refine them. And decisions are made by evaluating what works better, not by invoking received wisdom."
  • writer and web commentator Clay Shirky suggested that people should think more carefully about how they see the world. His suggestion was the Pareto principle, a pattern whereby the top 1% of the population control 35% of the wealth or, on Twitter, the top 2% of users send 60% of the messages. Sometimes known as the "80/20 rule", the Pareto principle means that the average is far from the middle. It is applicable to many complex systems. "And yet, despite a century of scientific familiarity, samples drawn from Pareto distributions are routinely presented to the public as anomalies, which prevents us from thinking clearly about the world," said Shirky. "We should stop thinking that average family income and the income of the median family have anything to do with one another, or that enthusiastic and normal users of communications tools are doing similar things, or that extroverts should be only moderately more connected than normal people. We should stop thinking that the largest future earthquake or market panic will be as large as the largest historical one; the longer a system persists, the likelier it is that an event twice as large as all previous ones is coming." [A small numerical sketch of a Pareto distribution appears after this list.]
  • Kevin Kelly, editor-at-large of Wired, pointed to the value of negative results. "We can learn nearly as much from an experiment that does not work as from one that does. Failure is not something to be avoided but rather something to be cultivated. That's a lesson from science that benefits not only laboratory research, but design, sport, engineering, art, entrepreneurship, and even daily life itself. All creative avenues yield the maximum when failures are embraced."
  • Michael Shermer, publisher of the Skeptic Magazine, wrote about the importance of thinking "bottom up not top down", since almost everything in nature and society happens this way.
  • But most people don't see things that way, said Shermer. "Bottom up reasoning is counterintuitive. This is why so many people believe that life was designed from the top down, and why so many think that economies must be designed and that countries should be ruled from the top down."
  • Roger Schank, a psychologist and computer scientist, proposed that we should all know the true meaning of "experimentation", which he said had been ruined by bad schooling, where pupils learn that scientists conduct experiments and if we copy exactly what they did in our high school labs we will get the results they got. "In effect we learn that experimentation is boring, is something done by scientists and has nothing to do with our daily lives."Instead, he said, proper experiments are all about assessing and gathering evidence. "In other words, the scientific activity that surrounds experimentation is about thinking clearly in the face of evidence obtained as the result of an experiment. But people who don't see their actions as experiments, and those who don't know how to reason carefully from data, will continue to learn less well from their own experiences than those who do
  • Lisa Randall, a physicist at Harvard University, argued that perhaps "science" itself would be a useful concept for wider appreciation. "The idea that we can systematically understand certain aspects of the world and make predictions based on what we've learned – while appreciating and categorising the extent and limitations of what we know – plays a big role in how we think.
  • "Many words that summarise the nature of science such as 'cause and effect', 'predictions', and 'experiments', as well as words that describe probabilistic results such as 'mean', 'median', 'standard deviation', and the notion of 'probability' itself help us understand more specifically what this means and how to interpret the world and behaviour within it."