The first group comprises those with an overly simplistic or naive sense of how science functions. It is a view of science reminiscent of those educational films of the 1950s, made to be shown to students with jaunty music playing in the background. This view generally respects science, but significantly underappreciates the flaws and complexity of science as a human endeavor. Those with this view are easily scandalized by revelations of the messiness of science.
The second cluster is what I would call scientific skepticism – which combines a respect for science and empiricism as a method (really “the” method) for understanding the natural world, with a deep appreciation for all the myriad ways in which the endeavor of science can go wrong. Scientific skeptics, in fact, seek to formally understand the process of science as a human endeavor with all its flaws. It is therefore often skeptics pointing out phenomena such as publication bias, the placebo effect, the need for rigorous controls and blinding, and the many vagaries of statistical analysis. But at the end of the day, as complex and messy as the process of science is, a reliable picture of reality is slowly ground out.
The third group, often frustrating to scientific skeptics, are the science-deniers (for lack of a better term). They may take a postmodernist approach to science – science is just one narrative with no special relationship to the truth. Whatever you call it, what the science-deniers in essence do is describe all of the features of science that the skeptics do (sometimes annoyingly pretending that they are pointing these features out to skeptics) but then come to a different conclusion at the end – that science (essentially) does not work.
This third group – the science deniers – started out in the naive group, and then were so scandalized by the realization that science is a messy human endeavor that they leapt right to the nihilistic conclusion that science must therefore be bunk.
The article by Lehrer falls generally into this third category. He is discussing what has been called “the decline effect” – the fact that effect sizes in scientific studies tend to decrease over time, sometimes to nothing.
This term was first applied to the parapsychological literature, and was in fact proposed as a real phenomenon of ESP – that ESP effects literally decline over time. Skeptics have criticized this view as magical thinking and hopelessly naive – Occam’s razor favors the conclusion that it is the flawed measurement of ESP, not ESP itself, that is declining over time.
Lehrer, however, applies this idea to all of science, not just parapsychology. He writes:
And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
Lehrer is ultimately referring to aspects of science that skeptics have been pointing out for years (as a way of discerning science from pseudoscience), but Lehrer takes it to the nihilistic conclusion that it is difficult to prove anything, and that ultimately “we still have to choose what to believe.” Bollocks!
Lehrer is describing the cutting edge or the fringe of science, and then acting as if it applies all the way down to the core. I think the problem is that there is so much scientific knowledge that we take for granted – so much so that we forget it is knowledge that derived from the scientific method, and at one point was not known.
It is telling that Lehrer uses as his primary examples of the decline effect studies from medicine, psychology, and ecology – areas where the signal to noise ratio is lowest in the sciences, because of the highly variable and complex human element. We don’t see as much of a decline effect in physics, for example, where phenomena are more objective and concrete.
If the truth itself does not “wear off”, as the headline of Lehrer’s article provocatively states, then what is responsible for this decline effect?
It is no surprise that effect sizes in preliminary studies tend to be positive. This can be explained on the basis of experimenter bias – scientists want to find positive results, and initial experiments are often flawed or less than rigorous. It takes time to figure out how to rigorously study a question, and so early studies will tend not to control for all the necessary variables. There is further publication bias, in which positive studies tend to be published more than negative studies.
Further, some preliminary research may be based upon chance observations – a false pattern based upon a quirky cluster of events. If these initial observations are used in the preliminary studies, then the statistical fluke will be carried forward. Later studies are then likely to exhibit a regression to the mean, or a return to more statistically likely results (which is exactly why you shouldn’t use initial data when replicating a result, but should use entirely fresh data – a mistake for which astrologers are infamous).
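Publication bias and regression to the mean are, on their own, enough to produce an apparent decline effect, which a brief simulation can illustrate (my own illustrative sketch, not from the article; the true effect size, noise level, and publication threshold are all invented):

```python
import random

random.seed(1)
TRUE_EFFECT = 0.2   # the real underlying effect size
NOISE = 0.5         # sampling noise in any single study

def run_study():
    # one noisy estimate of the true effect
    return random.gauss(TRUE_EFFECT, NOISE)

# Initial studies: only "impressive" results (estimate > 0.5) get published
published = [e for e in (run_study() for _ in range(1000)) if e > 0.5]

# Replications use entirely fresh data, so they are unbiased
replications = [run_study() for _ in published]

print(sum(published) / len(published))        # inflated, well above the true effect
print(sum(replications) / len(replications))  # regresses back toward 0.2
```

Nothing about the underlying effect "declined" between the first and second batch of studies; only the selection filter changed, which is the skeptics' point.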
Skeptics frequently caution against putting too much stock in new or preliminary scientific research. Don’t get excited by every new study touted in the lay press, or even by a university’s press release. Most new findings turn out to be wrong. In science, replication is king. Consensus and reliable conclusions are built upon multiple independent lines of evidence, replicated over time, all converging on one conclusion.
Lehrer does make some good points in his article, but they are points that skeptics are fond of making. In order to have a mature and functional appreciation for the process and findings of science, it is necessary to understand how science works in the real world, as practiced by flawed scientists and scientific institutions. This is the skeptical message.
But at the same time reliable findings in science are possible, and happen frequently – when results can be replicated and when they fit into the expanding intricate weave of the picture of the natural world being generated by scientific investigation.
By Carlin Romano
Standing up for science excites some intellectuals the way beautiful actresses arouse Warren Beatty, or career liberals boil the blood of Glenn Beck and Rush Limbaugh. It's visceral.
A brave champion of beleaguered science in the modern age of pseudoscience, this Ayn Rand protagonist sarcastically derides the benighted irrationalists and glows with a self-anointed superiority. Who wouldn't want to feel that sense of power and rightness?
You hear the voice regularly—along with far more sensible stuff—in the latest of a now common genre of science patriotism, Nonsense on Stilts: How to Tell Science From Bunk (University of Chicago Press), by Massimo Pigliucci, a philosophy professor at the City University of New York.
It mixes eminent common sense and frequent good reporting with a cocksure hubris utterly inappropriate to the practice it apotheosizes.
According to Pigliucci, both Freudian psychoanalysis and Marxist theory of history "are too broad, too flexible with regard to observations, to actually tell us anything interesting." (That's right—not one "interesting" thing.) The idea of intelligent design in biology "has made no progress since its last serious articulation by natural theologian William Paley in 1802," and the empirical evidence for evolution is like that for "an open-and-shut murder case."
Pigliucci offers more hero sandwiches spiced with derision and certainty. Media coverage of science is "characterized by allegedly serious journalists who behave like comedians." Commenting on the highly publicized Dover, Pa., court case in which U.S. District Judge John E. Jones III ruled that intelligent-design theory is not science, Pigliucci labels the need for that judgment a "bizarre" consequence of the local school board's "inane" resolution. Noting the complaint of intelligent-design advocate William Buckingham that an approved science textbook didn't give creationism a fair shake, Pigliucci writes, "This is like complaining that a textbook in astronomy is too focused on the Copernican theory of the structure of the solar system and unfairly neglects the possibility that the Flying Spaghetti Monster is really pulling each planet's strings, unseen by the deluded scientists."
Or is it possible that the alternate view unfairly neglected could be more like that of Harvard scientist Owen Gingerich, who contends in God's Universe (Harvard University Press, 2006) that it is partly statistical arguments—the extraordinary unlikelihood eons ago of the physical conditions necessary for self-conscious life—that support his belief in a universe "congenially designed for the existence of intelligent, self-reflective life"?
Even if we agree that capital "I" and "D" intelligent-design of the scriptural sort—what Gingerich himself calls "primitive scriptural literalism"—is not scientifically credible, does that make Gingerich's assertion, "I believe in intelligent design, lowercase i and lowercase d," equivalent to Flying-Spaghetti-Monsterism?
Tone matters. And sarcasm is not science.
The problem with polemicists like Pigliucci is that a chasm has opened up between two groups that might loosely be distinguished as "philosophers of science" and "science warriors."
Philosophers of science, often operating under the aegis of Thomas Kuhn, recognize that science is a diverse, social enterprise that has changed over time, developed different methodologies in different subsciences, and often advanced by taking putative pseudoscience seriously, as in debunking cold fusion.
The science warriors, by contrast, often write as if our science of the moment is isomorphic with knowledge of an objective world-in-itself—Kant be damned!—and any form of inquiry that doesn't fit the writer's criteria of proper science must be banished as "bunk." Pigliucci, typically, hasn't much sympathy for radical philosophies of science. He calls the work of Paul Feyerabend "lunacy," deems Bruno Latour "a fool," and observes that "the great pronouncements of feminist science have fallen as flat as the similarly empty utterances of supporters of intelligent design."
It doesn't have to be this way. The noble enterprise of submitting nonscientific knowledge claims to critical scrutiny—an activity continuous with both philosophy and science—took off in an admirable way in the late 20th century when Paul Kurtz, of the University at Buffalo, established the Committee for the Scientific Investigation of Claims of the Paranormal (Csicop) in May 1976. Csicop soon after launched the marvelous journal Skeptical Inquirer.
Although Pigliucci himself publishes in Skeptical Inquirer, his contributions there exhibit his signature smugness. For an antidote to Pigliucci's overweening scientism 'tude, it's refreshing to consult Kurtz's curtain-raising essay, "Science and the Public," in Science Under Siege (Prometheus Books, 2009, edited by Frazier).
Kurtz's commandment might be stated, "Don't mock or ridicule—investigate and explain." He writes: "We attempted to make it clear that we were interested in fair and impartial inquiry, that we were not dogmatic or closed-minded, and that skepticism did not imply a priori rejection of any reasonable claim. Indeed, I insisted that our skepticism was not totalistic or nihilistic about paranormal claims."
Kurtz combines the ethos of both critical investigator and philosopher of science. Describing modern science as a practice in which "hypotheses and theories are based upon rigorous methods of empirical investigation, experimental confirmation, and replication," he notes: "One must be prepared to overthrow an entire theoretical framework—and this has happened often in the history of science ... skeptical doubt is an integral part of the method of science, and scientists should be prepared to question received scientific doctrines and reject them in the light of new evidence."
Pigliucci, alas, allows his animus against the nonscientific to pull him away from sensitive distinctions among various sciences to sloppy arguments one didn't see in such earlier works of science patriotism as Carl Sagan's The Demon-Haunted World: Science as a Candle in the Dark (Random House, 1995). Indeed, he probably sets a world record for misuse of the word "fallacy."
To his credit, Pigliucci at times acknowledges the nondogmatic spine of science. He concedes that "science is characterized by a fuzzy borderline with other types of inquiry that may or may not one day become sciences." Science, he admits, "actually refers to a rather heterogeneous family of activities, not to a single and universal method." He rightly warns that some pseudoscience—for example, denial of HIV-AIDS causation—is dangerous and terrible.
But at other points, Pigliucci ferociously attacks opponents like the most unreflective science fanatic.
He dismisses Feyerabend's view that "science is a religion" as simply "preposterous," even though he elsewhere admits that "methodological naturalism"—the commitment of all scientists to reject "supernatural" explanations—is itself not an empirically verifiable principle or fact, but rather an almost Kantian precondition of scientific knowledge. An article of faith, some cold-eyed Feyerabend fans might say.
He writes, "ID is not a scientific theory at all because there is no empirical observation that can possibly contradict it. Anything we observe in nature could, in principle, be attributed to an unspecified intelligent designer who works in mysterious ways." But earlier in the book, he correctly argues against Karl Popper that susceptibility to falsification cannot be the sole criterion of science, because science also confirms. It is, in principle, possible that an empirical observation could confirm intelligent design—i.e., that magic moment when the ultimate UFO lands with representatives of the intergalactic society that planted early life here, and we accept their evidence that they did it.
"As long as we do not venture to make hypotheses about who the designer is and why and how she operates," he writes, "there are no empirical constraints on the 'theory' at all. Anything goes, and therefore nothing holds, because a theory that 'explains' everything really explains nothing."
Here, Pigliucci again mixes up what's likely or provable with what's logically possible or rational. The creation stories of traditional religions and scriptures do, in effect, offer hypotheses, or claims, about who the designer is—e.g., see the Bible.
Far from explaining nothing because it explains everything, such an explanation explains a lot by explaining everything. It just doesn't explain it convincingly to a scientist with other evidentiary standards.
A sensible person can side with scientists on what's true, but not with Pigliucci on what's rational and possible. Pigliucci occasionally recognizes that. Late in his book, he concedes that "nonscientific claims may be true and still not qualify as science." But if that's so, and we care about truth, why exalt science to the degree he does? If there's really a heaven, and science can't (yet?) detect it, so much the worse for science.
Pigliucci quotes a line from Aristotle: "It is the mark of an educated mind to be able to entertain a thought without accepting it." Science warriors such as Pigliucci, or Michael Ruse in his recent clash with other philosophers in these pages, should reflect on a related modern sense of "entertain." One does not entertain a guest by mocking, deriding, and abusing the guest. Similarly, one does not entertain a thought or approach to knowledge by ridiculing it.
Long live Skeptical Inquirer! But can we deep-six the egomania and unearned arrogance of the science patriots? As Descartes, that immortal hero of scientists and skeptics everywhere, pointed out, true skepticism, like true charity, begins at home.
Carlin Romano, critic at large for The Chronicle Review, teaches philosophy and media theory at the University of Pennsylvania.
Citizens of a functioning democracy must be able to know what the state is saying and doing in our name, to engage in what Pierre Rosanvallon calls “counter-democracy”*, the democracy of citizens distrusting rather than legitimizing the actions of the state. Wikileaks plainly improves those abilities.
On the other hand, human systems can’t stand pure transparency. For negotiation to work, people’s stated positions have to change, but change is seen, almost universally, as weakness. People trying to come to consensus must be able to privately voice opinions they would publicly abjure, and may later abandon. Wikileaks plainly damages those abilities. (If Aaron Bady’s analysis is correct, it is the damage and not the oversight that Wikileaks is designed to create.*)
We have a tension between two requirements for democratic statecraft, one that can’t be resolved, but can be brought to an acceptable equilibrium. Indeed, like the virtues of equality vs. liberty, or popular will vs. fundamental rights, it has to be brought into such an equilibrium for democratic statecraft not to be wrecked either by too much secrecy or too much transparency.
As Tom Slee puts it, “Your answer to ‘what data should the government make public?’ depends not so much on what you think about data, but what you think about the government.”* My personal view is that there is too much secrecy in the current system, and that a corrective towards transparency is a good idea. I don’t, however, believe in total transparency, and even more importantly, I don’t think that independent actors subject to no checks or balances are a good idea in the long haul.
The practical history of politics, however, suggests that the periodic appearance of such unconstrained actors in the short haul is essential to increased democratization, not just of politics but of thought.
We celebrate the printers of 16th century Amsterdam for making it impossible for the Catholic Church to constrain the output of the printing press to Church-approved books*, a challenge that helped usher in, among other things, the decentralization of scientific inquiry and the spread of politically seditious writings advocating democracy.
This intellectual and political victory didn’t, however, mean that the printing press was then free of all constraints. Over time, a set of legal limitations around printing rose up, including restrictions on libel, the publication of trade secrets, and sedition. I don’t agree with all of these laws, but they were at least produced by some legal process.
I am conflicted about the right balance between the visibility required for counter-democracy and the need for private speech among international actors. Here’s what I’m not conflicted about: When authorities can’t get what they want by working within the law, the right answer is not to work outside the law. The right answer is that they can’t get what they want.
The United States is — or should be — subject to the rule of law, which makes the extra-judicial pursuit of Wikileaks especially nauseating. (Calls for Julian’s assassination are even more nauseating.) It may be that what Julian has done is a crime. (I know him casually, but not well enough to vouch for his motivations, nor am I a lawyer.) In that case, the right answer is to bring the case to a trial.
Over the long haul, we will need new checks and balances for newly increased transparency — Wikileaks shouldn’t be able to operate as a law unto itself any more than the US should be able to. In the short haul, though, Wikileaks is our Amsterdam. Whatever restrictions we eventually end up enacting, we need to keep Wikileaks alive today, while we work through the process democracies always go through to react to change. If it’s OK for a democracy to just decide to run someone off the internet for doing something they wouldn’t prosecute a newspaper for doing, the idea of an internet that further democratizes the public sphere will have taken a mortal blow.
In December 2010 the Department of Science and Technology (DST) launched a monthly competition in association with Cincinnati-based Procter & Gamble (P&G) to solicit innovative ideas from Indian researchers. Winners were promised a cash award of US$1,000 and possible commercialization of their ideas by P&G, which has a beauty business worth over US$10 billion in global sales.
But the competition's first call - for skin whitening alternatives to hydroquinone, which is not approved for use in many places including the European Union - has prompted criticism from researchers who argue that such products help to propagate racist attitudes in the country. Meanwhile, the department's January challenge for cheaper alternatives to silicones in shampoos, lotions, fabric softeners, and other beauty products marketed by P&G has fared little better. The principal drawback of silicones is their expense and poor biodegradability but some researchers argue that India has more pressing issues for its scientists to address.
However, the current DST secretary Thirumalachari Ramasami disagrees. The DST-P&G Challenge of The Month is only a small part of the department's overall activities, he says.
“It is not a priority project but a very minor programme compared to larger issues of national importance that we are concerned with,” Ramasami told Nature, adding that his department has earmarked only Rs.50 million (US$1.1 million) in total for the project. He says it is absurd to accuse the DST of promoting beauty research at the expense of more important problems. “Tell me which challenging issue has been ignored by DST?” he asks.
The website, churnalism.com, created by the charity the Media Standards Trust, allows readers to paste press releases into a "churn engine". It then compares the text with a constantly updated database of more than 3m articles. The results, which give articles a "churn rating", show the percentage of any given article that has been reproduced from publicity material.

The Guardian was given exclusive access to churnalism.com prior to launch. It revealed how all media organisations are at times simply republishing, verbatim, material sent to them by marketing companies and campaign groups.
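The article doesn't describe how the churn engine computes its rating, but the general idea of scoring verbatim overlap can be sketched with word n-grams (a toy illustration only; the real site's algorithm, thresholds, and example texts are unknown to me and invented here):

```python
def ngrams(text, n=3):
    """Set of n-word sequences in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def churn_rating(article, press_release, n=3):
    """Rough percentage of the article's word triples also found in the release."""
    a, p = ngrams(article, n), ngrams(press_release, n)
    return 100 * len(a & p) / len(a) if a else 0.0

# Hypothetical release and a lightly rewritten "article" based on it
release = "Acme today announced a revolutionary new widget that cuts costs"
article = "Acme today announced a revolutionary new widget while analysts were unsure"
print(round(churn_rating(article, release)))  # a majority of the article is churn
```

A real system would also need to normalise punctuation and search at scale, but the score itself is just shared-phrase counting.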
Meanwhile, an independent film-maker, Chris Atkins, has revealed how he duped the BBC into running an entirely fictitious story about Downing Street's new cat to coincide with the site's launch.
The director created a Facebook page in the name of a fictitious character, "Tim Sutcliffe", who claimed the cat – which came from Battersea Cats Home – had belonged to his aunt Margaret. The story appeared in the Daily Mail and Metro, before receiving a prominent slot on BBC Radio 5 Live.
[Audio: BBC Radio 5 Live's Gaby Logan talks about the fictitious cat story]
Atkins, who was not involved in creating churnalism.com, uses spoof stories to highlight the failure of journalists to corroborate stories. He was behind an infamous prank last year that led to the BBC running a news package on a hoax YouTube video purporting to show urban foxhunters.
The creation of churnalism.com is likely to unnerve overworked journalists and the press officers who feed them. "People don't realise how much churn they're being fed every day," said Martin Moore, director of the trust, which seeks to improve standards in news. "Hopefully this will be an eye-opener."
Interestingly, all media outlets appear particularly susceptible to PR material disseminated by supermarkets: the Mail appears to have a particular appetite for publicity from Asda and Tesco, while the Guardian favours Waitrose releases.
Moore said one unexpected discovery has been that the BBC news website appears particularly prone to churning publicity material. "Part of the reason is presumably because they feel a duty to put out so many government pronouncements," Moore said. "But the BBC also has a lot to produce in regions that the newspapers don't cover."
Scientists these days tend to keep up a polite fiction that all
science is equal. Except for the work of the misguided opponent whose
arguments we happen to be refuting at the time, we speak as though
every scientist's field and methods of study are as good as every
other scientist's and perhaps a little better. This keeps us all
cordial when it comes to recommending each other for government
grants.
Why should there be such rapid advances in some fields and not in
others? I think the usual explanations that we tend to think of - such
as the tractability of the subject, or the quality or education of the
men drawn into it, or the size of research contracts - are important
but inadequate. I have begun to believe that the primary factor in
scientific advance is an intellectual one. These rapidly moving fields
are fields where a particular method of doing scientific research is
systematically used and taught, an accumulative method of inductive
inference that is so effective that I think it should be given the
name of "strong inference." I believe it is important to examine this
method, its use and history and rationale, and to see whether other
groups and individuals might learn to adopt it profitably in their own
scientific and intellectual work.
In its separate elements, strong inference is just the simple and
old-fashioned method of inductive inference that goes back to Francis
Bacon. The steps are familiar to every college student and are
practiced, off and on, by every scientist. The difference comes in
their systematic application. Strong inference consists of applying
the following steps to every problem in science, formally and
explicitly and regularly:
1) Devising alternative hypotheses;
2) Devising a crucial experiment (or several of them), with
alternative possible outcomes, each of which will, as nearly as
possible, exclude one or more of the hypotheses;
3) Carrying out the experiment so as to get a clean result;
4) Recycling the procedure, making subhypotheses or sequential
hypotheses to refine the possibilities that remain, and so on.
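Read as a procedure, these steps form an elimination loop over a set of rival hypotheses. A schematic rendering (my own sketch; the hypothesis labels and the toy "experiment" are invented for illustration) might look like:

```python
def strong_inference(hypotheses, crucial_experiment, max_rounds=10):
    """Repeatedly run experiments designed to exclude hypotheses,
    recycling until one (or none) survives."""
    survivors = set(hypotheses)
    for _ in range(max_rounds):
        if len(survivors) <= 1:
            break
        excluded = crucial_experiment(survivors)  # steps 2-3: run, get a clean result
        survivors -= excluded                     # step 4: recycle with what remains
    return survivors

# Toy stand-in: each "experiment" excludes whichever alphabetically-last
# hypothesis is still standing (purely illustrative).
def toy_experiment(survivors):
    return {max(survivors)}

print(strong_inference({"H1", "H2", "H3"}, toy_experiment))  # only H1 survives
```

The point of the schema is not the code but the discipline: every iteration must exclude something, or it is, in Platt's terms, only a delay.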
On any new problem, of course, inductive inference is not as
simple and certain as deduction, because it involves reaching out into
the unknown. Steps 1 and 2 require intellectual inventions, which must
be cleverly chosen so that hypothesis, experiment, outcome, and
exclusion will be related in a rigorous syllogism; and the question of
how to generate such inventions is one which has been extensively
discussed elsewhere (2, 3). What the formal schema reminds us to do is to
try to make these inventions, to take the next step, to proceed to the
next fork, without dawdling or getting tied up in irrelevancies.
It is clear why this makes for rapid and powerful progress. For
exploring the unknown, there is no faster method; this is the minimum
sequence of steps. Any conclusion that is not an exclusion is insecure
and must be rechecked. Any delay in recycling to the next set of
hypotheses is only a delay. Strong inference, and the logical tree it
generates, are to inductive reasoning what the syllogism is to
deductive reasoning in that it offers a regular method for reaching
firm inductive conclusions one after the other as rapidly as
possible.
"But what is so novel about this?" someone will say. This is
the method of science and always has been; why give it a
special name? The reason is that many of us have almost forgotten
it. Science is now an everyday business. Equipment, calculations,
lectures become ends in themselves. How many of us write down our
alternatives and crucial experiments every day, focusing on the
exclusion of a hypothesis? We may write our scientific papers
so that it looks as if we had steps 1, 2, and 3 in mind all along. But
in between, we do busywork. We become "method-oriented" rather than
"problem-oriented." We say we prefer to "feel our way" toward
generalizations. We fail to teach our students how to sharpen up their
inductive inferences. And we do not realize the added power that the
regular and explicit use of alternative hypotheses and sharp exclusion
could give us at every step of our research.
A distinguished cell
biologist rose and said, "No two cells give the same
properties. Biology is the science of heterogeneous systems." And he
added privately, "You know there are scientists, and there are
people in science who are just working with these over-simplified
model systems - DNA chains and in vitro systems - who are not doing
science at all. We need their auxiliary work: they build apparatus,
they make minor studies, but they are not scientists."
To which Cy Levinthal replied: "Well, there are two kinds of
biologists, those who are looking to see if there is one thing that
can be understood and those who keep saying it is very complicated and
that nothing can be understood. . . . You must study the
simplest system you think has the properties you are interested
in."
At the 1958
Conference on Biophysics, at Boulder, there was a dramatic
confrontation between the two points of view. Leo Szilard said: "The
problems of how enzymes are induced, of how proteins are synthesized,
of how antibodies are formed, are closer to solution than is generally
believed. If you do stupid experiments, and finish one a year, it can
take 50 years. But if you stop doing experiments for a little while
and think how proteins can possibly be synthesized, there are
only about 5 different ways, not 50! And it will take only a few
experiments to distinguish these."
One of the young men added: "It is essentially the old question:
How small and elegant an experiment can you
perform?"
These comments upset a number of those present. An electron
microscopist said, "Gentlemen, this is off the track. This is
philosophy of science."
Szilard retorted, "I was not quarreling with third-rate scientists:
I was quarreling with first-rate scientists."
Any criticism or challenge to consider changing our methods strikes
of course at all our ego-defenses. But in this case the analytical
method offers the possibility of such great increases in effectiveness
that it is unfortunate that it cannot be regarded more often as a
challenge to learning rather than as a challenge to combat. Many of the
recent triumphs in molecular biology have in fact been achieved on
just such "oversimplified model systems," very much along the
analytical lines laid down in the 1958 discussion. They have not
fallen to the kind of men who justify themselves by saying "No two
cells are alike," regardless of how true that may ultimately be. The
triumphs are in fact triumphs of a new way of thinking.
The emphasis on strong inference
is also partly due to the nature of the fields
themselves. Biology, with its vast informational detail and
complexity, is a "high-information" field, where years and decades can
easily be wasted on the usual type of "low-information" observations
or experiments if one does not think carefully in advance about what
the most important and conclusive experiments would be. And in
high-energy physics, both the "information flux" of particles from the
new accelerators and the million-dollar costs of operation have forced
a similar analytical approach. It pays to have a top-notch group
debate every experiment ahead of time; and the habit spreads
throughout the field.
Historically, I think, there have been two main contributions to
the development of a satisfactory strong-inference method. The first
is that of Francis Bacon (13). He wanted a
"surer method" of "finding out nature" than either the logic-chopping
or all-inclusive theories of the time or the laudable but crude
attempts to make inductions "by simple enumeration." He did not merely
urge experiments, as some suppose; he showed the fruitfulness of
interconnecting theory and experiment so that the one checked the
other. Of the many inductive procedures he suggested, the most
important, I think, was the conditional inductive tree, which
proceeded from alternative hypotheses (possible "causes," as he calls
them), through crucial experiments ("Instances of the Fingerpost"), to
exclusion of some alternatives and adoption of what is left
("establishing axioms"). His Instances of the Fingerpost are
explicitly at the forks in the logical tree, the term being borrowed
"from the fingerposts which are set up where roads part, to indicate
the several directions."
Here was a method that could separate off the empty theories!
Bacon said the inductive method could be learned by anybody, just
like learning to "draw a straighter line or more perfect circle
. . . with the help of a ruler or a pair of compasses." "My way of
discovering sciences goes far to level men's wit and leaves but little
to individual excellence, because it performs everything by the surest
rules and demonstrations." Even occasional mistakes would not be
fatal. "Truth will sooner come out from error than from
confusion."
Nevertheless there is a difficulty with this method. As Bacon
emphasizes, it is necessary to make "exclusions." He says, "The
induction which is to be available for the discovery and demonstration
of sciences and arts, must analyze nature by proper rejections and
exclusions, and then, after a sufficient number of negatives come to a
conclusion on the affirmative instances." "[To man] it is granted only
to proceed at first by negatives, and at last to end in affirmatives
after exclusion has been exhausted."
Or, as the philosopher Karl Popper says today, there is no such
thing as proof in science - because some later alternative explanation
may be as good or better - so that science advances only by
disproofs. There is no point in making hypotheses that are not
falsifiable because such hypotheses do not say anything, "it must be
possible for an empirical scientific system to be refuted by
experience" (14).
The difficulty is that disproof is a hard doctrine. If you have a
hypothesis and I have another hypothesis, evidently one of them must be
eliminated. The scientist seems to have no choice but to be either
soft-headed or disputatious. Perhaps this is why so many tend to
resist the strong analytical approach and why some great scientists
are so disputatious.
Fortunately, it seems to me, this difficulty can be removed by the
use of a second great intellectual invention, the "method of multiple
hypotheses," which is what was needed to round out the Baconian
scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the
century, who is best known for his contribution to the
Chamberlin-Moulton hypothesis of the origin of the solar system.
Chamberlin says our trouble is that when we make a single
hypothesis, we become attached to it.
"The moment one has offered an original explanation for a
phenomenon which seems satisfactory, that moment affection for his
intellectual child springs into existence, and as the explanation
grows into a definite theory his parental affections cluster about his
offspring and it grows more and more dear to him. . . . There springs
up also unwittingly a pressing of the theory to make it fit the facts
and a pressing of the facts to make them fit the theory..."
"To avoid this grave danger, the method of multiple working
hypotheses is urged. It differs from the simple working hypothesis in
that it distributes the effort and divides the affections. . . . Each
hypothesis suggests its own criteria, its own method of proof, its own
method of developing the truth, and if a group of hypotheses encompass
the subject on all sides, the total outcome of means and of methods is
full and rich."
The conflict and
exclusion of alternatives that is necessary to sharp inductive
inference has been all too often a conflict between men, each with his
single Ruling Theory. But whenever each man begins to have multiple
working hypotheses, it becomes purely a conflict between ideas. It
becomes much easier then for each of us to aim every day at conclusive
disproofs - at strong inference - without either reluctance or
combativeness. In fact, when there are multiple hypotheses, which are
not anyone's "personal property," and when there are crucial
experiments to test them, the daily life in the laboratory takes on an
interest and excitement it never had, and the students can hardly wait
to get to work to see how the detective story will come out. It seems
to me that this is the reason for the development of those distinctive
habits of mind and the "complex thought" that Chamberlin described,
the reason for the sharpness, the excitement, the zeal, the teamwork -
yes, even international teamwork - in molecular biology and high-
energy physics today. What else could be so effective?
Unfortunately, I think, there are other areas of science
today that are sick by comparison, because they have forgotten the
necessity for alternative hypotheses and disproof. Each man has only
one branch - or none - on the logical tree, and it twists at random
without ever coming to the need for a crucial decision at any point.
We can see from the external symptoms that there is something
scientifically wrong. The Frozen Method, The Eternal Surveyor, The
Never Finished, The Great Man With a Single Hypothesis, The Little
Club of Dependents, The Vendetta, The All-Encompassing Theory Which
Can Never Be Falsified.
A "theory" of
this sort is not a theory at all, because it does not exclude
anything. It predicts everything, and therefore does not predict
anything. It becomes simply a verbal formula which the graduate
student repeats and believes because the professor has said it so
often. This is not science, but faith; not theory, but
theology. Whether it is hand-waving or number-waving, or
equation-waving, a theory is not a theory unless it can be
disproved. That is, unless it can be falsified by some possible
experimental outcome.
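The criterion in this passage - a theory that predicts everything predicts nothing - can be phrased as a one-line check. A minimal sketch (toy outcomes invented for illustration), modeling a theory as the set of outcomes it permits:

```python
# A theory is falsifiable iff some possible experimental outcome would
# refute it, i.e. iff it forbids at least one outcome.

def is_falsifiable(permitted, possible):
    """True iff the theory excludes some possible outcome."""
    return bool(set(possible) - set(permitted))

possible = {"deflection", "no deflection"}
testable_theory = {"deflection"}       # forbids an outcome: can be disproved
all_encompassing = possible            # "predicts everything"

print(is_falsifiable(testable_theory, possible))   # True
print(is_falsifiable(all_encompassing, possible))  # False
```

The All-Encompassing Theory Which Can Never Be Falsified is exactly the second case: its permitted set equals the possible set, so no experiment can count against it.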
The work methods of a number of scientists have been
testimony to the power of strong inference. Is success not due in
many cases to systematic use of Bacon's "surest rules and
demonstrations" as much as to rare and unattainable intellectual
power? Faraday's famous diary (16), or Fermi's
notebooks (3, 17), show how
these men believed in the effectiveness of daily steps in applying
formal inductive methods to one problem after another.
Surveys, taxonomy,
design of equipment, systematic measurements and tables, theoretical
computations - all have their proper and honored place, provided they
are parts of a chain of precise induction of how nature
works. Unfortunately, all too often they become ends in themselves,
mere time-serving from the point of view of real scientific advance, a
hypertrophied methodology that justifies itself as a lore of
respectability.
We speak piously of taking measurements and making small studies
that will "add another brick to the temple of science." Most such
bricks just lie around the brickyard (20).
Tables of constants have their place and value, but the study of one
spectrum after another, if not frequently re-evaluated, may become a
substitute for thinking, a sad waste of intelligence in a research
laboratory, and a mistraining whose crippling effects may last a
lifetime.
Beware of the man of one method or one
instrument, either experimental or theoretical. He tends to become
method-oriented rather than problem-oriented. The method-oriented man
is shackled; the problem-oriented man is at least reaching freely
toward what is most important. Strong inference redirects a man to
problem-orientation, but it requires him to be willing repeatedly to
put aside his last methods and teach himself new ones.
Anyone who asks the question about
scientific effectiveness will also conclude that much of the
mathematizing in physics and chemistry today is irrelevant if not
misleading.
The great value of mathematical formulation is that when an
experiment agrees with a calculation to five decimal places, a great
many alternative hypotheses are pretty well excluded (though the Bohr
theory and the Schrödinger theory both predict exactly the same
Rydberg constant!). But when the fit is only to two decimal places, or
one, it may be a trap for the unwary; it may be no better than any
rule-of-thumb extrapolation, and some other kind of qualitative
exclusion might be more rigorous for testing the assumptions and more
important to scientific understanding than the quantitative fit.
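The trap of the low-precision fit is easy to demonstrate numerically. In this invented example, a rival rule-of-thumb formula matches the correct law to two decimal places over the measured range, so only a five-decimal comparison excludes it:

```python
# Two decimal places cannot distinguish a correct law from a
# plausible-looking rival; five decimal places can.

import math

def true_law(x):          # stands in for the "correct" theory
    return math.exp(-x)

def rival_law(x):         # an invented rule-of-thumb rival
    return 1.0 / (1.0 + x + x * x / 2.0)

xs = [0.0, 0.1, 0.2, 0.3]
measurements = [true_law(x) for x in xs]  # assume exact measurements

def fits(model, data, decimals):
    """Does the model agree with every data point to `decimals` places?"""
    return all(round(model(x), decimals) == round(y, decimals)
               for x, y in zip(xs, data))

print(fits(true_law, measurements, 2), fits(rival_law, measurements, 2))  # True True
print(fits(true_law, measurements, 5), fits(rival_law, measurements, 5))  # True False
```

At two decimal places both hypotheses "fit the data" and nothing has been excluded; the fit only becomes a crucial experiment at higher precision.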
Today we preach that science is not science unless it is
quantitative. We substitute correlations for causal studies, and
physical equations for organic reasoning. Measurements and equations
are supposed to sharpen thinking, but, in my observation, they more
often tend to make the thinking noncausal and fuzzy. They tend to
become the object of scientific manipulation instead of auxiliary
tests of crucial inferences.
Many - perhaps most - of the great issues of science are
qualitative, not quantitative, even in physics and
chemistry. Equations and measurements are useful when and only when
they are related to proof; but proof or disproof comes first and is in
fact strongest when it is absolutely convincing without any
quantitative measurement.
You can catch phenomena in a logical box
or in a mathematical box. The logical box is coarse but strong. The
mathematical box is fine-grained but flimsy. The mathematical box is
a beautiful way of wrapping up a problem, but it will not hold the
phenomena unless they have been caught in a logical box to begin
with.
Of course it is easy - and all too common - for one scientist to
call the others unscientific. My point is not that my particular
conclusions here are necessarily correct, but that we have long needed
some absolute standard of possible scientific effectiveness by which
to measure how well we are succeeding in various areas - a standard
that many could agree on and one that would be undistorted by the
scientific pressures and fashions of the times and the vested
interests and busywork that they develop. It is not public evaluation
I am interested in so much as a private measure by which to compare
one's own scientific performance with what it might be. I believe that
strong inference provides this kind of standard of what the maximum
possible scientific effectiveness could be - as well as a recipe for
reaching it.
The strong-inference
point of view is so resolutely critical of methods of work and values
in science that any attempt to compare specific cases is likely to
sound both smug and destructive. Mainly one should try to teach it by
example and by exhorting to self-analysis and self-improvement only in
general terms.
There is one severe but useful private test - a
touchstone of strong inference - that removes the necessity for
third-person criticism, because it is a test that anyone can learn to
carry with him for use as needed. It is our old friend the Baconian
"exclusion," but I call it "The Question." Obviously it should be
applied as much to one's own thinking as to others'. It consists of
asking in your own mind, on hearing any scientific explanation or
theory put forward, "But sir, what experiment could disprove
your hypothesis?"; or, on hearing a scientific experiment described,
"But sir, what hypothesis does your experiment disprove?"
It is not true that all science is equal; or that we
cannot justly compare the effectiveness of scientists by any method
other than a mutual-recommendation system. The man to watch, the man
to put your money on, is not the man who wants to make "a survey" or a
"more detailed study" but the man with the notebook, the man with the
alternative hypotheses and the crucial experiments, the man who knows
how to answer your Question of disproof and is already working on
it.
There is so much bad science and bad statistics in media reports, in publications, and in everyday conversation that I think it is important to understand facts, proofs, and their associated pitfalls.
“A lot of those e-mails obviously weren’t meant for public consumption,” she told Chris Wallace of Fox News, where she is a source, a commentator and a subject, all wrapped into one.
She is of interest not because of what she did as governor but because she has almost perfected the modern hybrid of politician and celebrity: once your daughter appears on “Dancing With the Stars,” your celebrity is far more important than your position on off-shore drilling. That means that all those e-mails are destined for public consumption whether she likes it or not.
Like all other celebrities, politicians are expected now to be in constant digital contact with their fans/voters. Ms. Palin has excelled at this with her ubiquitous Twitter messages, her bus tour and her frequent appearances on Fox News. But unlike during the early days of the Internet, when a static Web site was all that politicians needed, communication these days travels not just one way or two, but in all directions. Being in touch means that people can touch you back.
Obama may want to be your friend on Facebook, but he, like every other president, wants to maintain custody of the narrative. Now that he actually has to govern, his ratings — an operative word in both politics and media — have dropped.
Extensive efforts were expended over the weekend to comb through Sarah Palin's e-mails from her time as the governor of Alaska. Ms. Palin may have thought that she was just chatting with her staff and friends, but now every comma, every aside, every random thought is being picked apart for meaning.
There may have been some legitimate news buried in the trove of e-mails, and she remains a person of significant public interest. So the press response makes sense, but she could not be blamed for feeling that she was under attack from a horde of biting ants.
1) Any errors, however inconsequential, will be taken Very Seriously and accusations of fraud will be made.
2) If you adjust the raw data we will accuse you of fraudulently fiddling the figures whilst cooking the books.
3) If you don't adjust the raw data we will accuse you of fraudulently failing to account for station biases and UHI.
7) By all means publish all your source code, but we will still accuse you of hiding the methodology for your adjustments.
8) If you publish results to your website and errors are found, we will accuse you of a Very Serious Error regardless of severity (see point #1) and bemoan the press release you made about your results even though you won't remember making any press release about your results.
9) With regard to point #8 above, at extra cost and time to yourself you must employ someone to thoroughly check each monthly update before it is published online, even if this delays publication of the results till the end of the month. You might be surprised at this because no-one actually relies on such freshly published data anyway, and aren't the many eyes of blog audit better than a single pair of eyes? Well, that's irrelevant. See points #1 and #8.
10) If you don't publish results promptly at the start of the month on the public website, but instead, say, publish the results to a private site for checks to be performed before release, we will accuse you of engaging in unscientific-like secrecy and massaging the data behind closed doors.
14) If any region/station shows a warming trend that doesn't match the raw data, and we can't understand why, we will accuse you of fraud and dismiss the entire record. Don't expect us to have to read anything to understand results.
15) You must provide all input datasets on your website. It's no good referencing NOAAs site and saying they "own" the GHCN data for example. I don't want their GHCN raw temperatures file, I want the one on your hard drive which you used for the analysis, even if you claim they are the same. If you don't do this we will accuse you of hiding the data and preventing us checking your results.
24) In the event that you comply with all of the above, we will point out that a mere hundred-odd years of data is irrelevant next to the 4.5 billion year history of Earth. So why do you even bother?
23) In the unlikely event that I haven't wasted enough of your time forcing you to comply with the above rules, I also demand to see all emails you have sent or will send during the period 1950 to 2050 that contain any of these keywords
22) We don't need any scrutiny because our role isn't important.
17) We will treat your record as if no alternative exists. As if your record is the make or break of Something Really Important (see point #1) and we just can't check the results in any other way.
16) You are to blame for any station data your team uses. If we find out that a station you use is next to an AC Unit, we will conclude you personally planted the thermometer there to deliberately get warming.
An article today by Roger Pielke Nr. (no relation) posited the fascinating concept that thermometers are just as capricious and unreliable proxies for temperature as tree rings. In fact probably more so, and re-computing global temperature by bristlecone pines would reveal the true trend of global cooling, which will be in all our best interests and definitely NOT just those of well paying corporate entities.
Dear Professor Muller and Team,
If you want your Berkeley Earth Surface Temperature project to succeed and become the center of attention you need to learn from the vast number of mistakes Hansen and Jones have made with their temperature records. To aid this task I created a point-by-point list for you.
So if syphilis causes AIDS, and not HIV, where is the evidence? As microbiologist and epidemiologist Tara Smith points out in her excellent blog, Margulis offers none. Instead, she says to the credulous and uncritical interviewer:
The idea that penicillin kills the cause of the disease is nuts. If you treat the painless chancre in the first few days of infection, you may stop the bacterium before the symbiosis develops, but if you really get syphilis, all you can do is live with the spirochete. The spirochete lives permanently as a symbiont in the patient. The infection cannot be killed because it becomes part of the patient’s genome and protein synthesis biochemistry. After syphilis establishes this symbiotic relationship with a person, it becomes dependent on human cells and is undetectable by any testing.
Great. Just what we need: an untestable hypothesis promoted by assertion and reputation, not something concrete that scientists could test (although most specialists in microbiology would say the evidence is clear that the HIV retrovirus, and not the spirochaete bacterium Treponema pallidum, is the true cause of AIDS).
Has she never actually LOOKED at the hundreds of peer-reviewed scientific papers documenting the structure of the HIV virus, and the clear documentation of that virus in patients that suffer and die from AIDS? Or the fact that patients treated with anti-retrovirals manage to suppress their AIDS symptoms? Or the disaster in South Africa, when the government became active AIDS deniers, spread misinformation and myths about AIDS, and the infection rate shot up? Not even the hard-core AIDS deniers like Peter Duesberg deny that the HIV virus exists!
She slips outside the realm of science entirely, and becomes a full-fledged AIDS denier. My jaw just dropped when I read the following:
There is a vast body of literature on syphilis spanning from the 1500s until after World War II, when the disease was supposedly cured by penicillin. It’s in our paper “Resurgence of the Great Imitator.” Our claim is that there’s no evidence that HIV is an infectious virus, or even an entity at all. There’s no scientific paper that proves that the HIV virus causes AIDS. Kary Mullis said in an interview that he went looking for a reference substantiating that HIV causes AIDS and discovered, “There is no such document.”
The phenomenon is a familiar one: let's call it "the Linus Pauling effect." A highly respected and honored senior scientist, largely out of the mainstream and not up to date with recent developments (and perhaps a bit senile), makes weird pronouncements about their pet ideas - and the press, so used to giving celebrities free air time for any junk they wish to say, prints and publishes it all as if it were the final truth. The great Linus Pauling may have won two Nobel Prizes, but his crazy idea that megadoses of Vitamin C would cure nearly everything seems to have died with him. William Shockley may have won a Nobel for his work on transistors, but his racist ideas about genetics (a field in which he had no expertise) should never have been taken seriously. Kary Mullis may have deserved his Nobel Prize for developing the polymerase chain reaction, but that gives him no qualifications to speak with authority on his unscientific ideas about AIDS denial and global warming and astrology (he hits the trifecta for pseudoscientific woo).
Arguably the most important finding from the emerging economics of happiness has been the Easterlin Paradox.
What is this paradox? It is the juxtaposition of three observations:
1) Within a society, rich people tend to be much happier than poor people.
2) But, rich societies tend not to be happier than poor societies (or not by much).
3) As countries get richer, they do not get happier.
Easterlin offered an appealing resolution to his paradox, arguing that only relative income matters to happiness. Other explanations suggest a “hedonic treadmill,” in which we must keep consuming more just to stay at the same level of happiness.
We have re-analyzed all of the relevant post-war data, and also analyzed the particularly interesting new data from the Gallup World Poll.
Last Thursday we presented our research at the latest Brookings Panel on Economic Activity, and we have arrived at a rather surprising conclusion:
There is no Easterlin Paradox.
The facts about income and happiness turn out to be much simpler than first realized:
1) Rich people are happier than poor people.
2) Richer countries are happier than poorer countries.
3) As countries get richer, they tend to get happier.
What explains these new findings? The key turns out to be an accumulation of data over recent decades. Thirty years ago it was difficult to make convincing international comparisons because there were few datasets comparing rich and poor countries. Instead, researchers were forced to make comparisons based on a handful of moderately-rich and very-rich countries. These data just didn’t lend themselves to strong conclusions.
Moreover, repeated happiness surveys around the world have allowed us to observe the evolution of G.D.P. and happiness through time — both over a longer period, and for more countries. On balance, G.D.P. and happiness have tended to move together.
There is a second issue here that has led to mistaken inferences: a tendency to confuse absence of evidence for a proposition as evidence of its absence. Thus, when early researchers could not isolate a statistically reliable association between G.D.P. and happiness, they inferred that this meant the two were unrelated, and a paradox was born.
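That inference trap is easy to reproduce. Here is a small simulation (all numbers invented; Python's standard library only) in which a modest income-happiness relationship truly exists, yet samples of fifteen countries - roughly what early researchers had - often fail to show it:

```python
# With few countries, a real but modest GDP-happiness correlation is
# easily lost in noise; with many countries it is unmistakable.

import random
import statistics

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(n_countries, rng):
    """Happiness = 0.3 * log-income + noise: a modest true effect."""
    incomes = [rng.uniform(6, 11) for _ in range(n_countries)]  # log GDP
    happiness = [0.3 * x + rng.gauss(0, 1) for x in incomes]
    return correlation(incomes, happiness)

rng = random.Random(0)
small = [simulate(15, rng) for _ in range(200)]    # a handful of countries
large = [simulate(1500, rng) for _ in range(200)]  # pooled modern data

# With n=15 the estimated correlation swings wildly, sometimes even
# negative; with n=1500 it clusters tightly around the true value.
print(min(small), max(small))
print(min(large), max(large))
```

Failing to find the association in the small samples is absence of evidence, not evidence of absence - exactly the confusion the passage describes.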
Our complete analysis is available here. An excellent summary is available in today’s New York Times, here, with a very cool graphic, and readers’ comments. Other commentary is available in the F.T. (here and here), and Time Magazine.
Murdoch himself was protected by his potent
political contacts. Tony Blair, for example, would do anything to help out
his close friend and ally. I can even disclose that, before the last
election, Tony Blair rang Gordon Brown to try to persuade the Labour Prime
Minister to stop the Labour MP Tom Watson raising the issue of phone
hacking. And as recently as two weeks ago both Ed Miliband and David Cameron
attended the News International (News Corp’s British newspaper publishing
arm) summer party, despite the fact that the newspaper group was the subject
of two separate criminal investigations.
For more than three decades the most powerful man in Britain has not been a politician; it has been the brilliant but ruthless US-based media tycoon Rupert Murdoch, who burst on to the scene with the purchase of the News of the World in an audacious takeover bid in 1968. Within barely a decade he had built up a controlling interest in British newspapers. But he did not just control our media. He dominated British public life. Politicians - including prime ministers - treated him with deference and fear. Time and again the Murdoch press - using techniques of which we have only just become aware - destroyed political careers. Murdoch also claims to determine the results of general elections.
A blogger has been threatened with a libel action by the Daily Mail, one of the papers that rails against the libel laws because of their chilling effect on press freedom.
Kevin Arscott, author of the Angry Mob blog, reports that he and his webhosts have received letters from lawyers acting for the Mail's parent company, Associated Newspapers.
It concerns an item posted on his former blog in November 2009 that attacked the Mail and its editor, Paul Dacre, over a story about the number of babies born in a London hospital to non-British mothers. (Needless to say, it was economical with the truth - see here).
Reporters had not specifically asked the family's permission to publish them, and
his parents had not wanted the photographs to be used. "There was no
question that the photo had news value," AP senior managing editor John
Daniszewski said. "But we also were very aware the family wished for the picture
not to be seen." After lengthy internal discussions, AP concluded that
the photo was a part of the war they needed to convey.
The US Defence Secretary, Mr Robert Gates, condemned the decision by the news
agency Associated Press (AP) to publish the picture. "I cannot imagine the pain
and suffering Lance Corporal Bernard's death has caused his family. Why your
organisation would purposefully defy the family's wishes, knowing full well that
it will lead to yet more anguish, is beyond me,"
The picture illustrated the sacrifice and the bravery of those fighting in
Afghanistan. "We feel it is our journalistic duty to show the reality of
the war there, however unpleasant and brutal that sometimes is," said Mr
Santiago Lyon, director of photography for AP.
An ethical question: when the public's demand for information collides with a private individual's demand for non-disclosure, which should win? How do we weigh the pros and cons?
URL: http://en.mercopress.com/2009/09/16/lula-da-silva-supports-unrestricted-political-campaigning-in-internet The article talks about the use of the internet in political campaigns. Brazilian Pre...
WIKILEAK RELEASES SECRET CHURCH DOCUMENTS! The First Link is an article regarding Wikileaks releasing a 'copyrighted' and confidential Church document of the Mormons (also known as the Church of J...
I posted a bookmark on something related to this issue. http://www.todayonline.com/World/EDC090907-0000047/The-photo-thats-caused-a-stir AP decided to publish a photo of a fatally wounded young ...
An American university professor predicted that the mass media will lose its status as the world’s primary information source.
Instead, people will demand customised content to suit their individual needs, he said.
“People will increasingly have the ability to choose news and information according to their individual interests,” he told 400 media professionals, lecturers and students at the Singapore Press Holdings’ Media in Transition Lecture Series.
Crosbie said that this new media world might develop as a result of today’s information overload. Tailoring news to each reader could help address the million-dollar question of how to continue making money from print media at a time when online news is flourishing and free, he said.
Sam Harris’s first book, The End of Faith, ignited a
worldwide debate about the validity of religion. In the aftermath,
Harris discovered that most people—from religious fundamentalists to
non-believing scientists—agree on one point: Science has nothing to say
on the subject of human values. Indeed, our failure to address
questions of meaning and morality through science has now become the
most common justification for religious faith. It is also the primary
reason why so many secularists and religious moderates feel obligated
to “respect” the hardened superstitions of their more devout
neighbors.
In this explosive new book, Sam Harris tears down the wall
between scientific facts and human values, arguing that most people are
simply mistaken about the relationship between morality and the rest of
human knowledge. Harris urges us to think about morality in terms of
human and animal well-being, viewing the experiences of conscious
creatures as peaks and valleys on a “moral landscape.” Because there
are definite facts to be known about where we fall on this landscape,
Harris foresees a time when science will no longer limit itself to
merely describing what people do in the name of “morality”; in
principle, science should be able to tell us what we ought to do to
live the best lives possible.
Harris demonstrates that we already know
enough about the human brain and its relationship to events in the
world to say that there are right and wrong answers to the most
pressing questions of human life. Because such answers exist, moral
relativism is simply false—and comes at increasing cost to humanity.
It’s odd that a narrative of crisis, of a systemic failure, in American education is currently so persuasive. This back-to-school season, we have Davis Guggenheim’s documentary about the charter-school movement, “Waiting for ‘Superman’ ”; two short, dyspeptic books about colleges and universities, “Higher Education?,” by Andrew Hacker and Claudia Dreifus, and “Crisis on Campus,” by Mark C. Taylor; and a lot of positive attention to the school-reform movement in the national press. From any of these sources, it would be difficult to reach the conclusion that, over all, the American education system works quite well.
In higher education, the reform story isn’t so fully baked yet, but its main elements are emerging. The system is vast: hundreds of small liberal-arts colleges; a new and highly leveraged for-profit sector that offers degrees online; community colleges; state universities whose budgets are being cut because of the recession; and the big-name private universities, which get the most attention. You wouldn’t design a system this way—it’s filled with overlaps and competitive excess. Much of it strives toward an ideal that took shape in nineteenth-century Germany: the university as a small, élite center of pure scholarly research. Research is the rationale for low teaching loads, publication requirements, tenure, tight-knit academic disciplines, and other practices that take it on the chin from Taylor, Hacker, and Dreifus for being of little benefit to students or society.
Yet for a system that—according to Taylor, especially—is deeply in crisis, American higher education is not doing badly. The lines of people wanting to get into institutions that the authors say are just waiting to cheat them by overcharging and underteaching grow ever longer and more international, and the people waiting in those lines don’t seem deterred by price increases, even in a terrible recession.
There have been attempts in the past to make the system more rational and less redundant, and to shrink the portion of it that undertakes scholarly research, but they have not met with much success, and not just because of bureaucratic resistance by the interested parties. Large-scale, decentralized democratic societies are not very adept at generating neat, rational solutions to messy situations. The story line on education, at this ill-tempered moment in American life, expresses what might be called the Noah’s Ark view of life: a vast territory looks so impossibly corrupted that it must be washed away, so that we can begin its activities anew, on finer, higher, firmer principles. One should treat any perception that something so large is so completely awry with suspicion, and consider that it might not be true—especially before acting on it.
Mass higher education is one of the great achievements of American democracy. It embodies a faith in the capabilities of ordinary people that the Founders simply didn't have.