Group items tagged: Thought Experiment

Weiye Loh

Science, Strong Inference -- Proper Scientific Method

  • Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist's field and methods of study are as good as every other scientist's and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.
  • Why should there be such rapid advances in some fields and not in others? I think the usual explanations that we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of research contracts - are important but inadequate. I have begun to believe that the primary factor in scientific advance is an intellectual one. These rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name of "strong inference." I believe it is important to examine this method, its use and history and rationale, and to see whether other groups and individuals might learn to adopt it profitably in their own scientific and intellectual work. In its separate elements, strong inference is just the simple and old-fashioned method of inductive inference that goes back to Francis Bacon. The steps are familiar to every college student and are practiced, off and on, by every scientist. The difference comes in their systematic application. Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly: (1) devising alternative hypotheses; (2) devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses; (3) carrying out the experiment so as to get a clean result; (4) recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain; and so on.
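
For readers who see the cycle more easily as code, here is a toy sketch in Python. Everything in it (the candidate laws, the oracle standing in for nature, the function names) is a hypothetical illustration of the four steps above, not anything from Platt's essay:

```python
# A toy rendering of Platt's four-step cycle, for illustration only.
def strong_inference(hypotheses, oracle, experiments):
    """hypotheses: {name: prediction_fn}; returns the names that survive exclusion."""
    live = dict(hypotheses)                    # step 1: alternative hypotheses
    for x in experiments:
        if len({f(x) for f in live.values()}) < 2:
            continue                           # not crucial: no outcome excludes anything
        outcome = oracle(x)                    # steps 2-3: crucial experiment, clean result
        live = {n: f for n, f in live.items() if f(x) == outcome}   # exclusion
        if len(live) <= 1:
            break                              # step 4: recycle with subhypotheses here
    return sorted(live)

# Hypothetical candidate laws for an unknown relationship y = f(x).
candidates = {"y = x + 1": lambda x: x + 1,
              "y = 2x":    lambda x: 2 * x,
              "y = x^2":   lambda x: x ** 2}
print(strong_inference(candidates, oracle=lambda x: 2 * x, experiments=[1, 2, 3]))
# -> ['y = 2x']: the two rival hypotheses are excluded by two crucial experiments
```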
  • On any new problem, of course, inductive inference is not as simple and certain as deduction, because it involves reaching out into the unknown. Steps 1 and 2 require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome, and exclusion will be related in a rigorous syllogism; and the question of how to generate such inventions is one which has been extensively discussed elsewhere (2, 3). What the formal schema reminds us to do is to try to make these inventions, to take the next step, to proceed to the next fork, without dawdling or getting tied up in irrelevancies.
  • It is clear why this makes for rapid and powerful progress. For exploring the unknown, there is no faster method; this is the minimum sequence of steps. Any conclusion that is not an exclusion is insecure and must be rechecked. Any delay in recycling to the next set of hypotheses is only a delay. Strong inference, and the logical tree it generates, are to inductive reasoning what the syllogism is to deductive reasoning in that they offer a regular method for reaching firm inductive conclusions one after the other as rapidly as possible.
  • "But what is so novel about this?" someone will say. This is the method of science and always has been, why give it a special name? The reason is that many of us have almost forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method- oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations. We fail to teach our students how to sharpen up their inductive inferences. And we do not realize the added power that the regular and explicit use of alternative hypothesis and sharp exclusion could give us at every step of our research.
  • A distinguished cell biologist rose and said, "No two cells give the same properties. Biology is the science of heterogeneous systems." And he added privately, "You know there are scientists, and there are people in science who are just working with these over-simplified model systems - DNA chains and in vitro systems - who are not doing science at all. We need their auxiliary work: they build apparatus, they make minor studies, but they are not scientists." To which Cy Levinthal replied: "Well, there are two kinds of biologists, those who are looking to see if there is one thing that can be understood and those who keep saying it is very complicated and that nothing can be understood. . . . You must study the simplest system you think has the properties you are interested in."
  • At the 1958 Conference on Biophysics, at Boulder, there was a dramatic confrontation between the two points of view. Leo Szilard said: "The problems of how enzymes are induced, of how proteins are synthesized, of how antibodies are formed, are closer to solution than is generally believed. If you do stupid experiments, and finish one a year, it can take 50 years. But if you stop doing experiments for a little while and think how proteins can possibly be synthesized, there are only about 5 different ways, not 50! And it will take only a few experiments to distinguish these." One of the young men added: "It is essentially the old question: How small and elegant an experiment can you perform?" These comments upset a number of those present. An electron microscopist said, "Gentlemen, this is off the track. This is philosophy of science." Szilard retorted, "I was not quarreling with third-rate scientists: I was quarreling with first-rate scientists."
  • Any criticism or challenge to consider changing our methods strikes of course at all our ego-defenses. But in this case the analytical method offers the possibility of such great increases in effectiveness that it is unfortunate that it cannot be regarded more often as a challenge to learning rather than as a challenge to combat. Many of the recent triumphs in molecular biology have in fact been achieved on just such "oversimplified model systems," very much along the analytical lines laid down in the 1958 discussion. They have not fallen to the kind of men who justify themselves by saying "No two cells are alike," regardless of how true that may ultimately be. The triumphs are in fact triumphs of a new way of thinking.
  • The emphasis on strong inference is also partly due to the nature of the fields themselves. Biology, with its vast informational detail and complexity, is a "high-information" field, where years and decades can easily be wasted on the usual type of "low-information" observations or experiments if one does not think carefully in advance about what the most important and conclusive experiments would be. And in high-energy physics, both the "information flux" of particles from the new accelerators and the million-dollar costs of operation have forced a similar analytical approach. It pays to have a top-notch group debate every experiment ahead of time; and the habit spreads throughout the field.
  • Historically, I think, there have been two main contributions to the development of a satisfactory strong-inference method. The first is that of Francis Bacon (13). He wanted a "surer method" of "finding out nature" than either the logic-chopping or all-inclusive theories of the time or the laudable but crude attempts to make inductions "by simple enumeration." He did not merely urge experiments as some suppose; he showed the fruitfulness of interconnecting theory and experiment so that the one checked the other. Of the many inductive procedures he suggested, the most important, I think, was the conditional inductive tree, which proceeded from alternative hypotheses (possible "causes," as he calls them), through crucial experiments ("Instances of the Fingerpost"), to exclusion of some alternatives and adoption of what is left ("establishing axioms"). His Instances of the Fingerpost are explicitly at the forks in the logical tree, the term being borrowed "from the fingerposts which are set up where roads part, to indicate the several directions."
  • Here was a method that could separate off the empty theories! Bacon said the inductive method could be learned by anybody, just like learning to "draw a straighter line or more perfect circle . . . with the help of a ruler or a pair of compasses." "My way of discovering sciences goes far to level men's wit and leaves but little to individual excellence, because it performs everything by the surest rules and demonstrations." Even occasional mistakes would not be fatal. "Truth will sooner come out from error than from confusion."
  • Nevertheless there is a difficulty with this method. As Bacon emphasizes, it is necessary to make "exclusions." He says, "The induction which is to be available for the discovery and demonstration of sciences and arts, must analyze nature by proper rejections and exclusions, and then, after a sufficient number of negatives, come to a conclusion on the affirmative instances." "[To man] it is granted only to proceed at first by negatives, and at last to end in affirmatives after exclusion has been exhausted." Or, as the philosopher Karl Popper says today, there is no such thing as proof in science - because some later alternative explanation may be as good or better - so that science advances only by disproofs. There is no point in making hypotheses that are not falsifiable, because such hypotheses do not say anything: "it must be possible for an empirical scientific system to be refuted by experience" (14).
  • The difficulty is that disproof is a hard doctrine. If you have a hypothesis and I have another hypothesis, evidently one of them must be eliminated. The scientist seems to have no choice but to be either soft-headed or disputatious. Perhaps this is why so many tend to resist the strong analytical approach and why some great scientists are so disputatious.
  • Fortunately, it seems to me, this difficulty can be removed by the use of a second great intellectual invention, the "method of multiple hypotheses," which is what was needed to round out the Baconian scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the century, who is best known for his contribution to the Chamberlin-Moulton hypothesis of the origin of the solar system.
  • Chamberlin says our trouble is that when we make a single hypothesis, we become attached to it. "The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence, and as the explanation grows into a definite theory his parental affections cluster about his offspring and it grows more and more dear to him. . . . There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory..." "To avoid this grave danger, the method of multiple working hypotheses is urged. It differs from the simple working hypothesis in that it distributes the effort and divides the affections. . . . Each hypothesis suggests its own criteria, its own method of proof, its own method of developing the truth, and if a group of hypotheses encompass the subject on all sides, the total outcome of means and of methods is full and rich."
  • The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas. It becomes much easier then for each of us to aim every day at conclusive disproofs - at strong inference - without either reluctance or combativeness. In fact, when there are multiple hypotheses, which are not anyone's "personal property," and when there are crucial experiments to test them, the daily life in the laboratory takes on an interest and excitement it never had, and the students can hardly wait to get to work to see how the detective story will come out. It seems to me that this is the reason for the development of those distinctive habits of mind and the "complex thought" that Chamberlin described, the reason for the sharpness, the excitement, the zeal, the teamwork - yes, even international teamwork - in molecular biology and high- energy physics today. What else could be so effective?
  • Unfortunately, I think, there are other areas of science today that are sick by comparison, because they have forgotten the necessity for alternative hypotheses and disproof. Each man has only one branch - or none - on the logical tree, and it twists at random without ever coming to the need for a crucial decision at any point. We can see from the external symptoms that there is something scientifically wrong. The Frozen Method, The Eternal Surveyor, The Never Finished, The Great Man With a Single Hypothesis, The Little Club of Dependents, The Vendetta, The All-Encompassing Theory Which Can Never Be Falsified.
  • a "theory" of this sort is not a theory at all, because it does not exclude anything. It predicts everything, and therefore does not predict anything. It becomes simply a verbal formula which the graduate student repeats and believes because the professor has said it so often. This is not science, but faith; not theory, but theology. Whether it is hand-waving or number-waving, or equation-waving, a theory is not a theory unless it can be disproved. That is, unless it can be falsified by some possible experimental outcome.
  • the work methods of a number of scientists have been testimony to the power of strong inference. Is success not due in many cases to systematic use of Bacon's "surest rules and demonstrations" as much as to rare and unattainable intellectual power? Faraday's famous diary (16), or Fermi's notebooks (3, 17), show how these men believed in the effectiveness of daily steps in applying formal inductive methods to one problem after another.
  • Surveys, taxonomy, design of equipment, systematic measurements and tables, theoretical computations - all have their proper and honored place, provided they are parts of a chain of precise induction of how nature works. Unfortunately, all too often they become ends in themselves, mere time-serving from the point of view of real scientific advance, a hypertrophied methodology that justifies itself as a lore of respectability.
  • We speak piously of taking measurements and making small studies that will "add another brick to the temple of science." Most such bricks just lie around the brickyard (20). Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.
  • Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important. Strong inference redirects a man to problem-orientation, but it requires him to be willing repeatedly to put aside his last methods and teach himself new ones.
  • anyone who asks the question about scientific effectiveness will also conclude that much of the mathematizing in physics and chemistry today is irrelevant if not misleading. The great value of mathematical formulation is that when an experiment agrees with a calculation to five decimal places, a great many alternative hypotheses are pretty well excluded (though the Bohr theory and the Schrödinger theory both predict exactly the same Rydberg constant!). But when the fit is only to two decimal places, or one, it may be a trap for the unwary; it may be no better than any rule-of-thumb extrapolation, and some other kind of qualitative exclusion might be more rigorous for testing the assumptions and more important to scientific understanding than the quantitative fit.
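
To make the decimal-places point concrete, here is a small illustration with made-up numbers (the two "models" and the measured value are hypothetical):

```python
# Agreement quoted to two decimal places excludes neither candidate;
# the same comparison at five decimal places excludes one. Made-up data.
measured = 3.14159
candidates = {"model A": 3.141592, "model B": 3.144390}

for places in (2, 5):
    tolerance = 0.5 * 10 ** -places            # "agrees to N decimal places"
    surviving = [name for name, predicted in candidates.items()
                 if abs(predicted - measured) < tolerance]
    print(f"to {places} decimal places, surviving: {surviving}")
# to 2 decimal places, surviving: ['model A', 'model B']
# to 5 decimal places, surviving: ['model A']
```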
  • Today we preach that science is not science unless it is quantitative. We substitute correlations for causal studies, and physical equations for organic reasoning. Measurements and equations are supposed to sharpen thinking, but, in my observation, they more often tend to make the thinking noncausal and fuzzy. They tend to become the object of scientific manipulation instead of auxiliary tests of crucial inferences.
  • Many - perhaps most - of the great issues of science are qualitative, not quantitative, even in physics and chemistry. Equations and measurements are useful when and only when they are related to proof; but proof or disproof comes first and is in fact strongest when it is absolutely convincing without any quantitative measurement.
  • you can catch phenomena in a logical box or in a mathematical box. The logical box is coarse but strong. The mathematical box is fine-grained but flimsy. The mathematical box is a beautiful way of wrapping up a problem, but it will not hold the phenomena unless they have been caught in a logical box to begin with.
  • Of course it is easy - and all too common - for one scientist to call the others unscientific. My point is not that my particular conclusions here are necessarily correct, but that we have long needed some absolute standard of possible scientific effectiveness by which to measure how well we are succeeding in various areas - a standard that many could agree on and one that would be undistorted by the scientific pressures and fashions of the times and the vested interests and busywork that they develop. It is not public evaluation I am interested in so much as a private measure by which to compare one's own scientific performance with what it might be. I believe that strong inference provides this kind of standard of what the maximum possible scientific effectiveness could be - as well as a recipe for reaching it.
  • The strong-inference point of view is so resolutely critical of methods of work and values in science that any attempt to compare specific cases is likely to sound both smug and destructive. Mainly one should try to teach it by example and by exhorting to self-analysis and self-improvement only in general terms.
  • one severe but useful private test - a touchstone of strong inference - that removes the necessity for third-person criticism, because it is a test that anyone can learn to carry with him for use as needed. It is our old friend the Baconian "exclusion," but I call it "The Question." Obviously it should be applied as much to one's own thinking as to others'. It consists of asking in your own mind, on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"; or, on hearing a scientific experiment described, "But sir, what hypothesis does your experiment disprove?"
  • It is not true that all science is equal; or that we cannot justly compare the effectiveness of scientists by any method other than a mutual-recommendation system. The man to watch, the man to put your money on, is not the man who wants to make "a survey" or a "more detailed study" but the man with the notebook, the man with the alternative hypotheses and the crucial experiments, the man who knows how to answer your Question of disproof and is already working on it.
  • There is so much bad science and bad statistics in media reports, publications, and everyday conversation that I think it is important to understand facts, proofs, and the associated pitfalls.
Weiye Loh

Edge: HOW DOES OUR LANGUAGE SHAPE THE WAY WE THINK? By Lera Boroditsky

  • Do the languages we speak shape the way we see the world, the way we think, and the way we live our lives? Do people who speak different languages think differently simply because they speak different languages? Does learning new languages change the way you think? Do polyglots think differently when speaking different languages?
  • For a long time, the idea that language might shape thought was considered at best untestable and more often simply wrong. Research in my labs at Stanford University and at MIT has helped reopen this question. We have collected data around the world: from China, Greece, Chile, Indonesia, Russia, and Aboriginal Australia.
  • What we have learned is that people who speak different languages do indeed think differently and that even flukes of grammar can profoundly affect how we see the world.
  • Suppose you want to say, "Bush read Chomsky's latest book." Let's focus on just the verb, "read." To say this sentence in English, we have to mark the verb for tense; in this case, we have to pronounce it like "red" and not like "reed." In Indonesian you need not (in fact, you can't) alter the verb to mark tense. In Russian you would have to alter the verb to indicate tense and gender. So if it was Laura Bush who did the reading, you'd use a different form of the verb than if it was George. In Russian you'd also have to include in the verb information about completion. If George read only part of the book, you'd use a different form of the verb than if he'd diligently plowed through the whole thing. In Turkish you'd have to include in the verb how you acquired this information: if you had witnessed this unlikely event with your own two eyes, you'd use one verb form, but if you had simply read or heard about it, or inferred it from something Bush said, you'd use a different verb form.
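
The contrast in that example amounts to a small lookup table. The sketch below is a hypothetical summary of just the distinctions named in the paragraph above:

```python
# What each language obliges a speaker to mark on the verb in
# "Bush read Chomsky's latest book" -- simplified, per the paragraph above.
verb_must_encode = {
    "English":    ["tense"],
    "Indonesian": [],                          # tense cannot be marked at all
    "Russian":    ["tense", "subject gender", "completion"],
    "Turkish":    ["evidentiality"],           # witnessed vs. reported or inferred
}

for language, required in verb_must_encode.items():
    print(f"{language:10} -> {', '.join(required) or '(bare verb)'}")
```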
  • Clearly, languages require different things of their speakers. Does this mean that the speakers think differently about the world? Do English, Indonesian, Russian, and Turkish speakers end up attending to, partitioning, and remembering their experiences differently just because they speak different languages?
  • For some scholars, the answer to these questions has been an obvious yes. Just look at the way people talk, they might say. Certainly, speakers of different languages must attend to and encode strikingly different aspects of the world just so they can use their language properly. Scholars on the other side of the debate don't find the differences in how people talk convincing. All our linguistic utterances are sparse, encoding only a small part of the information we have available. Just because English speakers don't include the same information in their verbs that Russian and Turkish speakers do doesn't mean that English speakers aren't paying attention to the same things; all it means is that they're not talking about them. It's possible that everyone thinks the same way, notices the same things, but just talks differently.
  • Believers in cross-linguistic differences counter that everyone does not pay attention to the same things: if everyone did, one might think it would be easy to learn to speak other languages. Unfortunately, learning a new language (especially one not closely related to those you know) is never easy; it seems to require paying attention to a new set of distinctions. Whether it's distinguishing modes of being in Spanish, evidentiality in Turkish, or aspect in Russian, learning to speak these languages requires something more than just learning vocabulary: it requires paying attention to the right things in the world so that you have the correct information to include in what you say.
  • Follow me to Pormpuraaw, a small Aboriginal community on the western edge of Cape York, in northern Australia. I came here because of the way the locals, the Kuuk Thaayorre, talk about space. Instead of words like "right," "left," "forward," and "back," which, as commonly used in English, define space relative to an observer, the Kuuk Thaayorre, like many other Aboriginal groups, use cardinal-direction terms — north, south, east, and west — to define space.1 This is done at all scales, which means you have to say things like "There's an ant on your southeast leg" or "Move the cup to the north northwest a little bit." One obvious consequence of speaking such a language is that you have to stay oriented at all times, or else you cannot speak properly. The normal greeting in Kuuk Thaayorre is "Where are you going?" and the answer should be something like "South-southeast, in the middle distance." If you don't know which way you're facing, you can't even get past "Hello."
  • The result is a profound difference in navigational ability and spatial knowledge between speakers of languages that rely primarily on absolute reference frames (like Kuuk Thaayorre) and languages that rely on relative reference frames (like English).2 Simply put, speakers of languages like Kuuk Thaayorre are much better than English speakers at staying oriented and keeping track of where they are, even in unfamiliar landscapes or inside unfamiliar buildings. What enables them — in fact, forces them — to do this is their language. Having their attention trained in this way equips them to perform navigational feats once thought beyond human capabilities. Because space is such a fundamental domain of thought, differences in how people think about space don't end there. People rely on their spatial knowledge to build other, more complex, more abstract representations. Representations of such things as time, number, musical pitch, kinship relations, morality, and emotions have been shown to depend on how we think about space. So if the Kuuk Thaayorre think differently about space, do they also think differently about other things, like time? This is what my collaborator Alice Gaby and I came to Pormpuraaw to find out.
  • To test this idea, we gave people sets of pictures that showed some kind of temporal progression (e.g., pictures of a man aging, or a crocodile growing, or a banana being eaten). Their job was to arrange the shuffled photos on the ground to show the correct temporal order. We tested each person in two separate sittings, each time facing in a different cardinal direction. If you ask English speakers to do this, they'll arrange the cards so that time proceeds from left to right. Hebrew speakers will tend to lay out the cards from right to left, showing that writing direction in a language plays a role.3 So what about folks like the Kuuk Thaayorre, who don't use words like "left" and "right"? What will they do? The Kuuk Thaayorre did not arrange the cards more often from left to right than from right to left, nor more toward or away from the body. But their arrangements were not random: there was a pattern, just a different one from that of English speakers. Instead of arranging time from left to right, they arranged it from east to west. That is, when they were seated facing south, the cards went left to right. When they faced north, the cards went from right to left. When they faced east, the cards came toward the body and so on. This was true even though we never told any of our subjects which direction they faced. The Kuuk Thaayorre not only knew that already (usually much better than I did), but they also spontaneously used this spatial orientation to construct their representations of time.
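
The geometry behind those arrangements fits in a few lines. This is a sketch of the logic as described above (time running east to west in absolute coordinates), not code from the study:

```python
# Time runs east -> west in the world, so its body-relative direction
# depends only on which way the arranger happens to be facing.
COMPASS = ["north", "east", "south", "west"]     # clockwise order
RELATIVE = {0: "away from the body", 1: "to the right",
            2: "toward the body",   3: "to the left"}

def timeline_layout(facing):
    east = (COMPASS.index("east") - COMPASS.index(facing)) % 4   # where east sits
    west = (east + 2) % 4                                        # west is opposite
    return f"earlier times {RELATIVE[east]}, later times {RELATIVE[west]}"

for facing in COMPASS:
    print(f"facing {facing}: {timeline_layout(facing)}")
# facing north: earlier times to the right, later times to the left
# facing east:  earlier times away from the body, later times toward the body
# facing south: earlier times to the left, later times to the right
# facing west:  earlier times toward the body, later times away from the body
```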
  • I have described how languages shape the way we think about space, time, colors, and objects. Other studies have found effects of language on how people construe events, reason about causality, keep track of number, understand material substance, perceive and experience emotion, reason about other people's minds, choose to take risks, and even in the way they choose professions and spouses.8 Taken together, these results show that linguistic processes are pervasive in most fundamental domains of thought, unconsciously shaping us from the nuts and bolts of cognition and perception to our loftiest abstract notions and major life decisions. Language is central to our experience of being human, and the languages we speak profoundly shape the way we think, the way we see the world, the way we live our lives.
  • The fact that even quirks of grammar, such as grammatical gender, can affect our thinking is profound. Such quirks are pervasive in language; gender, for example, applies to all nouns, which means that it is affecting how people think about anything that can be designated by a noun.
  • How does an artist decide whether death, say, or time should be painted as a man or a woman? It turns out that in 85 percent of such personifications, whether a male or female figure is chosen is predicted by the grammatical gender of the word in the artist's native language. So, for example, German painters are more likely to paint death as a man, whereas Russian painters are more likely to paint death as a woman.
  • Does treating chairs as masculine and beds as feminine in the grammar make Russian speakers think of chairs as being more like men and beds as more like women in some way? It turns out that it does. In one study, we asked German and Spanish speakers to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical gender. For example, when asked to describe a "key" — a word that is masculine in German and feminine in Spanish — the German speakers were more likely to use words like "hard," "heavy," "jagged," "metal," "serrated," and "useful," whereas Spanish speakers were more likely to say "golden," "intricate," "little," "lovely," "shiny," and "tiny." To describe a "bridge," which is feminine in German and masculine in Spanish, the German speakers said "beautiful," "elegant," "fragile," "peaceful," "pretty," and "slender," and the Spanish speakers said "big," "dangerous," "long," "strong," "sturdy," and "towering." This was true even though all testing was done in English, a language without grammatical gender. The same pattern of results also emerged in entirely nonlinguistic tasks (e.g., rating similarity between pictures). And we can also show that it is aspects of language per se that shape how people think: teaching English speakers new grammatical gender systems influences mental representations of objects in the same way it does with German and Spanish speakers. Apparently even small flukes of grammar, like the seemingly arbitrary assignment of gender to a noun, can have an effect on people's ideas of concrete objects in the world.
  • Even basic aspects of time perception can be affected by language. For example, English speakers prefer to talk about duration in terms of length (e.g., "That was a short talk," "The meeting didn't take long"), while Spanish and Greek speakers prefer to talk about time in terms of amount, relying more on words like "much," "big," and "little" rather than "short" and "long." Our research into such basic cognitive abilities as estimating duration shows that speakers of different languages differ in ways predicted by the patterns of metaphors in their language. (For example, when asked to estimate duration, English speakers are more likely to be confused by distance information, estimating that a line of greater length remains on the test screen for a longer period of time, whereas Greek speakers are more likely to be confused by amount, estimating that a container that is fuller remains longer on the screen.)
  • An important question at this point is: Are these differences caused by language per se or by some other aspect of culture? Of course, the lives of English, Mandarin, Greek, Spanish, and Kuuk Thaayorre speakers differ in a myriad of ways. How do we know that it is language itself that creates these differences in thought and not some other aspect of their respective cultures? One way to answer this question is to teach people new ways of talking and see if that changes the way they think. In our lab, we've taught English speakers different ways of talking about time. In one such study, English speakers were taught to use size metaphors (as in Greek) to describe duration (e.g., a movie is larger than a sneeze), or vertical metaphors (as in Mandarin) to describe event order. Once the English speakers had learned to talk about time in these new ways, their cognitive performance began to resemble that of Greek or Mandarin speakers. This suggests that patterns in a language can indeed play a causal role in constructing how we think.6 In practical terms, it means that when you're learning a new language, you're not simply learning a new way of talking, you are also inadvertently learning a new way of thinking. Beyond abstract or complex domains of thought like space and time, languages also meddle in basic aspects of visual perception — our ability to distinguish colors, for example. Different languages divide up the color continuum differently: some make many more distinctions between colors than others, and the boundaries often don't line up across languages.
  • To test whether differences in color language lead to differences in color perception, we compared Russian and English speakers' ability to discriminate shades of blue. In Russian there is no single word that covers all the colors that English speakers call "blue." Russian makes an obligatory distinction between light blue (goluboy) and dark blue (siniy). Does this distinction mean that siniy blues look more different from goluboy blues to Russian speakers? Indeed, the data say yes. Russian speakers are quicker to distinguish two shades of blue that are called by the different names in Russian (i.e., one being siniy and the other being goluboy) than if the two fall into the same category. For English speakers, all these shades are still designated by the same word, "blue," and there are no comparable differences in reaction time. Further, the Russian advantage disappears when subjects are asked to perform a verbal interference task (reciting a string of digits) while making color judgments but not when they're asked to perform an equally difficult spatial interference task (keeping a novel visual pattern in memory). The disappearance of the advantage when performing a verbal task shows that language is normally involved in even surprisingly basic perceptual judgments — and that it is language per se that creates this difference in perception between Russian and English speakers.
  • What it means for a language to have grammatical gender is that words belonging to different genders get treated differently grammatically and words belonging to the same grammatical gender get treated the same grammatically. Languages can require speakers to change pronouns, adjective and verb endings, possessives, numerals, and so on, depending on the noun's gender. For example, to say something like "my chair was old" in Russian (moy stul bil' stariy), you'd need to make every word in the sentence agree in gender with "chair" (stul), which is masculine in Russian. So you'd use the masculine form of "my," "was," and "old." These are the same forms you'd use in speaking of a biological male, as in "my grandfather was old." If, instead of speaking of a chair, you were speaking of a bed (krovat'), which is feminine in Russian, or about your grandmother, you would use the feminine form of "my," "was," and "old."
  • For a long time, the idea that language might shape thought was considered at best untestable and more often simply wrong. Research in my labs at Stanford University and at MIT has helped reopen this question. We have collected data around the world: from China, Greece, Chile, Indonesia, Russia, and Aboriginal Australia. What we have learned is that people who speak different languages do indeed think differently and that even flukes of grammar can profoundly affect how we see the world. Language is a uniquely human gift, central to our experience of being human. Appreciating its role in constructing our mental lives brings us one step closer to understanding the very nature of humanity.
Weiye Loh

The world through language » Scienceline

  • If you know only one language, you live only once. A man who knows two languages is worth two men. He who loses his language loses his world. (Czech, French and Gaelic proverbs.)
  • The hypothesis first put forward fifty years ago by linguist Benjamin Lee Whorf—that our language significantly affects our experience of the world—is making a comeback in various forms, and with it no shortage of debate.
  • The idea that language shapes thought was taboo for a long time, said Dan Slobin, a psycholinguist at the University of California, Berkeley. “Now the ice is breaking.” The taboo, according to Slobin, was largely due to the widespread acceptance of the ideas of Noam Chomsky, one of the most influential linguists of the 20th century. Chomsky proposed that the human brain comes equipped at birth with a set of rules—or universal grammar—that organizes language. As he likes to say, a visiting Martian would conclude that everyone on Earth speaks mutually unintelligible dialects of a single language.
  • Chomsky is hesitant to accept the recent claims of language’s profound influence on thought. “I’m rather skeptical about all of this, though there probably are some marginal effects,” he said.
  • Some advocates of the Whorfian view find support in studies of how languages convey spatial orientation. English and Dutch speakers describe orientation from an egocentric frame of reference (to my left or right). Mayan speakers use a geocentric frame of reference (to the north or south).
  • Does this mean they think about space in fundamentally different ways? Not exactly, said Lila Gleitman, a psychologist from the University of Pennsylvania. Since we ordinarily assume that others talk like us, she explained, vague instructions like “arrange it the same way” will be interpreted in whatever orientation (egocentric or geocentric) is most common in our language. “That’s going to influence how you solve an ambiguous problem, but it doesn’t mean that’s the way you think, or must think,” said Gleitman. In fact, she repeated the experiment with unambiguous instructions, providing cues to indicate whether objects should be arranged north-south or left-right. She found that people in both languages are just as good at arranging objects in either orientation.
  • Similarly, Anna Papafragou, a psychologist at the University of Delaware, thinks that the extent of language’s effect on thought has been somewhat exaggerated.
  • Papafragou compared how long Greek and English speakers paid attention to clip-art animation sequences, for example, a man skating towards a snowman. By measuring their eye movements, Papafragou was able to tell which parts of the scene held their gaze the longest. Because English speakers generally use verbs that describe manner of motion, like slide and skip, she predicted they would pay more attention to what was moving (the skates). Since Greeks use verbs that describe path, like approach and ascend, they should pay more attention to endpoint of the motion (the snowman). She found that this was true only when people had to describe the scene; when asked to memorize it, attention patterns were nearly identical. According to Papafragou, when people need to speak about what they see, they’ll focus on the parts relevant for planning sentences. Otherwise, language does not show much of an effect on attention.
  • “Each language is a bright transparent medium through which our thoughts may pass, relatively undistorted,” said Gleitman.
  • Others think that language does, in fact, introduce some distortion. Linguist Guy Deutscher of the University of Manchester in the U.K. suggests that while language can’t prevent you from thinking anything, it does compel you to think in specific ways. Language forces you to habitually pay attention to different aspects of the world.
  • For example, many languages assign genders to nouns ("bridge" is feminine in German and masculine in Spanish). A study by cognitive psychologist Lera Boroditsky of Stanford University found that German speakers were more likely to describe "bridge" with feminine terms like elegant and slender, while Spanish speakers picked words like sturdy and towering. Having to constantly keep track of gender, Deutscher suggests, may subtly change the way native speakers imagine objects' characteristics.
  • However, this falls short of the extreme view some ascribe to Whorf: that language actually determines thought. According to Steven Pinker, an experimental psychologist and linguist at Harvard University, three things have to hold for the Whorfian hypothesis to be true: speakers of one language should find it nearly impossible to think like speakers of another language; the differences in language should affect actual reasoning; and the differences should be caused by language, not just correlated with it. Otherwise, we may just be dealing with a case of “crying Whorf.”
  • But even mild claims may reveal complexities in the relationship between language and thought. “You can’t actually separate language, thought and perception,” said Debi Roberson, a psychologist at the University of Essex in the U.K. “All of these processes are going on, not just in parallel, but interactively.”
  • Language may not, as the Gaelic proverb suggests, form our entire world. But it will continue to provide insights into our thoughts—whether as a window, a looking glass, or a distorted mirror.
Jude John

What's so Original in Academic Research?

Thanks for your comments. I may have appeared to be contradictory, but what I really meant was that ownership of IP should not be a motivating factor to innovate. I realise that in our capitalistic...

Weiye Loh

TPM: The Philosophers' Magazine | Is morality relative? Depends on your personality

  • no real evidence is ever offered for the original assumption that ordinary moral thought and talk has this objective character. Instead, philosophers tend simply to assert that people’s ordinary practice is objectivist and then begin arguing from there.
  • If we really want to go after these issues in a rigorous way, it seems that we should adopt a different approach. The first step is to engage in systematic empirical research to figure out how the ordinary practice actually works. Then, once we have the relevant data in hand, we can begin looking more deeply into the philosophical implications – secure in the knowledge that we are not just engaging in a philosophical fiction but rather looking into the philosophical implications of people’s actual practices.
  • in the past few years, experimental philosophers have been gathering a wealth of new data on these issues, and we now have at least the first glimmerings of a real empirical research program here
  • when researchers took up these questions experimentally, they did not end up confirming the traditional view. They did not find that people overwhelmingly favoured objectivism. Instead, the results consistently point to a more complex picture. There seems to be a striking degree of conflict even in the intuitions of ordinary folks, with some people under some circumstances offering objectivist answers, while other people under other circumstances offer more relativist views. And that is not all. The experimental results seem to be giving us an ever deeper understanding of why it is that people are drawn in these different directions, what it is that makes some people move toward objectivism and others toward more relativist views.
  • consider a study by Adam Feltz and Edward Cokely. They were interested in the relationship between belief in moral relativism and the personality trait openness to experience. Accordingly, they conducted a study in which they measured both openness to experience and belief in moral relativism. To get at people’s degree of openness to experience, they used a standard measure designed by researchers in personality psychology. To get at people’s agreement with moral relativism, they told participants about two characters – John and Fred – who held opposite opinions about whether some given act was morally bad. Participants were then asked whether one of these two characters had to be wrong (the objectivist answer) or whether it could be that neither of them was wrong (the relativist answer). What they found was a quite surprising result. It just wasn’t the case that participants overwhelmingly favoured the objectivist answer. Instead, people’s answers were correlated with their personality traits. The higher a participant was in openness to experience, the more likely that participant was to give a relativist answer.
  • Geoffrey Goodwin and John Darley pursued a similar approach, this time looking at the relationship between people’s belief in moral relativism and their tendency to approach questions by considering a whole variety of possibilities. They proceeded by giving participants mathematical puzzles that could only be solved by looking at multiple different possibilities. Thus, participants who considered all these possibilities would tend to get these problems right, whereas those who failed to consider all the possibilities would tend to get the problems wrong. Now comes the surprising result: those participants who got these problems right were significantly more inclined to offer relativist answers than were those participants who got the problems wrong.
  • Shaun Nichols and Tricia Folds-Bennett looked at how people’s moral conceptions develop as they grow older. Research in developmental psychology has shown that as children grow up, they develop different understandings of the physical world, of numbers, of other people’s minds. So what about morality? Do people have a different understanding of morality when they are twenty years old than they do when they are only four years old? What the results revealed was a systematic developmental difference. Young children show a strong preference for objectivism, but as they grow older, they become more inclined to adopt relativist views. In other words, there appears to be a developmental shift toward increasing relativism as children mature. (In an exciting new twist on this approach, James Beebe and David Sackris have shown that this pattern eventually reverses, with middle-aged people showing less inclination toward relativism than college students do.)
  • People are more inclined to be relativists when they score highly in openness to experience, when they have an especially good ability to consider multiple possibilities, when they have matured past childhood (but not when they get to be middle-aged). Looking at these various effects, my collaborators and I thought that it might be possible to offer a single unifying account that explained them all. Specifically, our thought was that people might be drawn to relativism to the extent that they open their minds to alternative perspectives. There could be all sorts of different factors that lead people to open their minds in this way (personality traits, cognitive dispositions, age), but regardless of the instigating factor, researchers seemed always to be finding the same basic effect. The more people have a capacity to truly engage with other perspectives, the more they seem to turn toward moral relativism.
  • To really put this hypothesis to the test, Hagop Sarkissian, Jennifer Wright, John Park, David Tien and I teamed up to run a series of new studies. Our aim was to actually manipulate the degree to which people considered alternative perspectives. That is, we wanted to randomly assign people to different conditions in which they would end up thinking in different ways, so that we could then examine the impact of these different conditions on their intuitions about moral relativism.
  • The results of the study showed a systematic difference between conditions. In particular, as we moved toward more distant cultures, we found a steady shift toward more relativist answers – with people in the first condition tending to agree with the statement that at least one of them had to be wrong, people in the second being pretty evenly split between the two answers, and people in the third tending to reject the statement quite decisively.
  • If we learn that people’s ordinary practice is not an objectivist one – that it actually varies depending on the degree to which people take other perspectives into account – how can we then use this information to address the deeper philosophical issues about the true nature of morality? The answer here is in one way very complex and in another very simple. It is complex in that one can answer such questions only by making use of very sophisticated and subtle philosophical methods. Yet, at the same time, it is simple in that such methods have already been developed and are being continually refined and elaborated within the literature in analytic philosophy. The trick now is just to take these methods and apply them to working out the implications of an ordinary practice that actually exists.
Weiye Loh

The Ashtray: The Ultimatum (Part 1) - NYTimes.com

  • “Under no circumstances are you to go to those lectures. Do you hear me?” Kuhn, the head of the Program in the History and Philosophy of Science at Princeton where I was a graduate student, had issued an ultimatum. It concerned the philosopher Saul Kripke’s lectures — later to be called “Naming and Necessity” — which he had originally given at Princeton in 1970 and planned to give again in the Fall, 1972.
  • Whiggishness — in history of science, the tendency to evaluate and interpret past scientific theories not on their own terms, but in the context of current knowledge. The term comes from Herbert Butterfield’s “The Whig Interpretation of History,” written when Butterfield, a future Regius professor of history at Cambridge, was only 31 years old. Butterfield had complained about Whiggishness, describing it as “…the study of the past with direct and perpetual reference to the present” – the tendency to see all history as progressive, and in an extreme form, as an inexorable march to greater liberty and enlightenment. [3] For Butterfield, on the other hand, “…real historical understanding” can be achieved only by “attempting to see life with the eyes of another century than our own.” [4][5].
  • Kuhn had attacked my Whiggish use of the term "displacement current." [6] I had failed, in his view, to put myself in the mindset of Maxwell's first attempts at creating a theory of electricity and magnetism. I felt that Kuhn had misinterpreted my paper, and that he — not I — had provided a Whiggish interpretation of Maxwell. I said, "You refuse to look through my telescope." And he said, "It's not a telescope, Errol. It's a kaleidoscope." (In this respect, he was probably right.) [7].
  • I asked him, "If paradigms are really incommensurable, how is history of science possible? Wouldn't we be merely interpreting the past in the light of the present? Wouldn't the past be inaccessible to us? Wouldn't it be 'incommensurable?' " [8] He started moaning. He put his head in his hands and was muttering, "He's trying to kill me. He's trying to kill me." And then I added, "…except for someone who imagines himself to be God." It was at this point that Kuhn threw the ashtray at me.
  • I call Kuhn’s reply “The Ashtray Argument.” If someone says something you don’t like, you throw something at him. Preferably something large, heavy, and with sharp edges. Perhaps we were engaged in a debate on the nature of language, meaning and truth. But maybe we just wanted to kill each other.
  • That's the problem with relativism: Who's to say who's right and who's wrong? Somehow I'm not surprised to hear Kuhn was an ashtray-hurler. In the end, what other argument could he make?
  • For us to have a conversation and come to an agreement about the meaning of some word without having to refer to some outside authority like a dictionary, we would of necessity have to be satisfied that our agreement was genuine and not just a polite acknowledgement of each other's right to their opinion, can you agree with that? If so, then let's see if we can agree on the meaning of the word 'know' because that may be the crux of the matter. When I use the word 'know' I mean more than the capacity to apprehend some aspect of the world through language or some other representational symbolism. Included in the word 'know' is the direct sensorial perception of some aspect of the world. For example, I sense the floor that my feet are now resting upon. I 'know' the floor is really there, I can sense it. Perhaps I don't 'know' what the floor is made of, who put it there, and other incidental facts one could know through the usual symbolism such as language as in a story someone tells me. Nevertheless, the reality I need to 'know' is that the floor, or whatever you may wish to call the solid - relative to my body - flat and level surface supported by more structure than the earth, is really there and reliably capable of supporting me. This is true and useful knowledge that goes directly from the floor itself to my knowing about it - via sensation - that has nothing to do with my interpretive system.
  • Now I am interested in 'knowing' my feet in the same way that my feet and the whole body they are connected to 'know' the floor. I sense my feet sensing the floor. My feet are as real as the floor and I know they are there, sensing the floor because I can sense them. Furthermore, now I 'know' that it is 'I' sensing my feet, sensing the floor. Do you see where I am going with this line of thought? I am including in the word 'know' more meaning than it is commonly given by everyday language. Perhaps it sounds as if I want to expand on the Cartesian formula of cogito ergo sum, and in truth I prefer to say I sense therefore I am. It is through my sensations of the world, first and foremost, that my awareness, such as it is, actively engages with reality. Now, any healthy normal animal senses the world but we can't 'know' if they experience reality as we do since we can't have a conversation with them to arrive at agreement. But we humans can have this conversation and possibly agree that we can 'know' the world through sensation. We can even know what is 'I' through sensation. In fact, there is no other way to know 'I' except through sensation. Thought is symbolic representation, not direct sensing, so even though the thoughtful modality of regarding the world may be a far more reliable modality than sensation in predicting what might happen next, its very capacity for such accurate prediction is its biggest weakness, which is its capacity for error.
  • Sensation cannot be 'wrong' unless it is used to predict outcomes. Thought can be wrong for both predicting outcomes and for 'knowing' reality. Sensation alone can 'know' reality even though it is relatively unreliable, useless even, for making predictions.
  • If we prioritize our interests by placing predictability over pure knowing through sensation, then of course we will not value the 'knowledge' to be gained through sensation. But if we can switch the priorities - out of sheer curiosity perhaps - then we can enter a realm of knowledge through sensation that is unbelievably spectacular. Our bodies are 'made of' reality, and by methodically exercising our nascent capacity for self sensing, we can connect our knowing 'I' to reality directly. We will not be able to 'know' what it is that we are experiencing in the way we might wish, which is to be able to predict what will happen next or to represent to ourselves symbolically what we might experience when we turn our attention to that sensation. But we can arrive at a depth and breadth of 'knowing' that is utterly unprecedented in our lives by operating that modality.
  • One of the impressions that comes from a sustained practice of self sensing is a clearer feeling for what "I" is and why we have a word for that self-referential phenomenon, seemingly located somewhere behind our eyes and between our ears. The thing we call "I" or "me" depending on the context, turns out to be a moving point, a convergence vector for a variety of images, feelings and sensations. It is a reference point into which certain impressions flow and out of which certain impulses to act diverge and which may or may not animate certain muscle groups into action. Following this tricky exercise in attention and sensation, we can quickly see for ourselves that attention is more like a focused beam and awareness is more like a diffuse cloud, but both are composed of energy, and like all energy they vibrate, they oscillate with a certain frequency. That's it for now.
  • I loved the writer's efforts to find a fixed definition of "Incommensurability"; there was of course never a concrete meaning behind the word. Smoke and mirrors.
Weiye Loh

Models, Plain and Fancy - NYTimes.com - 0 views

  • Karl Smith argues that informal economic arguments — models in the sense of thought experiments, not necessarily backed by equations and/or data-crunching — deserve more respect from the profession.
  • misunderstandings in economics come about because people don’t have in their minds any intuitive notion of what it is they’re supposed to be modeling.
  • And Karl Smith is right: no way could Hume have published such a thing in a modern journal. So yes, simple intuitive stories are important, and deserve more credit.
  • ...1 more annotation...
  • You could argue that modern economics really began with David Hume’s Of the Balance of Trade, whose core is a gloriously clear thought experiment
Weiye Loh

Roger Pielke Jr.'s Blog: Global Temperature Trends - 0 views

  • My concern about the potential effects of human influences on the climate system is not a function of global average warming over a long period of time or of predictions of continued warming into the future.
  • what matters are the effects of human influences on the climate system on human and ecological scales, not at the global scale. No one experiences global average temperature and it is very poorly correlated with things that we do care about in specific places at specific times.
  • Consider the following thought experiment. Divide the world up into 1,000 grid boxes of equal area. Now imagine that the temperature in each of 500 of those boxes goes up by 20 degrees while the temperature in the other 500 goes down by 20 degrees. The net global change is exactly zero (because I made it so). However, the impacts would be enormous. Let's further say that the changes prescribed in my thought experiment are the direct consequence of human activity. Would we want to address those changes? Or would we say, ho hum, it all averages out globally, so no problem? The answer is obvious and is not a function of what happens at some global average scale, but of what happens at human and ecological scales. (A small numerical sketch of this averaging point follows this list.)
  • ...2 more annotations...
  • In the real world, the effects of increasing carbon dioxide on human and ecological scales are well established, and they include a biogeochemical effect on land ecosystems with subsequent effects on water and climate, as well as changes to the chemistry of the oceans. Is it possible that these effects are benign? Sure. Is it also possible that these effects have some negatives? Sure. These two factors alone would be sufficient for one to begin to ask questions about the worth of decarbonizing the global energy system. But greenhouse gas emissions also have a radiative effect that, in the real world, is thought to be a net warming, all else equal and over a global scale. However, if this effect were to be a net cooling, or even no net effect at the global scale, it would not change my views about a need to consider decarbonizing the energy system one bit. There is an effect -- or effects to be more accurate -- and these effects could be negative.
  • The debate over climate change has many people on both sides of the issue wrapped up in discussing global average temperature trends. I understand this as it is an icon with great political symbolism. It has proved a convenient political battleground, but the reality is that it should matter little to the policy case for decarbonization. What matters is that there is a human effect on the climate system and it could be negative with respect to things people care about. That is enough to begin asking whether we want to think about accelerating decarbonization of the global economy.
  •  
    one needs to know only two things about the science of climate change to begin asking whether accelerating decarbonization of the economy might be worth doing: Carbon dioxide has an influence on the climate system. This influence might well be negative for things many people care about. That is it. An actual decision to accelerate decarbonization and at what rate will depend on many other things, like costs and benefits of particular actions unrelated to climate and technological alternatives. In this post I am going to further explain my views, based on an interesting question posed in that earlier thread. What would my position be if it were to be shown, hypothetically, that the global average surface temperature was not warming at all, or in fact even cooling (over any relevant time period)? Would I then change my views on the importance of decarbonizing the global energy system?
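  • A minimal numerical sketch of the averaging point in the thought experiment above, written in Python. The box count and the 20-degree swings come straight from the post; everything else (variable names, the use of a plain list) is illustrative:

        # 500 grid boxes warm by 20 degrees; the other 500 cool by 20 degrees.
        changes = [20.0] * 500 + [-20.0] * 500

        # The global mean change: the "icon" the public debate fixates on.
        global_mean = sum(changes) / len(changes)

        # The mean local disruption: what people and ecosystems actually experience.
        mean_local_disruption = sum(abs(c) for c in changes) / len(changes)

        print(global_mean)            # 0.0  (zero by construction)
        print(mean_local_disruption)  # 20.0 (every box is massively disrupted)

    The global average is exactly zero while every single grid box sees a 20-degree change, which is precisely the gap between the global icon and impacts at human and ecological scales.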
Weiye Loh

LRB · Jim Holt · Smarter, Happier, More Productive - 0 views

  • There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.
  • The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.
  • But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.
  • ...6 more annotations...
  • Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic.
  • Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser.
  • Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.
  • The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.
  • It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise in avoiding it altogether.
  • empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’
Weiye Loh

Sociologist Harry Collins poses as a physicist. - By Jon Lackman - Slate Magazine - 0 views

  • British sociologist Harry Collins asked a scientist who specializes in gravitational waves to answer seven questions about the physics of these waves. Collins, who has made an amateur study of this field for more than 30 years but has never actually practiced it, also answered the questions himself. Then he submitted both sets of answers to a panel of judges who are themselves gravitational-wave researchers. The judges couldn't tell the impostor from one of their own. Collins argues that he is therefore as qualified as anyone to discuss this field, even though he can't conduct experiments in it.
  • The journal Nature predicted that the experiment would have a broad impact, writing that Collins could help settle the "science wars of the 1990s," "when sociologists launched what scientists saw as attacks on the very nature of science, and scientists responded in kind," accusing the sociologists of misunderstanding science. More generally, it could affect "the argument about whether an outsider, such as an anthropologist, can properly understand another group, such as a remote rural community." With this comment, Nature seemed to be saying that if a sociologist can understand physics, then anyone can understand anything.
  • It will be interesting to see if Collins' results can indeed be repeated in different situations. Meanwhile, his experiment is plenty interesting in itself. Just one of the judges succeeded in distinguishing Collins' answers from those of the trained experts. One threw up his hands. And the other seven declared Collins the physicist. He didn't simply do as well as the trained specialist—he did better, even though the test questions demanded technical answers. One sample answer from Collins gives you the flavor: "Since gravitational waves change the shape of spacetime and radio waves do not, the effect on an interferometer of radio waves can only be to mimic the effects of a gravitational wave, not reproduce them." (More details can be found in this paper Collins wrote with his collaborators.)
  • ...5 more annotations...
  • To be sure, a differently designed experiment would have presented more difficulty for Collins. If he'd chosen questions that involved math, they would have done him in
  • But many scientists consider themselves perfectly qualified to discuss topics for which they lack the underlying mathematical skills, as Collins noted when I talked to him. "You can be a great physicist and not know any mathematics," he said.
  • So, if Collins can talk gravitational waves as well as an insider, who cares if he doesn't know how to crunch the numbers? Alan Sokal does. The New York University physicist is famous for an experiment a decade ago that seemed to demonstrate the futility of laymen discussing science. In 1996, he tricked the top humanities journal Social Text into publishing as genuine scholarship a totally nonsensical paper that celebrated fashionable literary theory and then applied it to all manner of scientific questions. ("As Lacan suspected, there is an intimate connection between the external structure of the physical world and its inner psychological representation qua knot theory.") Sokal showed that, with a little flattery, laymen could be induced to swallow the most ridiculous of scientific canards—so why should we value their opinions on science as highly as scientists'?
  • Sokal doesn't think Collins has proved otherwise. When I reached him this week, he acknowledged that you don't need to practice science in order to understand it. But he maintains, as he put it to Nature, that in many science debates, "you need a knowledge of the field that is virtually, if not fully, at the level of researchers in the field," in order to participate. He elaborated: Say there are two scientists, X and Y. If you want to argue that X's theory was embraced over Y's, even though Y's is better, because the science community is biased against Y, then you had better be able to read and evaluate their theories yourself, mathematics included (or collaborate with someone who can). He has a point. Just because mathematics features little in the work of some gravitational-wave physicists doesn't mean it's a trivial part of the subject.
  • Even if Collins didn't demonstrate that he is qualified to pronounce on all of gravitational-wave physics, he did learn more of the subject than anyone may have thought possible. Sokal says he was shocked by Collins' store of knowledge: "He knows more about gravitational waves than I do!" Sokal admitted that Collins was already qualified to pronounce on a lot, and that with a bit more study, he would be the equal of a professional.
Weiye Loh

Open science: a future shaped by shared experience | Education | The Observer - 0 views

  • one day he took one of these – finding a mathematical proof about the properties of multidimensional objects – and put his thoughts on his blog. How would other people go about solving this conundrum? Would somebody else have any useful insights? Would mathematicians, notoriously competitive, be prepared to collaborate? "It was an experiment," he admits. "I thought it would be interesting to try." He called it the Polymath Project and it rapidly took on a life of its own. Within days, readers, including high-ranking academics, had chipped in vital pieces of information or new ideas. In just a few weeks, the number of contributors had reached more than 40 and a result was on the horizon. Since then, the joint effort has led to several papers published in journals under the collective pseudonym DHJ Polymath. It was an astonishing and unexpected result.
  • "If you set out to solve a problem, there's no guarantee you will succeed," says Gowers. "But different people have different aptitudes and they know different tricks… it turned out their combined efforts can be much quicker."
  • There are many interpretations of what open science means, with different motivations across different disciplines. Some are driven by the backlash against corporate-funded science, with its profit-driven research agenda. Others are internet radicals who take the "information wants to be free" slogan literally. Others want to make important discoveries more likely to happen. But for all their differences, the ambition remains roughly the same: to try and revolutionise the way research is performed by unlocking it and making it more public.
  • ...10 more annotations...
  • Jackson is a young bioscientist who, like many others, has discovered that the technologies used in genetics and molecular biology, once the preserve of only the most well-funded labs, are now cheap enough to allow experimental work to take place in their garages. For many, this means that they can conduct genetic experiments in a new way, adopting the so-called "hacker ethic" – the desire to tinker, deconstruct, rebuild.
  • The rise of this group is entertainingly documented in a new book by science writer Marcus Wohlsen, Biopunk (Current £18.99), which describes the parallels between today's generation of biological innovators and the rise of computer software pioneers of the 1980s and 1990s. Indeed, Bill Gates has said that if he were a teenager today, he would be working on biotechnology, not computer software.
  • open scientists suggest that it doesn't have to be that way. Their arguments are propelled by a number of different factors that are making transparency more viable than ever. The first and most powerful change has been the use of the web to connect people and collect information. The internet, now an indelible part of our lives, allows like-minded individuals to seek one another out and share vast amounts of raw data. Researchers can lay claim to an idea not by publishing first in a journal (a process that can take many months) but by sharing their work online in an instant. And while the rapidly decreasing cost of previously expensive technical procedures has opened up new directions for research, there is also increasing pressure for researchers to cut costs and deliver results. The economic crisis left many budgets in tatters and governments around the world are cutting back on investment in science as they try to balance the books. Open science can, sometimes, make the process faster and cheaper, showing what one advocate, Cameron Neylon, calls "an obligation and responsibility to the public purse".
  • "The litmus test of openness is whether you can have access to the data," says Dr Rufus Pollock, a co-founder of the Open Knowledge Foundation, a group that promotes broader access to information and data. "If you have access to the data, then anyone can get it, use it, reuse it and redistribute it… we've always built on the work of others, stood on the shoulders of giants and learned from those who have gone before."
  • moves are afoot to disrupt the closed world of academic journals and make high-level teaching materials available to the public. The Public Library of Science, based in San Francisco, is working to make journals more freely accessible
  • it's more than just politics at stake – it's also a fundamental right to share knowledge, rather than hide it. The best example of open science in action, he suggests, is the Human Genome Project, which successfully mapped our DNA and then made the data public. In doing so, it outflanked J Craig Venter's proprietary attempt to patent the human genome, opening up the very essence of human life for science, rather than handing our biological information over to corporate interests.
  • the rise of open science does not please everyone. Critics have argued that while it benefits those at either end of the scientific chain – the well-established at the top of the academic tree or the outsiders who have nothing to lose – it hurts those in the middle. Most professional scientists rely on the current system for funding and reputation. Others suggest it is throwing out some of the most important elements of science and making deep, long-term research more difficult.
  • Open science proponents say that they do not want to make the current system a thing of the past, but that it shouldn't be seen as immutable either. In fact, they say, the way most people conceive of science – as a highly specialised academic discipline conducted by white-coated professionals in universities or commercial laboratories – is a very modern construction. It is only over the last century that scientific disciplines became industrialised and compartmentalised.
  • open scientists say they don't want to throw scientists to the wolves: they just want to help answer questions that, in many cases, are seen as insurmountable.
  • "Some people, very straightforwardly, said that they didn't like the idea because it undermined the concept of the romantic, lone genius." Even the most dedicated open scientists understand that appeal. "I do plan to keep going at them," he says of collaborative projects. "But I haven't given up on solitary thinking about problems entirely."
Weiye Loh

It's Only A Theory: From the 2010 APA in Boston: Neuropsychology and ethics - 0 views

  • Joshua Greene from Harvard, known for his research on "neuroethics," the neurological underpinnings of ethical decision making in humans. The title of Greene's talk was "Beyond point-and-shoot morality: why cognitive neuroscience matters for ethics."
  • What Greene is interested in is finding out what factors moral judgment is sensitive to, and whether it is sensitive to the relevant factors. He presented his dual process theory of morality. In this respect, he proposed an analogy with a camera. Cameras have automatic (point and shoot) settings as well as manual controls. The first mode is good enough for most purposes, the second allows the user to fine tune the settings more carefully. The two modes allow for a nice combination of efficiency and flexibility.
  • The idea is that the human brain also has two modes, a set of efficient automatic responses and a manual mode that makes us more flexible in response to non-standard situations. The non-moral example is our response to potential threats. Here the amygdala is very fast and efficient at focusing on potential threats (e.g., the outline of eyes in the dark), even when there actually is no threat (it's a controlled experiment in a lab, no lurking predator around).
  • ...12 more annotations...
  • Delayed gratification illustrates the interaction between the two modes. The brain is attracted by immediate rewards, no matter what kind. However, when larger rewards are eventually going to become available, other parts of the brain come into play to override (sometimes) the immediate urge.
  • Greene's research shows that our automatic setting is "Kantian," meaning that our intuitive responses are deontological, rule driven. The manual setting, on the other hand, tends to be more utilitarian / consequentialist. Accordingly, the first mode involves emotional areas of the brain, the second one involves more cognitive areas.
  • The evidence comes from the (in)famous trolley dilemma and its many variations.
  • when people refuse to intervene in the footbridge (as opposed to the lever) version of the dilemma, they do so because of a strong emotional response, which contradicts the otherwise utilitarian calculus they make when considering the lever version.
  • psychopaths turn out to be more utilitarian than normal subjects - presumably not because consequentialism is inherently pathological, but because their emotional responses are stunted. Mood also affects the results, with people exposed to comedy (to enhance mood), for instance, more likely to say that it is okay to push the guy off the footbridge.
  • In a more recent experiment, subjects were asked to say which action carried the better consequences, which made them feel worse, and which was overall morally acceptable. The idea was to separate the cognitive, emotional and integrative aspects of moral decision making. Predictably, activity in the amygdala correlated with deontological judgment, activity in more cognitive areas was associated with utilitarianism, and different brain regions became involved in integrating the two.
  • Another recent experiment used visual vs. verbal descriptions of moral dilemmas. Turns out that more visual people tend to behave emotionally / deontologically, while more verbal people are more utilitarian.
  • studies show that interfering with moral judgment by engaging subjects with a cognitive task slows down (though it does not reverse) utilitarian judgment, but has no effect on deontological judgment. Again, in agreement with the conclusion that the first type of modality is the result of cognition, the latter of emotion.
  • Nice to know, by the way, that when experimenters controlled for "real world expectations" that people have about trolleys, or when they used more realistic scenarios than trolleys and bridges, the results don't vary. In other words, trolley thought experiments are actually informative, contrary to popular criticisms.
  • What factors affect people's decision making in moral judgment? The main one is proximity, with people feeling much stronger obligations if they are present to the event posing the dilemma, or even relatively near (a disaster happens in a nearby country), as opposed to when they are far (a country on the other side of the world).
  • Greene's general conclusion is that neuroscience matters to ethics because it reveals the hidden mechanisms of human moral decision making. However, he says this is interesting to philosophers because it may lead to questioning ethical theories that are implicitly or explicitly based on such judgments. But neither philosophical deontology nor consequentialism is in fact based on common moral judgments, it seems to me. They are the result of explicit analysis. (Though Greene raises the possibility that some philosophers engage in rationalizing, rather than reasoning, as in Kant's famously convoluted idea that masturbation is wrong because one is using oneself as a means to an end...)
  • this is not to say that understanding moral decision making in humans isn't interesting or in fact even helpful in real life cases. An example of the latter is the common moral condemnation of incest, which is an emotional reaction that probably evolved to avoid genetically diseased offspring. It follows that science can tell us that there is nothing morally wrong in cases of incest when precautions have been taken to avoid pregnancy (and assuming psychological reactions are also accounted for). Greene puts this in terms of science helping us to transform difficult ought questions into easier ought questions.
Weiye Loh

Arsenic bacteria - a post-mortem, a review, and some navel-gazing | Not Exactly Rocket ... - 0 views

  • It was the big news that wasn't. Hyperbolic claims about the possible discovery of alien life, or a second branch of life on Earth, turned out to be nothing more than bacteria that can thrive on arsenic, using it in place of phosphorus in their DNA and other molecules. But after the initial layers of hype were peeled away, even this extraordinary ...
  • This is a chronological roundup of the criticism against the science in the paper itself, ending with some personal reflections on my own handling of the story (skip to Friday, December 10th for that bit).
  • Thursday, December 2nd: Felisa Wolfe-Simon published a paper in Science, claiming to have found bacteria in California’s Mono Lake that can grow using arsenic instead of phosphorus. Given that phosphorus is meant to be one of six irreplaceable elements, this would have been a big deal, not least because the bacteria apparently used arsenic to build the backbones of their DNA molecules.
  • ...14 more annotations...
  • In my post, I mentioned some caveats. Wolfe-Simon isolated the arsenic-loving strain, known as GFAJ-1, by growing Mono Lake bacteria in ever-increasing concentrations of arsenic while diluting out the phosphorus. It is possible that the bacteria’s arsenic molecules were an adaptation to the harsh environments within the experiment, rather than Mono Lake itself. More importantly, there were still detectable levels of phosphorus left in the cells at the end of the experiment, although Wolfe-Simon claimed that the bacteria shouldn’t have been able to grow on such small amounts.
  • signs emerged that NASA weren’t going to engage with the criticisms. Dwayne Brown, their senior public affairs officer, highlighted the fact that the paper was published in one of the “most prestigious scientific journals” and deemed it inappropriate to debate the science using the same media and bloggers who they relied on for press coverage of the science. Wolfe-Simon herself tweeted that “discussion about scientific details MUST be within a scientific venue so that we can come back to the public with a unified understanding.”
  • Jonathan Eisen says that “they carried out science by press release and press conference” and “are now hypocritical if they say that the only response should be in the scientific literature.” David Dobbs calls the attitude “a return to pre-Enlightenment thinking”, and rightly noted that “Rosie Redfield is a peer, and her blog is peer review”.
  • Chris Rowan agreed, saying that what happens after publication is what he considers to be "real peer review". Rowan said, "The pre-publication stuff is just a quality filter, a check that the paper is not obviously wrong – and an imperfect filter at that. The real test is what happens in the months and years after publication." Grant Jacobs and others post similar thoughts, while Nature and the Columbia Journalism Review both cover the fracas.
  • Jack Gilbert at the University of Chicago said that impatient though he is, peer-reviewed journals are the proper forum for criticism. Others were not so kind. At the Guardian, Martin Robbins says that “at almost every stage of this story the actors involved were collapsing under the weight of their own slavish obedience to a fundamentally broken… well… ’system’” And Ivan Oransky noted that NASA failed to follow its own code of conduct when announcing the study.
  • Dr Isis said, "If question remains about the veracity of these authors' findings, then the only thing that is going to answer that doubt is data. Data cannot be generated by blog discussion… Talking about digging a ditch never got it dug."
  • it is astonishing how quickly these events unfolded and the sheer number of bloggers and media outlets that became involved in the criticism. This is indeed a brave new world, and one in which we are all the infamous Third Reviewer.
  • I tried to quell the hype around the study as best I could. I had the paper and I think that what I wrote was a fair representation of it. But, of course, that's not necessarily enough. I've argued before that journalists should not be merely messengers – we should make the best possible efforts to cut through what's being said in an attempt to uncover what's actually true. Arguably, that didn't happen, although to clarify, I am not saying that the paper is rubbish or untrue. Despite the criticisms, I want to see the authors respond in a thorough way or to see another lab attempt to replicate the experiments before jumping to conclusions.
  • the sheer amount of negative comment indicates that I could have been more critical of the paper in my piece. Others have been supportive in suggesting that this was more egg on the face of the peer reviewers, and indeed several practicing scientists took the findings at face value, speculating about everything from the implications for chemotherapy to whether the bacteria have special viruses. The counter-argument, which I have no good retort to, is that peer review is no guarantee of quality, and that writers should be able to see through the fog of whatever topic they write about.
  • my response was that we should expect people to make reasonable efforts to uncover truth and be skeptical, while appreciating that people can and will make mistakes.
  • it comes down to this: did I do enough? I was certainly cautious. I said that “there is room for doubt” and I brought up the fact that the arsenic-loving bacteria still contain measurable levels of phosphorus. But I didn’t run the paper past other sources for comment, which I typically do it for stories that contain extraordinary claims. There was certainly plenty of time to do so here and while there were various reasons that I didn’t, the bottom line is that I could have done more. That doesn’t always help, of course, but it was an important missed step. A lesson for next time.
  • I do believe that if you're going to try to hold your profession to a higher standard, you have to be honest and open when you've made mistakes yourself. I also think that if you cover a story that turns out to be a bit dodgy, you have a certain responsibility in covering the follow-up
  • A basic problem with this is the embargo. Specifically, journalists get early access, while peers – other specialists in the field – do not. It means that the journalist, like yourself, can rely only on the original authors, with no way of getting other views on the findings. And it means that peers can't write about the paper when the journalists (who, inevitably, produce positive-only coverage due to the lack of other viewpoints) do, but can voice their views only after they've been able to digest the paper and formulate a response.
  • No, that’s not true. The embargo doens’t preclude journalists from sending papers out to other authors for review and comment. I do this a lot and I have been critical about new papers as a result, but that’s the step that I missed for this story.
Weiye Loh

Religion: Faith in science : Nature News - 0 views

  • The Templeton Foundation claims to be a friend of science. So why does it make so many researchers uneasy?
  • With a current endowment estimated at US$2.1 billion, the organization continues to pursue Templeton's goal of building bridges between science and religion. Each year, it doles out some $70 million in grants, more than $40 million of which goes to research in fields such as cosmology, evolutionary biology and psychology.
  • however, many scientists find it troubling — and some see it as a threat. Jerry Coyne, an evolutionary biologist at the University of Chicago, Illinois, calls the foundation "sneakier than the creationists". Through its grants to researchers, Coyne alleges, the foundation is trying to insinuate religious values into science. "It claims to be on the side of science, but wants to make faith a virtue," he says.
  • ...25 more annotations...
  • But other researchers, both with and without Templeton grants, say that they find the foundation remarkably open and non-dogmatic. "The Templeton Foundation has never in my experience pressured, suggested or hinted at any kind of ideological slant," says Michael Shermer, editor of Skeptic, a magazine that debunks pseudoscience, who was hired by the foundation to edit an essay series entitled 'Does science make belief in God obsolete?'
  • The debate highlights some of the challenges facing the Templeton Foundation after the death of its founder in July 2008, at the age of 95.
  • With the help of a $528-million bequest from Templeton, the foundation has been radically reframing its research programme. As part of that effort, it is reducing its emphasis on religion to make its programmes more palatable to the broader scientific community. Like many of his generation, Templeton was a great believer in progress, learning, initiative and the power of human imagination — not to mention the free-enterprise system that allowed him, a middle-class boy from Winchester, Tennessee, to earn billions of dollars on Wall Street. The foundation accordingly allocates 40% of its annual grants to programmes with names such as 'character development', 'freedom and free enterprise' and 'exceptional cognitive talent and genius'.
  • Unlike most of his peers, however, Templeton thought that the principles of progress should also apply to religion. He described himself as "an enthusiastic Christian" — but was also open to learning from Hinduism, Islam and other religious traditions. Why, he wondered, couldn't religious ideas be open to the type of constructive competition that had produced so many advances in science and the free market?
  • That question sparked Templeton's mission to make religion "just as progressive as medicine or astronomy".
  • Early Templeton prizes had nothing to do with science: the first went to the Catholic missionary Mother Teresa of Calcutta in 1973.
  • By the 1980s, however, Templeton had begun to realize that fields such as neuroscience, psychology and physics could advance understanding of topics that are usually considered spiritual matters — among them forgiveness, morality and even the nature of reality. So he started to appoint scientists to the prize panel, and in 1985 the award went to a research scientist for the first time: Alister Hardy, a marine biologist who also investigated religious experience. Since then, scientists have won with increasing frequency.
  • "There's a distinct feeling in the research community that Templeton just gives the award to the most senior scientist they can find who's willing to say something nice about religion," says Harold Kroto, a chemist at Florida State University in Tallahassee, who was co-recipient of the 1996 Nobel Prize in Chemistry and describes himself as a devout atheist.
  • Yet Templeton saw scientists as allies. They had what he called "the humble approach" to knowledge, as opposed to the dogmatic approach. "Almost every scientist will agree that they know so little and they need to learn," he once said.
  • Templeton wasn't interested in funding mainstream research, says Barnaby Marsh, the foundation's executive vice-president. Templeton wanted to explore areas — such as kindness and hatred — that were not well known and did not attract major funding agencies. Marsh says Templeton wondered, "Why is it that some conflicts go on for centuries, yet some groups are able to move on?"
  • Templeton's interests gave the resulting list of grants a certain New Age quality (See Table 1). For example, in 1999 the foundation gave $4.6 million for forgiveness research at the Virginia Commonwealth University in Richmond, and in 2001 it donated $8.2 million to create an Institute for Research on Unlimited Love (that is, altruism and compassion) at Case Western Reserve University in Cleveland, Ohio. "A lot of money wasted on nonsensical ideas," says Kroto. Worse, says Coyne, these projects are profoundly corrupting to science, because the money tempts researchers into wasting time and effort on topics that aren't worth it. If someone is willing to sell out for a million dollars, he says, "Templeton is there to oblige him".
  • At the same time, says Marsh, the 'dean of value investing', as Templeton was known on Wall Street, had no intention of wasting his money on junk science or unanswerables such as whether God exists. So before pursuing a scientific topic he would ask his staff to get an assessment from appropriate scholars — a practice that soon evolved into a peer-review process drawing on experts from across the scientific community.
  • Because Templeton didn't like bureaucracy, adds Marsh, the foundation outsourced much of its peer review and grant giving. In 1996, for example, it gave $5.3 million to the American Association for the Advancement of Science (AAAS) in Washington DC, to fund efforts that work with evangelical groups to find common ground on issues such as the environment, and to get more science into seminary curricula. In 2006, Templeton gave $8.8 million towards the creation of the Foundational Questions Institute (FQXi), which funds research on the origins of the Universe and other fundamental issues in physics, under the leadership of Anthony Aguirre, an astrophysicist at the University of California, Santa Cruz, and Max Tegmark, a cosmologist at the Massachusetts Institute of Technology in Cambridge.
  • But external peer review hasn't always kept the foundation out of trouble. In the 1990s, for example, Templeton-funded organizations gave book-writing grants to Guillermo Gonzalez, an astrophysicist now at Grove City College in Pennsylvania, and William Dembski, a philosopher now at the Southwestern Baptist Theological Seminary in Fort Worth, Texas. After obtaining the grants, both later joined the Discovery Institute — a think-tank based in Seattle, Washington, that promotes intelligent design. Other Templeton grants supported a number of college courses in which intelligent design was discussed. Then, in 1999, the foundation funded a conference at Concordia University in Mequon, Wisconsin, in which intelligent-design proponents confronted critics. Those awards became a major embarrassment in late 2005, during a highly publicized court fight over the teaching of intelligent design in schools in Dover, Pennsylvania. A number of media accounts of the intelligent design movement described the Templeton Foundation as a major supporter — a charge that Charles Harper, then senior vice-president, was at pains to deny.
  • Some foundation officials were initially intrigued by intelligent design, Harper told The New York Times. But disillusionment set in — and Templeton funding stopped — when it became clear that the theory was part of a political movement from the Christian right wing, not science. Today, the foundation website explicitly warns intelligent-design researchers not to bother submitting proposals: they will not be considered.
  • Avowedly antireligious scientists such as Coyne and Kroto see the intelligent-design imbroglio as a symptom of their fundamental complaint that religion and science should not mix at all. "Religion is based on dogma and belief, whereas science is based on doubt and questioning," says Coyne, echoing an argument made by many others. "In religion, faith is a virtue. In science, faith is a vice." The purpose of the Templeton Foundation is to break down that wall, he says — to reconcile the irreconcilable and give religion scholarly legitimacy.
  • Foundation officials insist that this is backwards: questioning is their reason for being. Religious dogma is what they are fighting. That does seem to be the experience of many scientists who have taken Templeton money. During the launch of FQXi, says Aguirre, "Max and I were very suspicious at first. So we said, 'We'll try this out, and the minute something smells, we'll cut and run.' It never happened. The grants we've given have not been connected with religion in any way, and they seem perfectly happy about that."
  • John Cacioppo, a psychologist at the University of Chicago, also had concerns when he started a Templeton-funded project in 2007. He had just published a paper with survey data showing that religious affiliation had a negative correlation with health among African-Americans — the opposite of what he assumed the foundation wanted to hear. He was bracing for a protest when someone told him to look at the foundation's website. They had displayed his finding on the front page. "That made me relax a bit," says Cacioppo.
  • Yet, even scientists who give the foundation high marks for openness often find it hard to shake their unease. Sean Carroll, a physicist at the California Institute of Technology in Pasadena, is willing to participate in Templeton-funded events — but worries about the foundation's emphasis on research into 'spiritual' matters. "The act of doing science means that you accept a purely material explanation of the Universe, that no spiritual dimension is required," he says.
  • It hasn't helped that Jack Templeton is much more politically and religiously conservative than his father was. The foundation shows no obvious rightwards trend in its grant-giving and other activities since John Templeton's death — and it is barred from supporting political activities by its legal status as a not-for-profit corporation. Still, many scientists find it hard to trust an organization whose president has used his personal fortune to support right-leaning candidates and causes such as the 2008 ballot initiative that outlawed gay marriage in California.
  • Scientists' discomfort with the foundation is probably inevitable in the current political climate, says Scott Atran, an anthropologist at the University of Michigan in Ann Arbor. The past 30 years have seen the growing power of the Christian religious right in the United States, the rise of radical Islam around the world, and religiously motivated terrorist attacks such as those in the United States on 11 September 2001. Given all that, says Atran, many scientists find it almost impossible to think of religion as anything but fundamentalism at war with reason.
  • the foundation has embraced the theme of 'science and the big questions' — an open-ended list that includes topics such as 'Does the Universe have a purpose?'
  • Towards the end of Templeton's life, says Marsh, he became increasingly concerned that this reaction was getting in the way of the foundation's mission: that the word 'religion' was alienating too many good scientists.
  • The peer-review and grant-making system has also been revamped: whereas in the past the foundation ran an informal mix of projects generated by Templeton and outside grant seekers, the system is now organized around an annual list of explicit funding priorities.
  • The foundation is still a work in progress, says Jack Templeton — and it always will be. "My father believed," he says, "we were all called to be part of an ongoing creative process. He was always trying to make people think differently." "And he always said, 'If you're still doing today what you tried to do two years ago, then you're not making progress.'" 
Weiye Loh

Skepticblog » Cognitive Biases and Handedness - 0 views

  • A recent study concerns the bias of being left or right-handed. Our handedness affects our judgments regarding the quality and "goodness" of things in our environment. There is a clear language bias favoring the dominant right-handers: "right" is correct, while left-handed compliments are undesirable, for example. It turns out this is not mere cultural bias, but reflects an underlying cognitive bias. For example: In experiments by psychologist Daniel Casasanto, when people were asked which of two products to buy, which of two job applicants to hire, or which of two alien creatures looks more intelligent, right-handers tended to choose the product, person, or creature they saw on their right, but most left-handers chose the one on their left.
  • when put into a situation where we have to make a judgment based mostly on our gut feelings or intuition, biases will tend to come out. (It is probably difficult for most people to come up with an evidence-based system for assessing which alien looks more intelligent.) It is possible the common evolved sensibilities will dominate in such situations – most people, for example, might pick the alien with the larger eyes. But that is not what the researchers found – simple handedness was the determining factor.
  • This is a subconscious bias. If a subject were asked why they chose the alien on the right, they would probably not say, “because I am right-handed and have an inherent bias toward things in the right side of my visual field.” Rather, they would justify their judgment post-hoc – pointing out features that had nothing to do with their actual decision-making, but giving the illusion of a rational choice.
  • ...2 more annotations...
  • Casasanto found, in the new study, that these biases are also easily manipulated. First he studied stroke patients who were paralyzed on one side of the body or the other. If a right-hander were weak on the left side (as a control), this had no effect on their choice. But if their right side were weak, then their preference shifted to their intact left side. This, however, can be due to damage to the brain, rather than the fact that they are now obligate left-handers. So he did a follow-up experiment in which subjects were made to perform a task with a ski-glove on one hand. If right-handers wore the glove on their left hand, again this had no effect on their choices. But if they wore it on their right hand while performing tasks for as little as 12 minutes, then their cognitive bias shifted to that of a left-hander.
  • Casasanto observes: ‘People generally think their judgments are rational, and their concepts are stable. But if wearing a glove for a few minutes can reverse people’s usual judgments about what’s good and bad, perhaps the mind is more malleable than we thought.’
  •  
    believers generally operate under the paradigm of seeing is believing, while skeptics operate under the paradigm that often believing is seeing.
Weiye Loh

Don't dumb me down | Science | The Guardian - 0 views

  • Science stories usually fall into three families: wacky stories, scare stories and "breakthrough" stories.
  • these stories are invariably written by the science correspondents, and hotly followed, to universal jubilation, with comment pieces, by humanities graduates, on how bonkers and irrelevant scientists are.
  • A close relative of the wacky story is the paradoxical health story. Every Christmas and Easter, regular as clockwork, you can read that chocolate is good for you (www.badscience.net/?p=67), just like red wine is, and with the same monotonous regularity
  • ...19 more annotations...
  • At the other end of the spectrum, scare stories are - of course - a stalwart of media science. Based on minimal evidence and expanded with poor understanding of its significance, they help perform the most crucial function for the media, which is selling you, the reader, to their advertisers. The MMR disaster was a fantasy entirely of the media's making (www.badscience.net/?p=23), which failed to go away. In fact the Daily Mail is still publishing hysterical anti-immunisation stories, including one calling the pneumococcus vaccine a "triple jab", presumably because they misunderstood that the meningitis, pneumonia, and septicaemia it protects against are all caused by the same pneumococcus bacteria (www.badscience.net/?p=118).
  • people periodically come up to me and say, isn't it funny how that Wakefield MMR paper turned out to be Bad Science after all? And I say: no. The paper always was and still remains a perfectly good small case series report, but it was systematically misrepresented as being more than that, by media that are incapable of interpreting and reporting scientific data.
  • Once journalists get their teeth into what they think is a scare story, trivial increases in risk are presented, often out of context, but always using one single way of expressing risk, the "relative risk increase", that makes the danger appear disproportionately large (www.badscience.net/?p=8). (A worked example of relative versus absolute risk follows at the end of this list.)
  • The media obsession with "new breakthroughs": a more subtly destructive category of science story. It's quite understandable that newspapers should feel it's their job to write about new stuff. But in the aggregate, these stories sell the idea that science, and indeed the whole empirical world view, is only about tenuous, new, hotly-contested data
  • Articles about robustly-supported emerging themes and ideas would be more stimulating, of course, than most single experimental results, and these themes are, most people would agree, the real developments in science. But they emerge over months and several bits of evidence, not single rejiggable press releases. Often, a front page science story will emerge from a press release alone, and the formal academic paper may never appear, or appear much later, and then not even show what the press reports claimed it would (www.badscience.net/?p=159).
  • there was an interesting essay in the journal PLoS Medicine, about how most brand new research findings will turn out to be false (www.tinyurl.com/ceq33). It predictably generated a small flurry of ecstatic pieces from humanities graduates in the media, along the lines of science is made-up, self-aggrandising, hegemony-maintaining, transient fad nonsense; and this is the perfect example of the parody hypothesis that we'll see later. Scientists know how to read a paper. That's what they do for a living: read papers, pick them apart, pull out what's good and bad.
  • Scientists never said that tenuous small new findings were important headline news - journalists did.
  • there is no useful information in most science stories. A piece in the Independent on Sunday from January 11 2004 suggested that mail-order Viagra is a rip-off because it does not contain the "correct form" of the drug. I don't use the stuff, but there were 1,147 words in that piece. Just tell me: was it a different salt, a different preparation, a different isomer, a related molecule, a completely different drug? No idea. No room for that one bit of information.
  • Remember all those stories about the danger of mobile phones? I was on holiday at the time, and not looking things up obsessively on PubMed; but off in the sunshine I must have read 15 newspaper articles on the subject. Not one told me what the experiment flagging up the danger was. What was the exposure, the measured outcome, was it human or animal data? Figures? Anything? Nothing. I've never bothered to look it up for myself, and so I'm still as much in the dark as you.
  • Because papers think you won't understand the "science bit", all stories involving science must be dumbed down, leaving pieces without enough content to stimulate the only people who are actually going to read them - that is, the people who know a bit about science.
  • Compare this with the book review section, in any newspaper. The more obscure references to Russian novelists and French philosophers you can bang in, the better writer everyone thinks you are. Nobody dumbs down the finance pages.
  • Statistics are what causes the most fear for reporters, and so they are usually just edited out, with interesting consequences. Because science isn't about something being true or not true: that's a humanities graduate parody. It's about the error bar, statistical significance, it's about how reliable and valid the experiment was, it's about coming to a verdict, about a hypothesis, on the back of lots of bits of evidence.
  • science journalists somehow don't understand the difference between the evidence and the hypothesis. The Times's health editor Nigel Hawkes recently covered an experiment which showed that having younger siblings was associated with a lower incidence of multiple sclerosis. MS is caused by the immune system turning on the body. "This is more likely to happen if a child at a key stage of development is not exposed to infections from younger siblings, says the study." That's what Hawkes said. Wrong! That's the "Hygiene Hypothesis", that's not what the study showed: the study just found that having younger siblings seemed to be somewhat protective against MS: it didn't say, couldn't say, what the mechanism was, like whether it happened through greater exposure to infections. He confused evidence with hypothesis (www.badscience.net/?p=112), and he is a "science communicator".
  • how do the media work around their inability to deliver scientific evidence? They use authority figures, the very antithesis of what science is about, as if they were priests, or politicians, or parent figures. "Scientists today said ... scientists revealed ... scientists warned." And if they want balance, you'll get two scientists disagreeing, although with no explanation of why (an approach at its most dangerous with the myth that scientists were "divided" over the safety of MMR). One scientist will "reveal" something, and then another will "challenge" it.
  • The danger of authority figure coverage, in the absence of real evidence, is that it leaves the field wide open for questionable authority figures to waltz in. Gillian McKeith, Andrew Wakefield, Kevin Warwick and the rest can all get a whole lot further, in an environment where their authority is taken as read, because their reasoning and evidence is rarely publicly examined.
  • it also reinforces the humanities graduate journalists' parody of science, for which we now have all the ingredients: science is about groundless, incomprehensible, didactic truth statements from scientists, who themselves are socially powerful, arbitrary, unelected authority figures. They are detached from reality: they do work that is either wacky or dangerous, but either way, everything in science is tenuous, contradictory and, most ridiculously, "hard to understand".
  • This misrepresentation of science is a direct descendant of the reaction, in the Romantic movement, against the birth of science and empiricism more than 200 years ago; it's exactly the same paranoid fantasy as Mary Shelley's Frankenstein, only not as well written. We say descendant, but of course, the humanities haven't really moved forward at all, except to invent cultural relativism, which exists largely as a pooh-pooh reaction against science. And humanities graduates in the media, who suspect themselves to be intellectuals, desperately need to reinforce the idea that science is nonsense: because they've denied themselves access to the most significant developments in the history of western thought for 200 years, and secretly, deep down, they're angry with themselves over that.
  • I had a good-spirited row with an eminent science journalist, who kept telling me that scientists needed to face up to the fact that they had to get better at communicating to a lay audience. She is a humanities graduate. "Since you describe yourself as a science communicator," I would invariably say, to the sound of derisory laughter: "isn't that your job?" But no, for there is a popular and grand idea about, that scientific ignorance is a useful tool: if even they can understand it, they think to themselves, the reader will. What kind of a communicator does that make you?
  • Science is done by scientists, who write it up. Then a press release is written by a non-scientist, who runs it by their non-scientist boss, who then sends it to journalists without a science education who try to convey difficult new ideas to an audience of either lay people, or more likely - since they'll be the ones interested in reading the stuff - people who know their way around a t-test a lot better than any of these intermediaries. Finally, it's edited by a whole team of people who don't understand it. You can be sure that at least one person in any given "science communication" chain is just juggling words about on a page, without having the first clue what they mean, pretending they've got a proper job, their pens all lined up neatly on the desk.
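To make the "relative risk increase" point above concrete, here is a minimal arithmetic sketch in Python. The rates are entirely made up for illustration (a baseline of 2 in 10,000 rising to 3 in 10,000 under some hypothetical exposure): the same tiny shift reads as a dramatic relative increase but a negligible absolute one.

```python
# Made-up numbers, purely illustrative: baseline risk of 2 in 10,000,
# rising to 3 in 10,000 under some hypothetical exposure.
baseline = 2 / 10_000
exposed = 3 / 10_000

relative_increase = (exposed - baseline) / baseline  # 0.5 -> headline: "risk up 50%!"
absolute_increase = exposed - baseline               # 0.0001 -> 1 extra case per 10,000

print(f"Relative risk increase: {relative_increase:.0%}")
print(f"Absolute risk increase: {absolute_increase * 10_000:.0f} extra case per 10,000 people")
```

Both figures describe the same data; the choice of which one to put in a headline is what drives the distortion described above.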
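And since the list above leans on "the error bar, statistical significance", here is a minimal sketch of what those phrases cash out to in practice: a two-sample t-test on synthetic data. All numbers (group means, spread, sample sizes) are invented, and numpy and scipy are assumed to be available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
# Synthetic measurements for two groups (invented effect size and noise).
control = rng.normal(loc=10.0, scale=2.0, size=40)
treatment = rng.normal(loc=11.0, scale=2.0, size=40)

# The "error bar": a 95% confidence interval half-width on each group mean.
for name, x in [("control", control), ("treatment", treatment)]:
    half_width = stats.sem(x) * stats.t.ppf(0.975, df=len(x) - 1)
    print(f"{name}: mean {x.mean():.2f} +/- {half_width:.2f}")

# "Statistical significance": is the difference in means larger than the
# noise makes plausible under the null hypothesis of no difference?
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 is the usual convention
```

The point is only that "significant" is a graded statement about evidence for a hypothesis, not a binary true/false verdict, which is exactly the nuance said to be edited out.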
Weiye Loh

The Way We Live Now - I Tweet, Therefore I Am - NYTimes.com - 0 views

  • Each Twitter post seemed a tacit referendum on who I am, or at least who I believe myself to be. The grocery-store episode telegraphed that I was tuned in to the Seinfeldian absurdities of life; my concern about women’s victimization, however sincere, signaled that I also have a soul. Together they suggest someone who is at once cynical and compassionate, petty yet deep. Which, in the end, I’d say, is pretty accurate.
  • Distilling my personality provided surprising focus, making me feel stripped to my essence. It forced me, for instance, to pinpoint the dominant feeling as I sat outside with my daughter listening to E.B. White. Was it my joy at being a mother? Nostalgia for my own childhood summers? The pleasures of listening to the author’s quirky, underinflected voice? Each put a different spin on the occasion, of who I was within it. Yet the final decision (“Listening to E.B. White’s ‘Trumpet of the Swan’ with Daisy. Slow and sweet.”) was not really about my own impressions: it was about how I imagined — and wanted — others to react to them. That gave me pause. How much, I began to wonder, was I shaping my Twitter feed, and how much was Twitter shaping me?
  • sociologist Erving Goffman famously argued that all of life is performance: we act out a role in every interaction, adapting it based on the nature of the relationship or context at hand. Twitter has extended that metaphor to include aspects of our experience that used to be considered off-set: eating pizza in bed, reading a book in the tub, thinking a thought anywhere, flossing. Effectively, it makes the greasepaint permanent, blurring the lines not only between public and private but also between the authentic and contrived self. If all the world was once a stage, it has now become a reality TV show: we mere players are not just aware of the camera; we mug for it.
  • Second Life, Facebook, MySpace, Twitter — has shifted not only how we spend our time but also how we construct identity. For her coming book, “Alone Together,” Sherry Turkle, a professor at M.I.T., interviewed more than 400 children and parents about their use of social media and cellphones. Among young people especially she found that the self was increasingly becoming externally manufactured rather than internally developed: a series of profiles to be sculptured and refined in response to public opinion. “On Twitter or Facebook you’re trying to express something real about who you are,” she explained. “But because you’re also creating something for others’ consumption, you find yourself imagining and playing to your audience more and more. So those moments in which you’re supposed to be showing your true self become a performance. Your psychology becomes a performance.” Referring to “The Lonely Crowd,” the landmark description of the transformation of the American character from inner- to outer-directed, Turkle added, “Twitter is outer-directedness cubed.”
  • when every thought is externalized, what becomes of insight? When we reflexively post each feeling, what becomes of reflection? When friends become fans, what happens to intimacy? The risk of the performance culture, of the packaged self, is that it erodes the very relationships it purports to create, and alienates us from our own humanity.
  • I am trying to gain some perspective on the perpetual performer’s self-consciousness. That involves trying to sort out the line between person and persona, the public and private self.
  • THE WAY WE LIVE NOW: I Tweet, Therefore I Am
Weiye Loh

The X Factor of Economics - People - NYTimes.com - 0 views

  • generally speaking, economists who thought it was a good idea at the time think it worked, and economists who thought otherwise beg to differ. And both sides make their cases with plenty of hard numbers.
  • Why do economists argue at all? Given that Fed members and economists are looking at the same data, and given the reams of evidence accumulated over decades — not to mention a few centuries of great minds, great theories and thick books that preceded this crisis — why isn’t a right answer self-evident?
  • the limits of economics is a subject that many in the field have been discussing for years, in print, in discussions with each other, and, in the case of Robert Solow, Nobel Prize winner and M.I.T. professor emeritus, with graduate students. “I talk about what it is about economics and economic life that leads to differences of opinion,” Mr. Solow said. “One point I always make to my graduate students is, avoid sound bites. Never sound more certain than you are.”
  • That the world doesn't offer up clean economic experiments is a common refrain in the discipline.
  • It’s not just that there is so little clear signal amid so much noise. It’s that many economists have a unique idea of what signal to listen to and what priority it deserves.
  • another great variable: personal values.
  • Economics, Mr. Mankiw concludes, won’t tell us, definitively, whether Peter or Paula is paying too much, because an answer inevitably leads to matters of values, which inevitably leads to different answers.
  • This is not to suggest that economics is a total free-for-all, lacking a broad consensus on any subject. Polls of economists have found near unanimity on topics like tariffs and import quotas (bad), centralized economies (very bad) and flexible, floating exchange rates (very good).
  • economics will forever have to contend with the biggest X factor of all: people.
  • A certain amount of psychological guesswork is part of an economist's job, which accounts for the rise in popularity of behavioral economics, an effort to account for the slippery, indefinite nexus of money and humans.
  • there’s a good reason that human irrationality isn’t part of the standard economic models, and this gets to the dilemma of economics.
  • On why economists disagree: The X Factor of Economics: People
Weiye Loh

Politics and self-confidence trump education on climate change - 0 views

  • One set of polls, conducted by the University of New Hampshire, focused on a set of rural areas, including Alaska, the Gulf Coast, and Appalachia. These probably don't reflect the US as a whole, but the pollsters had about 9,500 respondents. The second, published in The Sociological Quarterly, took advantage of a decade's worth of Earth Day polls conducted by Gallup.
  • Both surveys asked similar questions, however, including whether climate change has occurred and whether humans were likely to be the primary cause. The scientific community, including all the major scientific organizations that have issued statements on the matter, has said yes to both of these questions, and the authors interpret their findings in light of that.
  • The UNH poll shows that a strong majority—in the 80-90 percent range—accepts that climate change is happening. The Gallup polls explicitly asked about global warming and got lower percentages, although they still found that a majority of the US thinks the climate is changing. Those who label themselves conservatives, however, are notably less likely to even accept that basic point; less than half of them do, while the majority of liberals and independents do.
  • Although there was widespread acceptance that climate change was occurring, Democrats were much more likely to ascribe it to human causes (margins ranged from 20 to 50 percent). Independents were somewhere in the middle. Among those who claimed to understand the topic well, the gap actually increased.
  • Republicans with a high degree of confidence in their knowledge of the climate were more likely to dismiss the scientific community's opinion; the highly confident Democrats were more likely to embrace it. The authors caution, however, that "The survey answers thus reflect self-confidence, which has an untested relation to knowledge."
  • The people working with Gallup data performed the same analysis, and found precisely the same thing: the more that registered Republicans and self-described conservatives thought they knew about anthropogenic climate change, the less likely they were to accept the evidence for it. For Democrats and independents, the opposite was true (the same held for self-styled moderates and liberals). This group also did a slightly different check, and broke out opinions on global warming based on education and political leanings. For Democrats and independents, increased education boosted their readiness to accept the scientific community's conclusions. For self-styled conservatives, education had almost no effect (it gave a slight boost among registered Republicans). A toy version of this kind of breakdown appears in the sketch after this list.
  • Because this group had temporal data, they could track the progression of this liberal/conservative gap. It existed back in the first year they had data, 2001, but the gap was relatively stable until about 2008. At that point, acceptance among conservatives plunged, leading to the current gap of over 40 percentage points (up from less than 20) between these groups.
  • Both groups also come to similar conclusions about why this gap has developed. The piece in The Sociological Quarterly is appropriately sociological, suggesting that modernizing forces have compelled most societies to deal with the "negative consequences of industrial capitalism," such as pollution. Climate change, for these authors, is a case where the elites of conservative politics have convinced their followers to protect capitalism from any negative associations.
  • The UNH group takes a more nuanced, psychological view of matters. "'Biased assimilation' has been demonstrated in experiments that find people reject information about the existence of a problem if they object to its possible solutions," they note, before later stating that many appear to be "basing their beliefs about science and physical reality on what they thought would be the political implications if human-caused climate change were true."
  • neither group offers a satisfying solution. The sociologists simply warn that the culture wars have reached potentially dangerous proportions when it comes to climate science, while the group from New Hampshire suggests we might have to wait for an unambiguous consequence, like the loss of Arctic ice in summer, before some segments of society come around.
  • when it comes to climate change, politics dominates, eclipsing self-assessed knowledge and general education. In fact, it appears that your political persuasion might determine whether an education will make you more or less likely to believe the scientific community.
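As a toy illustration of the breakdown described above (acceptance split by party and by self-rated understanding), here is a minimal pandas sketch. The data are fabricated solely to show the shape of the analysis, not the surveys' actual figures.

```python
import pandas as pd

# Fabricated respondents, purely illustrative: party affiliation, whether they
# claim to understand climate science well, and whether they accept a human
# cause. Patterned so the party gap widens among the self-confident, as both
# polls reported.
df = pd.DataFrame({
    "party":     ["Dem"] * 6 + ["Rep"] * 6,
    "confident": [True, True, True, False, False, False] * 2,
    "accepts":   [True, True, True, True, True, False,     # Dems
                  False, False, False, True, True, False],  # Reps
})

# Acceptance rate broken out by party and self-rated confidence.
print(df.groupby(["party", "confident"])["accepts"].mean().unstack())
```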
Weiye Loh

Mystery and Evidence - NYTimes.com - 0 views

  • a very natural way for atheists to react to religious claims: to ask for evidence, and reject these claims in the absence of it. Many of the several hundred comments that followed two earlier Stone posts, “Philosophy and Faith” and “On Dawkins’s Atheism: A Response,” both by Gary Gutting, took this stance. Certainly this is the way that today’s “new atheists” tend to approach religion. According to their view, religions — by this they mean basically Christianity, Judaism and Islam, and I will follow them in this — are largely in the business of making claims about the universe that are a bit like scientific hypotheses. In other words, they are claims — like the claim that God created the world — that are supported by evidence, that are proved by arguments and tested against our experience of the world. And against the evidence, these hypotheses do not seem to fare well.
  • But is this the right way to think about religion? Here I want to suggest that it is not, and to try to locate what seem to me some significant differences between science and religion.
  • To begin with, scientific explanation is a very specific and technical kind of knowledge. It requires patience, pedantry, a narrowing of focus and (in the case of the most profound scientific theories) considerable mathematical knowledge and ability. No-one can understand quantum theory — by any account, the most successful physical theory there has ever been — unless they grasp the underlying mathematics. Anyone who says otherwise is fooling themselves.
  • Religious belief is a very different kind of thing. It is not restricted only to those with a certain education or knowledge, it does not require years of training, it is not specialized and it is not technical. (I’m talking here about the content of what people who regularly attend church, mosque or synagogue take themselves to be thinking; I’m not talking about how theologians interpret this content.)
  • while religious belief is widespread, scientific knowledge is not. I would guess that very few people in the world are actually interested in the details of contemporary scientific theories. Why? One obvious reason is that many lack access to this knowledge. Another reason is that even when they have access, these theories require sophisticated knowledge and abilities, which not everyone is capable of getting.
  • most people aren’t deeply interested in science, even when they have the opportunity and the basic intellectual capacity to learn about it. Of course, educated people who know about science know roughly what Einstein, Newton and Darwin said. Many educated people accept the modern scientific view of the world and understand its main outlines. But this is not the same as being interested in the details of science, or being immersed in scientific thinking.
  • This lack of interest in science contrasts sharply with the worldwide interest in religion. It’s hard to say whether religion is in decline or growing, partly because it’s hard to identify only one thing as religion — not a question I can address here. But it’s pretty obvious that whatever it is, religion commands and absorbs the passions and intellects of hundreds of millions of people, many more people than science does. Why is this? Is it because — as the new atheists might argue — they want to explain the world in a scientific kind of way, but since they have not been properly educated they haven’t quite got there yet? Or is it because so many people are incurably irrational and are incapable of scientific thinking? Or is something else going on?
  • Some philosophers have said that religion is so unlike science that it has its own “grammar” or “logic” and should not be held accountable to the same standards as scientific or ordinary empirical belief. When Christians express their belief that “Christ has risen,” for example, they should not be taken as making a factual claim, but as expressing their commitment to what Wittgenstein called a certain “form of life,” a way of seeing significance in the world, a moral and practical outlook which is worlds away from scientific explanation.
  • This view has some merits, as we shall see, but it grossly misrepresents some central phenomena of religion. It is absolutely essential to religions that they make certain factual or historical claims. When Saint Paul says “if Christ is not risen, then our preaching is in vain and our faith is in vain” he is saying that the point of his faith depends on a certain historical occurrence.
  • Theologians will debate exactly what it means to claim that Christ has risen, what exactly the meaning and significance of this occurrence is, and will give more or less sophisticated accounts of it. But all I am saying is that whatever its specific nature, Christians must hold that there was such an occurrence. Christianity does make factual, historical claims. But this is not the same as being a kind of proto-science. This will become clear if we reflect a bit on what science involves.
  • The essence of science involves making hypotheses about the causes and natures of things, in order to explain the phenomena we observe around us, and to predict their future behavior. Some sciences — medical science, for example — make hypotheses about the causes of diseases and test them by intervening. Others — cosmology, for example — make hypotheses that are more remote from everyday causes, and involve a high level of mathematical abstraction and idealization. Scientific reasoning involves an obligation to hold a hypothesis only to the extent that the evidence requires it. Scientists should not accept hypotheses which are “ad hoc” — that is, just tailored for one specific situation but cannot be generalized to others. Most scientific theories involve some kind of generalization: they don’t just make claims about one thing, but about things of a general kind. And their hypotheses are designed, on the whole, to make predictions; and if these predictions don’t come out true, then this is something for the scientists to worry about.
  • Religions do not construct hypotheses in this sense. I said above that Christianity rests upon certain historical claims, like the claim of the resurrection. But this is not enough to make scientific hypotheses central to Christianity, any more than it makes such hypotheses central to history. It is true, as I have just said, that Christianity does place certain historical events at the heart of its conception of the world, and to that extent, one cannot be a Christian unless one believes that these events happened. Speaking for myself, it is because I reject the factual basis of the central Christian doctrines that I consider myself an atheist. But I do not reject these claims because I think they are bad hypotheses in the scientific sense. Not all factual claims are scientific hypotheses. So I disagree with Richard Dawkins when he says “religions make existence claims, and this means scientific claims.”
  • Taken as hypotheses, religious claims do very badly: they are ad hoc, they are arbitrary, they rarely make predictions and when they do they almost never come true. Yet the striking fact is that it does not worry Christians when this happens. In the gospels Jesus predicts the end of the world and the coming of the kingdom of God. It does not worry believers that Jesus was wrong (even if it causes theologians to reinterpret what is meant by ‘the kingdom of God’). If Jesus was framing something like a scientific hypothesis, then it should worry them. Critics of religion might say that this just shows the manifest irrationality of religion. But what it suggests to me is that something else is going on, other than hypothesis formation.
  • Religious belief tolerates a high degree of mystery and ignorance in its understanding of the world. When the devout pray, and their prayers are not answered, they do not take this as evidence which has to be weighed alongside all the other evidence that prayer is effective. They feel no obligation whatsoever to weigh the evidence. If God does not answer their prayers, well, there must be some explanation of this, even though we may never know it. Why do people suffer if an omnipotent God loves them? Many complex answers have been offered, but in the end they come down to this: it’s a mystery.
  • Science too has its share of mysteries (or rather: things that must simply be accepted without further explanation). But one aim of science is to minimize such things, to reduce the number of primitive concepts or primitive explanations. The religious attitude is very different. It does not seek to minimize mystery. Mysteries are accepted as a consequence of what, for the religious, makes the world meaningful.
  • Religion is an attempt to make sense of the world, but it does not try and do this in the way science does. Science makes sense of the world by showing how things conform to its hypotheses. The characteristic mode of scientific explanation is showing how events fit into a general pattern.
  • Religion, on the other hand, attempts to make sense of the world by seeing a kind of meaning or significance in things. This kind of significance does not need laws or generalizations, but just the sense that the everyday world we experience is not all there is, and that behind it all is the mystery of God’s presence. The believer is already convinced that God is present in everything, even if they cannot explain this or support it with evidence. But it makes sense of their life by suffusing it with meaning. This is the attitude (seeing God in everything) expressed in George Herbert’s poem, “The Elixir.” Equipped with this attitude, even the most miserable tasks can come to have value: Who sweeps a room as for Thy laws/ Makes that and th’ action fine.
  • None of these remarks are intended as being for or against religion. Rather, they are part of an attempt (by an atheist, from the outside) to understand what it is. Those who criticize religion should have an accurate understanding of what it is they are criticizing. But to understand a world view, or a philosophy or system of thought, it is not enough just to understand the propositions it contains. You also have to understand what is central and what is peripheral to the view. Religions do make factual and historical claims, and if these claims are false, then the religions fail. But this dependence on fact does not make religious claims anything like hypotheses in the scientific sense. Hypotheses are not central. Rather, what is central is the commitment to the meaningfulness (and therefore the mystery) of the world.
  • while religious thinking is widespread in the world, scientific thinking is not. I don’t think that this can be accounted for merely in terms of the ignorance or irrationality of human beings. Rather, it is because of the kind of intellectual, emotional and practical appeal that religion has for people, which is a very different appeal from the kind of appeal that science has. Stephen Jay Gould once argued that religion and science are “non-overlapping magisteria.” If he meant by this that religion makes no factual claims which can be refuted by empirical investigations, then he was wrong. But if he meant that religion and science are very different kinds of attempt to understand the world, then he was certainly right.
  • Mystery and Evidence, by TIM CRANE