New Media Ethics 2009 course: Group items tagged "how"

Weiye Loh

Edge: HOW DOES OUR LANGUAGE SHAPE THE WAY WE THINK? By Lera Boroditsky

  • Do the languages we speak shape the way we see the world, the way we think, and the way we live our lives? Do people who speak different languages think differently simply because they speak different languages? Does learning new languages change the way you think? Do polyglots think differently when speaking different languages?
  • For a long time, the idea that language might shape thought was considered at best untestable and more often simply wrong. Research in my labs at Stanford University and at MIT has helped reopen this question. We have collected data around the world: from China, Greece, Chile, Indonesia, Russia, and Aboriginal Australia.
  • What we have learned is that people who speak different languages do indeed think differently and that even flukes of grammar can profoundly affect how we see the world.
  • ...15 more annotations...
  • Suppose you want to say, "Bush read Chomsky's latest book." Let's focus on just the verb, "read." To say this sentence in English, we have to mark the verb for tense; in this case, we have to pronounce it like "red" and not like "reed." In Indonesian you need not (in fact, you can't) alter the verb to mark tense. In Russian you would have to alter the verb to indicate tense and gender. So if it was Laura Bush who did the reading, you'd use a different form of the verb than if it was George. In Russian you'd also have to include in the verb information about completion. If George read only part of the book, you'd use a different form of the verb than if he'd diligently plowed through the whole thing. In Turkish you'd have to include in the verb how you acquired this information: if you had witnessed this unlikely event with your own two eyes, you'd use one verb form, but if you had simply read or heard about it, or inferred it from something Bush said, you'd use a different verb form.
  • Clearly, languages require different things of their speakers. Does this mean that the speakers think differently about the world? Do English, Indonesian, Russian, and Turkish speakers end up attending to, partitioning, and remembering their experiences differently just because they speak different languages?
  • For some scholars, the answer to these questions has been an obvious yes. Just look at the way people talk, they might say. Certainly, speakers of different languages must attend to and encode strikingly different aspects of the world just so they can use their language properly. Scholars on the other side of the debate don't find the differences in how people talk convincing. All our linguistic utterances are sparse, encoding only a small part of the information we have available. Just because English speakers don't include the same information in their verbs that Russian and Turkish speakers do doesn't mean that English speakers aren't paying attention to the same things; all it means is that they're not talking about them. It's possible that everyone thinks the same way, notices the same things, but just talks differently.
  • Believers in cross-linguistic differences counter that everyone does not pay attention to the same things: if everyone did, one might think it would be easy to learn to speak other languages. Unfortunately, learning a new language (especially one not closely related to those you know) is never easy; it seems to require paying attention to a new set of distinctions. Whether it's distinguishing modes of being in Spanish, evidentiality in Turkish, or aspect in Russian, learning to speak these languages requires something more than just learning vocabulary: it requires paying attention to the right things in the world so that you have the correct information to include in what you say.
  • Follow me to Pormpuraaw, a small Aboriginal community on the western edge of Cape York, in northern Australia. I came here because of the way the locals, the Kuuk Thaayorre, talk about space. Instead of words like "right," "left," "forward," and "back," which, as commonly used in English, define space relative to an observer, the Kuuk Thaayorre, like many other Aboriginal groups, use cardinal-direction terms — north, south, east, and west — to define space.[1] This is done at all scales, which means you have to say things like "There's an ant on your southeast leg" or "Move the cup to the north northwest a little bit." One obvious consequence of speaking such a language is that you have to stay oriented at all times, or else you cannot speak properly. The normal greeting in Kuuk Thaayorre is "Where are you going?" and the answer should be something like "South-southeast, in the middle distance." If you don't know which way you're facing, you can't even get past "Hello."
  • The result is a profound difference in navigational ability and spatial knowledge between speakers of languages that rely primarily on absolute reference frames (like Kuuk Thaayorre) and languages that rely on relative reference frames (like English).[2] Simply put, speakers of languages like Kuuk Thaayorre are much better than English speakers at staying oriented and keeping track of where they are, even in unfamiliar landscapes or inside unfamiliar buildings. What enables them — in fact, forces them — to do this is their language. Having their attention trained in this way equips them to perform navigational feats once thought beyond human capabilities. Because space is such a fundamental domain of thought, differences in how people think about space don't end there. People rely on their spatial knowledge to build other, more complex, more abstract representations. Representations of such things as time, number, musical pitch, kinship relations, morality, and emotions have been shown to depend on how we think about space. So if the Kuuk Thaayorre think differently about space, do they also think differently about other things, like time? This is what my collaborator Alice Gaby and I came to Pormpuraaw to find out.
  • To test this idea, we gave people sets of pictures that showed some kind of temporal progression (e.g., pictures of a man aging, or a crocodile growing, or a banana being eaten). Their job was to arrange the shuffled photos on the ground to show the correct temporal order. We tested each person in two separate sittings, each time facing in a different cardinal direction. If you ask English speakers to do this, they'll arrange the cards so that time proceeds from left to right. Hebrew speakers will tend to lay out the cards from right to left, showing that writing direction in a language plays a role.[3] So what about folks like the Kuuk Thaayorre, who don't use words like "left" and "right"? What will they do? The Kuuk Thaayorre did not arrange the cards more often from left to right than from right to left, nor more toward or away from the body. But their arrangements were not random: there was a pattern, just a different one from that of English speakers. Instead of arranging time from left to right, they arranged it from east to west. That is, when they were seated facing south, the cards went left to right. When they faced north, the cards went from right to left. When they faced east, the cards came toward the body, and so on. This was true even though we never told any of our subjects which direction they faced. The Kuuk Thaayorre not only knew that already (usually much better than I did), but they also spontaneously used this spatial orientation to construct their representations of time. (A toy sketch of this absolute-to-relative mapping appears after this list.)
  • I have described how languages shape the way we think about space, time, colors, and objects. Other studies have found effects of language on how people construe events, reason about causality, keep track of number, understand material substance, perceive and experience emotion, reason about other people's minds, choose to take risks, and even in the way they choose professions and spouses.[8] Taken together, these results show that linguistic processes are pervasive in most fundamental domains of thought, unconsciously shaping us from the nuts and bolts of cognition and perception to our loftiest abstract notions and major life decisions. Language is central to our experience of being human, and the languages we speak profoundly shape the way we think, the way we see the world, the way we live our lives.
  • The fact that even quirks of grammar, such as grammatical gender, can affect our thinking is profound. Such quirks are pervasive in language; gender, for example, applies to all nouns, which means that it is affecting how people think about anything that can be designated by a noun.
  • How does an artist decide whether death, say, or time should be painted as a man or a woman? It turns out that in 85 percent of such personifications, whether a male or female figure is chosen is predicted by the grammatical gender of the word in the artist's native language. So, for example, German painters are more likely to paint death as a man, whereas Russian painters are more likely to paint death as a woman.
  • Does treating chairs as masculine and beds as feminine in the grammar make Russian speakers think of chairs as being more like men and beds as more like women in some way? It turns out that it does. In one study, we asked German and Spanish speakers to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical gender. For example, when asked to describe a "key" — a word that is masculine in German and feminine in Spanish — the German speakers were more likely to use words like "hard," "heavy," "jagged," "metal," "serrated," and "useful," whereas Spanish speakers were more likely to say "golden," "intricate," "little," "lovely," "shiny," and "tiny." To describe a "bridge," which is feminine in German and masculine in Spanish, the German speakers said "beautiful," "elegant," "fragile," "peaceful," "pretty," and "slender," and the Spanish speakers said "big," "dangerous," "long," "strong," "sturdy," and "towering." This was true even though all testing was done in English, a language without grammatical gender. The same pattern of results also emerged in entirely nonlinguistic tasks (e.g., rating similarity between pictures). And we can also show that it is aspects of language per se that shape how people think: teaching English speakers new grammatical gender systems influences mental representations of objects in the same way it does with German and Spanish speakers. Apparently even small flukes of grammar, like the seemingly arbitrary assignment of gender to a noun, can have an effect on people's ideas of concrete objects in the world.
  • Even basic aspects of time perception can be affected by language. For example, English speakers prefer to talk about duration in terms of length (e.g., "That was a short talk," "The meeting didn't take long"), while Spanish and Greek speakers prefer to talk about time in terms of amount, relying more on words like "much," "big," and "little" rather than "short" and "long." Our research into such basic cognitive abilities as estimating duration shows that speakers of different languages differ in ways predicted by the patterns of metaphors in their language. (For example, when asked to estimate duration, English speakers are more likely to be confused by distance information, estimating that a line of greater length remains on the test screen for a longer period of time, whereas Greek speakers are more likely to be confused by amount, estimating that a container that is fuller remains longer on the screen.)
  • An important question at this point is: Are these differences caused by language per se or by some other aspect of culture? Of course, the lives of English, Mandarin, Greek, Spanish, and Kuuk Thaayorre speakers differ in a myriad of ways. How do we know that it is language itself that creates these differences in thought and not some other aspect of their respective cultures? One way to answer this question is to teach people new ways of talking and see if that changes the way they think. In our lab, we've taught English speakers different ways of talking about time. In one such study, English speakers were taught to use size metaphors (as in Greek) to describe duration (e.g., a movie is larger than a sneeze), or vertical metaphors (as in Mandarin) to describe event order. Once the English speakers had learned to talk about time in these new ways, their cognitive performance began to resemble that of Greek or Mandarin speakers. This suggests that patterns in a language can indeed play a causal role in constructing how we think.[6] In practical terms, it means that when you're learning a new language, you're not simply learning a new way of talking; you are also inadvertently learning a new way of thinking. Beyond abstract or complex domains of thought like space and time, languages also meddle in basic aspects of visual perception — our ability to distinguish colors, for example. Different languages divide up the color continuum differently: some make many more distinctions between colors than others, and the boundaries often don't line up across languages.
  • To test whether differences in color language lead to differences in color perception, we compared Russian and English speakers' ability to discriminate shades of blue. In Russian there is no single word that covers all the colors that English speakers call "blue." Russian makes an obligatory distinction between light blue (goluboy) and dark blue (siniy). Does this distinction mean that siniy blues look more different from goluboy blues to Russian speakers? Indeed, the data say yes. Russian speakers are quicker to distinguish two shades of blue that are called by the different names in Russian (i.e., one being siniy and the other being goluboy) than if the two fall into the same category. For English speakers, all these shades are still designated by the same word, "blue," and there are no comparable differences in reaction time. Further, the Russian advantage disappears when subjects are asked to perform a verbal interference task (reciting a string of digits) while making color judgments but not when they're asked to perform an equally difficult spatial interference task (keeping a novel visual pattern in memory). The disappearance of the advantage when performing a verbal task shows that language is normally involved in even surprisingly basic perceptual judgments — and that it is language per se that creates this difference in perception between Russian and English speakers.
  • What it means for a language to have grammatical gender is that words belonging to different genders get treated differently grammatically and words belonging to the same grammatical gender get treated the same grammatically. Languages can require speakers to change pronouns, adjective and verb endings, possessives, numerals, and so on, depending on the noun's gender. For example, to say something like "my chair was old" in Russian (moy stul bil' stariy), you'd need to make every word in the sentence agree in gender with "chair" (stul), which is masculine in Russian. So you'd use the masculine form of "my," "was," and "old." These are the same forms you'd use in speaking of a biological male, as in "my grandfather was old." If, instead of speaking of a chair, you were speaking of a bed (krovat'), which is feminine in Russian, or about your grandmother, you would use the feminine form of "my," "was," and "old."
  •  
    Language is a uniquely human gift, central to our experience of being human. Appreciating its role in constructing our mental lives brings us one step closer to understanding the very nature of humanity.
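    The card-arrangement finding above is mechanical enough to spell out. The following is a toy sketch (an illustration constructed here, not taken from the article) of how an absolute, east-to-west timeline translates into egocentric terms for each facing direction; the bearing table and direction labels are assumptions for illustration.

        # Toy illustration (not from the article): translate an absolute,
        # east-to-west timeline into egocentric terms for a seated observer.
        # Compass bearings in degrees, measured clockwise from north.
        BEARINGS = {"north": 0, "east": 90, "south": 180, "west": 270}

        def timeline_direction(facing: str) -> str:
            """Egocentric direction in which an east-to-west timeline runs
            for an observer facing the given cardinal direction."""
            # Bearing of west (where the timeline ends), relative to heading.
            rel = (BEARINGS["west"] - BEARINGS[facing]) % 360
            return {
                0: "away from the body",   # west is straight ahead
                90: "left to right",       # west is to the observer's right
                180: "toward the body",    # west is behind the observer
                270: "right to left",      # west is to the observer's left
            }[rel]

        for facing in ("south", "north", "east", "west"):
            print(f"facing {facing}: cards run {timeline_direction(facing)}")

    Run as-is, this reproduces the reported layouts: facing south gives left to right, facing north gives right to left, and facing east brings the cards toward the body.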
Weiye Loh

Science, Strong Inference -- Proper Scientific Method

  • Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist's field and methods of study are as good as every other scientist's and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.
  • Why should there be such rapid advances in some fields and not in others? I think the usual explanations that we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of research contracts - are important but inadequate. I have begun to believe that the primary factor in scientific advance is an intellectual one. These rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name of "strong inference." I believe it is important to examine this method, its use and history and rationale, and to see whether other groups and individuals might learn to adopt it profitably in their own scientific and intellectual work. In its separate elements, strong inference is just the simple and old-fashioned method of inductive inference that goes back to Francis Bacon. The steps are familiar to every college student and are practiced, off and on, by every scientist. The difference comes in their systematic application. Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly: 1) devising alternative hypotheses; 2) devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses; 3) carrying out the experiment so as to get a clean result; 4) recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain, and so on. (A minimal sketch of this cycle as an explicit loop appears after this list.)
  • On any new problem, of course, inductive inference is not as simple and certain as deduction, because it involves reaching out into the unknown. Steps 1 and 2 require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome, and exclusion will be related in a rigorous syllogism; and the question of how to generate such inventions is one which has been extensively discussed elsewhere (2, 3). What the formal schema reminds us to do is to try to make these inventions, to take the next step, to proceed to the next fork, without dawdling or getting tied up in irrelevancies.
  • ...28 more annotations...
  • It is clear why this makes for rapid and powerful progress. For exploring the unknown, there is no faster method; this is the minimum sequence of steps. Any conclusion that is not an exclusion is insecure and must be rechecked. Any delay in recycling to the next set of hypotheses is only a delay. Strong inference, and the logical tree it generates, are to inductive reasoning what the syllogism is to deductive reasoning in that it offers a regular method for reaching firm inductive conclusions one after the other as rapidly as possible.
  • "But what is so novel about this?" someone will say. This is the method of science and always has been, why give it a special name? The reason is that many of us have almost forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method- oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations. We fail to teach our students how to sharpen up their inductive inferences. And we do not realize the added power that the regular and explicit use of alternative hypothesis and sharp exclusion could give us at every step of our research.
  • A distinguished cell biologist rose and said, "No two cells give the same properties. Biology is the science of heterogeneous systems." And he added privately: "You know there are scientists, and there are people in science who are just working with these over-simplified model systems - DNA chains and in vitro systems - who are not doing science at all. We need their auxiliary work: they build apparatus, they make minor studies, but they are not scientists." To which Cy Levinthal replied: "Well, there are two kinds of biologists, those who are looking to see if there is one thing that can be understood and those who keep saying it is very complicated and that nothing can be understood. . . . You must study the simplest system you think has the properties you are interested in."
  • At the 1958 Conference on Biophysics, at Boulder, there was a dramatic confrontation between the two points of view. Leo Szilard said: "The problems of how enzymes are induced, of how proteins are synthesized, of how antibodies are formed, are closer to solution than is generally believed. If you do stupid experiments, and finish one a year, it can take 50 years. But if you stop doing experiments for a little while and think how proteins can possibly be synthesized, there are only about 5 different ways, not 50! And it will take only a few experiments to distinguish these." One of the young men added: "It is essentially the old question: How small and elegant an experiment can you perform?" These comments upset a number of those present. An electron microscopist said, "Gentlemen, this is off the track. This is philosophy of science." Szilard retorted, "I was not quarreling with third-rate scientists: I was quarreling with first-rate scientists."
  • Any criticism or challenge to consider changing our methods strikes, of course, at all our ego-defenses. But in this case the analytical method offers the possibility of such great increases in effectiveness that it is unfortunate that it cannot be regarded more often as a challenge to learning rather than as a challenge to combat. Many of the recent triumphs in molecular biology have in fact been achieved on just such "oversimplified model systems," very much along the analytical lines laid down in the 1958 discussion. They have not fallen to the kind of men who justify themselves by saying "No two cells are alike," regardless of how true that may ultimately be. The triumphs are in fact triumphs of a new way of thinking.
  • The emphasis on strong inference is also partly due to the nature of the fields themselves. Biology, with its vast informational detail and complexity, is a "high-information" field, where years and decades can easily be wasted on the usual type of "low-information" observations or experiments if one does not think carefully in advance about what the most important and conclusive experiments would be. And in high-energy physics, both the "information flux" of particles from the new accelerators and the million-dollar costs of operation have forced a similar analytical approach. It pays to have a top-notch group debate every experiment ahead of time; and the habit spreads throughout the field.
  • Historically, I think, there have been two main contributions to the development of a satisfactory strong-inference method. The first is that of Francis Bacon (13). He wanted a "surer method" of "finding out nature" than either the logic-chopping or all-inclusive theories of the time or the laudable but crude attempts to make inductions "by simple enumeration." He did not merely urge experiments as some suppose, he showed the fruitfulness of interconnecting theory and experiment so that the one checked the other. Of the many inductive procedures he suggested, the most important, I think, was the conditional inductive tree, which proceeded from alternative hypotheses (possible "causes," as he calls them), through crucial experiments ("Instances of the Fingerpost"), to exclusion of some alternatives and adoption of what is left ("establishing axioms"). His Instances of the Fingerpost are explicitly at the forks in the logical tree, the term being borrowed "from the fingerposts which are set up where roads part, to indicate the several directions."
  • Here was a method that could separate off the empty theories! Bacon said the inductive method could be learned by anybody, just like learning to "draw a straighter line or more perfect circle . . . with the help of a ruler or a pair of compasses." "My way of discovering sciences goes far to level men's wit and leaves but little to individual excellence, because it performs everything by the surest rules and demonstrations." Even occasional mistakes would not be fatal. "Truth will sooner come out from error than from confusion."
  • Nevertheless there is a difficulty with this method. As Bacon emphasizes, it is necessary to make "exclusions." He says, "The induction which is to be available for the discovery and demonstration of sciences and arts, must analyze nature by proper rejections and exclusions, and then, after a sufficient number of negatives come to a conclusion on the affirmative instances." "[To man] it is granted only to proceed at first by negatives, and at last to end in affirmatives after exclusion has been exhausted." Or, as the philosopher Karl Popper says today, there is no such thing as proof in science - because some later alternative explanation may be as good or better - so that science advances only by disproofs. There is no point in making hypotheses that are not falsifiable because such hypotheses do not say anything: "it must be possible for an empirical scientific system to be refuted by experience" (14).
  • The difficulty is that disproof is a hard doctrine. If you have a hypothesis and I have another hypothesis, evidently one of them must be eliminated. The scientist seems to have no choice but to be either soft-headed or disputatious. Perhaps this is why so many tend to resist the strong analytical approach and why some great scientists are so disputatious.
  • Fortunately, it seems to me, this difficulty can be removed by the use of a second great intellectual invention, the "method of multiple hypotheses," which is what was needed to round out the Baconian scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the century, who is best known for his contribution to the Chamberlin-Moulton hypothesis of the origin of the solar system.
  • Chamberlin says our trouble is that when we make a single hypothesis, we become attached to it. "The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence, and as the explanation grows into a definite theory his parental affections cluster about his offspring and it grows more and more dear to him. . . . There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory..." "To avoid this grave danger, the method of multiple working hypotheses is urged. It differs from the simple working hypothesis in that it distributes the effort and divides the affections. . . . Each hypothesis suggests its own criteria, its own method of proof, its own method of developing the truth, and if a group of hypotheses encompass the subject on all sides, the total outcome of means and of methods is full and rich."
  • The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas. It becomes much easier then for each of us to aim every day at conclusive disproofs - at strong inference - without either reluctance or combativeness. In fact, when there are multiple hypotheses, which are not anyone's "personal property," and when there are crucial experiments to test them, the daily life in the laboratory takes on an interest and excitement it never had, and the students can hardly wait to get to work to see how the detective story will come out. It seems to me that this is the reason for the development of those distinctive habits of mind and the "complex thought" that Chamberlin described, the reason for the sharpness, the excitement, the zeal, the teamwork - yes, even international teamwork - in molecular biology and high-energy physics today. What else could be so effective?
  • Unfortunately, I think, there are other areas of science today that are sick by comparison, because they have forgotten the necessity for alternative hypotheses and disproof. Each man has only one branch - or none - on the logical tree, and it twists at random without ever coming to the need for a crucial decision at any point. We can see from the external symptoms that there is something scientifically wrong. The Frozen Method, The Eternal Surveyor, The Never Finished, The Great Man With a Single Hypothesis, The Little Club of Dependents, The Vendetta, The All-Encompassing Theory Which Can Never Be Falsified.
  • a "theory" of this sort is not a theory at all, because it does not exclude anything. It predicts everything, and therefore does not predict anything. It becomes simply a verbal formula which the graduate student repeats and believes because the professor has said it so often. This is not science, but faith; not theory, but theology. Whether it is hand-waving or number-waving, or equation-waving, a theory is not a theory unless it can be disproved. That is, unless it can be falsified by some possible experimental outcome.
  • the work methods of a number of scientists have been testimony to the power of strong inference. Is success not due in many cases to systematic use of Bacon's "surest rules and demonstrations" as much as to rare and unattainable intellectual power? Faraday's famous diary (16), or Fermi's notebooks (3, 17), show how these men believed in the effectiveness of daily steps in applying formal inductive methods to one problem after another.
  • Surveys, taxonomy, design of equipment, systematic measurements and tables, theoretical computations - all have their proper and honored place, provided they are parts of a chain of precise induction of how nature works. Unfortunately, all too often they become ends in themselves, mere time-serving from the point of view of real scientific advance, a hypertrophied methodology that justifies itself as a lore of respectability.
  • We speak piously of taking measurements and making small studies that will "add another brick to the temple of science." Most such bricks just lie around the brickyard (20). Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.
  • Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important. Strong inference redirects a man to problem-orientation, but it requires him to be willing repeatedly to put aside his last methods and teach himself new ones.
  • anyone who asks the question about scientific effectiveness will also conclude that much of the mathematizing in physics and chemistry today is irrelevant if not misleading. The great value of mathematical formulation is that when an experiment agrees with a calculation to five decimal places, a great many alternative hypotheses are pretty well excluded (though the Bohr theory and the Schrödinger theory both predict exactly the same Rydberg constant!). But when the fit is only to two decimal places, or one, it may be a trap for the unwary; it may be no better than any rule-of-thumb extrapolation, and some other kind of qualitative exclusion might be more rigorous for testing the assumptions and more important to scientific understanding than the quantitative fit.
  • Today we preach that science is not science unless it is quantitative. We substitute correlations for causal studies, and physical equations for organic reasoning. Measurements and equations are supposed to sharpen thinking, but, in my observation, they more often tend to make the thinking noncausal and fuzzy. They tend to become the object of scientific manipulation instead of auxiliary tests of crucial inferences.
  • Many - perhaps most - of the great issues of science are qualitative, not quantitative, even in physics and chemistry. Equations and measurements are useful when and only when they are related to proof; but proof or disproof comes first and is in fact strongest when it is absolutely convincing without any quantitative measurement.
  • you can catch phenomena in a logical box or in a mathematical box. The logical box is coarse but strong. The mathematical box is fine-grained but flimsy. The mathematical box is a beautiful way of wrapping up a problem, but it will not hold the phenomena unless they have been caught in a logical box to begin with.
  • Of course it is easy - and all too common - for one scientist to call the others unscientific. My point is not that my particular conclusions here are necessarily correct, but that we have long needed some absolute standard of possible scientific effectiveness by which to measure how well we are succeeding in various areas - a standard that many could agree on and one that would be undistorted by the scientific pressures and fashions of the times and the vested interests and busywork that they develop. It is not public evaluation I am interested in so much as a private measure by which to compare one's own scientific performance with what it might be. I believe that strong inference provides this kind of standard of what the maximum possible scientific effectiveness could be - as well as a recipe for reaching it.
  • The strong-inference point of view is so resolutely critical of methods of work and values in science that any attempt to compare specific cases is likely to sound both smug and destructive. Mainly one should try to teach it by example and by exhorting to self-analysis and self-improvement only in general terms.
  • one severe but useful private test - a touchstone of strong inference - that removes the necessity for third-person criticism, because it is a test that anyone can learn to carry with him for use as needed. It is our old friend the Baconian "exclusion," but I call it "The Question." Obviously it should be applied as much to one's own thinking as to others'. It consists of asking in your own mind, on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"; or, on hearing a scientific experiment described, "But sir, what hypothesis does your experiment disprove?"
  • It is not true that all science is equal; or that we cannot justly compare the effectiveness of scientists by any method other than a mutual-recommendation system. The man to watch, the man to put your money on, is not the man who wants to make "a survey" or a "more detailed study" but the man with the notebook, the man with the alternative hypotheses and the crucial experiments, the man who knows how to answer your Question of disproof and is already working on it.
  •  
    There is so much bad science and bad statistics in media reports, in publications, and in everyday conversation that I think it is important to understand facts and proofs and the associated pitfalls.
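    The four-step cycle quoted above can be written as an explicit loop. The following is a minimal sketch, a paraphrase rather than Platt's own notation; the function parameters and the `excludes` interface on experiments are hypothetical placeholders.

        # Sketch of the strong-inference cycle (paraphrase, hypothetical
        # interface): every pass must exclude at least one live hypothesis.
        def strong_inference(hypotheses, devise_crucial_experiment, run, refine):
            """hypotheses: set of alternative explanations (step 1).
            devise_crucial_experiment(hs): returns an experiment whose
                possible outcomes each exclude one or more of hs (step 2).
            run(experiment): carries out the experiment cleanly (step 3).
            refine(survivors): splits survivors into subhypotheses (step 4).
            """
            while len(hypotheses) > 1:
                experiment = devise_crucial_experiment(hypotheses)  # step 2
                outcome = run(experiment)                           # step 3
                survivors = {h for h in hypotheses
                             if not experiment.excludes(h, outcome)}
                if survivors == hypotheses:
                    # Not a crucial experiment: nothing was excluded.
                    raise ValueError("experiment excluded no hypothesis")
                hypotheses = refine(survivors)                      # step 4
            return hypotheses  # what survives exhaustive exclusion

    The guard in the middle is the whole discipline: an experiment whose outcome cannot exclude any live hypothesis is, in Platt's terms, busywork rather than strong inference.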
Weiye Loh

7 Essential Skills You Didn't Learn in College | Magazine

shared by Weiye Loh on 15 Oct 10
  • Statistical Literacy. Why take this course? We are misled by numbers and by our misunderstanding of probability.
  • Our world is shaped by widespread statistical illiteracy. We fear things that probably won’t kill us (terrorist attacks) and ignore things that probably will (texting while driving). We buy lottery tickets. We fall prey to misleading gut instincts, which lead to biases like loss aversion—an inability to gauge risk against potential gain. The effects play out in the grocery store, the office, and the voting booth (not to mention the bedroom: People who are more risk-averse are less successful in love).
  • We are now 53 percent more likely than our parents to trust polls of dubious merit. (That figure is totally made up. See?) Where do all these numbers that we remember so easily and cite so readily come from? How are they calculated, and by whom? How do we misuse them to make them say what we want them to? We’ll explore all of these questions in a sequence on sourcing statistics.
  • ...9 more annotations...
  • probabilistic intuition. We’ll learn to judge what’s likely and unlikely—and what’s impossible to know. We’ll learn about distorting habits of mind like selection bias—and how to guard against them. We’ll gamble. We’ll read The Art of Probability for Scientists and Engineers by Richard Hamming, Expert Political Judgment by Philip Tetlock, and How to Cheat Your Friends at Poker by Penn Jillette and Mickey Lynn. (A small simulation of selection bias appears after this list.)
  • Post-State Diplomacy. Why take this course? As the world becomes ever more atomized, understanding the new leaders and constituencies becomes increasingly important.
  • tribal insurgents to multinational corporations, private charities to pirate gangs, religious movements to armies for hire, a range of organizations now compete with (and sometimes eclipse) the nation-states in which they reside. Without capitals or traditional constituencies, they can’t be persuaded or deterred by traditional tactics.
  • that doesn’t mean diplomacy is dead; quite the opposite. Negotiating with these parties requires the same skills as dealing with belligerent nations—understanding the shareholders and alliances they must answer to, the cultures that inform how they behave, and the religious, economic, and political interests they must address.
  • Power has always depended on who can provide justice, commerce, and stability.
  • Remix Culture. Why take this course? Modern artists don’t start with a blank page or empty canvas. They start with preexisting works. What you’ll learn: How to analyze—and create—artworks made out of other artworks.
  • philosophical roots of remix culture and study seminal works like Robert Rauschenberg’s Monogram and Jorge Luis Borges’ Pierre Menard, Author of Don Quixote. And we’ll examine modern-day exemplars from DJ Shadow’s Endtroducing to Auto-Tune the News.
  • Applied Cognition. Why take this course? You have to know the brain to train the brain. What you’ll learn: How the mind works and how you can make it work for you.
  • Writing for New Forms. Why take this course? You can write a cogent essay, but can you write it in 140 characters or less? What you’ll learn: How to adapt your message to multiple formats and audiences—human and machine.
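    The selection-bias item above invites a small demonstration. The following simulation is an assumed example, not from the Wired piece: it compares the true average return of a noisy venture with the average computed only over the "survivors" anyone hears about.

        # Illustration of selection bias (assumed example): averaging only
        # the successes that get noticed wildly overstates typical outcomes.
        import random

        random.seed(42)
        # True process: slightly negative expected return, high variance.
        population = [random.gauss(-0.05, 0.30) for _ in range(100_000)]

        true_mean = sum(population) / len(population)
        survivors = [r for r in population if r > 0]  # only winners are visible
        survivor_mean = sum(survivors) / len(survivors)

        print(f"true mean return:   {true_mean:+.3f}")      # close to -0.05
        print(f"survivor-only mean: {survivor_mean:+.3f}")  # strongly positive

    Conditioning on success flips the sign of the estimate, which is exactly the distorting habit of mind the course description warns about.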
Weiye Loh

IPhone and Android Apps Breach Privacy - WSJ.com

  • Few devices know more personal details about people than the smartphones in their pockets: phone numbers, current location, often the owner's real name—even a unique ID number that can never be changed or turned off.
  • An examination of 101 popular smartphone "apps"—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone's unique device ID to other companies without users' awareness or consent. Forty-seven apps transmitted the phone's location in some way. Five sent age, gender and other personal details to outsiders.
  • The findings reveal the intrusive effort by online-tracking companies to gather personal data about people in order to flesh out detailed dossiers on them.
  • ...24 more annotations...
  • iPhone apps transmitted more data than the apps on phones using Google Inc.'s Android operating system. Because of the test's size, it's not known if the pattern holds among the hundreds of thousands of apps available.
  • TextPlus 4, a popular iPhone app for text messaging. It sent the phone's unique ID number to eight ad companies and the phone's zip code, along with the user's age and gender, to two of them.
  • Pandora, a popular music app, sent age, gender, location and phone identifiers to various ad networks. iPhone and Android versions of a game called Paper Toss—players try to throw paper wads into a trash can—each sent the phone's ID number to at least five ad companies. Grindr, an iPhone app for meeting gay men, sent gender, location and phone ID to three ad companies.
  • iPhone maker Apple Inc. says it reviews each app before offering it to users. Both Apple and Google say they protect users by requiring apps to obtain permission before revealing certain kinds of information, such as location.
  • The Journal found that these rules can be skirted. One iPhone app, Pumpkin Maker (a pumpkin-carving game), transmits location to an ad network without asking permission. Apple declines to comment on whether the app violated its rules.
  • With few exceptions, app users can't "opt out" of phone tracking, as is possible, in limited form, on regular computers. On computers it is also possible to block or delete "cookies," which are tiny tracking files. These techniques generally don't work on cellphone apps.
  • makers of TextPlus 4, Pandora and Grindr say the data they pass on to outside firms isn't linked to an individual's name. Personal details such as age and gender are volunteered by users, they say. The maker of Pumpkin Maker says he didn't know Apple required apps to seek user approval before transmitting location. The maker of Paper Toss didn't respond to requests for comment.
  • Many apps don't offer even a basic form of consumer protection: written privacy policies. Forty-five of the 101 apps didn't provide privacy policies on their websites or inside the apps at the time of testing. Neither Apple nor Google requires app privacy policies.
  • the most widely shared detail was the unique ID number assigned to every phone.
  • On iPhones, this number is the "UDID," or Unique Device Identifier. Android IDs go by other names. These IDs are set by phone makers, carriers or makers of the operating system, and typically can't be blocked or deleted. "The great thing about mobile is you can't clear a UDID like you can a cookie," says Meghan O'Holleran of Traffic Marketplace, an Internet ad network that is expanding into mobile apps. "That's how we track everything."
  • O'Holleran says Traffic Marketplace, a unit of Epic Media Group, monitors smartphone users whenever it can. "We watch what apps you download, how frequently you use them, how much time you spend on them, how deep into the app you go," she says. She says the data is aggregated and not linked to an individual.
  • Apple and Google ad networks let advertisers target groups of users. Both companies say they don't track individuals based on the way they use apps.
  • Apple limits what can be installed on an iPhone by requiring iPhone apps to be offered exclusively through its App Store. Apple reviews those apps for function, offensiveness and other criteria.
  • Apple says iPhone apps "cannot transmit data about a user without obtaining the user's prior permission and providing the user with access to information about how and where the data will be used." Many apps tested by the Journal appeared to violate that rule, by sending a user's location to ad networks, without informing users. Apple declines to discuss how it interprets or enforces the policy.
  • Google doesn't review the apps, which can be downloaded from many vendors. Google says app makers "bear the responsibility for how they handle user information." Google requires Android apps to notify users, before they download the app, of the data sources the app intends to access. Possible sources include the phone's camera, memory, contact list, and more than 100 others. If users don't like what a particular app wants to access, they can choose not to install the app, Google says.
  • Neither Apple nor Google requires apps to ask permission to access some forms of the device ID, or to send it to outsiders. When smartphone users let an app see their location, apps generally don't disclose if they will pass the location to ad companies.
  • Lack of standard practices means different companies treat the same information differently. For example, Apple says that, internally, it treats the iPhone's UDID as "personally identifiable information." That's because, Apple says, it can be combined with other personal details about people—such as names or email addresses—that Apple has via the App Store or its iTunes music services. By contrast, Google and most app makers don't consider device IDs to be identifying information.
  • A growing industry is assembling this data into profiles of cellphone users. Mobclix, the ad exchange, matches more than 25 ad networks with some 15,000 apps seeking advertisers. The Palo Alto, Calif., company collects phone IDs, encodes them (to obscure the number), and assigns them to interest categories based on what apps people download and how much time they spend using an app, among other factors. By tracking a phone's location, Mobclix also makes a "best guess" of where a person lives, says Mr. Gurbuxani, the Mobclix executive. Mobclix then matches that location with spending and demographic data from Nielsen Co. (A generic, hypothetical sketch of such ID encoding appears after this list.)
  • Mobclix can place a user in one of 150 "segments" it offers to advertisers, from "green enthusiasts" to "soccer moms." For example, "die hard gamers" are 15-to-25-year-old males with more than 20 apps on their phones who use an app for more than 20 minutes at a time. Mobclix says its system is powerful, but that its categories are broad enough to not identify individuals. "It's about how you track people better," Mr. Gurbuxani says.
  • four app makers posted privacy policies after being contacted by the Journal, including Rovio Mobile Ltd., the Finnish company behind the popular game Angry Birds (in which birds battle egg-snatching pigs). A spokesman says Rovio had been working on the policy, and the Journal inquiry made it a good time to unveil it.
  • Free and paid versions of Angry Birds were tested on an iPhone. The apps sent the phone's UDID and location to the Chillingo unit of Electronic Arts Inc., which markets the games. Chillingo says it doesn't use the information for advertising and doesn't share it with outsiders.
  • Some developers feel pressure to release more data about people. Max Binshtok, creator of the DailyHoroscope Android app, says ad-network executives encouraged him to transmit users' locations. Mr. Binshtok says he declined because of privacy concerns. But ads targeted by location bring in two to five times as much money as untargeted ads, Mr. Binshtok says. "We are losing a lot of revenue."
  • Apple targets ads to phone users based largely on what it knows about them through its App Store and iTunes music service. The targeting criteria can include the types of songs, videos and apps a person downloads, according to an Apple ad presentation reviewed by the Journal. The presentation named 103 targeting categories, including: karaoke, Christian/gospel music, anime, business news, health apps, games and horror movies. People familiar with iAd say Apple doesn't track what users do inside apps and offers advertisers broad categories of people, not specific individuals. Apple has signaled that it has ideas for targeting people more closely. In a patent application filed this past May, Apple outlined a system for placing and pricing ads based on a person's "web history or search history" and "the contents of a media library." For example, home-improvement advertisers might pay more to reach a person who downloaded do-it-yourself TV shows, the document says.
  • The patent application also lists another possible way to target people with ads: the contents of a friend's media library. How would Apple learn who a cellphone user's friends are, and what kinds of media they prefer? The patent says Apple could tap "known connections on one or more social-networking websites" or "publicly available information or private databases describing purchasing decisions, brand preferences," and other data. In September, Apple introduced a social-networking service within iTunes, called Ping, that lets users share music preferences with friends. Apple declined to comment.
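    The article says Mobclix "encodes" phone IDs to obscure the raw number but does not describe the method. The sketch below is a generic, hypothetical hash-based version, included to show why such encoding still supports persistent tracking: a stable input produces the same token in every session. The salt value and example UDID are made up.

        # Hypothetical sketch of hash-based ID "encoding" (the article does
        # not specify Mobclix's actual method). Hashing hides the raw UDID,
        # but because the UDID never changes, the token never changes either.
        import hashlib

        def encode_device_id(udid: str, salt: str = "ad-network-salt") -> str:
            """Deterministically obscure a device ID (illustrative only)."""
            return hashlib.sha256((salt + udid).encode()).hexdigest()[:16]

        udid = "2b6f0cc904d137be2e1730235f5664094b831186"  # made-up example
        print(encode_device_id(udid))  # same input -> same token, every time

    That stability is exactly the "can't clear a UDID like you can a cookie" property quoted above; unlike a cookie, the user has no way to reset the input.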
Weiye Loh

'There Is No Values-Free Form Of Education,' Says U.S. Philosopher - Radio Fr...

  • from the earliest years, education should be based primarily on exploration, understanding in depth, and the development of logical, critical thinking. Such an emphasis, she says, not only produces a citizenry capable of recognizing and rooting out political jingoism and intolerance. It also produces people capable of questioning authority and perceived wisdom in ways that enhance innovation and economic competitiveness. Nussbaum warns against a narrow educational focus on technical competence.
  • a successful, long-term democracy depends on a citizenry with certain qualities that can be fostered by education.
  • The first is the capacity we associate in the Western tradition with Socrates, but it certainly appears in all traditions -- that is, the ability to think critically about proposals that are brought your way, to analyze an argument, to distinguish a good argument from a bad argument. And just in general, to lead what Socrates called “the examined life.” Now that’s, of course, important because we know that people are very prone to go along with authority, with fashion, with peer pressure. And this kind of critical enlivened citizenry is the only thing that can keep democracy vital.
  • ...15 more annotations...
  • it can be trained from very early in a child’s education. There’re ways that you can get quite young children to recognize what’s a good argument and what’s a bad argument. And as children grow older, it can be done in a more and more sophisticated form until by the time they’re undergraduates in universities they would be studying Plato’s dialogues for example and really looking at those tricky arguments and trying to figure out how to think. And this is important not just for the individual thinking about society, but it’s important for the way people talk to each other. In all too many public discussions people just throw out slogans and they throw out insults. And what democracy needs is listening. And respect. And so when people learn how to analyze an argument, then they look at what the other person’s saying differently. And they try to take it apart, and they think: “Well, do I share some of those views and where do I differ here?” and so on. And this really does produce a much more deliberative, respectful style of public interaction.
  • The second [quality] is what I call “the ability to think as a citizen of the whole world.” We’re all narrow and this is again something that we get from our animal heritage. Most non-human animals just think about the group. But, of course, in this world we need to think, first of all, about our whole nation -- its many different groups, minority and majority. And then we need to think outside the nation, about how problems involving, let’s say, the environment or global economy and so on need cooperative resolution that brings together people from many different nations.
  • That’s complicated and it requires learning a lot of history, and it means learning not just to parrot some facts about history but to think critically about how to assess historical evidence. It means learning how to think about the global economy. And then I think particularly important in this era, it means learning something about the major world religions. Learning complicated, nonstereotypical accounts of those religions because there’s so much fear that’s circulating around in every country that’s based usually on just inadequate stereotypes of what Muslims are or whatever. So knowledge can at least begin to address that.
  • the third thing, which I think goes very closely with the other two, is what I call “the narrative imagination,” which is the ability to put yourself in the shoes of another person to have some understanding of how the world looks from that point of view. And to really have that kind of educated sympathy with the lives of others. Now again this is something we come into the world with. Psychologists have now found that babies less than a year old are able to take up the perspective of another person and do things, see things from that perspective. But it’s very narrow and usually people learn how to think about what their parents are thinking and maybe other family members but we need to extend that and develop it, and learn how the world looks from the point of view of minorities in our own culture, people outside our culture, and so on.
  • since we can’t go to all the places that we need to understand -- it’s accomplished by reading narratives, reading literature, drama, participating through the arts in the thought processes of another culture. So literature and the arts are the major ways we would develop and extend that capacity.
  • For many years, the leading model of development ... used by economists and international agencies measuring welfare was simply that for a country to develop means to increase [its] gross domestic product per capita. Now, in recent years, there has been a backlash to that because people feel that it just doesn’t ask enough about what goods are really doing for people, what can people really do and be.
  • so since the 1990s the United Nations’ development program has produced annually what’s called a “Human Development Report” that looks at things like access to education, access to health care. In other words, a much richer menu of human chances and opportunities that people have. And at the theoretical end I’ve worked for about 20 years now with economist Amartya Sen, who won the Nobel Prize in 1998 for economics. And we’ve developed this account of -- so for us, what it is for a country to do better is to enhance the set of capabilities, meaning substantial opportunities that people have to lead meaningful, fruitful lives. And then I go on to focus on a certain core group of those capabilities that I think ought to be protected by constitutional law in every country.
  • Life; health; bodily integrity; the development of senses, imagination, and thought; the development of practical reason; opportunities to have meaningful affiliations both friendly and political with other people; the ability to have emotional health -- not to be in other words dominated by overwhelming fear and so on; the ability to have a productive relationship with the environment and the world of nature; the ability to play and have leisure time, which is something that I think people don’t think enough about; and then, finally, control over one’s material and social environment, some measure of control. Now of course, each of these is very abstract, and I specify them further. Although I also think that each country needs to finally specify them with its own particular circumstances in view.
  • when kids learn in a classroom that just makes them sit in a chair, well, they can take in something in their heads, but it doesn’t make them competent at negotiating in the world. And so starting, at least, with Jean Jacques Rousseau in the 18th century, people thought: “Well, if we really want people to be independent citizens in a democracy that means that we can’t have whole classes of people who don’t know how to do anything, who are just simply sitting there waiting to be waited on in practical matters.” And so the idea that children should participate in their practical environment came out of the initial democratizing tendencies that went running through the 18th century.
  • even countries that absolutely do not want that kind of engaged citizenry see that these abilities are pretty important for the success of business. Both Singapore and China have conducted mass education reforms over the last five years because they realized that their business cultures don’t have enough imagination and don’t have enough critical thinking: you can have an awfully corrupt business culture if no one is willing to say the unpleasant word or make a criticism.
  • So they have striven to introduce more critical thinking and more imagination into their curricula. But, of course, for them, they want to cordon it off -- they want to do it in the science classroom, in the business classroom, but not in the politics classroom. Well, we’ll see -- can they do that? Can they segment it that way? I think democratic thinking is awfully hard to segment as current events in the Middle East are showing us. It does have the tendency to spread.
  • so maybe the people in Singapore and China will not like the end result of what they tried to do, or maybe the reform -- I mean the educational reform -- will just fail, which is equally likely.
  • if you really don’t want democracy, this is not the education for you. It had its origins in the ancient Athenian democracy which was a very, very strong participatory democracy and it is most at home in really true democracy, where our whole goal is to get each and every person involved and to get them thinking about things. So, of course, if politicians have ambivalence about that goal they may well not want this kind of education.
  • when we bring up children in the family or in the school, we are always engineering. I mean, there is no values-free form of education in the world. Even an education that just teaches you a list of facts has values built into it. Namely, it gives a negative value to imagination and to the critical faculties and a very high value to a kind of rote, technical competence. So, you can't avoid shaping children.
  • increasingly the child should be in control and should become free. And that's what the critical thinking is all about -- it's about promoting freedom as the child goes on. So, the end product should be an adult who is really thinking for him- or herself about the direction of society. But you don't get freedom just by saying, "Oh, you are free." Progressive educators who simply stopped teaching found out very quickly that that didn't produce freedom. Even in some of the very extreme forms of progressive school, where children were just allowed to say every day what it was they wanted to learn, they found that didn't give the child the kind of mastery of self and of the world that you really need to be a free person.
Weiye Loh

How I Created My First Membership Site [INFOGRAPHIC] - 0 views

  •  
    I launched my Infographic Academy membership site on Monday, and I thought you guys would like to know how I went about creating it. And since it's all about how to create infographics, what better way to show you how I did it than with an infographic?
Weiye Loh

Royal Society launches study on openness in science | Royal Society - 0 views

  • Science as a public enterprise: opening up scientific information will look at how scientific information should best be managed to improve the quality of research and build public trust.
  • “Science has always been about open debate. But incidents such as the UEA email leaks have prompted the Royal Society to look at how open science really is.  With the advent of the Internet, the public now expect a greater degree of transparency. The impact of science on people’s lives, and the implications of scientific assessments for society and the economy are now so great that  people won’t just believe scientists when they say “trust me, I’m an expert.” It is not just scientists who want to be able to see inside scientific datasets, to see how robust they are and ask difficult questions about their implications. Science has to adapt.”
  • The study will look at questions such as: What are the benefits and risks of openly sharing scientific data? How does the rise of the blogosphere change scientific research? What responsibility should scientists, their institutions and the funders of research have for open data? How do we make information more accessible and who will pay to do it? Should privately funded scientists be held to the same standards as those who are publicly funded? How do we balance openness against intellectual property rights and, in the case of medical information, how do we protect patient confidentiality? Will the same rules apply to scientists across the world?
  • ...1 more annotation...
  • “Different scientific disciplines share their information very differently.  The human genome project was incredibly open in how data were shared. But in biomedical science you also have drug trials conducted where no results are made public.” 
Weiye Loh

On the Media: Survey shows that not all polls are equal - latimes.com - 0 views

  • Internet surveys sometimes acknowledge how unscientific (read: meaningless) they really are. They surely must be a pale imitation of the rigorous, carefully sampled, thoroughly transparent polls favored by political savants and mainstream news organizations
  • The line between junk and credible polling remains. But it became a little blurrier — creating concern among professional survey organizations and reason for greater skepticism by all of us — because of charges this week that one widely cited pollster may have fabricated data or manipulated it so seriously as to render it meaningless.
  • the founder of the left-leaning Daily Kos website filed a lawsuit in federal court in Oakland on Wednesday, charging that Research 2000, the organization he had commissioned for 1 1/2 years to test voter opinion, had doctored its results.
  • ...4 more annotations...
  • The firm's protestations that it did nothing wrong have been loud and repeated. Evidence against the company is somewhat arcane. Suffice it to say that independent statisticians have found a bewildering lack of statistical "noise" in the company's data. Where random variation would be expected, results are too consistent.
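A minimal sketch of the kind of check those statisticians describe, under assumed numbers (the function, sample size, and vote shares below are illustrative, not Research 2000's actual data): honest polls of the same electorate should bounce around the true value with binomial sampling noise, so a published series that barely moves is itself evidence of trouble.

```python
import random

# Sketch, not the analysts' actual test: simulate repeated honest polls of
# n respondents from a population in which 50% favor a candidate.
def simulate_polls(true_share=0.50, n=600, trials=20, seed=1):
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        favorable = sum(rng.random() < true_share for _ in range(n))
        results.append(100.0 * favorable / n)
    return results

polls = simulate_polls()
print([round(p, 1) for p in polls])
# With n=600 the standard error is about 2 points, so honest results should
# span several points from poll to poll; a series that never varies is a red flag.
print(f"range: {max(polls) - min(polls):.1f} points")
```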
  • Most reputable pollsters agree on one thing — polling organizations should publicly disclose as much of their methodology as possible. Just for starters, they should reveal how many people were interviewed, how they were selected, how many rejected the survey, how "likely voters" and other sub-groups were defined and how the raw data was weighted to reflect the population, or subgroups.
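As a concrete illustration of the weighting step in that list -- a hypothetical sketch with invented census and sample shares, not any pollster's actual procedure -- respondents from under-represented groups are counted more heavily so the sample matches the population:

```python
# Hypothetical post-stratification weighting: all shares below are invented.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # assumed census shares
sample_share     = {"18-34": 0.15, "35-64": 0.55, "65+": 0.30}  # raw poll composition

# Each respondent's answers count weights[group] times in the weighted tally.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'18-34': 2.0, '35-64': 0.909..., '65+': 0.666...}
```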
  • Michael Cornfield, a George Washington University political scientist and polling expert, recommends that concerned citizens ignore the lone, sometimes sensational, poll result. "Trend data are superior to a single point in time," Cornfield said via e-mail, "and consensus results from multiple firms are superior to those conducted by a single outfit."
  • The rest of us should look at none of the polls. Or look at all of them. And look out for the operators not willing to tell us how they're doing business.
  •  
    On the Media: Survey shows not all polls equal
Weiye Loh

Do avatars have digital rights? - 20 views

hi weiye, i agree with you that this brings in the topic of representation. maybe you should try taking media and representation by Dr. Ingrid to discuss more on this. Going back to your questio...

avatars

Weiye Loh

Skepticblog » Further Thoughts on the Ethics of Skepticism - 0 views

  • My recent post “The War Over ‘Nice’” (describing the blogosphere’s reaction to Phil Plait’s “Don’t Be a Dick” speech) has topped out at more than 200 comments.
  • Many readers appear to object (some strenuously) to the very ideas of discussing best practices, seeking evidence of efficacy for skeptical outreach, matching strategies to goals, or encouraging some methods over others. Some seem to express anger that a discussion of best practices would be attempted at all. 
  • No Right or Wrong Way? The milder forms of these objections run along these lines: “Everyone should do their own thing.” “Skepticism needs all kinds of approaches.” “There’s no right or wrong way to do skepticism.” “Why are we wasting time on these abstract meta-conversations?”
  • ...12 more annotations...
  • More critical, in my opinion, is the implication that skeptical research and communication happens in an ethical vacuum. That just isn’t true. Indeed, it is dangerous for a field which promotes and attacks medical treatments, accuses people of crimes, opines about law enforcement practices, offers consumer advice, and undertakes educational projects to pretend that it is free from ethical implications — or obligations.
  • there is no monolithic “one true way to do skepticism.” No, the skeptical world does not break down to nice skeptics who get everything right, and mean skeptics who get everything wrong. (I’m reminded of a quote: “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”) No one has all the answers. Certainly I don’t, and neither does Phil Plait. Nor has anyone actually proposed a uniform, lockstep approach to skepticism. (No one has any ability to enforce such a thing, in any event.)
  • However, none of that implies that all approaches to skepticism are equally valid, useful, or good. As in other fields, various skeptical practices do more or less good, cause greater or lesser harm, or generate various combinations of both at the same time. For that reason, skeptics should strive to find ways to talk seriously about the practices and the ethics of our field. Skepticism has blossomed into something that touches a lot of lives — and yet it is an emerging field, only starting to come into its potential. We need to be able to talk about that potential, and about the pitfalls too.
  • All of the fields from which skepticism borrows (such as medicine, education, psychology, journalism, history, and even arts like stage magic and graphic design) have their own standards of professional ethics. In some cases those ethics are well-explored professional fields in their own right (consider medical ethics, a field with its own academic journals and doctoral programs). In other cases those ethical guidelines are contested, informal, vague, or honored more in the breach. But in every case, there are serious conversations about the ethical implications of professional practice, because those practices impact people’s lives. Why would skepticism be any different?
  • Skeptrack speaker Barbara Drescher (a cognitive psychologist who teaches research methodology) described the complexity of research ethics in her own field. Imagine, she said, that a psychologist were to ask research subjects a question like, “Do your parents like the color red?” Asking this may seem trivial and harmless, but it is nonetheless an ethical trade-off with associated risks (however small) that psychological researchers are ethically obliged to confront. What harm might that question cause if a research subject suffers from erythrophobia, or has a sick parent — or saw their parents stabbed to death?
  • When skeptics undertake scientific, historical, or journalistic research, we should (I argue) consider ourselves bound by some sort of research ethics. For now, we’ll ignore the deeper, detailed question of what exactly that looks like in practical terms (when can skeptics go undercover or lie to get information? how much research does due diligence require? and so on). I’d ask only that we agree on the principle that skeptical research is not an ethical free-for-all.
  • when skeptics communicate with the public, we take on further ethical responsibilities — as do doctors, journalists, and teachers. We all accept that doctors are obliged to follow some sort of ethical code, not only of due diligence and standard of care, but also in their confidentiality, manner, and the factual information they disclose to patients. A sentence that communicates a diagnosis, prescription, or piece of medical advice (“you have cancer” or “undertake this treatment”) is not a contextless statement, but a weighty, risky, ethically serious undertaking that affects people’s lives. It matters what doctors say, and it matters how they say it.
  • Grassroots Ethics It happens that skepticism is my professional field. It’s natural that I should feel bound by the central concerns of that field. How can we gain reliable knowledge about weird things? How can we communicate that knowledge effectively? And, how can we pursue that practice ethically?
  • At the same time, most active skeptics are not professionals. To what extent should grassroots skeptics feel obligated to consider the ethics of skeptical activism? Consider my own status as a medical amateur. I almost need super-caps-lock to explain how much I am not a doctor. My medical training began and ended with a couple First Aid courses (and those way back in the day). But during those short courses, the instructors drummed into us the ethical considerations of our minimal training. When are we obligated to perform first aid? When are we ethically barred from giving aid? What if the injured party is unconscious or delirious? What if we accidentally kill or injure someone in our effort to give aid? Should we risk exposure to blood-borne illnesses? And so on. In a medical context, ethics are determined less by professional status, and more by the harm we can cause or prevent by our actions.
  • police officers are barred from perjury, and journalists from libel — and so are the lay public. We expect schoolteachers not to discuss age-inappropriate topics with our young children, or to persuade our children to adopt their religion; when we babysit for a neighbor, we consider ourselves bound by similar rules. I would argue that grassroots skeptics take on an ethical burden as soon as they speak out on medical matters, legal matters, or other matters of fact, whether from platforms as large as network television, or as small as a dinner party. The size of that burden must depend somewhat on the scale of the risks: the number of people reached, the certainty expressed, the topics tackled.
  • tu-quoque argument.
  • How much time are skeptics going to waste, arguing in a circular firing squad about each other’s free speech? Like it or not, there will always be confrontational people. You aren’t going to get a group of people as varied as skeptics are, and make them all agree to “be nice”. It’s a pipe dream, and a waste of time.
  •  
    FURTHER THOUGHTS ON THE ETHICS OF SKEPTICISM
Weiye Loh

To Die of Having Lived: an article by Richard Rapport | The American Scholar - 0 views

  • Although it may be a form of arrogance to attempt the management of one’s own death, is it better to surrender that management to the arrogance of someone else? We know we can’t avoid dying, but perhaps we can avoid dying badly.
  • Dodging a bad death has become more complicated over the past 30 or 40 years. Before the advent of technological creations that permit vital functions to be sustained so well artificially, medical ethics were less obstructed by abstract definitions of death.
  • generally agreed upon criteria for brain death have simplified some of these confusions, but they have not solved them. The broad middle ground between our usual health and consciousness as the expected norm on the one hand, and clear death of the brain on the other, lacks certainty.
    • Weiye Loh
       
      Isn't it always the case? Dichotomous relationships aren't clearly and equally demarcated, but somehow we attempt to split them up through polemical discourses and rhetoric...
  • ...13 more annotations...
  • Doctors and other health-care workers can provide patients and families with probabilities for improvement or recovery, but statistics are hardly what is wanted. Even after profound injury or the diagnosis of an illness that statistically is nearly certain to be fatal, what people hear is the word nearly. How do we not allow the death of someone who might be saved? How do we avoid the equally intolerable salvation of a clinically dead person?
    • Weiye Loh
       
      In what situations do we hear the word "nearly" and in what situations do we hear the word "certain"? When we're dealing with a person's life, we hear "nearly", but when we're dealing with climate science we hear "certain"? 
  • Injecting political agendas into these end-of-life complexities only confuses the problem without providing a solution.
  • The questions are how, when, and on whose terms we depart. It is curious that people might be convinced to avoid confronting death while they are healthy, and that society tolerates ad hominem arguments that obstruct rational debate over an authentic problem of ethics in an uncertain world.
  • Any seriously ill older person who winds up in a modern CCU immediately yields his autonomy. Even if the doctors, nurses, and staff caring for him are intelligent, properly educated, humanistically motivated, and correct in the diagnosis, they are manipulated not only by the tyranny of technology but also by the rules established in their hospital. In addition, regulations of local and state licensing agencies and the federal government dictate the parameters of what the hospital workers do and how they do it, and every action taken is heavily influenced by legal experts committed to their client’s best interest—values frequently different from the patient’s. Once an acutely ill patient finds himself in this situation, everything possible will be done to save him; he is in no position to offer an opinion.
  • Eventually, after hours or days (depending on the illness and who is involved in the care), the wisdom of continuing treatment may come into question. But by then the patient will likely have been intubated and placed on a ventilator, a feeding tube may have been inserted, a catheter placed in the bladder, IVs started in peripheral veins or threaded through a major blood vessel near the heart, and monitors attached to record an EKG, arterial blood pressure, temperature, respirations, oxygen saturation, even pressure inside the skull. Sequential pressure devices will have been wrapped around the legs. All the digital marvels have alarms, so if one isn’t working properly, an annoying beep, like the sound of a backing truck, will fill the patient’s room. Vigilant nurses will add drugs by the dozens to the IV or push them into ports. Families will hover uncertainly. Meanwhile, tens and perhaps hundreds of thousands of dollars will have been transferred from one large corporation—an insurer of some kind—to another large corporation—a health care delivery system of some kind.
    • Weiye Loh
       
      Perhaps, then, the value of a life is not so much the life itself, but the transactions it generates. 
  • While the expense of the drugs, manpower, and technology required to make a diagnosis and deliver therapy does sop up resources and thereby deny treatment that might be more fruitful for others, including the 46.3 million Americans who, according to the Census Bureau, have no health insurance, that isn’t the real dilemma of the critical care unit.
  • the problem isn’t getting into or out of a CCU; the predicament is in knowing who should be there in the first place.
  • Before we become ill, we tend to assume that everything can be treated and treated successfully. The prelate in Willa Cather’s Death Comes for the Archbishop was wiser. Approaching the end, he said to a younger priest, “I shall not die of a cold, my son. I shall die of having lived.”
  • the best way to avoid unwanted admission to a critical care unit at or near the end of life is to write an advance directive (a living will or durable power of attorney for health care) while still healthy.
  • yet not many people do this and, more regrettably, the document is often not included in the patient’s chart or goes unnoticed.
  • Since we are sure to die of having lived, we should prepare for death before the last minute. Entire corporations are dedicated to teaching people how to retire well. All of their written materials, Web sites, and seminars begin with the same advice: start planning early. Shouldn’t we at least occasionally think about how we want to leave our lives?
  • Flannery O’Connor, who died young of systemic lupus, wrote, “Sickness before death is a very appropriate thing and I think those who don’t have it miss one of God’s mercies.”
  • Because we understand the metaphor of conflict so well, we are easily sold on the idea that we must resolutely fight against our afflictions (although there was once an article in The Onion titled “Man Loses Cowardly Battle With Cancer”). And there is a place to contest an abnormal metabolism, a mutation, a trauma, or an infection. But there is also a place to surrender. When the organs have failed, when the mind has dissolved, when the body that has faithfully housed us for our lifetime has abandoned us, what’s wrong with giving up?
  •  
    Spring 2010 To Die of Having Lived A neurological surgeon reflects on what patients and their families should and should not do when the end draws near
Weiye Loh

Happiness: Do we have a choice? » Scienceline - 0 views

  • “Objective choices make a difference to happiness over and above genetics and personality,” said Bruce Headey, a psychologist at Melbourne University in Australia. Headey and his colleagues analyzed annual self-reports of life satisfaction from over 20,000 Germans who have been interviewed every year since 1984. He compared five-year averages of people’s reported life satisfaction, and plotted their relative happiness on a percentile scale from 1 to 100. Headey found that as time went on, more and more people recorded substantial changes in their life satisfaction. By 2008, more than a third had moved up or down on the happiness scale by at least 25 percent, compared to where they had started in 1984.
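A rough sketch of that comparison on simulated data (not Headey's code or the German panel data; the distributions below are invented) makes the percentile-movement measure concrete:

```python
import random

# Percentile-rank people on early and late five-year satisfaction averages,
# then count how many moved at least 25 points. All data here are simulated.
def percentile_ranks(values):
    order = sorted(values)
    n = len(values)
    return [100.0 * order.index(v) / (n - 1) for v in values]

rng = random.Random(0)
n = 1000
early = [rng.gauss(7.0, 1.5) for _ in range(n)]    # 1984-88 averages (invented)
late  = [e + rng.gauss(0.0, 1.2) for e in early]   # 2004-08 averages (invented)

moved = sum(abs(a - b) >= 25
            for a, b in zip(percentile_ranks(early), percentile_ranks(late)))
print(f"{100.0 * moved / n:.0f}% moved 25+ percentile points")
```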
  • Headey’s findings, published in the October 19th issue of Proceedings of the National Academy of Sciences, run contrary to what is known as the happiness set-point theory — the idea that even if you win the lottery or become a paraplegic, you’ll revert back to the same fixed level of happiness within a year or two. This psychological theory was widely accepted in the 1990s because it explained why happiness levels seemed to remain stable over the long term: They were mainly determined early in life by genetic factors including personality traits.
  • But even this dynamic choice-driven picture does not fully capture the nuance of what it means to be happy, said Jerome Kagan, a Harvard University developmental psychologist. He warns against conflating two distinct dimensions of happiness: everyday emotional experience (an assessment of how you feel at the moment) and life evaluation (a judgment of how satisfied you are with your life). It’s the difference between “how often did you smile yesterday?” and “how does your life compare to the best possible life you can imagine?”
  • ...4 more annotations...
  • Kagan suggests that we may have more choice over the latter, because life evaluation is not a function of how we currently feel — it is a comparison of our life to what we decide the good life should be.
  • Kagan has found that young children differ biologically in the ease with which they can feel happy, or tense, or distressed, or sad — what he calls temperament. People establish temperament early in life and have little capacity to change it. But they can change their life evaluation, which Kagan describes as an ethical concept synonymous with “how good of a life have I led?” The answer will depend on individual choices and the purpose they create for themselves. A painter who is constantly stressed and moody (unhappy in the moment) may still feel validation in creating good artwork and may be very satisfied with his life (happy as a judgment).
  • when it comes to happiness, our choices may matter — but it depends on what the choices are about, and how we define what we want to change.
  • Graham thinks that people may evaluate their happiness based on whichever dimension — happiness at the moment, or life evaluation — they have a choice over.
  •  
    Rather than a stable equilibrium, Headey suggests that happiness is much more dynamic, and that individual choices - about one's partner, working hours, social participation and lifestyle - make substantial and permanent changes to reported happiness levels. For example, doing more or fewer paid hours of work than you want, or exercising regularly, can have just as much impact on life satisfaction as having an extroverted personality.
Weiye Loh

Rationally Speaking: The sorry state of higher education - 0 views

  • two disconcerting articles crossed my computer screen, both highlighting the increasingly sorry state of higher education, though from very different perspectives. The first is “Ed Dante’s” (actually a pseudonym) piece in the Chronicle of Higher Education, entitled The Shadow Scholar. The second is Gregory Petsko’s A Faustian Bargain, published of all places in Genome Biology.
  • There is much to be learned by educators in the Shadow Scholar piece, except the moral that “Dante” would like us to take from it. The anonymous author writes:“Pointing the finger at me is too easy. Why does my business thrive? Why do so many students prefer to cheat rather than do their own work? Say what you want about me, but I am not the reason your students cheat.
  • The point is that plagiarism and cheating happen for a variety of reasons, one of which is the existence of people like Mr. Dante and his company, who set up a business that is clearly unethical and should be illegal. So, pointing fingers at him and his ilk is perfectly reasonable. Yes, there obviously is a “market” for cheating in higher education, and there are complex reasons for it, but he is in a position similar to that of the drug dealer who insists that he is simply providing the commodity to satisfy society’s demand. Much too easy of a way out, and one that doesn’t fly in the case of drug dealers, and shouldn’t fly in the case of ghost cheaters.
  • ...16 more annotations...
  • As a teacher at the City University of New York, I am constantly aware of the possibility that my students might cheat on their tests. I do take some elementary precautionary steps
  • Still, my job is not that of the policeman. My students are adults who theoretically are there to learn. If they don’t value that learning and prefer to pay someone else to fake it, so be it, ultimately it is they who lose in the most fundamental sense of the term. Just like drug addicts, to return to my earlier metaphor. And just as in that other case, it is enablers like Mr. Dante who simply can’t duck the moral blame.
  • an open letter to the president of SUNY-Albany, penned by molecular biologist Gregory Petsko. The SUNY-Albany president has recently announced the closing — for budgetary reasons — of the departments of French, Italian, Classics, Russian and Theater Arts at his university.
  • Petsko begins by taking on one of the alleged reasons why SUNY-Albany is slashing the humanities: low enrollment. He correctly points out that the problem can be solved overnight at the stroke of a pen: stop abdicating your responsibilities as educators and actually put constraints on what your students have to take in order to graduate. Make courses in English literature, foreign languages, philosophy and critical thinking, the arts and so on, mandatory or one of a small number of options that the students must consider in order to graduate.
  • But, you might say, that’s cheating the market! Students clearly don’t want to take those courses, and a business should cater to its customers. That type of reasoning is among the most pernicious and idiotic I’ve ever heard. Students are not clients (if anything, their parents, who usually pay the tuition, are), they are not shopping for a new bag or pair of shoes. They do not know what is best for them educationally, that’s why they go to college to begin with. If you are not convinced about how absurd the students-as-clients argument is, consider an analogy: does anyone with functioning brain cells argue that since patients in a hospital pay a bill, they should be dictating how the brain surgeon operates? I didn’t think so.
  • Petsko then tackles the second lame excuse given by the president of SUNY-Albany (and common among the upper administration of plenty of public universities): I can’t do otherwise because of the legislature’s draconian cuts. Except that university budgets are simply too complicated for there not to be any other option. I know this first hand, I’m on a special committee at my own college looking at how to creatively deal with budget cuts handed down to us from the very same (admittedly small minded and dysfunctional) New York state legislature that has prompted SUNY-Albany’s action. As Petsko points out, the president there didn’t even think of involving the faculty and staff in a broad discussion of how to deal with the crisis, he simply announced the cuts on a Friday afternoon and then ran for cover. An example of very poor leadership to say the least, and downright hypocrisy considering all the talk that the same administrator has been dishing out about the university “community.”
  • Finally, there is the argument that the humanities don’t pay for their own way, unlike (some of) the sciences (some of the time). That is indubitably true, but irrelevant. Universities are not businesses, they are places of higher learning. Yes, of course they need to deal with budgets, fund raising and all the rest. But the financial and administrative side has one goal and one goal only: to provide the best education to the students who attend that university.
  • That education simply must include the sciences, philosophy, literature, and the arts, as well as more technical or pragmatic offerings such as medicine, business and law. Why? Because that’s the kind of liberal education that makes for an informed and intelligent citizenry, without which our democracy is but empty talk, and our lives nothing but slavery to the marketplace.
  • Maybe this is not how education works in the US. I thought that general (or compulsory) education (i.e., up to high school) is designed to make sure that citizens in a democratic country can perform their civil duties. A balanced and well-rounded education, which includes a healthy mixture of science and humanities, is indeed very important for this purpose. However, college-level education is for personal growth, and therefore the person must have a large say about what kind of classes he or she chooses to take. I am disturbed by Massimo's hospital analogy. Students are not ill. They don't go to college to be cured, or to be good citizens. They go to college to learn things that *they* want to learn. Patients are passive. Students are not. I agree that students typically do not know what kind of education is good for them. But who does?
  • students do have a say in their education. They pick their major, and there are electives. But I object to the idea that they can customize their major any way they want. That assumes they know what the best education for them is; they don't. That's the point of education.
  • The students are in your class to get a good grade, any learning that takes place is purely incidental. Those good grades will look good on their transcript and might convince a future employer that they are smart and thus are worth paying more.
  • I don't know what the dollar to GPA exchange rate is these days, but I don't doubt that there is one.
  • Just how many of your students do you think will remember the extensive complex jargon of philosophy more than a couple of months after they leave your classroom?
  • "and our lives nothing but slavery to the marketplace." We are there. Welcome. Where have you been all this time? In a capitalistic/plutocratic society, money is power (and free speech too, according to the Supreme Court). Money means a larger/better house/car/clothing/vacation than your neighbor and consequently better mating opportunities. You can mostly blame the women for that one, I think, just like the peacock's tail.
  • If a student of surgery fails to learn, they might maim, kill or cripple someone. If an engineer of airplanes fails to learn, they might design a faulty aircraft that fails and kills people. If a student of chemistry fails to learn, they might design a faulty drug with unintended and unfortunate side effects. But what exactly would be the harm if a student of philosophy fails to learn what Aristotle had to say about elements or what Plato had to say about perfect forms? These things are so divorced from people's everyday activities as to be rendered all but meaningless.
  • human knowledge grows by leaps and bounds every day, but human brain capacity does not, so the portion of human knowledge you can personally hold gets smaller by the minute. Learn (and remember) as much as you can as fast as you can and you will still lose ground. You certainly have your work cut out for you emphasizing the importance of Thales in the Age of Twitter and whatever follows it next year.
Weiye Loh

Rethinking the gene » Scienceline - 0 views

  • Currently, the public views genes primarily as self-contained packets of information that come from parents and are distinct from the environment. “The popular notion of the gene is an attractive idea—it’s so magical,” said Mark Blumberg, a developmental biologist at the University of Iowa in Iowa City. But it ignores the growing scientific understanding of how genes and local environments interplay, he said.
  • With the rise of molecular biology in the 1930s and genomics (the study of entire genomes) in the 1970s, scientists have developed a much more dynamic and complex picture of this interplay. The simplistic notion of the gene has been replaced with gene-environment interactions and developmental influences—nature and nurture as completely intertwined.
  • But the public hasn’t quite kept up. There remains a “huge chasm” between the way scientists understand genetics and the way the public understands it, said David Shenk, an author who has written extensively on genetics and intelligence.
  • ...8 more annotations...
  • the public still thinks of genes as blueprints, providing precise instructions for each individual.
  • “The elegant simplicity of the idea is so powerful,” said Shenk. Unfortunately, it is also false. The blueprint metaphor is fundamentally deceptive, he said, and “leads people to believe that any difference they see can be tied back to specific genes.”
  • Instead, Shenk advocates the metaphor of a giant mixing board, in which genes are a multitude of knobs and switches that get turned on and off depending on various factors in their environment. Interaction is key, though it goes against how most people see genetics: the classic, but inaccurate, dichotomies of nature versus nurture, innate versus acquired and genes versus environment.
  • Belief in those dichotomies is hard to eliminate because people tend to understand genetics through the two separate “tracks” of genes and the environment, according to speech communication expert Celeste Condit from the University of Georgia in Athens. Condit suggests that, whenever possible, explanations of genetics—by scientists, authors, journalists, or doctors—should draw connections between the two tracks, effectively merging them into one. “We need to link up the gene and environment tracks,” she said, “so that [people] can’t think of one without thinking of the other.”
  • Part of what makes these concepts so difficult lies in the language of genetics itself. A recent study by Condit in the September issue of Clinical Genetics found that when people hear the word genetics, they primarily think of heredity, or the quality of being heritable (passed from one generation to the next). Unfortunately, the terms heredity and heritable are often confused with heritability, which has a very different meaning.
  • heritability has single-handedly muddled the discourse of genetics to such a degree that even experts can’t keep it straight, argues historian of science Evelyn Fox Keller at the Massachusetts Institute of Technology in her recent book, The Mirage of a Space Between Nature and Nurture. Keller describes how heritability (in the technical literature) refers to how much of the variation in a trait is due to genetic explanation. But the term has seeped out into the general public and is, understandably, taken to mean heritable, or ability to be inherited. These concepts are fundamentally different, but often hard to grasp.
  • For example, let’s say that in a population with people of different heights, 60 percent of the variation in height is attributable to genes (as opposed to nutrition). The heritability of height is 60 percent. This does not mean, however, that 60 percent of an individual’s height comes from her genes, and 40 percent from what she ate growing up. Heritability refers to causes of variations (between people), not to causes of traits themselves (in each particular individual). The conflation of crucially different terms like traits and variations has wreaked havoc on the public understanding of genetics.
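The same point in runnable form -- a sketch with invented numbers, not real height data -- shows that heritability is a ratio of variances across a population and says nothing about any single person's height:

```python
import random

# Simulate heights as 170 cm plus independent genetic and environmental
# contributions, chosen so that genes explain roughly 60% of the variance.
rng = random.Random(42)
n = 10_000
genetic = [rng.gauss(0, 6.0) for _ in range(n)]   # sd 6.0 -> variance 36
environ = [rng.gauss(0, 4.9) for _ in range(n)]   # sd 4.9 -> variance ~24
height  = [170 + g + e for g, e in zip(genetic, environ)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

h2 = var(genetic) / var(height)
print(f"heritability of height here: {h2:.2f}")  # ~0.60
# It would be meaningless to call any individual's 175 cm "60% genetic":
# the 0.60 describes variation between people, not the makeup of one trait.
```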
  • The stakes are high. Condit emphasizes how important a solid understanding of genetics is for making health decisions. Whether people see diabetes or lung cancer as determined by family history or responsive to changes in behavior depends greatly on how they understand genetics. Policy decisions about education, childcare, or the workplace are all informed by a proper understanding of the dynamic interplay of genes and the environment, and this means looking beyond the Mendelian lens of heredity. According to Shenk, everyone in the business of communicating these issues “needs to bend over backwards to help people understand.”
Weiye Loh

How Is Twitter Impacting Search and SEO? Here's the (Visual) Proof | MackCollier.com - ... - 0 views

  • I picked a fairly specific term: “Social Media Crisis Management”. I checked prior to publishing yesterday’s post, and there were just a shade under 29,000 Google results for that term. This is important because you need to pick as specific a term as possible: that means less competition, and (if you’ve picked the right term for you) you will be more likely to get the ‘right’ kind of traffic.
  • Second, I made sure the term was in the title and mentioned a couple of times in the post. I also put the term “Social Media Crisis Management” at the front of the post title; I originally had the title as “A No-Nonsense Guide to Social Media Crisis Management”, but Amy wisely suggested that I flip it so the term I was targeting was at the front of the title.
  • when I published the post yesterday at 12:20pm, there were 28,900 Google results for the term “Social Media Crisis Management”. I tweeted a link to it at that time. Fifty minutes later at 1:10pm, the post was already showing up on the 3rd page for a Google search of “Social Media Crisis Management”:
  • ...5 more annotations...
  • I tweeted out another link to the post around 2pm, and then at 2:30pm, it moved a bit further up the results on the 3rd page:
  • The Latest results factors in real-time linking behavior, so it is picking up all the tweets where my post was being RTed, and as a result, the top half of the Latest results for the term “Social Media Crisis Management” were completely devoted to MY post.
  • That’s a perfect example of how Twitter and Facebook sharing is now impacting Google results.  And it’s also a wonderful illustration of the value of being active on Twitter.  I tweeted a link to that post several times yesterday and this morning, which was a big reason why it moved up the Google results so quickly, and a big reason why it dominated the Latest results for that term.
  • there are two things I want you to take away from this: 1 – This was very basic SEO stuff that any of you can do.  It was simply a case of targeting a specific phrase, and inserting it in the post.  Now as far as my having a large and engaged Twitter network and readership here (thanks guys!), that definitely played a big factor in the post moving up the results so quickly.  But at a basic level, everything I did from a SEO perspective is what you can do with every post.  And you should.
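A minimal sketch of those two on-page checks (the function and thresholds below are illustrative, not a tool the post mentions): is the target phrase at the front of the title, and is it mentioned in the body a couple of times?

```python
# Check the two basic on-page steps described in the post.
def check_on_page_seo(title: str, body: str, phrase: str, min_mentions: int = 2):
    p = phrase.lower()
    mentions = body.lower().count(p)
    return {
        "phrase_leads_title": title.lower().startswith(p),
        "mentions_in_body": mentions,
        "enough_mentions": mentions >= min_mentions,
    }

title = "Social Media Crisis Management: A No-Nonsense Guide"
body = ("Social media crisis management starts before the crisis... "
        "A social media crisis management plan names who responds, and where.")
print(check_on_page_seo(title, body, "Social Media Crisis Management"))
```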
  • 2 – You can best learn by breaking stuff.  There are a gazillion ‘How to’ and ’10 Steps to…’ articles about using social media, and I have certainly written my fair share of these.  But the best way *I* learn is if you can show me the first 1 or 2 steps, then leave me alone and let me figure out the remaining 8 or 9 steps for myself.  Don’t just blindly follow my social media advice or anyone else’s.  Use the advice as a guide for how you can get started.  But there is no one RIGHT way to use social media.  Never forget that.  I can tell you what works for me and my clients, but you still need to tweak any advice so that it is perfect for you.  SEO geeks will no doubt see a ton of things that I could have done or altered in this experiment to get even better results.  And moving forward, I am going to continue to tweak and ‘break stuff’ in order to better figure out how all the moving parts work together.
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books - 0 views

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • ...27 more annotations...
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
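In the spirit of Chappe's code books -- a toy sketch with invented entries and numbering, not his actual 8,000-item tables -- a common phrase collapses to just two symbols:

```python
# Toy codebook: map whole phrases to a (page, line) pair, i.e. two signals.
codebook = {
    "enemy ships sighted": (17, 4),
    "send reinforcements": (92, 11),
    "await further orders": (3, 56),
}
reverse = {v: k for k, v in codebook.items()}

def encode(message: str):
    return codebook.get(message.lower())   # two symbols instead of ~20 letters

def decode(symbols) -> str:
    return reverse[symbols]

signals = encode("Send reinforcements")
print(signals)          # (92, 11)
print(decode(signals))  # 'send reinforcements'
```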
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technolog
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
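A fragment of the later, standardized International Morse alphabet shows the scheme (deliberately partial; Morse's original 1844 American code differed in some details):

```python
# Dot = short pulse, dash = long pulse; frequent letters get the shortest codes.
MORSE = {
    "E": ".", "T": "-",
    "A": ".-", "N": "-.",
    "I": "..", "M": "--",
    "S": "...", "O": "---",
}

def to_morse(text: str) -> str:
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(to_morse("SOS"))   # ... --- ...
print(to_morse("NOTE"))  # -. --- - .
```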
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
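Shannon's abstract quantity can be computed in a few lines -- a sketch of the standard entropy formula, H = -Σ p·log₂ p, which depends only on symbol probabilities, never on what the symbols mean:

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a source with the given symbol probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))   # ~0.47 bits: a biased, more predictable source
print(entropy([0.25] * 4))   # 2.0 bits: four equally likely symbols
```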
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
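The arithmetic checks out: doubling every eighteen months compounds to roughly a hundredfold per decade and about a billionfold over forty-five years.

```python
def moore_factor(years: float, doubling_months: float = 18.0) -> float:
    """Growth factor implied by doubling every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

print(f"per decade:    {moore_factor(10):.0f}x")   # ~102x
print(f"over 45 years: {moore_factor(45):.2e}x")   # ~1.07e+09, nine powers of ten
```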
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
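Shannon's estimate translates directly into modern units, which is why a single drive can now hold it:

```python
bits = 100e12                  # Shannon's estimate: one hundred trillion bits
terabytes = bits / 8 / 1e12    # 8 bits per byte, 1e12 bytes per terabyte
print(f"{terabytes} TB")       # 12.5 TB -- within a single commodity disk today
```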
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report, with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
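A toy version of the redundancy principle -- a repetition code with majority voting, far simpler than real error-correcting codes -- shows noise being beaten by redundancy:

```python
import random

def encode(bits, r=5):
    return [b for b in bits for _ in range(r)]             # repeat each bit r times

def transmit(bits, flip_prob, rng):
    return [b ^ (rng.random() < flip_prob) for b in bits]  # noisy channel flips bits

def decode(received, r=5):
    return [int(sum(received[i:i + r]) > r // 2)           # majority vote per group
            for i in range(0, len(received), r)]

rng = random.Random(7)
message = [rng.randint(0, 1) for _ in range(1000)]
decoded = decode(transmit(encode(message), flip_prob=0.10, rng=rng))
errors = sum(a != b for a, b in zip(message, decoded))
print(f"residual errors: {errors} / {len(message)}")  # ~9 instead of ~100 without coding
```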
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • But the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
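The reversal Dyson describes follows from the virial theorem: for a system bound by gravity (a 1/r potential), the total energy E equals minus the kinetic energy K, and K is what sets the temperature. A minimal numeric sketch (arbitrary units, idealized equilibrium):

    # Virial theorem for a bound, self-gravitating gas: E_total = -K,
    # where the kinetic energy K is proportional to temperature.
    K = 100.0        # kinetic energy before radiating
    E = -K           # total energy of the bound system: -100.0

    E -= 10.0        # the object radiates 10 units of energy away
    K = -E           # re-equilibrate under the virial theorem
    print(K)         # 110.0 -- losing energy made the gas hotter

This is negative heat capacity, the defining thermodynamic oddity of gravitationally bound objects, and it is why the cooking rule fails at astronomical scales.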
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
Weiye Loh

Mike Adams Remains True to Form « Alternative Medicine « Health « Skeptic North - 0 views

  • The 10:23 demonstrations and the CBC Marketplace coverage have elicited fascinating case studies in CAM professionalism. Rather than offering any new information or evidence about homeopathy itself, some homeopaths have spuriously accused skeptical groups of being malicious Big Pharma shills.
  • Mike Adams of the Natural News website has decided to provide his own coverage of the 10:23 campaign.
  • ...17 more annotations...
  • Mike’s thesis is essentially: Silly skeptics, it’s impossible to OD on homeopathy!
  • 1. “Notice that they never consume their own medicines in large doses? Chemotherapy? Statin drugs? Blood thinners? They wouldn’t dare drink those.”
  • Of course we wouldn’t. Steven Novella rightly points out that, though Mike thinks he’s being clever here, he’s actually demonstrating a lack of understanding for what the 10:23 campaign is about by using a straw man. Mike later issues a challenge for skeptics to drink their favourite medicines while he drinks homeopathy. Since no one will agree to that for the reasons explained above, he can claim some sort of victory — hence his smugness. But no one is saying that drugs aren’t harmful.
  • The difference between medicine and poison is in the dose. The vitamins and herbs promoted by the CAM industry are just as potentially harmful as any pharmaceutical drug, given enough of it. Would Adams be willing to OD on the vitamins or herbal remedies that he sells?
  • Even Adams’ favorite panacea, vitamin D, is toxic if you take enough of it (just ask Gary Null). Notice how skeptics don’t consume those either, because that is not the point they’re making.
  • The point of these demonstrations is that homeopathy has nothing in it, has no measurable physiological effects, and does not do what is advertised on the package.
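The “nothing in it” claim is straightforward arithmetic. A 30C remedy has been diluted 1:100 thirty times, a factor of 10^60, while a mole of any substance contains only about 6 × 10^23 molecules. A quick Python sketch (starting, generously, from a full mole of active ingredient):

    AVOGADRO = 6.022e23          # molecules per mole
    molecules = 1 * AVOGADRO     # assume one full mole of active ingredient at the start

    for _ in range(30):          # a "30C" remedy: thirty successive 1:100 dilutions
        molecules /= 100.0

    print(molecules)             # ~6e-37: the expected number of molecules left is effectively zero

Long before the thirtieth dilution step, the odds that even a single molecule of the original substance remains in the bottle have vanished.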
  • 2. “Homeopathy, you see, isn’t a drug. It’s not a chemical.” Well, he’s got that right. “You know the drugs are kicking in when you start getting worse. Toxicity and conventional medicine go hand in hand.” [emphasis his]
  • Here I have to wonder if Adams knows any people with diabetes, AIDS, or any other illness that used to mean a death sentence before the significant medical advances of the 20th century that we now take for granted. So far he seems to be a firm believer in the false dichotomy that drugs are bad and natural products are good, regardless of what’s in them or how they’re used (as we know, natural products can have biologically active substances and effectively act as impure drugs – but leave it to Adams not to get bogged down with details). There is nothing to support the assertion that conventional medicine is nothing but toxic symptom-inducers.
  • 3-11. “But homeopathy isn’t a chemical. It’s a resonance. A vibration, or a harmony. It’s the restructuring of water to resonate with the particular energy of a plant or substance. We can get into the physics of it in a subsequent article, but for now it’s easy to recognize that even from a conventional physics point of view, liquid water has tremendous energy, and it’s constantly in motion, not just at the molecular level but also at the level of its subatomic particles and so-called “orbiting electrons” which aren’t even orbiting in the first place. Electrons are vibrations and not physical objects.” [emphasis his]
  • This is Star Trek-like technobabble – lots of sciency words.
  • If something — anything — has an effect, then that effect is measurable by definition. Either something works or it doesn’t, regardless of mechanism. In any case, I’d like to see the well-documented series of research that conclusively proves this supposed mechanism. Actually, I’d like to see any credible research at all. I know what the answer will be to that: science can’t detect this yet. Well, if you agree with that statement, reader, ask yourself this: then how does Adams know? Where did he get this information? Without evidence, he is guessing, and what is that really worth?
  • 13. “But getting back to water and vibrations, which isn’t magic but rather vibrational physics, you can’t overdose on a harmony. If you have one violin playing a note in your room, and you add ten more violins — or a hundred more — it’s all still the same harmony (with all its complex higher frequencies, too). There’s no toxicity to it.” [emphasis his]
  • Homeopathy has standard dosing regimes (they’re all the same), but there is no “dose” to speak of: the ingredients have usually been diluted out to nothing. But Adams is also saying that homeopathy doesn’t work by dose at all, it works by the properties of “resonance” and “vibration”. Then why any dosing regimen? To maintain the resonance? How is this resonance measured? How long does the “resonance” last? Why does it wear off? Why does he think televisions can inactivate homeopathy? (I think I might know the answer to that last one, as electronic interference is a handy excuse for inefficacy.)
  • “These skeptics just want to kill themselves… and they wouldn’t mind taking a few of you along with them, too. Hence their promotion of vaccines, pharmaceuticals, chemotherapy and water fluoridation. We’ll title the video, “SKEPTICS COMMIT MASS SUICIDE BY DRINKING PHARMACEUTICALS AS IF THEY WERE KOOL-AID.” Jonestown, anyone?”
  • “Do you notice the irony here? The only medicines they’re willing to consume in large doses in public are homeopathic remedies! They won’t dare consume large quantities of the medicines they all say YOU should be taking! (The pharma drugs.)” [emphasis his]
  • What Adams seems to have missed is that the skeptics have no intention of killing themselves, so his bizarre claims that the 10:23 participants are psychopathic, self-loathing, and suicidal make not even a little bit of sense. Skeptics know they aren’t going to die with these demonstrations, because homeopathy has no active ingredients and no evidence of efficacy.
  • The inventor of homeopathy himself, Samuel Hahnemann, believed that excessive doses of homeopathy could be harmful (see sections 275 and 276 of his Organon). Homeopaths are pros at retconning their own field to fit in with Hahnemann’s original ideas (inventing new mechanisms, such as water memory and resonance, in the face of germ theory). So how does Adams reconcile this claim?
Weiye Loh

Why do we care where we publish? - 0 views

  • Being both a working scientist and a science writer gives me a unique perspective on science, scientific publications, and the significance of scientific work. The final disclosure should be that I have never published in any of the top-rank physics journals or in Science, Nature, or PNAS. I don't believe I have an axe to grind about that, but I am also sure that you can ascribe some of my opinions to PNAS envy.
  • If you asked most scientists what their goals were, the answer would boil down to the generation of new knowledge. But, at some point, science and scientists have to interact with money and administrators, which has significant consequences for science. For instance, when trying to employ someone to do a job, you try to objectively decide if the skills set of the prospective employee matches that required to do the job. In science, the same question has to be asked—instead of being asked once per job interview, however, this question gets asked all the time.
  • Because science requires funding, and no one gets a lifetime dollop-o-cash to explore their favorite corner of the universe. So, the question gets broken down to "how competent is the scientist?" "Is the question they want to answer interesting?" "Do they have the resources to do what they say they will?" We will ignore the last question and focus on the first two.
  • ...17 more annotations...
  • How can we assess the competence of a scientist? Past performance is, realistically, the only way to judge future performance. Past performance can only be assessed by looking at their publications. Were they in a similar area? Are they considered significant? Are they numerous? Curiously, though, the second question is also answered by looking at publications—if a topic is considered significant, then there will be lots of publications in that area, and those publications will be of more general interest, and so end up in higher ranking journals.
  • So we end up in the situation that the editors of major journals are in the position to influence the direction of scientific funding, meaning that there is a huge incentive for everyone to make damn sure that their work ends up in Science or Nature. But why are Science, Nature, and PNAS considered the place to put significant work? Why isn't a new optical phenomenon, published in Optics Express, as important as a new optical phenomenon published in Science?
  • The big three try to be general; they will, in principle, publish reports from any discipline, and they anticipate readership from a range of disciplines. This explicit generality means that the scientific results must not only be of general interest, but also highly significant. The remaining journals become more specialized, covering perhaps only physics, or optics, or even just optical networking. However, they all claim to only publish work that is highly original in nature.
  • Are standards really so different? Naturally, the more specialized a journal is, the fewer people it appeals to. However, the major difference in determining originality is one of degree and referee. A more specialized journal has more detailed articles, so the differences between experiments stand out more obviously, while appealing to general interest changes the emphasis of the article away from details toward broad conclusions.
  • As the audience becomes broader, more technical details get left by the wayside. Note that none of the gene sequences published in Science have the actual experimental and analysis details. What ends up published is really a broad-brush description of the work, with the important details either languishing as supplemental information, or even published elsewhere, in a more suitable journal. Yet the high-profile paper will get all the citations, while the more detailed—the unkind would say accurate—description of the work gets no attention.
  • And that is how journals are ranked. Count the number of citations for each journal per volume, run it through a magic number generator, and the impact factor jumps out (make your checks out to ISI Thomson please). That leaves us with the following formula: grants require high impact publications, high impact publications need citations, and that means putting research in a journal that gets lots of citations. Grants follow the concepts that appear to be currently significant, and that's decided by work that is published in high impact journals.
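The “magic number generator” is, in fact, a simple ratio. A sketch of the standard two-year impact factor, with invented numbers for a hypothetical journal:

    def impact_factor(citations_received, citable_items):
        # Two-year impact factor for year Y:
        #   citations_received: citations in year Y to articles from years Y-1 and Y-2
        #   citable_items:      articles the journal published in years Y-1 and Y-2
        return citations_received / citable_items

    # Hypothetical journal: 800 papers over two volumes drew 24,000 citations this year.
    print(impact_factor(24_000, 800))   # 30.0

Nothing in the formula weighs what the citations were for, which is exactly the gaming surface described below.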
  • This system would be fine if it did not ignore the fact that performing science and reporting scientific results are two very different skills, and not everyone has both in equal quantity. The difference between a Nature-worthy finding and a not-Nature-worthy finding is often in the quality of the writing. How skillfully can I relate this bit of research back to general or topical interests? It really is this simple. Over the years, I have seen quite a few physics papers with exaggerated claims of significance (or even results) make it into top flight journals, and the only differences I can see between those works and similar works published elsewhere is that the presentation and level of detail are different.
  • Articles from the big three are much easier to cover on Nobel Intent than articles from, say, Physical Review D. Nevertheless, when we do cover them, sometimes the researchers suddenly realize that they could have gotten a lot more mileage out of their work. It changes their approach to reporting their results, which I see as evidence that writing skill counts for as much as scientific quality.
  • If that observation is generally true, then it raises questions about the whole process of evaluating a researcher's competence and a field's significance, because good writers corrupt the process by publishing less significant work in journals that only publish significant findings. In fact, I think it goes further than that, because Science, Nature, and PNAS actively promote themselves as scientific compasses. Want to find the most interesting and significant research? Read PNAS.
  • The publishers do this by extensively publicizing science that appears in their own journals. Their news sections primarily summarize work published in the same issue of the same magazine. This lets them create a double-whammy of scientific significance—not only was the work published in Nature, they also summarized it in their News and Views section.
  • Furthermore, the top three work very hard at getting other journalists to cover their articles. This is easy to see by simply looking at Nobel Intent's coverage. Most of the work we discuss comes from Science and Nature. Is this because we only read those two publications? No, but they tell us ahead of time what is interesting in their upcoming issue. They even provide short summaries of many papers that practically guide people through writing the story, meaning reporter Jim at the local daily doesn't need a science degree to cover the science beat.
  • Very few of the other journals do this. I don't get early access to the Physical Review series, even though I love reporting from them. In fact, until this year, they didn't even highlight interesting papers for their own readers. This makes it incredibly hard for a science reporter to cover science outside of the major journals. The knock-on effect is that Applied Physics Letters never appears in the news, which means you can't evaluate recent news coverage to figure out what's of general interest, leaving you with... well, the big three journals again, which mostly report on themselves. On the other hand, if a particular scientific topic does start to receive some press attention, it is much more likely that similar work will suddenly be acceptable in the big three journals.
  • That said, I should point out that judging the significance of scientific work is a process fraught with difficulty. Why do you think it takes around 10 years from the publication of first results through to obtaining a Nobel Prize? Because it can take that long for the implications of the results to sink in—or, more commonly, sink without trace.
  • I don't think that we can reasonably expect journal editors and peer reviewers to accurately assess the significance (general or otherwise) of a new piece of research. There are, of course, exceptions: the first genome sequences, the first observation that the rate of the expansion of the universe is changing. But the point is that these are exceptions, and most work's significance is far more ambiguous, and even goes unrecognized (or over-celebrated) by scientists in the field.
  • The conclusion is that the top three journals are significantly gamed by scientists who are trying to get ahead in their careers—citations always lag a few years behind, so a PNAS paper with less than ten citations can look good for quite a few years, even compared to an Optics Letters with 50 citations. The top three journals overtly encourage this, because it is to their advantage if everyone agrees that they are the source of the most interesting science. Consequently, scientists who are more honest in self-assessing their work, or who simply aren't word-smiths, end up losing out.
  • Scientific competence should not be judged by how many citations the author's work has received or where it was published. Instead, we should consider using mathematical graph analysis to look at the networks of publications and citations, which should help us judge how central to a field a particular researcher is. This would have the positive effect that where a paper appeared matters less than who thought it was important.
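One concrete way to read the graph-analysis suggestion is an eigenvector-style centrality such as PageRank, which weights a citation by the standing of the citer rather than counting all citations equally. A minimal sketch on an invented citation graph (all paper names hypothetical; PageRank is this sketch's choice, not the author's):

    import networkx as nx

    # Edge A -> B means paper A cites paper B.
    G = nx.DiGraph([
        ("p1", "p2"), ("p3", "p2"), ("p4", "p2"),  # p2 is widely cited
        ("p2", "p5"), ("p4", "p5"),                # p5 is cited by the influential p2
        ("p5", "p6"),
    ])

    # A paper scores higher when it is cited by papers that themselves score highly.
    for paper, score in sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1]):
        print(paper, round(score, 3))

This captures the point that "who thought it was important" can count for more than raw citation tallies or journal prestige.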
  • Science and Nature should either eliminate their News and Views section, or implement a policy of not reporting on their own articles. This would open up one of the major sources of "science news for scientists" to stories originating in other journals.
Weiye Loh

Freakonomics » How Advancements in Neuroscience Will Influence the Law - 0 views

  • As new technologies emerge to better reveal people’s experiences, the law ought to do more to take these experiences into account. In tort and criminal law, we often ignore or downplay the importance of subjective experience. This is no surprise. During the hundreds of years in which these bodies of law developed, we had very poor methods of making inferences about the experiences of others. As we get better at measuring experiences, however, I make the normative claim that we ought to change fundamental aspects of the law to take better account of people’s experiences.
  • Researchers are trying to develop more accurate methods of detecting deception using brain imaging. While many in the scientific community doubt that current brain-based methods of lie detection are sufficiently accurate and reliable to use in forensic contexts, that has stopped neither companies from marketing fMRI lie detection services to the public, nor litigants from trying to introduce such evidence in court.
  • Given the substantial possibility that we will develop reasonably accurate lie detectors within the next thirty years, our current secretive behaviors have already become harder to hide.
  • A new article published in the Emory Law Journal (full version here) entitled "The Experiential Future of the Law," by Brooklyn Law School professor Adam Kolber, looks at how these advancements will continue over the next 30 years (to the point of near mind-reading), and how they'll inevitably lead to changes in the law.
Weiye Loh

Hamlet and the region of death - The Boston Globe - 0 views

  • To many readers — and to some of Moretti’s fellow academics — the very notion of quantitative literary studies can seem like an offense to that which made literature worth studying in the first place: its meaning and beauty. For Moretti, however, moving literary scholarship beyond reading is the key to producing new knowledge about old texts — even ones we’ve been studying for centuries.
  • Franco Moretti, however, often doesn't read the books he studies. Instead, he analyzes them as data. Working with a small group of graduate students, the Stanford University English professor has fed thousands of digitized texts into databases and then mined the accumulated information for new answers to new questions. How far, on average, do characters in 19th-century English novels walk over the course of a book? How frequently are new genres of popular fiction invented? How many words does the average novel's protagonist speak? By posing these and other questions, Moretti has become the unofficial leader of a new, more quantitative kind of literary study.
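Questions like "how many words does the protagonist speak" reduce to simple text processing once the novels are digitized. A crude sketch of the idea (the filename is hypothetical, and straight double quotation marks are used as a rough proxy for dialogue; this is an illustration of the approach, not Moretti's actual pipeline):

    import re

    def quoted_word_count(text):
        # Count words inside straight double quotation marks -- a crude proxy for spoken dialogue.
        return sum(len(span.split()) for span in re.findall(r'"([^"]+)"', text))

    # Hypothetical corpus file, e.g. a plain-text novel downloaded from Project Gutenberg.
    with open("novel.txt", encoding="utf-8") as f:
        print(quoted_word_count(f.read()))

Scaled across thousands of digitized texts, counts like this one are the raw material of the quantitative literary study the article describes.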