
New Media Ethics 2009 course / Group items tagged: Creationism


Weiye Loh

Skepticblog » The Value of Vertigo - 1 views

  • But Ruse’s moment of vertigo is not as surprising as it may appear. Indeed, he put effort into achieving this immersion: “I am atypical, I took about three hours to go through [the creation museum] but judging from my students most people don’t read the material as obsessively as I and take about an hour.” Why make this meticulous effort, when he could have dismissed creationism’s well-known scientific problems from the parking lot, or from his easy chair at home?
  • According to Ruse, the vertiginous “what if?” feeling has a practical value. After all, it’s easy to find problems with a pseudoscientific belief; what’s harder is understanding how and why other people believe. “It is silly just to dismiss this stuff as false,” Ruse argues (although it is false, and although Ruse has fought against “this stuff” for decades). “A lot of people believe Creationism so we on the other side need to get a feeling not just for the ideas but for the psychology too.”
  •  
    In June of 2009, philosopher of biology Michael Ruse took a group of grad students to the Answers in Genesis Creation Museum in Kentucky (and also some mainstream institutions) as part of a course on how museums present science. In a critical description of his visit, Ruse reflected upon "the extent to which the Creationist museum uses modern science to its own ends, melding it in seamlessly with its own Creationist message." Continental drift, the Big Bang, and even natural selection are all presented as evidence in support of Young Earth cosmology and flood geology. While immersing himself in the museum's pitch, Ruse wrote, "Just for one moment about half way through the exhibit…I got that Kuhnian flash that it could all be true – it was only a flash (rather like thinking that Freudianism is true or that the Republicans are right on anything whatsoever) but it was interesting nevertheless to get a sense of how much sense this whole display and paradigm can make to people."
Weiye Loh

Skepticblog » A Creationist Challenge - 0 views

  • The commenter starts with some ad hominems, asserting that my post is biased and emotional. They provide no evidence or argument to support this assertion. And of course they don’t even attempt to counter any of the arguments I laid out. They then follow up with an argument from authority – he can link to a PhD creationist – so there.
  • The article that the commenter links to is by Henry M. Morris, founder of the Institute for Creation Research (ICR) – a young-earth creationist organization. Morris was (he died in 2006 following a stroke) a PhD – in civil engineering. This point is irrelevant to his actual arguments. I bring it up only to put the commenter’s argument from authority into perspective. No disrespect to engineers – but they are not biologists. They have no expertise relevant to the question of evolution – no more than my MD. So let’s stick to the arguments themselves.
  • The article by Morris is an overview of so-called Creation Science, of which Morris was a major architect. The arguments he presents are all old creationist canards, long deconstructed by scientists. In fact I address many of them in my original refutation. Creationists generally are not very original – they recycle old arguments endlessly, regardless of how many times they have been destroyed.
  • Morris also makes heavy use of the “taking a quote out of context” strategy favored by creationists. His quotes are often from secondary sources and are incomplete.
  • A more scholarly (i.e. intellectually honest) approach would be to cite actual evidence to support a point. If you are going to cite an authority, then make sure the quote is relevant, in context, and complete.
  • And even better, cite a number of sources to show that the opinion is representative. Rather we get single, partial, and often outdated quotes without context.
  • (nature is not, it turns out, cleanly divided into “kinds”, which have no operational definition). He also repeats this canard: Such variation is often called microevolution, and these minor horizontal (or downward) changes occur fairly often, but such changes are not true “vertical” evolution. This is the microevolution/macroevolution false dichotomy. It is only “often called” this by creationists – not by actual evolutionary scientists. There is no theoretical or empirical division between macro and micro evolution. There is just evolution, which can result in the full spectrum of change from minor tweaks to major changes.
  • Morris wonders why there are no “dats” – dog-cat transitional species. He misses the hierarchical nature of evolution. As evolution proceeds, and creatures develop a greater and greater evolutionary history behind them, they increasingly are committed to their body plan. This results in a nested hierarchy of groups – which is reflected in taxonomy (the naming scheme of living things).
  • Once our distant ancestors developed the basic body plan of chordates, they were committed to that body plan. Subsequent evolution resulted in variations on that plan, each of which then developed further variations, etc. But evolution cannot go backward, undo evolutionary changes and then proceed down a different path. Once an evolutionary line has developed into a dog, evolution can produce variations on the dog, but it cannot go backwards and produce a cat.
  • Stephen J. Gould described this distinction as the difference between disparity and diversity. Disparity (the degree of morphological difference) actually decreases over evolutionary time, as lineages go extinct and the surviving lineages are committed to fewer and fewer basic body plans. Meanwhile, diversity (the number of variations on a body plan) within groups tends to increase over time.
  • The kind of evolutionary changes that were happening in the past, when species were relatively undifferentiated (compared to contemporary species) is indeed not happening today. Modern multi-cellular life has 600 million years of evolutionary history constraining its future evolution – which was not true of species at the base of the evolutionary tree. But modern species are indeed still evolving.
  • Here is a list of research documenting observed instances of speciation. The list is from 1995, and there are more recent examples to add to the list. Here are some more. And here is a good list with references of more recent cases.
  • Next Morris tries to convince the reader that there is no evidence for evolution in the past, focusing on the fossil record. He repeats the false claim (again, which I already dealt with) that there are no transitional fossils: Even those who believe in rapid evolution recognize that a considerable number of generations would be required for one distinct “kind” to evolve into another more complex kind. There ought, therefore, to be a considerable number of true transitional structures preserved in the fossils — after all, there are billions of non-transitional structures there! But (with the exception of a few very doubtful creatures such as the controversial feathered dinosaurs and the alleged walking whales), they are not there.
  • I deal with this question at length here, pointing out that there are numerous transitional fossils for the evolution of terrestrial vertebrates, mammals, whales, birds, turtles, and yes – humans from ape ancestors. There are many more examples, these are just some of my favorites.
  • Much of what follows (as you can see it takes far more space to correct the lies and distortions of Morris than it did to create them) is classic denialism – misinterpreting the state of the science, and confusing lack of information about the details of evolution with lack of confidence in the fact of evolution. Here are some examples – he quotes Niles Eldredge: “It is a simple ineluctable truth that virtually all members of a biota remain basically stable, with minor fluctuations, throughout their durations. . . .“ So how do evolutionists arrive at their evolutionary trees from fossils of organisms which didn’t change during their durations? Beware the “….” – that means that meaningful parts of the quote are being omitted. I happen to have the book (The Pattern of Evolution) from which Morris mined that particular quote. Here’s the rest of it: “(Remember, by “biota” we mean the commonly preserved plants and animals of a particular geological interval, which occupy regions often as large as Roger Tory Peterson’s “eastern” region of North American birds.) And when these systems change – when the older species disappear, and new ones take their place – the change happens relatively abruptly and in lockstep fashion.”
  • Eldredge was one of the authors (with Gould) of punctuated equilibrium theory. This states that, if you look at the fossil record, what we see are species emerging, persisting with little change for a while, and then disappearing from the fossil record. They theorize that most species most of the time are at equilibrium with their environment, and so do not change much. But these periods of equilibrium are punctuated by disequilibrium – periods of change when species will have to migrate, evolve, or go extinct.
  • This does not mean that speciation does not take place. And if you look at the fossil record we see a pattern of descendant species emerging from ancestor species over time – in a nice evolutionary pattern. Morris gives a complete misrepresentation of Eldredge’s point – once again we see intellectual dishonesty of an astounding degree in his methods.
  • Regarding the atheism = religion comment, it reminds me of a great analogy that I first heard on Twitter from Evil Eye (paraphrasing): “saying that atheism is a religion is like saying that ‘not collecting stamps’ is a hobby.”
  • Morris next tackles the genetic evidence, writing: More often is the argument used that similar DNA structures in two different organisms proves common evolutionary ancestry. Neither argument is valid. There is no reason whatever why the Creator could not or would not use the same type of genetic code based on DNA for all His created life forms. This is evidence for intelligent design and creation, not evolution.
  • Here is an excellent summary of the multiple lines of molecular evidence for evolution. Basically, if we look at the sequence of DNA, the variations in trinucleotide codes for amino acids, and amino acids for proteins, and transposons within DNA we see a pattern that can only be explained by evolution (or a mischievous god who chose, for some reason, to make life look exactly as if it had evolved – a non-falsifiable notion).
  • The genetic code is essentially composed of four letters (ACGT for DNA), and every three-letter triplet equates to a specific amino acid. There are 64 (4^3) possible three-letter combinations, and 20 amino acids. A few combinations are used for housekeeping, like a code to indicate where a gene stops, but the rest code for amino acids. There are more combinations than amino acids, so most amino acids are coded for by multiple combinations. This means that a mutation that results in a one-letter change might switch from one code for a particular amino acid to another code for the same amino acid. This is called a silent mutation because it does not result in any change in the resulting protein.
  • It also means that there are very many possible codes for any individual protein. The question is: which codes, out of the gazillions of possible codes, do we find for each type of protein in different species? If each “kind” were created separately there would not need to be any relationship. Each kind could have its own variation, or they could all be identical if they were essentially copied (plus any mutations accruing since creation, which would be minimal). But if life evolved then we would expect that the exact sequence of DNA code would be similar in related species, but progressively different (through silent mutations) over evolutionary time.
  • This is precisely what we find – in every protein we have examined. This is exactly the pattern we would expect if evolution were true. It cannot be explained by random chance (the probability is absurdly tiny – essentially zero). And it makes no sense from a creationist perspective. This same pattern (a branching hierarchy) emerges when we look at amino acid substitutions in proteins and other aspects of the genetic code. [A short codon-degeneracy sketch follows these annotations.]
  • Morris goes for the second law of thermodynamics again – in the exact way that I already addressed. He responds to scientists correctly pointing out that the Earth is an open system, by writing: This naive response to the entropy law is typical of evolutionary dissimulation. While it is true that local order can increase in an open system if certain conditions are met, the fact is that evolution does not meet those conditions. Simply saying that the earth is open to the energy from the sun says nothing about how that raw solar heat is converted into increased complexity in any system, open or closed. The fact is that the best known and most fundamental equation of thermodynamics says that the influx of heat into an open system will increase the entropy of that system, not decrease it. All known cases of decreased entropy (or increased organization) in open systems involve a guiding program of some sort and one or more energy conversion mechanisms.
  • Energy has to be transformed into a usable form in order to do the work necessary to decrease entropy. That’s right. That work is done by life. Plants take solar energy (again – I’m not sure what “raw solar heat” means) and convert it into food. That food fuels the processes of life, which include development and reproduction. Evolution emerges from those processes – therefore the conditions that Morris speaks of are met. [A back-of-the-envelope entropy balance follows these annotations.]
  • But Morris next makes a very confused argument: Evolution has neither of these. Mutations are not “organizing” mechanisms, but disorganizing (in accord with the second law). They are commonly harmful, sometimes neutral, but never beneficial (at least as far as observed mutations are concerned). Natural selection cannot generate order, but can only “sieve out” the disorganizing mutations presented to it, thereby conserving the existing order, but never generating new order.
  • The notion that evolution (as if it’s a thing) needs to use energy is hopelessly confused. Evolution is a process that emerges from the system of life – and life certainly can use solar energy to decrease its entropy, and by extension the entropy of the biosphere. Morris slips into what is often presented as an information argument.  (Yet again – already dealt with. The pattern here is that we are seeing a shuffling around of the same tired creationist arguments.) First, it is not true that most mutations are harmful. Many are silent, and many of those that are not silent are not harmful. They may be neutral, they may be a mixed blessing, and their relative benefit vs harm is likely to be situational. They may be fatal. And they also may be simply beneficial.
  • Morris finishes with a long rambling argument that evolution is religion. Evolution is promoted by its practitioners as more than mere science. Evolution is promulgated as an ideology, a secular religion — a full-fledged alternative to Christianity, with meaning and morality . . . . Evolution is a religion. This was true of evolution in the beginning, and it is true of evolution still today. Morris ties evolution to atheism, which, he argues, makes it a religion. This assumes, of course, that atheism is a religion. That depends on how you define atheism and how you define religion – but it is mostly wrong. Atheism is a lack of belief in one particular supernatural claim – that does not qualify it as a religion.
  • But mutations are not “disorganizing” – that does not even make sense. It seems to be based on a purely creationist notion that species are in some privileged perfect state, and any mutation can only take them farther from that perfection. For those who actually understand biology, life is a kluge of compromises and variation. Mutations are mostly lateral moves from one chaotic state to another. They are not directional. But they do provide raw material, variation, for natural selection. Natural selection cannot generate variation, but it can select among that variation to provide differential survival. This is an old game played by creationists – pointing out that mutations are not selective, and that natural selection by itself does not create new variation. Both claims are true but irrelevant on their own, because together the two processes are creative: mutations increase variation and information, and selection acting on that variation results in the differential survival of better-adapted variants.
  •  
    One of my earlier posts on SkepticBlog was Ten Major Flaws in Evolution: A Refutation, published two years ago. Occasionally a creationist shows up to snipe at the post, like this one: “i read this and found it funny. It supposedly gives a scientific refutation, but it is full of more bias than fox news, and a lot of emotion as well. here's a scientific case by an actual scientists, you know, one with a ph. D, and he uses statements by some of your favorite evolutionary scientists to insist evolution doesn't exist. i challenge you to write a refutation on this one. http://www.icr.org/home/resources/resources_tracts_scientificcaseagainstevolution/” Challenge accepted.
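
The codon-degeneracy argument quoted in the annotations above (four DNA letters, 64 possible triplets, roughly 20 amino acids, hence silent mutations) can be illustrated with a small Python sketch. This is illustrative only and not from the original post; the codon subset below is hand-picked from the standard genetic code.

    # Illustrative sketch: a tiny subset of the standard DNA codon table,
    # used to show why many single-letter changes are "silent".
    CODON_TABLE = {
        # glycine is encoded by four codons
        "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
        # leucine is encoded by six codons
        "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu", "TTA": "Leu", "TTG": "Leu",
        # methionine and tryptophan each have a single codon
        "ATG": "Met", "TGG": "Trp",
    }

    def is_silent(codon_before: str, codon_after: str) -> bool:
        """Return True if swapping these codons leaves the amino acid unchanged."""
        return CODON_TABLE[codon_before] == CODON_TABLE[codon_after]

    print(4 ** 3)                   # 64 possible triplets for ~20 amino acids
    print(is_silent("GGT", "GGC"))  # True: third-position change, still glycine
    print(is_silent("CTT", "ATG"))  # False: leucine would become methionine

Because related lineages accumulate such silent changes independently over time, comparing which synonymous codons different species use for the same protein yields the branching pattern the post describes.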
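
On the "open system" point a few annotations above, a standard back-of-the-envelope entropy balance (not from the original post; the temperatures are round-number assumptions) shows why a flow of energy from a hot source to a cold sink permits local decreases in entropy without violating the second law:

    % Earth absorbs solar radiation at roughly T_Sun ~ 5800 K and re-radiates
    % the same power Q at roughly T_Earth ~ 255 K, so the entropy balance reads
    \[
      \frac{dS_{\text{Earth}}}{dt}
      \;\ge\; \underbrace{\frac{Q}{T_{\text{Sun}}}}_{\text{entropy flowing in}}
      \;-\; \underbrace{\frac{Q}{T_{\text{Earth}}}}_{\text{entropy flowing out}}
      \;=\; Q\left(\frac{1}{5800\,\mathrm{K}} - \frac{1}{255\,\mathrm{K}}\right) \;<\; 0 .
    \]
    % The right-hand side is negative, so the Earth system (and the life in it)
    % may decrease its own entropy at up to that rate; the deficit is exported
    % to space as low-temperature radiation, and total entropy still increases.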
Weiye Loh

Hermits and Cranks: Lessons from Martin Gardner on Recognizing Pseudoscientists: Scient... - 0 views

  • In 1950 Martin Gardner published an article in the Antioch Review entitled "The Hermit Scientist," about what we would today call pseudoscientists.
  • there has been some progress since Gardner offered his first criticisms of pseudoscience. Now largely antiquated are his chapters on believers in a flat Earth, a hollow Earth, Atlantis and Lemuria, Alfred William Lawson, Roger Babson, Trofim Lysenko, Wilhelm Reich and Alfred Korzybski. But disturbingly, a good two thirds of the book's contents are relevant today, including Gardner's discussions of homeopathy, naturopathy, osteopathy, iridiagnosis (reading the iris of the eye to determine bodily malfunctions), food faddists, cancer cures and other forms of medical quackery, Edgar Cayce, the Great Pyramid's alleged mystical powers, handwriting analysis, ESP and PK (psychokinesis), reincarnation, dowsing rods, eccentric sexual theories, and theories of group racial differences.
  • The "hermit scientist," a youthful Gardner wrote, works alone and is ignored by mainstream scientists. "Such neglect, of course, only strengthens the convictions of the self-declared genius."
  • Even then Gardner was bemoaning that some beliefs never seem to go out of vogue, as he recalled an H. L. Mencken quip from the 1920s: "Heave an egg out of a Pullman window, and you will hit a Fundamentalist almost anywhere in the U.S. today." Gardner cautions that, even at a time when religious superstition should be on the wane, it is easy "to forget that thousands of high school teachers of biology, in many of our southern states, are still afraid to teach the theory of evolution for fear of losing their jobs." Today creationism has spread northward and mutated into the oxymoronic form of "creation science."
  • the differences between science and pseudoscience. On the one extreme we have ideas that are most certainly false, "such as the dianetic view that a one-day-old embryo can make sound recordings of its mother's conversation." In the borderlands between the two "are theories advanced as working hypotheses, but highly debatable because of the lack of sufficient data." Of these Gardner selects a most propitious example: "the theory that the universe is expanding." That theory would now fall at the other extreme end of the spectrum, where lie "theories almost certainly true, such as the belief that the Earth is round or that men and beasts are distant cousins."
  • How can we tell if someone is a scientific crank? Gardner offers this advice: (1) "First and most important of these traits is that cranks work in almost total isolation from their colleagues." Cranks typically do not understand how the scientific process operates—that they need to try out their ideas on colleagues, attend conferences and publish their hypotheses in peer-reviewed journals before announcing to the world their startling discovery. Of course, when you explain this to them they say that their ideas are too radical for the conservative scientific establishment to accept.
  • (2) "A second characteristic of the pseudo-scientist, which greatly strengthens his isolation, is a tendency toward paranoia," which manifests itself in several ways: (1) He considers himself a genius. (2) He regards his colleagues, without exception, as ignorant blockheads....(3) He believes himself unjustly persecuted and discriminated against. The recognized societies refuse to let him lecture. The journals reject his papers and either ignore his books or assign them to "enemies" for review. It is all part of a dastardly plot. It never occurs to the crank that this opposition may be due to error in his work....(4) He has strong compulsions to focus his attacks on the greatest scientists and the best-established theories. When Newton was the outstanding name in physics, eccentric works in that science were violently anti-Newton. Today, with Einstein the father-symbol of authority, a crank theory of physics is likely to attack Einstein....(5) He often has a tendency to write in a complex jargon, in many cases making use of terms and phrases he himself has coined.
  • "If the present trend continues," Gardner concludes, "we can expect a wide variety of these men, with theories yet unimaginable, to put in their appearance in the years immediately ahead. They will write impressive books, give inspiring lectures, organize exciting cults. They may achieve a following of one—or one million. In any case, it will be well for ourselves and for society if we are on our guard against them."
  •  
    May 23, 2010 | Hermits and Cranks: Lessons from Martin Gardner on Recognizing Pseudoscientists. Fifty years ago Gardner launched the modern skeptical movement. Unfortunately, much of what he wrote about is still current today. By Michael Shermer
Weiye Loh

Understanding the universe: Order of creation | The Economist - 0 views

  • In their “The Grand Design”, the authors discuss “M-theory”, a composite of various versions of cosmological “string” theory that was developed in the mid-1990s, and announce that, if it is confirmed by observation, “we will have found the grand design.” Yet this is another tease. Despite much talk of the universe appearing to be “fine-tuned” for human existence, the authors do not in fact think that it was in any sense designed. And once more we are told that we are on the brink of understanding everything.
  • The authors rather fancy themselves as philosophers, though they would presumably balk at the description, since they confidently assert on their first page that “philosophy is dead.” It is, allegedly, now the exclusive right of scientists to answer the three fundamental why-questions with which the authors purport to deal in their book. Why is there something rather than nothing? Why do we exist? And why this particular set of laws and not some other?
  • It is hard to evaluate their case against recent philosophy, because the only subsequent mention of it, after the announcement of its death, is, rather oddly, an approving reference to a philosopher’s analysis of the concept of a law of nature, which, they say, “is a more subtle question than one may at first think.” There are actually rather a lot of questions that are more subtle than the authors think. It soon becomes evident that Professor Hawking and Mr Mlodinow regard a philosophical problem as something you knock off over a quick cup of tea after you have run out of Sudoku puzzles.
  • The main novelty in “The Grand Design” is the authors’ application of a way of interpreting quantum mechanics, derived from the ideas of the late Richard Feynman, to the universe as a whole. According to this way of thinking, “the universe does not have just a single existence or history, but rather every possible version of the universe exists simultaneously.” The authors also assert that the world’s past did not unfold of its own accord, but that “we create history by our observation, rather than history creating us.” They say that these surprising ideas have passed every experimental test to which they have been put, but that is misleading in a way that is unfortunately typical of the authors. It is the bare bones of quantum mechanics that have proved to be consistent with what is presently known of the subatomic world. The authors’ interpretations and extrapolations of it have not been subjected to any decisive tests, and it is not clear that they ever could be.
  • Once upon a time it was the province of philosophy to propose ambitious and outlandish theories in advance of any concrete evidence for them. Perhaps science, as Professor Hawking and Mr Mlodinow practice it in their airier moments, has indeed changed places with philosophy, though probably not quite in the way that they think.
  •  
    Order of creation: Even Stephen Hawking doesn't quite manage to explain why we are here
Weiye Loh

Rationally Speaking: Sagan beats Dawkins. In related news, education overcomes supersti... - 0 views

  •  
    People are drawn to creationism out of emotional fears of personal annihilation, not by reasoned discourse. And it seems that education might trump people's fear of mortality enough to make them understand that science is more sound than religion when it comes to explaining the natural world.
Weiye Loh

The Origins of "Basic Research" - 0 views

  • For many scientists, "basic research" means "fundamental" or "pure" research conducted without consideration of practical applications. At the same time, policy makers see "basic research" as that which leads to societal benefits including economic growth and jobs.
  • The mechanism that has allowed such divergent views to coexist is of course the so-called "linear model" of innovation, which holds that investments in "basic research" are but the first step in a sequence that progresses through applied research, development, and application. As recently explained in a major report of the US National Academy of Sciences: "[B]asic research ... has the potential to be transformational to maintain the flow of new ideas that fuel the economy, provide security, and enhance the quality of life" (Rising Above the Gathering Storm).
  • A closer look at the actual history of Google reveals how history becomes mythology. The 1994 NSF project that funded the scientific work underpinning the search engine that became Google (as we know it today) was conducted from the start with commercialization in mind: "The technology developed in this project will provide the 'glue' that will make this worldwide collection usable as a unified entity, in a scalable and economically viable fashion." In this case, the scientist following his curiosity had at least one eye simultaneously on commercialization.
  • In their appeal for more funding for scientific research, Leshner and Cooper argued that: "Across society, we don't have to look far for examples of basic research that paid off." They cite the creation of Google as a prime example of such payoffs: "Larry Page and Sergey Brin, then a National Science Foundation [NSF] fellow, did not intend to invent the Google search engine. Originally, they were intrigued by a mathematical challenge ..." The appealing imagery of a scientist who simply follows his curiosity and then makes a discovery with a large societal payoff is part of the core mythology of post-World War II science policies. The mythology shapes how governments around the world organize, account for, and fund research. A large body of scholarship has critiqued postwar science policies and found that, despite many notable successes, the science policies that may have made sense in the middle of the last century may need updating in the 21st century. In short, investments in "basic research" are not enough. Benoit Godin has asserted (PDF) that: "The problem is that the academic lobby has successfully claimed a monopoly on the creation of new knowledge, and that policy makers have been persuaded to confuse the necessary with the sufficient condition that investment in basic research would by itself necessarily lead to successful applications." Or as Leshner and Cooper declare in The Washington Post: "Federal investments in R&D have fueled half of the nation's economic growth since World War II."
Weiye Loh

Roger Pielke Jr.'s Blog: New Bridges Column: The Origins of "Basic Research" - 0 views

  •  
    "The appealing imagery of a scientist who simply follows his curiosity and then makes a discovery with a large societal payoff is part of the core mythology of post-World War II science policies. The mythology shapes how governments around the world organize, account for, and fund research. A large body of scholarship has critiqued postwar science policies and found that, despite many notable successes, the science policies that may have made sense in the middle of the last century may need updating in the 21st century. In short, investments in "basic research" are not enough. Benoit Godin has asserted (PDF) that: "The problem is that the academic lobby has successfully claimed a monopoly on the creation of new knowledge, and that policy makers have been persuaded to confuse the necessary with the sufficient condition that investment in basic research would by itself necessarily lead to successful applications." Or as Leshner and Cooper declare in The Washington Post: "Federal investments in R&D have fueled half of the nation's economic growth since World War II." A closer look at the actual history of Google reveals how history becomes mythology. The 1994 NSF project that funded the scientific work underpinning the search engine that became Google (as we know it today) was conducted from the start with commercialization in mind: "The technology developed in this project will provide the 'glue' that will make this worldwide collection usable as a unified entity, in a scalable and economically viable fashion." In this case, the scientist following his curiosity had at least one eye simultaneously on commercialization."
Jude John

What's so Original in Academic Research? - 26 views

Thanks for your comments. I may have appeared to be contradictory, but what I really meant was that ownership of IP should not be a motivating factor to innovate. I realise that in our capitalistic...

Weiye Loh

Let's make science metrics more scientific : Article : Nature - 0 views

  • Measuring and assessing academic performance is now a fact of scientific life.
  • Yet current systems of measurement are inadequate. Widely used metrics, from the newly fashionable Hirsch index to the 50-year-old citation index, are of limited use. [A minimal worked example of the h-index appears after these annotations.]
  • Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
  • narrow or biased measures of scientific achievement can lead to narrow and biased science.
  • Global demand for, and interest in, metrics should galvanize stakeholders — national funding agencies, scientific research organizations and publishing houses — to combine forces. They can set an agenda and foster research that establishes sound scientific metrics: grounded in theory, built with high-quality data and developed by a community with strong incentives to use them.
  • Scientists are often reticent to see themselves or their institutions labelled, categorized or ranked. Although happy to tag specimens as one species or another, many researchers do not like to see themselves as specimens under a microscope — they feel that their work is too complex to be evaluated in such simplistic terms. Some argue that science is unpredictable, and that any metric used to prioritize research money risks missing out on an important discovery from left field.
    • Weiye Loh
       
      It is ironic that while scientists feel that their work is too complex to be evaluated in simplistic terms or metrics, they nevertheless feel OK evaluating the world in simplistic terms.
  • It is true that good metrics are difficult to develop, but this is not a reason to abandon them. Rather it should be a spur to basing their development in sound science. If we do not press harder for better metrics, we risk making poor funding decisions or sidelining good scientists.
  • Metrics are data driven, so developing a reliable, joined-up infrastructure is a necessary first step.
  • We need a concerted international effort to combine, augment and institutionalize these databases within a cohesive infrastructure.
  • On an international level, the issue of a unique researcher identification system is one that needs urgent attention. There are various efforts under way in the open-source and publishing communities to create unique researcher identifiers using the same principles as the Digital Object Identifier (DOI) protocol, which has become the international standard for identifying unique documents. The ORCID (Open Researcher and Contributor ID) project, for example, was launched in December 2009 by parties including Thomson Reuters and Nature Publishing Group. The engagement of international funding agencies would help to push this movement towards an international standard.
  • if all funding agencies used a universal template for reporting scientific achievements, it could improve data quality and reduce the burden on investigators.
    • Weiye Loh
       
      So in future, we'll only have one robust metric to evaluate scientific contribution? hmm...
  • Importantly, data collected for use in metrics must be open to the scientific community, so that metric calculations can be reproduced. This also allows the data to be efficiently repurposed.
  • As well as building an open and consistent data infrastructure, there is the added challenge of deciding what data to collect and how to use them. This is not trivial. Knowledge creation is a complex process, so perhaps alternative measures of creativity and productivity should be included in scientific metrics, such as the filing of patents, the creation of prototypes and even the production of YouTube videos.
  • Perhaps publications in these different media should be weighted differently in different fields.
  • There needs to be a greater focus on what these data mean, and how they can be best interpreted.
  • This requires the input of social scientists, rather than just those more traditionally involved in data capture, such as computer scientists.
  • An international data platform supported by funding agencies could include a virtual 'collaboratory', in which ideas and potential solutions can be posited and discussed. This would bring social scientists together with working natural scientists to develop metrics and test their validity through wikis, blogs and discussion groups, thus building a community of practice. Such a discussion should be open to all ideas and theories and not restricted to traditional bibliometric approaches.
  • Far-sighted action can ensure that metrics goes beyond identifying 'star' researchers, nations or ideas, to capturing the essence of what it means to be a good scientist.
  •  
    Let's make science metrics more scientific. By Julia Lane. Abstract: To capture the essence of good science, stakeholders must combine forces to create an open, sound and consistent system for measuring all the activities that make up academic productivity, says Julia Lane.
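
As a concrete illustration of the kind of widely used metric the article criticises, here is a minimal sketch of the Hirsch index (h-index): the largest h such that an author has h papers each cited at least h times. The citation counts below are invented for illustration only.

    # Minimal sketch of the h-index: the largest h such that the author
    # has h papers each cited at least h times.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)        # most-cited papers first
        h = 0
        for rank, cites in enumerate(ranked, start=1):  # rank is 1-based
            if cites >= rank:
                h = rank                                # this paper still counts
            else:
                break
        return h

    # Invented citation counts for one researcher's papers (illustration only):
    print(h_index([25, 8, 5, 4, 3, 1]))  # 4 – four papers have at least 4 citations each

A single number like this clearly cannot capture mentoring, blogging or prototype building, which is the article's point.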
Weiye Loh

Science Warriors' Ego Trips - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • By Carlin Romano Standing up for science excites some intellectuals the way beautiful actresses arouse Warren Beatty, or career liberals boil the blood of Glenn Beck and Rush Limbaugh. It's visceral.
  • A brave champion of beleaguered science in the modern age of pseudoscience, this Ayn Rand protagonist sarcastically derides the benighted irrationalists and glows with a self-anointed superiority. Who wouldn't want to feel that sense of power and rightness?
  • You hear the voice regularly—along with far more sensible stuff—in the latest of a now common genre of science patriotism, Nonsense on Stilts: How to Tell Science From Bunk (University of Chicago Press), by Massimo Pigliucci, a philosophy professor at the City University of New York.
  • it mixes eminent common sense and frequent good reporting with a cocksure hubris utterly inappropriate to the practice it apotheosizes.
  • According to Pigliucci, both Freudian psychoanalysis and Marxist theory of history "are too broad, too flexible with regard to observations, to actually tell us anything interesting." (That's right—not one "interesting" thing.) The idea of intelligent design in biology "has made no progress since its last serious articulation by natural theologian William Paley in 1802," and the empirical evidence for evolution is like that for "an open-and-shut murder case."
  • Pigliucci offers more hero sandwiches spiced with derision and certainty. Media coverage of science is "characterized by allegedly serious journalists who behave like comedians." Commenting on the highly publicized Dover, Pa., court case in which U.S. District Judge John E. Jones III ruled that intelligent-design theory is not science, Pigliucci labels the need for that judgment a "bizarre" consequence of the local school board's "inane" resolution. Noting the complaint of intelligent-design advocate William Buckingham that an approved science textbook didn't give creationism a fair shake, Pigliucci writes, "This is like complaining that a textbook in astronomy is too focused on the Copernican theory of the structure of the solar system and unfairly neglects the possibility that the Flying Spaghetti Monster is really pulling each planet's strings, unseen by the deluded scientists."
  • Or is it possible that the alternate view unfairly neglected could be more like that of Harvard scientist Owen Gingerich, who contends in God's Universe (Harvard University Press, 2006) that it is partly statistical arguments—the extraordinary unlikelihood eons ago of the physical conditions necessary for self-conscious life—that support his belief in a universe "congenially designed for the existence of intelligent, self-reflective life"?
  • Even if we agree that capital "I" and "D" intelligent-design of the scriptural sort—what Gingerich himself calls "primitive scriptural literalism"—is not scientifically credible, does that make Gingerich's assertion, "I believe in intelligent design, lowercase i and lowercase d," equivalent to Flying-Spaghetti-Monsterism? Tone matters. And sarcasm is not science.
  • The problem with polemicists like Pigliucci is that a chasm has opened up between two groups that might loosely be distinguished as "philosophers of science" and "science warriors."
  • Philosophers of science, often operating under the aegis of Thomas Kuhn, recognize that science is a diverse, social enterprise that has changed over time, developed different methodologies in different subsciences, and often advanced by taking putative pseudoscience seriously, as in debunking cold fusion
  • The science warriors, by contrast, often write as if our science of the moment is isomorphic with knowledge of an objective world-in-itself—Kant be damned!—and any form of inquiry that doesn't fit the writer's criteria of proper science must be banished as "bunk." Pigliucci, typically, hasn't much sympathy for radical philosophies of science. He calls the work of Paul Feyerabend "lunacy," deems Bruno Latour "a fool," and observes that "the great pronouncements of feminist science have fallen as flat as the similarly empty utterances of supporters of intelligent design."
  • It doesn't have to be this way. The noble enterprise of submitting nonscientific knowledge claims to critical scrutiny—an activity continuous with both philosophy and science—took off in an admirable way in the late 20th century when Paul Kurtz, of the University at Buffalo, established the Committee for the Scientific Investigation of Claims of the Paranormal (Csicop) in May 1976. Csicop soon after launched the marvelous journal Skeptical Inquirer
  • Although Pigliucci himself publishes in Skeptical Inquirer, his contributions there exhibit his signature smugness. For an antidote to Pigliucci's overweening scientism 'tude, it's refreshing to consult Kurtz's curtain-raising essay, "Science and the Public," in Science Under Siege (Prometheus Books, 2009, edited by Frazier)
  • Kurtz's commandment might be stated, "Don't mock or ridicule—investigate and explain." He writes: "We attempted to make it clear that we were interested in fair and impartial inquiry, that we were not dogmatic or closed-minded, and that skepticism did not imply a priori rejection of any reasonable claim. Indeed, I insisted that our skepticism was not totalistic or nihilistic about paranormal claims."
  • Kurtz combines the ethos of both critical investigator and philosopher of science. Describing modern science as a practice in which "hypotheses and theories are based upon rigorous methods of empirical investigation, experimental confirmation, and replication," he notes: "One must be prepared to overthrow an entire theoretical framework—and this has happened often in the history of science ... skeptical doubt is an integral part of the method of science, and scientists should be prepared to question received scientific doctrines and reject them in the light of new evidence."
  • Pigliucci, alas, allows his animus against the nonscientific to pull him away from sensitive distinctions among various sciences to sloppy arguments one didn't see in such earlier works of science patriotism as Carl Sagan's The Demon-Haunted World: Science as a Candle in the Dark (Random House, 1995). Indeed, he probably sets a world record for misuse of the word "fallacy."
  • To his credit, Pigliucci at times acknowledges the nondogmatic spine of science. He concedes that "science is characterized by a fuzzy borderline with other types of inquiry that may or may not one day become sciences." Science, he admits, "actually refers to a rather heterogeneous family of activities, not to a single and universal method." He rightly warns that some pseudoscience—for example, denial of HIV-AIDS causation—is dangerous and terrible.
  • But at other points, Pigliucci ferociously attacks opponents like the most unreflective science fanatic
  • He dismisses Feyerabend's view that "science is a religion" as simply "preposterous," even though he elsewhere admits that "methodological naturalism"—the commitment of all scientists to reject "supernatural" explanations—is itself not an empirically verifiable principle or fact, but rather an almost Kantian precondition of scientific knowledge. An article of faith, some cold-eyed Feyerabend fans might say.
  • He writes, "ID is not a scientific theory at all because there is no empirical observation that can possibly contradict it. Anything we observe in nature could, in principle, be attributed to an unspecified intelligent designer who works in mysterious ways." But earlier in the book, he correctly argues against Karl Popper that susceptibility to falsification cannot be the sole criterion of science, because science also confirms. It is, in principle, possible that an empirical observation could confirm intelligent design—i.e., that magic moment when the ultimate UFO lands with representatives of the intergalactic society that planted early life here, and we accept their evidence that they did it.
  • "As long as we do not venture to make hypotheses about who the designer is and why and how she operates," he writes, "there are no empirical constraints on the 'theory' at all. Anything goes, and therefore nothing holds, because a theory that 'explains' everything really explains nothing."
  • Here, Pigliucci again mixes up what's likely or provable with what's logically possible or rational. The creation stories of traditional religions and scriptures do, in effect, offer hypotheses, or claims, about who the designer is—e.g., see the Bible.
  • Far from explaining nothing because it explains everything, such an explanation explains a lot by explaining everything. It just doesn't explain it convincingly to a scientist with other evidentiary standards.
  • A sensible person can side with scientists on what's true, but not with Pigliucci on what's rational and possible. Pigliucci occasionally recognizes that. Late in his book, he concedes that "nonscientific claims may be true and still not qualify as science." But if that's so, and we care about truth, why exalt science to the degree he does? If there's really a heaven, and science can't (yet?) detect it, so much the worse for science.
  • Pigliucci quotes a line from Aristotle: "It is the mark of an educated mind to be able to entertain a thought without accepting it." Science warriors such as Pigliucci, or Michael Ruse in his recent clash with other philosophers in these pages, should reflect on a related modern sense of "entertain." One does not entertain a guest by mocking, deriding, and abusing the guest. Similarly, one does not entertain a thought or approach to knowledge by ridiculing it.
  • Long live Skeptical Inquirer! But can we deep-six the egomania and unearned arrogance of the science patriots? As Descartes, that immortal hero of scientists and skeptics everywhere, pointed out, true skepticism, like true charity, begins at home.
  • Carlin Romano, critic at large for The Chronicle Review, teaches philosophy and media theory at the University of Pennsylvania.
  •  
    April 25, 2010 | Science Warriors' Ego Trips
Weiye Loh

Does "Inclusion" Matter for Open Government? (The Answer Is, Very Much Indeed... - 0 views

  • But in the context of the Open Government Partnership and the 70 or so countries that have already committed themselves to this or are in the process, I’m not sure that the world can afford to wait to see whether this correlation is direct, indirect or spurious, especially if we can recognize that in the world of OGP, the currency of accumulation and concentration is not raw economic wealth but rather raw political power.
  • In the same way as there appears to be an association between the rise of the Internet and increasing concentrations of wealth, one might anticipate that the rise of Internet-enabled structures of government might be associated with the increasing concentration of political power in fewer and fewer hands, and particularly the hands of those most adept at manipulating the artifacts and symbols of the new Internet age.
  • I am struck by the fact that while the OGP over and over talks about the importance and value and need for Open Government there is no similar or even partial call for Inclusive Government.  I’ve argued elsewhere how “Open”, in the absence of attention being paid to ensuring that the pre-conditions for the broadest base of participation are in place, will almost inevitably lead to the empowerment of the powerful. What I fear with the OGP is that by not paying even a modicum of attention to the issue of inclusion or inclusive development and participation, all of the idealism and energy that is displayed today in Brasilia is being directed towards the creation of the Governance equivalents of the Internet billionaires, whatever that might look like.
  • crowd sourced public policy
  •  
    Alongside the rise of the Internet and the empowerment of the Internet generation have emerged the greatest inequalities of wealth and privilege that any of the increasingly Internet-enabled economies/societies have experienced, at least since the Great Depression and perhaps since the beginnings of systematic economic record keeping.  The association between the rise of inequality and the rise of the Internet has not yet been explained and it may simply be a coincidence, but somehow I'm doubtful, and we await a newer generation of rather more critical and less dewy-eyed economists to give us the models and explanations for this co-evolution.
Weiye Loh

The Death of Postmodernism And Beyond | Philosophy Now - 0 views

  • Most of the undergraduates who will take ‘Postmodern Fictions’ this year will have been born in 1985 or after, and all but one of the module’s primary texts were written before their lifetime. Far from being ‘contemporary’, these texts were published in another world, before the students were born: The French Lieutenant’s Woman, Nights at the Circus, If on a Winter’s Night a Traveller, Do Androids Dream of Electric Sheep? (and Blade Runner), White Noise: this is Mum and Dad’s culture. Some of the texts (‘The Library of Babel’) were written even before their parents were born. Replace this cache with other postmodern stalwarts – Beloved, Flaubert’s Parrot, Waterland, The Crying of Lot 49, Pale Fire, Slaughterhouse 5, Lanark, Neuromancer, anything by B.S. Johnson – and the same applies. It’s all about as contemporary as The Smiths, as hip as shoulder pads, as happening as Betamax video recorders. These are texts which are just coming to grips with the existence of rock music and television; they mostly do not dream even of the possibility of the technology and communications media – mobile phones, email, the internet, computers in every house powerful enough to put a man on the moon – which today’s undergraduates take for granted.
  • somewhere in the late 1990s or early 2000s, the emergence of new technologies re-structured, violently and forever, the nature of the author, the reader and the text, and the relationships between them.
  • Postmodernism, like modernism and romanticism before it, fetishised [ie placed supreme importance on] the author, even when the author chose to indict or pretended to abolish him or herself. But the culture we have now fetishises the recipient of the text to the degree that they become a partial or whole author of it. Optimists may see this as the democratisation of culture; pessimists will point to the excruciating banality and vacuity of the cultural products thereby generated (at least so far).
  • Pseudo-modernism also encompasses contemporary news programmes, whose content increasingly consists of emails or text messages sent in commenting on the news items. The terminology of ‘interactivity’ is equally inappropriate here, since there is no exchange: instead, the viewer or listener enters – writes a segment of the programme – then departs, returning to a passive role. Pseudo-modernism also includes computer games, which similarly place the individual in a context where they invent the cultural content, within pre-delineated limits. The content of each individual act of playing the game varies according to the particular player.
  • The pseudo-modern cultural phenomenon par excellence is the internet. Its central act is that of the individual clicking on his/her mouse to move through pages in a way which cannot be duplicated, inventing a pathway through cultural products which has never existed before and never will again. This is a far more intense engagement with the cultural process than anything literature can offer, and gives the undeniable sense (or illusion) of the individual controlling, managing, running, making up his/her involvement with the cultural product. Internet pages are not ‘authored’ in the sense that anyone knows who wrote them, or cares. The majority either require the individual to make them work, like Streetmap or Route Planner, or permit him/her to add to them, like Wikipedia, or through feedback on, for instance, media websites. In all cases, it is intrinsic to the internet that you can easily make up pages yourself (eg blogs).
  • Where once special effects were supposed to make the impossible appear credible, CGI frequently [inadvertently] works to make the possible look artificial, as in much of Lord of the Rings or Gladiator. Battles involving thousands of individuals have really happened; pseudo-modern cinema makes them look as if they have only ever happened in cyberspace.
  • Similarly, television in the pseudo-modern age favours not only reality TV (yet another unapt term), but also shopping channels, and quizzes in which the viewer calls to guess the answer to riddles in the hope of winning money.
  • The purely ‘spectacular’ function of television, as with all the arts, has become a marginal one: what is central now is the busy, active, forging work of the individual who would once have been called its recipient. In all of this, the ‘viewer’ feels powerful and is indeed necessary; the ‘author’ as traditionally understood is either relegated to the status of the one who sets the parameters within which others operate, or becomes simply irrelevant, unknown, sidelined; and the ‘text’ is characterised both by its hyper-ephemerality and by its instability. It is made up by the ‘viewer’, if not in its content then in its sequence – you wouldn’t read Middlemarch by going from page 118 to 316 to 401 to 501, but you might well, and justifiably, read Ceefax that way.
  • A pseudo-modern text lasts an exceptionally brief time. Unlike, say, Fawlty Towers, reality TV programmes cannot be repeated in their original form, since the phone-ins cannot be reproduced, and without the possibility of phoning-in they become a different and far less attractive entity.
  • If scholars give the date they referenced an internet page, it is because the pages disappear or get radically re-cast so quickly. Text messages and emails are extremely difficult to keep in their original form; printing out emails does convert them into something more stable, like a letter, but only by destroying their essential, electronic state.
  • The cultural products of pseudo-modernism are also exceptionally banal
  • Much text messaging and emailing is vapid in comparison with what people of all educational levels used to put into letters.
  • A triteness, a shallowness dominates all.
  • In music, the pseudo-modern superseding of the artist-dominated album as monolithic text by the downloading and mix-and-matching of individual tracks on to an iPod, selected by the listener, was certainly prefigured by the music fan’s creation of compilation tapes a generation ago. But a shift has occurred, in that what was a marginal pastime of the fan has become the dominant and definitive way of consuming music, rendering the idea of the album as a coherent work of art, a body of integrated meaning, obsolete.
  • To a degree, pseudo-modernism is no more than a technologically motivated shift to the cultural centre of something which has always existed (similarly, metafiction has always existed, but was never so fetishised as it was by postmodernism). Television has always used audience participation, just as theatre and other performing arts did before it; but as an option, not as a necessity: pseudo-modern TV programmes have participation built into them.
  • Whereas postmodernism called ‘reality’ into question, pseudo-modernism defines the real implicitly as myself, now, ‘interacting’ with its texts. Thus, pseudo-modernism suggests that whatever it does or makes is what is reality, and a pseudo-modern text may flourish the apparently real in an uncomplicated form: the docu-soap with its hand-held cameras (which, by displaying individuals aware of being regarded, give the viewer the illusion of participation); The Office and The Blair Witch Project, interactive pornography and reality TV; the essayistic cinema of Michael Moore or Morgan Spurlock.
  • whereas postmodernism favoured the ironic, the knowing and the playful, with their allusions to knowledge, history and ambivalence, pseudo-modernism’s typical intellectual states are ignorance, fanaticism and anxiety
  • pseudo-modernism lashes fantastically sophisticated technology to the pursuit of medieval barbarism – as in the uploading of videos of beheadings onto the internet, or the use of mobile phones to film torture in prisons. Beyond this, the destiny of everyone else is to suffer the anxiety of getting hit in the cross-fire. But this fatalistic anxiety extends far beyond geopolitics, into every aspect of contemporary life; from a general fear of social breakdown and identity loss, to a deep unease about diet and health; from anguish about the destructiveness of climate change, to the effects of a new personal ineptitude and helplessness, which yield TV programmes about how to clean your house, bring up your children or remain solvent.
  • Pseudo-modernism belongs to a world pervaded by the encounter between a religiously fanatical segment of the United States, a largely secular but definitionally hyper-religious Israel, and a fanatical sub-section of Muslims scattered across the planet: pseudo-modernism was not born on 11 September 2001, but postmodernism was interred in its rubble.
  • The pseudo-modernist communicates constantly with the other side of the planet, yet needs to be told to eat vegetables to be healthy, a fact self-evident in the Bronze Age. He or she can direct the course of national television programmes, but does not know how to make him or herself something to eat – a characteristic fusion of the childish and the advanced, the powerful and the helpless. For varying reasons, these are people incapable of the “disbelief of Grand Narratives” which Lyotard argued typified postmodernists.
  •  
    Postmodern philosophy emphasises the elusiveness of meaning and knowledge. This is often expressed in postmodern art as a concern with representation and an ironic self-awareness. And the argument that postmodernism is over has already been made philosophically. There are people who have essentially asserted that for a while we believed in postmodern ideas, but not any more, and from now on we're going to believe in critical realism. The weakness in this analysis is that it centres on the academy, on the practices and suppositions of philosophers who may or may not be shifting ground or about to shift - and many academics will simply decide that, finally, they prefer to stay with Foucault [arch postmodernist] than go over to anything else. However, a far more compelling case can be made that postmodernism is dead by looking outside the academy at current cultural production.
Jody Poh

Web tools help protect human rights activists - 7 views

1) I think it depends on what is being censored. I think things like opinions should not be censored because it violates a person's natural rights. However, one can argue that online censors...

"Online censorship" "digital rights" 'Internet privacy tools"

juliet huang

Virus as a call for help, as a part of a larger social problem - 7 views

I agree with this view, and I would add that, yes, it is probably more profitable for the capitalist, wired society to continue creating anti-virus programs, opening more IT repair shops, etc., than to...

Virus

Inosha Wickrama

Pirate Bay Victory - 11 views

http://www.telegraph.co.uk/technology/news/4686584/Pirate-Bay-victory-after-illegal-file-sharing-charges-dropped.html Summary: The Pirate Bay, the biggest file-sharing internet site which was accu...

Weiye Loh

Skepticblog » Flaws in Creationist Logic - 0 views

  • Ankur is making a false analogy here by confusing the origin of life with the later evolution of life. The watch analogy was specifically offered to say that something which is complex and displays design must have been created and designed by a creator. Therefore, since we see complexity and design in life, it too must have had a creator. But all the life that we know – the life being pointed to as complex and designed – is the result of a process (evolution) that has worked over billions of years. Life can grow, reproduce, and evolve. Watches cannot – so it is not a valid analogy.
  • Life did emerge from non-living matter, but that is irrelevant to the point. There was likely a process of chemical evolution – but still the non-living precursors to life were just chemicals, they did not display the design or complexity apparent in a watch. Ankur’s attempt to rescue this false analogy fails. And before someone has a chance to point it out – yes, I said that life displays design. It displays bottom-up evolutionary design, not top-down intelligent design. This refers to another fallacy of creationists – the assumption that all design is top down. But nature demonstrates that this is a false assumption.
  • An increase in variation is an increase in information – it takes more information to describe the greater variety. By any actual definition of information – variation increases information. Also, as I argued, when you have gene duplication you are physically increasing the number of information carrying units – that is an increase in information. There is simply no way to avoid the mountain of genetic evidence that genetic information has increased over evolutionary time through evolutionary processes.
  •  
    FLAWS IN CREATIONIST LOGIC
Weiye Loh

Essay - The End of Tenure? - NYTimes.com - 0 views

  • The cost of a college education has risen, in real dollars, by 250 to 300 percent over the past three decades, far above the rate of inflation. Elite private colleges can cost more than $200,000 over four years. Total student-loan debt, at nearly $830 billion, recently surpassed total national credit card debt. Meanwhile, university presidents, who can make upward of $1 million annually, gravely intone that the $50,000 price tag doesn’t even cover the full cost of a year’s education.
  • Then your daughter reports that her history prof is a part-time adjunct, who might be making $1,500 for a semester’s work. There’s something wrong with this picture.
  • The higher-ed jeremiads of the last generation came mainly from the right. But this time, it’s the tenured radicals — or at least the tenured liberals — who are leading the charge. Hacker is a longtime contributor to The New York Review of Books and the author of the acclaimed study “Two Nations: Black and White, Separate, Hostile, Unequal,”
  • ...6 more annotations...
  • And these two books arrive at a time, unlike the early 1990s, when universities are, like many students, backed into a fiscal corner. Taylor writes of walking into a meeting one day and learning that Columbia’s endowment had dropped by “at least” 30 percent. Simply brushing off calls for reform, however strident and scattershot, may no longer be an option.
  • The labor system, for one thing, is clearly unjust. Tenured and tenure-track professors earn most of the money and benefits, but they’re a minority at the top of a pyramid. Nearly two-thirds of all college teachers are non-tenure-track adjuncts like Matt Williams, who told Hacker and Dreifus he had taught a dozen courses at two colleges in the Akron area the previous year, earning the equivalent of about $8.50 an hour by his reckoning. It is foolish that graduate programs are pumping new Ph.D.’s into a world without decent jobs for them. If some programs were phased out, teaching loads might be raised for some on the tenure track, to the benefit of undergraduate education.
  • it might well be time to think about vetoing Olympic-quality athletic facilities and trimming the ranks of administrators. At Williams, a small liberal arts college renowned for teaching, 70 percent of employees do something other than teach.
  • But Hacker and Dreifus go much further, all but calling for an end to the role of universities in the production of knowledge. Spin off the med schools and research institutes, they say. University presidents “should be musing about education, not angling for another center on antiterrorist technologies.” As for the humanities, let professors do research after-hours, on top of much heavier teaching schedules. “In other occupations, when people feel there is something they want to write, they do it on their own time and at their own expense,” the authors declare. But it seems doubtful that, say, “Battle Cry of Freedom,” the acclaimed Civil War history by Princeton’s James McPherson, could have been written on the weekends, or without the advance spadework of countless obscure monographs. If it is false that research invariably leads to better teaching, it is equally false to say that it never does.
  • Hacker’s home institution is the public Queens College, which has a spartan budget, commuter students and a three-or-four-course teaching load per semester. Taylor, by contrast, has spent his career on the elite end of higher education, but he is no less disillusioned. He shares Hacker and Dreifus’s concerns about overspecialized research and the unintended effects of tenure, which he believes blocks the way to fresh ideas. Taylor has backed away from some of the most incendiary proposals he made last year in a New York Times Op-Ed article, cheekily headlined “End the University as We Know It” — an article, he reports, that drew near-universal condemnation from academics and near-universal praise from everyone else. Back then, he called for the flat-out abolition of traditional departments, to be replaced by temporary, “problem-centered” programs focusing on issues like Mind, Space, Time, Life and Water. Now, he more realistically suggests the creation of cross-disciplinary “Emerging Zones.” He thinks professors need to get over their fear of corporate partnerships and embrace efficiency-enhancing technologies.
  • It is not news that America is a land of haves and have-nots. It is news that colleges are themselves dividing into haves and have-nots; they are becoming engines of inequality. And that — not whether some professors can afford to wear Marc Jacobs — is the real scandal.
  •  
    The End of Tenure? By Christopher Shea. Published: September 3, 2010.
Weiye Loh

The Matthew Effect § SEEDMAGAZINE.COM - 0 views

  • For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away. —Matthew 25:29
  • Sociologist Robert K. Merton was the first to publish a paper on the similarity between this phrase in the Gospel of Matthew and the realities of how scientific research is rewarded
  • Even if two researchers do similar work, the most eminent of the pair will get more acclaim, Merton observed—more praise within the community, more or better job offers, better opportunities. And it goes without saying that even if a graduate student publishes stellar work in a prestigious journal, their well-known advisor is likely to get more of the credit. 
  • ...7 more annotations...
  • Merton published his theory, called the “Matthew Effect,” in 1968. At that time, the average age of a biomedical researcher in the US receiving his or her first significant funding was 35 or younger. That meant that researchers who had little in terms of fame (at 35, they would have completed a PhD and a post-doc and would be just starting out on their own) could still get funded if they wrote interesting proposals. So Merton’s observation about getting credit for one’s work, however true in terms of prestige, wasn’t adversely affecting the funding of new ideas.
  • Over the last 40 years, the importance of fame in science has increased. The effect has compounded because famous researchers have gathered the smartest and most ambitious graduate students and post-docs around them, so that each notable paper from a high-wattage group bootstraps their collective power. The famous grow more famous, and the younger researchers in their coterie are able to use that fame to their benefit. The effect of this concentration of power has finally trickled down to the level of funding: The average age on first receipt of the most common “starter” grants at the NIH is now almost 42. This means younger researchers without the strength of a fame-based community are cut out of the funding process, and their ideas, separate from an older researcher’s sphere of influence, don’t get pursued. This causes a founder effect in modern science, where the prestigious few dictate the direction of research. It’s not only unfair—it’s also actively dangerous to science’s progress.
  • How can we fund science in a way that is fair? By judging researchers independently of their fame—in other words, not by how many times their papers have been cited. By judging them instead via new measures, measures that until recently have been too ephemeral to use.
  • Right now, the gold standard worldwide for measuring a scientist’s worth is the number of times his or her papers are cited, along with the importance of the journal where the papers were published. Decisions of funding, faculty positions, and eminence in the field all derive from a scientist’s citation history. But relying on these measures entrenches the Matthew Effect: Even when the lead author is a graduate student, the majority of the credit accrues to the much older principal investigator. And an influential lab can inflate its citations by referring to its own work in papers that themselves go on to be heavy-hitters.
  • what is most profoundly unbalanced about relying on citations is that the paper-based metric distorts the reality of the scientific enterprise. Scientists make data points, narratives, research tools, inventions, pictures, sounds, videos, and more. Journal articles are a compressed and heavily edited version of what happens in the lab.
  • We have the capacity to measure the quality of a scientist across multiple dimensions, not just in terms of papers and citations. Was the scientist’s data online? Was it comprehensible? Can I replicate the results? Run the code? Access the research tools? Use them to write a new paper? What ideas were examined and discarded along the way, so that I might know the reality of the research? What is the impact of the scientist as an individual, rather than the impact of the paper he or she wrote? When we can see the scientist as a whole, we’re less prone to relying on reputation alone to assess merit.
  • Multidimensionality is one of the only counters to the Matthew Effect we have available. In forums where this kind of meritocracy prevails over seniority, like Linux or Wikipedia, the Matthew Effect is much less pronounced. And we have the capacity to measure each of these individual factors of a scientist’s work, using the basic discourse of the Web: the blog, the wiki, the comment, the trackback. We can find out who is talented in a lab, not just who was smart enough to hire that talent. As we develop the ability to measure multiple dimensions of scientific knowledge creation, dissemination, and re-use, we open up a new way to recognize excellence. What we can measure, we can value.
  •  
    WHEN IT COMES TO SCIENTIFIC PUBLISHING AND FAME, THE RICH GET RICHER AND THE POOR GET POORER. HOW CAN WE BREAK THIS FEEDBACK LOOP?
Weiye Loh

To Die of Having Lived: an article by Richard Rapport | The American Scholar - 0 views

  • Although it may be a form of arrogance to attempt the management of one’s own death, is it better to surrender that management to the arrogance of someone else? We know we can’t avoid dying, but perhaps we can avoid dying badly.
  • Dodging a bad death has become more complicated over the past 30 or 40 years. Before the advent of technological creations that permit vital functions to be sustained so well artificially, medical ethics were less obstructed by abstract definitions of death.
  • generally agreed upon criteria for brain death have simplified some of these confusions, but they have not solved them. The broad middle ground between our usual health and consciousness as the expected norm on the one hand, and clear death of the brain on the other, lacks certainty.
    • Weiye Loh
       
      Isn't it always the case? Dichotomous relationships aren't clearly and equally demarcated, but somehow we attempt to split them up... through polemical discourses and rhetoric...
  • ...13 more annotations...
  • Doctors and other health-care workers can provide patients and families with probabilities for improvement or recovery, but statistics are hardly what is wanted. Even after profound injury or the diagnosis of an illness that statistically is nearly certain to be fatal, what people hear is the word nearly. How do we not allow the death of someone who might be saved? How do we avoid the equally intolerable salvation of a clinically dead person?
    • Weiye Loh
       
      In what situations do we hear the word "nearly" and in what situations do we hear the word "certain"? When we're dealing with a person's life, we hear "nearly", but when we're dealing with climate science we hear "certain"? 
  • Injecting political agendas into these end-of-life complexities only confuses the problem without providing a solution.
  • The questions are how, when, and on whose terms we depart. It is curious that people might be convinced to avoid confronting death while they are healthy, and that society tolerates ad hominem arguments that obstruct rational debate over an authentic problem of ethics in an uncertain world.
  • Any seriously ill older person who winds up in a modern CCU immediately yields his autonomy. Even if the doctors, nurses, and staff caring for him are intelligent, properly educated, humanistically motivated, and correct in the diagnosis, they are manipulated not only by the tyranny of technology but also by the rules established in their hospital. In addition, regulations of local and state licensing agencies and the federal government dictate the parameters of what the hospital workers do and how they do it, and every action taken is heavily influenced by legal experts committed to their client’s best interest—values frequently different from the patient’s. Once an acutely ill patient finds himself in this situation, everything possible will be done to save him; he is in no position to offer an opinion.
  • Eventually, after hours or days (depending on the illness and who is involved in the care), the wisdom of continuing treatment may come into question. But by then the patient will likely have been intubated and placed on a ventilator, a feeding tube may have been inserted, a catheter placed in the bladder, IVs started in peripheral veins or threaded through a major blood vessel near the heart, and monitors attached to record an EKG, arterial blood pressure, temperature, respirations, oxygen saturation, even pressure inside the skull. Sequential pressure devices will have been wrapped around the legs. All the digital marvels have alarms, so if one isn’t working properly, an annoying beep, like the sound of a backing truck, will fill the patient’s room. Vigilant nurses will add drugs by the dozens to the IV or push them into ports. Families will hover uncertainly. Meanwhile, tens and perhaps hundreds of thousands of dollars will have been transferred from one large corporation—an insurer of some kind—to another large corporation—a health care delivery system of some kind.
    • Weiye Loh
       
      Perhaps, then, the value of a life lies not so much in the life itself as in the transactions it generates.
  • While the expense of the drugs, manpower, and technology required to make a diagnosis and deliver therapy does sop up resources and thereby deny treatment that might be more fruitful for others, including the 46.3 million Americans who, according to the Census Bureau, have no health insurance, that isn’t the real dilemma of the critical care unit.
  • the problem isn’t getting into or out of a CCU; the predicament is in knowing who should be there in the first place.
  • Before we become ill, we tend to assume that everything can be treated and treated successfully. The prelate in Willa Cather’s Death Comes for the Archbishop was wiser. Approaching the end, he said to a younger priest, “I shall not die of a cold, my son. I shall die of having lived.”
  • The best way to avoid unwanted admission to a critical care unit at or near the end of life is to write an advance directive (a living will or durable power of attorney for health care) when healthy.
  • Yet not many people do this and, more regrettably, the document is often not included in the patient's chart or goes unnoticed.
  • Since we are sure to die of having lived, we should prepare for death before the last minute. Entire corporations are dedicated to teaching people how to retire well. All of their written materials, Web sites, and seminars begin with the same advice: start planning early. Shouldn’t we at least occasionally think about how we want to leave our lives?
  • Flannery O’Connor, who died young of systemic lupus, wrote, “Sickness before death is a very appropriate thing and I think those who don’t have it miss one of God’s mercies.”
  • Because we understand the metaphor of conflict so well, we are easily sold on the idea that we must resolutely fight against our afflictions (although there was once an article in The Onion titled “Man Loses Cowardly Battle With Cancer”). And there is a place to contest an abnormal metabolism, a mutation, a trauma, or an infection. But there is also a place to surrender. When the organs have failed, when the mind has dissolved, when the body that has faithfully housed us for our lifetime has abandoned us, what’s wrong with giving up?
  •  
    Spring 2010. To Die of Having Lived: A neurological surgeon reflects on what patients and their families should and should not do when the end draws near.
Weiye Loh

Review: What Rawls Hath Wrought | The National Interest - 0 views

  • The primacy of this ideal is very recent. In the late 1970s, clearly a full thirty years after World War II, it all came about quite abruptly. And the ascendancy of rights as we now understand them came as a response, in part, to developments in the academy.
  • There were versions of utilitarianism, some scornful of rights (with Jeremy Bentham describing them as “nonsense upon stilts”), others that accepted that rights have important social functions (as in John Stuart Mill), but none of them asserted that rights were fundamental in ethical and political thinking.
  • There were various kinds of historicism—the English thinker Michael Oakeshott’s conservative traditionalism and the American scholar Richard Rorty’s postmodern liberalism, for example—that viewed human values as cultural creations, whose contents varied significantly from society to society. There was British theorist Isaiah Berlin’s value pluralism, which held that while some values are universally human, they conflict with one another in ways that do not always have a single rational solution. There were also varieties of Marxism which understood rights in explicitly historical terms.
  • ...2 more annotations...
  • human rights were discussed—when they were mentioned at all—as demands made in particular times and places. Some of these demands might be universal in scope—that torture be prohibited everywhere was frequently (though not always) formulated in terms of an all-encompassing necessity, but no one imagined that human rights comprised the only possible universal morality.
  • the notion that rights are the foundation of society came only with the rise of the Harvard philosopher John Rawls’s vastly influential A Theory of Justice (1971). In the years following, it slowly came to be accepted that human rights were the bottom line in political morality.