
Home/ New Media Ethics 2009 course/ Group items matching "Programming" in title, tags, annotations or url


What humans know that Watson doesn't - CNN.com

  • One of the most frustrating experiences produced by the winter from hell is dealing with the airlines' automated answer systems. Your flight has just been canceled and every second counts in getting an elusive seat. Yet you are stuck in an automated menu spelling out the name of your destination city.
  • Even more frustrating is knowing that you will never get to ask the question you really want to ask, as it isn't an option: "If I drive to Newark and board my flight to Tel Aviv there, will you cancel my whole trip, as I haven't started from my ticketed airport of origin, Ithaca?"
  • A human would immediately understand the question and give you an answer. That's why knowledgeable travelers rush to the nearest airport when they experience a cancellation, so they have a chance to talk to a human agent who can override the computer, rather than rebook by phone (more likely wait on hold and listen to messages about how wonderful a destination Tel Aviv is) or talk to a computer.
  • ...6 more annotations...
  • There is no doubt the IBM supercomputer Watson gave an impressive performance on "Jeopardy!" this week. But I was worried by the computer's biggest flub Tuesday night. In answer to the question about naming a U.S. city whose first airport is named after a World War II hero and its second after a World War II battle, it gave Toronto, Ontario. Not even close!
  • Both the humans on the program knew the correct answer: Chicago. Even a famously geographically challenged person like me knew it.
  • Why did I know it? Because I have spent enough time stranded at O'Hare to have visited the monument to Butch O'Hare in the terminal. Watson, who has not, came up with the wrong answer. This reveals precisely what Watson lacks -- embodiment.
  • Watson has never traveled anywhere. Humans travel, so we know all sorts of stuff about travel and airports that a computer doesn't know. It is the informal, tacit, embodied knowledge that is the hardest for computers to grasp, but it is often such knowledge that is most crucial to our lives.
  • Providing unique answers to questions limited to around 25 words is not the same as dealing with real problems of an emotionally distraught passenger in an open system where there may not be a unique answer.
  • Watson beating the pants out of us on "Jeopardy!" is fun -- rather like seeing a tractor beat a human tug-of-war team. Machines have always been better than humans at some tasks.

Cancer resembles life 1 billion years ago, say astrobiologists - microbiology, genomics...

  • astrobiologists, working with oncologists in the US, have suggested that cancer resembles ancient forms of life that flourished between 600 million and 1 billion years ago.
  • Read more about what this discovery means for cancer research.
  • The genes that controlled the behaviour of these early multicellular organisms still reside within our own cells, managed by more recent genes that keep them in check. It's when these newer controlling genes fail that the older mechanisms take over, and the cell reverts to its earlier behaviours and grows out of control.
  • ...11 more annotations...
  • The new theory, published in the journal Physical Biology, has been put forward by two leading figures in the world of cosmology and astrobiology: Paul Davies, director of the Beyond Center for Fundamental Concepts in Science, Arizona State University; and Charles Lineweaver, from the Australian National University.
  • According to Lineweaver, this suggests that cancer is an atavism, or an evolutionary throwback.
  • In the paper, they suggest that a close look at cancer shows similarities with early forms of multicellular life.
  • “Unlike bacteria and viruses, cancer has not developed the capacity to evolve into new forms. In fact, cancer is better understood as the reversion of cells to the way they behaved a little over one billion years ago, when humans were nothing more than loose-knit colonies of only partially differentiated cells.” “We think that the tumours that develop in cancer patients today take the same form as these simple cellular structures did more than a billion years ago,” he said.
  • One piece of evidence to support this theory is that cancers appear in virtually all metazoans, with the notable exception of the bizarre naked mole rat. "This quasi-ubiquity suggests that the mechanisms of cancer are deep-rooted in evolutionary history, a conjecture that receives support from both paleontology and genetics," they write.
  • the genes that controlled this early multi-cellular form of life are like a computer operating system's 'safe mode', and when there are failures or mutations in the more recent genes that manage the way cells specialise and interact to form the complex life of today, then the earlier level of programming takes over.
  • Their notion is in contrast to a prevailing theory that cancer cells are 'rogue' cells that evolve rapidly within the body, overcoming the normal slew of cellular defences.
  • However, Davies and Lineweaver point out that cancer cells are highly cooperative with each other, if competing with the host's cells. This suggests a pre-existing complexity that is reminiscent of early multicellular life.
  • cancers' manifold survival mechanisms are predictable, and unlikely to emerge spontaneously through evolution within each individual in such a consistent way.
  • The good news is that this means combating cancer is not necessarily as complex as it would be if the cancers were rogue cells evolving new and novel defence mechanisms within the body. Instead, because cancers fall back on the same evolved mechanisms that were used by early life, we can expect them to remain predictable; if they're susceptible to a treatment, it's unlikely they'll evolve new ways to get around it.
  • "If the atavism hypothesis is correct, there are new reasons for optimism," they write.

'There Is No Values-Free Form Of Education,' Says U.S. Philosopher - Radio Fr...

  • from the earliest years, education should be based primarily on exploration, understanding in depth, and the development of logical, critical thinking. Such an emphasis, she says, not only produces a citizenry capable of recognizing and rooting out political jingoism and intolerance. It also produces people capable of questioning authority and perceived wisdom in ways that enhance innovation and economic competitiveness. Nussbaum warns against a narrow educational focus on technical competence.
  • a successful, long-term democracy depends on a citizenry with certain qualities that can be fostered by education.
  • The first is the capacity we associate in the Western tradition with Socrates, but it certainly appears in all traditions -- that is, the ability to think critically about proposals that are brought your way, to analyze an argument, to distinguish a good argument from a bad argument. And just in general, to lead what Socrates called “the examined life.” Now that’s, of course, important because we know that people are very prone to go along with authority, with fashion, with peer pressure. And this kind of critical enlivened citizenry is the only thing that can keep democracy vital.
  • ...15 more annotations...
  • it can be trained from very early in a child’s education. There’re ways that you can get quite young children to recognize what’s a good argument and what’s a bad argument. And as children grow older, it can be done in a more and more sophisticated form until by the time they’re undergraduates in universities they would be studying Plato’s dialogues for example and really looking at those tricky arguments and trying to figure out how to think. And this is important not just for the individual thinking about society, but it’s important for the way people talk to each other. In all too many public discussions people just throw out slogans and they throw out insults. And what democracy needs is listening. And respect. And so when people learn how to analyze an argument, then they look at what the other person’s saying differently. And they try to take it apart, and they think: “Well, do I share some of those views and where do I differ here?” and so on. And this really does produce a much more deliberative, respectful style of public interaction.
  • The second [quality] is what I call “the ability to think as a citizen of the whole world.” We’re all narrow and this is again something that we get from our animal heritage. Most non-human animals just think about the group. But, of course, in this world we need to think, first of all, about our whole nation -- its many different groups, minority and majority. And then we need to think outside the nation, about how problems involving, let’s say, the environment or global economy and so on need cooperative resolution that brings together people from many different nations.
  • That’s complicated and it requires learning a lot of history, and it means learning not just to parrot some facts about history but to think critically about how to assess historical evidence. It means learning how to think about the global economy. And then I think particularly important in this era, it means learning something about the major world religions. Learning complicated, nonstereotypical accounts of those religions because there’s so much fear that’s circulating around in every country that’s based usually on just inadequate stereotypes of what Muslims are or whatever. So knowledge can at least begin to address that.
  • the third thing, which I think goes very closely with the other two, is what I call “the narrative imagination,” which is the ability to put yourself in the shoes of another person to have some understanding of how the world looks from that point of view. And to really have that kind of educated sympathy with the lives of others. Now again this is something we come into the world with. Psychologists have now found that babies less than a year old are able to take up the perspective of another person and do things, see things from that perspective. But it’s very narrow and usually people learn how to think about what their parents are thinking and maybe other family members but we need to extend that and develop it, and learn how the world looks from the point of view of minorities in our own culture, people outside our culture, and so on.
  • since we can’t go to all the places that we need to understand -- it’s accomplished by reading narratives, reading literature, drama, participating through the arts in the thought processes of another culture. So literature and the arts are the major ways we would develop and extend that capacity.
  • For many years, the leading model of development ... used by economists and international agencies measuring welfare was simply that for a country to develop means to increase [its] gross domestic product per capita. Now, in recent years, there has been a backlash to that because people feel that it just doesn’t ask enough about what goods are really doing for people, what can people really do and be.
  • so since the 1990s the United Nations’ development program has produced annually what’s called a “Human Development Report” that looks at things like access to education, access to health care. In other words, a much richer menu of human chances and opportunities that people have. And at the theoretical end I’ve worked for about 20 years now with economist Amartya Sen, who won the Nobel Prize in 1998 for economics. And we’ve developed this account -- so for us, what it is for a country to do better is to enhance the set of capabilities, meaning the substantial opportunities that people have to lead meaningful, fruitful lives. And then I go on to focus on a certain core group of those capabilities that I think ought to be protected by constitutional law in every country.
  • Life; health; bodily integrity; the development of senses, imagination, and thought; the development of practical reason; opportunities to have meaningful affiliations both friendly and political with other people; the ability to have emotional health -- not to be in other words dominated by overwhelming fear and so on; the ability to have a productive relationship with the environment and the world of nature; the ability to play and have leisure time, which is something that I think people don’t think enough about; and then, finally, control over one’s material and social environment, some measure of control. Now of course, each of these is very abstract, and I specify them further. Although I also think that each country needs to finally specify them with its own particular circumstances in view.
  • when kids learn in a classroom that just makes them sit in a chair, well, they can take in something in their heads, but it doesn’t make them competent at negotiating in the world. And so starting, at least, with Jean Jacques Rousseau in the 18th century, people thought: “Well, if we really want people to be independent citizens in a democracy that means that we can’t have whole classes of people who don’t know how to do anything, who are just simply sitting there waiting to be waited on in practical matters.” And so the idea that children should participate in their practical environment came out of the initial democratizing tendencies that went running through the 18th century.
  • even countries who absolutely do not want that kind of engaged citizenry see that for the success of business these abilities are pretty important. Both Singapore and China have conducted mass education reforms over the last five years because they realized that their business cultures don’t have enough imagination and they also don’t have enough critical thinking, because you can have awfully corrupt business culture if no one is willing to say the unpleasant word or make a criticism.
  • So they have striven to introduce more critical thinking and more imagination into their curricula. But, of course, for them, they want to cordon it off -- they want to do it in the science classroom, in the business classroom, but not in the politics classroom. Well, we’ll see -- can they do that? Can they segment it that way? I think democratic thinking is awfully hard to segment as current events in the Middle East are showing us. It does have the tendency to spread.
  • so maybe the people in Singapore and China will not like the end result of what they tried to do or maybe the reform will just fail, which is equally likely -- I mean the educational reform.
  • if you really don’t want democracy, this is not the education for you. It had its origins in the ancient Athenian democracy which was a very, very strong participatory democracy and it is most at home in really true democracy, where our whole goal is to get each and every person involved and to get them thinking about things. So, of course, if politicians have ambivalence about that goal they may well not want this kind of education.
  • when we bring up children in the family or in the school, we are always engineering. I mean, there is no values-free form of education in the world. Even an education that just teaches you a list of facts has values built into it. Namely, it gives a negative value to imagination and to the critical faculties and a very high value to a kind of rote, technical competence. So, you can't avoid shaping children.
  • Increasingly the child should be in control and should become free. And that's what the critical thinking is all about -- it's about promoting freedom as the child goes on. So, the end product should be an adult who is really thinking for him- or herself about the direction of society. But you don't get freedom just by saying, "Oh, you are free." Progressive educators that simply stopped teaching found out very quickly that that didn't produce freedom. Even some of the very extreme forms of progressive school where children were just allowed to say every day what it was they wanted to learn, they found that didn't give the child the kind of mastery of self and of the world that you really need to be a free person.

Sex: New York City unveils condom finder for smartphones, users satisfied - National Cu...

  • The application uses GPS technology and is available for the iPhone and Android devices. The over-the-air (OTA) downloadable app has access to New York City's more than 1,000 free condom outlets. When a user launches a search for rubbers, the nearest five locations are shown, allowing for enough time to act before the mood is lost.
  • The smartphone application that locates free condoms was a huge hit for New Yorkers this Valentine's Day, users said Tuesday, Feb. 15. The program was launched by the New York City Health Department to help the turned-on find protection at a moment's notice--no matter where they are.
  • The health department has come under significant fire for its free condom initiative. Parents have complained that it urges children to experiment with sex. "We're not promoting sex," Sweeney said. "We're promoting safer sex. In New York City and around the country, adolescents and pre-adolescents have sex whether you give them condoms or not."
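The "nearest five locations" behaviour described above is a standard nearest-neighbour lookup over geographic coordinates. Here is a minimal sketch in Python; the outlet names and coordinates are invented for illustration (the real app presumably queries the Health Department's outlet database):

```python
import math

# Hypothetical sample data: (name, latitude, longitude) of free condom outlets.
OUTLETS = [
    ("Outlet A", 40.7580, -73.9855),
    ("Outlet B", 40.7128, -74.0060),
    ("Outlet C", 40.7306, -73.9866),
    ("Outlet D", 40.6782, -73.9442),
    ("Outlet E", 40.7831, -73.9712),
    ("Outlet F", 40.7484, -73.9857),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_outlets(lat, lon, n=5):
    """Return the n outlets closest to the user's GPS fix."""
    return sorted(OUTLETS, key=lambda o: haversine_km(lat, lon, o[1], o[2]))[:n]

# A user standing in Times Square gets the five closest outlets:
for name, olat, olon in nearest_outlets(40.7589, -73.9851):
    print(name)
```

For a thousand-plus outlets a linear scan like this is still fast enough; a production service would more likely use a spatial index, but the idea is the same.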

Skepticblog » A Creationist Challenge

  • The commenter starts with some ad hominems, asserting that my post is biased and emotional. They provide no evidence or argument to support this assertion. And of course they don’t even attempt to counter any of the arguments I laid out. They then follow up with an argument from authority – they can link to a PhD creationist – so there.
  • The article that the commenter links to is by Henry M. Morris, founder of the Institute for Creation Research (ICR) – a young-earth creationist organization. Morris (who died in 2006 following a stroke) held a PhD – in civil engineering. This point is irrelevant to his actual arguments; I bring it up only to put the commenter’s argument from authority into perspective. No disrespect to engineers – but they are not biologists. They have no expertise relevant to the question of evolution – no more than my MD gives me. So let’s stick to the arguments themselves.
  • The article by Morris is an overview of so-called Creation Science, of which Morris was a major architect. The arguments he presents are all old creationist canards, long deconstructed by scientists. In fact I address many of them in my original refutation. Creationists generally are not very original – they recycle old arguments endlessly, regardless of how many times they have been destroyed.
  • ...26 more annotations...
  • Morris also makes heavy use of the “taking a quote out of context” strategy favored by creationists. His quotes are often from secondary sources and are incomplete.
  • A more scholarly (i.e. intellectually honest) approach would be to cite actual evidence to support a point. If you are going to cite an authority, then make sure the quote is relevant, in context, and complete.
  • And even better, cite a number of sources to show that the opinion is representative. Rather we get single, partial, and often outdated quotes without context.
  • (nature is not, it turns out, cleanly divided into “kinds”, which have no operational definition). He also repeats this canard: Such variation is often called microevolution, and these minor horizontal (or downward) changes occur fairly often, but such changes are not true “vertical” evolution. This is the microevolution/macroevolution false dichotomy. It is only “often called” this by creationists – not by actual evolutionary scientists. There is no theoretical or empirical division between macro and micro evolution. There is just evolution, which can result in the full spectrum of change from minor tweaks to major changes.
  • Morris wonders why there are no “dats” – dog-cat transitional species. He misses the hierarchical nature of evolution. As evolution proceeds, and creatures develop a greater and greater evolutionary history behind them, they increasingly are committed to their body plan. This results in a nested hierarchy of groups – which is reflected in taxonomy (the naming scheme of living things).
  • once our distant ancestors developed the basic body plan of chordates, they were committed to that body plan. Subsequent evolution resulted in variations on that plan, each of which then developed further variations, etc. But evolution cannot go backward, undo evolutionary changes and then proceed down a different path. Once an evolutionary line has developed into a dog, evolution can produce variations on the dog, but it cannot go backwards and produce a cat.
  • Stephen Jay Gould described this distinction as the difference between disparity and diversity. Disparity (the degree of morphological difference) actually decreases over evolutionary time, as lineages go extinct and the surviving lineages are committed to fewer and fewer basic body plans. Meanwhile, diversity (the number of variations on a body plan) within groups tends to increase over time.
  • the kind of evolutionary changes that were happening in the past, when species were relatively undifferentiated (compared to contemporary species) is indeed not happening today. Modern multi-cellular life has 600 million years of evolutionary history constraining their future evolution – which was not true of species at the base of the evolutionary tree. But modern species are indeed still evolving.
  • Here is a list of research documenting observed instances of speciation. The list is from 1995, and there are more recent examples to add to the list. Here are some more. And here is a good list with references of more recent cases.
  • Next Morris tries to convince the reader that there is no evidence for evolution in the past, focusing on the fossil record. He repeats the false claim (again, which I already dealt with) that there are no transitional fossils: Even those who believe in rapid evolution recognize that a considerable number of generations would be required for one distinct “kind” to evolve into another more complex kind. There ought, therefore, to be a considerable number of true transitional structures preserved in the fossils — after all, there are billions of non-transitional structures there! But (with the exception of a few very doubtful creatures such as the controversial feathered dinosaurs and the alleged walking whales), they are not there.
  • I deal with this question at length here, pointing out that there are numerous transitional fossils for the evolution of terrestrial vertebrates, mammals, whales, birds, turtles, and yes – humans from ape ancestors. There are many more examples, these are just some of my favorites.
  • Much of what follows (as you can see it takes far more space to correct the lies and distortions of Morris than it did to create them) is classic denialism – misinterpreting the state of the science, and confusing lack of information about the details of evolution with lack of confidence in the fact of evolution. Here are some examples – he quotes Niles Eldredge: “It is a simple ineluctable truth that virtually all members of a biota remain basically stable, with minor fluctuations, throughout their durations. . . .” So how do evolutionists arrive at their evolutionary trees from fossils of organisms which didn’t change during their durations? Beware the “….” – that means that meaningful parts of the quote are being omitted. I happen to have the book (The Pattern of Evolution) from which Morris mined that particular quote. Here’s the rest of it: “(Remember, by “biota” we mean the commonly preserved plants and animals of a particular geological interval, which occupy regions often as large as Roger Tory Peterson’s “eastern” region of North American birds.) And when these systems change – when the older species disappear, and new ones take their place – the change happens relatively abruptly and in lockstep fashion.”
  • Eldredge was one of the authors (with Gould) of punctuated equilibrium theory. This states that, if you look at the fossil record, what we see are species emerging, persisting with little change for a while, and then disappearing from the fossil record. They theorize that most species most of the time are at equilibrium with their environment, and so do not change much. But these periods of equilibrium are punctuated by disequilibrium – periods of change when species will have to migrate, evolve, or go extinct.
  • This does not mean that speciation does not take place. And if you look at the fossil record we see a pattern of descendant species emerging from ancestor species over time – in a nice evolutionary pattern. Morris gives a complete misrepresentation of Eldredge’s point – once again we see intellectual dishonesty in his methods of an astounding degree.
  • Regarding the atheism = religion comment, it reminds me of a great analogy that I first heard on twitter from Evil Eye. (paraphrase) “those that say atheism is a religion, is like saying ‘not collecting stamps’ is a hobby too.”
  • Morris next tackles the genetic evidence, writing: More often is the argument used that similar DNA structures in two different organisms proves common evolutionary ancestry. Neither argument is valid. There is no reason whatever why the Creator could not or would not use the same type of genetic code based on DNA for all His created life forms. This is evidence for intelligent design and creation, not evolution.
  • Here is an excellent summary of the multiple lines of molecular evidence for evolution. Basically, if we look at the sequence of DNA, the variations in trinucleotide codes for amino acids, and amino acids for proteins, and transposons within DNA we see a pattern that can only be explained by evolution (or a mischievous god who chose, for some reason, to make life look exactly as if it had evolved – a non-falsifiable notion).
  • The genetic code is essentially comprised of four letters (ACGT for DNA), and every triplet of three letters equates to a specific amino acid. There are 64 (4^3) possible three letter combinations, and 20 amino acids. A few combinations are used for housekeeping, like a code to indicate where a gene stops, but the rest code for amino acids. There are more combinations than amino acids, so most amino acids are coded for by multiple combinations. This means that a mutation that results in a one-letter change might alter from one code for a particular amino acid to another code for the same amino acid. This is called a silent mutation because it does not result in any change in the resulting protein.
  • It also means that there are very many possible codes for any individual protein. The question is – which codes out of the gazillions of possible codes do we find for each type of protein in different species. If each “kind” were created separately there would not need to be any relationship. Each kind could have its own variation, or they could all be identical if they were essentially copied (plus any mutations accruing since creation, which would be minimal). But if life evolved then we would expect that the exact sequence of DNA code would be similar in related species, but progressively different (through silent mutations) over evolutionary time.
  • This is precisely what we find – in every protein we have examined. This pattern is necessary if evolution were true. It cannot be explained by random chance (the probability is absurdly tiny – essentially zero). And it makes no sense from a creationist perspective. This same pattern (a branching hierarchy) emerges when we look at amino acid substitutions in proteins and other aspects of the genetic code.
  • Morris goes for the second law of thermodynamics again – in the exact way that I already addressed. He responds to scientists correctly pointing out that the Earth is an open system, by writing: This naive response to the entropy law is typical of evolutionary dissimulation. While it is true that local order can increase in an open system if certain conditions are met, the fact is that evolution does not meet those conditions. Simply saying that the earth is open to the energy from the sun says nothing about how that raw solar heat is converted into increased complexity in any system, open or closed. The fact is that the best known and most fundamental equation of thermodynamics says that the influx of heat into an open system will increase the entropy of that system, not decrease it. All known cases of decreased entropy (or increased organization) in open systems involve a guiding program of some sort and one or more energy conversion mechanisms.
  • Energy has to be transformed into a usable form in order to do the work necessary to decrease entropy. That’s right. That work is done by life. Plants take solar energy (again – I’m not sure what “raw solar heat” means) and convert it into food. That food fuels the processes of life, which include development and reproduction. Evolution emerges from those processes- therefore the conditions that Morris speaks of are met.
  • But Morris next makes a very confused argument: Evolution has neither of these. Mutations are not “organizing” mechanisms, but disorganizing (in accord with the second law). They are commonly harmful, sometimes neutral, but never beneficial (at least as far as observed mutations are concerned). Natural selection cannot generate order, but can only “sieve out” the disorganizing mutations presented to it, thereby conserving the existing order, but never generating new order.
  • The notion that evolution (as if it’s a thing) needs to use energy is hopelessly confused. Evolution is a process that emerges from the system of life – and life certainly can use solar energy to decrease its entropy, and by extension the entropy of the biosphere. Morris slips into what is often presented as an information argument. (Yet again – already dealt with. The pattern here is that we are seeing a shuffling around of the same tired creationist arguments.) First, it is not true that most mutations are harmful. Many are silent, and many of those that are not silent are not harmful. They may be neutral, they may be a mixed blessing, and their relative benefit vs harm is likely to be situational. They may be fatal. And they also may be simply beneficial.
  • Morris finishes with a long rambling argument that evolution is religion. Evolution is promoted by its practitioners as more than mere science. Evolution is promulgated as an ideology, a secular religion — a full-fledged alternative to Christianity, with meaning and morality . . . . Evolution is a religion. This was true of evolution in the beginning, and it is true of evolution still today. Morris ties evolution to atheism, which, he argues, makes it a religion. This assumes, of course, that atheism is a religion. That depends on how you define atheism and how you define religion – but it is mostly wrong. Atheism is a lack of belief in one particular supernatural claim – that does not qualify it as a religion.
  • But mutations are not “disorganizing” – that does not even make sense. It seems to be based on a purely creationist notion that species are in some privileged perfect state, and any mutation can only take them farther from that perfection. For those who actually understand biology, life is a kluge of compromises and variation. Mutations are mostly lateral moves from one chaotic state to another. They are not directional. But they do provide raw material, variation, for natural selection. Natural selection cannot generate variation, but it can select among that variation to provide differential survival. This is an old game played by creationists – pointing out that mutations are not selective and that natural selection does not itself increase variation. Both points are true but irrelevant: mutations supply new variation and information, and selection acting on that variation yields the differential survival of better-adapted variants – together they generate new order.
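The point that undirected mutation plus selection can nonetheless build adaptation is easy to demonstrate with a toy simulation (a deliberately simplified sketch of the logic, not a model of real genetics; the "count the 1-bits" fitness function is my own stand-in): mutations here are random and non-directional, yet selection on the variation they create steadily raises the best fitness in the population.

```python
import random

random.seed(1)

TARGET_LEN = 50          # genome length; fitness = number of 1-bits (toy stand-in)

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.02):
    # Undirected: each bit flips with equal probability, regardless of effect.
    return [b ^ 1 if random.random() < rate else b for b in genome]

# Start with a population of random genomes.
pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(100)]

start = max(fitness(g) for g in pop)
for _ in range(200):
    # Selection: keep the fitter half unchanged, refill with mutated copies.
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:50]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(50)]

end = max(fitness(g) for g in pop)
print(start, end)  # best fitness rises toward TARGET_LEN
```

Because the fitter half is kept unchanged, the best genome never gets worse: selection conserves while mutation explores, and order accumulates even though no individual mutation is "organizing."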
  •  
    One of my earlier posts on SkepticBlog was Ten Major Flaws in Evolution: A Refutation, published two years ago. Occasionally a creationist shows up to snipe at the post, like this one: "i read this and found it funny. It supposedly gives a scientific refutation, but it is full of more bias than fox news, and a lot of emotion as well. here's a scientific case by an actual scientists, you know, one with a ph. D, and he uses statements by some of your favorite evolutionary scientists to insist evolution doesn't exist. i challenge you to write a refutation on this one. http://www.icr.org/home/resources/resources_tracts_scientificcaseagainstevolution/" Challenge accepted.

How the Internet Gets Inside Us : The New Yorker - 0 views

  • N.Y.U. professor Clay Shirky—the author of “Cognitive Surplus” and many articles and blog posts proclaiming the coming of the digital millennium—is the breeziest and seemingly most self-confident
  • Shirky believes that we are on the crest of an ever-surging wave of democratized information: the Gutenberg printing press produced the Reformation, which produced the Scientific Revolution, which produced the Enlightenment, which produced the Internet, each move more liberating than the one before.
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • If ideas of democracy and freedom emerged at the end of the printing-press era, it wasn’t by some technological logic but because of parallel inventions, like the ideas of limited government and religious tolerance, very hard won from history.
  • As Andrew Pettegree shows in his fine new study, “The Book in the Renaissance,” the mainstay of the printing revolution in seventeenth-century Europe was not dissident pamphlets but royal edicts, printed by the thousand: almost all the new media of that day were working, in essence, for kinglouis.gov.
  • Even later, full-fledged totalitarian societies didn’t burn books. They burned some books, while keeping the printing presses running off such quantities that by the mid-fifties Stalin was said to have more books in print than Agatha Christie.
  • Many of the more knowing Never-Betters turn for cheer not to messy history and mixed-up politics but to psychology—to the actual expansion of our minds.
  • The argument, advanced in Andy Clark’s “Supersizing the Mind” and in Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness. We may not act better than we used to, but we sure think differently than we did.
  • Cognitive entanglement, after all, is the rule of life. My memories and my wife’s intermingle. When I can’t recall a name or a date, I don’t look it up; I just ask her. Our machines, in this way, become our substitute spouses and plug-in companions.
  • But, if cognitive entanglement exists, so does cognitive exasperation. Husbands and wives deny each other’s memories as much as they depend on them. That’s fine until it really counts (say, in divorce court). In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • Nicholas Carr, in “The Shallows,” William Powers, in “Hamlet’s BlackBerry,” and Sherry Turkle, in “Alone Together,” all bear intimate witness to a sense that the newfound land, the ever-present BlackBerry-and-instant-message world, is one whose price, paid in frayed nerves and lost reading hours and broken attention, is hardly worth the gains it gives us. “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • Carr is most concerned about the way the Internet breaks down our capacity for reflective thought.
  • Powers’s reflections are more family-centered and practical. He recounts, very touchingly, stories of family life broken up by the eternal consultation of smartphones and computer monitors
  • He then surveys seven Wise Men—Plato, Thoreau, Seneca, the usual gang—who have something to tell us about solitude and the virtues of inner space, all of it sound enough, though he tends to overlook the significant point that these worthies were not entirely in favor of the kinds of liberties that we now take for granted and that made the new dispensation possible.
  • Similarly, Nicholas Carr cites Martin Heidegger for having seen, in the mid-fifties, that new technologies would break the meditational space on which Western wisdoms depend. Since Heidegger had not long before walked straight out of his own meditational space into the arms of the Nazis, it’s hard to have much nostalgia for this version of the past. One feels the same doubts when Sherry Turkle, in “Alone Together,” her touching plaint about the destruction of the old intimacy-reading culture by the new remote-connection-Internet culture, cites studies that show a dramatic decline in empathy among college students, who apparently are “far less likely to say that it is valuable to put oneself in the place of others or to try and understand their feelings.” What is to be done?
  • Among Ever-Wasers, the Harvard historian Ann Blair may be the most ambitious. In her book “Too Much to Know: Managing Scholarly Information Before the Modern Age,” she makes the case that what we’re going through is like what others went through a very long while ago. Against the cartoon history of Shirky or Tooby, Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began. She wants us to resist “trying to reduce the complex causal nexus behind the transition from Renaissance to Enlightenment to the impact of a technology or any particular set of ideas.” Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • Everyone complained about what the new information technologies were doing to our minds. Everyone said that the flood of books produced a restless, fractured attention. Everyone complained that pamphlets and poems were breaking kids’ ability to concentrate, that big good handmade books were ignored, swept aside by printed works that, as Erasmus said, “are foolish, ignorant, malignant, libelous, mad.” The reader consulting a card catalogue in a library was living a revolution as momentous, and as disorienting, as our own.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers
  • That uniquely evil and necessary thing the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points. In the period when many of the big, classic books that we no longer have time to read were being written, the general complaint was that there wasn’t enough time to read big, classic books.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.

The Ashtray: The Ultimatum (Part 1) - NYTimes.com - 0 views

  • “Under no circumstances are you to go to those lectures. Do you hear me?” Kuhn, the head of the Program in the History and Philosophy of Science at Princeton where I was a graduate student, had issued an ultimatum. It concerned the philosopher Saul Kripke’s lectures — later to be called “Naming and Necessity” — which he had originally given at Princeton in 1970 and planned to give again in the Fall, 1972.
  • Whiggishness — in history of science, the tendency to evaluate and interpret past scientific theories not on their own terms, but in the context of current knowledge. The term comes from Herbert Butterfield’s “The Whig Interpretation of History,” written when Butterfield, a future Regius professor of history at Cambridge, was only 31 years old. Butterfield had complained about Whiggishness, describing it as “…the study of the past with direct and perpetual reference to the present” – the tendency to see all history as progressive, and in an extreme form, as an inexorable march to greater liberty and enlightenment. [3] For Butterfield, on the other hand, “…real historical understanding” can be achieved only by “attempting to see life with the eyes of another century than our own.” [4][5].
  • Kuhn had attacked my Whiggish use of the term “displacement current.” [6] I had failed, in his view, to put myself in the mindset of Maxwell’s first attempts at creating a theory of electricity and magnetism. I felt that Kuhn had misinterpreted my paper, and that he — not me — had provided a Whiggish interpretation of Maxwell. I said, “You refuse to look through my telescope.” And he said, “It’s not a telescope, Errol. It’s a kaleidoscope.” (In this respect, he was probably right.) [7].
  • I asked him, “If paradigms are really incommensurable, how is history of science possible? Wouldn’t we be merely interpreting the past in the light of the present? Wouldn’t the past be inaccessible to us? Wouldn’t it be ‘incommensurable?’ ” [8] ¶He started moaning. He put his head in his hands and was muttering, “He’s trying to kill me. He’s trying to kill me.” ¶And then I added, “…except for someone who imagines himself to be God.” ¶It was at this point that Kuhn threw the ashtray at me.
  • I call Kuhn’s reply “The Ashtray Argument.” If someone says something you don’t like, you throw something at him. Preferably something large, heavy, and with sharp edges. Perhaps we were engaged in a debate on the nature of language, meaning and truth. But maybe we just wanted to kill each other.
  • That's the problem with relativism: Who's to say who's right and who's wrong? Somehow I'm not surprised to hear Kuhn was an ashtray-hurler. In the end, what other argument could he make?
  • For us to have a conversation and come to an agreement about the meaning of some word without having to refer to some outside authority like a dictionary, we would of necessity have to be satisfied that our agreement was genuine and not just a polite acknowledgement of each other's right to their opinion, can you agree with that? If so, then let's see if we can agree on the meaning of the word 'know' because that may be the crux of the matter. When I use the word 'know' I mean more than the capacity to apprehend some aspect of the world through language or some other representational symbolism. Included in the word 'know' is the direct sensorial perception of some aspect of the world. For example, I sense the floor that my feet are now resting upon. I 'know' the floor is really there, I can sense it. Perhaps I don't 'know' what the floor is made of, who put it there, and other incidental facts one could know through the usual symbolism such as language as in a story someone tells me. Nevertheless, the reality I need to 'know' is that the floor, or whatever you may wish to call the solid - relative to my body - flat and level surface supported by more structure than the earth, is really there and reliably capable of supporting me. This is true and useful knowledge that goes directly from the floor itself to my knowing about it - via sensation - that has nothing to do with my interpretive system.
  • Now I am interested in 'knowing' my feet in the same way that my feet and the whole body they are connected to 'know' the floor. I sense my feet sensing the floor. My feet are as real as the floor and I know they are there, sensing the floor, because I can sense them. Furthermore, now I 'know' that it is 'I' sensing my feet, sensing the floor. Do you see where I am going with this line of thought? I am including in the word 'know' more meaning than it is commonly given by everyday language. Perhaps it sounds as if I want to expand on the Cartesian formula of cogito ergo sum, and in truth I prefer to say I sense therefore I am. It is through my sensations of the world, first and foremost, that my awareness, such as it is, actively engages with reality. Now, any healthy normal animal senses the world, but we can't 'know' if they experience reality as we do since we can't have a conversation with them to arrive at agreement. But we humans can have this conversation and possibly agree that we can 'know' the world through sensation. We can even know what is 'I' through sensation. In fact, there is no other way to know 'I' except through sensation. Thought is symbolic representation, not direct sensing, so even though the thoughtful modality of regarding the world may be a far more reliable modality than sensation in predicting what might happen next, its very capacity for such accurate prediction is also its biggest weakness: its capacity for error.
  • Sensation cannot be 'wrong' unless it is used to predict outcomes. Thought can be wrong for both predicting outcomes and for 'knowing' reality. Sensation alone can 'know' reality even though it is relatively unreliable, useless even, for making predictions.
  • If we prioritize our interests by placing predictability over pure knowing through sensation, then of course we will not value the 'knowledge' to be gained through sensation. But if we can switch the priorities - out of sheer curiosity perhaps - then we can enter a realm of knowledge through sensation that is unbelievably spectacular. Our bodies are 'made of' reality, and by methodically exercising our nascent capacity for self sensing, we can connect our knowing 'I' to reality directly. We will not be able to 'know' what it is that we are experiencing in the way we might wish, which is to be able to predict what will happen next or to represent to ourselves symbolically what we might experience when we turn our attention to that sensation. But we can arrive at a depth and breadth of 'knowing' that is utterly unprecedented in our lives by operating that modality.
  • One of the impressions that comes from a sustained practice of self-sensing is a clearer feeling for what "I" is and why we have a word for that self-referential phenomenon, seemingly located somewhere behind our eyes and between our ears. The thing we call "I" or "me," depending on the context, turns out to be a moving point, a convergence vector for a variety of images, feelings and sensations. It is a reference point into which certain impressions flow and out of which certain impulses to act diverge and which may or may not animate certain muscle groups into action. Following this tricky exercise in attention and sensation, we can quickly see for ourselves that attention is more like a focused beam and awareness is more like a diffuse cloud, but both are composed of energy, and like all energy they vibrate, they oscillate with a certain frequency. That's it for now.
  • I loved the writer's efforts to find a fixed definition of “Incommensurability;” there was of course never a concrete meaning behind the word. Smoke and mirrors.

Let There Be More Efficient Light - NYTimes.com - 0 views

  • LAST week Michele Bachmann, a Republican representative from Minnesota, introduced a bill to roll back efficiency standards for light bulbs, which include a phasing out of incandescent bulbs in favor of more energy-efficient bulbs. The “government has no business telling an individual what kind of light bulb to buy,” she declared.
  • But this opposition ignores another, more important bit of American history: the critical role that government-mandated standards have played in scientific and industrial innovation.
  • inventions alone weren’t enough to guarantee progress. Indeed, at the time the lack of standards for everything from weights and measures to electricity — even the gallon, for example, had eight definitions — threatened to overwhelm industry and consumers with a confusing array of incompatible choices.
  • This wasn’t the case everywhere. Germany’s standards agency, established in 1887, was busy setting rules for everything from the content of dyes to the process for making porcelain; other European countries soon followed suit. Higher-quality products, in turn, helped the growth in Germany’s trade exceed that of the United States in the 1890s. America finally got its act together in 1894, when Congress standardized the meaning of what are today common scientific measures, including the ohm, the volt, the watt and the henry, in line with international metrics. And, in 1901, the United States became the last major economic power to establish an agency to set technological standards. The result was a boom in product innovation in all aspects of life during the 20th century. Today we can go to our hardware store and choose from hundreds of light bulbs that all conform to government-mandated quality and performance standards.
  • Technological standards not only promote innovation — they also can help protect one country’s industries from falling behind those of other countries. Today China, India and other rapidly growing nations are adopting standards that speed the deployment of new technologies. Without similar requirements to manufacture more technologically advanced products, American companies risk seeing the overseas markets for their products shrink while innovative goods from other countries flood the domestic market. To prevent that from happening, America needs not only to continue developing standards, but also to devise a strategy to apply them consistently and quickly.
  • The best approach would be to borrow from Japan, whose Top Runner program sets energy-efficiency standards by identifying technological leaders in a particular industry — say, washing machines — and mandating that the rest of the industry keep up. As technologies improve, the standards change as well, enabling a virtuous cycle of improvement. At the same time, the government should work with businesses to devise multidimensional standards, so that consumers don’t balk at products because they sacrifice, say, brightness and cost for energy efficiency.
  • This is not to say that innovation doesn’t bring disruption, and American policymakers can’t ignore the jobs that are lost when government standards sweep older technologies into the dustbin of history. An effective way forward on light bulbs, then, would be to apply standards only to those manufacturers that produce or import in large volume. Meanwhile, smaller, legacy light-bulb producers could remain, cushioning the blow to workers and meeting consumer demand.
  • Technologies and the standards that guide their deployment have revolutionized American society. They’ve been so successful, in fact, that the role of government has become invisible — so much so that even members of Congress should be excused for believing the government has no business mandating your choice of light bulbs.

Some Scientists Fear Computer Chips Will Soon Hit a Wall - NYTimes.com - 0 views

  • The problem has the potential to counteract an important principle in computing that has held true for decades: Moore’s Law. It was Gordon Moore, a founder of Intel, who first predicted that the number of transistors that could be nestled comfortably and inexpensively on an integrated circuit chip would double roughly every two years, bringing exponential improvements in consumer electronics.
  • In their paper, Dr. Burger and fellow researchers simulated the electricity used by more than 150 popular microprocessors and estimated that by 2024 computing speed would increase only 7.9 times, on average. By contrast, if there were no limits on the capabilities of the transistors, the maximum potential speedup would be nearly 47 times, the researchers said.
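The scale of the projected shortfall is easy to see with a little arithmetic (the 2012 baseline and two-year doubling cadence are my assumptions for illustration; the 7.9x and 47x figures are from the article): Moore's-Law doubling over twelve years supplies roughly 64x the transistors, yet the simulated average speedup uses only about a sixth of even the unconstrained 47x potential.

```python
# Transistor budget under Moore's-Law doubling every two years,
# versus the paper's projected average speedup over the same window.
years = 2024 - 2012
transistor_growth = 2 ** (years / 2)   # ~64x more transistors
projected_speedup = 7.9                # simulated average from the paper
ideal_speedup = 47                     # with unconstrained transistors

utilization = projected_speedup / ideal_speedup
print(round(transistor_growth), round(utilization, 2))  # → 64 0.17
```

The gap between those two numbers is the "dark silicon" problem: transistors you can fabricate but cannot afford to power all at once.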
  • Some scientists disagree, if only because new ideas and designs have repeatedly come along to preserve the computer industry’s rapid pace of improvement. Dr. Dally of Nvidia, for instance, is sanguine about the future of chip design. “The good news is that the old designs are really inefficient, leaving lots of room for innovation,” he said.
  • Shekhar Y. Borkar, a fellow at Intel Labs, called Dr. Burger’s analysis “right on the dot,” but added: “His conclusions are a little different than what my conclusions would have been. The future is not as golden as it used to be, but it’s not bleak either.” Dr. Borkar cited a variety of new design ideas that he said would help ease the limits identified in the paper. Intel recently developed a way to vary the power consumed by different parts of a processor, making it possible to have both slower, lower-power transistors as well as faster-switching ones that consume more power. Increasingly, today’s processor chips contain two or more cores, or central processing units, that make it possible to use multiple programs simultaneously. In the future, Intel computers will have different kinds of cores optimized for different kinds of problems, only some of which require high power.
  • And while Intel announced in May that it had found a way to use 3-D design to crowd more transistors onto a single chip, that technology does not solve the energy problem described in the paper about dark silicon. The authors of the paper said they had tried to account for some of the promised innovation, and they argued that the question was how far innovators could go in overcoming the power limits.
  • “It’s one of those ‘If we don’t innovate, we’re all going to die’ papers,” Dr. Patterson said in an e-mail. “I’m pretty sure it means we need to innovate, since we don’t want to die!”

There Is Such A Thing As A Free Coffee | The Utopianist - Think Bigger - 0 views

  • Overall, the ratio of people taking versus giving is 2-1. Stark has a truly grand vision: “It’s literally giving people hope. Ultimately the goal is for more people to do this kind of thing. I admit it seems a little frivolous to give away coffee to people with iPhones. But imagine if you had a CVS card and you could give someone $10 for their Alzheimer’s medication. The concept of frictionless social giving is very attractive. And this is just the beginning of that.” It’s easy enough to text a number to make a donation during times of disaster, and many do it, but the concern may still exist over “where” the money is going; systems with re-loadable cards are straightforward and in some way more transparent (after all, the users probably have their own, personal, cards), serving to spur people into donating even more. I say let’s expand this — I cannot wait to see it act elsewhere — some sort of school card, perhaps? Download the full-sized card here; before you go, check the balance on Twitter — updated every couple of minutes, Stark wrote the program himself. “Like” Jonathan’s Starbucks Card on Facebook to spread the word; and when you want to donate, simply log on to the Starbucks website and reload card number 6061006913522430.
  •  
    Programmer Jonathan Stark, vice president of Mobiquity, has begun a truly cool experiment: sharing his Starbucks card with the world. While researching ways one can pay by mobile, Stark took an interesting perspective on Starbucks' system. He realized there was (at the time) no app for Android users, so he simply took a picture of his card and posted it online. He loaded it with $30 and then encouraged others to use it - and reload it, if they see fit. Not surprisingly, people took him up on it. Since that initial $30, the card has seen over $9,000 worth of anonymous donations. Stark says that "every time the balance gets really high, it brings out the worst in people: Someone goes down to Starbucks and makes a huge purchase. I don't know if they are buying coffee beans or mugs, or transferring money to their own card or what. But as long as the balance stays low, say $20 to $30, it seems like it manages itself. I haven't put any money on it in a while. All the money going through the card right now is the kindness of strangers."

AP IMPACT: Framed for child porn - by a PC virus by AP: Yahoo! Tech - 0 views

  • Pedophiles can exploit virus-infected PCs to remotely store and view their stash without fear they'll get caught. Pranksters or someone trying to frame you can tap viruses to make it appear that you surf illegal Web sites.
  • Whatever the motivation, you get child porn on your computer — and might not realize it until police knock at your door.
  • In 2007, Fiola's bosses became suspicious after the Internet bill for his state-issued laptop showed that he used 4 1/2 times more data than his colleagues. A technician found child porn in the PC folder that stores images viewed online. Fiola was fired and charged with possession of child pornography, which carries up to five years in prison. He endured death threats, his car tires were slashed and he was shunned by friends. Fiola and his wife fought the case, spending $250,000 on legal fees. They liquidated their savings, took a second mortgage and sold their car. An inspection for his defense revealed the laptop was severely infected. It was programmed to visit as many as 40 child porn sites per minute — an inhuman feat. While Fiola and his wife were out to dinner one night, someone logged on to the computer and porn flowed in for an hour and a half. Prosecutors performed another test and confirmed the defense findings. The charge was dropped — 11 months after it was filed.
    • Weiye Loh
       
      The law is reason beyond passion. Yet reasons may be flawed, bounded, or limited by our irrationality. Whom do we blame if we are victims of such false accusations? Is it right, then, to carry on with these proceedings just so that those who are truly guilty won't get away scot-free?
  • The Fiolas say they have health problems from the stress of the case. They say they've talked to dozens of lawyers but can't get one to sue the state, because of a cap on the amount they can recover. "It ruined my life, my wife's life and my family's life," he says. The Massachusetts attorney general's office, which charged Fiola, declined interview requests.

BrainGate gives paralysed the power of mind control | Science | The Observer - 0 views

  • brain-computer interface, or BCI
  • is a branch of science exploring how computers and the human brain can be meshed together. It sounds like science fiction (and can look like it too), but it is motivated by a desire to help chronically injured people. They include those who have lost limbs, people with Lou Gehrig's disease, or those who have been paralysed by severe spinal-cord injuries. But the group of people it might help the most are those whom medicine assumed were beyond all hope: sufferers of "locked-in syndrome".
  • These are often stroke victims whose perfectly healthy minds end up trapped inside bodies that can no longer move. The most famous example was French magazine editor Jean-Dominique Bauby who managed to dictate a memoir, The Diving Bell and the Butterfly, by blinking one eye. In the book, Bauby, who died in 1997 shortly after the book was published, described the prison his body had become for a mind that still worked normally.
  • Now the project is involved with a second set of human trials, pushing the technology to see how far it goes and trying to miniaturise it and make it wireless for a better fit in the brain. BrainGate's concept is simple. It posits that the problem for most patients does not lie in the parts of the brain that control movement, but with the fact that the pathways connecting the brain to the rest of the body, such as the spinal cord, have been broken. BrainGate plugs into the brain, picks up the right neural signals and beams them into a computer where they are translated into moving a cursor or controlling a computer keyboard. By this means, paralysed people can move a robot arm or drive their own wheelchair, just by thinking about it.
  • he and his team are decoding the language of the human brain. This language is made up of electronic signals fired by billions of neurons and it controls everything from our ability to move, to think, to remember and even our consciousness itself. Donoghue's genius was to develop a deceptively small device that can tap directly into the brain and pick up those signals for a computer to translate them. Gold wires are implanted into the brain's tissue at the motor cortex, which controls movement. Those wires feed back to a tiny array – an information storage device – attached to a "pedestal" in the skull. Another wire feeds from the array into a computer. A test subject with BrainGate looks like they have a large plug coming out the top of their heads. Or, as Donoghue's son once described it, they resemble the "human batteries" in The Matrix.
  • BrainGate's highly advanced computer programs are able to decode the neuron signals picked up by the wires and translate them into the subject's desired movement. In crude terms, it is a form of mind-reading based on the idea that thinking about moving a cursor to the right will generate detectably different brain signals than thinking about moving it to the left.
  • The technology has developed rapidly, and last month BrainGate passed a vital milestone when one paralysed patient went past 1,000 days with the implant still in her brain and allowing her to move a computer cursor with her thoughts. The achievement, reported in the prestigious Journal of Neural Engineering, showed that the technology can continue to work inside the human body for unprecedented amounts of time.
  • Donoghue talks enthusiastically of one day hooking up BrainGate to a system of electronic stimulators plugged into the muscles of the arm or legs. That would open up the prospect of patients moving not just a cursor or their wheelchair, but their own bodies.
  • If Nagle's motor cortex was no longer working healthily, the entire BrainGate project could have been rendered pointless. But when Nagle was plugged in and asked to imagine moving his limbs, the signals beamed out with a healthy crackle. "We asked him to imagine moving his arm to the left and to the right and we could hear the activity," Donoghue says. When Nagle first moved a cursor on a screen using only his thoughts, he exclaimed: "Holy shit!"
  • BrainGate and other BCI projects have also piqued the interest of the government and the military. BCI is melding man and machine like no other sector of medicine or science and there are concerns about some of the implications. First, beyond detecting and translating simple movement commands, BrainGate may one day pave the way for mind-reading. A device to probe the innermost thoughts of captured prisoners or dissidents would prove very attractive to some future military or intelligence service. Second, there is the idea that BrainGate or other BCI technologies could pave the way for robot warriors controlled by distant humans using only their minds. At a conference in 2002, a senior American defence official, Anthony Tether, enthused over BCI. "Imagine a warrior with the intellect of a human and the immortality of a machine." Anyone who has seen Terminator might worry about that.
  • Donoghue acknowledges the concerns but has little time for them. When it comes to mind-reading, current BrainGate technology has enough trouble with translating commands for making a fist, let alone probing anyone's mental secrets
  • As for robot warriors, Donoghue was slightly more circumspect. At the moment most BCI research, including BrainGate projects, that touch on the military is focused on working with prosthetic limbs for veterans who have lost arms and legs. But Donoghue thinks it is healthy for scientists to be aware of future issues. "As long as there is a rational dialogue and scientists think about where this is going and what is the reasonable use of the technology, then we are on a good path," he says.
  •  
    The robotic arm clutched a glass and swung it over a series of coloured dots that resembled a Twister gameboard. Behind it, a woman sat entirely immobile in a wheelchair. Slowly, the arm put the glass down, narrowly missing one of the dots. "She's doing that!" exclaims Professor John Donoghue, watching a video of the scene on his office computer - though the woman onscreen had not moved at all. "She actually has the arm under her control," he says, beaming with pride. "We told her to put the glass down on that dot." The woman, who is almost completely paralysed, was using Donoghue's groundbreaking technology to control the robot arm using only her thoughts. Called BrainGate, the device is implanted into her brain and hooked up to a computer to which she sends mental commands. The video played on, giving Donoghue, a silver-haired and neatly bearded man of 62, even more reason to feel pleased. The patient was not satisfied with her near miss and the robot arm lifted the glass again. After a brief hover, the arm positioned the glass on the dot.

Rationally Speaking: Evolution as pseudoscience?

  • I have been intrigued by an essay by my colleague Michael Ruse, entitled “Evolution and the idea of social progress,” published in a collection that I am reviewing, Biology and Ideology from Descartes to Dawkins (gotta love the title!), edited by Denis Alexander and Ronald Numbers.
  • Ruse's essay in the Alexander-Numbers collection questions the received story about the early evolution of evolutionary theory, which sees the stuff that immediately preceded Darwin — from Lamarck to Erasmus Darwin — as protoscience, the immature version of the full-fledged science that biology became after Chuck's publication of the Origin of Species. Instead, Ruse thinks that pre-Darwinian evolutionists really engaged in pseudoscience, and that it took a very conscious and precise effort on Darwin’s part to sweep away all the garbage and establish a discipline with empirical and theoretical content analogous to that of the chemistry and physics of the time.
  • Ruse asserts that many serious intellectuals of the late 18th and early 19th century actually thought of evolution as pseudoscience, and he is careful to point out that the term “pseudoscience” had been used at least since 1843 (by the physiologist Francois Magendie)
  • Ruse’s somewhat surprising yet intriguing claim is that “before Charles Darwin, evolution was an epiphenomenon of the ideology of [social] progress, a pseudoscience and seen as such. Liked by some for that very reason, despised by others for that very reason.”
  • Indeed, the link between evolution and the idea of human social-cultural progress was very strong before Darwin, and was one of the main things Darwin got rid of.
  • The encyclopedist Denis Diderot was typical in this respect: “The Tahitian is at a primary stage in the development of the world, the European is at its old age. The interval separating us is greater than that between the new-born child and the decrepit old man.” Similar nonsensical views can be found in Lamarck, Erasmus, and Chambers, the anonymous author of The Vestiges of the Natural History of Creation, usually considered the last protoscientific book on evolution to precede the Origin.
  • On the other side of the divide were social conservatives like the great anatomist Georges Cuvier, who rejected the idea of evolution — according to Ruse — not as much on scientific grounds as on political and ideological ones. Indeed, books like Erasmus’ Zoonomia and Chambers’ Vestiges were simply not much better than pseudoscientific treatises on, say, alchemy before the advent of modern chemistry.
  • people were well aware of this sorry situation, so much so that astronomer John Herschel referred to the question of the history of life as “the mystery of mysteries,” a phrase consciously adopted by Darwin in the Origin. Darwin set out to solve that mystery under the influence of three great thinkers: Newton, the above mentioned Herschel, and the philosopher William Whewell (whom Darwin knew and assiduously frequented in his youth)
  • Darwin was a graduate of the University of Cambridge, which had also been Newton’s home. Chuck got drilled early on during his Cambridge education with the idea that good science is about finding mechanisms (vera causa), something like the idea of gravitational attraction underpinning Newtonian mechanics. He reflected that all the talk of evolution up to then — including his grandfather’s — was empty, without a mechanism that could turn the idea into a scientific research program.
  • The second important influence was Herschel’s Preliminary Discourse on the Study of Natural Philosophy, published in 1831 and read by Darwin shortly thereafter, in which Herschel sets out to give his own take on what today we would call the demarcation problem, i.e. what methodology is distinctive of good science. One of Herschel’s points was to stress the usefulness of analogical reasoning
  • Finally, and perhaps most crucially, Darwin also read (twice!) Whewell’s History of the Inductive Sciences, which appeared in 1837. In it, Whewell sets out his notion that good scientific inductive reasoning proceeds by a consilience of ideas, a situation in which multiple independent lines of evidence point to the same conclusion.
  • the first part of the Origin, where Darwin introduces the concept of natural selection by way of analogy with artificial selection can be read as the result of Herschel’s influence (natural selection is the vera causa of evolution)
  • the second part of the book, constituting Darwin's famous “long argument,” applies Whewell’s method of consilience by bringing in evidence from a number of disparate fields, from embryology to paleontology to biogeography.
  • What, then, happened to the strict coupling of the ideas of social and biological progress that had preceded Darwin? While he still believed in the former, the latter was no longer an integral part of evolution, because natural selection makes things “better” only in a relative fashion. There is no meaningful sense in which, say, a large brain is better than very fast legs or sharp claws, as long as you still manage to have dinner and avoid being dinner by the end of the day (or, more precisely, by the time you reproduce).
  • Ruse’s claim that evolution transitioned not from protoscience to science, but from pseudoscience, makes sense to me given the historical and philosophical developments. It wasn’t the first time either. Just think about the already mentioned shift from alchemy to chemistry
  • Of course, the distinction between pseudoscience and protoscience is itself fuzzy, but we do have what I think are clear examples of the latter that cannot reasonably be confused with the former, SETI for one, and arguably Ptolemaic astronomy. We also have pretty obvious instances of pseudoscience (the usual suspects: astrology, ufology, etc.), so the distinction — as long as it is not stretched beyond usefulness — is interesting and defensible.
  • It is amusing to speculate which, if any, of the modern pseudosciences (cryonics, singularitarianism) might turn out to be able to transition in one form or another to actual sciences. To do so, they may need to find their philosophically and scientifically savvy Darwin, and a likely bet — if history teaches us anything — is that, should they succeed in this transition, their mature form will look as different from the original as chemistry and alchemy. Or as Darwinism and pre-Darwinian evolutionism.
  • Darwin called the Origin "one long argument," but I really do think that recognizing that the book contains (at least) two arguments could help to dispel that whole "just a theory" canard. The first half of the book is devoted to demonstrating that natural selection is the true cause of evolution; vera causa arguments require proof that the cause's effect be demonstrated as fact, so the second half of the book is devoted to a demonstration that evolution has really happened. In other words, evolution is a demonstrable fact and natural selection is the theory that explains that fact, just as the motion of the planets is a fact and gravity is a theory that explains it.
  • Cryogenics is the study of the production of low temperatures and the behavior of materials at those temperatures. It is a legitimate branch of physics and has been for a long time. I think you meant 'cryonics'.
  • The Singularity means different things to different people. It is uncharitable to dismiss all "singularitarians" by debunking Kurzweil. He is low hanging fruit. Reach for something higher.
  •  
    "before Charles Darwin, evolution was an epiphenomenon of the ideology of [social] progress, a pseudoscience and seen as such. Liked by some for that very reason, despised by others for that very reason."

The Science of Why We Don't Believe Science | Mother Jones

  • Even if individual researchers are prone to falling in love with their own theories, the broader processes of peer review and institutionalized skepticism are designed to ensure that, eventually, the best ideas prevail.
  • Modern science originated from an attempt to weed out such subjective lapses
  • Our individual responses to the conclusions that science reaches, however, are quite another matter. Ironically, in part because researchers employ so much nuance and strive to disclose all remaining sources of uncertainty, scientific evidence is highly susceptible to selective reading and misinterpretation.
  • a large number of psychological studies have shown that people respond to scientific or technical evidence in ways that justify their preexisting beliefs.
  • In a classic 1979 experiment (PDF), pro- and anti-death penalty advocates were exposed to descriptions of two fake scientific studies: one supporting and one undermining the notion that capital punishment deters violent crime and, in particular, murder. They were also shown detailed methodological critiques of the fake studies—and in a scientific sense, neither study was stronger than the other. Yet in each case, advocates more heavily criticized the study whose conclusions disagreed with their own, while describing the study that was more ideologically congenial as more "convincing."
  • According to research by Yale Law School professor Dan Kahan and his colleagues, people's deep-seated views about morality, and about the way society should be ordered, strongly predict whom they consider to be a legitimate scientific expert in the first place—and thus where they consider "scientific consensus" to lie on contested issues.
  • people rejected the validity of a scientific source because its conclusion contradicted their deeply held views—and thus the relative risks inherent in each scenario.
  • When political scientists Brendan Nyhan and Jason Reifler showed subjects fake newspaper articles (PDF) in which the claim that Saddam Hussein had possessed weapons of mass destruction was first suggested (in a 2004 quote from President Bush) and then refuted (with the findings of the Bush-commissioned Iraq Survey Group report, which found no evidence of active WMD programs in pre-invasion Iraq), they found that conservatives were more likely than before to believe the claim.

BioMed Central | Full text | Mistaken Identifiers: Gene name errors can be introduced i...

  • Background: When processing microarray data sets, we recently noticed that some gene names were being changed inadvertently to non-gene names.
    Results: A little detective work traced the problem to default date format conversions and floating-point format conversions in the very useful Excel program package. The date conversions affect at least 30 gene names; the floating-point conversions affect at least 2,000 if Riken identifiers are included. These conversions are irreversible; the original gene names cannot be recovered.
    Conclusions: Users of Excel for analyses involving gene names should be aware of this problem, which can cause genes, including medically important ones, to be lost from view and which has contaminated even carefully curated public databases. We provide work-arounds and scripts for circumventing the problem.
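The two failure modes the abstract describes can be illustrated with a small checker that flags gene names Excel's default import would silently convert: names like SEPT2 become dates (2-Sep), and Riken clone identifiers like 2310009E13 become floating-point numbers (2.31E+13). This is a hypothetical pre-flight check written for illustration, not one of the paper's own scripts.

```python
import re

# Month-prefixed gene symbols that Excel reads as dates (e.g. SEPT2 -> 2-Sep).
DATE_LIKE = re.compile(
    r"^(JAN|FEB|MAR|MARCH|APR|MAY|JUN|JUL|AUG|SEP|SEPT|OCT|NOV|DEC)\d+$",
    re.IGNORECASE,
)
# Identifiers of the form <digits>E<digits> that Excel reads as scientific
# notation (e.g. the Riken clone ID 2310009E13 -> 2.31E+13).
FLOAT_LIKE = re.compile(r"^\d+E\d+$", re.IGNORECASE)

def excel_risky(name):
    """Return True if Excel's default import would mangle this gene name."""
    return bool(DATE_LIKE.match(name) or FLOAT_LIKE.match(name))

print(excel_risky("SEPT2"))        # date-like: would become 2-Sep
print(excel_risky("2310009E13"))   # float-like: would become 2.31E+13
print(excel_risky("BRCA1"))        # safe
```

Because the conversions are irreversible once they happen, the practical work-around is to catch risky names before import, or to format gene-name columns as text so Excel never applies its default conversions.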