Home / New Media Ethics 2009 course / Group items tagged designer


Weiye Loh

Designing Minds: Uncovered Video Profiles of Prominent Designers | Brain Pickings - 0 views

  • “My favorite quote about what is art and what is design and what might be the difference comes from Donald Judd: ‘Design has to work, art doesn’t.’ And these things all have to work. They have a function outside my desire for self-expression.” ~ Stefan Sagmeister

  • “When designers are given the opportunity to have a bigger role, real change, real transformation actually happens.” ~ Yves Behar

  •  
    In 2008, a now-defunct podcast program by Adobe called Designing Minds - not to be confused with frogdesign's excellent design mind magazine - did a series of video profiles of prominent artists and designers, including Stefan Sagmeister (whose Things I have learned in my life so far isn't merely one of the best-produced, most beautiful design books of the past decade, it's also a poignant piece of modern existential philosophy), Yves Behar (of One Laptop Per Child fame), Marian Bantjes (whose I Wonder remains my favorite typographic treasure) and many more, offering a rare glimpse of these remarkable creators' life stories, worldviews and the precious peculiarities that make them be who they are and create what they create.
Weiye Loh

Google's Marissa Mayer Assaults Designers With Data | Designerati | Fast Company - 0 views

  • The irony was not lost on anyone in attendance at AIGA's national conference in Memphis last weekend. Marissa Mayer, "keeper" of the Google homepage since 1998, walked into a room filled with over 1,200 mostly graphic designers to talk about how well design worked at the design-dismissive Google. She even had the charts and graphs of user-tested research to prove it, she said.
  • In an almost robotic delivery, Mayer acknowledged that design was never the primary concern when developing the site. When she mentioned to founder Sergey Brin that he might want to do something to spiff up the brand-new homepage for users, his response was uncomfortably eloquent: "I don't do HTML."
  • About the now-notorious claim that she once tested 41 shades of blue? All true. Turns out Google was using two different colors of blue, one on the homepage, one on the Gmail page. To find out which was more effective so they could standardize it across the system, they tested an imperceptible range of blues between the two. The winning color, according to dozens of charts and graphs, was not too green, not too red.
  • ...1 more annotation...
  • This kind of over-analytical testing was exactly why designer Doug Bowman made a very public break from Google earlier this year. "I had a recent debate over whether a border should be 3, 4, or 5 pixels wide and was asked to prove my case," he wrote in a post after his departure. Maybe he couldn't, but someone won a recent battle to widen the search box by a few pixels, the most major change for the homepage in quite some time.
  •  
    I don't really know where this fits, but I find it really amusing. The article is about how Google uses data, very specific data, to determine its designs, almost to the point of being anal (to me). I wonder if this is what it means to challenge forth nature (the human mind) to reveal.
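The kind of homepage testing the article describes can be sketched as a simple two-variant comparison. A minimal sketch in Python, assuming invented click and impression counts (Google's real methodology and numbers are not public):

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """z-score for the difference between two click-through rates.
    |z| > 1.96 is conventionally 'significant' at the 5% level."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Hypothetical data: two shades of blue shown to separate user groups.
z = two_proportion_z(clicks_a=1050, views_a=100_000,
                     clicks_b=1000, views_b=100_000)
print(round(z, 2))  # ~1.11: a visible difference, but not yet conclusive
```

Even with 100,000 impressions per variant, this 5% relative lift in clicks falls short of the conventional significance threshold, which is one reason such tests run at enormous scale.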
Weiye Loh

Science Warriors' Ego Trips - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • By Carlin Romano: Standing up for science excites some intellectuals the way beautiful actresses arouse Warren Beatty, or career liberals boil the blood of Glenn Beck and Rush Limbaugh. It's visceral.
  • A brave champion of beleaguered science in the modern age of pseudoscience, this Ayn Rand protagonist sarcastically derides the benighted irrationalists and glows with a self-anointed superiority. Who wouldn't want to feel that sense of power and rightness?
  • You hear the voice regularly—along with far more sensible stuff—in the latest of a now common genre of science patriotism, Nonsense on Stilts: How to Tell Science From Bunk (University of Chicago Press), by Massimo Pigliucci, a philosophy professor at the City University of New York.
  • ...24 more annotations...
  • it mixes eminent common sense and frequent good reporting with a cocksure hubris utterly inappropriate to the practice it apotheosizes.
  • According to Pigliucci, both Freudian psychoanalysis and Marxist theory of history "are too broad, too flexible with regard to observations, to actually tell us anything interesting." (That's right—not one "interesting" thing.) The idea of intelligent design in biology "has made no progress since its last serious articulation by natural theologian William Paley in 1802," and the empirical evidence for evolution is like that for "an open-and-shut murder case."
  • Pigliucci offers more hero sandwiches spiced with derision and certainty. Media coverage of science is "characterized by allegedly serious journalists who behave like comedians." Commenting on the highly publicized Dover, Pa., court case in which U.S. District Judge John E. Jones III ruled that intelligent-design theory is not science, Pigliucci labels the need for that judgment a "bizarre" consequence of the local school board's "inane" resolution. Noting the complaint of intelligent-design advocate William Buckingham that an approved science textbook didn't give creationism a fair shake, Pigliucci writes, "This is like complaining that a textbook in astronomy is too focused on the Copernican theory of the structure of the solar system and unfairly neglects the possibility that the Flying Spaghetti Monster is really pulling each planet's strings, unseen by the deluded scientists."
  • Or is it possible that the alternate view unfairly neglected could be more like that of Harvard scientist Owen Gingerich, who contends in God's Universe (Harvard University Press, 2006) that it is partly statistical arguments—the extraordinary unlikelihood eons ago of the physical conditions necessary for self-conscious life—that support his belief in a universe "congenially designed for the existence of intelligent, self-reflective life"?
  • Even if we agree that capital "I" and "D" intelligent-design of the scriptural sort—what Gingerich himself calls "primitive scriptural literalism"—is not scientifically credible, does that make Gingerich's assertion, "I believe in intelligent design, lowercase i and lowercase d," equivalent to Flying-Spaghetti-Monsterism? Tone matters. And sarcasm is not science.
  • The problem with polemicists like Pigliucci is that a chasm has opened up between two groups that might loosely be distinguished as "philosophers of science" and "science warriors."
  • Philosophers of science, often operating under the aegis of Thomas Kuhn, recognize that science is a diverse, social enterprise that has changed over time, developed different methodologies in different subsciences, and often advanced by taking putative pseudoscience seriously, as in debunking cold fusion.
  • The science warriors, by contrast, often write as if our science of the moment is isomorphic with knowledge of an objective world-in-itself—Kant be damned!—and any form of inquiry that doesn't fit the writer's criteria of proper science must be banished as "bunk." Pigliucci, typically, hasn't much sympathy for radical philosophies of science. He calls the work of Paul Feyerabend "lunacy," deems Bruno Latour "a fool," and observes that "the great pronouncements of feminist science have fallen as flat as the similarly empty utterances of supporters of intelligent design."
  • It doesn't have to be this way. The noble enterprise of submitting nonscientific knowledge claims to critical scrutiny—an activity continuous with both philosophy and science—took off in an admirable way in the late 20th century when Paul Kurtz, of the University at Buffalo, established the Committee for the Scientific Investigation of Claims of the Paranormal (Csicop) in May 1976. Csicop soon after launched the marvelous journal Skeptical Inquirer.
  • Although Pigliucci himself publishes in Skeptical Inquirer, his contributions there exhibit his signature smugness. For an antidote to Pigliucci's overweening scientism 'tude, it's refreshing to consult Kurtz's curtain-raising essay, "Science and the Public," in Science Under Siege (Prometheus Books, 2009, edited by Frazier).
  • Kurtz's commandment might be stated, "Don't mock or ridicule—investigate and explain." He writes: "We attempted to make it clear that we were interested in fair and impartial inquiry, that we were not dogmatic or closed-minded, and that skepticism did not imply a priori rejection of any reasonable claim. Indeed, I insisted that our skepticism was not totalistic or nihilistic about paranormal claims."
  • Kurtz combines the ethos of both critical investigator and philosopher of science. Describing modern science as a practice in which "hypotheses and theories are based upon rigorous methods of empirical investigation, experimental confirmation, and replication," he notes: "One must be prepared to overthrow an entire theoretical framework—and this has happened often in the history of science ... skeptical doubt is an integral part of the method of science, and scientists should be prepared to question received scientific doctrines and reject them in the light of new evidence."
  • Pigliucci, alas, allows his animus against the nonscientific to pull him away from sensitive distinctions among various sciences to sloppy arguments one didn't see in such earlier works of science patriotism as Carl Sagan's The Demon-Haunted World: Science as a Candle in the Dark (Random House, 1995). Indeed, he probably sets a world record for misuse of the word "fallacy."
  • To his credit, Pigliucci at times acknowledges the nondogmatic spine of science. He concedes that "science is characterized by a fuzzy borderline with other types of inquiry that may or may not one day become sciences." Science, he admits, "actually refers to a rather heterogeneous family of activities, not to a single and universal method." He rightly warns that some pseudoscience—for example, denial of HIV-AIDS causation—is dangerous and terrible.
  • But at other points, Pigliucci ferociously attacks opponents like the most unreflective science fanatic
  • He dismisses Feyerabend's view that "science is a religion" as simply "preposterous," even though he elsewhere admits that "methodological naturalism"—the commitment of all scientists to reject "supernatural" explanations—is itself not an empirically verifiable principle or fact, but rather an almost Kantian precondition of scientific knowledge. An article of faith, some cold-eyed Feyerabend fans might say.
  • He writes, "ID is not a scientific theory at all because there is no empirical observation that can possibly contradict it. Anything we observe in nature could, in principle, be attributed to an unspecified intelligent designer who works in mysterious ways." But earlier in the book, he correctly argues against Karl Popper that susceptibility to falsification cannot be the sole criterion of science, because science also confirms. It is, in principle, possible that an empirical observation could confirm intelligent design—i.e., that magic moment when the ultimate UFO lands with representatives of the intergalactic society that planted early life here, and we accept their evidence that they did it.
  • "As long as we do not venture to make hypotheses about who the designer is and why and how she operates," he writes, "there are no empirical constraints on the 'theory' at all. Anything goes, and therefore nothing holds, because a theory that 'explains' everything really explains nothing."
  • Here, Pigliucci again mixes up what's likely or provable with what's logically possible or rational. The creation stories of traditional religions and scriptures do, in effect, offer hypotheses, or claims, about who the designer is—e.g., see the Bible.
  • Far from explaining nothing because it explains everything, such an explanation explains a lot by explaining everything. It just doesn't explain it convincingly to a scientist with other evidentiary standards.
  • A sensible person can side with scientists on what's true, but not with Pigliucci on what's rational and possible. Pigliucci occasionally recognizes that. Late in his book, he concedes that "nonscientific claims may be true and still not qualify as science." But if that's so, and we care about truth, why exalt science to the degree he does? If there's really a heaven, and science can't (yet?) detect it, so much the worse for science.
  • Pigliucci quotes a line from Aristotle: "It is the mark of an educated mind to be able to entertain a thought without accepting it." Science warriors such as Pigliucci, or Michael Ruse in his recent clash with other philosophers in these pages, should reflect on a related modern sense of "entertain." One does not entertain a guest by mocking, deriding, and abusing the guest. Similarly, one does not entertain a thought or approach to knowledge by ridiculing it.
  • Long live Skeptical Inquirer! But can we deep-six the egomania and unearned arrogance of the science patriots? As Descartes, that immortal hero of scientists and skeptics everywhere, pointed out, true skepticism, like true charity, begins at home.
  • Carlin Romano, critic at large for The Chronicle Review, teaches philosophy and media theory at the University of Pennsylvania.
  •  
    April 25, 2010 Science Warriors' Ego Trips
Weiye Loh

Skepticblog » Flaws in Creationist Logic - 0 views

  • [Ankur is] making a false analogy here by confusing the origin of life with the later evolution of life. The watch analogy was specifically offered to say that something which is complex and displays design must have been created and designed by a creator. Therefore, since we see complexity and design in life it too must have had a creator. But all the life that we know – that life which is being pointed to as complex and designed – is the result of a process (evolution) that has worked over billions of years. Life can grow, reproduce, and evolve. Watches cannot – so it is not a valid analogy.
  • Life did emerge from non-living matter, but that is irrelevant to the point. There was likely a process of chemical evolution – but still the non-living precursors to life were just chemicals, they did not display the design or complexity apparent in a watch. Ankur’s attempt to rescue this false analogy fails. And before someone has a chance to point it out – yes, I said that life displays design. It displays bottom-up evolutionary design, not top-down intelligent design. This refers to another fallacy of creationists – the assumption that all design is top down. But nature demonstrates that this is a false assumption.
  • An increase in variation is an increase in information – it takes more information to describe the greater variety. By any actual definition of information – variation increases information. Also, as I argued, when you have gene duplication you are physically increasing the number of information carrying units – that is an increase in information. There is simply no way to avoid the mountain of genetic evidence that genetic information has increased over evolutionary time through evolutionary processes.
  •  
    FLAWS IN CREATIONIST LOGIC
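The claim in the excerpt that greater variation means more information can be made concrete with Shannon's measure. A minimal sketch, using invented toy "gene pools" (and only one formal sense of "information"):

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Average number of bits needed to describe one draw from the sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A population carrying two equally common variants of a gene...
low_variation = ["A1", "A2"] * 4
# ...versus one where duplication plus mutation has produced four variants.
high_variation = ["A1", "A2", "A3", "A4"] * 2

print(shannon_entropy(low_variation))   # 1.0 bit per gene copy
print(shannon_entropy(high_variation))  # 2.0 bits per gene copy
```

Describing the more varied pool takes twice as many bits per draw, which is precisely the sense in which an increase in variation is an increase in information.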
Weiye Loh

Religion: Faith in science : Nature News - 0 views

  • The Templeton Foundation claims to be a friend of science. So why does it make so many researchers uneasy?
  • With a current endowment estimated at US$2.1 billion, the organization continues to pursue Templeton's goal of building bridges between science and religion. Each year, it doles out some $70 million in grants, more than $40 million of which goes to research in fields such as cosmology, evolutionary biology and psychology.
  • however, many scientists find it troubling — and some see it as a threat. Jerry Coyne, an evolutionary biologist at the University of Chicago, Illinois, calls the foundation "sneakier than the creationists". Through its grants to researchers, Coyne alleges, the foundation is trying to insinuate religious values into science. "It claims to be on the side of science, but wants to make faith a virtue," he says.
  • ...25 more annotations...
  • But other researchers, both with and without Templeton grants, say that they find the foundation remarkably open and non-dogmatic. "The Templeton Foundation has never in my experience pressured, suggested or hinted at any kind of ideological slant," says Michael Shermer, editor of Skeptic, a magazine that debunks pseudoscience, who was hired by the foundation to edit an essay series entitled 'Does science make belief in God obsolete?'
  • The debate highlights some of the challenges facing the Templeton Foundation after the death of its founder in July 2008, at the age of 95.
  • With the help of a $528-million bequest from Templeton, the foundation has been radically reframing its research programme. As part of that effort, it is reducing its emphasis on religion to make its programmes more palatable to the broader scientific community. Like many of his generation, Templeton was a great believer in progress, learning, initiative and the power of human imagination — not to mention the free-enterprise system that allowed him, a middle-class boy from Winchester, Tennessee, to earn billions of dollars on Wall Street. The foundation accordingly allocates 40% of its annual grants to programmes with names such as 'character development', 'freedom and free enterprise' and 'exceptional cognitive talent and genius'.
  • Unlike most of his peers, however, Templeton thought that the principles of progress should also apply to religion. He described himself as "an enthusiastic Christian" — but was also open to learning from Hinduism, Islam and other religious traditions. Why, he wondered, couldn't religious ideas be open to the type of constructive competition that had produced so many advances in science and the free market?
  • That question sparked Templeton's mission to make religion "just as progressive as medicine or astronomy".
  • Early Templeton prizes had nothing to do with science: the first went to the Catholic missionary Mother Teresa of Calcutta in 1973.
  • By the 1980s, however, Templeton had begun to realize that fields such as neuroscience, psychology and physics could advance understanding of topics that are usually considered spiritual matters — among them forgiveness, morality and even the nature of reality. So he started to appoint scientists to the prize panel, and in 1985 the award went to a research scientist for the first time: Alister Hardy, a marine biologist who also investigated religious experience. Since then, scientists have won with increasing frequency.
  • "There's a distinct feeling in the research community that Templeton just gives the award to the most senior scientist they can find who's willing to say something nice about religion," says Harold Kroto, a chemist at Florida State University in Tallahassee, who was co-recipient of the 1996 Nobel Prize in Chemistry and describes himself as a devout atheist.
  • Yet Templeton saw scientists as allies. They had what he called "the humble approach" to knowledge, as opposed to the dogmatic approach. "Almost every scientist will agree that they know so little and they need to learn," he once said.
  • Templeton wasn't interested in funding mainstream research, says Barnaby Marsh, the foundation's executive vice-president. Templeton wanted to explore areas — such as kindness and hatred — that were not well known and did not attract major funding agencies. Marsh says Templeton wondered, "Why is it that some conflicts go on for centuries, yet some groups are able to move on?"
  • Templeton's interests gave the resulting list of grants a certain New Age quality (See Table 1). For example, in 1999 the foundation gave $4.6 million for forgiveness research at the Virginia Commonwealth University in Richmond, and in 2001 it donated $8.2 million to create an Institute for Research on Unlimited Love (that is, altruism and compassion) at Case Western Reserve University in Cleveland, Ohio. "A lot of money wasted on nonsensical ideas," says Kroto. Worse, says Coyne, these projects are profoundly corrupting to science, because the money tempts researchers into wasting time and effort on topics that aren't worth it. If someone is willing to sell out for a million dollars, he says, "Templeton is there to oblige him".
  • At the same time, says Marsh, the 'dean of value investing', as Templeton was known on Wall Street, had no intention of wasting his money on junk science or unanswerables such as whether God exists. So before pursuing a scientific topic he would ask his staff to get an assessment from appropriate scholars — a practice that soon evolved into a peer-review process drawing on experts from across the scientific community.
  • Because Templeton didn't like bureaucracy, adds Marsh, the foundation outsourced much of its peer review and grant giving. In 1996, for example, it gave $5.3 million to the American Association for the Advancement of Science (AAAS) in Washington DC, to fund efforts that work with evangelical groups to find common ground on issues such as the environment, and to get more science into seminary curricula. In 2006, Templeton gave $8.8 million towards the creation of the Foundational Questions Institute (FQXi), which funds research on the origins of the Universe and other fundamental issues in physics, under the leadership of Anthony Aguirre, an astrophysicist at the University of California, Santa Cruz, and Max Tegmark, a cosmologist at the Massachusetts Institute of Technology in Cambridge.
  • But external peer review hasn't always kept the foundation out of trouble. In the 1990s, for example, Templeton-funded organizations gave book-writing grants to Guillermo Gonzalez, an astrophysicist now at Grove City College in Pennsylvania, and William Dembski, a philosopher now at the Southwestern Baptist Theological Seminary in Fort Worth, Texas. After obtaining the grants, both later joined the Discovery Institute — a think-tank based in Seattle, Washington, that promotes intelligent design. Other Templeton grants supported a number of college courses in which intelligent design was discussed. Then, in 1999, the foundation funded a conference at Concordia University in Mequon, Wisconsin, in which intelligent-design proponents confronted critics. Those awards became a major embarrassment in late 2005, during a highly publicized court fight over the teaching of intelligent design in schools in Dover, Pennsylvania. A number of media accounts of the intelligent design movement described the Templeton Foundation as a major supporter — a charge that Charles Harper, then senior vice-president, was at pains to deny.
  • Some foundation officials were initially intrigued by intelligent design, Harper told The New York Times. But disillusionment set in — and Templeton funding stopped — when it became clear that the theory was part of a political movement from the Christian right wing, not science. Today, the foundation website explicitly warns intelligent-design researchers not to bother submitting proposals: they will not be considered.
  • Avowedly antireligious scientists such as Coyne and Kroto see the intelligent-design imbroglio as a symptom of their fundamental complaint that religion and science should not mix at all. "Religion is based on dogma and belief, whereas science is based on doubt and questioning," says Coyne, echoing an argument made by many others. "In religion, faith is a virtue. In science, faith is a vice." The purpose of the Templeton Foundation is to break down that wall, he says — to reconcile the irreconcilable and give religion scholarly legitimacy.
  • Foundation officials insist that this is backwards: questioning is their reason for being. Religious dogma is what they are fighting. That does seem to be the experience of many scientists who have taken Templeton money. During the launch of FQXi, says Aguirre, "Max and I were very suspicious at first. So we said, 'We'll try this out, and the minute something smells, we'll cut and run.' It never happened. The grants we've given have not been connected with religion in any way, and they seem perfectly happy about that."
  • John Cacioppo, a psychologist at the University of Chicago, also had concerns when he started a Templeton-funded project in 2007. He had just published a paper with survey data showing that religious affiliation had a negative correlation with health among African-Americans — the opposite of what he assumed the foundation wanted to hear. He was bracing for a protest when someone told him to look at the foundation's website. They had displayed his finding on the front page. "That made me relax a bit," says Cacioppo.
  • Yet, even scientists who give the foundation high marks for openness often find it hard to shake their unease. Sean Carroll, a physicist at the California Institute of Technology in Pasadena, is willing to participate in Templeton-funded events — but worries about the foundation's emphasis on research into 'spiritual' matters. "The act of doing science means that you accept a purely material explanation of the Universe, that no spiritual dimension is required," he says.
  • It hasn't helped that Jack Templeton is much more politically and religiously conservative than his father was. The foundation shows no obvious rightwards trend in its grant-giving and other activities since John Templeton's death — and it is barred from supporting political activities by its legal status as a not-for-profit corporation. Still, many scientists find it hard to trust an organization whose president has used his personal fortune to support right-leaning candidates and causes such as the 2008 ballot initiative that outlawed gay marriage in California.
  • Scientists' discomfort with the foundation is probably inevitable in the current political climate, says Scott Atran, an anthropologist at the University of Michigan in Ann Arbor. The past 30 years have seen the growing power of the Christian religious right in the United States, the rise of radical Islam around the world, and religiously motivated terrorist attacks such as those in the United States on 11 September 2001. Given all that, says Atran, many scientists find it almost impossible to think of religion as anything but fundamentalism at war with reason.
  • the foundation has embraced the theme of 'science and the big questions' — an open-ended list that includes topics such as 'Does the Universe have a purpose?'
  • Towards the end of Templeton's life, says Marsh, he became increasingly concerned that this reaction was getting in the way of the foundation's mission: that the word 'religion' was alienating too many good scientists.
  • The peer-review and grant-making system has also been revamped: whereas in the past the foundation ran an informal mix of projects generated by Templeton and outside grant seekers, the system is now organized around an annual list of explicit funding priorities.
  • The foundation is still a work in progress, says Jack Templeton — and it always will be. "My father believed," he says, "we were all called to be part of an ongoing creative process. He was always trying to make people think differently." "And he always said, 'If you're still doing today what you tried to do two years ago, then you're not making progress.'" 
lee weiting

designer babies - 0 views

  •  
    This topic will be discussed in the presentation in class, but I feel I should also start a discussion here on designer babies. This is an article about designer babies, which have raised many ethical issues, such as the rights of the fetus. Many people are against designer babies. However, this article points out one advantage: a designer baby can save a sick person. If it can save a life, I do not think it is unethical to do it. On the other hand, I would think it unethical if one knows of the presence of a serious genetic mutation and still chooses to pass it on to future generations. I think the most basic form of ethics is to do no harm to people whenever possible: strive for a balance, strive for the best. In my view, this is what determines whether an action is ethical. From this perspective, I think designer babies are ethical and should be allowed. Any other views? =)
Weiye Loh

Designers Make Data Much Easier to Digest - NYTimes.com - 0 views

  • On the benefit side, people become more engaged when they can filter information that is presented visually and make discoveries on their own. On the risk side, Professor Shneiderman says, tools as powerful as visualizations have the potential to mislead or confuse consumers. And privacy implications arise, he says, as increasing amounts of personal, housing, medical and financial data become widely accessible, searchable and viewable.
  • In the 1990s, Professor Shneiderman developed tree mapping, which uses interlocking rectangles to represent complicated data sets. The rectangles are sized and colored to convey different kinds of information, like revenue or geographic region, says Jim Bartoo, the chief executive of the Hive Group, a software company that uses tree mapping to help companies and government agencies monitor operational data. When executives or plant managers see the nested rectangles grouped together, he adds, they should be able to immediately spot anomalies or trends. In one tree-map visualization of a sales department on the Hive Group site, red tiles represent underperforming sales representatives while green tiles represent people who exceeded their sales quotas. So it’s easy to identify the best sales rep in the company: the biggest green tile. But viewers can also reorganize the display — by region, say, or by sales manager — to see whether patterns exist that explain why some employees are falling behind. “It’s the ability of the human brain to pick out size and color” that makes tree mapping so intuitive, Mr. Bartoo says. Information visualization, he adds, “suddenly starts answering questions that you didn’t know you had.”
  • data visualization is no longer just a useful tool for researchers and corporations. It’s also an entertainment and marketing vehicle.
  • ...2 more annotations...
  • In 2009, for example, Stamen Design, a technology and design studio in San Francisco, created a live visualization of Twitter traffic during the MTV Video Music awards. In the animated graphic, floating bubbles, each displaying a photograph of a celebrity, expanded or contracted depending on the volume of Twitter activity about each star. The project provided a visceral way for viewers to understand which celebrities dominated Twitter talk in real time, says Eric Rodenbeck, the founder and creative director of Stamen Design.
  • Designers once created visual representations of data that would steer viewers to information that seemed the most important or newsworthy, he says; now they create visualizations that contain attractive overview images and then let users direct their own interactive experience — wherever it may take them. “It’s not about leading with a certain view anymore,” he says. “It’s about delivering the view that gets the most participation and engagement.”
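The tree mapping described above can be sketched with the simplest "slice-and-dice" layout: divide a rectangle into strips whose areas are proportional to each item's value. A minimal sketch with invented sales figures (production treemaps like the Hive Group's use more sophisticated squarified layouts and recurse into nested groups):

```python
def slice_and_dice(items, x, y, w, h):
    """Lay out (label, value) pairs as vertical strips inside the
    rectangle (x, y, w, h); each strip's area is proportional to its value."""
    total = sum(value for _, value in items)
    rects = []
    for label, value in items:
        strip_w = w * value / total  # width share = value share
        rects.append((label, x, y, strip_w, h))
        x += strip_w
    return rects

# Hypothetical sales-by-rep data on a 100x100 canvas.
sales = [("Ana", 50), ("Ben", 30), ("Cho", 20)]
for label, rx, ry, rw, rh in slice_and_dice(sales, 0, 0, 100, 100):
    print(label, rw * rh)  # areas 5000, 3000, 2000 track the values
```

Color would carry a second variable (quota attainment, say, as in the red/green sales example above), and alternating the split direction at each nesting level yields the familiar nested-rectangle look.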
Weiye Loh

What If The Very Theory That Underlies Why We Need Patents Is Wrong? | Techdirt - 0 views

  • Scott Walker points us to a fascinating paper by Carliss Y. Baldwin and Eric von Hippel, suggesting that some of the most basic theories on which the patent system is based are wrong, and because of that, the patent system might hinder innovation.
  • numerous other research papers and case studies that suggest that the patent system quite frequently hinders innovation, but this one approaches it from a different angle than ones we've seen before, and is actually quite convincing. It looks at the putative theory that innovation comes from a direct profit motive of a single corporation looking to sell the good in the market, and for that to work, the company needs to take the initial invention and get temporary monopoly protection to keep out competitors in order to recoup the cost of research and development.
  • the paper goes through a whole bunch of studies suggesting that quite frequently innovation happens through a very different process: either individuals or companies directly trying to solve a problem they themselves have (i.e., the initial motive is not to profit directly from sales, but to help them in something they were doing) or through a much more collaborative process, whereby multiple parties all contribute to the process of innovation, somewhat openly, recognizing that as each contributes some, everyone benefits. As the report notes: This result hinges on the fact that the innovative design itself is a non-rival good: each participant in a collaborative effort gets the value of the whole design, but incurs only a fraction of the design cost.
  • patents are designed to make that sort of thing more difficult, because it assumes that the initial act of invention is the key point, rather than all the incremental innovations built on top of it that all parties can benefit from.
  • the report points to numerous studies that show, when given the chance, many companies freely share their ideas with others, recognizing the direct benefit they get.
  • Even more importantly, the paper finds that due to technological advances and the ability to more rapidly and easily communicate and collaborate widely, these forms of innovation (innovation for direct use as well as collaborative innovation) are becoming more and more viable across a variety of industries, which in the past may have relied more on the old way of innovating (a single company innovating for the profit of selling that product).
  • because of the ease of communication and collaboration these days, there's tremendous incentive for those companies that innovate for their own use to collaborate with others, since the benefit from others improving as well help improve their own uses. Thus, the overall incentives are to move much more to a collaborative form of innovation in the market. That has huge implications for a patent system designed to help the "old model" of innovation (producer inventing for the market) and not the increasingly regular one (collaborative innovation for usage).
  • no one is saying that producer-based innovation (company inventing to sell on the market) doesn't occur or won't continue to occur. But it is an open policy question as to whether or not our innovation policies should favor that model over other models -- when evidence suggests that a significant amount of innovation occurs in these other ways -- and that amount is growing rapidly.
  •  
    What If The Very Theory That Underlies Why We Need Patents Is Wrong? from the collaborative-innovation-at-work dept
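The non-rival point in the quote above reduces to simple arithmetic: each of n collaborators bears roughly a 1/n share of the design cost but receives the full value of the design, so collaborating pays whenever the value to each participant exceeds cost/n, a far weaker condition than the value > cost that a lone producer faces. A toy illustration (all numbers hypothetical):

```python
def collaboration_pays(value_to_each, total_design_cost, n_participants):
    """A design is non-rival: every participant enjoys the full value,
    but each bears only a 1/n share of the design cost."""
    cost_share = total_design_cost / n_participants
    return value_to_each > cost_share

# Hypothetical numbers: a design worth 10 to each user, costing 60 to make.
solo = collaboration_pays(10, 60, 1)    # 10 > 60? no  -> not viable alone
group = collaboration_pays(10, 60, 10)  # 10 > 6?  yes -> viable as a group
```

At n = 1 this collapses to the lone producer's condition; every additional collaborator relaxes it further, which is the paper's point about why open, collaborative innovation becomes viable.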
Weiye Loh

Rationally Speaking: A new eugenics? - 0 views

  • an interesting article I read recently, penned by Julian Savulescu for the Practical Ethics blog.
  • Savulescu discusses an ongoing controversy in Germany about genetic testing of human embryos. The Leopoldina, Germany’s equivalent of the National Academy of Sciences, has recommended genetic testing of pre-implant embryos, to screen for serious and incurable defects. The German Chancellor, Angela Merkel, has agreed to allow a parliamentary vote on this issue, but also said that she personally supports a ban on this type of testing. Her fear is that the testing would quickly lead to “designer babies,” i.e. to parents making choices about their unborn offspring based not on knowledge about serious disease, but simply because they happen to prefer a particular height or eye color.
  • He infers from Merkel’s comments (and many similar others) that people tend to think of selecting traits like eye color as eugenics, while acting to avoid incurable disease is not considered eugenics. He argues that this is exactly wrong: eugenics, as he points out, means “well born,” so eugenicists have historically been concerned with eliminating traits that would harm society (Wendell Holmes’ “three generations of imbeciles”), not with simple aesthetic choices. As Savulescu puts it: “[eugenics] is selecting embryos which are better, in this context, have better lives. Being healthy rather than sick is ‘better.’ Having blond hair and blue eyes is not in any plausible sense ‘better,’ even if people mistakenly think so.”
  • And there is another, related aspect of discussions about eugenics that should be at the forefront of our consideration: what was particularly objectionable about American and Nazi early 20th century eugenics is that the state, not individuals, was to make decisions about who could reproduce and who couldn’t. Savulescu continues: “to grant procreative liberty is the only way to avoid the objectionable form of eugenics that the Nazis practiced.” In other words, it makes all the difference in the world if it is an individual couple who decides to have or not have a baby, or if it is the state that imposes a particular reproductive choice on its citizenry.
  • but then Savulescu expands his argument to a point where I begin to feel somewhat uncomfortable. He says: “[procreative liberty] involves the freedom to choose a child with red hair or blond hair or no hair.”
  • Savulescu has suddenly sneaked into his argument for procreative liberty the assumption that all choices in this area are on the same level. But while it is hard to object to action aimed at avoiding devastating diseases, it is not quite so obvious to me what arguments favor the idea of designer babies. The first intervention can be justified, for instance, on consequentialist grounds because it reduces the pain and suffering of both the child and the parents. The second intervention is analogous to shopping for a new bag, or a new car, which means that it commodifies the act of conceiving a baby, thus degrading its importance. I’m not saying that that in itself is sufficient to make it illegal, but the ethics of it is different, and that difference cannot simply be swept under the broad rug of “procreative liberty.”
  • designing babies is to treat them as objects, not as human beings, and there are a couple of strong philosophical traditions in ethics that go squarely against that (I’m thinking, obviously, of Kant’s categorical imperative, as well as of virtue ethics; not sure what a consequentialist would say about this, probably she would remain neutral on the issue).
  • Commodification of human beings has historically produced all sorts of bad stuff, from slavery to exploitative prostitution, and arguably to war (after all, we are using our soldiers as means to gain access to power, resources, territory, etc.)
  • And of course, there is the issue of access. Across-the-board “procreative liberty” of the type envisioned by Savulescu will cost money because it requires considerable resources.
  • imagine that these parents decide to purchase the ability to produce babies that have the type of characteristics that will make them more successful in society: taller, more handsome, blue eyed, blonde, more symmetrical, whatever. We have just created yet another way for the privileged to augment and pass their privileges to the next generation — in this case literally through their genes, not just as real estate or bank accounts. That would quickly lead to an even further divide between the haves and the have-nots, more inequality, more injustice, possibly, in the long run, even two different species (why not design your babies so that they can’t breed with certain types of undesirables, for instance?). Is that the sort of society that Savulescu is willing to envision in the name of his total procreative liberty? That begins to sound like the libertarian version of the eugenic ideal, something potentially only slightly less nightmarish than the early 20th century original.
  • Rich people already have better choices when it comes to their babies. Taller and richer men can choose between more attractive and physically fit women and attractive women can choose between more physically fit and rich men. So it is reasonable to conclude that on average rich and attractive people already have more options when it comes to their offspring. Moreover no one is questioning their right to do so and this is based on a respect for a basic instinct which we all have and which is exactly why these people would choose to have a DB. Is it fair for someone to be tall because his daddy was rich and married a supermodel but not because his daddy was rich and had his DNA resequenced? Is the former good because it's natural and the latter bad because it's not? This isn't at all obvious to me.
  • Not to mention that rich people can provide better health care, education and nutrition to their children and again no one is questioning their right to do so. Wouldn't a couple of inches be pretty negligible compared to getting into a good school? Aren't we applying double standards by objecting to this issue alone? Do we really live in a society that values equal opportunities? People may be equal before the law but they are not equal to each other, and each one of us is tacitly accepting that fact when we acknowledge the social hierarchy (in other words, every time we interact with someone who is our superior). I am not crazy about this fact but that's just how people are and this has to be taken into account when discussing this.
Weiye Loh

Some Scientists Fear Computer Chips Will Soon Hit a Wall - NYTimes.com - 0 views

  • The problem has the potential to counteract an important principle in computing that has held true for decades: Moore’s Law. It was Gordon Moore, a founder of Intel, who first predicted that the number of transistors that could be nestled comfortably and inexpensively on an integrated circuit chip would double roughly every two years, bringing exponential improvements in consumer electronics.
  • In their paper, Dr. Burger and fellow researchers simulated the electricity used by more than 150 popular microprocessors and estimated that by 2024 computing speed would increase only 7.9 times, on average. By contrast, if there were no limits on the capabilities of the transistors, the maximum potential speedup would be nearly 47 times, the researchers said.
  • Some scientists disagree, if only because new ideas and designs have repeatedly come along to preserve the computer industry’s rapid pace of improvement. Dr. Dally of Nvidia, for instance, is sanguine about the future of chip design. “The good news is that the old designs are really inefficient, leaving lots of room for innovation,” he said.
  • Shekhar Y. Borkar, a fellow at Intel Labs, called Dr. Burger’s analysis “right on the dot,” but added: “His conclusions are a little different than what my conclusions would have been. The future is not as golden as it used to be, but it’s not bleak either.” Dr. Borkar cited a variety of new design ideas that he said would help ease the limits identified in the paper. Intel recently developed a way to vary the power consumed by different parts of a processor, making it possible to have both slower, lower-power transistors as well as faster-switching ones that consume more power. Increasingly, today’s processor chips contain two or more cores, or central processing units, that make it possible to use multiple programs simultaneously. In the future, Intel computers will have different kinds of cores optimized for different kinds of problems, only some of which require high power.
  • And while Intel announced in May that it had found a way to use 3-D design to crowd more transistors onto a single chip, that technology does not solve the energy problem described in the paper about dark silicon. The authors of the paper said they had tried to account for some of the promised innovation, and they argued that the question was how far innovators could go in overcoming the power limits.
  • “It’s one of those ‘If we don’t innovate, we’re all going to die’ papers,” Dr. Patterson said in an e-mail. “I’m pretty sure it means we need to innovate, since we don’t want to die!”
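The shortfall the paper describes can be put in back-of-envelope terms: Moore's Law compounds the transistor budget at roughly 2^(years/2), while the simulated speedup lags far behind it. The 7.9x and ~47x figures below come from the article; the 12-year span and the doubling function are illustrative assumptions:

```python
def moores_law_factor(years, doubling_period=2.0):
    """Compound transistor-count doubling over a span of years."""
    return 2 ** (years / doubling_period)

# The paper contrasts a ~47x ideal speedup with a simulated ~7.9x by 2024
# once power limits ("dark silicon") bite, even though transistor budgets
# themselves keep compounding. Over an assumed 12-year span:
transistor_growth = moores_law_factor(12)  # 2**6 = 64x more transistors
fraction_realized = 7.9 / 47.0             # ~0.17 of the ideal speedup
```

So even with a roughly 64x transistor budget, power limits would let chips realize only about a sixth of the ideal speedup, which is the "dark silicon" problem in a nutshell.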
Weiye Loh

flaneurose: The KK Chemo Misdosage Incident - 0 views

  • Labelling the pump that dispenses in ml/hr in a different color from the pump that dispenses in ml/day would be an obvious remedy that would have addressed the KK incident. It's the common-sensical solution that anyone can think of.
  • Sometimes, design flaws like that really do occur because engineers can't see the wood for the trees.
  • But sometimes the team is aware of these issues and highlights them to management, but the manufacturer still proceeds as before. Why is that? Because in addition to design principles, one must be mindful that there are always business considerations at play as well. Manufacturing two (or more) separate designs for pumps incurs greater costs, eliminates the ability to standardize across pumps, increases holding inventory, and overall increases complexity of business and manufacturing processes, and decreases economies of scale. All this naturally reduces profitability. It's not just pumps. Even medicines are typically sold in identical-looking vials with identically colored vial caps, with only the text on the vial labels differentiating them in both drug type and concentration. You can imagine what kinds of accidents can potentially happen there.
  • Legally, the manufacturer has clearly labelled on the pump (in text) the appropriate dosing regime, or for a medicine vial, the type of drug and concentration. The manufacturer has hence fulfilled its duty. Therefore, if there are any mistakes in dosing, the liability for the error lies with the hospital and not the manufacturer of the product. The victim of such a dosing error can be said to be an "externalized cost"; the beneficiaries of the victim's suffering are the manufacturer, who enjoys greater profitability, the hospital, which enjoys greater cost-savings, and the public, who save on healthcare. Is it ethical of the manufacturer, to "pass on" liability to the hospital? To make it difficult (or at least not easy) for the hospital to administer the right dosage? Maybe the manufacturer is at fault, but IMHO, it's very hard to say.
  • When a chemo incident like the one that happened in KK occurs, there are cries of public remonstration, and the pendulum may swing the other way. Hospitals might make the decision to purchase more expensive and better designed pumps (that is, if they are available). Then years down the road, when a bureaucrat (or a management consultant) with an eye to trim costs looks through the hospital purchasing orders, they may make the suggestion that $XXX could be saved by buying the generic version of such-and-such a product, instead of the more expensive version. And they would not be wrong, just... myopic. Then the cycle starts again. Sometimes it's not only about human factors. It could be about policy, or human nature, or business fundamentals, or just the plain old, dysfunctional way the world works.
    • Weiye Loh
       
       Interesting article. Explains clearly why our 'ethical' considerations are always limited to a particular context and specific considerations.
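The colour-coding remedy has a software analogue worth noting: make the unit part of the value's type rather than a label on the casing, so a ml/day prescription cannot silently be entered into a ml/hr pump. A minimal sketch in Python (the class and function names are hypothetical, not any real infusion-pump interface):

```python
class DoseRate:
    """A flow rate that carries its unit, normalised internally to ml/hr."""
    def __init__(self, amount_ml, per):
        if per == "hour":
            self.ml_per_hour = amount_ml
        elif per == "day":
            self.ml_per_hour = amount_ml / 24.0
        else:
            raise ValueError(f"unknown period: {per!r}")

def program_pump(rate):
    """Accept only a DoseRate, never a bare number, so a per-day
    prescription cannot be keyed in as a per-hour setting."""
    if not isinstance(rate, DoseRate):
        raise TypeError("pump must be programmed with a DoseRate")
    return rate.ml_per_hour

# A 240 ml/day prescription and a 10 ml/hr one are the same flow rate.
daily = DoseRate(240, per="day")
hourly = DoseRate(10, per="hour")
```

The same idea, labels that the machine enforces instead of labels a tired nurse must read, is what distinguishes a design fix from a training fix.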
Weiye Loh

Understanding the universe: Order of creation | The Economist - 0 views

  • In their “The Grand Design”, the authors discuss “M-theory”, a composite of various versions of cosmological “string” theory that was developed in the mid-1990s, and announce that, if it is confirmed by observation, “we will have found the grand design.” Yet this is another tease. Despite much talk of the universe appearing to be “fine-tuned” for human existence, the authors do not in fact think that it was in any sense designed. And once more we are told that we are on the brink of understanding everything.
  • The authors rather fancy themselves as philosophers, though they would presumably balk at the description, since they confidently assert on their first page that “philosophy is dead.” It is, allegedly, now the exclusive right of scientists to answer the three fundamental why-questions with which the authors purport to deal in their book. Why is there something rather than nothing? Why do we exist? And why this particular set of laws and not some other?
  • It is hard to evaluate their case against recent philosophy, because the only subsequent mention of it, after the announcement of its death, is, rather oddly, an approving reference to a philosopher’s analysis of the concept of a law of nature, which, they say, “is a more subtle question than one may at first think.” There are actually rather a lot of questions that are more subtle than the authors think. It soon becomes evident that Professor Hawking and Mr Mlodinow regard a philosophical problem as something you knock off over a quick cup of tea after you have run out of Sudoku puzzles.
  • The main novelty in “The Grand Design” is the authors’ application of a way of interpreting quantum mechanics, derived from the ideas of the late Richard Feynman, to the universe as a whole. According to this way of thinking, “the universe does not have just a single existence or history, but rather every possible version of the universe exists simultaneously.” The authors also assert that the world’s past did not unfold of its own accord, but that “we create history by our observation, rather than history creating us.” They say that these surprising ideas have passed every experimental test to which they have been put, but that is misleading in a way that is unfortunately typical of the authors. It is the bare bones of quantum mechanics that have proved to be consistent with what is presently known of the subatomic world. The authors’ interpretations and extrapolations of it have not been subjected to any decisive tests, and it is not clear that they ever could be.
  • Once upon a time it was the province of philosophy to propose ambitious and outlandish theories in advance of any concrete evidence for them. Perhaps science, as Professor Hawking and Mr Mlodinow practice it in their airier moments, has indeed changed places with philosophy, though probably not quite in the way that they think.
  •  
    Order of creation: Even Stephen Hawking doesn't quite manage to explain why we are here
Weiye Loh

Kevin Kelly and Steven Johnson on Where Ideas Come From | Magazine - 0 views

  • Say the word “inventor” and most people think of a solitary genius toiling in a basement. But two ambitious new books on the history of innovation—by Steven Johnson and Kevin Kelly, both longtime wired contributors—argue that great discoveries typically spring not from individual minds but from the hive mind. In Where Good Ideas Come From: The Natural History of Innovation, Johnson draws on seven centuries of scientific and technological progress, from Gutenberg to GPS, to show what sorts of environments nurture ingenuity. He finds that great creative milieus, whether MIT or Los Alamos, New York City or the World Wide Web, are like coral reefs—teeming, diverse colonies of creators who interact with and influence one another.
  • Seven centuries are an eyeblink in the scope of Kelly’s book, What Technology Wants, which looks back over some 50,000 years of history and peers nearly that far into the future. His argument is similarly sweeping: Technology, Kelly believes, can be seen as a sort of autonomous life-form, with intrinsic goals toward which it gropes over the course of its long development. Those goals, he says, are much like the tendencies of biological life, which over time diversifies, specializes, and (eventually) becomes more sentient.
  • We share a fascination with the long history of simultaneous invention: cases where several people come up with the same idea at almost exactly the same time. Calculus, the electrical battery, the telephone, the steam engine, the radio—all these groundbreaking innovations were hit upon by multiple inventors working in parallel with no knowledge of one another.
  • It’s amazing that the myth of the lone genius has persisted for so long, since simultaneous invention has always been the norm, not the exception. Anthropologists have shown that the same inventions tended to crop up in prehistory at roughly similar times, in roughly the same order, among cultures on different continents that couldn’t possibly have contacted one another.
  • Also, there’s a related myth—that innovation comes primarily from the profit motive, from the competitive pressures of a market society. If you look at history, innovation doesn’t come just from giving people incentives; it comes from creating environments where their ideas can connect.
  • The musician Brian Eno invented a wonderful word to describe this phenomenon: scenius. We normally think of innovators as independent geniuses, but Eno’s point is that innovation comes from social scenes, from passionate and connected groups of people.
  • It turns out that the lone genius entrepreneur has always been a rarity—there’s far more innovation coming out of open, nonmarket networks than we tend to assume.
  • Really, we should think of ideas as connections, in our brains and among people. Ideas aren’t self-contained things; they’re more like ecologies and networks. They travel in clusters.
  • ideas are networks
  • In part, that’s because ideas that leap too far ahead are almost never implemented—they aren’t even valuable. People can absorb only one advance, one small hop, at a time. Gregor Mendel’s ideas about genetics, for example: He formulated them in 1865, but they were ignored for 35 years because they were too advanced. Nobody could incorporate them. Then, when the collective mind was ready and his idea was only one hop away, three different scientists independently rediscovered his work within roughly a year of one another.
  • Charles Babbage is another great case study. His “analytical engine,” which he started designing in the 1830s, was an incredibly detailed vision of what would become the modern computer, with a CPU, RAM, and so on. But it couldn’t possibly have been built at the time, and his ideas had to be rediscovered a hundred years later.
  • I think there are a lot of ideas today that are ahead of their time. Human cloning, autopilot cars, patent-free law—all are close technically but too many steps ahead culturally. Innovating is about more than just having the idea yourself; you also have to bring everyone else to where your idea is. And that becomes really difficult if you’re too many steps ahead.
  • The scientist Stuart Kauffman calls this the “adjacent possible.” At any given moment in evolution—of life, of natural systems, or of cultural systems—there’s a space of possibility that surrounds any current configuration of things. Change happens when you take that configuration and arrange it in a new way. But there are limits to how much you can change in a single move.
  • Which is why the great inventions are usually those that take the smallest possible step to unleash the most change. That was the difference between Tim Berners-Lee’s successful HTML code and Ted Nelson’s abortive Xanadu project. Both tried to jump into the same general space—a networked hypertext—but Tim’s approach did it with a dumb half-step, while Ted’s earlier, more elegant design required that everyone take five steps all at once.
  • Also, the steps have to be taken in the right order. You can’t invent the Internet and then the digital computer. This is true of life as well. The building blocks of DNA had to be in place before evolution could build more complex things. One of the key ideas I’ve gotten from you, by the way—when I read your book Out of Control in grad school—is this continuity between biological and technological systems.
  • technology is something that can give meaning to our lives, particularly in a secular world.
  • He had this bleak, soul-sucking vision of technology as an autonomous force for evil. You also present technology as a sort of autonomous force—as wanting something, over the long course of its evolution—but it’s a more balanced and ultimately positive vision, which I find much more appealing than the alternative.
  • As I started thinking about the history of technology, there did seem to be a sense in which, during any given period, lots of innovations were in the air, as it were. They came simultaneously. It appeared as if they wanted to happen. I should hasten to add that it’s not a conscious agency; it’s a lower form, something like the way an organism or bacterium can be said to have certain tendencies, certain trends, certain urges. But it’s an agency nevertheless.
  • technology wants increasing diversity—which is what I think also happens in biological systems, as the adjacent possible becomes larger with each innovation. As tech critics, I think we have to keep this in mind, because when you expand the diversity of a system, that leads to an increase in great things and an increase in crap.
  • the idea that the most creative environments allow for repeated failure.
  • And for wastes of time and resources. If you knew nothing about the Internet and were trying to figure it out from the data, you would reasonably conclude that it was designed for the transmission of spam and porn. And yet at the same time, there’s more amazing stuff available to us than ever before, thanks to the Internet.
  • To create something great, you need the means to make a lot of really bad crap. Another example is spectrum. One reason we have this great explosion of innovation in wireless right now is that the US deregulated spectrum. Before that, spectrum was something too precious to be wasted on silliness. But when you deregulate—and say, OK, now waste it—then you get Wi-Fi.
  • If we didn’t have genetic mutations, we wouldn’t have us. You need error to open the door to the adjacent possible.
  • image of the coral reef as a metaphor for where innovation comes from. So what, today, are some of the most reeflike places in the technological realm?
  • Twitter—not to see what people are having for breakfast, of course, but to see what people are talking about, the links to articles and posts that they’re passing along.
  • second example of an information coral reef, and maybe the less predictable one, is the university system. As much as we sometimes roll our eyes at the ivory-tower isolation of universities, they continue to serve as remarkable engines of innovation.
  • Life seems to gravitate toward these complex states where there’s just enough disorder to create new things. There’s a rate of mutation just high enough to let interesting new innovations happen, but not so many mutations that every new generation dies off immediately.
  • technology is an extension of life. Both life and technology are faces of the same larger system.
  •  
    Kevin Kelly and Steven Johnson on Where Ideas Come From | By Wired, September 27, 2010, 2:00 pm | Wired October 2010
Weiye Loh

Rationally Speaking: The sorry state of higher education - 0 views

  • two disconcerting articles crossed my computer screen, both highlighting the increasingly sorry state of higher education, though from very different perspectives. The first is “Ed Dante’s” (actually a pseudonym) piece in the Chronicle of Higher Education, entitled The Shadow Scholar. The second is Gregory Petsko’s A Faustian Bargain, published of all places in Genome Biology.
  • There is much to be learned by educators in the Shadow Scholar piece, except the moral that “Dante” would like us to take from it. The anonymous author writes: “Pointing the finger at me is too easy. Why does my business thrive? Why do so many students prefer to cheat rather than do their own work? Say what you want about me, but I am not the reason your students cheat.”
  • The point is that plagiarism and cheating happen for a variety of reasons, one of which is the existence of people like Mr. Dante and his company, who set up a business that is clearly unethical and should be illegal. So, pointing fingers at him and his ilk is perfectly reasonable. Yes, there obviously is a “market” for cheating in higher education, and there are complex reasons for it, but he is in a position similar to that of the drug dealer who insists that he is simply providing the commodity to satisfy society’s demand. Much too easy of a way out, and one that doesn’t fly in the case of drug dealers, and shouldn’t fly in the case of ghost cheaters.
  • As a teacher at the City University of New York, I am constantly aware of the possibility that my students might cheat on their tests. I do take some elementary precautionary steps
  • Still, my job is not that of the policeman. My students are adults who theoretically are there to learn. If they don’t value that learning and prefer to pay someone else to fake it, so be it, ultimately it is they who lose in the most fundamental sense of the term. Just like drug addicts, to return to my earlier metaphor. And just as in that other case, it is enablers like Mr. Dante who simply can’t duck the moral blame.
  • An open letter to the president of SUNY-Albany, penned by molecular biologist Gregory Petsko. The SUNY-Albany president has recently announced the closing — for budgetary reasons — of the departments of French, Italian, Classics, Russian and Theater Arts at his university.
  • Petsko begins by taking on one of the alleged reasons why SUNY-Albany is slashing the humanities: low enrollment. He correctly points out that the problem can be solved overnight at the stroke of a pen: stop abdicating your responsibilities as educators and actually put constraints on what your students have to take in order to graduate. Make courses in English literature, foreign languages, philosophy and critical thinking, the arts and so on, mandatory or one of a small number of options that the students must consider in order to graduate.
  • But, you might say, that’s cheating the market! Students clearly don’t want to take those courses, and a business should cater to its customers. That type of reasoning is among the most pernicious and idiotic I’ve ever heard. Students are not clients (if anything, their parents, who usually pay the tuition, are), they are not shopping for a new bag or pair of shoes. They do not know what is best for them educationally, that’s why they go to college to begin with. If you are not convinced about how absurd the students-as-clients argument is, consider an analogy: does anyone with functioning brain cells argue that since patients in a hospital pay a bill, they should be dictating how the brain surgeon operates? I didn’t think so.
  • Petsko then tackles the second lame excuse given by the president of SUNY-Albany (and common among the upper administration of plenty of public universities): I can’t do otherwise because of the legislature’s draconian cuts. Except that university budgets are simply too complicated for there not to be any other option. I know this firsthand: I’m on a special committee at my own college looking at how to creatively deal with budget cuts handed down to us from the very same (admittedly small minded and dysfunctional) New York state legislature that has prompted SUNY-Albany’s action. As Petsko points out, the president there didn’t even think of involving the faculty and staff in a broad discussion of how to deal with the crisis; he simply announced the cuts on a Friday afternoon and then ran for cover. An example of very poor leadership to say the least, and downright hypocrisy considering all the talk that the same administrator has been dishing out about the university “community.”
  • Finally, there is the argument that the humanities don’t pay for their own way, unlike (some of) the sciences (some of the time). That is indubitably true, but irrelevant. Universities are not businesses, they are places of higher learning. Yes, of course they need to deal with budgets, fund raising and all the rest. But the financial and administrative side has one goal and one goal only: to provide the best education to the students who attend that university.
  • That education simply must include the sciences, philosophy, literature, and the arts, as well as more technical or pragmatic offerings such as medicine, business and law. Why? Because that’s the kind of liberal education that makes for an informed and intelligent citizenry, without which our democracy is but empty talk, and our lives nothing but slavery to the marketplace.
  • Maybe this is not how education works in the US. I thought that general (or compulsory) education (i.e. up to high school) is designed to make sure that citizens in a democratic country can perform their civic duties. A balanced and well-rounded education, which includes a healthy mixture of science and humanities, is indeed very important for this purpose. However, college-level education is for personal growth, and therefore the person must have a large say about what kind of classes he or she chooses to take. I am disturbed by Massimo's hospital analogy. Students are not ill. They don't go to college to be cured, or to be good citizens. They go to college to learn things that *they* want to learn. Patients are passive. Students are not. I agree that students typically do not know what kind of education is good for them. But who does?
  • Students do have a say in their education. They pick their major, and there are electives. But I object to the idea that they can customize their major any way they want. That assumes they know what the best education for them is; they don't. That's the point of education.
  • The students are in your class to get a good grade, any learning that takes place is purely incidental. Those good grades will look good on their transcript and might convince a future employer that they are smart and thus are worth paying more.
  • I don't know what the dollar to GPA exchange rate is these days, but I don't doubt that there is one.
  • Just how many of your students do you think will remember the extensive complex jargon of philosophy more than a couple of months after they leave your classroom?
  • "And our lives nothing but slavery to the marketplace." We are there. Welcome. Where have you been all this time? In a capitalistic/plutocratic society money is power (and free speech too, according to the Supreme Court). Money means a larger/better house/car/clothing/vacation than your neighbor, and consequently better mating opportunities. You can mostly blame the women for that one, I think, just like the peacock's tail.
  • If a student of surgery fails to learn, they might maim, kill or cripple someone. If an engineer of airplanes fails to learn, they might design a faulty aircraft that fails and kills people. If a student of chemistry fails to learn, they might design a faulty drug with unintended and unfortunate side effects. But what exactly would be the harm if a student of philosophy fails to learn what Aristotle had to say about the elements or what Plato had to say about perfect forms? These things are so divorced from people's everyday activities as to be rendered all but meaningless.
  • human knowledge grows by leaps and bounds every day, but human brain capacity does not, so the portion of human knowledge you can personally hold gets smaller by the minute. Learn (and remember) as much as you can as fast as you can and you will still lose ground. You certainly have your work cut out for you emphasizing the importance of Thales in the Age of Twitter and whatever follows it next year.
Weiye Loh

American Airlines worker fired for replying to web user complaint - Telegraph - 0 views

  • American Airlines has been caught in a row over customer engagement after it fired a contract worker for responding to a complaint about their website.
  • Mr Curtis, an American web designer, was unimpressed by his experience using the AA.com website, and made that clear in a lengthy open letter to the company on his blog, complete with a suggested redesign of the homepage (see the gallery above), saying he would be “ashamed” of the site. He also suggested that they fire their design team.
  • Mr X, a web designer, responded to the letter, saying in a long email that Mr Curtis was "so very right" about the problems of the website, but that it was less to do with staff incompetence and more to do with the internal culture of the airline. Mr X also told Mr Curtis that they were improving the website, but that it was a slow process. By speaking to Mr Curtis, however, Mr X was in breach of a non-disclosure agreement (NDA) he had signed with AA, barring him from revealing sensitive information.
  • ...1 more annotation...
  • after bosses at American Airlines became aware of Mr X's response, they searched through their email database, found his identity and fired him for a breach of the NDA. Mr Curtis says he is "horrified" at Mr X's treatment. He said on his blog: "AA fired Mr X because he cared. They fired him because he cared enough to reach out to a dissatisfied customer and help clear the company's name in the best way he could."
Weiye Loh

Learn to love uncertainty and failure, say leading thinkers | Edge question | Science |... - 0 views

  • Being comfortable with uncertainty, knowing the limits of what science can tell us, and understanding the worth of failure are all valuable tools that would improve people's lives, according to some of the world's leading thinkers.
  • The ideas were submitted as part of an annual exercise by the web magazine Edge, which invites scientists, philosophers and artists to opine on a major question of the moment. This year it was, "What scientific concept would improve everybody's cognitive toolkit?"
  • the public often misunderstands the scientific process and the nature of scientific doubt. This can fuel public rows over the significance of disagreements between scientists about controversial issues such as climate change and vaccine safety.
  • ...13 more annotations...
  • Carlo Rovelli, a physicist at the University of Aix-Marseille, emphasised the uselessness of certainty. He said that the idea of something being "scientifically proven" was practically an oxymoron and that the very foundation of science is to keep the door open to doubt.
  • "A good scientist is never 'certain'. Lack of certainty is precisely what makes conclusions more reliable than the conclusions of those who are certain: because the good scientist will be ready to shift to a different point of view if better elements of evidence, or novel arguments emerge. Therefore certainty is not only something of no use, but is in fact damaging, if we value reliability."
  • physicist Lawrence Krauss of Arizona State University agreed. "In the public parlance, uncertainty is a bad thing, implying a lack of rigour and predictability. The fact that global warming estimates are uncertain, for example, has been used by many to argue against any action at the present time," he said.
  • "However, uncertainty is a central component of what makes science successful. Being able to quantify uncertainty, and incorporate it into models, is what makes science quantitative, rather than qualitative. Indeed, no number, no measurement, no observable in science is exact. Quoting numbers without attaching an uncertainty to them implies they have, in essence, no meaning."
  • Neil Gershenfeld, director of the Massachusetts Institute of Technology's Centre for Bits and Atoms wants everyone to know that "truth" is just a model. "The most common misunderstanding about science is that scientists seek and find truth. They don't – they make and test models," he said.
  • "Building models is very different from proclaiming truths. It's a never-ending process of discovery and refinement, not a war to win or destination to reach. Uncertainty is intrinsic to the process of finding out what you don't know, not a weakness to avoid. Bugs are features – violations of expectations are opportunities to refine them. And decisions are made by evaluating what works better, not by invoking received wisdom."
  • writer and web commentator Clay Shirky suggested that people should think more carefully about how they see the world. His suggestion was the Pareto principle, a pattern whereby the top 1% of the population control 35% of the wealth or, on Twitter, the top 2% of users send 60% of the messages. Sometimes known as the "80/20 rule", the Pareto principle means that the average is far from the middle. It is applicable to many complex systems. "And yet, despite a century of scientific familiarity, samples drawn from Pareto distributions are routinely presented to the public as anomalies, which prevents us from thinking clearly about the world," said Shirky. "We should stop thinking that average family income and the income of the median family have anything to do with one another, or that enthusiastic and normal users of communications tools are doing similar things, or that extroverts should be only moderately more connected than normal people. We should stop thinking that the largest future earthquake or market panic will be as large as the largest historical one; the longer a system persists, the likelier it is that an event twice as large as all previous ones is coming."
  • Kevin Kelly, editor-at-large of Wired, pointed to the value of negative results. "We can learn nearly as much from an experiment that does not work as from one that does. Failure is not something to be avoided but rather something to be cultivated. That's a lesson from science that benefits not only laboratory research, but design, sport, engineering, art, entrepreneurship, and even daily life itself. All creative avenues yield the maximum when failures are embraced."
  • Michael Shermer, publisher of the Skeptic Magazine, wrote about the importance of thinking "bottom up not top down", since almost everything in nature and society happens this way.
  • But most people don't see things that way, said Shermer. "Bottom up reasoning is counterintuitive. This is why so many people believe that life was designed from the top down, and why so many think that economies must be designed and that countries should be ruled from the top down."
  • Roger Schank, a psychologist and computer scientist, proposed that we should all know the true meaning of "experimentation", which he said had been ruined by bad schooling, where pupils learn that scientists conduct experiments and if we copy exactly what they did in our high school labs we will get the results they got. "In effect we learn that experimentation is boring, is something done by scientists and has nothing to do with our daily lives." Instead, he said, proper experiments are all about assessing and gathering evidence. "In other words, the scientific activity that surrounds experimentation is about thinking clearly in the face of evidence obtained as the result of an experiment. But people who don't see their actions as experiments, and those who don't know how to reason carefully from data, will continue to learn less well from their own experiences than those who do."
  • Lisa Randall, a physicist at Harvard University, argued that perhaps "science" itself would be a useful concept for wider appreciation. "The idea that we can systematically understand certain aspects of the world and make predictions based on what we've learned – while appreciating and categorising the extent and limitations of what we know – plays a big role in how we think.
  • "Many words that summarise the nature of science such as 'cause and effect', 'predictions', and 'experiments', as well as words that describe probabilistic results such as 'mean', 'median', 'standard deviation', and the notion of 'probability' itself help us understand more specifically what this means and how to interpret the world and behaviour within it."
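Shirky's claim that in Pareto-distributed data "the average is far from the middle" is easy to check numerically. The short Python sketch below is an illustration added here, not part of the article; the shape parameter 1.16 is simply the textbook value that yields roughly an 80/20 split.

```python
import random

random.seed(0)

# Draw 100,000 samples from a Pareto distribution. A shape parameter
# of about 1.16 roughly reproduces the classic "80/20" split.
samples = sorted(random.paretovariate(1.16) for _ in range(100_000))

mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
top1_share = sum(samples[-len(samples) // 100:]) / sum(samples)

# In a heavy-tailed distribution the mean is dragged far above the
# median by a handful of extreme values -- the average really is far
# from the middle.
print(f"mean = {mean:.2f}, median = {median:.2f}")
print(f"top 1% of samples hold {top1_share:.0%} of the total")
```

Running it shows the mean sitting well above the median, with the top 1% of samples holding a disproportionate share of the total, which is why summarising such data with an average is misleading.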
Paul Melissa

Designer Babies? - 2 views

  •  
    On the Early Show, viewers were asked whether designer babies were ethical. Medical specialists have predicted that in 10-20 years' time, designer babies will be more widespread. On one hand, this is a private domestic choice of individuals and parents. On the other, is it not akin to performing plastic surgery on a child, disregarding his or her choice and opinion even before he or she is born?
Weiye Loh

In University Supercomputing, the Fastest May No Longer Be the Best - Technology - The ... - 0 views

  •  
    With big money and competitiveness at stake, smarter - not faster - designs may be winners. 
Weiye Loh

Effect of alcohol on risk of coronary heart diseas... [Vasc Health Risk Manag. 2006] - ... - 0 views

  • Studies of the effects of alcohol consumption on health outcomes should recognise the methodological biases they are likely to face, and design, analyse and interpret their studies accordingly. While regular moderate alcohol consumption during middle-age probably does reduce vascular risk, care should be taken when making general recommendations about safe levels of alcohol intake. In particular, it is likely that any promotion of alcohol for health reasons would do substantially more harm than good.
  • The consistency in the vascular benefit associated with moderate drinking (compared with non-drinking) observed across different studies, together with the existence of credible biological pathways, strongly suggests that at least some of this benefit is real.
  • However, because of biases introduced by: choice of reference categories; reverse causality bias; variations in alcohol intake over time; and confounding, some of it is likely to be an artefact. For heavy drinking, different study biases have the potential to act in opposing directions, and as such, the true effects of heavy drinking on vascular risk are uncertain. However, because of the known harmful effects of heavy drinking on non-vascular mortality, the problem is an academic one.
kenneth yang

CYBER TROOPERS MAKE ARREST FOR SEXUAL SOLICITATION OF A MINOR - 8 views

BALTIMORE, Aug. 12 -- The Maryland State Police issued the following news release: A man who had been making online plans to allegedly have sex with someone he thought was a 13-year old girl, had h...

started by kenneth yang on 18 Aug 09 no follow-up yet