
New Media Ethics 2009 course / Group items tagged mind


Weiye Loh

How the Internet Gets Inside Us : The New Yorker - 0 views

  • N.Y.U. professor Clay Shirky—the author of “Cognitive Surplus” and many articles and blog posts proclaiming the coming of the digital millennium—is the breeziest and seemingly most self-confident
  • Shirky believes that we are on the crest of an ever-surging wave of democratized information: the Gutenberg printing press produced the Reformation, which produced the Scientific Revolution, which produced the Enlightenment, which produced the Internet, each move more liberating than the one before.
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • ...17 more annotations...
  • If ideas of democracy and freedom emerged at the end of the printing-press era, it wasn’t by some technological logic but because of parallel inventions, like the ideas of limited government and religious tolerance, very hard won from history.
  • As Andrew Pettegree shows in his fine new study, “The Book in the Renaissance,” the mainstay of the printing revolution in seventeenth-century Europe was not dissident pamphlets but royal edicts, printed by the thousand: almost all the new media of that day were working, in essence, for kinglouis.gov.
  • Even later, full-fledged totalitarian societies didn’t burn books. They burned some books, while keeping the printing presses running off such quantities that by the mid-fifties Stalin was said to have more books in print than Agatha Christie.
  • Many of the more knowing Never-Betters turn for cheer not to messy history and mixed-up politics but to psychology—to the actual expansion of our minds.
  • The argument, advanced in Andy Clark’s “Supersizing the Mind” and in Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness. We may not act better than we used to, but we sure think differently than we did.
  • Cognitive entanglement, after all, is the rule of life. My memories and my wife’s intermingle. When I can’t recall a name or a date, I don’t look it up; I just ask her. Our machines, in this way, become our substitute spouses and plug-in companions.
  • But, if cognitive entanglement exists, so does cognitive exasperation. Husbands and wives deny each other’s memories as much as they depend on them. That’s fine until it really counts (say, in divorce court). In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • Nicholas Carr, in “The Shallows,” William Powers, in “Hamlet’s BlackBerry,” and Sherry Turkle, in “Alone Together,” all bear intimate witness to a sense that the newfound land, the ever-present BlackBerry-and-instant-message world, is one whose price, paid in frayed nerves and lost reading hours and broken attention, is hardly worth the gains it gives us. “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • Carr is most concerned about the way the Internet breaks down our capacity for reflective thought.
  • Powers’s reflections are more family-centered and practical. He recounts, very touchingly, stories of family life broken up by the eternal consultation of smartphones and computer monitors
  • He then surveys seven Wise Men—Plato, Thoreau, Seneca, the usual gang—who have something to tell us about solitude and the virtues of inner space, all of it sound enough, though he tends to overlook the significant point that these worthies were not entirely in favor of the kinds of liberties that we now take for granted and that made the new dispensation possible.
  • Similarly, Nicholas Carr cites Martin Heidegger for having seen, in the mid-fifties, that new technologies would break the meditational space on which Western wisdoms depend. Since Heidegger had not long before walked straight out of his own meditational space into the arms of the Nazis, it’s hard to have much nostalgia for this version of the past. One feels the same doubts when Sherry Turkle, in “Alone Together,” her touching plaint about the destruction of the old intimacy-reading culture by the new remote-connection-Internet culture, cites studies that show a dramatic decline in empathy among college students, who apparently are “far less likely to say that it is valuable to put oneself in the place of others or to try and understand their feelings.” What is to be done?
  • Among Ever-Wasers, the Harvard historian Ann Blair may be the most ambitious. In her book “Too Much to Know: Managing Scholarly Information Before the Modern Age,” she makes the case that what we’re going through is like what others went through a very long while ago. Against the cartoon history of Shirky or Tooby, Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began. She wants us to resist “trying to reduce the complex causal nexus behind the transition from Renaissance to Enlightenment to the impact of a technology or any particular set of ideas.” Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • Everyone complained about what the new information technologies were doing to our minds. Everyone said that the flood of books produced a restless, fractured attention. Everyone complained that pamphlets and poems were breaking kids’ ability to concentrate, that big good handmade books were ignored, swept aside by printed works that, as Erasmus said, “are foolish, ignorant, malignant, libelous, mad.” The reader consulting a card catalogue in a library was living a revolution as momentous, and as disorienting, as our own.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers
  • That uniquely evil and necessary thing, the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points. In the period when many of the big, classic books that we no longer have time to read were being written, the general complaint was that there wasn’t enough time to read big, classic books.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
Weiye Loh

Designing Minds: Uncovered Video Profiles of Prominent Designers | Brain Pickings - 0 views

  • “My favorite quote about what is art and what is design and what might be the difference comes from Donald Judd: ‘Design has to work, art doesn’t.’ And these things all have to work. They have a function outside my desire for self-expression.” ~ Stefan Sagmeister

  • “When designers are given the opportunity to have a bigger role, real change, real transformation actually happens.” ~ Yves Behar

  •  
    In 2008, a now-defunct podcast program by Adobe called Designing Minds - not to be confused with frogdesign's excellent design mind magazine - did a series of video profiles of prominent artists and designers, including Stefan Sagmeister (whose Things I have learned in my life so far isn't merely one of the best-produced, most beautiful design books of the past decade, it's also a poignant piece of modern existential philosophy), Yves Behar (of One Laptop Per Child fame), Marian Bantjes (whose I Wonder remains my favorite typographic treasure) and many more, offering a rare glimpse of these remarkable creators' life stories, worldviews and the precious peculiarities that make them be who they are and create what they create
Weiye Loh

BrainGate gives paralysed the power of mind control | Science | The Observer - 0 views

  • brain-computer interface, or BCI
  • is a branch of science exploring how computers and the human brain can be meshed together. It sounds like science fiction (and can look like it too), but it is motivated by a desire to help chronically injured people. They include those who have lost limbs, people with Lou Gehrig's disease, or those who have been paralysed by severe spinal-cord injuries. But the group of people it might help the most are those whom medicine assumed were beyond all hope: sufferers of "locked-in syndrome".
  • These are often stroke victims whose perfectly healthy minds end up trapped inside bodies that can no longer move. The most famous example was French magazine editor Jean-Dominique Bauby who managed to dictate a memoir, The Diving Bell and the Butterfly, by blinking one eye. In the book, Bauby, who died in 1997 shortly after the book was published, described the prison his body had become for a mind that still worked normally.
  • ...9 more annotations...
  • Now the project is involved with a second set of human trials, pushing the technology to see how far it goes and trying to miniaturise it and make it wireless for a better fit in the brain. BrainGate's concept is simple. It posits that the problem for most patients does not lie in the parts of the brain that control movement, but with the fact that the pathways connecting the brain to the rest of the body, such as the spinal cord, have been broken. BrainGate plugs into the brain, picks up the right neural signals and beams them into a computer where they are translated into moving a cursor or controlling a computer keyboard. By this means, paralysed people can move a robot arm or drive their own wheelchair, just by thinking about it.
  • he and his team are decoding the language of the human brain. This language is made up of electronic signals fired by billions of neurons and it controls everything from our ability to move, to think, to remember and even our consciousness itself. Donoghue's genius was to develop a deceptively small device that can tap directly into the brain and pick up those signals for a computer to translate them. Gold wires are implanted into the brain's tissue at the motor cortex, which controls movement. Those wires feed back to a tiny array – an information storage device – attached to a "pedestal" in the skull. Another wire feeds from the array into a computer. A test subject with BrainGate looks like they have a large plug coming out the top of their heads. Or, as Donoghue's son once described it, they resemble the "human batteries" in The Matrix.
  • BrainGate's highly advanced computer programs are able to decode the neuron signals picked up by the wires and translate them into the subject's desired movement. In crude terms, it is a form of mind-reading based on the idea that thinking about moving a cursor to the right will generate detectably different brain signals than thinking about moving it to the left.
  • The technology has developed rapidly, and last month BrainGate passed a vital milestone when one paralysed patient went past 1,000 days with the implant still in her brain and allowing her to move a computer cursor with her thoughts. The achievement, reported in the prestigious Journal of Neural Engineering, showed that the technology can continue to work inside the human body for unprecedented amounts of time.
  • Donoghue talks enthusiastically of one day hooking up BrainGate to a system of electronic stimulators plugged into the muscles of the arm or legs. That would open up the prospect of patients moving not just a cursor or their wheelchair, but their own bodies.
  • If Nagle's motor cortex was no longer working healthily, the entire BrainGate project could have been rendered pointless. But when Nagle was plugged in and asked to imagine moving his limbs, the signals beamed out with a healthy crackle. "We asked him to imagine moving his arm to the left and to the right and we could hear the activity," Donoghue says. When Nagle first moved a cursor on a screen using only his thoughts, he exclaimed: "Holy shit!"
  • BrainGate and other BCI projects have also piqued the interest of the government and the military. BCI is melding man and machine like no other sector of medicine or science and there are concerns about some of the implications. First, beyond detecting and translating simple movement commands, BrainGate may one day pave the way for mind-reading. A device to probe the innermost thoughts of captured prisoners or dissidents would prove very attractive to some future military or intelligence service. Second, there is the idea that BrainGate or other BCI technologies could pave the way for robot warriors controlled by distant humans using only their minds. At a conference in 2002, a senior American defence official, Anthony Tether, enthused over BCI. "Imagine a warrior with the intellect of a human and the immortality of a machine." Anyone who has seen Terminator might worry about that.
  • Donoghue acknowledges the concerns but has little time for them. When it comes to mind-reading, current BrainGate technology has enough trouble with translating commands for making a fist, let alone probing anyone's mental secrets
  • As for robot warriors, Donoghue was slightly more circumspect. At the moment most BCI research, including BrainGate projects, that touch on the military is focused on working with prosthetic limbs for veterans who have lost arms and legs. But Donoghue thinks it is healthy for scientists to be aware of future issues. "As long as there is a rational dialogue and scientists think about where this is going and what is the reasonable use of the technology, then we are on a good path," he says.
  •  
    The robotic arm clutched a glass and swung it over a series of coloured dots that resembled a Twister gameboard. Behind it, a woman sat entirely immobile in a wheelchair. Slowly, the arm put the glass down, narrowly missing one of the dots. "She's doing that!" exclaims Professor John Donoghue, watching a video of the scene on his office computer - though the woman onscreen had not moved at all. "She actually has the arm under her control," he says, beaming with pride. "We told her to put the glass down on that dot." The woman, who is almost completely paralysed, was using Donoghue's groundbreaking technology to control the robot arm using only her thoughts. Called BrainGate, the device is implanted into her brain and hooked up to a computer to which she sends mental commands. The video played on, giving Donoghue, a silver-haired and neatly bearded man of 62, even more reason to feel pleased. The patient was not satisfied with her near miss and the robot arm lifted the glass again. After a brief hover, the arm positioned the glass on the dot.
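One way to picture the decoding step described in the annotations above is as a simple classifier over neural firing rates. The sketch below is purely illustrative and assumes nothing about BrainGate's actual algorithms or hardware; the two simulated "neurons", their tuning, and the decoding rule are all invented for the example.

```python
import numpy as np

# Toy model: two simulated neurons whose firing rates shift slightly
# depending on whether the subject imagines moving a cursor left or right.
rng = np.random.default_rng(0)

def simulated_trial(intent):
    """Return noisy firing rates (spikes/sec) for one imagined movement."""
    base = np.array([20.0, 20.0])
    tuning = np.array([4.0, -4.0]) if intent == "right" else np.array([-4.0, 4.0])
    return base + tuning + rng.normal(0.0, 2.0, size=2)

# Gather labelled "training" trials, then decode by comparing each neuron's
# rate with its overall mean -- the crudest possible version of the idea that
# imagining "right" and imagining "left" produce detectably different signals.
trials = [(simulated_trial(intent), intent) for intent in ("left", "right") for _ in range(50)]
rates = np.array([r for r, _ in trials])
labels = [intent for _, intent in trials]
mean_rates = rates.mean(axis=0)

def decode(rate):
    # Neuron 0 fires above its mean for "right", neuron 1 for "left" (by construction).
    return "right" if (rate[0] - mean_rates[0]) > (rate[1] - mean_rates[1]) else "left"

accuracy = np.mean([decode(r) == label for r, label in zip(rates, labels)])
print(f"toy decoder accuracy: {accuracy:.2f}")
```

Real systems record from many more channels and use far more sophisticated statistical decoders, but the basic pipeline the article describes (record signals, decode intent, drive a cursor or arm) has this same shape.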
Weiye Loh

CultureLab: Thoughts within thoughts make us human - 0 views

  • Corballis reckons instead that the thought processes that made language possible were non-linguistic, but had recursive properties to which language adapted: “Where Chomsky views thought through the lens of language, I prefer to view language through the lens of thought.” From this, says Corballis, follows a better understanding of how humans actually think - and a very different perspective on language and its evolution.
  • So how did recursion help ancient humans pull themselves up by their cognitive bootstraps? It allowed us to engage in mental time travel, says Corballis, the recursive operation whereby we recall past episodes into present consciousness and imagine future ones, and sometimes even insert fictions into reality.
  • theory of mind is uniquely highly developed in humans: I may know not only what you are thinking, says Corballis, but also that you know what I am thinking. Most - but not all - language depends on this capability.
  • ...3 more annotations...
  • Corballis’s theories also help make sense of apparent anomalies such as linguist and anthropologist Daniel Everett’s work on the Pirahã, an Amazonian people who hit the headlines because of debates over whether their language has any words for colours, and, crucially, numbers. Corballis now thinks that the Pirahã language may not be that unusual, and cites the example of other languages from oral cultures, such as the Iatmul language of New Guinea, which is also said to lack recursion.
  • The emerging point is that recursion developed in the mind and need not be expressed in a language. But, as Corballis is at pains to point out, although recursion was critical to the evolution of the human mind, it is not one of those "modules" much beloved of evolutionary psychologists, many of which are said to have evolved in the Pleistocene. Nor did it depend on some genetic mutation or the emergence of some new neuron or brain structure. Instead, he suggests it came of progressive increases in short-term memory and capacity for hierarchical organisation - all dependent in turn on incremental increases in brain size.
  • But as Corballis admits, this brain size increase was especially rapid in the Pleistocene. These incremental changes can lead to sudden more substantial jumps - think water boiling or balloons popping. In mathematics these shifts are called catastrophes. So, notes Corballis, wryly, "we may perhaps conclude that the emergence of the human mind was catastrophic". Let's hope that's not too prescient.
  •  
    His new book, The Recursive Mind: The origins of human language, thought, and civilization, is a fascinating and well-grounded exposition of the nature and power of recursion. In its ultra-reasonable way, this is quite a revolutionary book because it attacks key notions about language and thought. Most notably, it disputes the idea, argued especially by linguist Noam Chomsky, that thought is fundamentally linguistic - in other words, you need language before you can have thoughts.
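Recursion, the property at the centre of Corballis's argument, is easy to demonstrate in code: an operation embeds its own output inside itself. The tiny sketch below only illustrates the idea of "thoughts within thoughts" (here, nested theory-of-mind statements); it makes no claim about how the brain actually implements recursion.

```python
def nested_belief(agents, thought, depth):
    """Embed a thought inside successive minds: 'A thinks that B thinks that ...'."""
    if depth == 0:
        return thought  # base case: the bare, unembedded thought
    agent = agents[depth % len(agents)]
    # Recursive case: wrap everything produced so far inside one more mind.
    return f"{agent} thinks that {nested_belief(agents, thought, depth - 1)}"

print(nested_belief(["Alice", "Bob"], "it will rain", 3))
# -> Bob thinks that Alice thinks that Bob thinks that it will rain
```

The same pattern, a structure containing an instance of itself, is what Corballis argues underlies mental time travel and theory of mind, with language adapting to express it.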
Weiye Loh

Libertarianism Is Marxism of the Right - 4 views

http://www.commongroundcommonsense.org/forums/lofiversion/index.php/t21933.html "Because 95 percent of the libertarianism one encounters at cocktail parties, on editorial pages, and on Capitol Hil...

Libertarianism Marxism

started by Weiye Loh on 28 Aug 09 no follow-up yet
Weiye Loh

Greening the screen » Scienceline - 0 views

  • But not all documentaries take such a novel approach. Randy Olson, a marine biologist-turned-filmmaker at the University of Southern California, is a harsh critic of what he sees as a very literal-minded, information-heavy approach within the environmental film genre. Well-intentioned environmental documentary filmmakers are just “making their same, boring, linear, one-dimensional explorations of issues,” said Olson. “The public’s not buying it.”
  • The problem may run deeper than audience tallies — after all, An Inconvenient Truth currently ranks as the sixth-highest grossing documentary in the United States. However, a 2010 study by social psychologist Jessica Nolan found that while the film increased viewers’ concern about global warming, that concern didn’t translate into any substantial action a month later.
  • To move a larger audience to action, Olson advocates a shift from the literal-minded world of documentary into the imaginative world of narrative.
  • ...4 more annotations...
  • One organization using this approach is the Science and Entertainment Exchange, a program of the National Academy of Sciences. The Exchange puts writers, producers, and directors in touch with scientists and engineers who can answer specific questions or just brainstorm ideas. For example, writers for the TV show Fringe changed their original plot point of mind control through hypnosis to magnetic manipulation of brain waves after speaking with a neuroscientist at the Salk Institute for Biological Studies in La Jolla, California.
  • Hollywood, Health and Society (HHS), a program of the Centers for Disease Control and Prevention, takes a similar approach by providing free resources to the entertainment industry. HHS connects writers and producers — from prime time dramas like Law and Order and House to daytime soap operas — with experts who can provide accurate health information for their scripts.
  • HHS Director Sandra Buffington admits that environmental issues, especially climate change, pose particular challenges for communicators because at first glance, they are not as immediately relevant as personal health issues. However, she believes that by focusing on real, human stories — climate refugees displaced by rising water levels, farmers unable to grow food because of drought, children sick because of outbreaks of malaria — the issues of the planet will crystallize into something tangible. All scientists need to do is provide the information, and the professional creative storytellers will do the rest, she says.
  • Olson also takes a cue from television. He points to the rise of reality TV shows as a clear indication of where the general public interest lies. If environmentalists want to capture that interest, Olson thinks they need to start experimenting with these innovative types of unscripted forms. “That’s where the cutting edge exists,” he said.
  •  
    For environmentalists trying to use entertainment to shape broad public attitudes and behaviors, nothing could be more important than understanding how to reach these hard-to-get people. Something that will speak to them, something that will change their minds, and most importantly, something that will incite them to action. A documentary might not be that something.
Weiye Loh

Random Thoughts Of A Free Thinker: "'Intellectual inoculation' the best defence" - 0 views

  •  
    ""'Intellectual inoculation' the best defence": "It is time Singaporeans stopped looking to systems to safeguard children. Such overreliance on the state spells trouble. If a child were to go to a country without such checks, the affinity for the virtual world may take over. In the war of minds, every young mind needs to be educated to erect its own barriers. This is primarily the responsibility of the family. The state can only do so much.""
Weiye Loh

Kevin Kelly and Steven Johnson on Where Ideas Come From | Magazine - 0 views

  • Say the word “inventor” and most people think of a solitary genius toiling in a basement. But two ambitious new books on the history of innovation—by Steven Johnson and Kevin Kelly, both longtime wired contributors—argue that great discoveries typically spring not from individual minds but from the hive mind. In Where Good Ideas Come From: The Natural History of Innovation, Johnson draws on seven centuries of scientific and technological progress, from Gutenberg to GPS, to show what sorts of environments nurture ingenuity. He finds that great creative milieus, whether MIT or Los Alamos, New York City or the World Wide Web, are like coral reefs—teeming, diverse colonies of creators who interact with and influence one another.
  • Seven centuries are an eyeblink in the scope of Kelly’s book, What Technology Wants, which looks back over some 50,000 years of history and peers nearly that far into the future. His argument is similarly sweeping: Technology, Kelly believes, can be seen as a sort of autonomous life-form, with intrinsic goals toward which it gropes over the course of its long development. Those goals, he says, are much like the tendencies of biological life, which over time diversifies, specializes, and (eventually) becomes more sentient.
  • We share a fascination with the long history of simultaneous invention: cases where several people come up with the same idea at almost exactly the same time. Calculus, the electrical battery, the telephone, the steam engine, the radio—all these groundbreaking innovations were hit upon by multiple inventors working in parallel with no knowledge of one another.
  • ...25 more annotations...
  • It’s amazing that the myth of the lone genius has persisted for so long, since simultaneous invention has always been the norm, not the exception. Anthropologists have shown that the same inventions tended to crop up in prehistory at roughly similar times, in roughly the same order, among cultures on different continents that couldn’t possibly have contacted one another.
  • Also, there’s a related myth—that innovation comes primarily from the profit motive, from the competitive pressures of a market society. If you look at history, innovation doesn’t come just from giving people incentives; it comes from creating environments where their ideas can connect.
  • The musician Brian Eno invented a wonderful word to describe this phenomenon: scenius. We normally think of innovators as independent geniuses, but Eno’s point is that innovation comes from social scenes, from passionate and connected groups of people.
  • It turns out that the lone genius entrepreneur has always been a rarity—there’s far more innovation coming out of open, nonmarket networks than we tend to assume.
  • Really, we should think of ideas as connections, in our brains and among people. Ideas aren’t self-contained things; they’re more like ecologies and networks. They travel in clusters.
  • ideas are networks
  • In part, that’s because ideas that leap too far ahead are almost never implemented—they aren’t even valuable. People can absorb only one advance, one small hop, at a time. Gregor Mendel’s ideas about genetics, for example: He formulated them in 1865, but they were ignored for 35 years because they were too advanced. Nobody could incorporate them. Then, when the collective mind was ready and his idea was only one hop away, three different scientists independently rediscovered his work within roughly a year of one another.
  • Charles Babbage is another great case study. His “analytical engine,” which he started designing in the 1830s, was an incredibly detailed vision of what would become the modern computer, with a CPU, RAM, and so on. But it couldn’t possibly have been built at the time, and his ideas had to be rediscovered a hundred years later.
  • I think there are a lot of ideas today that are ahead of their time. Human cloning, autopilot cars, patent-free law—all are close technically but too many steps ahead culturally. Innovating is about more than just having the idea yourself; you also have to bring everyone else to where your idea is. And that becomes really difficult if you’re too many steps ahead.
  • The scientist Stuart Kauffman calls this the “adjacent possible.” At any given moment in evolution—of life, of natural systems, or of cultural systems—there’s a space of possibility that surrounds any current configuration of things. Change happens when you take that configuration and arrange it in a new way. But there are limits to how much you can change in a single move.
  • Which is why the great inventions are usually those that take the smallest possible step to unleash the most change. That was the difference between Tim Berners-Lee’s successful HTML code and Ted Nelson’s abortive Xanadu project. Both tried to jump into the same general space—a networked hypertext—but Tim’s approach did it with a dumb half-step, while Ted’s earlier, more elegant design required that everyone take five steps all at once.
  • Also, the steps have to be taken in the right order. You can’t invent the Internet and then the digital computer. This is true of life as well. The building blocks of DNA had to be in place before evolution could build more complex things. One of the key ideas I’ve gotten from you, by the way—when I read your book Out of Control in grad school—is this continuity between biological and technological systems.
  • technology is something that can give meaning to our lives, particularly in a secular world.
  • He had this bleak, soul-sucking vision of technology as an autonomous force for evil. You also present technology as a sort of autonomous force—as wanting something, over the long course of its evolution—but it’s a more balanced and ultimately positive vision, which I find much more appealing than the alternative.
  • As I started thinking about the history of technology, there did seem to be a sense in which, during any given period, lots of innovations were in the air, as it were. They came simultaneously. It appeared as if they wanted to happen. I should hasten to add that it’s not a conscious agency; it’s a lower form, something like the way an organism or bacterium can be said to have certain tendencies, certain trends, certain urges. But it’s an agency nevertheless.
  • technology wants increasing diversity—which is what I think also happens in biological systems, as the adjacent possible becomes larger with each innovation. As tech critics, I think we have to keep this in mind, because when you expand the diversity of a system, that leads to an increase in great things and an increase in crap.
  • the idea that the most creative environments allow for repeated failure.
  • And for wastes of time and resources. If you knew nothing about the Internet and were trying to figure it out from the data, you would reasonably conclude that it was designed for the transmission of spam and porn. And yet at the same time, there’s more amazing stuff available to us than ever before, thanks to the Internet.
  • To create something great, you need the means to make a lot of really bad crap. Another example is spectrum. One reason we have this great explosion of innovation in wireless right now is that the US deregulated spectrum. Before that, spectrum was something too precious to be wasted on silliness. But when you deregulate—and say, OK, now waste it—then you get Wi-Fi.
  • If we didn’t have genetic mutations, we wouldn’t have us. You need error to open the door to the adjacent possible.
  • image of the coral reef as a metaphor for where innovation comes from. So what, today, are some of the most reeflike places in the technological realm?
  • Twitter—not to see what people are having for breakfast, of course, but to see what people are talking about, the links to articles and posts that they’re passing along.
  • second example of an information coral reef, and maybe the less predictable one, is the university system. As much as we sometimes roll our eyes at the ivory-tower isolation of universities, they continue to serve as remarkable engines of innovation.
  • Life seems to gravitate toward these complex states where there’s just enough disorder to create new things. There’s a rate of mutation just high enough to let interesting new innovations happen, but not so many mutations that every new generation dies off immediately.
  • technology is an extension of life. Both life and technology are faces of the same larger system.
  •  
    Kevin Kelly and Steven Johnson on Where Ideas Come From. Wired, September 27, 2010 (October 2010 issue).
Meenatchi

Scientists use computer to 'read minds' on screen - 1 views

Article Summary: http://www.telegraph.co.uk/news/newstopics/howaboutthat/6482189/Scientists-use-computer-to-read-minds-on-screen.html The article talks about the discovery of the ability to read ...

online ethics progress technology

started by Meenatchi on 03 Nov 09 no follow-up yet
Jude John

What's so Original in Academic Research? - 26 views

Thanks for your comments. I may have appeared to be contradictory, but what I really meant was that ownership of IP should not be a motivating factor to innovate. I realise that in our capitalistic...

Weiye Loh

The internet: is it changing the way we think? | Technology | The Observer - 0 views

  • American magazine the Atlantic lobs an intellectual grenade into our culture. In the summer of 1945, for example, it published an essay by the Massachusetts Institute of Technology (MIT) engineer Vannevar Bush entitled "As We May Think". It turned out to be the blueprint for what eventually emerged as the world wide web. Two summers ago, the Atlantic published an essay by Nicholas Carr, one of the blogosphere's most prominent (and thoughtful) contrarians, under the headline "Is Google Making Us Stupid?".
  • Carr wrote, "I've had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn't going – so far as I can tell – but it's changing. I'm not thinking the way I used to think. I can feel it most strongly when I'm reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument and I'd spend hours strolling through long stretches of prose. That's rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I'm always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle."
  • Carr's target was not really the world's leading search engine, but the impact that ubiquitous, always-on networking is having on our cognitive processes. His argument was that our deepening dependence on networking technology is indeed changing not only the way we think, but also the structure of our brains.
  • ...9 more annotations...
  • Carr's article touched a nerve and has provoked a lively, ongoing debate on the net and in print (he has now expanded it into a book, The Shallows: What the Internet Is Doing to Our Brains). This is partly because he's an engaging writer who has vividly articulated the unease that many adults feel about the way their modi operandi have changed in response to ubiquitous networking.
  • Who bothers to write down or memorise detailed information any more, for example, when they know that Google will always retrieve it if it's needed again? The web has become, in a way, a global prosthesis for our collective memory.
  • easy to dismiss Carr's concern as just the latest episode of the moral panic that always accompanies the arrival of a new communications technology. People fretted about printing, photography, the telephone and television in analogous ways. It even bothered Plato, who argued that the technology of writing would destroy the art of remembering.
  • many commentators who accept the thrust of his argument seem not only untroubled by its far-reaching implications but are positively enthusiastic about them. When the Pew Research Centre's Internet & American Life project asked its panel of more than 370 internet experts for their reaction, 81% of them agreed with the proposition that "people's use of the internet has enhanced human intelligence".
  • As a writer, thinker, researcher and teacher, what I can attest to is that the internet is changing our habits of thinking, which isn't the same thing as changing our brains. The brain is like any other muscle – if you don't stretch it, it gets both stiff and flabby. But if you exercise it regularly, and cross-train, your brain will be flexible, quick, strong and versatile.
  • The internet is analogous to a weight-training machine for the brain, as compared with the free weights provided by libraries and books. Each method has its advantage, but used properly one works you harder. Weight machines are directive and enabling: they encourage you to think you’ve worked hard without necessarily challenging yourself. The internet can be the same: it often tells us what we think we know, spreading misinformation and nonsense while it’s at it. It can substitute surface for depth, imitation for originality, and its passion for recycling would surpass the most committed environmentalist.
  • I've seen students' thinking habits change dramatically: if information is not immediately available via a Google search, students are often stymied. But of course what a Google search provides is not the best, wisest or most accurate answer, but the most popular one.
  • But knowledge is not the same thing as information, and there is no question to my mind that the access to raw information provided by the internet is unparalleled and democratising. Admittance to elite private university libraries and archives is no longer required, as they increasingly digitise their archives. We've all read the jeremiads that the internet sounds the death knell of reading, but people read online constantly – we just call it surfing now. What they are reading is changing, often for the worse; but it is also true that the internet increasingly provides a treasure trove of rare books, documents and images, and as long as we have free access to it, then the internet can certainly be a force for education and wisdom, and not just for lies, damned lies, and false statistics.
  • In the end, the medium is not the message, and the internet is just a medium, a repository and an archive. Its greatest virtue is also its greatest weakness: it is unselective. This means that it is undiscriminating, in both senses of the word. It is indiscriminate in its principles of inclusion: anything at all can get into it. But it also – at least so far – doesn't discriminate against anyone with access to it. This is changing rapidly, of course, as corporations and governments seek to exert control over it. Knowledge may not be the same thing as power, but it is unquestionably a means to power. The question is, will we use the internet's power for good, or for evil? The jury is very much out. The internet itself is disinterested: but what we use it for is not.
  •  
    The internet: is it changing the way we think? American writer Nicholas Carr's claim that the internet is not only shaping our lives but physically altering our brains has sparked a lively and ongoing debate, says John Naughton. Below, a selection of writers and experts offer their opinion
Weiye Loh

Titans of science: David Attenborough meets Richard Dawkins | Science | The Guardian - 0 views

  • What is the one bit of science from your field that you think everyone should know? David Attenborough: The unity of life. Richard Dawkins: The unity of life that comes about through evolution, since we’re all descended from a single common ancestor. It’s almost too good to be true, that on one planet this extraordinary complexity of life should have come about by what is pretty much an intelligible process. And we’re the only species capable of understanding it.
  • RD: I know you’re working on a programme about Cambrian and pre-Cambrian fossils, David. A lot of people might think, “These are very old animals, at the beginning of evolution; they weren’t very good at what they did.” I suspect that isn’t the case? DA: They were just as good, but as generalists, most were ousted from the competition. RD: So it probably is true there’s a progressive element to evolution in the short term but not in the long term – that when a lineage branches out, it gets better for about five million years but not 500 million years. You wouldn’t see progressive improvement over that kind of time scale. DA: No, things get more and more specialised. Not necessarily better. RD: The “camera” eyes of any modern animal would be better than what had come before. DA: Certainly... but they don’t elaborate beyond function. When I listen to a soprano sing a Handel aria with an astonishing coloratura from that particular larynx, I say to myself, there has to be a biological reason that was useful at some stage. The larynx of a human being did not evolve without having some function. And the only function I can see is sexual attraction. RD: Sexual selection is important and probably underrated. DA: What I like to think is that if I think the male bird of paradise is beautiful, my appreciation of it is precisely the same as a female bird of paradise.
    • Weiye Loh
       
      Is survivability really all about sex and the reproduction of future generations?
  • People say Richard Feynman had one of these extraordinary minds that could grapple with ideas of which I have no concept. And you hear all the ancillary bits – like he was a good bongo player – that make him human. So I admire this man who could not only deal with string theory but also play the bongos. But he is beyond me. I have no idea what he was talking of.
  • ...6 more annotations...
  • RD: There does seem to be a sense in which physics has gone beyond what human intuition can understand. We shouldn't be too surprised about that because we're evolved to understand things that move at a medium pace at a medium scale. We can't cope with the very tiny scale of quantum physics or the very large scale of relativity.
  • DA: A physicist will tell me that this armchair is made of vibrations and that it's not really here at all. But when Samuel Johnson was asked to prove the material existence of reality, he just went up to a big stone and kicked it. I'm with him.
  • RD: It's intriguing that the chair is mostly empty space and the thing that stops you going through it is vibrations or energy fields. But it's also fascinating that, because we're animals that evolved to survive, what solidity is to most of us is something you can't walk through.
  • the science of the future may be vastly different from the science of today, and you have to have the humility to admit when you don't know. But instead of filling that vacuum with goblins or spirits, I think you should say, "Science is working on it."
  • DA: Yes, there was a letter in the paper [about Stephen Hawking's comments on the nonexistence of God] saying, "It's absolutely clear that the function of the world is to declare the glory of God." I thought, what does that sentence mean?!
  • What is the most difficult ethical dilemma facing science today? DA: How far do you go to preserve individual human life? RD: That’s a good one, yes. DA: I mean, what are we to do with the NHS? How can you put a value in pounds, shillings and pence on an individual’s life? There was a case with a bowel cancer drug – if you gave that drug, which costs several thousand pounds, it continued life for six weeks on. How can you make that decision?
  •  
    Of mind and matter: David Attenborough meets Richard Dawkins. We paired up Britain's most celebrated scientists to chat about the big issues: the unity of life, ethics, energy, Handel - and the joy of riding a snowmobile.
Weiye Loh

Julian Baggini: If science has not actually killed God, it has rendered Him unrecognisable - 0 views

  • If top scientists such as John Polkinghorne and Bernard d'Espagnat believe in God, that challenges the simplistic claim that science and religion are completely incompatible. It doesn't hurt that this message is being pushed with the help of the enormous wealth of the Templeton Foundation, which funds innumerable research programmes, conferences, seminars and prizes as a kind of marriage guidance service to religion and science.
  • why on earth should physicists hold this exalted place in the theological firmament?
  • it can almost be reduced to a linguistic mistake: thinking that because both physicists and theologians study fundamental forces of some kind, they must study fundamental forces of the same kind.
  • ...9 more annotations...
  • If, as Sacks argues, science is about the how and religion the why, then scientists are not authorities on religion at all. Hawking's opinions about God would carry no more weight than his taxi driver's. Believers and atheists should remove physicists from the front line and send in the philosophers and theologians as cannon fodder once again.
  • But is Sacks right? Science certainly trails a destructive path through a lot of what has traditionally passed for religion. People accuse Richard Dawkins of attacking a baby version of religion, but the fact is that there are still millions of people who do believe in the literal truth of Genesis, Noah's Ark and all. Clearly science does destroy this kind of religious faith, totally and mercilessly. Scientists are authorities on religion when they declare the earth is considerably more than 6,000 years old.
  • But they insist that religion is no longer, if it ever was, in the business of trying to come up with proto-scientific explanations of how the universe works. If that is accepted, science and religion can make their peace and both rule over their different magisteria, as the biologist Stephen Jay Gould put it.
  • People have been making a lot in the past few days of Hawking's famous sentence in A Brief History of Time: "If we discover a complete theory, it would be a triumph of human reason – for then we should know the mind of God."
  • Hawking's "mind of God" was never anything more than a metaphor for an understanding of the universe which is complete and objective. Indeed, it has been evident for some time that Hawking does not believe in anything like the traditional God of religion. "You can call the laws of science 'God' if you like," he told Channel 4 earlier this year, "but it wouldn't be a personal God that you could meet, and ask questions."
  • there is no room in the universe of Hawking or most other scientists for the activist God of the Bible. That's why so few leading scientists are religious in any traditional sense.
  • This point is often overlooked by apologists who grasp at any straw science will hold out for them. Such desperate clinging happened, disgracefully, in the last years of the philosopher Antony Flew's life. A famous atheist, Flew was said to have changed his mind, persuaded that the best explanation for the "fine-tuning" of the universe – the very precise way that its conditions make life possible – was some kind of intentional design. But what was glossed over was that he was very clear that this designer was nothing like the traditional God of the Abrahamic faiths. It was, he clearly said, rather the deist God, or the God of Aristotle, one who might set the ball rolling but then did no more than watch it trundle off over the horizon. This is no mere quibble. The deist God does not occupy some halfway house between atheism and theism. Replace Yahweh with the deist God and the Bible would make less sense than if you'd substituted Brian for Jesus.
  • it is not true that science challenges only the most primitive, literal forms of religion. It is probably going too far to say that science makes the God of Christianity, Judaism and Islam impossible, but it certainly makes him very unlikely indeed.
  • to think that their findings, and those of other scientists, have nothing to say about the credibility of religious faith is just wishful thinking. In the scientific universe, God is squeezed until his pips squeak. If he survives, then he can't do so without changing his form. Only faith makes it possible to look at such a distorted, scientifically respectable deity and claim to recognise the same chap depicted on the ceiling of the Sistine Chapel. For those without faith, that God is clearly dead, and, yes, science helped to kill him.
  •  
    Julian Baggini: If science has not actually killed God, it has rendered Him unrecognisable There is no room in the universe of Hawking or most other scientists for the activist God of the Bible
Weiye Loh

TPM: The Philosophers' Magazine | Is morality relative? Depends on your personality - 0 views

  • no real evidence is ever offered for the original assumption that ordinary moral thought and talk has this objective character. Instead, philosophers tend simply to assert that people’s ordinary practice is objectivist and then begin arguing from there.
  • If we really want to go after these issues in a rigorous way, it seems that we should adopt a different approach. The first step is to engage in systematic empirical research to figure out how the ordinary practice actually works. Then, once we have the relevant data in hand, we can begin looking more deeply into the philosophical implications – secure in the knowledge that we are not just engaging in a philosophical fiction but rather looking into the philosophical implications of people’s actual practices.
  • in the past few years, experimental philosophers have been gathering a wealth of new data on these issues, and we now have at least the first glimmerings of a real empirical research program here
  • ...8 more annotations...
  • when researchers took up these questions experimentally, they did not end up confirming the traditional view. They did not find that people overwhelmingly favoured objectivism. Instead, the results consistently point to a more complex picture. There seems to be a striking degree of conflict even in the intuitions of ordinary folks, with some people under some circumstances offering objectivist answers, while other people under other circumstances offer more relativist views. And that is not all. The experimental results seem to be giving us an ever deeper understanding of why it is that people are drawn in these different directions, what it is that makes some people move toward objectivism and others toward more relativist views.
  • consider a study by Adam Feltz and Edward Cokely. They were interested in the relationship between belief in moral relativism and the personality trait openness to experience. Accordingly, they conducted a study in which they measured both openness to experience and belief in moral relativism. To get at people’s degree of openness to experience, they used a standard measure designed by researchers in personality psychology. To get at people’s agreement with moral relativism, they told participants about two characters – John and Fred – who held opposite opinions about whether some given act was morally bad. Participants were then asked whether one of these two characters had to be wrong (the objectivist answer) or whether it could be that neither of them was wrong (the relativist answer). What they found was a quite surprising result. It just wasn’t the case that participants overwhelmingly favoured the objectivist answer. Instead, people’s answers were correlated with their personality traits. The higher a participant was in openness to experience, the more likely that participant was to give a relativist answer.
  • Geoffrey Goodwin and John Darley pursued a similar approach, this time looking at the relationship between people’s belief in moral relativism and their tendency to approach questions by considering a whole variety of possibilities. They proceeded by giving participants mathematical puzzles that could only be solved by looking at multiple different possibilities. Thus, participants who considered all these possibilities would tend to get these problems right, whereas those who failed to consider all the possibilities would tend to get the problems wrong. Now comes the surprising result: those participants who got these problems right were significantly more inclined to offer relativist answers than were those participants who got the problems wrong.
  • Shaun Nichols and Tricia Folds-Bennett looked at how people’s moral conceptions develop as they grow older. Research in developmental psychology has shown that as children grow up, they develop different understandings of the physical world, of numbers, of other people’s minds. So what about morality? Do people have a different understanding of morality when they are twenty years old than they do when they are only four years old? What the results revealed was a systematic developmental difference. Young children show a strong preference for objectivism, but as they grow older, they become more inclined to adopt relativist views. In other words, there appears to be a developmental shift toward increasing relativism as children mature. (In an exciting new twist on this approach, James Beebe and David Sackris have shown that this pattern eventually reverses, with middle-aged people showing less inclination toward relativism than college students do.)
  • People are more inclined to be relativists when they score highly in openness to experience, when they have an especially good ability to consider multiple possibilities, when they have matured past childhood (but not when they get to be middle-aged). Looking at these various effects, my collaborators and I thought that it might be possible to offer a single unifying account that explained them all. Specifically, our thought was that people might be drawn to relativism to the extent that they open their minds to alternative perspectives. There could be all sorts of different factors that lead people to open their minds in this way (personality traits, cognitive dispositions, age), but regardless of the instigating factor, researchers seemed always to be finding the same basic effect. The more people have a capacity to truly engage with other perspectives, the more they seem to turn toward moral relativism.
  • To really put this hypothesis to the test, Hagop Sarkissian, Jennifer Wright, John Park, David Tien and I teamed up to run a series of new studies. Our aim was to actually manipulate the degree to which people considered alternative perspectives. That is, we wanted to randomly assign people to different conditions in which they would end up thinking in different ways, so that we could then examine the impact of these different conditions on their intuitions about moral relativism.
  • The results of the study showed a systematic difference between conditions. In particular, as we moved toward more distant cultures, we found a steady shift toward more relativist answers – with people in the first condition tending to agree with the statement that at least one of them had to be wrong, people in the second being pretty evenly split between the two answers, and people in the third tending to reject the statement quite decisively.
  • If we learn that people’s ordinary practice is not an objectivist one – that it actually varies depending on the degree to which people take other perspectives into account – how can we then use this information to address the deeper philosophical issues about the true nature of morality? The answer here is in one way very complex and in another very simple. It is complex in that one can answer such questions only by making use of very sophisticated and subtle philosophical methods. Yet, at the same time, it is simple in that such methods have already been developed and are being continually refined and elaborated within the literature in analytic philosophy. The trick now is just to take these methods and apply them to working out the implications of an ordinary practice that actually exists.
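  • A minimal sketch of the between-condition comparison referred to above. The condition labels and every response value are invented purely for illustration; none of these numbers come from the study itself.

```python
# Hypothetical data: 1 = objectivist answer ("at least one of the two must be
# wrong"), 0 = relativist answer. Condition names are placeholders for
# progressively more culturally distant disagreeing parties.
responses = {
    "same culture":    [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],
    "distant culture": [1, 0, 1, 0, 0, 1, 1, 0, 1, 0],
    "very distant":    [0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
}

for condition, answers in responses.items():
    objectivist_rate = sum(answers) / len(answers)
    print(f"{condition:>15}: {objectivist_rate:.0%} objectivist, "
          f"{1 - objectivist_rate:.0%} relativist")
```

The shape of the analysis is the whole point: a single proportion per condition, falling as the imagined disagreement becomes more culturally distant.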
Weiye Loh

Revenge, Rape and Reason: Is Ty Oliver McDowell a Rapist or a Victim? - 0 views

  • Most people who have heard about the Craigslist rape-by-proxy of the Wyoming woman that occurred in December have been shocked not only by the brutal rape of a woman who was an innocent victim of an ex-boyfriend's sick mind, but also by the rapist who actually committed the crime. Many people believe that both men should get what they deserve. But what exactly does that mean in the case of Ty Oliver McDowell? Should the man be convicted of rape? Or is he perhaps a victim in the diabolical scheme of Jebidiah James Stipe?
  • Posing as the victim, Stipe placed an ad, complete with a picture, on Craigslist. Writing as though he were the woman, he stated in the ad that she wanted to fulfill a sexual fantasy in which she was raped, and specified that she was looking for an aggressive male who had little regard for women.
  • If McDowell is telling the truth, he saw the ad and emailed the woman (in reality Stipe posing as her), and they communicated back and forth by messenger as she detailed her fantasy and exactly what she would like done. After discussing the fantasy, McDowell broke into the woman's home on December 11, 2009, tied her to a chair, held a knife to her throat, and raped her, thus fulfilling what he claims he believed to be the woman's fantasy. At this stage we have no reason to disbelieve his story. But were his belief, and the actions based on that belief, reasonable?
  • ...9 more annotations...
  • In determining how reasonable a person's actions are, we have to look at what a normal person would do in the same situation.
  • The idea that women have rape fantasies has been perpetuated by men's magazines, pornographic movies and books. A certain segment of the male population is going to believe that such a fantasy exists in the minds of women. And sadly, though rare, it does exist in the minds of a few women, as hard as that is for most of us to accept. This fantasy obviously appeals to many men or they would not be watching these movies or buying these magazines; even normal men who would never commit a rape may harbor such fantasies. So, while the idea makes most people's skin crawl, it was "reasonable" for McDowell to believe that a woman could harbor this fantasy.
  • But would a reasonable man act upon it? Everything within most of us shouts no. But the truth is that there are many couples who, in the privacy of their homes, act out fantasies, including bondage fantasies. So is it less reasonable that a man who has such a fantasy would, if he could find a woman who shares it, act on it? The truth is that his actions may well be considered reasonable in the face of the facts as we now know them.
  • There are those who claim this man was a rapist ready to happen, and while I don't necessarily disagree, I also believe we will never know. There are probably thousands if not millions of people who have sexual fantasies, both big and small, that they have never acted upon. This man could have been one of them. On the other hand, his enthusiasm in acting out this fantasy may well be an indication that he would at some time have committed such an act on a woman he knew to be unwilling.
  • Most disturbing of all are Stipe's actions. By setting up the rape fantasy the way he did, by communicating with McDowell while pretending to be the victim, he created a situation in which the victim herself could not stop what was happening. No matter how many times she told McDowell to stop, how tearfully she begged, he was primed by Stipe to believe that this was all part of the playacting.
  • Let's not forget Craigslist. Until we pass laws making it illegal to post ads like these, there will be sites that make their money without caring who gets hurt in the process. In fact, the more notoriety this site seems to get, the more people seem to want to use it.
  • Just on the rape-fantasy-for-women point: a number of studies show it to be a fairly significant fantasy that about 1/3 to 2/3 of women have. Nothing can condone what he did, but it's easy to believe he may have thought she was okay with it. There are many people who play out bondage and torture fantasies, and we can't judge them.
  • Well, it doesn't look like the judge bought McDowell's story. He was sentenced to 60 years to life in prison, the same sentence that Stipe received.
  • It does say something about McDowell that he voluntarily changed his plea from "not guilty" to "guilty." From reviewing many reports on this case, it appears that, once he realized what had really happened, he wanted to make this as easy on the woman as possible. His remorse at what he accidentally did must be mixed with the horror he knows he unwittingly created. Perhaps the real case yet to come is McDowell's teaming up with the woman in a civil case against Stipe, the real criminal.
Weiye Loh

A Data State of Mind | Think Quarterly - 0 views

  • Hans Rosling has maintained a fact-based worldview – an understanding of how global health trends act as a signifier for economic development, based on hard data. Today, he argues, countries and corporations alike need to adopt that same data-driven understanding of the world if they are to make sense of the changes we are experiencing in this new century, and the opportunities and challenges that lie ahead.
  • the world has changed so much, what people need isn’t more data but a new mindset. They need a new storage system that can handle this new information. But what I have found over the years is that the CEOs of the biggest companies are actually those that already have the most fact-based worldview, more so than in media, academia or politics. Those CEOs that haven’t grasped the reality of the world have already failed in business. If they don’t understand what is happening in terms of potential new markets in the Middle East, Africa and so on, they are out. So the bigger and more international the organisation, the more fact-based the CEO’s worldview is likely to be. The problem is that they are slow in getting their organisation to follow.
  • Companies as a whole are stuck in the rut of an old mindset. They think in outworn categories and follow habits and assumptions that are not, or only rarely, based on fact.
  • ...10 more annotations...
  • For instance, in terms of education levels, we no longer live in a world that is divided into the West and the rest; our world today stretches from Canada to Yemen with all the other countries somewhere in between. There’s a broad spectrum of levels
  • even when people act within a fact-based worldview, they are used to talking with sterile figures. They are used to standing on a podium, clicking through slide shows in PowerPoint rather than interacting with their presentation. The problem is that companies have a strict separation between their IT department, where datasets are produced, and the design department, so hardly any presenters are proficient in both. Yet this is what we need. Getting people used to talking with animated data is, to my mind, a literacy project. (A minimal sketch of such an animated view appears after this list.)
  • What's important today is not just financial data but child mortality rates, the number of children per woman, education levels, etc. In the world today, it's not money that drags people into modern times, it's people that drag money into modern times.
  • I can demonstrate human resources successes in Asia through health being improved, family size decreasing and then education levels increasing. That makes sense: when more children survive, parents accept that there is less need for multiple births, and they can afford to put their children through school. So Pfizer have moved their research and development of drugs to Asia, where there are brilliant young people who are amazing at developing drugs. It’s realising this kind of change that’s important.
  • The problem isn’t that specialised companies lack the data they need, it’s that they don’t go and look for it, they don’t understand how to handle it.
  • What is so strong with animation is that it provides that mindset shift in market segmentation. We can see where there are highly developed countries with a good economy and a healthy and well-educated staff.
  • At the moment, I’m quarrelling with Sweden’s Minister of Foreign Affairs. He says that the West has to make sure its lead over the rest of the world doesn’t erode. This is a completely wrong attitude. Western Europe and other high-income countries have to integrate themselves into the world in the same way big companies are doing. They have to look at the advantages, resources and markets that exist in different places around the world.
  • And some organisations aren’t willing to share their data, even though it would be a win-win situation for everybody and we would do much better in tackling the problems we need to tackle. Last April, the World Bank caved in and finally embraced an open data policy, but the OECD uses tax money to compile data and then sells it in a monopolistic way. The Chinese Statistical Bureau provides data more easily than the OECD. The richest countries in the world don’t have the vision to change.
  • ‘database hugging disorder’
  • we have to instil a clear division of labour between those who provide the datasets – like the World Bank, the World Health Organisation or companies themselves – those who provide new technologies to access or process them, like Google or Microsoft, and those who ‘play’ with them and give data meaning. It’s like a great concert: you need a Mozart or a Chopin to write wonderful music, then you need the instruments and finally the musicians.
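  • As a rough illustration of "talking with animated data" rather than presenting sterile figures, here is a minimal Python sketch of a Gapminder-style bubble chart animated over time. It assumes numpy and matplotlib are available; every number is fabricated, and this is in no way Rosling's own Trendalyzer software.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(0)
years = list(range(1960, 2011, 10))
n_countries = 30

# Fabricated data: the income and life-expectancy distributions drift upward
# over the decades (countries are not tracked individually here).
income = {y: rng.lognormal(mean=8 + 0.02 * (y - 1960), sigma=1.0, size=n_countries)
          for y in years}
life_exp = {y: np.clip(45 + 0.3 * (y - 1960) + rng.normal(0, 5, n_countries), 30, 85)
            for y in years}
population = rng.lognormal(mean=16, sigma=1.5, size=n_countries)

fig, ax = plt.subplots()
scat = ax.scatter([], [], s=[], alpha=0.5)
ax.set_xscale("log")
ax.set_xlim(200, 200_000)
ax.set_ylim(25, 90)
ax.set_xlabel("Income per person (log scale)")
ax.set_ylabel("Life expectancy (years)")
title = ax.set_title("")

def update(year):
    # One frame per year: move the bubbles and update the label.
    scat.set_offsets(np.column_stack([income[year], life_exp[year]]))
    scat.set_sizes(population / 1e5)   # crude scaling of population to point size
    title.set_text(f"Year {year}")
    return scat, title

anim = FuncAnimation(fig, update, frames=years, interval=800)
plt.show()
```

The design point is Rosling's: the same numbers that sit inert on a slide become something a presenter can point at and argue with once time is mapped to motion.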
Weiye Loh

Science, Strong Inference -- Proper Scientific Method - 0 views

  • Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist's field and methods of study are as good as every other scientist's and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.
  • Why should there be such rapid advances in some fields and not in others? I think the usual explanations that we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of research contracts - are important but inadequate. I have begun to believe that the primary factor in scientific advance is an intellectual one. These rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name of "strong inference." I believe it is important to examine this method, its use and history and rationale, and to see whether other groups and individuals might learn to adopt it profitably in their own scientific and intellectual work. In its separate elements, strong inference is just the simple and old-fashioned method of inductive inference that goes back to Francis Bacon. The steps are familiar to every college student and are practiced, off and on, by every scientist. The difference comes in their systematic application. Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly: (1) devising alternative hypotheses; (2) devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses; (3) carrying out the experiment so as to get a clean result; (4) recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain, and so on. (A toy sketch of this loop appears after this list.)
  • On any new problem, of course, inductive inference is not as simple and certain as deduction, because it involves reaching out into the unknown. Steps 1 and 2 require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome, and exclusion will be related in a rigorous syllogism; and the question of how to generate such inventions is one which has been extensively discussed elsewhere (2, 3). What the formal schema reminds us to do is to try to make these inventions, to take the next step, to proceed to the next fork, without dawdling or getting tied up in irrelevancies.
  • ...28 more annotations...
  • It is clear why this makes for rapid and powerful progress. For exploring the unknown, there is no faster method; this is the minimum sequence of steps. Any conclusion that is not an exclusion is insecure and must be rechecked. Any delay in recycling to the next set of hypotheses is only a delay. Strong inference, and the logical tree it generates, are to inductive reasoning what the syllogism is to deductive reasoning in that it offers a regular method for reaching firm inductive conclusions one after the other as rapidly as possible.
  • "But what is so novel about this?" someone will say. This is the method of science and always has been; why give it a special name? The reason is that many of us have almost forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method-oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations. We fail to teach our students how to sharpen up their inductive inferences. And we do not realize the added power that the regular and explicit use of alternative hypotheses and sharp exclusion could give us at every step of our research.
  • A distinguished cell biologist rose and said, "No two cells give the same properties. Biology is the science of heterogeneous systems." And he added privately: "You know there are scientists, and there are people in science who are just working with these over-simplified model systems - DNA chains and in vitro systems - who are not doing science at all. We need their auxiliary work: they build apparatus, they make minor studies, but they are not scientists." To which Cy Levinthal replied: "Well, there are two kinds of biologists, those who are looking to see if there is one thing that can be understood and those who keep saying it is very complicated and that nothing can be understood. . . . You must study the simplest system you think has the properties you are interested in."
  • At the 1958 Conference on Biophysics, at Boulder, there was a dramatic confrontation between the two points of view. Leo Szilard said: "The problems of how enzymes are induced, of how proteins are synthesized, of how antibodies are formed, are closer to solution than is generally believed. If you do stupid experiments, and finish one a year, it can take 50 years. But if you stop doing experiments for a little while and think how proteins can possibly be synthesized, there are only about 5 different ways, not 50! And it will take only a few experiments to distinguish these." One of the young men added: "It is essentially the old question: How small and elegant an experiment can you perform?" These comments upset a number of those present. An electron microscopist said, "Gentlemen, this is off the track. This is philosophy of science." Szilard retorted: "I was not quarreling with third-rate scientists: I was quarreling with first-rate scientists."
  • Any criticism or challenge to consider changing our methods strikes of course at all our ego-defenses. But in this case the analytical method offers the possibility of such great increases in effectiveness that it is unfortunate that it cannot be regarded more often as a challenge to learning rather than as challenge to combat. Many of the recent triumphs in molecular biology have in fact been achieved on just such "oversimplified model systems," very much along the analytical lines laid down in the 1958 discussion. They have not fallen to the kind of men who justify themselves by saying "No two cells are alike," regardless of how true that may ultimately be. The triumphs are in fact triumphs of a new way of thinking.
  • the emphasis on strong inference is also partly due to the nature of the fields themselves. Biology, with its vast informational detail and complexity, is a "high-information" field, where years and decades can easily be wasted on the usual type of "low-information" observations or experiments if one does not think carefully in advance about what the most important and conclusive experiments would be. And in high-energy physics, both the "information flux" of particles from the new accelerators and the million-dollar costs of operation have forced a similar analytical approach. It pays to have a top-notch group debate every experiment ahead of time; and the habit spreads throughout the field.
  • Historically, I think, there have been two main contributions to the development of a satisfactory strong-inference method. The first is that of Francis Bacon (13). He wanted a "surer method" of "finding out nature" than either the logic-chopping or all-inclusive theories of the time or the laudable but crude attempts to make inductions "by simple enumeration." He did not merely urge experiments, as some suppose; he showed the fruitfulness of interconnecting theory and experiment so that the one checked the other. Of the many inductive procedures he suggested, the most important, I think, was the conditional inductive tree, which proceeded from alternative hypotheses (possible "causes," as he calls them), through crucial experiments ("Instances of the Fingerpost"), to exclusion of some alternatives and adoption of what is left ("establishing axioms"). His Instances of the Fingerpost are explicitly at the forks in the logical tree, the term being borrowed "from the fingerposts which are set up where roads part, to indicate the several directions."
  • Here was a method that could separate off the empty theories! Bacon said the inductive method could be learned by anybody, just like learning to "draw a straighter line or more perfect circle . . . with the help of a ruler or a pair of compasses." "My way of discovering sciences goes far to level men's wit and leaves but little to individual excellence, because it performs everything by the surest rules and demonstrations." Even occasional mistakes would not be fatal. "Truth will sooner come out from error than from confusion."
  • Nevertheless there is a difficulty with this method. As Bacon emphasizes, it is necessary to make "exclusions." He says, "The induction which is to be available for the discovery and demonstration of sciences and arts, must analyze nature by proper rejections and exclusions, and then, after a sufficient number of negatives come to a conclusion on the affirmative instances." "[To man] it is granted only to proceed at first by negatives, and at last to end in affirmatives after exclusion has been exhausted." Or, as the philosopher Karl Popper says today, there is no such thing as proof in science - because some later alternative explanation may be as good or better - so that science advances only by disproofs. There is no point in making hypotheses that are not falsifiable because such hypotheses do not say anything; "it must be possible for an empirical scientific system to be refuted by experience" (14).
  • The difficulty is that disproof is a hard doctrine. If you have a hypothesis and I have another hypothesis, evidently one of them must be eliminated. The scientist seems to have no choice but to be either soft-headed or disputatious. Perhaps this is why so many tend to resist the strong analytical approach and why some great scientists are so disputatious.
  • Fortunately, it seems to me, this difficulty can be removed by the use of a second great intellectual invention, the "method of multiple hypotheses," which is what was needed to round out the Baconian scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the century, who is best known for his contribution to the Chamberlin-Moulton hypothesis of the origin of the solar system.
  • Chamberlin says our trouble is that when we make a single hypothesis, we become attached to it. "The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence, and as the explanation grows into a definite theory his parental affections cluster about his offspring and it grows more and more dear to him. . . . There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory..." "To avoid this grave danger, the method of multiple working hypotheses is urged. It differs from the simple working hypothesis in that it distributes the effort and divides the affections. . . . Each hypothesis suggests its own criteria, its own method of proof, its own method of developing the truth, and if a group of hypotheses encompass the subject on all sides, the total outcome of means and of methods is full and rich."
  • The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas. It becomes much easier then for each of us to aim every day at conclusive disproofs - at strong inference - without either reluctance or combativeness. In fact, when there are multiple hypotheses, which are not anyone's "personal property," and when there are crucial experiments to test them, the daily life in the laboratory takes on an interest and excitement it never had, and the students can hardly wait to get to work to see how the detective story will come out. It seems to me that this is the reason for the development of those distinctive habits of mind and the "complex thought" that Chamberlin described, the reason for the sharpness, the excitement, the zeal, the teamwork - yes, even international teamwork - in molecular biology and high-energy physics today. What else could be so effective?
  • Unfortunately, I think, there are other areas of science today that are sick by comparison, because they have forgotten the necessity for alternative hypotheses and disproof. Each man has only one branch - or none - on the logical tree, and it twists at random without ever coming to the need for a crucial decision at any point. We can see from the external symptoms that there is something scientifically wrong. The Frozen Method, The Eternal Surveyor, The Never Finished, The Great Man With a Single Hypothesis, The Little Club of Dependents, The Vendetta, The All-Encompassing Theory Which Can Never Be Falsified.
  • a "theory" of this sort is not a theory at all, because it does not exclude anything. It predicts everything, and therefore does not predict anything. It becomes simply a verbal formula which the graduate student repeats and believes because the professor has said it so often. This is not science, but faith; not theory, but theology. Whether it is hand-waving or number-waving, or equation-waving, a theory is not a theory unless it can be disproved. That is, unless it can be falsified by some possible experimental outcome.
  • the work methods of a number of scientists have been testimony to the power of strong inference. Is success not due in many cases to systematic use of Bacon's "surest rules and demonstrations" as much as to rare and unattainable intellectual power? Faraday's famous diary (16), or Fermi's notebooks (3, 17), show how these men believed in the effectiveness of daily steps in applying formal inductive methods to one problem after another.
  • Surveys, taxonomy, design of equipment, systematic measurements and tables, theoretical computations - all have their proper and honored place, provided they are parts of a chain of precise induction of how nature works. Unfortunately, all too often they become ends in themselves, mere time-serving from the point of view of real scientific advance, a hypertrophied methodology that justifies itself as a lore of respectability.
  • We speak piously of taking measurements and making small studies that will "add another brick to the temple of science." Most such bricks just lie around the brickyard (20). Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.
  • Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important. Strong inference redirects a man to problem-orientation, but it requires him to be willing repeatedly to put aside his last methods and teach himself new ones.
  • anyone who asks the question about scientific effectiveness will also conclude that much of the mathematizing in physics and chemistry today is irrelevant if not misleading. The great value of mathematical formulation is that when an experiment agrees with a calculation to five decimal places, a great many alternative hypotheses are pretty well excluded (though the Bohr theory and the Schrödinger theory both predict exactly the same Rydberg constant!). But when the fit is only to two decimal places, or one, it may be a trap for the unwary; it may be no better than any rule-of-thumb extrapolation, and some other kind of qualitative exclusion might be more rigorous for testing the assumptions and more important to scientific understanding than the quantitative fit.
  • Today we preach that science is not science unless it is quantitative. We substitute correlations for causal studies, and physical equations for organic reasoning. Measurements and equations are supposed to sharpen thinking, but, in my observation, they more often tend to make the thinking noncausal and fuzzy. They tend to become the object of scientific manipulation instead of auxiliary tests of crucial inferences.
  • Many - perhaps most - of the great issues of science are qualitative, not quantitative, even in physics and chemistry. Equations and measurements are useful when and only when they are related to proof; but proof or disproof comes first and is in fact strongest when it is absolutely convincing without any quantitative measurement.
  • you can catch phenomena in a logical box or in a mathematical box. The logical box is coarse but strong. The mathematical box is fine-grained but flimsy. The mathematical box is a beautiful way of wrapping up a problem, but it will not hold the phenomena unless they have been caught in a logical box to begin with.
  • Of course it is easy - and all too common - for one scientist to call the others unscientific. My point is not that my particular conclusions here are necessarily correct, but that we have long needed some absolute standard of possible scientific effectiveness by which to measure how well we are succeeding in various areas - a standard that many could agree on and one that would be undistorted by the scientific pressures and fashions of the times and the vested interests and busywork that they develop. It is not public evaluation I am interested in so much as a private measure by which to compare one's own scientific performance with what it might be. I believe that strong inference provides this kind of standard of what the maximum possible scientific effectiveness could be - as well as a recipe for reaching it.
  • The strong-inference point of view is so resolutely critical of methods of work and values in science that any attempt to compare specific cases is likely to sound both smug and destructive. Mainly one should try to teach it by example and by exhorting to self-analysis and self-improvement only in general terms.
  • one severe but useful private test - a touchstone of strong inference - that removes the necessity for third-person criticism, because it is a test that anyone can learn to carry with him for use as needed. It is our old friend the Baconian "exclusion," but I call it "The Question." Obviously it should be applied as much to one's own thinking as to others'. It consists of asking in your own mind, on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"; or, on hearing a scientific experiment described, "But sir, what hypothesis does your experiment disprove?"
  • It is not true that all science is equal; or that we cannot justly compare the effectiveness of scientists by any method other than a mutual-recommendation system. The man to watch, the man to put your money on, is not the man who wants to make "a survey" or a "more detailed study" but the man with the notebook, the man with the alternative hypotheses and the crucial experiments, the man who knows how to answer your Question of disproof and is already working on it.
  •  
    There is so much bad science and bad statistics in media reports, in publications, and in everyday conversation that I think it is important to understand facts, proofs, and the associated pitfalls.
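  • A toy Python sketch of the exclusion loop Platt describes: hold several working hypotheses at once, pick a "crucial" experiment whose predicted outcomes differ among the survivors, exclude whatever the result rules out, and recycle. The hypotheses, experiment names, and outcomes below are invented for illustration and reproduce no real biological result.

```python
# Each hypothesis maps an experiment name to the outcome it predicts.
HYPOTHESES = {
    "H1: enzyme induced":      {"add_inhibitor": "no_product", "heat_shock": "product"},
    "H2: enzyme constitutive": {"add_inhibitor": "no_product", "heat_shock": "no_product"},
    "H3: non-enzymatic":       {"add_inhibitor": "product",    "heat_shock": "product"},
}

# Stand-in for actually running the experiment; here "nature" behaves like H1.
TRUE_MECHANISM = HYPOTHESES["H1: enzyme induced"]

def crucial_experiment(alive, experiments):
    """Return an experiment whose predictions differ among surviving hypotheses."""
    for exp in experiments:
        if len({HYPOTHESES[h][exp] for h in alive}) > 1:
            return exp        # this experiment can exclude at least one alternative
    return None

alive = set(HYPOTHESES)
experiments = ["add_inhibitor", "heat_shock"]

while len(alive) > 1:
    exp = crucial_experiment(alive, experiments)
    if exp is None:           # nothing left that separates the survivors
        break
    outcome = TRUE_MECHANISM[exp]                                  # carry it out
    alive = {h for h in alive if HYPOTHESES[h][exp] == outcome}    # exclusion step
    print(f"{exp}: observed {outcome}; still standing: {sorted(alive)}")

print("Provisional conclusion:", sorted(alive))
```

The while-loop is the method in miniature: an experiment that cannot exclude a surviving alternative is never worth running, and the "conclusion" is only whatever has not yet been excluded.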
Weiye Loh

BioCentre - 0 views

  • Humanity’s End. The main premise of the book is that proposals promising to make us smarter than ever before or to add thousands of years to our lives seem rather far-fetched and the domain of mere fantasy. However, it is these very proposals which form the basis of many of the ideas and thoughts presented by advocates of radical enhancement, and which are beginning to move from the sidelines to the centre of mainstream discussion. A variety of technologies and therapies are being presented to us as options to expand our capabilities and capacities in order for us to become something other than human.
  • Agar takes issue with this and argues against radical human enhancement. He structures his analysis and discussion by focusing on four key figures whose proposals form the core of the case for radical enhancement. First to be examined by Agar is Ray Kurzweil, who argues that Man and Machine will become one as technology allows us to transcend our biology. Second is Aubrey de Grey, a passionate advocate and pioneer of anti-ageing therapies that would allow us to achieve "longevity escape velocity". Next is Nick Bostrom, a leading transhumanist who defends the morality and rationality of enhancement, and finally James Hughes, a keen advocate of a harmonious democracy of the enhanced and un-enhanced.
  • He avoids falling into any of the pitfalls of basing his argument solely upon the "playing God" question, but instead seeks to posit a well-founded argument in favour of the precautionary principle.
  • ...10 more annotations...
  • Agar directly tackles Hughes' idea of a "democratic transhumanism." Here, as post-humans and humans live shoulder to shoulder in wonderful harmony, all persons have access to the technologies they want in order to promote their own flourishing. Undergirding all of this is the belief that no human should feel pressurised to become enhanced. Agar finds no comfort in this and instead foresees a situation where it would be very difficult for humans to 'choose' to remain human. The pressure to radically enhance would be considerable, given that the radically enhanced would no doubt occupy the positions of power in society and would regard making full use of enhancement techniques as a moral imperative for the good of society. For those able to resist, a new underclass would no doubt emerge, dividing the enhanced from the un-enhanced. This is precisely the kind of society which Hughes appears overly optimistic will not emerge, but which is more akin to Lee Silver's prediction of a future split between the "GenRich" and the "naturals". This being the case, the author proposes that we have two options: radical enhancement is either enforced across the board or banned outright. It is the latter option which Agar favours but, crucially, does not elaborate on further, so it is unclear how he would attempt such a ban given the complexity of the issue. This is disappointing, as any initial reflections the author felt able to offer would have added to the discussion and further strengthened his line of argument.
  • A Transhuman Manifesto: The final focus for Agar is James Hughes, who published his transhumanist manifesto Citizen Cyborg in 2004. Given the direct connection with politics and public policy, this was for me a particularly interesting read. The basic premise of Hughes' argument is that once humans and post-humans recognise each other as citizens, this will mark the point at which they will be able to get along with each other.
  • Agar takes to task the argument Bostrom made with Toby Ord concerning claims against enhancement. Bostrom and Ord argue that opposition boils down to a preference for the status quo: current human intellects and life spans are preferred and deemed best because they are what we have now and what we are familiar with (p. 134). Agar argues that, in his view, Bostrom falls into a focalism – focusing on and magnifying the positives whilst ignoring the negative implications. Moreover, Agar goes on to develop and reiterate his earlier point that the sort of radical enhancements Bostrom et al. enthusiastically support and promote take us beyond what is human, so that the resulting beings are no longer human. It therefore cannot be said to be human enhancement, given that the traits or capacities such enhancement affords us would be in many respects superior to ours, but they would not be ours.
  • With his law of accelerating returns and talk of the Singularity, Ray Kurzweil proposes that we are speeding towards a time when our outdated systems of neurons and synapses will be traded for far more efficient electronic circuits, allowing us to become artificially super-intelligent and to transfer our minds from brains into machines.
  • Having laid out the main ideas and thinking behind Kurzweil's proposals, Agar makes the perceptive comment that despite the apparent appeal of greater processing power, such a being would no longer be human. Introducing chips into the human body and linking the human nervous system to computers, as per Kurzweil's proposals, will prove interesting, but it goes beyond merely creating a copy of us so that future replication and uploading can take place; rather, it will constitute something more akin to an upgrade. The electrochemical signals that the brain uses to achieve thought travel at 100 metres per second. This is impressive, but contrast it with the electrical signals in a computer, which travel at 300 million metres per second, and the distinction is clear. If the predictions are true, how will such radically enhanced and empowered beings live alongside the unenhanced, and what will their quality of life really be? In response, Agar favours what he calls "rational biological conservatism" (pg. 57), where we set limits on how intelligent we can become, in light of the fact that it will never be rational for human beings to completely upload their minds onto computers.
  • Agar then proceeds to argue that in the pursuit of Kurzweil's enhanced capacities and capabilities we might accidentally undermine capacities of equal value. This line of argument would find much sympathy from those who consider human organisms in "ecological" terms, representing a profound interconnectedness which, when interfered with, presents a series of unknown and unexpected consequences. In other words, our species-specific form of intelligence may well be linked to our species-specific form of desire. Thus, if we start building upon and enhancing our capacity to protect and promote deeply held convictions and beliefs, then, due to this interconnectedness, it may well affect and remove our desire to perform such activities (page 70). Agar's subsequent discussion and reference to the work of Jerry Fodor, philosopher and cognitive scientist, is particularly helpful in terms of the functioning of the mind by modules and the implications of human-friendly AI versus human-unfriendly AI.
  • In terms of the author's discussion of Aubrey de Grey, what is refreshing to read from the outset is the author's clear grasp of de Grey's ideas and motivation. Some make the mistake of thinking he is the man who wants to live forever, when in actual fact this is not the case. De Grey wants to reverse the ageing process - through Strategies for Engineered Negligible Senescence (SENS) - so that people live longer and healthier lives. Establishing this clear distinction affords the author the opportunity to offer more grounded critiques of de Grey's ideas than some of his other critics manage. The author makes plain that de Grey's immediate goal is to achieve longevity escape velocity (LEV), where anti-ageing therapies add years to life expectancy faster than age consumes them.
  • In weighing up the benefits of living significantly longer lives, Agar posits a compelling argument that I had not fully seen before: in terms of risk, those radically enhanced to live longer may actually be the most risk-averse and fearful people alive. Taking the example of driving a car, a forty-year-old senescing human being who gets into their car to drive to work and is involved in a fatal accident "stands to lose, at most, a few healthy, youthful years and a slightly larger number of years with reduced quality" (p.116). In stark contrast, should a negligibly senescent being who drives a car be involved in an accident resulting in their death, they stand to lose on average one thousand healthy, youthful years (p.116).
  • De Grey's response to this seems a little flippant: with the end of ageing comes an increased sense of risk-aversion, so the desire for risky activity such as driving will no longer be prevalent. Moreover, because we are living longer we will not be in such a hurry to get to places! Virtual reality comes into its own at this point as a means by which the negligibly senescent 'adrenaline junkie' can engage in such activities but without the associated risks. But surely the risk is part of the reason why they would want to engage in snowboarding, bungee jumping and the like in the first place. De Grey's strategy seemingly fails to appreciate the extent to which human beings want "direct" contact with the "real" world.
  • Continuing this idea further, Agar's subsequent discussion of the role of fire-fighters is an interesting one. A negligibly senescent fire-fighter may stand to lose more when trapped in a burning inferno, but being negligibly senescent also means being a better fire-fighter by virtue of increased vitality. Having recently heard de Grey speak and had the privilege of discussing his ideas further with him, I found Agar's treatment of de Grey a particular highlight of the book; it made for an engaging discussion. Whilst expressing concern and doubt in relation to de Grey's ideas, Agar is nevertheless quick and gracious enough to acknowledge that if such therapies could be achieved then de Grey is probably the best person to comment on and achieve them, given the depth of knowledge and understanding that he has built up in this area.
Weiye Loh

journalism.sg » Tin Pei Ling's baptism of fire: Should bloggers have lit the ... - 0 views

  • That is nothing, though, compared with the attack by Temasek Review, the anonymously-run website with lofty ambitions “to foster an informed, educated, thinking and proactive citizenry.” The website delved into her personal life – even questioning her motives for marrying her husband – to present her as a materialistic, social climbing monster. Such attacks have also been flying around social media.
  • Never mind that Tin (unlike most high-flying PAP candidates) has several years’ grassroots experience; sections of the online community have dismissed the possibility that someone so young – she is in her 20s – could serve in the highest forum in the land. (I recall feeling similarly skeptical when Eunice Olsen was put up as an NMP. She proved me wrong and I have learnt not to prejudge.)
  • Siew Kum Hong, hardly a PAP apologist, has had the intellectual honesty and moral courage to come out swiftly in his blog against this distasteful turn of events.
  • ...5 more annotations...
  • some others have argued that election candidates should expect such a baptism of fire. One blogger, while agreeing that the incident was “unfortunate”, said with Nietzsche-like logic, “If Ms. Tin is made of sterner stuff, she’ll live through this. If our future political leaders don’t have the tenacity to look past the Glee-like slushies and take the hit for the citizens of Singapore, then I don’t think they deserve my vote in the first place.”
  • how Tin and her party leaders respond to this episode will say a lot about their preparedness for the new terrain.
  • This, however, doesn’t really excuse those who have chosen to corrupt that terrain.
  • Some online posters have argued that the PAP is just reaping what it has sown: it has made life ugly for those who dare to enter Opposition politics, deterring many able individuals from joining other parties; now it's payback time, time for the PAP to get a taste of its own medicine. Certainly, the online world should help to level what is undoubtedly a tilted offline playing field. This imperative is what motivates some of Singapore's best online journalism.
  • Websites that say they want to help raise the level of Singapore’s political discourse shouldn’t go lower than the politicians themselves.
Weiye Loh

Crashing Into Stereotypes, Bryan Caplan | EconLog | Library of Economics and Liberty - 0 views

  • The trite official theme of the movie - the evils of narrow-minded prejudice - could have sunk the whole project. But as in a lot of compelling fiction, the official theme of Crash contradicts the details of the story. If you are paying attention, it soon becomes obvious that virtually none of the characters suffer from "narrow-minded prejudice." No one makes up their grievances out of thin air. Instead, the characters mostly engage in statistical discrimination. They generalize from their experience to form stereotypes about the members of different ethnic groups (including their own!), and act on those stereotypes when it is costly to make case-by-case judgments (as it usually is). In the story, moreover, stereotypes are almost invariably depicted as statistically accurate. Young black men are more likely to be car thieves; white cops are more likely to abuse black suspects; and Persians have bad tempers. Of course, the story also makes the point that some members of these groups violate the stereotype. But that "insight" is basic to all statistical reasoning. (A bare-bones numerical sketch of statistical discrimination appears after this list.)
  • the rule in Crash is that busy people see others as average members of their groups until proven otherwise.
  • It is particularly interesting that Crash illustrates one of the deep truths of models of statistical discrimination: The real social conflict is not between groups, but within groups. People who are below-average for their group make life worse for people who are above-average for their group. Women who get job training and then quit to have children hurt the careers of single-minded career women, because they reduce the profitability of the average woman. This lesson is beautifully expressed in the scene where the successful black t.v. producer (Terrence Howard) chews out the black teen-ager (Chris "Ludacris" Bridges) who unsuccessfully tried to car-jack him: You embarrass me. You embarrass yourself.
  •  
    If you really want to improve your group's image, telling other groups to stop stereotyping won't work. The stereotype is based on the underlying distribution of fact. It is far more realistic to turn your complaining inward, and pressure the bad apples in your group to stop pulling down the average.
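  • A bare-bones Python sketch of the statistical-discrimination logic the post alludes to: an observer sees a noisy signal of individual quality and shrinks it toward the group average, so a falling group average drags down the assessment of an above-average member. The normal-learning (shrinkage) form and all the numbers are assumptions chosen for illustration, not anything taken from the post.

```python
def expected_quality(signal, group_mean, signal_noise_var, group_var):
    """Posterior mean of quality given a noisy individual signal and a group prior."""
    weight_on_signal = group_var / (group_var + signal_noise_var)
    return weight_on_signal * signal + (1 - weight_on_signal) * group_mean

individual_signal = 80      # a genuinely above-average person
signal_noise_var = 100      # case-by-case judgment is costly, so the signal is coarse
group_var = 50

for group_mean in (60, 50, 40):   # the group average falls as more members underperform
    estimate = expected_quality(individual_signal, group_mean,
                                signal_noise_var, group_var)
    print(f"group mean {group_mean}: estimated quality {estimate:.1f}")
```

With these made-up numbers the same signal of 80 is read as roughly 67, 60, and 53 as the group mean falls from 60 to 40, which is the within-group conflict of the post in arithmetic form.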