Home/ New Media Ethics 2009 course/ Group items tagged Imagination

Weiye Loh

Reclaiming the Imagination - Opinionator Blog - NYTimes.com - 1 views

  • Why did humans evolve the capacity to imagine alternatives to reality? Was story-telling in prehistoric times like the peacock’s tail, of no direct practical use but a good way of attracting a mate? It kept Scheherazade alive through those one thousand and one nights — in the story.
  • imagining turns out to be much more reality-directed than the stereotype implies.
  • A reality-directed faculty of imagination has clear survival value. By enabling you to imagine all sorts of scenarios, it alerts you to dangers and opportunities.
  • Constraining imagination by knowledge does not make it redundant. We rarely know an explicit formula that tells us what to do in a complex situation. We have to work out what to do by thinking through the possibilities in ways that are simultaneously imaginative and realistic, and not less imaginative when more realistic. Knowledge, far from limiting imagination, enables it to serve its central function.
  • we can borrow a distinction from the philosophy of science, between contexts of discovery and contexts of justification. In the context of discovery, we get ideas, no matter how — dreams or drugs will do. Then, in the context of justification, we assemble objective evidence to determine whether the ideas are correct. On this picture, standards of rationality apply only to the context of justification, not to the context of discovery. Those who downplay the cognitive role of the imagination restrict it to the context of discovery, excluding it from the context of justification. But they are wrong. Imagination plays a vital role in justifying ideas as well as generating them in the first place.
  •  
    Reclaiming the Imagination By TIMOTHY WILLIAMSON
Weiye Loh

Rationally Speaking: Are Intuitions Good Evidence? - 0 views

  • Is it legitimate to cite one’s intuitions as evidence in a philosophical argument?
  • appeals to intuitions are ubiquitous in philosophy. What are intuitions? Well, that’s part of the controversy, but most philosophers view them as intellectual “seemings.” George Bealer, perhaps the most prominent defender of intuitions-as-evidence, writes, “For you to have an intuition that A is just for it to seem to you that A… Of course, this kind of seeming is intellectual, not sensory or introspective (or imaginative).”2 Other philosophers have characterized them as “noninferential belief due neither to perception nor introspection”3 or alternatively as “applications of our ordinary capacities for judgment.”4
  • Philosophers may not agree on what, exactly, intuition is, but that doesn’t stop them from using it. “Intuitions often play the role that observation does in science – they are data that must be explained, confirmers or falsifiers of theories,” Brian Talbot says.5 Typically, the way this works is that a philosopher challenges a theory by applying it to a real or hypothetical case and showing that it yields a result which offends his intuitions (and, he presumes, his readers’ as well).
  • For example, John Searle famously appealed to intuition to challenge the notion that a computer could ever understand language: “Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output)… If the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.” Should we take Searle’s intuition that such a system would not constitute “understanding” as good evidence that it would not? Many critics of the Chinese Room argument argue that there is no reason to expect our intuitions about intelligence and understanding to be reliable.
  • Ethics leans especially heavily on appeals to intuition, with a whole school of ethicists (“intuitionists”) maintaining that a person can see the truth of general ethical principles not through reason, but because he “just sees without argument that they are and must be true.”6
  • Intuitions are also called upon to rebut ethical theories such as utilitarianism: maximizing overall utility would require you to kill one innocent person if, in so doing, you could harvest her organs and save five people in need of transplants. Such a conclusion is taken as a reductio ad absurdum, requiring utilitarianism to be either abandoned or radically revised – not because the conclusion is logically wrong, but because it strikes nearly everyone as intuitively wrong.
  • British philosopher G.E. Moore used intuition to argue that the existence of beauty is good irrespective of whether anyone ever gets to see and enjoy that beauty. Imagine two planets, he said, one full of stunning natural wonders – trees, sunsets, rivers, and so on – and the other full of filth. Now suppose that nobody will ever have the opportunity to glimpse either of those two worlds. Moore concluded, “Well, even so, supposing them quite apart from any possible contemplation by human beings; still, is it irrational to hold that it is better that the beautiful world should exist than the one which is ugly? Would it not be well, in any case, to do what we could to produce it rather than the other? Certainly I cannot help thinking that it would.”7
  • Although similar appeals to intuition can be found throughout all the philosophical subfields, their validity as evidence has come under increasing scrutiny over the last two decades, from philosophers such as Hilary Kornblith, Robert Cummins, Stephen Stich, Jonathan Weinberg, and Jaakko Hintikka (links go to representative papers from each philosopher on this issue). The severity of their criticisms varies from Weinberg’s warning that “We simply do not know enough about how intuitions work,” to Cummins’ wholesale rejection of philosophical intuition as “epistemologically useless.”
  • One central concern for the critics is that a single question can inspire totally different, and mutually contradictory, intuitions in different people.
  • For example, I disagree with Moore’s intuition that it would be better for a beautiful planet to exist than an ugly one even if there were no one around to see it. I can’t understand what the words “better” and “worse,” let alone “beautiful” and “ugly,” could possibly mean outside the domain of the experiences of conscious beings
  • If we want to take philosophers’ intuitions as reason to believe a proposition, then the existence of opposing intuitions leaves us in the uncomfortable position of having reason to believe both a proposition and its opposite.
  • “I suspect there is overall less agreement than standard philosophical practice presupposes, because having the ‘right’ intuitions is the entry ticket to various subareas of philosophy,” Weinberg says.
  • But the problem that intuitions are often not universally shared is overshadowed by another problem: even if an intuition is universally shared, that doesn’t mean it’s accurate. For in fact there are many universal intuitions that are demonstrably false.
  • People who have not been taught otherwise typically assume that an object dropped out of a moving plane will fall straight down to earth, at exactly the same latitude and longitude from which it was dropped. What will actually happen is that, because the object begins its fall with the same forward momentum it had while it was on the plane, it will continue to travel forward, tracing out a curve as it falls and not a straight line. “Considering the inadequacies of ordinary physical intuitions, it is natural to wonder whether ordinary moral intuitions might be similarly inadequate,” Princeton’s Gilbert Harman has argued,9 and the same could be said for our intuitions about consciousness, metaphysics, and so on.
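The falling-object claim in the annotation above can be checked with two lines of kinematics. The numbers below (release altitude, forward speed) are illustrative assumptions, and air resistance is ignored:

```python
import math

# An object released from a plane keeps the plane's forward speed, so it
# lands far "ahead" of the release point, tracing a parabola rather than
# falling straight down. Illustrative figures, no air resistance.
g = 9.81          # m/s^2, gravitational acceleration
height = 2000.0   # m, assumed release altitude
speed = 150.0     # m/s, assumed forward speed of the plane

fall_time = math.sqrt(2 * height / g)   # seconds to reach the ground
forward_drift = speed * fall_time       # horizontal distance travelled

# fall_time is roughly 20 s, forward_drift roughly 3 km downrange of
# where the "straight down" intuition predicts the object should land.
```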
  • We can’t usually “check” the truth of our philosophical intuitions externally, with an experiment or a proof, the way we can in physics or math. But it’s not clear why we should expect intuitions to be true. If we have an innate tendency towards certain intuitive beliefs, it’s likely because they were useful to our ancestors.
  • But there’s no reason to expect that the intuitions which were true in the world of our ancestors would also be true in other, unfamiliar contexts
  • And for some useful intuitions, such as moral ones, “truth” may have been beside the point. It’s not hard to see how moral intuitions in favor of fairness and generosity would have been crucial to the survival of our ancestors’ tribes, as would the intuition to condemn tribe members who betrayed those reciprocal norms. If we can account for the presence of these moral intuitions by the fact that they were useful, is there any reason left to hypothesize that they are also “true”? The same question could be asked of the moral intuitions which Jonathan Haidt has classified as “purity-based” – an aversion to incest, for example, would clearly have been beneficial to our ancestors. Since that fact alone suffices to explain the (widespread) presence of the “incest is morally wrong” intuition, why should we take that intuition as evidence that “incest is morally wrong” is true?
  • The still-young debate over intuition will likely continue to rage, especially since it’s intertwined with a rapidly growing body of cognitive and social psychological research examining where our intuitions come from and how they vary across time and place.
  • its resolution bears on the work of literally every field of analytic philosophy, except perhaps logic. Can analytic philosophy survive without intuition? (If so, what would it look like?) And can the debate over the legitimacy of appeals to intuition be resolved with an appeal to intuition?
Weiye Loh

'There Is No Values-Free Form Of Education,' Says U.S. Philosopher - Radio Fr... - 0 views

  • from the earliest years, education should be based primarily on exploration, understanding in depth, and the development of logical, critical thinking. Such an emphasis, she says, not only produces a citizenry capable of recognizing and rooting out political jingoism and intolerance. It also produces people capable of questioning authority and perceived wisdom in ways that enhance innovation and economic competitiveness. Nussbaum warns against a narrow educational focus on technical competence.
  • a successful, long-term democracy depends on a citizenry with certain qualities that can be fostered by education.
  • The first is the capacity we associate in the Western tradition with Socrates, but it certainly appears in all traditions -- that is, the ability to think critically about proposals that are brought your way, to analyze an argument, to distinguish a good argument from a bad argument. And just in general, to lead what Socrates called “the examined life.” Now that’s, of course, important because we know that people are very prone to go along with authority, with fashion, with peer pressure. And this kind of critical enlivened citizenry is the only thing that can keep democracy vital.
  • it can be trained from very early in a child’s education. There’re ways that you can get quite young children to recognize what’s a good argument and what’s a bad argument. And as children grow older, it can be done in a more and more sophisticated form until by the time they’re undergraduates in universities they would be studying Plato’s dialogues for example and really looking at those tricky arguments and trying to figure out how to think. And this is important not just for the individual thinking about society, but it’s important for the way people talk to each other. In all too many public discussions people just throw out slogans and they throw out insults. And what democracy needs is listening. And respect. And so when people learn how to analyze an argument, then they look at what the other person’s saying differently. And they try to take it apart, and they think: “Well, do I share some of those views and where do I differ here?” and so on. And this really does produce a much more deliberative, respectful style of public interaction.
  • The second [quality] is what I call “the ability to think as a citizen of the whole world.” We’re all narrow and this is again something that we get from our animal heritage. Most non-human animals just think about the group. But, of course, in this world we need to think, first of all, our whole nation -- its many different groups, minority and majority. And then we need to think outside the nation, about how problems involving, let’s say, the environment or global economy and so on need cooperative resolution that brings together people from many different nations.
  • That’s complicated and it requires learning a lot of history, and it means learning not just to parrot some facts about history but to think critically about how to assess historical evidence. It means learning how to think about the global economy. And then I think particularly important in this era, it means learning something about the major world religions. Learning complicated, nonstereotypical accounts of those religions because there’s so much fear that’s circulating around in every country that’s based usually on just inadequate stereotypes of what Muslims are or whatever. So knowledge can at least begin to address that.
  • the third thing, which I think goes very closely with the other two, is what I call “the narrative imagination,” which is the ability to put yourself in the shoes of another person to have some understanding of how the world looks from that point of view. And to really have that kind of educated sympathy with the lives of others. Now again this is something we come into the world with. Psychologists have now found that babies less than a year old are able to take up the perspective of another person and do things, see things from that perspective. But it’s very narrow and usually people learn how to think about what their parents are thinking and maybe other family members but we need to extend that and develop it, and learn how the world looks from the point of view of minorities in our own culture, people outside our culture, and so on.
  • since we can’t go to all the places that we need to understand -- it’s accomplished by reading narratives, reading literature, drama, participating through the arts in the thought processes of another culture. So literature and the arts are the major ways we would develop and extend that capacity.
  • For many years, the leading model of development ... used by economists and international agencies measuring welfare was simply that for a country to develop means to increase [its] gross domestic product per capita. Now, in recent years, there has been a backlash to that because people feel that it just doesn’t ask enough about what goods are really doing for people, what can people really do and be.
  • so since 1990s the United Nations’ development program has produced annually what’s called a “Human Development Report” that looks at things like access to education, access to health care. In other words, a much richer menu of human chances and opportunities that people have. And at the theoretical end I’ve worked for about 20 years now with economist Amartya Sen, who won the Nobel Prize in 1998 for economics. And we’ve developed this as account of -- so for us what it is for a country to do better is to enhance the set of capabilities meaning substantial opportunities that people have to lead meaningful, fruitful lives. And then I go on to focus on a certain core group of those capabilities that I think ought to be protected by constitutional law in every country.
  • Life; health; bodily integrity; the development of senses, imagination, and thought; the development of practical reason; opportunities to have meaningful affiliations both friendly and political with other people; the ability to have emotional health -- not to be in other words dominated by overwhelming fear and so on; the ability to have a productive relationship with the environment and the world of nature; the ability to play and have leisure time, which is something that I think people don’t think enough about; and then, finally, control over one’s material and social environment, some measure of control. Now of course, each of these is very abstract, and I specify them further. Although I also think that each country needs to finally specify them with its own particular circumstances in view.
  • when kids learn in a classroom that just makes them sit in a chair, well, they can take in something in their heads, but it doesn’t make them competent at negotiating in the world. And so starting, at least, with Jean Jacques Rousseau in the 18th century, people thought: “Well, if we really want people to be independent citizens in a democracy that means that we can’t have whole classes of people who don’t know how to do anything, who are just simply sitting there waiting to be waited on in practical matters.” And so the idea that children should participate in their practical environment came out of the initial democratizing tendencies that went running through the 18th century.
  • even countries who absolutely do not want that kind of engaged citizenry see that for the success of business these abilities are pretty important. Both Singapore and China have conducted mass education reforms over the last five years because they realized that their business cultures don’t have enough imagination and they also don’t have enough critical thinking, because you can have awfully corrupt business culture if no one is willing to say the unpleasant word or make a criticism.
  • So they have striven to introduce more critical thinking and more imagination into their curricula. But, of course, for them, they want to cordon it off -- they want to do it in the science classroom, in the business classroom, but not in the politics classroom. Well, we’ll see -- can they do that? Can they segment it that way? I think democratic thinking is awfully hard to segment as current events in the Middle East are showing us. It does have the tendency to spread.
  • so maybe the people in Singapore and China will not like the end result of what they tried to do or maybe the reform will just fail, which is equally likely -- I mean the educational reform.
  • if you really don’t want democracy, this is not the education for you. It had its origins in the ancient Athenian democracy which was a very, very strong participatory democracy and it is most at home in really true democracy, where our whole goal is to get each and every person involved and to get them thinking about things. So, of course, if politicians have ambivalence about that goal they may well not want this kind of education.
  • when we bring up children in the family or in the school, we are always engineering. I mean, there is no values-free form of education in the world. Even an education that just teaches you a list of facts has values built into it. Namely, it gives a negative value to imagination and to the critical faculties and a very high value to a kind of rote, technical competence. So, you can't avoid shaping children.
  • Increasingly the child should be in control and should become free. And that's what the critical thinking is all about -- it's about promoting freedom as the child goes on. So, the end product should be an adult who is really thinking for him- or herself about the direction of society. But you don't get freedom just by saying, "Oh, you are free." Progressive educators that simply stopped teaching found out very quickly that that didn't produce freedom. Even some of the very extreme forms of progressive school where children were just allowed to say every day what it was they wanted to learn, they found that didn't give the child the kind of mastery of self and of the world that you really need to be a free person.
Weiye Loh

BrainGate gives paralysed the power of mind control | Science | The Observer - 0 views

  • brain-computer interface, or BCI
  • is a branch of science exploring how computers and the human brain can be meshed together. It sounds like science fiction (and can look like it too), but it is motivated by a desire to help chronically injured people. They include those who have lost limbs, people with Lou Gehrig's disease, or those who have been paralysed by severe spinal-cord injuries. But the group of people it might help the most are those whom medicine assumed were beyond all hope: sufferers of "locked-in syndrome".
  • These are often stroke victims whose perfectly healthy minds end up trapped inside bodies that can no longer move. The most famous example was French magazine editor Jean-Dominique Bauby who managed to dictate a memoir, The Diving Bell and the Butterfly, by blinking one eye. In the book, Bauby, who died in 1997 shortly after the book was published, described the prison his body had become for a mind that still worked normally.
  • Now the project is involved with a second set of human trials, pushing the technology to see how far it goes and trying to miniaturise it and make it wireless for a better fit in the brain. BrainGate's concept is simple. It posits that the problem for most patients does not lie in the parts of the brain that control movement, but with the fact that the pathways connecting the brain to the rest of the body, such as the spinal cord, have been broken. BrainGate plugs into the brain, picks up the right neural signals and beams them into a computer where they are translated into moving a cursor or controlling a computer keyboard. By this means, paralysed people can move a robot arm or drive their own wheelchair, just by thinking about it.
  • he and his team are decoding the language of the human brain. This language is made up of electronic signals fired by billions of neurons and it controls everything from our ability to move, to think, to remember and even our consciousness itself. Donoghue's genius was to develop a deceptively small device that can tap directly into the brain and pick up those signals for a computer to translate them. Gold wires are implanted into the brain's tissue at the motor cortex, which controls movement. Those wires feed back to a tiny array – an information storage device – attached to a "pedestal" in the skull. Another wire feeds from the array into a computer. A test subject with BrainGate looks like they have a large plug coming out the top of their heads. Or, as Donoghue's son once described it, they resemble the "human batteries" in The Matrix.
  • BrainGate's highly advanced computer programs are able to decode the neuron signals picked up by the wires and translate them into the subject's desired movement. In crude terms, it is a form of mind-reading based on the idea that thinking about moving a cursor to the right will generate detectably different brain signals than thinking about moving it to the left.
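In the same crude spirit, the left-versus-right idea can be sketched as a toy nearest-centroid classifier over simulated firing rates. Everything here — the channel count, the per-channel rates, the noise level — is invented for illustration; this is not BrainGate's actual decoding algorithm, only a minimal picture of "detectably different signals" being mapped back to an intent:

```python
import random

random.seed(0)  # deterministic for the illustration

# Hypothetical per-channel mean firing rates (spikes/sec) for two
# imagined movements. In this toy, "left" and "right" differ slightly
# on each channel, which is all a decoder needs.
N_CHANNELS = 8
MEANS = {
    "left":  [20 + 2 * i for i in range(N_CHANNELS)],
    "right": [34 - 2 * i for i in range(N_CHANNELS)],
}

def record_trial(intent):
    """Simulate one noisy multi-channel recording for a given intent."""
    return [m + random.gauss(0, 2) for m in MEANS[intent]]

def centroid(trials):
    """Average each channel across a list of trials."""
    return [sum(ch) / len(ch) for ch in zip(*trials)]

def decode(signal, centroids):
    """Pick the intent whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((s - m) ** 2 for s, m in zip(signal, c))
    return min(centroids, key=lambda intent: dist(centroids[intent]))

# "Calibration" phase: record labelled trials, average into centroids.
centroids = {intent: centroid([record_trial(intent) for _ in range(50)])
             for intent in MEANS}

# Decode two fresh, unlabelled trials.
pred_left = decode(record_trial("left"), centroids)
pred_right = decode(record_trial("right"), centroids)
```

Real decoders are far more sophisticated, but the calibrate-then-classify shape is the same: labelled "think about moving" sessions establish what each intent looks like, after which new signals are matched against those templates.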
  • The technology has developed rapidly, and last month BrainGate passed a vital milestone when one paralysed patient went past 1,000 days with the implant still in her brain and allowing her to move a computer cursor with her thoughts. The achievement, reported in the prestigious Journal of Neural Engineering, showed that the technology can continue to work inside the human body for unprecedented amounts of time.
  • Donoghue talks enthusiastically of one day hooking up BrainGate to a system of electronic stimulators plugged into the muscles of the arm or legs. That would open up the prospect of patients moving not just a cursor or their wheelchair, but their own bodies.
  • If Nagle's motor cortex was no longer working healthily, the entire BrainGate project could have been rendered pointless. But when Nagle was plugged in and asked to imagine moving his limbs, the signals beamed out with a healthy crackle. "We asked him to imagine moving his arm to the left and to the right and we could hear the activity," Donoghue says. When Nagle first moved a cursor on a screen using only his thoughts, he exclaimed: "Holy shit!"
  • BrainGate and other BCI projects have also piqued the interest of the government and the military. BCI is melding man and machine like no other sector of medicine or science and there are concerns about some of the implications. First, beyond detecting and translating simple movement commands, BrainGate may one day pave the way for mind-reading. A device to probe the innermost thoughts of captured prisoners or dissidents would prove very attractive to some future military or intelligence service. Second, there is the idea that BrainGate or other BCI technologies could pave the way for robot warriors controlled by distant humans using only their minds. At a conference in 2002, a senior American defence official, Anthony Tether, enthused over BCI. "Imagine a warrior with the intellect of a human and the immortality of a machine." Anyone who has seen Terminator might worry about that.
  • Donoghue acknowledges the concerns but has little time for them. When it comes to mind-reading, current BrainGate technology has enough trouble with translating commands for making a fist, let alone probing anyone's mental secrets
  • As for robot warriors, Donoghue was slightly more circumspect. At the moment most BCI research, including BrainGate projects, that touch on the military is focused on working with prosthetic limbs for veterans who have lost arms and legs. But Donoghue thinks it is healthy for scientists to be aware of future issues. "As long as there is a rational dialogue and scientists think about where this is going and what is the reasonable use of the technology, then we are on a good path," he says.
  •  
    The robotic arm clutched a glass and swung it over a series of coloured dots that resembled a Twister gameboard. Behind it, a woman sat entirely immobile in a wheelchair. Slowly, the arm put the glass down, narrowly missing one of the dots. "She's doing that!" exclaims Professor John Donoghue, watching a video of the scene on his office computer - though the woman onscreen had not moved at all. "She actually has the arm under her control," he says, beaming with pride. "We told her to put the glass down on that dot." The woman, who is almost completely paralysed, was using Donoghue's groundbreaking technology to control the robot arm using only her thoughts. Called BrainGate, the device is implanted into her brain and hooked up to a computer to which she sends mental commands. The video played on, giving Donoghue, a silver-haired and neatly bearded man of 62, even more reason to feel pleased. The patient was not satisfied with her near miss and the robot arm lifted the glass again. After a brief hover, the arm positioned the glass on the dot.
Weiye Loh

Joe Queenan: My 6,128 Favorite Books - WSJ.com - 0 views

  •  
    "If you have read 6,000 books in your lifetime, or even 600, it's probably because at some level you find "reality" a bit of a disappointment. People in the 19th century fell in love with "Ivanhoe" and "The Count of Monte Cristo" because they loathed the age they were living through. Women in our own era read "Pride and Prejudice" and "Jane Eyre" and even "The Bridges of Madison County"-a dimwit, hayseed reworking of "Madame Bovary"-because they imagine how much happier they would be if their husbands did not spend quite so much time with their drunken, illiterate golf buddies down at Myrtle Beach."
Weiye Loh

The New Republic: Lessons From China And Singapore : NPR - 0 views

  • What do educators in Singapore and China do? By their own internal accounts, they do a great deal of rote learning and "teaching to the test." Even if our sole goal was to produce students who would contribute maximally to national economic growth — the primary, avowed goal of education in Singapore and China — we should reject their strategies, just as they themselves have rejected them.
  • both nations have conducted major educational reforms, concluding that a successful economy requires nourishing analytical abilities, active problem-solving, and the imagination required for innovation.
  • Observers of current practices in both Singapore and China conclude that the reforms have not really been implemented. Teacher pay is still linked to test scores, and thus the incentive structure to effectuate real change is lacking. In general, it's a lot easier to move toward rote learning than to move away from it
  • Moreover, the reforms are cabined by these authoritarian nations' fear of true critical freedom. In Singapore, nobody even attempts to use the new techniques when teaching about politics and contemporary problems. "Citizenship education" typically takes the form of analyzing a problem, proposing several possible solutions, and then demonstrating how the one chosen by government is the right one for Singapore.
  • One professor of communications (who has since left Singapore) reported on a recent attempt to lead a discussion of the libel suits in her class: "I can feel the fear in the room. …You can cut it with a knife."
  • Singapore and China are terrible models of education for any nation that aspires to remain a pluralistic democracy. They have not succeeded on their own business-oriented terms, and they have energetically suppressed imagination and analysis when it comes to the future of the nation and the tough choices that lie before it. If we want to turn to Asia for models, there are better ones to be found: Korea's humanistic liberal arts tradition, and the vision of Tagore and like-minded Indian educators.
  •  
    The New Republic: Lessons From China And Singapore by MARTHA C. NUSSBAUM
Jianwei Tan

Dominic Utton: How to scam a scammer | The Guardian - 0 views

  •  
    Summary: Some people may have heard of the Nigerian 419 scams that were infamous a few years back. These scammers, who supposedly operated out of Nigeria, created elaborate stories and solicited help through e-mails. Although the initial e-mail merely asks for help, subsequent correspondence usually ends with the scammer requesting monetary aid through wire transfer. One man, Mike, has taken it upon himself to declare war on these scammers, baiting them into believing he will send money while in fact playing pranks on them. The pranks range from telling silly stories that waste the scammer's time to persuading the scammer to get tattooed in order to receive the money.
    Question: Scams are, without a doubt, unethical and probably criminal activities. However, is the act of scamming a would-be scammer an ethical thing to do?
    Problem: Imagine a situation where the scammer and the scambaiter (the person scamming the scammer) are from the same country or even the same state, so both parties are subject to the same laws. If the scammer tried to launch a scam and was instead scambaited into severe consequences (getting tattooed is quite severe), should the scambaiter be prosecuted under the legal system?
Weiye Loh

Online "Toon porn" - 20 views

I must correct that never in my arguments did I mention that the interpreter is the problem. I was merely answering YZ's question if cartoon characters can be deemed as representative of human be...

online cartoon anime pornography ethics

Weiye Loh

The Way We Live Now - I Tweet, Therefore I Am - NYTimes.com - 0 views

  • Each Twitter post seemed a tacit referendum on who I am, or at least who I believe myself to be. The grocery-store episode telegraphed that I was tuned in to the Seinfeldian absurdities of life; my concern about women’s victimization, however sincere, signaled that I also have a soul. Together they suggest someone who is at once cynical and compassionate, petty yet deep. Which, in the end, I’d say, is pretty accurate.
  • Distilling my personality provided surprising focus, making me feel stripped to my essence. It forced me, for instance, to pinpoint the dominant feeling as I sat outside with my daughter listening to E.B. White. Was it my joy at being a mother? Nostalgia for my own childhood summers? The pleasures of listening to the author’s quirky, underinflected voice? Each put a different spin on the occasion, of who I was within it. Yet the final decision (“Listening to E.B. White’s ‘Trumpet of the Swan’ with Daisy. Slow and sweet.”) was not really about my own impressions: it was about how I imagined — and wanted — others to react to them. That gave me pause. How much, I began to wonder, was I shaping my Twitter feed, and how much was Twitter shaping me?
  • sociologist Erving Goffman famously argued that all of life is performance: we act out a role in every interaction, adapting it based on the nature of the relationship or context at hand. Twitter has extended that metaphor to include aspects of our experience that used to be considered off-set: eating pizza in bed, reading a book in the tub, thinking a thought anywhere, flossing. Effectively, it makes the greasepaint permanent, blurring the lines not only between public and private but also between the authentic and contrived self. If all the world was once a stage, it has now become a reality TV show: we mere players are not just aware of the camera; we mug for it.
  • ...3 more annotations...
  • Second Life, Facebook, MySpace, Twitter — has shifted not only how we spend our time but also how we construct identity. For her coming book, “Alone Together,” Sherry Turkle, a professor at M.I.T., interviewed more than 400 children and parents about their use of social media and cellphones. Among young people especially she found that the self was increasingly becoming externally manufactured rather than internally developed: a series of profiles to be sculptured and refined in response to public opinion. “On Twitter or Facebook you’re trying to express something real about who you are,” she explained. “But because you’re also creating something for others’ consumption, you find yourself imagining and playing to your audience more and more. So those moments in which you’re supposed to be showing your true self become a performance. Your psychology becomes a performance.” Referring to “The Lonely Crowd,” the landmark description of the transformation of the American character from inner- to outer-directed, Turkle added, “Twitter is outer-directedness cubed.”
  • when every thought is externalized, what becomes of insight? When we reflexively post each feeling, what becomes of reflection? When friends become fans, what happens to intimacy? The risk of the performance culture, of the packaged self, is that it erodes the very relationships it purports to create, and alienates us from our own humanity.
  • I am trying to gain some perspective on the perpetual performer’s self-consciousness. That involves trying to sort out the line between person and persona, the public and private self.
  •  
    THE WAY WE LIVE NOW I Tweet, Therefore I Am
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books - 0 views

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • ...27 more annotations...
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
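Morse's mapping of letters to short and long pulses can be sketched with a lookup table. Note that the table below uses the modern International Morse conventions for a handful of letters, not Morse's original 1844 code:

```python
# A handful of letters in International Morse Code (the modern
# convention; Morse's original 1844 code differed in detail).
# Short pulse = ".", long pulse = "-".
MORSE = {
    "E": ".",   "T": "-",   "A": ".-",  "N": "-.",
    "M": "--",  "S": "...", "O": "---", "R": ".-.",
}

def encode(text: str) -> str:
    """Encode a message as dots and dashes, one group per letter."""
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(encode("team"))  # - . .- --
```

The scheme was simple enough for anyone to learn, which is exactly the design goal the excerpt attributes to Morse.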
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live.
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
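Shannon's estimate translated into everyday units, for a sense of scale:

```python
# Shannon's 1949 estimate for the Library of Congress, in everyday
# units: one hundred trillion bits is 12.5 terabytes -- which indeed
# fits on the kind of disc drive described above.
bits = 100e12
terabytes = bits / 8 / 1e12
print(terabytes)  # 12.5
```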
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
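The redundancy principle behind Shannon's law can be illustrated with the crudest possible scheme, a repetition code decoded by majority vote. This is far less efficient than the codes Shannon's theorem actually promises; it only demonstrates the principle that enough redundancy lets a message survive a noisy channel:

```python
import random

# The crudest redundancy scheme: repeat each bit n times and take a
# majority vote at the receiver. Shannon's theorem promises far more
# efficient codes; this only shows that redundancy beats noise.
def transmit(bits, flip_prob, rng):
    """Send bits through a channel that flips each bit with flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

def send_with_repetition(bits, n, flip_prob, rng):
    """Repeat each bit n times, transmit, and decode by majority vote."""
    noisy = transmit([b for b in bits for _ in range(n)], flip_prob, rng)
    return [int(sum(noisy[i * n:(i + 1) * n]) > n // 2)
            for i in range(len(bits))]

rng = random.Random(0)
message = [1, 0, 1, 1, 0, 0, 1, 0] * 100
raw = transmit(message, 0.1, rng)
decoded = send_with_repetition(message, 9, 0.1, rng)
print(sum(a != b for a, b in zip(message, raw)))      # many raw errors
print(sum(a != b for a, b in zip(message, decoded)))  # far fewer after voting
```

Wikipedia's many redundant readers and editors play the role of the repeated bits: individual contributions are noisy, but the corrected consensus is surprisingly accurate.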
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian.2 Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941.3 Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
Weiye Loh

homunculus: I can see clearly now - 0 views

  • Here’s a little piece I wrote for Nature news. To truly appreciate this stuff you need to take a look at the slideshow. There will be a great deal more on early microscopy in my next book, probably called Curiosity and scheduled for next year.
  • The first microscopes were a lot better than they are given credit for. That’s the claim of microscopist Brian Ford, based at Cambridge University and a specialist in the history and development of these instruments.
  • Ford says it is often suggested that the microscopes used by the earliest pioneers in the seventeenth century, such as Robert Hooke and Antony van Leeuwenhoek, gave only very blurred images of structures such as cells and micro-organisms. Hooke was the first to record cells, seen in thin slices of cork, while Leeuwenhoek described tiny ‘animalcules’, invisible to the naked eye, in rain water in 1676. The implication is that these breakthroughs in microscopic biology involved more than a little guesswork and invention. But Ford has looked again at the capabilities of some of Leeuwenhoek’s microscopes, and says ‘the results were breathtaking’. ‘The images were comparable with those you would obtain from a modern light microscope’, he adds in an account of his experiments in Microscopy and Analysis [1].
  • ...5 more annotations...
  • The poor impression of the seventeenth-century instruments, says Ford, is due to bad technique in modern reconstructions. In contrast to the hazy images shown in some museums and television documentaries, careful attention to such factors as lighting can produce micrographs of startling clarity using original microscopes or modern replicas.
  • Ford was able to make some of these improvements when he was granted access to one of Leeuwenhoek’s original microscopes owned by the Utrecht University Museum in the Netherlands. Leeuwenhoek made his own instruments, which had only a single lens made from a tiny bead of glass mounted in a metal frame. These simple microscopes were harder to make and to use than the more familiar two-lens compound microscope, but offered greater resolution.
  • Hooke popularized microscopy in his 1665 masterpiece Micrographia, which included stunning engravings of fleas, mites and the compound eyes of flies. The diarist Samuel Pepys judged it ‘the most ingenious book that I ever read in my life’. Ford’s findings show that Hooke was not, as some have imagined, embellishing his drawings from imagination, but should genuinely have been able to see such things as the tiny hairs on the flea’s legs.
  • Even Hooke was temporarily foxed, however, when he was given the duty of reproducing the results described by Leeuwenhoek, a linen merchant of Delft, in a letter to the Royal Society. It took him over a year before he could see these animalcules, whereupon he wrote that ‘I was very much surprised at this so wonderful a spectacle, having never seen any living creature comparable to these for smallness.’ ‘The abilities of those pioneer microscopists were so much greater than has been recognized’ says Ford. He attributes this misconception to the fact that ‘no longer is microscopy properly taught.’
  • Reference: 1. Ford, B. J. Microsc. Anal. March 2011 (in press).
  •  
    The first microscopes were a lot better than they are given credit for.
Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren... - 0 views

  • 1. Distributed cocreation moves into the mainstream. In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation. In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • ...10 more annotations...
  • 3. Collaboration at scale. Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things’. The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
  • 5. Experimentation and big data. Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing.
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams.
  • 6. Wiring for a sustainable world Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
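The usage-based auto-insurance pricing described in trend 4 above can be made concrete with a small sketch. Everything here is an illustrative assumption – the telemetry fields, weights, and base rate are invented for this example, not taken from any actual insurer:

```python
# Hypothetical usage-based ("pay how you drive") premium sketch.
# All weights, fields, and the base rate are illustrative assumptions,
# not taken from any actual insurer.

def monthly_premium(miles_driven, hard_brakes_per_100mi, night_miles_pct,
                    base_rate=40.0):
    """Price risk on observed driving behaviour rather than demographics."""
    usage_charge = 0.05 * miles_driven               # distance exposure
    braking_surcharge = 1.5 * hard_brakes_per_100mi  # harsh-braking events
    night_surcharge = 20.0 * night_miles_pct         # share of miles driven at night
    return round(base_rate + usage_charge + braking_surcharge + night_surcharge, 2)

# A cautious daytime driver versus a harder-driving night commuter.
print(monthly_premium(400, 2, 0.05))   # low-risk profile
print(monthly_premium(1200, 9, 0.40))  # higher usage, higher premium
```

The point of the model is simply that the sensor data, not demographics, drives the price.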
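Similarly, the "test and learn" experimentation in trend 5 ultimately reduces to comparing a treatment against a control. A minimal readout for such an experiment is a two-proportion z-test; the conversion counts below are made up for illustration:

```python
# Minimal "test and learn" readout: a two-proportion z-test comparing a
# control page (A) against a variant (B). The conversion counts are invented.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the rate difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # decide with data rather than instinct
```

A p-value below the chosen threshold is what lets managers trust the experiment over instinct and experience.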
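And the "paying only for what you use" economics of trend 7 can be sketched as a comparison between a metered service bill and an upfront capital purchase; all prices here are hypothetical:

```python
# Sketch of "pay only for what you use" economics: a metered service bill
# versus an upfront capital purchase. All prices are illustrative assumptions.

def metered_cost(units_used, price_per_unit):
    """Variable cost: the bill scales with actual consumption."""
    return units_used * price_per_unit

def ownership_cost(purchase_price, upkeep_per_month, months):
    """Fixed cost: capital outlay plus upkeep, regardless of usage."""
    return purchase_price + upkeep_per_month * months

# A light user over one year: 2,400 compute-hours at $0.10/hour,
# versus buying a $1,500 server with $20/month in upkeep.
as_a_service = metered_cost(2400, 0.10)
owned = ownership_cost(1500, 20, 12)
print(as_a_service, owned)  # the service model wins until usage grows
```

This is why B2B customers can account for the service as a variable cost while avoiding the capital investment.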
Valerie Oon

Ethics discussion based on new movie, "Surrogates" - 8 views

This movie upset me. I don't think the director developed the premise and plot to the potential it could have reached. Quite a shallow interpretation. But it does raise some intrigue. I'm a bit stu...

technology future empowerment destruction

Weiye Loh

Roger Pielke Jr.'s Blog: IPCC and COI: Flashback 2004 - 0 views

  • In this case the NGOs and other groups represent environmental and humanitarian groups that have put together a report (in PDF) on what they see as needed and unnecessary policy actions related to climate change. They put together a nice glossy report with findings and recommendations such as:
    * Limit global temperature rise to 2 degrees Celsius (p. 4)
    * Extracting the World Bank from fossil fuels (p. 15)
    * Opposing the inclusion of carbon sinks in the [Kyoto] Protocol (p. 22)
  • It is troubling that the Chair of the IPCC would lend his name and organizational affiliation to a set of groups with members engaged actively in political advocacy on climate change. Even if Dr. Pachauri feels strongly about the merit of the political agenda proposed by these groups, at a minimum his endorsement creates a potential perception that the IPCC has an unstated political agenda. This is compounded by the fact that the report Dr. Pachauri tacitly endorses contains statements that are scientifically at odds with those of the IPCC.
  • perhaps most troubling is that by endorsing this group’s agenda he has opened the door for those who would seek to discredit the IPCC by alleging exactly such a bias. (And don’t be surprised to see such statements forthcoming.) If the IPCC’s role is indeed to act as an honest broker, then it would seem to make sense that its leadership ought not blur that role by endorsing, tacitly or otherwise, the agendas of particular groups. There are plenty of appropriate places for political advocacy on climate change, but the IPCC does not seem to me to be among those places.
  • ...1 more annotation...
  • Organized by the New Economics Foundation and the Working Group on Climate and Development, the report (in PDF) is actually pretty good and contains much valuable information on climate change and development (that is, once you get past the hype of the press release and its lack of precision in disaggregating climate and vulnerability as sources of climate-related impacts). The participating organizations have done a nice job integrating considerations of climate change and development, a perspective that is certainly needed. More generally, the IPCC suffers because it no longer considers “policy options” under its mandate. Since its First Assessment Report when it did consider policy options, the IPCC has eschewed responsibility for developing and evaluating a wide range of possible policy options on climate change. By keeping policy outside of its mandate since 1992, the IPCC, ironically, leaves itself more open to charges of political bias. It is time for the IPCC to bring policy back in, both because we need new and innovative options on climate and because the IPCC has great potential to serve as an honest broker. But until it does, its leadership would be well served to avoid either the perception or the reality of endorsing particular political perspectives.
  •  
    Consider the following imaginary scenario. NGOs and a few other representatives of the oil and gas industry decide to band together to produce a report on what they see as needed and unnecessary policy actions related to climate change. They put together a nice glossy report with findings and recommendations such as:
    * Coal is the fuel of the future; we must mine more.
    * CO2 regulations are too costly.
    * Climate change will be good for agriculture.
    In addition, the report contains some questionable scientific statements and associations. Imagine further that the report contains a preface authored by a prominent scientist who, though unpaid for his work, lends his name and credibility to the report. How might that scientist be viewed by the larger community? Answers that come to mind include: "A tool of industry," "Discredited," "Biased," "Political Advocate." It is likely that in such a scenario the connection of the scientist to the political advocacy efforts of the oil and gas industry would provide considerable grist for opponents of the oil and gas industry, and specifically a basis for highlighting the appearance or reality of a compromised position on the scientist's part. Fair enough?
Weiye Loh

New Statesman - Johann Hari and media standards - 0 views

  • Consistency is a virtue. One cannot attack - in any principled terms - the reactionary and the credulous, the knavish and the foolish, for a casual approach to sources, data, and evidence, or for disregarding normal journalistic standards, if, when it is a leading liberal writer that is caught out, it is treated as somehow exceptional. It simply smacks of shallow partisanship.
  • inconsistency also undermines the normative claims for the superiority of a liberal and critical approach. How can one sensibly call out the "other side" on any given issue in terms which one would not apply to one's "own side"?
  •  
    now that Johann Hari has apologised, one wonders if many who rushed to his support should apologise too. There were many liberal, rational, and atheistic writers and pundits who defended him on Twitter on terms they would never have extended to a conservative, religious, or quack writer or pundit exposed as making a similar sort of mistake. Naming names would be inflammatory; and they, and their followers, know who they are. What is important here is the basic principle of consistency and its value. Just imagine had it been, say, Peter Hitchens, Garry Bushell, Richard Littlejohn, Rod Liddle, Toby Young, Guido Fawkes, Melanie Phillips, Damian Thompson, Daniel Hannan, Christopher Booker, Andrew Roberts, Nadine Dorries, and so on, who had been caught out indulging in some similar malpractice. Would the many liberal or atheistic writers and pundits who sought to defend (or "put into perspective") Hari have been so charitable? Of course not.
Weiye Loh

"The Particle-Emissions Dilemma" by Henning Rodhe | Project Syndicate - 0 views

  • according to the United Nations’ Intergovernmental Panel on Climate Change, the cooling effect of white particles may counteract as much as about half of the warming effect of carbon dioxide. So, if all white particles were removed from the atmosphere, global warming would increase considerably. The dilemma is that all particles, whether white or black, constitute a serious problem for human health. Every year, an estimated two million people worldwide die prematurely, owing to the effects of breathing polluted air. Furthermore, sulfur-rich white particles contribute to the acidification of soil and water.
  • Naturally, measures targeting soot and other short-lived particles must not undermine efforts to reduce CO2 emissions. In the long term, emissions of CO2 and other long-lived greenhouse gases constitute the main problem. But a reduction in emissions of soot (and other short-lived climate pollutants) could alleviate the pressures on the climate in the coming decades.
  • what do we do about white particles? How do we weigh improved health and reduced mortality rates for hundreds of thousands of people against the serious consequences of global warming? It is difficult to imagine that any country’s officials would knowingly submit their population to higher health risks by not acting to reduce white particles solely because they counteract global warming. On the contrary, sulfur emissions have been reduced over the last few decades in both Europe and North America, owing to a desire to promote health and counter acidification; and China, too, seems to be taking measures to reduce sulfur emissions and improve the country’s terrible air quality. But, in other parts of the world where industrialization is accelerating, sulfur emissions continue to increase.
  • ...2 more annotations...
  • Nobel laureate Paul Crutzen has suggested another solution: manipulate the climate by releasing white sulfur particles high up in the stratosphere, where they would remain for several years, exerting a proven cooling effect on Earth’s climate without affecting human health. In 1991, the eruption of Mount Pinatubo in the Philippines created a haze of sulfur in the higher atmosphere that cooled the entire planet by approximately half a degree Celsius for two years afterwards.
  • Other methods of geoengineering – that is, consciously manipulating the climate – include painting the roofs of houses white in order to increase the reflection of sunlight, covering deserts with reflective plastic, and fertilizing the seas with iron in order to increase the absorption of CO2.
  •  
    Particle emissions into Earth's atmosphere affect both human health and the climate. So we should limit them, right? For health reasons, yes, we should indeed do that; but, paradoxically, limiting such emissions would cause global warming to increase.
Weiye Loh

Balderdash - 0 views

  • A letter Paul wrote to complain about the "Dead Sea Scrolls" exhibition at the Arts House:

    To Ms. Amira Osman (Marketing and Communications Manager)
    cc. Colin Goh, General Manager; Florence Lee, Deputy General Manager

    Dear Ms. Osman,

    I visited the Dead Sea Scrolls "exhibition" today with my wife. Thinking that it was from a legitimate scholarly institute or (how naïve of me!) the Israel Antiquities Authority, I was looking forward to a day of education and entertainment. Yet when I got there, much of the exhibition (and the booklets) merely espouses an evangelical (fundamentalist) view of the Bible – there are booklets on the inerrancy of the Bible, on how archaeology has proven the Bible to be true, etc.

    Apart from these, there are many blatant misrepresentations of the state of archaeology and mainstream biblical scholarship:

    a) There was an initial screening upon entry of a 5-10 minute pseudo-documentary on the Dead Sea Scrolls. A presenter (I can't remember the name) was described as a "biblical archaeologist" – a term that no serious archaeologist working in the Levant would apply to him or herself. (Some prefer the term "Syro-Palestinian archaeologist", but almost all reject the term "biblical archaeologist".) See the book by Thomas W. Davis, "Shifting Sands: The Rise and Fall of Biblical Archaeology", Oxford, New York 2004. Davis is an actual archaeologist working in the field, and the book explains why serious archaeologists do not consider "biblical archaeologist" a legitimate term.

    b) In the same presentation, the presenter made the erroneous statement that the entire Old Testament was translated into Greek in the third century BCE. This is a mistake – only the Pentateuch (the first five books of the Old Testament) was translated during that time. [Note that this 'error' is not inadvertent but is a familiar claim by evangelical apologists who try to argue for an early date for all the books of the Old Testament – if all the books had been translated by the third century BCE, obviously they must all have been written before then! This flies against modern scholarship, which shows that some books of the Old Testament, such as the Book of Daniel, were written only in the second century BCE.] The actual state of scholarship on the Septuagint [the Greek translation of the Bible] is accurately given in the book by Ernst Würthwein, "The Text of the Old Testament", Eerdmans 1988, pp. 52-54.

    c) Perhaps the most blatant error was one which claimed that the "Magdalene fragments" – which contain the 26th chapter of the Gospel of Matthew – date to 50 AD!!! Scholars are unanimous in dating these fragments to 200 AD. The only 'scholar' cited who dated these fragments to 50 AD was the German papyrologist Carsten Thiede – a well-known fundamentalist. This is what Burton Mack (a critical – that is, legitimate – NT scholar) has to say about Thiede's eccentric dating: "From a critical scholar's point of view, Thiede's proposal is an example of just how desperate the Christian imagination can become in the quest to argue for the literal facticity of the Christian gospels" [Mack, Burton L., "Who Wrote the New Testament?: The Making of the Christian Myth", HarperCollins, San Francisco 1995]. Yet the dating of 50 AD is presented as though it were a scholarly consensus position!

    In fact, the last point was so blatant that I confronted the exhibitors. (Tak Boleh Tahan!!) One American exhibitor told me, "Yes, it could have been worded differently, but then we would have to change the whole display" (!!). When I told him that this was not a typo but a blatant attempt to deceive, he mentioned that Thiede's views are supported by "The Dallas Theological Seminary" – another well-known evangelical institute!

    I have no issue with the religious strengthening their faith by having their own internal exhibitions on historical artifacts, etc. But when it is presented to the public as a scholarly exhibition, this is quite close to being dishonest. I felt cheated of the $36 I paid for the tickets and of the hour I spent there before realizing what type of exhibition it was. I am disappointed with the Arts House for showcasing this without warning potential visitors of its clear religious bias.

    Yours sincerely,
    Paul Tobin

    To their credit, the Arts House speedily replied.
    • Weiye Loh
       
       The issue of truth is indeed so maddening. Certainly, the 'production' of truth has been widely researched and debated by scholars. Spivak, for example, argued for deconstruction by means of questioning the privilege of identity by which someone is believed to have the truth. Along the same line, albeit somewhat misunderstood I feel, it was mentioned in class that somehow people who are oppressed know better.
Weiye Loh

TODAYonline | World | The photo that's caused a stir - 0 views

  • reporters had not specifically asked the family's permission to publish them and that his parents had not wanted the photographs to be used. "There was no question that the photo had news value," AP senior managing editor John Daniszewski said. "But we also were very aware the family wished for the picture not to be seen." After lengthy internal discussions, AP concluded that the photo was a part of the war they needed to convey.
  • The US Defence Secretary, Mr Robert Gates, condemned the decision by the news agency Associated Press (AP) to publish the picture. "I cannot imagine the pain and suffering Lance Corporal Bernard's death has caused his family. Why your organisation would purposefully defy the family's wishes, knowing full well that it will lead to yet more anguish, is beyond me,"
  • ...1 more annotation...
  • the picture illustrated the sacrifice and the bravery of those fighting in Afghanistan."We feel it is our journalistic duty to show the reality of the war there, however unpleasant and brutal that sometimes is," said Mr Santiago Lyon, director of photography for AP.
  •  
    An ethical question: when the public's demand for information collides with a private family's wish for non-disclosure, which should win? How do we measure the pros and cons?
  •  
    Journalistic Ethics
Wing Yan Wong

Are the Feds Stalking Your Cell Phone? Lawsuit Seeks Answers - 1 views

http://www.technewsworld.com/story/63668.html?wlc=1252493244 Two legal groups have filed a lawsuit to get more information on whether the Federal Government may be using Americans' handphones to l...

privacy

started by Wing Yan Wong on 09 Sep 09 no follow-up yet
qiyi liao

Online Censorship: Obama urged to fine firms for aiding censors - 3 views

Internet activists are urging Barack Obama to pass legislation that would make it illegal for technology companies to collaborate with authoritarian countries that censor the internet. -The Guardi...

started by qiyi liao on 02 Sep 09 no follow-up yet
1 - 20 of 71 Next › Last »