
New Media Ethics 2009 course: Group items tagged "search"


Weiye Loh

Roger Pielke Jr.'s Blog: This Quote Cannot be Accurate

  • An article on energy efficiency over at Yale e360 has this quote allegedly from Energy Secretary Steven Chu: "it's a myth that the wealth of a country is proportional to its energy use." I do not believe that this can possibly be an accurate quote, for the simple reason that it is wrong (and I can find no evidence of it online other than in the Yale e360 story). The figure at the top of this post comes from Gapminder (the exact graph can be found here), and shows the very strong relationship of the wealth of a country to its energy use. No myth.
  • UPDATE: A colleague emails to suggest that a linear-linear graph might show something different. Good question, it does not. (See the correlation sketch after these annotations.)
  • UPDATE 2: Here is the source of the quote, a few minutes into Steven Chu's 2008 speech at the National Clean Energy Summit. Chu is clearly saying that beyond a certain level of energy use, metrics of quality of life (and he shows a graph using the Human Development Index) do not increase proportionately to increases in energy consumption. I have no problem with such a claim. However, it is highly misleading to reduce that to "it's a myth that the wealth of a country is proportional to its energy use" absent the broader context, as was done at Yale e360. Thus, the quote is accurate but used improperly. The wealth of a country is indeed proportional to its energy use, which is quite different from saying that, once you are very wealthy, consuming more energy does not proportionately increase indicators of quality of life. Thanks to the folks at Yale e360 for quickly responding to my query!
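A quick way to sanity-check the proportionality claim, and the linear-versus-log question raised in the first UPDATE, is to correlate wealth against energy use on both scales. The sketch below is illustrative only: the country figures are rough, hand-rounded magnitudes for GDP per capita and energy use per capita, not actual Gapminder data.

```python
import math

# Rough, illustrative magnitudes (NOT actual Gapminder data):
# GDP per capita in US$, energy use per capita in kg of oil equivalent.
countries = {
    "Ethiopia":      (1_000,   400),
    "India":         (3_000,   600),
    "China":         (7_000, 1_800),
    "Poland":       (18_000, 2_500),
    "Germany":      (36_000, 4_000),
    "United States":(46_000, 7_000),
}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gdp, energy = zip(*countries.values())
print("linear-linear r:", round(pearson(gdp, energy), 2))
print("log-log r:      ", round(pearson([math.log(g) for g in gdp],
                                        [math.log(e) for e in energy]), 2))
```

On numbers of this shape the correlation comes out strongly positive on both sets of axes, which is Pielke's point: the wealth-energy relationship does not depend on the choice of scale.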
Weiye Loh

Roger Pielke Jr.'s Blog: Ideological Diversity in Academia

  • Jonathan Haidt's talk (above) at the annual meeting of the Society for Personality and Social Psychology was written up last week in a column by John Tierney in the NY Times. This was soon followed by a dismissal of the work by Paul Krugman. The entire sequence is interesting, but for me the best part, and the one that gets to the nub of the issue, is Haidt's response to Krugman: My research, like so much research in social psychology, demonstrates that we humans are experts at using reasoning to find evidence for whatever conclusions we want to reach. We are terrible at searching for contradictory evidence. Science works because our peers are so darn good at finding that contradictory evidence for us. Social science — at least my corner of it — is broken because there is nobody to look for contradictory evidence regarding sacralized issues, particularly those related to race, gender, and class. I urged my colleagues to increase our ideological diversity not for any moral reason, but because it will make us better scientists. You do not have that problem in economics where the majority is liberal but there is a substantial and vocal minority of libertarians and conservatives. Your field is healthy, mine is not. Do you think I was wrong to call for my professional organization to seek out a modicum of ideological diversity?
  • On a related note, the IMF review of why the institution failed to warn of the global financial crisis identified a lack of intellectual diversity as being among the factors responsible (PDF): Several cognitive biases seem to have played an important role. Groupthink refers to the tendency among homogeneous, cohesive groups to consider issues only within a certain paradigm and not challenge its basic premises (Janis, 1982). The prevailing view among IMF staff—a cohesive group of macroeconomists—was that market discipline and self-regulation would be sufficient to stave off serious problems in financial institutions. They also believed that crises were unlikely to happen in advanced economies, where "sophisticated" financial markets could thrive safely with minimal regulation of a large and growing portion of the financial system. Everyone in academia has seen similar dynamics at work.
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick's book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher. (A minimal Morse encoder sketch follows these annotations.)
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live.
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore's Law. Moore was an electrical engineer, co-founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore's prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood. (The doubling arithmetic is checked in a sketch after these annotations.)
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
  • On December 8, 2010, Gleick published on The New York Review's blog an illuminating essay, "The Information Palace." It was written too late to be included in his book. It describes the historical changes of meaning of the word "information," as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning "denunciation." The history ends with the modern usage, "information fatigue," defined as "apathy, indifference or mental exhaustion arising from exposure to too much information."
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon's law of reliable communication. Shannon's law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works. (A toy error-correction sketch after these annotations shows the mechanism.)
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title "An Algebra for Theoretical Genetics."
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, "How Order Was Born of Chaos," in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • The cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past. (A two-line virial-theorem sketch follows these annotations.)
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, "The Library of Babel," by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick's book has an epilogue entitled "The Return of Meaning," expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon's decision to separate information from meaning. His central dogma, "Meaning is irrelevant," declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges's library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges's image of the human condition: "We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information."
Weiye Loh

Sex: New York City unveils condom finder for smartphones, users satisfied - National Cu...

  • The application uses GPS technology and is available for the iPhone and Android devices. The over-the-air (OTA) downloadable app has access to New York City's more than 1,000 free condom outlets. When a user launches a search for rubbers, the nearest five locations are shown, allowing for enough time to act before the mood is lost. (A minimal nearest-neighbor sketch follows these annotations.)
  • The smartphone application that locates free condoms was a huge hit for New Yorkers this Valentine's Day, users said Tuesday, Feb. 15. The program was launched by the New York City Health Department to help the turned-on find protection at a moment's notice--no matter where they are.
  • The health department has come under significant fire for its free condom initiative. Parents have complained that it urges children to experiment with sex. "We're not promoting sex," Sweeney said. "We're promoting safer sex. In New York City and around the country, adolescents and pre-adolescents have sex whether you give them condoms or not."
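The "nearest five locations" feature described above is a standard nearest-neighbor query. A minimal sketch, assuming the app keeps a plain list of latitude/longitude records (the outlets below are invented for illustration; the real database holds more than 1,000):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Hypothetical outlets; the real app would load the city's full list.
outlets = [
    ("Chelsea clinic",   40.7465, -74.0014),
    ("Harlem center",    40.8116, -73.9465),
    ("Brooklyn library", 40.6782, -73.9442),
]

def nearest(lat, lon, k=5):
    """Return the k outlets closest to the user's position."""
    return sorted(outlets,
                  key=lambda o: haversine_km(lat, lon, o[1], o[2]))[:k]

print(nearest(40.7580, -73.9855))  # a Midtown Manhattan query point
```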
Weiye Loh

If climate scientists are in it for the money, they're doing it wrong

  • Since it doesn't have a lot of commercial appeal, most of the people working in the area, and the vast majority of those publishing the scientific literature, work in academic departments or at government agencies. Penn State, home of noted climatologists Richard Alley and Michael Mann, has a strong geosciences department and, conveniently, makes the department's salary information available. It's easy to check, and find that the average tenured professor earned about $120,000 last year, and a new hire a bit less than $70,000.
  • That's a pretty healthy salary by many standards, but it's hardly a racket. Penn State appears to be on the low end of similar institutions, and is outdone by two other institutions in its own state (based on this report). But, more significantly for the question at hand, we can see that Earth Sciences faculty aren't paid especially well. Sure, they do much better than the Arts faculty, but they're somewhere in the middle of the pack, and get stomped on by professors in the Business and IT departments.
  • This is all, of course, ignoring what someone who can do the sort of data analysis or modeling of complex systems that climatologists perform might make if they went to Wall Street.
  • It's also worth pointing out what they get that money for, as exemplified by a fairly typical program announcement for NSF grants. Note that it calls for studies of past climate change and its impact on the weather. This sort of research could support the current consensus view, but it just as easily might not. And here's the thing: it's impossible to tell before the work's done. Even a study looking at the flow of carbon into and out of the atmosphere, which would seem to be destined to focus on anthropogenic climate influences, might identify a previously unknown or underestimated sink or feedback. So, even if the granting process were biased (and there's been no indication that it is), there is no way for it to prevent people from obtaining contrary data. The granting system is also set up to induce people to publish it, since a grant that doesn't produce scientific papers can make it impossible for a professor to obtain future funding.
  • Maybe the money is in the perks that come with grants, which provide for travel and lab toys. Unfortunately, there's no indication that there's lots of money out there for the taking, either from the public or private sector. For the US government, spending on climate research across 13 different agencies (from the Department of State to NASA) is tracked by the US Climate Change Science Program. The group has tracked the research budget since 1989, but not everything was brought under its umbrella until 1991. That year, according to CCSP figures, about $1.45 billion was spent on climate research (all figures are in 2007 dollars). Funding peaked back in 1995 at $2.4 billion, then bottomed out in 2006 at only $1.7 billion.
  • Funding has gone up a bit over the last couple of years, and some stimulus money went into related programs. But, in general, the trend has been a downward one for 15 years; it's not an area you'd want to go into if you were looking for a rich source of grant money. If you were, you would target medical research, for which the NIH had a $31 billion budget plus another $10 billion in stimulus money.
  • Not all of this money went to researchers anyway; part of the budget goes to NASA, and includes some of that agency's (rather pricey) hardware. For example, the Orbiting Carbon Observatory cost roughly $200 million, but failed to go into orbit; its replacement is costing another $170 million.
  • Might the private sector make up for the lack of government money? Pretty unlikely. For starters, it's tough to identify many companies that have a vested interest in the scientific consensus. Renewable energy companies would seem to be the biggest winners, but they're still relatively tiny. Neither the largest wind manufacturer (Vestas) nor the largest photovoltaic manufacturer (First Solar) appears in the Financial Times' list of the world's 500 largest companies. In contrast, there are 16 oil companies in the top 100, and they occupy the top two spots. Exxon's profits in 2010 were nearly enough to buy both Vestas and First Solar, given their market valuations in late February.
  • Climate researchers are scrambling for a shrinking slice of the government-funded pie, and the resources of the private sector are far, far more likely to go to groups that oppose their conclusions.
  • If you were paying careful attention to that last section, you would have noticed something funny: the industry that seems most likely to benefit from taking climate change seriously produces renewable energy products. However, those companies don't employ any climatologists. They probably have plenty of space for engineers, materials scientists, and maybe a quantum physicist or two, but there's not much that a photovoltaic company would do with a climatologist. Even by convincing the public of their findings—namely, climate change is real, and could have serious impacts—the scientists are not doing themselves any favors in terms of job security or alternative careers.
  • But, surely, by convincing the public, or at least the politicians, that there's something serious here, they ensure their own funding? That's arguably not true either, and the stimulus package demonstrates that nicely. The US CCSP programs, in total, got a few hundred million dollars from the stimulus. In contrast, the Department of Energy got a few billion. Carbon capture and sequestration alone received $2.4 billion, more than the entire CCSP budget.
  • The problem is that climatologists are well equipped to identify potential problems, but very poorly equipped to solve them; it would be a bit like expecting an astronomer to know how to destroy a threatening asteroid.
  • The solutions to problems related to climate change are going to come in areas like renewable energy, carbon sequestration, and efficiency measures; that's where most of the current administration's efforts have focused. None of these are areas where someone studying the climate is likely to have a whole lot to add. So, when they advocate that the public take them seriously, they're essentially asking the public to send money to someone else.
Weiye Loh

How the Internet Gets Inside Us: The New Yorker

  • N.Y.U. professor Clay Shirky—the author of “Cognitive Surplus” and many articles and blog posts proclaiming the coming of the digital millennium—is the breeziest and seemingly most self-confident
  • Shirky believes that we are on the crest of an ever-surging wave of democratized information: the Gutenberg printing press produced the Reformation, which produced the Scientific Revolution, which produced the Enlightenment, which produced the Internet, each move more liberating than the one before.
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • If ideas of democracy and freedom emerged at the end of the printing-press era, it wasn’t by some technological logic but because of parallel inventions, like the ideas of limited government and religious tolerance, very hard won from history.
  • As Andrew Pettegree shows in his fine new study, “The Book in the Renaissance,” the mainstay of the printing revolution in seventeenth-century Europe was not dissident pamphlets but royal edicts, printed by the thousand: almost all the new media of that day were working, in essence, for kinglouis.gov.
  • Even later, full-fledged totalitarian societies didn’t burn books. They burned some books, while keeping the printing presses running off such quantities that by the mid-fifties Stalin was said to have more books in print than Agatha Christie.
  • Many of the more knowing Never-Betters turn for cheer not to messy history and mixed-up politics but to psychology—to the actual expansion of our minds.
  • The argument, advanced in Andy Clark’s “Supersizing the Mind” and in Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness. We may not act better than we used to, but we sure think differently than we did.
  • Cognitive entanglement, after all, is the rule of life. My memories and my wife’s intermingle. When I can’t recall a name or a date, I don’t look it up; I just ask her. Our machines, in this way, become our substitute spouses and plug-in companions.
  • But, if cognitive entanglement exists, so does cognitive exasperation. Husbands and wives deny each other’s memories as much as they depend on them. That’s fine until it really counts (say, in divorce court). In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • Nicholas Carr, in "The Shallows," William Powers, in "Hamlet's BlackBerry," and Sherry Turkle, in "Alone Together," all bear intimate witness to a sense that the newfound land, the ever-present BlackBerry-and-instant-message world, is one whose price, paid in frayed nerves and lost reading hours and broken attention, is hardly worth the gains it gives us. "The medium does matter," Carr has written. "As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it's hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower."
  • Carr is most concerned about the way the Internet breaks down our capacity for reflective thought.
  • Powers’s reflections are more family-centered and practical. He recounts, very touchingly, stories of family life broken up by the eternal consultation of smartphones and computer monitors
  • He then surveys seven Wise Men—Plato, Thoreau, Seneca, the usual gang—who have something to tell us about solitude and the virtues of inner space, all of it sound enough, though he tends to overlook the significant point that these worthies were not entirely in favor of the kinds of liberties that we now take for granted and that made the new dispensation possible.
  • Similarly, Nicholas Carr cites Martin Heidegger for having seen, in the mid-fifties, that new technologies would break the meditational space on which Western wisdoms depend. Since Heidegger had not long before walked straight out of his own meditational space into the arms of the Nazis, it’s hard to have much nostalgia for this version of the past. One feels the same doubts when Sherry Turkle, in “Alone Together,” her touching plaint about the destruction of the old intimacy-reading culture by the new remote-connection-Internet culture, cites studies that show a dramatic decline in empathy among college students, who apparently are “far less likely to say that it is valuable to put oneself in the place of others or to try and understand their feelings.” What is to be done?
  • Among Ever-Wasers, the Harvard historian Ann Blair may be the most ambitious. In her book “Too Much to Know: Managing Scholarly Information Before the Modern Age,” she makes the case that what we’re going through is like what others went through a very long while ago. Against the cartoon history of Shirky or Tooby, Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began. She wants us to resist “trying to reduce the complex causal nexus behind the transition from Renaissance to Enlightenment to the impact of a technology or any particular set of ideas.” Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • Everyone complained about what the new information technologies were doing to our minds. Everyone said that the flood of books produced a restless, fractured attention. Everyone complained that pamphlets and poems were breaking kids’ ability to concentrate, that big good handmade books were ignored, swept aside by printed works that, as Erasmus said, “are foolish, ignorant, malignant, libelous, mad.” The reader consulting a card catalogue in a library was living a revolution as momentous, and as disorienting, as our own.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers. (A toy inverted-index sketch follows these annotations.)
  • That uniquely evil and necessary thing, the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points. In the period when many of the big, classic books that we no longer have time to read were being written, the general complaint was that there wasn't enough time to read big, classic books.
  • At any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
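The annotation above calling the book index "the search engine of its era" is almost literally true: both are inverted indexes mapping terms to the places they occur. A toy version over three made-up "pages":

```python
from collections import defaultdict

# Three hypothetical pages of text.
pages = {
    1: "the printing press and the reformation",
    2: "royal edicts printed by the thousand",
    3: "the press as an instrument of power",
}

# Build the inverted index: term -> set of page numbers, which is
# exactly what a back-of-book index (or a search engine) stores.
index = defaultdict(set)
for number, text in pages.items():
    for term in text.split():
        index[term].add(number)

print(sorted(index["press"]))        # [1, 3]
print(sorted(index["reformation"]))  # [1]
```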
Weiye Loh

Largest Protests in Wisconsin's History | the kent ridge common

  • American mainstream media (big news channels or newspapers) are not reporting these protests. (Note the Sydney Morning Herald comes in at third place on the Google News search.) A quick web-tour of Fox News, New York Times and CNN: all 3 have headlines of Japanese nuclear reactors in the wake of the earthquake. NYT had zero articles on the protests on its main page, Fox News had one at the bottom – "Wisconsin Union Fight Not Over Yet" – and CNN had one iReport linked from its main page, consisting of 10 black-and-white photos, none of them giving a bird's eye view to show the massive turnout. A web commenter had this to say:
Weiye Loh

Measuring Social Media: Who Has Access to the Firehose?

  • The question that the audience member asked — and one that we tried to touch on a bit in the panel itself — was who has access to this raw data. Twitter doesn’t comment on who has full access to its firehose, but to Weil’s credit he was at least forthcoming with some of the names, including stalwarts like Microsoft, Google and Yahoo — plus a number of smaller companies.
  • In the case of Twitter, the company offers free access to its API for developers. The API can provide access and insight into information about tweets, replies and keyword searches, but as developers who work with Twitter — or any large scale social network — know, that data isn't always 100% reliable. Unreliable data is a problem when talking about measurements and analytics, where the data is helping to influence decisions related to social media marketing strategies and allocations of resources. (A sketch of a query against the public search API of the era follows these annotations.)
  • One of the companies that has access to Twitter's data firehose is Gnip. As we discussed in November, Twitter has entered into a partnership with Gnip that allows the social data provider to resell access to the Twitter firehose. This is great on one level, because it means that businesses and services can access the data. The problem, as noted by panelist Raj Kadam, the CEO of Viralheat, is that Gnip's access can be prohibitively expensive.
  • The problem with reliable access to analytics and measurement information is by no means limited to Twitter. Facebook data is also tightly controlled. With Facebook, privacy controls built into the API are designed to prevent mass data scraping. This is absolutely the right decision. However, a reality of social media measurement is that Facebook Insights isn't always reachable and the data collected from the tool is sometimes inaccurate. It's no surprise there's a disconnect between the data that marketers and community managers want and the data that can be reliably accessed. Twitter and Facebook were both designed as tools for consumers. It's only been in the last two years that the platform ecosystem aimed at serving large brands and companies has emerged.
  • The data that companies like Twitter, Facebook and Foursquare collect are some of their most valuable assets. It isn't fair to expect a free ride or first-class access to the data by anyone who wants it. Having said that, more transparency about what data is available to services and brands is needed and necessary. We're just scratching the surface of what social media monitoring, measurement and management tools can do. To get to the next level, it's important that we all question who has access to the firehose.
  • We Need More Transparency for How to Access and Connect with Data
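For concreteness about the free tier discussed above: at the time of writing, Twitter's public search API (the v1 endpoint at search.twitter.com, since retired) returned a small JSON page of recent matches, a far cry from firehose access. A minimal sketch of such a query, with the endpoint and response fields as they stood in that era:

```python
import json
import urllib.request

# Twitter's public search API circa 2011 (v1, since retired): a small,
# rate-limited sample of recent tweets, not the firehose Gnip resells.
url = "http://search.twitter.com/search.json?q=%23superbowl&rpp=5"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for tweet in data.get("results", []):
    # "from_user" and "text" were fields of the v1 search response.
    print(tweet["from_user"], ":", tweet["text"])
```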
Weiye Loh

Executive Insight | Think Quarterly

  • It's all about making the data work. "I triangulate an objective assessment of the new technologies coming in, a subjective assessment of the public's reaction to new propositions, and then I take a punt." This 'triangulation' is hardheaded data analysis coupled with business nous. Data is something that informs his hunches – but never rules them.
  • As situations unfold in real time in Egypt or Bahrain, we can see how that affects the network, too.” Even a bill being sent by email triggers a whole chain of data events: customer gets bill, most open it; some have a query and call the centre. Forty thousand bills go out an hour but if the centre gets hit with too many queries, billings are dialled down to reduce calls in. It’s about fighting the data overload.
  • We are truly overloaded by data. Governments around the world are unleashing a deluge of numbers on their citizens. That has huge implications for big businesses with lucrative government contracts. In the UK, the government recently published every item of public spending over £25,000. Search the database for 'Vodafone' and you get 2,448 individual transactions covering millions of pounds. Information that companies once believed was commercially confidential is now routinely published – or leaked to websites like Wikileaks.
  • “Companies will become more transparent as a necessity – customers now see that as an essential part of the trust equation.” The bigger impact may come from the technology that is making access to this data a mobile phenomenon. “This industry is de-linking access to data from physical location,” he says. In a world where shoppers can check out the competition’s prices while they’re in your store, keeping control of data is no longer an option.
  • For now, managing the information out there is the priority. Access to information was once the big problem.
  • Then it quickly flipped, through technology, to data overload. “We were brought up to believe more data was good, and that’s no longer true,” he argues.
  • Laurence refuses to read reports from his product managers with more than five of the vital key performance indicators on them. "The amount of data is obscene. The managers that are going to be successful are going to be the ones who are prepared to take a knife to the amount of data… Otherwise, it's like a virus."
  • Data plus hunch equals a powerful combination. Or, as Laurence concludes: “Data on its own is impotent.”
  • "We were brought up to believe more data was good, and that's no longer true"
Weiye Loh

Science-Based Medicine » Skepticism versus nihilism about cancer and science-...

  • I'm a John Ioannidis convert, and I accept that there is a lot of medical literature that is erroneous. (Just search for Dr. Ioannidis' last name on this blog, and you'll find copious posts praising him and discussing his work.) In fact, as I've pointed out, most medical researchers instinctively know that most new scientific findings will not hold up to scrutiny, which is why we rarely accept the results of a single study, except in unusual circumstances, as being enough to change practice. I also have pointed out many times that this is not necessarily a bad thing. Replication is key to verification of scientific findings, and more often than not provocative scientific findings are not replicated. Does that mean they shouldn't be published? (A back-of-envelope sketch of the 'most findings are false' arithmetic follows these annotations.)
  • As for pseudoscience, I’m half tempted to agree with Dr. Spector, but just not in the way he thinks. Unfortunately, over the last 20 years or so, there has been an increasing amount of pseudoscience in the medical literature in the form of “complementary and alternative medicine” (CAM) studies of highly improbable remedies or even virtually impossible ones (i.e., homeopathy). However, that does not appear to be what Dr. Spector is talking about, which is why I looked up his references. The second reference is to an SI article from 2009 entitled Science and Pseudoscience in Adult Nutrition Research and Practice. There, and only there, did I find out just what it is that Dr. Spector apparently means by “pseudoscience”: By pseudoscience, I mean the use of inappropriate methods that frequently yield wrong or misleading answers for the type of question asked. In nutrition research, such methods also often misuse statistical evaluations.
  • Dr. Spector doesn't really know the difference between inadequately rigorous science and pseudoscience! Now, don't get me wrong. I know that it's not always easy to distinguish science from pseudoscience, especially at the fringes, but in general bad science has to go a lot further than Dr. Spector thinks to merit the term "pseudoscience." It is clear (to me, at least) from his articles that Dr. Spector throws the term "pseudoscience" around rather more loosely than he should, using it as a pejorative for any clinical science less rigorous than a randomized, double-blind, placebo-controlled trial that meets FDA standards for approval of a drug (his pharma background coming to the fore, no doubt). Pseudoscience, Dr. Spector. You keep using that word. I do not think it means what you think it means. Indeed, I almost get the impression from his articles that Dr. Spector views any study that doesn't reach FDA-level standards for drug approval to be pseudoscience.
  • Medical science, when it works well, tends to progress from basic science, to small pilot studies, to larger randomized studies, and then–only then–to those big, rigorous, insanely expensive randomized, double-blind, placebo-controlled trials. Dr. Spector mentions hierarchies of evidence, but he seems to fall into a false dichotomy, namely that if it’s not Level I evidence, it’s crap. The problem is, as Mark pointed out, in medicine we often don’t have Level I evidence for many questions. Indeed, for some questions, we will never have Level I evidence. Clinical medicine involves making decisions in the midst of uncertainty, sometimes extreme uncertainty.
  • Dr. Spector then proceeds to paint a picture of reckless physicians proceeding on crappy studies to pump women full of hormones. Actually, it was more than a bit more complicated than that. That was the time when I was in my medical training, and I remember the discussions we had regarding the strength (or lack thereof) of the epidemiological data and the lack of good RCTs looking at HRT. I also remember that nothing works as well to relieve menopausal symptoms as HRT, an observation we have been reminded of again since 2003, which is the year when the first big study came out implicating HRT in increasing the risk of breast cancer (more later).
  • I found a rather fascinating editorial in the New England Journal of Medicine from more than 20 years ago that discussed the state of the evidence back then with regard to estrogen and breast cancer: Evidence that estrogen increases the risk of breast cancer has been surprisingly difficult to obtain. Clinical and epidemiologic studies and studies in animals strongly suggest that endogenous estrogen plays a part in causing breast cancer. If so, exogenous estrogen should be a potent promoter of breast cancer. Although more than 20 case–control and prospective studies of the relation of breast cancer and noncontraceptive estrogen use have failed to demonstrate the expected association, relatively few women in these studies used estrogen for extended periods. Studies of the use of diethylstilbestrol and oral contraceptives suggest that a long exposure or latency may be necessary to show any association between hormone use and breast cancer. In the Swedish study, only six years of follow-up was needed to demonstrate an increased risk of breast cancer with the postmenopausal use of estradiol. It should be noted, however, that half the women in the subgroup that provided detailed data on the duration of hormone use had taken estrogen for many years before their base-line prescription status was defined. The duration of estrogen exposure in these women before the diagnosis of breast cancer was probably seriously underestimated; a short latency cannot be attributed to estradiol on the basis of these data. Other recent studies of the use of noncontraceptive estrogen suggest a slightly increased risk of breast cancer after 15 to 20 years’ use.
  • Even now, the evidence is conflicting regarding HRT and breast cancer, with the preponderance of evidence suggesting that mixed HRT (estrogen and progestin) significantly increases the risk of breast cancer, while estrogen-alone HRT very well might not increase the risk of breast cancer at all or (more likely) only very little. Indeed, I was just at a conference all day Saturday where data demonstrating this very point were discussed by one of the speakers. None of this stops Dr. Spector from categorically labeling estrogen as a "carcinogen that causes breast cancers that kill women." Maybe. Maybe not. It's actually not that clear. The problem, of course, is that, consistent with the first primary reports of WHI results, the preponderance of evidence finding health risks due to HRT have indicted the combined progestin/estrogen combinations as unsafe.
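The Ioannidis argument referenced in the first annotation has a simple arithmetic core: the chance that a statistically significant finding is actually true depends on the prior plausibility of the hypotheses being tested, not just the significance threshold. A back-of-envelope sketch with conventional values (power 0.8, alpha 0.05):

```python
def ppv(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """Positive predictive value: P(hypothesis true | significant result)."""
    true_pos  = power * prior        # true hypotheses that test positive
    false_pos = alpha * (1 - prior)  # false hypotheses that test positive
    return true_pos / (true_pos + false_pos)

# If 1 in 10 tested hypotheses is true, a "significant" result is right
# about 64% of the time; if 1 in 100 is true, under 15% of the time.
print(round(ppv(0.10), 2))  # 0.64
print(round(ppv(0.01), 2))  # 0.14
```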
Weiye Loh

Have you heard of the Koch Brothers? | the kent ridge common

  • I return to the Guardian online site expressly to search for those elusive articles on Wisconsin. The main page has none. I click on News – US, and there are none. I click on ‘Commentary is Free’- US, and find one article on protests in Ohio. I go to the New York Times online site. Earlier, on my phone, I had seen one article at the bottom of the main page on Wisconsin. By the time I managed to get on my computer to find it again however, the NYT main page was quite devoid of any articles on the protests at all. I am stumped; clearly, I have to reconfigure my daily news sources and reading diet.
  • It is not that the media is not covering the protests in Wisconsin at all – but effective media coverage in the US at least, in my view, is as much about volume as it is about substantive coverage. That week, more prime-time slots and the bulk of the US national attention were given to Charlie Sheen and his crazy antics (whatever they were about, I am still not too sure) than to Libya and the rest of the Middle East, or more significantly, to a pertinent domestic issue, the teacher protests  - not just in Wisconsin but also in other cities in the north-eastern part of the US.
  • In the March 2nd episode of The Colbert Report, it was shown that the Fox News coverage of the Wisconsin protests had re-used footage from more violent protests in California (the palm trees in the background gave Fox News away). Bill O’Reilly at Fox News had apparently issued an apology – but how many viewers who had seen the footage and believed it to be on-the-ground footage of Wisconsin would have followed-up on the report and the apology? And anyway, why portray the teacher protests as violent?
  • In this New York Times article, "Teachers Wonder, Why the Scorn?", the writer notes the often scathing comments from counter-demonstrators – "Oh you pathetic teachers, read the online comments and placards of counterdemonstrators. You are glorified baby sitters who leave work at 3 p.m. You deserve minimum wage." What had begun as an ostensibly 'economic reform' targeted at teachers' unions has gradually transmogrified into a kind of "character attack" on this section of American society – teachers are people who wage violent protests (thanks to borrowed footage from the West Coast) and they are undeserving of their economic benefits, and indeed treat these privileges as 'rights'. The 'war' is waged on multiple fronts, economic, political, social, psychological even — or at least one gets this sort of picture from reading these articles.
  • as Singaporeans with a uniquely Singaporean work ethic, we may perceive functioning ‘trade unions’ as institutions in the so-called “West” that amass large memberships and then hold the government ‘hostage’ in order to negotiate higher wages and benefits. Think of trade unions in the Singaporean context, and I think of SIA pilots. And of LKY’s various firm and stern comments on those issues. Think of trade unions and I think of strikes in France and South Korea when I was younger, and of my mum saying, “How irresponsible!” before flipping the TV channel.
  • The reason why I think the teachers’ protests should not be seen solely as an issue about trade-unions, and evaluated myopically and naively in terms of whether trade unions are ‘good’ or ‘bad’ is because the protests feature in a larger political context with the billionaire Koch brothers at the helm, financing and directing much of what has transpired in recent weeks. Or at least according to certain articles which I present here.
  • In the NYT article entitled “Billionaire Brothers’ Money Plays Role in Wisconsin Dispute”, the writer noted that Koch Industries had been “one of the biggest contributors to the election campaign of Gov. Scott Walker of Wisconsin, a Republican who has championed the proposed cuts.” Further, the president of Americans for Prosperity, a nonprofit group financed by the Koch brothers, had reportedly addressed counter-demonstrators last Saturday, saying that “the cuts were not only necessary, but they also represented the start of a much-needed nationwide move to slash public-sector union benefits,” and, in his own words, “We are going to bring fiscal sanity back to this great nation.” All this rhetoric would be more convincing to me if it weren’t funded by the same two billionaires who financially enabled Walker’s governorship.
  • I now refer you to a long piece by Jane Mayer for The New Yorker titled, “Covert Operations: The billionaire brothers who are waging a war against Obama“. According to her, “The Kochs are longtime libertarians who believe in drastically lower personal and corporate taxes, minimal social services for the needy, and much less oversight of industry—especially environmental regulation. These views dovetail with the brothers’ corporate interests.”
  • Their libertarian modus operandi involves great expense in lobbying, in political contributions and in setting up think tanks. From 2006 to 2010, Koch Industries led energy companies in political contributions; “[i]n the second quarter of 2010, David Koch was the biggest individual contributor to the Republican Governors Association, with a million-dollar donation.” More statistics, or at least those from the non-anonymous donation records, can be found on page 5 of Mayer’s piece.
  • Naturally, the Democrats also have their billionaire donors, most notably George Soros. Mayer writes that he has made “generous private contributions to various Democratic campaigns, including Obama’s.” Yet what distinguishes him from the Koch brothers here, as his spokesman Michael Vachon argued, is that Soros’s giving is transparent, and that “none of his contributions are in the service of his own economic interests.” Of course, this must be taken with a healthy dose of salt, but I will note here that in Charles Ferguson’s documentary Inside Job, which was about the 2008 financial crisis, George Soros was one of the interviewees who was not portrayed negatively. (My review of it is here.)
  • Of the Koch brothers’ political investments, what interested me more was the US’ “first libertarian thinktank”, the Cato Institute. Mayer writes, ‘When President Obama, in a 2008 speech, described the science on global warming as “beyond dispute,” the Cato Institute took out a full-page ad in the Times to contradict him. Cato’s resident scholars have relentlessly criticized political attempts to stop global warming as expensive, ineffective, and unnecessary. Ed Crane, the Cato Institute’s founder and president, told [Mayer] that “global-warming theories give the government more control of the economy.” ‘
  • K Street refers to a major street in Washington, D.C., where many think tanks, lobbyists and advocacy groups are located.
  • with recent developments such as the Citizens United case, where corporations are now ‘persons’ with no caps on political contributions, the Koch brothers are ever better positioned to take down their perceived big, bad government and carry out their ideological agenda as sketched in Mayer’s piece.
  • with much important news around the world jostling for our attention – earthquake in Japan, Middle East revolutions – the passing of an anti-union bill (which finally happened today, for better or for worse) in an American state is unlikely to make a headline able to compete with natural disasters and revolutions. Then, to quote Wisconsin Governor Scott Walker during that prank call conversation, “Sooner or later the media stops finding it [the teacher protests] interesting.”
  • What remains more puzzling for me is why the American public seems to buy into the Koch-funded libertarian rhetoric. Mayer writes: “Income inequality in America is greater than it has been since the nineteen-twenties, and since the seventies the tax rates of the wealthiest have fallen more than those of the middle class. Yet the brothers’ message has evidently resonated with voters: a recent poll found that fifty-five per cent of Americans agreed that Obama is a socialist.” I suppose that not knowing who is funding the political rhetoric makes it easier for the public to imbibe it.
Weiye Loh

Rationally Speaking: A different kind of moral relativism - 0 views

  • Prinz’s basic stance is that moral values stem from our cognitive hardware, upbringing, and social environment. These equip us with deep-seated moral emotions, but those emotions express themselves in contingent ways shaped by cultural circumstances. And while reason can help, its influence is limited: it can reshape our ethics only up to a point, and it cannot settle major differences between value systems. Therefore, it is difficult, if not impossible, to construct an objective morality that transcends emotions and circumstance.
  • As Prinz writes, in part:“No amount of reasoning can engender a moral value, because all values are, at bottom, emotional attitudes. … Reason cannot tell us which facts are morally good. Reason is evaluatively neutral. At best, reason can tell us which of our values are inconsistent, and which actions will lead to fulfillment of our goals. But, given an inconsistency, reason cannot tell us which of our conflicting values to drop or which goals to follow. If my goals come into conflict with your goals, reason tells me that I must either thwart your goals, or give up caring about mine; but reason cannot tell me to favor one choice over the other. … Moral judgments are based on emotions, and reasoning normally contributes only by helping us extrapolate from our basic values to novel cases. Reasoning can also lead us to discover that our basic values are culturally inculcated, and that might impel us to search for alternative values, but reason alone cannot tell us which values to adopt, nor can it instill new values.”
  • This moral relativism is not the absolute moral relativism attributed to bands of liberal intellectuals or to postmodernist philosophers. It presents a more serious challenge to those who argue there can be objective morality. To be sure, there is much Prinz and I agree on. At the least, we agree that morality is largely constructed by our cognition, upbringing, and social environment; and that reason has the power to synthesize and clarify our worldviews, and to help us plan for and react to life’s situations.
  • ...5 more annotations...
  • Suppose I concede to Prinz that reason cannot settle differences in moral values and sentiments. Difference of opinion doesn’t mean that there isn’t a true or rational answer. In fact, there are many reasons why our cognition, emotional reactions or previous values could be wrong or irrational — and why people would not pick up on their deficiencies. In his article, Prinz uses the case of sociopaths, who simply lack certain cognitive abilities. There are many reasons other than sociopathy why human beings can get things wrong, morally speaking, often and badly. It could be that people are unable to adopt a more objective morality because of their circumstances — from brain deficiencies to lack of access to relevant information. But, again, none of this amounts to an argument against the existence of objective morality.
  • As it turns out, Prinz’s conception of objective morality does not quite reflect the thinking of most people who believe in objective morality. He writes: “Objectivism holds that there is one true morality binding upon all of us.” This is a particular strand of moral realism, but there are many. For instance, one can judge some moral precepts as better than others, yet remain open to the fact that there are probably many different ways to establish a good society. This is a pluralistic conception of objective morality which doesn’t assume one absolute moral truth. For all that has been said, Sam Harris’ idea of a moral landscape does help illustrate this concept. Thinking in terms of better and worse morality gets us out of relativism and into an objectivist approach. The important thing to note is that one need not go all the way to absolute objectivity to work toward a rational, non-arbitrary morality.
  • even Prinz admits that “Relativism does not entail that we should tolerate murderous tyranny. When someone threatens us or our way of life, we are strongly motivated to protect ourselves.” That is, there are such things as better and worse values: the worse ones kill us, the better ones don’t. This is a very broad criterion, but it is an objective standard. Prinz is arguing for a tighter moral relativism – a sort of stripped-down objective morality that is constricted by nature, experience, and our (modest) reasoning abilities.
  • I proposed at the discussion that a more objective morality could be had with the help of a robust public discourse on the issues at hand. Prinz does not necessarily disagree. He wrote that “Many people have overlapping moral values, and one can settle debates by appeal to moral common ground.” But Prinz pointed out a couple of limitations on public discourse. For example, the agreements we reach on “moral common ground” are often exclusive of some, and abstract in content. Consider the United Nations Declaration of Human Rights, a seemingly good example of global moral agreement. Yet it was ratified by a small sample of 48 countries, and it is based on suspiciously Western-sounding language. Everyone has a right to education and health care, but — Prinz pointed out during the discussion — what level of education and health care? Still, the U.N. declaration was passed 48-0 with just 8 abstentions (Belarus, Czechoslovakia, Poland, Ukraine, USSR, Yugoslavia, South Africa and Saudi Arabia). It includes 30 articles of ethical standards agreed upon by 48 countries around the world. Such a document does give us more reason to think that public discourse can lead to significant agreement upon values.
  • Reason might not be able to arrive at moral truths, but it can push us to test and question the rationality of our values — a crucial part of the process that leads to the adoption of new or modified values. The only way to reduce disputes about morality is to try to get people on the same page about their moral goals. Given the above, this will not be easy, and perhaps we shouldn’t be too optimistic about our ability to employ reason to figure things out. But reason is still the best, and indeed the only, tool we can wield, and while it might not provide us with a truly objective morality, it is enough to save us from complete moral relativism.
Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren... - 0 views

  • 1. Distributed cocreation moves into the mainstream. In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation. In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • ...10 more annotations...
  • 3. Collaboration at scale. Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things’. The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
  • 5. Experimentation and big data. Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing.
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams. (A toy sketch of such a test-and-learn experiment appears after this list of trends.)
  • 6. Wiring for a sustainable world. Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service. Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model. Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid. The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid. The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
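As a toy illustration of trend 5 (referenced above), here is a minimal sketch, in Python, of the kind of "test and learn" experiment the trend describes: comparing the conversion rate of a control page against a variant with a two-proportion z-test. Every number below is invented for illustration, and the bare normal approximation is a simplification of what a real experimentation platform would do.

    from math import sqrt, erf

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        # Pooled conversion rate and standard error under the null hypothesis
        # that control (a) and variant (b) convert at the same rate.
        p_pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
        z = (conv_b / n_b - conv_a / n_a) / se
        # Two-sided p-value from the normal CDF, written via erf.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Hypothetical experiment: 10,000 visitors per arm.
    z, p = two_proportion_z(conv_a=520, n_a=10_000, conv_b=585, n_b=10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # adopt the variant only if p is small

On these made-up numbers the test yields z of roughly 2.0 and a p-value of roughly 0.04 — the sort of result on which a company practising rigorous, constant experimentation might base a real-time decision.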
Weiye Loh

Criticism and takedown: how review sites can defend free speech - 0 views

  • Review sites depend on user trust, and that trust is eroded when businesses are able to manipulate their own reviews. Some sites, including Yelp, view themselves as passive conduits for their users' reviews. Others take a more active role in fighting attempts to censor patients' reviews. We think the latter approach makes more sense.
  • "it's scary to be involved in litigation," Levy said. "For many ordinary people, the easiest thing is to move on with your life."
  • Review sites can protect the integrity of their review processes by actively fighting such takedown requests.
  • ...5 more annotations...
  • review sites could do more. For example, Yelp could have offered to represent Alice itself, or even filed for a declaratory judgment that Alice's post was not an infringement of copyright.
  • According to Yelp spokeswoman Stephanie Ichinose, that isn't Yelp's role. "The way we approach this space is that we're a platform," she told Ars by phone. When faced with a lawsuit threat, "some reviewers might choose to take down their reviews, others may choose to leave them intact."
  • Wendy Seltzer, founder of the Chilling Effects clearinghouse, thinks that's not good enough. "It's in Yelp's interest not to let it or its submitters be manipulated by these agreements," she said. "The reading public is going to learn that these things exist and then come to distrust the sites."
  • Transparency is another key weapon against review censorship.
  • Ars talked to Angie Hicks, founder of Angie's List, about the steps her company takes to prevent manipulation by business owners. "Angie's List is positioned very differently in the review space," she said. "We don't accept anonymous reviews. Consumers pay to be a part of Angie's List. And any time a flag is raised about a review, it's reviewed by a human." Hicks said that her company actively penalizes businesses that try to use user agreements to censor its users. "Whenever we find that a doctor is asking patients to sign this kind of agreement, we put a notification on that provider's record," she said. "We also take them out of search results."
  •  
    the mere threat of a lawsuit, even a legally frivolous one, is enough to force patients to take down negative reviews.
Weiye Loh

About - TinEye - 0 views

shared by Weiye Loh on 12 May 11
Weiye Loh

Hashtags, a New Way for Tweets - Cultural Studies - NYTimes.com - 0 views

  • hashtags have transcended the 140-characters-or-less microblogging platform, and have become a new cultural shorthand, finding their way into chat windows, e-mail and face-to-face conversations.
  • people began using hashtags to add humor, context and interior monologues to their messages — and everyday conversation. As Susan Orlean wrote in a New Yorker blog post titled “Hash,” the symbol can be “a more sophisticated, verbal version of the dread winking emoticon that tweens use to signify that they’re joking.”
  • “Because you have a hashtag embedded in a short message with real language, it starts exhibiting other characteristics of natural language, which means basically that people start playing with it and manipulating it,” said Jacob Eisenstein, a postdoctoral fellow in computational linguistics at Carnegie Mellon University. “You’ll see them used as humor, as sort of meta-commentary, where you’ll write a message and maybe you don’t really believe it, and what you really think is in the hashtag.” (A toy sketch of the extraction step behind such analysis appears after these annotations.)
  • ...2 more annotations...
  • Hashtags then began popping up outside of Twitter, in e-mails, chat windows and text messages.
  • Using a hashtag is also a way for someone to convey that they’re part of a certain scene.
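As a toy illustration of the extraction step behind the computational-linguistics work Eisenstein describes (referenced above), the sketch below, in Python, pulls hashtags out of raw messages and counts them. The sample messages are invented; a real study would of course work from a large collection of tweets.

    import re
    from collections import Counter

    HASHTAG = re.compile(r"#\w+")  # '#' followed by a run of word characters

    messages = [
        "Stuck in traffic again #mondays #thisisfine",
        "Great talk on language and social media #nlproc",
        "I believe every word of this press release #sarcasm",
    ]

    # Lower-case tags so '#Sarcasm' and '#sarcasm' are counted together.
    tags = Counter(tag.lower() for msg in messages for tag in HASHTAG.findall(msg))
    print(tags.most_common())

Frequency counts like these are only a first step; studying hashtags as meta-commentary or irony markers then requires looking at how the tag relates to the rest of the message.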
Weiye Loh

Has the Internet "hamsterized" journalism? - 0 views

  • The good news about this online convergence, the survey observes, is that it allows print journalists to produce short and longer versions of stories, the web versions of which can be continuously updated as the situation develops.
  • But, "these additional responsibilities—and having to learn the new technologies to execute them—are time-consuming, and come at a cost. In many newsrooms, old-fashioned, shoe-leather reporting—the kind where a reporter goes into the streets and talks to people or probes a government official—has been sometimes replaced by Internet searches."
  • those "rolling deadlines" in many newsrooms are increasingly resembling the rapid iteration of the proverbial exercise device invented for the aforementioned cute domestic rodent.
  •  
    the "hamsterization" of American journalism. "As newsrooms have shrunk, the job of the remaining reporters has changed. They typically face rolling deadlines as they post to their newspaper's website before, and after, writing print stories," the FCC notes in its just released report on The Information Needs of Communities.
Weiye Loh

Measuring the Unmeasurable (Internet) and Why It Matters « Gurstein's Communi... - 0 views

  • it appears that there is a quite significant hole in the National Accounting (and thus the GDP statistics) around Internet-related activities, since most of this accounting is concerned with measuring the production and distribution of tangible products and the associated services. For the most part, the available numbers don’t include many Internet-related (or “social capital”, e.g. health and education) activities, as these are linked to intangible outputs. The significance of not including social capital components in the GDP has been widely discussed elsewhere. The significance (and potential remediation) of the absence of much of the Internet-related activity was the subject of the workshop.
  • there had been a series of critiques of GDP statistics from Civil Society (CS) over the last few years—each associated with a CS movement: the Women’s Movement and the absence of measurement of “women’s (and particularly domestic) work”; the Environmental Movement and the absence of the longer-term and environmental costs of the production of the goods that the GDP so blithely counts as a measure of national economic well-being; and most recently the Sustainability Movement and the absence of measures reflective of the longer-term negative effects/costs of resource depletion and environmental degradation. What I didn’t see anywhere, apart from the background discussions to the OECD workshop itself, were critiques reflecting issues related to the Internet or ICTs.
  • the implications of the limitations in the Internet accounting went beyond a simple technical glitch and had potentially quite profound implications from a national policy and particularly a CS and community-based development perspective. The possible distortions in economic measurement arising from the absence of Internet-associated numbers in the SNA (there may be some $750 BILLION a year in “value” being generated by Internet-based search alone!) lead to the very real possibility that macro-economic analysis and related policy making may be operating on the basis of inadequate and even fallacious assumptions.
  • ...2 more annotations...
  • perhaps of greatest significance from the perspective of Civil Society and of communities is the overall absence of measurement, and thus inclusion in the economic accounting, of the value of the contributions provided to, through and on the Internet by various voluntary and not-for-profit initiatives and activities. Thus, for example, the millions of hours of labour contributed to Wikipedia, or to the development of Free or Open Source software, or to providing support for public Internet access and training, are not included as a net contribution or benefit to the economy (as measured through the GDP). Rather, this is measured as a negative effect since, as some would argue, those who are making this contribution could be using their time and talents in more “productive” (and “economically measurable”) activities. Thus, for example, a region or country that chooses to go with free or open source software as the basis for its in-school computing is not only “not contributing to economic well-being”; it is “statistically” a “cost” to the economy, since it is not allowing for expenditures on, for example, suites of Microsoft products.
  • there appears to have been no systematic attention paid to the relationship between the activities and growth of voluntary contributions to the Internet and the volume, range and depth of Internet activity, digital literacy and economic value being derived from the use of the Internet.
nora sikin

A different kind of "human flesh" search engine - 13 views

Someone told me that some guys take pictures of random pretty girls on the street, post them up on the online forum hardwarezone, and together pool their resources to identify who each girl is. =) ...

privacy

Weiye Loh

Google's Marissa Mayer Assaults Designers With Data | Designerati | Fast Company - 0 views

  • The irony was not lost on anyone in attendance at AIGA's national conference in Memphis last weekend. Marissa Mayer, "keeper" of the Google homepage since 1998, walked into a room filled with over 1,200 mostly graphic designers to talk about how well design worked at the design-dismissive Google. She even had the charts and graphs of user-tested research to prove it, she said.
  • In an almost robotic delivery, Mayer acknowledged that design was never the primary concern when developing the site. When she mentioned to founder Sergey Brin that he might want to do something to spiff up the brand-new homepage for users, his response was uncomfortably eloquent: "I don't do HTML."
  • About the now-notorious claim that she once tested 41 shades of blue? All true. Turns out Google was using two different colors of blue, one on the homepage, one on the Gmail page. To find out which was more effective so they could standardize it across the system, they tested an imperceptible range of blues between the two. The winning color, according to dozens of charts and graphs, was not too green, not too red. (A toy version of such a test is sketched at the end of this item.)
  • ...1 more annotation...
  • This kind of over-analytical testing was exactly why designer Doug Bowman made a very public break from Google earlier this year. "I had a recent debate over whether a border should be 3, 4, or 5 pixels wide and was asked to prove my case," he wrote in a post after his departure. Maybe he couldn't, but someone won a recent battle to widen the search box by a few pixels, the biggest change to the homepage in quite some time.
  •  
    I don't really know where this fits, but I find this really amusing. The article is about how Google uses data, very specific data, to determine its designs, almost to the point of being anal (to me). I wonder if this is what is meant by challenging forth nature (the human mind) to reveal itself.
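As a hypothetical sketch of the kind of test described in the "41 shades of blue" annotation above: interpolate a range of blues between two candidate colours, show each to a slice of traffic, and keep the shade with the best click-through rate. The endpoint colours and click counts below are invented, and the clicks are simulated; a real test would log live user behaviour rather than fake it.

    import random

    def blend(c1, c2, t):
        # Linearly interpolate between two RGB colours, t in [0, 1].
        return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

    # Hypothetical endpoint blues (say, a homepage blue and a mail blue).
    blue_a, blue_b = (26, 13, 171), (34, 17, 204)
    shades = [blend(blue_a, blue_b, i / 40) for i in range(41)]  # 41 variants

    random.seed(0)          # simulated clicks stand in for experiment logs
    impressions = 10_000    # identical traffic per variant, for simplicity
    clicks = {i: random.randint(480, 560) for i in range(41)}

    best = max(clicks, key=lambda i: clicks[i] / impressions)
    print("winning shade (RGB):", shades[best])

The point of the sketch is the shape of the procedure, not the numbers: the design decision is settled by measured user behaviour rather than by a designer's judgment, which is precisely what Bowman objected to.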