
Home/ New Media Ethics 2009 course/ Group items tagged Cap


Weiye Loh

Congress told that Internet data caps will discourage piracy

  • While usage-based billing and data caps are often talked about in terms of their ability to curb congestion, it's rarely suggested that making Internet access more expensive is a positive move for the content industries. But Castro has a whole host of such suggestions, drawn largely verbatim from his 2009 report (PDF) on the subject.
  • Should the US government actually fund antipiracy research? Sure. Should the US government “enlist” Internet providers to block entire websites? Sure. Should copyright holders suggest to the government which sites should go on the blocklist? Sure. Should ad networks and payment processors be forced to cut ties to such sites, even if those sites are legal in the countries where they operate? Sure.
  • Castro's original 2009 paper goes further, suggesting that deep packet inspection (DPI) be routinely deployed by ISPs in order to scan subscriber traffic for potential copyright infringements. Sound like wiretapping? Yes, though Castro has a solution if courts do crack down on the practice: "the law should be changed." After all, "piracy mitigation with DPI deals with a set of issues virtually identical to the largely noncontroversial question of virus detection and mitigation."
  • If you think that some of these approaches to antipiracy enforcement have problems, Castro knows why; he told Congress yesterday that critics of such ideas "assume that piracy is the bedrock of the Internet economy" and don't want to disrupt it, a patently absurd claim.
  •  
    Internet data caps aren't just good at stopping congestion; they can also be useful tools for curtailing piracy. That was one of the points made by Daniel Castro, an analyst at the Information Technology and Innovation Foundation (ITIF) think tank in Washington DC. Castro testified (PDF) yesterday before the House Judiciary Committee about the problem of "parasite" websites, saying that usage-based billing and monthly data caps were both good ways to discourage piracy, and that the government shouldn't do anything to stand in their way. The government should allow "pricing structures and usage caps that discourage online piracy," he wrote, which comes pretty close to suggesting that heavy data use implies piracy and should be limited.
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live.
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, cofounder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • But the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
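Chappe’s code-book scheme, described in the annotations above, is an early form of dictionary compression: a shared table maps whole phrases to a pair of optical symbols. A minimal sketch, with an invented code book (the phrases and symbol pairs are illustrative, not Chappe’s actual codes):

```python
# Toy version of Chappe's code book: a phrase found in the shared table
# compresses to just two optical symbols; anything else must be spelled
# out one symbol per character. Phrases and codes are invented examples.
CODE_BOOK = {
    "enemy in sight": (4, 17),
    "send reinforcements": (9, 3),
}

def chappe_encode(message):
    if message in CODE_BOOK:
        return list(CODE_BOOK[message])   # whole phrase in two symbols
    return [ord(c) for c in message]      # fallback: one symbol per character

print(len(chappe_encode("enemy in sight")))     # 2 symbols
print(len(chappe_encode("unlisted message")))   # 16 symbols, one per character
```

Both ends need identical code books, which is also what made the system effectively cryptographic: without the 8,000-entry table, a two-symbol message is opaque to observers along the route.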
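Dyson’s arithmetic on Moore’s Law can be checked directly: one doubling every eighteen months compounds to roughly a hundredfold per decade and about a billion (2^30) over forty-five years. A quick sketch, assuming the eighteen-month doubling period quoted above:

```python
def moore_factor(years, doubling_months=18):
    """Growth factor after `years`, with one doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

print(round(moore_factor(10)))   # ~102: roughly a hundredfold per decade
print(round(moore_factor(45)))   # 2**30 = 1073741824: about a billionfold
```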
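Shannon’s law of reliable communication, which Dyson uses to explain why Wikipedia works, can be illustrated with the simplest redundancy scheme of all: a triple-repetition code with majority-vote decoding. This is a toy stand-in for Shannon’s actual constructions, but it shows redundancy beating noise:

```python
import random

def triple_encode(bits):
    """Add redundancy: repeat every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def majority_decode(received):
    """Majority vote over each triple corrects any single flipped bit."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

random.seed(0)
message = [random.randint(0, 1) for _ in range(1000)]
sent = triple_encode(message)
# Noisy channel: independently flip each transmitted bit with probability 5%.
noisy = [b ^ (random.random() < 0.05) for b in sent]

raw_error = sum(a != b for a, b in zip(sent, noisy)) / len(sent)
decoded = majority_decode(noisy)
decoded_error = sum(a != b for a, b in zip(message, decoded)) / len(message)
print(decoded_error < raw_error)  # redundancy cuts the error rate sharply
```

Real codes (Hamming, Reed-Solomon, LDPC) achieve the same effect far more efficiently, but the principle is Shannon’s: with enough redundancy, accurate transmission is possible even over a noisy channel.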
Weiye Loh

Paul Crowley's Blog - A survey of anti-cryonics writing

  • To its supporters, cryonics offers almost eternal life. To its critics, cryonics is pseudoscience; the idea that we could freeze someone today in such a way that future technology might be able to re-animate them is nothing more than wishful thinking driven by the desire to avoid death. Many who battle nonsense dressed as science have spoken out against it: see for example Nano Nonsense and Cryonics, a 2001 article by celebrated skeptic Michael Shermer; or check the Skeptic’s Dictionary or Quackwatch entries on the subject, or for more detail read the essay Cryonics–A futile desire for everlasting life by “Invisible Flan”.
  • And of course the pro-cryonics people have written reams and reams of material such as Ben Best’s Scientific Justification of Cryonics Practice on why they think this is exactly as plausible as I might think, and going into tremendous technical detail setting out arguments for its plausibility and addressing particular difficulties. It’s almost enough to make you want to sign up on the spot. Except, of course, that plenty of totally unscientific ideas are backed by reams of scientific-sounding documents good enough to fool non-experts like me. Backed by the deep pockets of the oil industry, global warming denialism has produced thousands of convincing-sounding arguments against the scientific consensus on CO2 and AGW.
  • Nano Nonsense and Cryonics goes for the nitty-gritty right away in the opening paragraph: “To see the flaw in this system, thaw out a can of frozen strawberries. During freezing, the water within each cell expands, crystallizes, and ruptures the cell membranes. When defrosted, all the intracellular goo oozes out, turning your strawberries into runny mush. This is your brain on cryonics.” This sounds convincing, but doesn’t address what cryonicists actually claim. Ben Best, President and CEO of the Cryonics Institute, replies in the comments: “Strawberries (and mammalian tissues) are not turned to mush by freezing because water expands and crystallizes inside the cells. Water crystallizes in the extracellular space because more nucleators are found extracellularly. As water crystallizes in the extracellular space, the extracellular salt concentration increases causing cells to lose water osmotically and shrink. Ultimately the cell membranes are broken by crushing from extracellular ice and/or high extracellular salt concentration. […] Cryonics organizations use vitrification perfusion before cooling to cryogenic temperatures. With good brain perfusion, vitrification can reduce ice formation to negligible amounts.”
  • The Skeptic’s Dictionary entry is no advance. Again, it refers erroneously to a “mushy brain”. It points out that the technology to reanimate those in storage does not already exist, but provides no help for us non-experts in assessing whether it is a plausible future technology, like super-fast computers or fusion power, or whether it is as crazy as the sand-powered tank; it simply asserts baldly and to me counterintuitively that it is the latter. Again, perhaps cryonic reanimation is a sand-powered tank, but I can explain to you why a sand-powered tank is implausible if you don’t already know, and if cryonics is in the same league I’d appreciate hearing the explanation.
  • Another part of the article points out the well-known difficulties with whole-body freezing — because the focus is on achieving the best possible preservation of the brain, other parts suffer more. But the reason why the brain is the focus is that you can afford to be a lot bolder in repairing other parts of the body — unlike the brain, if my liver doesn’t survive the freezing, it can be replaced altogether.
  • Further, the article ignores one of the most promising possibilities for reanimation, that of scanning and whole-brain emulation, a route that requires some big advances in computer and scanning technology as well as our understanding of the lowest levels of the brain’s function, but which completely sidesteps any problems with repairing either damage from the freezing process or whatever it was that led to legal death.
  • Sixteen years later, it seems that hasn’t changed; in fact, as far as the issue of technical feasibility goes it is starting to look as if on all the Earth, or at least all the Internet, there is not one person who has ever taken the time to read and understand cryonics claims in any detail, still considers it pseudoscience, and has written a paper, article or even a blog post to rebut anything that cryonics advocates actually say. In fact, the best of the comments on my first blog post on the subject are already a higher standard than anything my searches have turned up.
  • I don’t have anything useful to add, I just wanted to say that I feel exactly as you do about cryonics and living forever. And I thought that this statement, “I know that I don’t know enough to judge,” shows extreme wisdom. If only people wishing to comment on global warming would apply the same test.
  • WRT global warming, the mistake people make is trying to go direct to the first-order evidence, which is much too complicated and too easy to misrepresent to hope to directly interpret unless you make it your life’s work, and even then only in a particular area. The correct thing to do is to collect second-order evidence, such as that every major scientific academy has backed the IPCC.
    • Weiye Loh
       
      First-order evidence vs second-order evidence...
  •  
    Cryonics
Weiye Loh

How wise are crowds?

  • In the past, economists trying to model the propagation of information through a population would allow any given member of the population to observe the decisions of all the other members, or of a random sampling of them. That made the models easier to deal with mathematically, but it also made them less representative of the real world.
    • Weiye Loh
       
      Random sampling is not representative
  • “What this paper does is add the important component that this process is typically happening in a social network where you can’t observe what everyone has done, nor can you randomly sample the population to find out what a random sample has done, but rather you see what your particular friends in the network have done,” says Jon Kleinberg, Tisch University Professor in the Cornell University Department of Computer Science, who was not involved in the research. “That introduces a much more complex structure to the problem, but arguably one that’s representative of what typically happens in real settings.”
    • Weiye Loh
       
      So random sampling is actually more accurate?
  • Earlier models, Kleinberg explains, indicated the danger of what economists call information cascades. “If you have a few crucial ingredients — namely, that people are making decisions in order, that they can observe the past actions of other people but they can’t know what those people actually knew — then you have the potential for information cascades to occur, in which large groups of people abandon whatever private information they have and actually, for perfectly rational reasons, follow the crowd,”
  • The MIT researchers’ paper, however, suggests that the danger of information cascades may not be as dire as it previously seemed.
  • The paper presents a mathematical model that describes attempts by members of a social network to make binary decisions — such as which of two brands of cell phone to buy — on the basis of decisions made by their neighbors. The model assumes that for all members of the population, there is a single right decision: one of the cell phones is intrinsically better than the other. But some members of the network have bad information about which is which.
  • The MIT researchers analyzed the propagation of information under two different conditions. In one case, there’s a cap on how much any one person can know about the state of the world: even if one cell phone is intrinsically better than the other, no one can determine that with 100 percent certainty. In the other case, there’s no such cap. There’s debate among economists and information theorists about which of these two conditions better reflects reality, and Kleinberg suggests that the answer may vary depending on the type of information propagating through the network. But previous models had suggested that, if there is a cap, information cascades are almost inevitable.
  • The researchers showed that if there’s no cap on certainty, an expanding social network will eventually converge on an accurate representation of the state of the world; that wasn’t a big surprise. But they also showed that in many common types of networks, even if there is a cap on certainty, convergence will still occur.
  • “People in the past have looked at it using more myopic models,” says Acemoglu. “They would be averaging type of models: so my opinion is an average of the opinions of my neighbors’.” In such a model, Acemoglu says, the views of people who are “oversampled” — who are connected with a large enough number of other people — will end up distorting the conclusions of the group as a whole.
  • What we’re doing is looking at it in a much more game-theoretic manner, where individuals are realizing where the information comes from. So there will be some correction factor,” Acemoglu says. “If I’m seeing you, your action, and I’m seeing Munzer’s action, and I also know that there is some probability that you might have observed Munzer, then I discount his opinion appropriately, because I know that I don’t want to overweight it. And that’s the reason why, even though you have these influential agents — it might be that Munzer is everywhere, and everybody observes him — that still doesn’t create a herd on his opinion.”
  • the new paper leaves a few salient questions unanswered, such as how quickly the network will converge on the correct answer, and what happens when the model of agents’ knowledge becomes more complex.
  • the MIT researchers begin to address both questions. One paper examines rate of convergence, although Dahleh and Acemoglu note that its results are “somewhat weaker” than those about the conditions for convergence. Another paper examines cases in which different agents make different decisions given the same information: some people might prefer one type of cell phone, others another. In such cases, “if you know the percentage of people that are of one type, it’s enough — at least in certain networks — to guarantee learning,” Dahleh says. “I don’t need to know, for every individual, whether they’re for it or against it; I just need to know that one-third of the people are for it, and two-thirds are against it.” For instance, he says, if you notice that a Chinese restaurant in your neighborhood is always half-empty, and a nearby Indian restaurant is always crowded, then information about what percentages of people prefer Chinese or Indian food will tell you which restaurant, if either, is of above-average or below-average quality.
  •  
    By melding economics and engineering, researchers show that as social networks get larger, they usually get better at sorting fact from fiction.
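The sequential-decision setup the excerpt describes can be sketched in a few lines. This is a minimal simulation in the spirit of the classic cascade models (Bikhchandani-style counting heuristic), not the MIT researchers' actual model: each agent gets a noisy private signal that is correct with probability below 1 (the "cap on certainty"), observes predecessors' actions but not their signals, and rationally herds once the observed actions are lopsided enough. All parameter values are illustrative assumptions.

```python
import random

def simulate_cascade(n_agents=100, p_correct=0.7, seed=0):
    """Sequential binary choice (phone A vs. B) where A is truly better.

    Each agent receives a private signal that is right with probability
    p_correct (< 1: the 'cap on certainty'), sees predecessors' actions
    but not their signals, and follows the standard counting heuristic:
    once prior actions favour one option by 2 or more, the agent's own
    signal can no longer tip the balance, so it is rational to discard
    it and join the crowd -- an information cascade.
    """
    rng = random.Random(seed)
    actions = []
    for _ in range(n_agents):
        signal = 'A' if rng.random() < p_correct else 'B'  # noisy private info
        lead = actions.count('A') - actions.count('B')
        if lead >= 2:
            actions.append('A')     # up-cascade: private signal discarded
        elif lead <= -2:
            actions.append('B')     # down-cascade: herd on the worse phone
        else:
            actions.append(signal)  # history is inconclusive; use own signal
    return actions
```

Running this over many seeds shows the phenomenon the earlier models warned about: with a cap on certainty, a handful of early wrong signals can lock the whole network onto the inferior choice, because once a cascade starts it is absorbing in this model.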
Weiye Loh

Roger Pielke Jr.'s Blog: Science Impact - 0 views

  • The Guardian has a blog post up by three neuroscientists decrying the state of hype in the media related to their field, which is fueled in part by their colleagues seeking "impact." 
  • Anyone who has followed recent media reports that electrical brain stimulation "sparks bright ideas" or "unshackles the genius within" could be forgiven for believing that we stand on the frontier of a brave new world. As James Gallagher of the BBC put it, "Are we entering the era of the thinking cap – a device to supercharge our brains?" The answer, we would suggest, is a categorical no. Such speculations begin and end in the colourful realm of science fiction. But we are also in danger of entering the era of the "neuro-myth", where neuroscientists sensationalise and distort their own findings in the name of publicity. The tendency for scientists to over-egg the cake when dealing with the media is nothing new, but recent examples are striking in their disregard for accurate reporting to the public. We believe the media and academic community share a collective responsibility to prevent pseudoscience from masquerading as neuroscience.
  • They identify an ... unacceptable gulf between, on the one hand, the evidence-bound conclusions reached in peer-reviewed scientific journals, and on the other, the heavy spin applied by scientists to achieve publicity in the media. Are we as neuroscientists so unskilled at communicating with the public, or so low in our estimation of the public's intelligence, that we see no alternative but to mislead and exaggerate?
  • ...1 more annotation...
  • Somewhere down the line, achieving an impact in the media seems to have become the goal in itself, rather than what it should be: a way to inform and engage the public with clarity and objectivity, without bias or prejudice. Our obsession with impact is not one-sided. The craving of scientists for publicity is fuelled by a hurried and unquestioning media, an academic community that disproportionately rewards publication in "high impact" journals such as Nature, and by research councils that emphasise the importance of achieving "impact" while at the same time delivering funding cuts. Academics are now pushed to attend media training courses, instructed about "pathways to impact", required to include detailed "impact summaries" when applying for grant funding, and constantly reminded about the importance of media engagement to further their careers. Yet where in all of this strategising and careerism is it made clear why public engagement is important? Where is it emphasised that the most crucial consideration in our interactions with the media is that we are accurate, honest and open about the limitations of our research?
  •  
    The Guardian has a blog post up by three neuroscientists decrying the state of hype in the media related to their field, which is fueled in part by their colleagues seeking "impact." 
Weiye Loh

Stock and flow « Snarkmarket - 0 views

  • There are two kinds of quantities in the world. Stock is a static value: money in the bank, or trees in the forest. Flow is a rate of change: fifteen dollars an hour, or three-thousand toothpicks a day.
  • stock and flow is the master metaphor for media today. Here’s what I mean: Flow is the feed. It’s the posts and the tweets. It’s the stream of daily and sub-daily updates that remind people that you exist. Stock is the durable stuff. It’s the content you produce that’s as interesting in two months (or two years) as it is today. It’s what people discover via search. It’s what spreads slowly but surely, building fans over time.
  • I feel like flow is ascendant these days, for obvious reasons—but we neglect stock at our own peril.
  • ...9 more annotations...
  • Flow is a treadmill, and you can’t spend all of your time running on the treadmill. Well, you can. But then one day you’ll get off and look around and go: Oh man. I’ve got nothing here.
  • But I’m not saying you should ignore flow!
  • this is no time to hole up and work in isolation, emerging after long months or years with your perfectly-polished opus. Everybody will go: huh?
  • if you don’t have flow to plug your new fans into, you’re suffering a huge (here it is!) opportunity cost. You’ll have to find them all again next time you emerge from your cave.
  • we all got really good at flow, really fast. But flow is ephemeral. Stock sticks around. Stock is capital. Stock is protein.
  • And the real magic trick in 2010 is to put them both together. To keep the ball bouncing with your flow—to maintain that open channel of communication—while you work on some kick-ass stock in the background.
  • all these super-successful artists and media people today who don’t really think about flow. Like, Wes Anderson
  • the secret is that somebody else does his flow for him. I mean, what are PR and advertising? Flow, bought and paid for.
  • Today I’m still always asking myself: Is this stock? Is this flow? How’s my mix? Do I have enough of both?
  •  
    flow is ascendant these days, for obvious reasons—but we neglect stock at our own peril.
Weiye Loh

Is 'More Efficient' Always Better? - NYTimes.com - 1 views

  • Efficiency is the seemingly value-free standard economists use when they make the case for particular policies — say, free trade, more liberal immigration policies, cap-and-trade policies on environmental pollution, the all-volunteer army or congestion tolls. The concept of efficiency is used to justify a reliance on free-market principles, rather than the government, to organize the health care sector, or to make recommendations on taxation, government spending and monetary policy. All of these public policies have one thing in common: They create winners and losers among members of society.
  • can it be said that a more efficient resource allocation is better than a less efficient one, given the changes in the distribution of welfare among members of society that these allocations imply?
  • Suppose a restructuring of the economy has the effect of increasing the growth of average gross domestic product per capita, but that the benefits of that growth accrue disproportionately to a minority of citizens, while others are worse off as a result, as appears to have been the case in the United States in the last several decades. Can economists judge this to be a good thing?
  • ...1 more annotation...
  • Indeed, how useful is efficiency as a normative guide to public policy? Can economists legitimately base their advocacy of particular policies on that criterion? That advocacy, especially when supported by mathematical notation and complex graphs, may look like economic science. But when greater efficiency is accompanied by a redistribution of economic privilege in society, subjective ethical dimensions inevitably get baked into the economist’s recommendations.
  •  
    Is 'More Efficient' Always Better?
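The tension the article describes, between an aggregate efficiency gain and a worsened distribution, can be made concrete with a toy example. The household figures below are invented purely for illustration: a restructuring raises total and average output while leaving most households worse off, so the "efficiency" verdict and the "typical household" verdict disagree.

```python
# Hypothetical annual welfare (in $k) for five households, before and
# after a restructuring. These numbers are made up for illustration.
before = [40, 40, 40, 40, 40]   # total 200
after  = [30, 30, 30, 30, 120]  # total 240: "more efficient" in aggregate

def total(xs):
    return sum(xs)

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    return s[len(s) // 2]

# The aggregate verdict and the distributional verdict point opposite ways:
assert total(after) > total(before)    # average GDP per capita rises...
assert median(after) < median(before)  # ...while the typical household falls
losers = sum(b > a for b, a in zip(before, after))
assert losers == 4                     # winners *could* compensate losers,
                                       # but nothing guarantees they will
```

This is the standard Kaldor-Hicks point: calling the second allocation "better" on efficiency grounds alone quietly bakes in an ethical judgment about whose gains and losses count, which is exactly the article's complaint.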
Weiye Loh

Skepticblog » Further Thoughts on the Ethics of Skepticism - 0 views

  • My recent post “The War Over ‘Nice’” (describing the blogosphere’s reaction to Phil Plait’s “Don’t Be a Dick” speech) has topped out at more than 200 comments.
  • Many readers appear to object (some strenuously) to the very ideas of discussing best practices, seeking evidence of efficacy for skeptical outreach, matching strategies to goals, or encouraging some methods over others. Some seem to express anger that a discussion of best practices would be attempted at all. 
  • No Right or Wrong Way? The milder forms of these objections run along these lines: “Everyone should do their own thing.” “Skepticism needs all kinds of approaches.” “There’s no right or wrong way to do skepticism.” “Why are we wasting time on these abstract meta-conversations?”
  • ...12 more annotations...
  • More critical, in my opinion, is the implication that skeptical research and communication happens in an ethical vacuum. That just isn’t true. Indeed, it is dangerous for a field which promotes and attacks medical treatments, accuses people of crimes, opines about law enforcement practices, offers consumer advice, and undertakes educational projects to pretend that it is free from ethical implications — or obligations.
  • there is no monolithic “one true way to do skepticism.” No, the skeptical world does not break down to nice skeptics who get everything right, and mean skeptics who get everything wrong. (I’m reminded of a quote: “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”) No one has all the answers. Certainly I don’t, and neither does Phil Plait. Nor has anyone actually proposed a uniform, lockstep approach to skepticism. (No one has any ability to enforce such a thing, in any event.)
  • However, none of that implies that all approaches to skepticism are equally valid, useful, or good. As in other fields, various skeptical practices do more or less good, cause greater or lesser harm, or generate various combinations of both at the same time. For that reason, skeptics should strive to find ways to talk seriously about the practices and the ethics of our field. Skepticism has blossomed into something that touches a lot of lives — and yet it is an emerging field, only starting to come into its potential. We need to be able to talk about that potential, and about the pitfalls too.
  • All of the fields from which skepticism borrows (such as medicine, education, psychology, journalism, history, and even arts like stage magic and graphic design) have their own standards of professional ethics. In some cases those ethics are well-explored professional fields in their own right (consider medical ethics, a field with its own academic journals and doctoral programs). In other cases those ethical guidelines are contested, informal, vague, or honored more in the breach. But in every case, there are serious conversations about the ethical implications of professional practice, because those practices impact people’s lives. Why would skepticism be any different?
  • Skeptrack speaker Barbara Drescher (a cognitive psychologist who teaches research methodology) described the complexity of research ethics in her own field. Imagine, she said, that a psychologist were to ask research subjects a question like, “Do your parents like the color red?” Asking this may seem trivial and harmless, but it is nonetheless an ethical trade-off with associated risks (however small) that psychological researchers are ethically obliged to confront. What harm might that question cause if a research subject suffers from erythrophobia, or has a sick parent — or saw their parents stabbed to death?
  • When skeptics undertake scientific, historical, or journalistic research, we should (I argue) consider ourselves bound by some sort of research ethics. For now, we’ll ignore the deeper, detailed question of what exactly that looks like in practical terms (when can skeptics go undercover or lie to get information? how much research does due diligence require? and so on). I’d ask only that we agree on the principle that skeptical research is not an ethical free-for-all.
  • when skeptics communicate with the public, we take on further ethical responsibilities — as do doctors, journalists, and teachers. We all accept that doctors are obliged to follow some sort of ethical code, not only of due diligence and standard of care, but also in their confidentiality, manner, and the factual information they disclose to patients. A sentence that communicates a diagnosis, prescription, or piece of medical advice (“you have cancer” or “undertake this treatment”) is not a contextless statement, but a weighty, risky, ethically serious undertaking that affects people’s lives. It matters what doctors say, and it matters how they say it.
  • Grassroots Ethics It happens that skepticism is my professional field. It’s natural that I should feel bound by the central concerns of that field. How can we gain reliable knowledge about weird things? How can we communicate that knowledge effectively? And, how can we pursue that practice ethically?
  • At the same time, most active skeptics are not professionals. To what extent should grassroots skeptics feel obligated to consider the ethics of skeptical activism? Consider my own status as a medical amateur. I almost need super-caps-lock to explain how much I am not a doctor. My medical training began and ended with a couple First Aid courses (and those way back in the day). But during those short courses, the instructors drummed into us the ethical considerations of our minimal training. When are we obligated to perform first aid? When are we ethically barred from giving aid? What if the injured party is unconscious or delirious? What if we accidentally kill or injure someone in our effort to give aid? Should we risk exposure to blood-borne illnesses? And so on. In a medical context, ethics are determined less by professional status, and more by the harm we can cause or prevent by our actions.
  • police officers are barred from perjury, and journalists from libel — and so are the lay public. We expect schoolteachers not to discuss age-inappropriate topics with our young children, or to persuade our children to adopt their religion; when we babysit for a neighbor, we consider ourselves bound by similar rules. I would argue that grassroots skeptics take on an ethical burden as soon as they speak out on medical matters, legal matters, or other matters of fact, whether from platforms as large as network television, or as small as a dinner party. The size of that burden must depend somewhat on the scale of the risks: the number of people reached, the certainty expressed, the topics tackled.
  • tu-quoque argument.
  • How much time are skeptics going to waste, arguing in a circular firing squad about each other’s free speech? Like it or not, there will always be confrontational people. You aren’t going to get a group of people as varied as skeptics are, and make them all agree to “be nice”. It’s a pipe dream, and a waste of time.
  •  
    FURTHER THOUGHTS ON THE ETHICS OF SKEPTICISM
Weiye Loh

Why Research Alone Won't Fix the Climate - NYTimes.com - 0 views

  •  
    Why Research Alone Won't Fix the Climate By DAVID LEONHARDT
Weiye Loh

Climate Change: Study Says Dire Warnings Fuel Skepticism - TIME - 0 views

  • I had the chance to sift through TIME's decades of environment coverage. I came to two conclusions: First, we were writing stories about virtually the same subjects 40 years ago as we do now. (Air pollution, endangered species, the polluted oceans, dwindling natural resources.) Second, our coverage of climate change has been really scary — by which I mean, we've emphasized the catastrophic threats of global warming in dire language.
  • Scientists were telling us that global warming really had the potential to wreck the future of the planet, and we wanted to get that message across to readers — even if it meant scaring the hell out of them.
  • According to forthcoming research by the Berkeley psychologists Robb Willer and Matthew Feinberg, when people are shown scientific evidence or news stories on climate change that emphasize the most negative aspects of warming — extinguished species, melting ice caps, serial natural disasters — they are actually more likely to dismiss or deny what they're seeing. Far from scaring people into taking action on climate change, such messages seem to scare them straight into denial.
  • ...4 more annotations...
  • Willer and Feinberg tested participants' belief in global warming, and then their belief in what's called the just-world theory, which holds that life is generally fair and predictable. The subjects were then randomly assigned to read one of two newspaper-style articles. Both pieces were identical through the first four paragraphs, providing basic scientific information about climate change, but they differed in their conclusions, with one article detailing the possibly apocalyptic consequences of climate change, and the other ending with a more upbeat message about potential solutions to global warming.
  • participants given the doomsday articles came out more skeptical of climate change, while those who read the bright-side pieces came out less skeptical. The increase in skepticism was especially acute among subjects who'd scored high on the just-world scale, perhaps because the worst victims of global warming — the poor of the developing world, future generations, blameless polar bears — are the ones least responsible for it. Such unjust things couldn't possibly occur, and so the predictions can't be true. The results, Willer and Feinberg wrote, "demonstrate how dire messages warning of the severity of global warming and its presumed dangers can backfire ... by contradicting individuals' deeply held beliefs that the world is fundamentally just."
  • a climate scientist armed with data might argue that worldviews should be trumped by facts. But there's no denying that climate skepticism is on the rise
  • politicians — mostly on the right — have aggressively pushed the climate-change-is-a-hoax trope. The Climategate controversy of a year ago certainly might have played a role, too, though the steady decline in belief began well before those hacked e-mails were published. Still, the fact remains that if the point of the frightening images in global-warming documentaries like An Inconvenient Truth was to push audiences to act on climate change, they've been a failure theoretically and practically.
Weiye Loh

The Dawn of Paid Search Without Keywords - Search Engine Watch (SEW) - 0 views

  • This year will fundamentally change how we think about and buy access to prospects, namely keywords. It is the dawn of paid search without keywords.
  • Google's search results were dominated by the "10 blue links" -- simple headlines, descriptions, and URLs to entice and satisfy searchers. Until it wasn't. Universal search wove in images, video, and real-time updates.
  • For most of its history, too, AdWords has been presented in a text format even as the search results morphed into a multimedia experience. The result is that attention was pulled towards organic results at the expense of ads.
  • ...8 more annotations...
  • Google countered that trend with their big push for universal paid search in 2010. It was, perhaps, the most radical evolution to the paid search results since the introduction of Quality Score. Consider the changes:
  • New ad formats: Text is no longer the exclusive medium for advertising on Google. No format exemplifies that more than Product List Ads (and their cousin, Product Extensions). There is no headline, copy or display URL. Instead, it's just a product image, name, price and vendor slotted in the highest positions on the right side. What's more, you don't choose keywords. We also saw display creep into image search results with Image Search Ads and traditional display ads.
  • New calls-to-action: The way you satisfy your search with advertising on Google has evolved as well. Most notably, through the introduction of click-to-call as an option for mobile search ads (as well as the limited release AdWords call metrics). Similarly, more of the site experience is being pulled into the search results. The beta Comparison Ads creates a marketplace for loan and credit card comparison all on Google. The call to action is comparison and filtering, not just clicking on an ad.
  • New buying/monetization models: Cost-per-click (CPC) and cost-per-thousand-impressions (CPM) are no longer the only ways you can buy. Comparison Ads are sold on a cost-per-lead basis. Product listing ads are sold on a cost-per-acquisition (CPA) basis for some advertisers (CPC for most).
  • New display targeting options: Remarketing (a.k.a. retargeting) brought highly focused display buys to the AdWords interface. Specifically, the ability to only show display ads to segments of people who visit your site, in many cases after clicking on a text ad.
  • New advertising automation: In a move that radically simplifies advertising for small businesses, Google began testing Google Boost. It involves no keyword research and no bidding. If you have a Google Places page, you can even do it without a website. It's virtually hands-off advertising for SMBs.
  • Of those changes, Google Product Listing Ads and Google Boost offer the best glimpse into the future of paid search without keywords. They're notable for dramatic departures in every step of how you advertise on Google: Targeting: Automated targeting toward certain audiences as determined by Google vs. keywords chosen by the advertiser. Ads: Product listing ads bring a product search like result in the top position in the right column and Boost promotes a map-like result in a preferred position above organic results. Pricing: CPA and monthly budget caps replace daily budgets and CPC bids.
  • For Google to continue their pace of growth, they need two things: Another line of business to complement AdWords, and display advertising is it. They've pushed even more aggressively into the channel, most notably with the acquisition of Invite Media, a demand side platform. To remove obstacles to profit and incremental growth within AdWords. These barriers are primarily how wide advertisers target and how much they pay for the people they reach (see: "Why Google Wants to Eliminate Bidding In Exchange for Your Profits").
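The buying models listed above (CPM, CPC, CPA) become directly comparable once you normalize them to an expected cost per acquisition. The conversion below is standard arithmetic, but the click-through and conversion rates are illustrative assumptions, not Google benchmarks.

```python
def effective_cpa(model, price, ctr=0.02, cvr=0.05):
    """Convert a CPM, CPC, or CPA price into expected cost per acquisition.

    ctr: clicks per impression; cvr: acquisitions (conversions) per click.
    Both rates here are made-up illustrative values.
    """
    if model == 'CPM':   # price is per 1,000 impressions
        return price / 1000 / (ctr * cvr)
    if model == 'CPC':   # price is per click
        return price / cvr
    if model == 'CPA':   # price is already per acquisition
        return price
    raise ValueError(f"unknown model: {model}")

# With these assumed rates, a $5 CPM, a $0.50 CPC, and an $8 CPA bid
# imply very different expected costs per customer:
for model, price in [('CPM', 5.0), ('CPC', 0.50), ('CPA', 8.0)]:
    print(f"{model}: ${effective_cpa(model, price):.2f} per acquisition")
```

Seen this way, CPA-priced products like Comparison Ads shift the conversion-rate risk from the advertiser to Google, which is part of why they need no keywords or bids from the advertiser at all.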
Weiye Loh

LRB · Jim Holt · Smarter, Happier, More Productive - 0 views

  • There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.
  • The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.
  • But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.
  • ...6 more annotations...
  • Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic.
  • Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser.
  • Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.
  • The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.
  • It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise in avoiding it altogether.
  • empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’
Weiye Loh

Roger Pielke Jr.'s Blog: Climate Science Turf Wars and Carbon Dioxide Myopia - 0 views

  • Presumably by "climate effect" Caldeira means the long-term consequences of human actions on the global climate system -- that is, climate change. Going unmentioned by Caldeira is the fact that there are also short-term climate effects, and among those, the direct health effects of non-carbon dioxide emissions on human health and agriculture.
  • There are a host of reasons to worry about the climatic effects of  non-CO2 forcings beyond long-term climate change.  Shindell explains this point: There is also a value judgement inherent in any suggestion that CO2 is the only real forcer that matters or that steps to reduce soot and ozone are ‘almost meaningless’. Based on CO2’s long residence time in the atmosphere, it dominates long-term committed forcing. However, climate changes are already happening and those alive today are feeling the effects now and will continue to feel them during the next few decades, but they will not be around in the 22nd century. These climate changes have significant impacts. When rainfall patterns shift, livelihoods in developing countries can be especially hard hit. I suspect that virtually all farmers in Africa and Asia are more concerned with climate change over the next 40 years than with those after 2050. Of course they worry about the future of their children and their children’s children, but providing for their families now is a higher priority. . . However, saying CO2 is the only thing that matters implies that the near-term climate impacts I’ve just outlined have no value at all, which I don’t agree with. What’s really meant in a comment like “if one’s goal is to limit climate change, one would always be better off spending the money on immediate reduction of CO2 emissions’ is ‘if one’s goal is limiting LONG-TERM climate change”. That’s a worthwhile goal, but not the only goal.
  • The UNEP report notes that action on carbon dioxide is not going to have a discernible influence on the climate system until perhaps mid-century (see the figure at the top of this post).  Consequently, action on non-carbon dioxide forcings is very much independent of action on carbon dioxide -- they address climatic causes and consequences on very different timescales, and thus probably should not even be conflated to begin with. UNEP writes: "In essence, the near-term CH4 and BC measures examined in this Assessment are effectively decoupled from the CO2 measures both in that they target different source sectors and in that their impacts on climate change take place over different timescales." Advocates for action on carbon dioxide are quick to frame discussions narrowly in terms of long-term climate change and the primary role of carbon dioxide. Indeed, accumulating carbon dioxide is a very important issue (consider that my focus in The Climate Fix is carbon dioxide, but I also emphasize that the carbon dioxide issue is not the same thing as climate change), but it is not the only issue.
  • ...2 more annotations...
  • perhaps the difference in opinions on this subject expressed by Shindell and Caldeira is nothing more than an academic turf battle over what it means for policy makers to focus on "climate" -- with one wanting the term (and justifications for action invoking that term) to be reserved for long-term climate issues centered on carbon dioxide and the other focused on a broader definition of climate and its impacts.  If so, then it is important to realize that such turf battles have practical consequences. Shindell's breath of fresh air gets the last word with his explanation why it is that we must consider long- and short- term climate impacts at the same time, and how we balance them will reflect a host of non-scientific considerations: So rather than set one against the other, I’d view this as analogous to research on childhood leukemia versus Alzheimer’s. If you’re an advocate for child’s health, you may care more about the former, and if you’re a retiree you might care more about the latter. One could argue about which is most worthy based on number of cases, years of life lost, etc., but in the end it’s clear that both diseases are worth combating and any ranking of one over the other is a value judgement. Similarly, there is no scientific basis on which to decide which impacts of climate change are most important, and we can only conclude that both controls are worthwhile. The UNEP/WMO Assessment provides clear information on the benefits of short-lived forcer reductions so that decision-makers, and society at large, can decide how best to use limited resources.
  • "If we eliminated emissions of methane and black carbon, but did nothing about carbon dioxide we would have delayed . . ." This presupposes that CO2 emissions can be capped at current levels without economic devastation, or that immediate economic devastation is warranted.
  •  
    Over at Dot Earth Andy Revkin has posted up two illuminating comments from climate scientists -- one from NASA's Drew Shindell and a response to it from Stanford's Ken Caldeira. Shindell's comment focuses on the impacts of action to mitigate the effects of black carbon, tropospheric ozone and other non-carbon dioxide human climate forcings, and comes from his perspective as lead author of an excellent UNEP report on the subject that is just out (here in PDF and the Economist has an excellent article here).  (Shindell's comment was apparently in response to an earlier Dot Earth comment by Raymond Pierrehumbert.) In contrast, Caldeira invokes long-term climate change to defend the importance of focusing on carbon dioxide:
Weiye Loh

Have you heard of the Koch Brothers? | the kent ridge common - 0 views

  • I return to the Guardian online site expressly to search for those elusive articles on Wisconsin. The main page has none. I click on News – US, and there are none. I click on ‘Commentary is Free’- US, and find one article on protests in Ohio. I go to the New York Times online site. Earlier, on my phone, I had seen one article at the bottom of the main page on Wisconsin. By the time I managed to get on my computer to find it again however, the NYT main page was quite devoid of any articles on the protests at all. I am stumped; clearly, I have to reconfigure my daily news sources and reading diet.
  • It is not that the media is not covering the protests in Wisconsin at all – but effective media coverage in the US at least, in my view, is as much about volume as it is about substantive coverage. That week, more prime-time slots and the bulk of the US national attention were given to Charlie Sheen and his crazy antics (whatever they were about, I am still not too sure) than to Libya and the rest of the Middle East, or more significantly, to a pertinent domestic issue, the teacher protests  - not just in Wisconsin but also in other cities in the north-eastern part of the US.
  • In the March 2nd episode of The Colbert Report, it was shown that the Fox News coverage of the Wisconsin protests had re-used footage from more violent protests in California (the palm trees in the background gave Fox News away). Bill O’Reilly at Fox News had apparently issued an apology – but how many viewers who had seen the footage and believed it to be on-the-ground footage of Wisconsin would have followed-up on the report and the apology? And anyway, why portray the teacher protests as violent?
  • ...12 more annotations...
  • In this New York Times’ article, “Teachers Wonder, Why the scorn?“, the writer notes the often scathing comments from counter-demonstrators – “Oh you pathetic teachers, read the online comments and placards of counterdemonstrators. You are glorified baby sitters who leave work at 3 p.m. You deserve minimum wage.” What had begun as an ostensibly ‘economic reform’ targeted at teachers’ unions has gradually transmogrified into a kind of “character attack” to this section of American society – teachers are people who wage violent protests (thanks to borrowed footage from the West Coast) and they are undeserving of their economic benefits, and indeed treat these privileges as ‘rights’. The ‘war’ is waged on multiple fronts, economic, political, social, psychological even — or at least one gets this sort of picture from reading these articles.
  • as Singaporeans with a uniquely Singaporean work ethic, we may perceive functioning ‘trade unions’ as those institutions in the so-called “West” where they amass lots of membership, then hold the government ‘hostage’ in order to negotiate higher wages and benefits. Think of trade unions in the Singaporean context, and I think of SIA pilots. And of LKY’s various firm and stern comments on those issues. Think of trade unions and I think of strikes in France, in South Korea, when I was younger, and of my mum saying, “How irresponsible!” before flipping the TV channel.
  • The reason why I think the teachers’ protests should not be seen solely as an issue about trade-unions, and evaluated myopically and naively in terms of whether trade unions are ‘good’ or ‘bad’ is because the protests feature in a larger political context with the billionaire Koch brothers at the helm, financing and directing much of what has transpired in recent weeks. Or at least according to certain articles which I present here.
  • In this NYT article entitled “Billionaire Brothers’ Money Plays Role in Wisconsin Dispute“, the writer noted that Koch Industries had been “one of the biggest contributors to the election campaign of Gov. Scott Walker of Wisconsin, a Republican who has championed the proposed cuts.” Further, the president of Americans for Prosperity, a nonprofit group financed by the Koch brothers, had reportedly addressed counter-demonstrators last Saturday saying that “the cuts were not only necessary, but they also represented the start of a much-needed nationwide move to slash public-sector union benefits.” and in his own words -“ ‘We are going to bring fiscal sanity back to this great nation’ ”. All this rhetoric would be more convincing to me if they weren’t funded by the same two billionaires who financially enabled Walker’s governorship.
  • I now refer you to a long piece by Jane Mayer for The New Yorker titled, “Covert Operations: The billionaire brothers who are waging a war against Obama“. According to her, “The Kochs are longtime libertarians who believe in drastically lower personal and corporate taxes, minimal social services for the needy, and much less oversight of industry—especially environmental regulation. These views dovetail with the brothers’ corporate interests.”
  • Their libertarian modus operandi involves great expenses in lobbying, in political contributions and in setting up think tanks. From 2006-2010, Koch Industries have led energy companies in political contributions; “[i]n the second quarter of 2010, David Koch was the biggest individual contributor to the Republican Governors Association, with a million-dollar donation.” More statistics, or at least those of the non-anonymous donation records, can be found on page 5 of Mayer’s piece.
  • Naturally, the Democrats also have their billionaire donors, most notably in the form of George Soros. Mayer writes that he has made "generous private contributions to various Democratic campaigns, including Obama's." Yet what distinguishes him from the Koch brothers here is, as Michael Vachon, his spokesman, argued, that Soros's giving is transparent, and that "none of his contributions are in the service of his own economic interests." Of course, this must be taken with a healthy dose of salt, but I will note here that in Charles Ferguson's documentary Inside Job, which was about the 2008 financial crisis, George Soros was one of those interviewed who was not portrayed negatively. (My review of it is here.)
  • Of the Koch brothers’ political investments, what interested me more was the US’ “first libertarian thinktank”, the Cato Institute. Mayer writes, ‘When President Obama, in a 2008 speech, described the science on global warming as “beyond dispute,” the Cato Institute took out a full-page ad in the Times to contradict him. Cato’s resident scholars have relentlessly criticized political attempts to stop global warming as expensive, ineffective, and unnecessary. Ed Crane, the Cato Institute’s founder and president, told [Mayer] that “global-warming theories give the government more control of the economy.” ‘
  • K Street refers to a major street in Washington, D.C. where major think tanks, lobbyists and advocacy groups are located.
  • with such recent developments as the Citizens United case, where corporations are now 'persons' and face no caps on political contributions, the Koch brothers are ever better-positioned to take down their perceived big, bad government and carry out their ideological agenda as sketched in Mayer's piece
  • with much important news around the world jostling for our attention – earthquake in Japan, Middle East revolutions – the passing of an anti-union bill (which finally happened today, for better or for worse) in an American state is unlikely to make a headline able to compete with natural disasters and revolutions. Then, to quote Wisconsin Governor Scott Walker during that prank call conversation, “Sooner or later the media stops finding it [the teacher protests] interesting.”
  • What remains more puzzling for me is why the American public seems to buy into the Koch-funded libertarian rhetoric. Mayer writes, "Income inequality in America is greater than it has been since the nineteen-twenties, and since the seventies the tax rates of the wealthiest have fallen more than those of the middle class. Yet the brothers' message has evidently resonated with voters: a recent poll found that fifty-five per cent of Americans agreed that Obama is a socialist." I suppose that not knowing who is funding the political rhetoric makes it easier for the public to imbibe it.
Weiye Loh

Roger Pielke Jr.'s Blog: Wanted: Less Spin, More Informed Debate - 0 views

  • , the rejection of proposals that suggest starting with a low carbon price is thus a pretty good guarantee against any carbon pricing at all.  It is rather remarkable to see advocates for climate action arguing against a policy that recommends implementing a carbon price, simply because it does not start high enough for their tastes.  For some, idealism trumps pragmatism, even if it means no action at all.
  • Ward writes: . . . climate change is the result of a number of market failures, the largest of which arises from the fact that the prices of products and services involving emissions of greenhouse gases do not reflect the true costs of the damage caused through impacts on the climate. . . All serious economic analyses of how to tackle climate change identify the need to correct this market failure through a carbon price, which can be implemented, for instance, through cap and trade schemes or carbon taxes. . . A carbon price can be usefully supplemented by improvements in innovation policies, but it needs to be at the core of action on climate change, as this paper by Carolyn Fischer and Richard Newell points out.
  • First, the criticism is off target. A low and rising carbon price is in fact a central element to the policy recommendations advanced by the Hartwell Group in Climate Pragmatism, the Hartwell Paper, and as well, in my book The Climate Fix.  In Climate Pragmatism, we approvingly cite Japan's low-but-rising fossil fuels tax and discuss a range of possible fees or taxes on fossil fuels, implemented, not to penalize energy use or price fossil fuels out of the market, but rather to ensure that as we benefit from today’s energy resources we are setting aside the funds necessary to accelerate energy innovation and secure the nation’s energy future.
  • ...3 more annotations...
  • Here is another debating lesson -- before engaging in public not only should one read the materials that they are critiquing, they should also read the materials that they cite in support of their own arguments. This is not the first time that Bob Ward has put out misleading information related to my work.  Ever since we debated in public at the Royal Institution, Bob has adopted guerrilla tactics, lobbing nonsense into the public arena and then hiding when challenged to support or defend his views.  As readers here know, I am all for open and respectful debate over these important topics.  Why is that instead, all we get is poorly informed misdirection and spin? Despite the attempts at spin, I'd welcome Bob's informed engagement on this topic. Perhaps he might start by explaining which of the 10 statements that I put up on the mathematics and logic underlying climate pragmatism is incorrect.
  • In comments to another blog, I've identified Bob as a PR flack. I see no reason to change that assessment. In fact, his actions only confirm it. Where does he fit into a scientific debate?
  • Thanks for the comment, but I'll take the other side ;-) First, this is a policy debate that involves various scientific, economic, political analyses coupled with various values commitments including monied interests -- and as such, PR guys are as welcome as anyone else. That said, the problem here is not that Ward is a PR guy, but that he is trying to make his case via spin and misrepresentation. That gets noticed pretty quickly by anyone paying attention and is easily shot down.
Weiye Loh

AP IMPACT: Framed for child porn - by a PC virus by AP: Yahoo! Tech - 0 views

  • Pedophiles can exploit virus-infected PCs to remotely store and view their stash without fear they'll get caught. Pranksters or someone trying to frame you can tap viruses to make it appear that you surf illegal Web sites.
  • Whatever the motivation, you get child porn on your computer — and might not realize it until police knock at your door.
  • In 2007, Fiola's bosses became suspicious after the Internet bill for his state-issued laptop showed that he used 4 1/2 times more data than his colleagues. A technician found child porn in the PC folder that stores images viewed online. Fiola was fired and charged with possession of child pornography, which carries up to five years in prison. He endured death threats, his car tires were slashed and he was shunned by friends. Fiola and his wife fought the case, spending $250,000 on legal fees. They liquidated their savings, took a second mortgage and sold their car. An inspection for his defense revealed the laptop was severely infected. It was programmed to visit as many as 40 child porn sites per minute — an inhuman feat. While Fiola and his wife were out to dinner one night, someone logged on to the computer and porn flowed in for an hour and a half. Prosecutors performed another test and confirmed the defense findings. The charge was dropped — 11 months after it was filed.
    • Weiye Loh
       
      The law is reason beyond passion. Yet, reasons may be flawed, bounded, or limited by our irrationality. Who are we to blame if we are victims of such a false accusation? Is it right then to carry on with these proceedings just so those who are truly guilty won't get away scot-free? 
  • ...1 more annotation...
  • The Fiolas say they have health problems from the stress of the case. They say they've talked to dozens of lawyers but can't get one to sue the state, because of a cap on the amount they can recover. "It ruined my life, my wife's life and my family's life," he says. The Massachusetts attorney general's office, which charged Fiola, declined interview requests.
Weiye Loh

flaneurose: The KK Chemo Misdosage Incident - 0 views

  • Labelling the pump that dispenses in ml/hr in a different color from the pump that dispenses in ml/day would be an obvious remedy that would have addressed the KK incident. It's the common-sensical solution that anyone can think of.
  • Sometimes, design flaws like that really do occur because engineers can't see the wood for the trees.
  • But sometimes the team is aware of these issues and highlights them to management, but the manufacturer still proceeds as before. Why is that? Because in addition to design principles, one must be mindful that there are always business considerations at play as well. Manufacturing two (or more) separate designs for pumps incurs greater costs, eliminates the ability to standardize across pumps, increases holding inventory, and overall increases complexity of business and manufacturing processes, and decreases economies of scale. All this naturally reduces profitability. It's not just pumps. Even medicines are typically sold in identical-looking vials with identically colored vial caps, with only the text on the vial labels differentiating them in both drug type and concentration. You can imagine what kinds of accidents can potentially happen there.
  • ...2 more annotations...
  • Legally, the manufacturer has clearly labelled on the pump (in text) the appropriate dosing regime, or for a medicine vial, the type of drug and concentration. The manufacturer has hence fulfilled its duty. Therefore, if there are any mistakes in dosing, the liability for the error lies with the hospital and not the manufacturer of the product. The victim of such a dosing error can be said to be an "externalized cost"; the beneficiaries of the victim's suffering are the manufacturer, who enjoys greater profitability, the hospital, which enjoys greater cost-savings, and the public, who save on healthcare. Is it ethical of the manufacturer, to "pass on" liability to the hospital? To make it difficult (or at least not easy) for the hospital to administer the right dosage? Maybe the manufacturer is at fault, but IMHO, it's very hard to say.
  • When a chemo incident like the one that happened in KK occurs, there are cries of public remonstration, and the pendulum may swing the other way. Hospitals might make the decision to purchase more expensive and better designed pumps (that is, if they are available). Then years down the road, when a bureaucrat (or a management consultant) with an eye to trim costs looks through the hospital purchasing orders, they may make the suggestion that $XXX could be saved by buying the generic version of such-and-such a product, instead of the more expensive version. And they would not be wrong, just...myopic. Then the cycle starts again. Sometimes it's not only about human factors. It could be about policy, or human nature, or business fundamentals, or just the plain old, dysfunctional way the world works.
    • Weiye Loh
       
      Interesting article. Explains clearly why our 'ethical' considerations are always limited to a particular context and specific considerations. 
Weiye Loh
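
The ml/hr versus ml/day confusion described in the pump item above is, at bottom, a units problem. As a purely illustrative sketch (hypothetical code, not drawn from any actual pump firmware or the KK incident report), here is how tagging every dose rate with its unit and forcing an explicit conversion would make the dangerous case — a daily rate keyed in as an hourly one — impossible to express silently:

```python
from dataclasses import dataclass

ML_PER_HR = "ml/hr"
ML_PER_DAY = "ml/day"

@dataclass(frozen=True)
class Rate:
    """A dose rate that always carries its unit with it."""
    value: float
    unit: str

    def as_ml_per_hr(self) -> float:
        # Normalize every rate to ml/hr before it reaches the pump motor.
        if self.unit == ML_PER_HR:
            return self.value
        if self.unit == ML_PER_DAY:
            return self.value / 24.0
        raise ValueError(f"unknown unit: {self.unit}")

def program_pump(rate: Rate) -> float:
    """Return the motor setting in ml/hr; raw unit-less numbers are rejected by construction."""
    return rate.as_ml_per_hr()

# A prescription of 240 ml/day must come out as 10 ml/hr,
# never be keyed in as 240 ml/hr (a 24x overdose).
daily = Rate(240.0, ML_PER_DAY)
print(program_pump(daily))                   # 10.0
print(program_pump(Rate(10.0, ML_PER_HR)))   # 10.0
```

The design point mirrors the color-coding remedy in the article: make the unit difference visible (here, to the type system rather than to the nurse's eye) so that a mismatch fails loudly instead of dosing silently.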

Read Aubrey McClendon's response to "misleading" New York Times article (1) - 0 views

  • Since the shale gas revolution and resulting confirmation of enormous domestic gas reserves, there has been a relatively small group of analysts and geologists who have doubted the future of shale gas.  Their doubts have become very convenient to the environmental activists I mentioned earlier. This particular NYT reporter has apparently sought out a few of the doubters to fashion together a negative view of the U.S. natural gas industry. We also believe certain media outlets, especially the once venerable NYT, are being manipulated by those whose environmental or economic interests are being threatened by abundant natural gas supplies. We have seen for example today an email from a leader of a group called the Environmental Working Group who claimed today’s articles as this NYT reporter’s "second great story" (the first one declaring that produced water disposal from shale gas wells was unsafe) and that “we've been working with him for over 8 months. Much more to come. . .”
  • this reporter’s claim of impending scarcity of natural gas supply contradicts the facts and the scientific extrapolation of those facts by the most sophisticated reservoir engineers and geoscientists in the world. Not just at Chesapeake, but by experts at many of the world’s leading energy companies that have made multi-billion-dollar, long-term investments in U.S. shale gas plays, with us and many other companies. Notable examples of these companies, besides the leading independents such as Chesapeake, Devon, Anadarko, EOG, EnCana, Talisman and others, include these leading global energy giants:  Exxon, Shell, BP, Chevron, Conoco, Statoil, BHP, Total, CNOOC, Marathon, BG, KNOC, Reliance, PetroChina, Mitsui, Mitsubishi and ENI, among others.  Is it really possible that all of these companies, with a combined market cap of almost $2 trillion, know less about shale gas than a NYT reporter, a few environmental activists and a handful of shale gas doubters?
  •  
    Administrator's Note: This email was sent to all Chesapeake employees from CEO Aubrey McClendon, in response to a Sunday New York Times piece by Ian Urbina entitled "Insiders Sound an Alarm Amid a Natural Gas Rush."
    FW: CHK's response to 6.26.11 NYT article on shale gas
    From: Aubrey McClendon
    Sent: Sunday, June 26, 2011 8:37 PM
    To: All Employees
    Dear CHK Employees: By now many of you may have read or heard about a story in today's New York Times (NYT) that questioned the productive capacity and economic quality of U.S. natural gas shale reserves, as well as energy reserve accounting practices used by E&P companies, including Chesapeake.  The story is misleading, at best, and is the latest in a series of articles produced by this publication that obviously have an anti-industry bias.  We know for a fact that today's NYT story is the handiwork of the same group of environmental activists who have been the driving force behind the NYT's ongoing series of negative articles about the use of fracking and its importance to the US natural gas supply growth revolution - which is changing the future of our nation for the better in multiple areas.  It is not clear to me exactly what these environmental activists are seeking to offer as their alternative energy plan, but most that I have talked to continue to naively presume that our great country need only rely on wind and solar energy to meet our current and future energy needs. They always seem to forget that wind and solar produce less than 2% of America's electricity today and are completely non-economic without ongoing government and ratepayer subsidies.