
New Media Ethics 2009 course: Group items tagged "increase"


Weiye Loh

Roger Pielke Jr.'s Blog: Flood Disasters and Human-Caused Climate Change - 0 views

  • [UPDATE: Gavin Schmidt at Real Climate has a post on this subject that  -- surprise, surprise -- is perfectly consonant with what I write below.] [UPDATE 2: Andy Revkin has a great post on the representations of the precipitation paper discussed below by scientists and related coverage by the media.]  
  • Nature published two papers yesterday that discuss increasing precipitation trends and a 2000 flood in the UK.  I have been asked by many people whether these papers mean that we can now attribute some fraction of the global trend in disaster losses to greenhouse gas emissions, or even recent disasters such as in Pakistan and Australia.
  • I hate to pour cold water on a really good media frenzy, but the answer is "no."  Neither paper actually discusses global trends in disasters (one doesn't even discuss floods) or even individual events beyond a single flood event in the UK in 2000.  But still, can't we just connect the dots?  Isn't it just obvious?  And only deniers deny the obvious, right?
  • What seems obvious is sometimes just wrong.  This of course is why we actually do research.  So why is it that we shouldn't make what seems to be an obvious connection between these papers and recent disasters, as so many have already done?
  • First, the Min et al. paper seeks to identify a GHG signal in global precipitation over the period 1950-1999.  They focus on one-day and five-day measures of precipitation.  They do not discuss streamflow or damage.  For many years, an upwards trend in precipitation has been documented, and attributed to GHGs, even back to the 1990s (I co-authored a paper on precipitation and floods in 1999 that assumed a human influence on precipitation, PDF), so I am unsure what is actually new in this paper's conclusions.
  • However, accepting that precipitation has increased and can be attributed in some part to GHG emissions, corresponding increases in streamflow (floods) or damage have not been shown. How can this be?  Think of it like this -- Precipitation is to flood damage as wind is to windstorm damage.  It is not enough to say that it has become windier to make a connection to increased windstorm damage -- you need to show a specific increase in those specific wind events that actually cause damage. There are a lot of days that could be windier with no increase in damage; the same goes for precipitation.
  • My understanding of the literature on streamflow is that increases in peak streamflow commensurate with increases in precipitation have not been shown, and this is a robust finding across the literature.  For instance, one recent review concludes: Floods are of great concern in many areas of the world, with the last decade seeing major fluvial events in, for example, Asia, Europe and North America. This has focused attention on whether or not these are a result of a changing climate. River flows calculated from outputs from global models often suggest that high river flows will increase in a warmer, future climate. However, the future projections are not necessarily in tune with the records collected so far – the observational evidence is more ambiguous. A recent study of trends in long time series of annual maximum river flows at 195 gauging stations worldwide suggests that the majority of these flow records (70%) do not exhibit any statistically significant trends. Trends in the remaining records are almost evenly split between having a positive and a negative direction.
  • Absent an increase in peak streamflows, it is impossible to connect the dots between increasing precipitation and increasing floods.  There are of course good reasons why a linkage between increasing precipitation and peak streamflow would be difficult to make, such as the seasonality of the increase in rain or snow, the large variability of flooding and the human influence on river systems.  Those difficulties of course translate directly to a difficulty in connecting the effects of increasing GHGs to flood disasters.
  • Second, the Pall et al. paper seeks to quantify the increased risk of a specific flood event in the UK in 2000 due to greenhouse gas emissions.  It applies a methodology that was previously used with respect to the 2003 European heatwave. Taking the paper at face value, it clearly states that in England and Wales, there has not been an increasing trend in precipitation or floods.  Thus, floods in this region are not a contributor to the global increase in disaster costs.  Further, there has been no increase in Europe in normalized flood losses (PDF).  Thus, the Pall et al. paper is focused on attribution in the context of a single event, not on trend detection in the region that it examines, much less any broader context.
  • More generally, the paper utilizes a seasonal forecast model to assess risk probabilities.  Given the performance of seasonal forecast models in actual prediction mode, I would expect many scientists to remain skeptical of this approach to attribution. Of course, if this group can show an improvement in the skill of actual seasonal forecasts by using greenhouse gas emissions as a predictor, they will have a very convincing case.  That is a high hurdle.
  • In short, the new studies are interesting and add to our knowledge.  But they do not change the state of knowledge related to trends in global disasters and how they might be related to greenhouse gases.  But even so, I expect that many will still want to connect the dots between greenhouse gas emissions and recent floods.  Connecting the dots is fun, but it is not science.
  • Jessica Weinkle said...
  • The thing about the Nature articles is that Nature itself made the leap from the science findings to damages in the News piece by Q. Schiermeier through the decision to bring up the topic of insurance. (Not to mention that which is symbolically represented merely by the journal’s cover this week). With what I (maybe, naively) believe to be a particularly ballsy move, the article quoted Muir-Wood, an industry scientist. However, what he is quoted as saying is admirably clever. Initially it is stated that Dr. Muir-Wood backs the notion that one cannot put the blame of increased losses on climate change. Then, the article ends with a quote from him, “If there’s evidence that risk is changing, then this is something we need to incorporate in our models.”
  • This is a very slippery slope and a brilliant double-dog dare. Without doing anything but sitting back and watching the headlines, one can form the argument that “science” supports the remodeling of the hazard risk above the climatological average and is more important than the risks stemming from socioeconomic factors. The reinsurance industry itself has published that socioeconomic factors far outweigh changes in the hazard where losses are concerned. The point (and that which has particularly gotten my knickers in a knot) is that Nature, et al. may wish to consider what it is that they want to accomplish. Is it greater involvement of federal governments in the insurance/reinsurance industry on the premise that climate change is too great a loss risk for private industry alone regardless of the financial burden it imposes? The move of insurance mechanisms into all corners of the earth under the auspices of climate change adaptation? Or simply a move to bolster prominence, regardless of whose back it breaks - including their own, if any of them are proud owners of a home mortgage? How much faith does one have in their own model when they are told that hundreds of millions of dollars in the global economy are being bet against the odds that their models produce?
  • What Nature says matters to the world; what scientists say matters to the world, whether they care for the responsibility or not. That is, after all, the game of fame and fortune (aka prestige).
Weiye Loh

Freakonomics » The Revolution Will Not Be Televised. But It Will Be Tweeted - 0 views

  • information alone does not destabilize an oppressive regime. In fact, more information (and the control of that information) is a major source of political strength for any ruling party. The state controlled media of North Korea is a current example of the power of propaganda, much as it was in the Soviet Union and Nazi Germany, where the state heavily subsidized the diffusion of radios during the 1930s to help spread Nazi propaganda.
  • changes in technology do not by themselves weaken the state. While Twitter played a role in the Iranian protests in 2009, the medium was used effectively by the Iranian regime to spread rumors and disinformation. But, if information becomes not just more widespread but more reliable, the regime’s chances of survival are significantly diminished. In this sense, though social media like Twitter and Facebook appear to be a scattered mess, they are more reliable than state controlled messages.
  • The model predicts that a given percentage increase in information reliability has exactly twice as large an effect on the regime’s chances as the same percentage increase in information quantity, so, overall, an information revolution that leads to roughly equal-sized percentage increases in both these characteristics will reduce a regime’s chances of surviving.
  • If the quantity of information available to citizens is sufficiently high, then the regime has a better chance of surviving. However, an increase in the reliability of information can reduce the regime's chances. These two effects are always in tension: a regime benefits from an increase in information quantity if and only if an increase in information reliability reduces its chances. The model allows for two kinds of information revolutions. In the first, associated with radio and mass newspapers under the totalitarian regimes of the early twentieth century, an increase in information quantity coincides with a shift towards media institutions more accommodative of the regime and, in this sense, a decrease in information reliability. In this case, both effects help the regime. In the second kind, associated with diffuse technologies like modern social media, an increase in information quantity coincides with a shift towards sources of information less accommodative of the regime and an increase in information reliability. This makes the quantity and reliability effects work against each other.
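
A minimal way to put the quoted claim in symbols, offered as an illustrative restatement rather than the underlying paper's actual functional form: write S for the regime's chances of surviving, q for the quantity of information available to citizens, and r for its reliability. The "exactly twice as large" statement then amounts to

    \mathrm{d}\ln S = \alpha \,\mathrm{d}\ln q - 2\alpha \,\mathrm{d}\ln r, \qquad \alpha > 0,

so an information revolution that raises quantity and reliability by the same percentage (d ln q = d ln r = ε > 0) gives d ln S = -αε < 0: the regime's chances of surviving fall, which is the conclusion the excerpt above draws.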
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books - 0 views

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian.2 Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941.3 Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.
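
The Moore's Law arithmetic quoted in the annotations above can be checked directly; this is just the quoted figures worked through, not additional data:

    120 / 18 \approx 6.7 \ \text{doublings per decade}, \qquad 2^{6.7} \approx 10^{2}
    540 / 18 = 30 \ \text{doublings in 45 years}, \qquad 2^{30} \approx 1.07 \times 10^{9}

which recovers the "factor of a hundred every decade" and the "factor of a billion, nine powers of ten" stated in the excerpt.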
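The annotation on Shannon's law of reliable communication says that errors can be corrected in a noisy system provided the transmission is sufficiently redundant. A minimal Python sketch of that idea, using the crudest redundant scheme, a repetition code with majority voting; the function names and numbers are invented for illustration, and this is a toy, not Shannon's construction and not how Wikipedia's editing actually works:

    import random

    def transmit(bits, flip_prob):
        """Simulate a noisy channel that flips each bit with probability flip_prob."""
        return [b ^ (random.random() < flip_prob) for b in bits]

    def encode_repetition(bits, n=5):
        """Add redundancy by repeating every bit n times."""
        return [b for b in bits for _ in range(n)]

    def decode_repetition(bits, n=5):
        """Majority vote over each group of n received bits."""
        return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

    message = [random.randint(0, 1) for _ in range(1000)]
    noisy_raw = transmit(message, flip_prob=0.1)              # ~10% of bits arrive wrong
    noisy_coded = transmit(encode_repetition(message), 0.1)   # same channel, redundant input
    recovered = decode_repetition(noisy_coded)

    raw_errors = sum(a != b for a, b in zip(message, noisy_raw))
    coded_errors = sum(a != b for a, b in zip(message, recovered))
    print(raw_errors, coded_errors)

With a 10 percent flip rate, roughly a hundred of the thousand raw bits arrive wrong, while the majority-voted version typically gets all but a handful right; more repetition pushes the residual error rate further down, which is the qualitative content of the quoted claim.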
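The "unexpected behavior" of self-gravitating objects described above (losing energy makes them hotter) is usually traced to the virial theorem; a one-line sketch of the standard textbook argument, not anything specific to Gleick's or Dyson's text: for a bound, self-gravitating system in equilibrium,

    2K + U = 0 \quad\Rightarrow\quad E = K + U = -K,

so radiating energy away (dE < 0) forces dK = -dE > 0. The kinetic energy, and with it the internal temperature, rises as energy is lost, which is the sense in which gravitation "reverses the usual relation between energy and temperature."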
Weiye Loh

The Breakthrough Institute: New Report: How Efficiency Can Increase Energy Consumption - 0 views

  • There is a large expert consensus and strong evidence that below-cost energy efficiency measures drive a rebound in energy consumption that erodes much and in some cases all of the expected energy savings, concludes a new report by the Breakthrough Institute. "Energy Emergence: Rebound and Backfire as Emergent Phenomena" covers over 96 published journal articles and is one of the largest reviews of the peer-reviewed journal literature to date. (Readers in a hurry can download Breakthrough's PowerPoint demonstration here or download the full paper here.)
  • In a statement accompanying the report, Breakthrough Institute founders Ted Nordhaus and Michael Shellenberger wrote, "Below-cost energy efficiency is critical for economic growth and should thus be aggressively pursued by governments and firms. However, it should no longer be considered a direct and easy way to reduce energy consumption or greenhouse gas emissions." The lead author of the new report is Jesse Jenkins, Breakthrough's Director of Energy and Climate Policy; Nordhaus and Shellenberger are co-authors.
  • The findings of the new report are significant because governments have in recent years relied heavily on energy efficiency measures as a means to cut greenhouse gases. "I think we have to have a strong push toward energy efficiency," said President Obama recently. "We know that's the low-hanging fruit, we can save as much as 30 percent of our current energy usage without changing our quality of life." While there is robust evidence for rebound in academic peer-reviewed journals, it has largely been ignored by major analyses, including the widely cited 2009 McKinsey and Co. study on the cost of reducing greenhouse gases.
  • The idea that increased energy efficiency can increase energy consumption at the macro-economic level strikes many as a new idea, or paradoxical, but it was first observed in 1865 by British economist William Stanley Jevons, who pointed out that Watt's more efficient steam engine and other technical improvements that increased the efficiency of coal consumption actually increased rather than decreased demand for coal. More efficient engines, Jevons argued, would increase future coal consumption by lowering the effective price of energy, thus spurring greater demand and opening up useful and profitable new ways to utilize coal. Jevons was proven right, and the reality of what is today known as "Jevons Paradox" has long been uncontroversial among economists.
  • Economists have long observed that increasing the productivity of any single factor of production -- whether labor, capital, or energy -- increases demand for all of those factors. This is one of the basic dynamics of economic growth. Luddites who feared there would be fewer jobs with the emergence of weaving looms were proved wrong by the lower price of woven clothing and the demand that has skyrocketed (and continued to increase) ever since. And today, no economist would posit that an X% improvement in labor productivity would lead directly to an X% reduction in employment. In fact, the opposite is widely expected: labor productivity is a chief driver of economic growth and thus increases in employment overall. There is no evidence, the report points out, that energy is any different, as per capita energy consumption everywhere on earth continues to rise, even as economies become more efficient each year.
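
A hedged numerical sketch of the rebound mechanism these excerpts describe. The function name, the 25 percent efficiency gain, and the elasticity values below are made-up illustrative assumptions, not figures from the Breakthrough report or from Jevons:

    # Toy rebound calculation: an efficiency gain lowers the effective price of an
    # energy service (lumen-hours, vehicle-miles, ...), and demand for the service
    # responds. Constant-elasticity demand is assumed purely for illustration.

    def energy_use_after_efficiency_gain(baseline_energy, efficiency_gain, price_elasticity):
        """Energy consumption after an efficiency improvement (toy model).

        efficiency_gain: fractional improvement, e.g. 0.25 means 25% less energy
                         per unit of service at unchanged behavior.
        price_elasticity: magnitude of demand elasticity for the energy service
                          with respect to its effective price.
        """
        effective_price_ratio = 1.0 - efficiency_gain            # service gets cheaper
        service_demand_ratio = effective_price_ratio ** (-price_elasticity)
        return baseline_energy * service_demand_ratio * (1.0 - efficiency_gain)

    baseline = 100.0
    print(energy_use_after_efficiency_gain(baseline, 0.25, 0.0))   # 75.0   -> full engineering savings
    print(energy_use_after_efficiency_gain(baseline, 0.25, 0.5))   # ~86.6  -> partial rebound
    print(energy_use_after_efficiency_gain(baseline, 0.25, 1.5))   # ~115.5 -> backfire (Jevons)

In this toy form the expected savings erode as the elasticity rises, and consumption increases outright (backfire) once the elasticity exceeds one, which is the macro-level possibility the report and the Jevons discussion above describe.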
Weiye Loh

The Inequality That Matters - Tyler Cowen - The American Interest Magazine - 0 views

  • most of the worries about income inequality are bogus, but some are probably better grounded and even more serious than even many of their heralds realize.
  • In terms of immediate political stability, there is less to the income inequality issue than meets the eye. Most analyses of income inequality neglect two major points. First, the inequality of personal well-being is sharply down over the past hundred years and perhaps over the past twenty years as well. Bill Gates is much, much richer than I am, yet it is not obvious that he is much happier if, indeed, he is happier at all. I have access to penicillin, air travel, good cheap food, the Internet and virtually all of the technical innovations that Gates does. Like the vast majority of Americans, I have access to some important new pharmaceuticals, such as statins to protect against heart disease. To be sure, Gates receives the very best care from the world’s top doctors, but our health outcomes are in the same ballpark. I don’t have a private jet or take luxury vacations, and—I think it is fair to say—my house is much smaller than his. I can’t meet with the world’s elite on demand. Still, by broad historical standards, what I share with Bill Gates is far more significant than what I don’t share with him.
  • when average people read about or see income inequality, they don’t feel the moral outrage that radiates from the more passionate egalitarian quarters of society. Instead, they think their lives are pretty good and that they either earned through hard work or lucked into a healthy share of the American dream.
  • This is why, for example, large numbers of Americans oppose the idea of an estate tax even though the current form of the tax, slated to return in 2011, is very unlikely to affect them or their estates. In narrowly self-interested terms, that view may be irrational, but most Americans are unwilling to frame national issues in terms of rich versus poor. There’s a great deal of hostility toward various government bailouts, but the idea of “undeserving” recipients is the key factor in those feelings. Resentment against Wall Street gamesters hasn’t spilled over much into resentment against the wealthy more generally. The bailout for General Motors’ labor unions wasn’t so popular either—again, obviously not because of any bias against the wealthy but because a basic sense of fairness was violated. As of November 2010, congressional Democrats are of a mixed mind as to whether the Bush tax cuts should expire for those whose annual income exceeds $250,000; that is in large part because their constituents bear no animus toward rich people, only toward undeservedly rich people.
  • envy is usually local. At least in the United States, most economic resentment is not directed toward billionaires or high-roller financiers—not even corrupt ones. It’s directed at the guy down the hall who got a bigger raise. It’s directed at the husband of your wife’s sister, because the brand of beer he stocks costs $3 a case more than yours, and so on. That’s another reason why a lot of people aren’t so bothered by income or wealth inequality at the macro level. Most of us don’t compare ourselves to billionaires. Gore Vidal put it honestly: “Whenever a friend succeeds, a little something in me dies.”
  • Occasionally the cynic in me wonders why so many relatively well-off intellectuals lead the egalitarian charge against the privileges of the wealthy. One group has the status currency of money and the other has the status currency of intellect, so might they be competing for overall social regard? The high status of the wealthy in America, or for that matter the high status of celebrities, seems to bother our intellectual class most. That class composes a very small group, however, so the upshot is that growing income inequality won’t necessarily have major political implications at the macro level.
  • All that said, income inequality does matter—for both politics and the economy.
  • The numbers are clear: Income inequality has been rising in the United States, especially at the very top. The data show a big difference between two quite separate issues, namely income growth at the very top of the distribution and greater inequality throughout the distribution. The first trend is much more pronounced than the second, although the two are often confused.
  • When it comes to the first trend, the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.1
  • At the same time, wage growth for the median earner has slowed since 1973. But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.
  • The broader change in income distribution, the one occurring beneath the very top earners, can be deconstructed in a manner that makes nearly all of it look harmless. For instance, there is usually greater inequality of income among both older people and the more highly educated, if only because there is more time and more room for fortunes to vary. Since America is becoming both older and more highly educated, our measured income inequality will increase pretty much by demographic fiat. Economist Thomas Lemieux at the University of British Columbia estimates that these demographic effects explain three-quarters of the observed rise in income inequality for men, and even more for women.2
  • Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years.3 Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all. Robert J. Gordon, an economist from Northwestern University who is hardly known as a right-wing apologist, wrote in a recent paper that “there was no increase of inequality after 1993 in the bottom 99 percent of the population”, and that whatever overall change there was “can be entirely explained by the behavior of income in the top 1 percent.”4
  • And so we come again to the gains of the top earners, clearly the big story told by the data. It’s worth noting that over this same period of time, inequality of work hours increased too. The top earners worked a lot more and most other Americans worked somewhat less. That’s another reason why high earners don’t occasion more resentment: Many people understand how hard they have to work to get there. It also seems that most of the income gains of the top earners were related to performance pay—bonuses, in other words—and not wildly out-of-whack yearly salaries.5
  • It is also the case that any society with a lot of “threshold earners” is likely to experience growing income inequality. A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.
  • The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.
  • Why is the top 1 percent doing so well?
  • Steven N. Kaplan and Joshua Rauh have recently provided a detailed estimation of particular American incomes.6 Their data do not comprise the entire U.S. population, but from partial financial records they find a very strong role for the financial sector in driving the trend toward income concentration at the top. For instance, for 2004, nonfinancial executives of publicly traded companies accounted for less than 6 percent of the top 0.01 percent income bracket. In that same year, the top 25 hedge fund managers combined appear to have earned more than all of the CEOs from the entire S&P 500. The number of Wall Street investors earning more than $100 million a year was nine times higher than the public company executives earning that amount. The authors also relate that they shared their estimates with a former U.S. Secretary of the Treasury, one who also has a Wall Street background. He thought their estimates of earnings in the financial sector were, if anything, understated.
  • Many of the other high earners are also connected to finance. After Wall Street, Kaplan and Rauh identify the legal sector as a contributor to the growing spread in earnings at the top. Yet many high-earning lawyers are doing financial deals, so a lot of the income generated through legal activity is rooted in finance. Other lawyers are defending corporations against lawsuits, filing lawsuits or helping corporations deal with complex regulations. The returns to these activities are an artifact of the growing complexity of the law and government growth rather than a tale of markets per se. Finance aside, there isn’t much of a story of market failure here, even if we don’t find the results aesthetically appealing.
  • When it comes to professional athletes and celebrities, there isn’t much of a mystery as to what has happened. Tiger Woods earns much more, even adjusting for inflation, than Arnold Palmer ever did. J.K. Rowling, the first billionaire author, earns much more than did Charles Dickens. These high incomes come, on balance, from the greater reach of modern communications and marketing. Kids all over the world read about Harry Potter. There is more purchasing power to spend on children’s books and, indeed, on culture and celebrities more generally. For high-earning celebrities, hardly anyone finds these earnings so morally objectionable as to suggest that they be politically actionable. Cultural critics can complain that good schoolteachers earn too little, and they may be right, but that does not make celebrities into political targets. They’re too popular. It’s also pretty clear that most of them work hard to earn their money, by persuading fans to buy or otherwise support their product. Most of these individuals do not come from elite or extremely privileged backgrounds, either. They worked their way to the top, and even if Rowling is not an author for the ages, her books tapped into the spirit of their time in a special way. We may or may not wish to tax the wealthy, including wealthy celebrities, at higher rates, but there is no need to “cure” the structural causes of higher celebrity incomes.
  • to be sure, the high incomes in finance should give us all pause.
  • The first factor driving high returns is sometimes called by practitioners “going short on volatility.” Sometimes it is called “negative skewness.” In plain English, this means that some investors opt for a strategy of betting against big, unexpected moves in market prices. Most of the time investors will do well by this strategy, since big, unexpected moves are outliers by definition. Traders will earn above-average returns in good times. In bad times they won’t suffer fully when catastrophic returns come in, as sooner or later is bound to happen, because the downside of these bets is partly socialized onto the Treasury, the Federal Reserve and, of course, the taxpayers and the unemployed.
  • if you bet against unlikely events, most of the time you will look smart and have the money to validate the appearance. Periodically, however, you will look very bad. Does that kind of pattern sound familiar? It happens in finance, too. Betting against a big decline in home prices is analogous to betting against the Wizards. Every now and then such a bet will blow up in your face, though in most years that trading activity will generate above-average profits and big bonuses for the traders and CEOs.
  • To this mix we can add the fact that many money managers are investing other people’s money. If you plan to stay with an investment bank for ten years or less, most of the people playing this investing strategy will make out very well most of the time. Everyone’s time horizon is a bit limited and you will bring in some nice years of extra returns and reap nice bonuses. And let’s say the whole thing does blow up in your face? What’s the worst that can happen? Your bosses fire you, but you will still have millions in the bank and that MBA from Harvard or Wharton. For the people actually investing the money, there’s barely any downside risk other than having to quit the party early. Furthermore, if everyone else made more or less the same mistake (very surprising major events, such as a busted housing market, affect virtually everybody), you’re hardly disgraced. You might even get rehired at another investment bank, or maybe a hedge fund, within months or even weeks.
  • Moreover, smart shareholders will acquiesce to or even encourage these gambles. They gain on the upside, while the downside, past the point of bankruptcy, is borne by the firm’s creditors. And will the bondholders object? Well, they might have a difficult time monitoring the internal trading operations of financial institutions. Of course, the firm’s trading book cannot be open to competitors, and that means it cannot be open to bondholders (or even most shareholders) either. So what, exactly, will they have in hand to object to?
  • Perhaps more important, government bailouts minimize the damage to creditors on the downside. Neither the Treasury nor the Fed allowed creditors to take any losses from the collapse of the major banks during the financial crisis. The U.S. government guaranteed these loans, either explicitly or implicitly. Guaranteeing the debt also encourages equity holders to take more risk. While current bailouts have not in general maintained equity values, and while share prices have often fallen to near zero following the bust of a major bank, the bailouts still give the bank a lifeline. Instead of the bank being destroyed, sometimes those equity prices do climb back out of the hole. This is true of the major surviving banks in the United States, and even AIG is paying back its bailout. For better or worse, we’re handing out free options on recovery, and that encourages banks to take more risk in the first place.
  • there is an unholy dynamic of short-term trading and investing, backed up by bailouts and risk reduction from the government and the Federal Reserve. This is not good. “Going short on volatility” is a dangerous strategy from a social point of view. For one thing, in so-called normal times, the finance sector attracts a big chunk of the smartest, most hard-working and most talented individuals. That represents a huge human capital opportunity cost to society and the economy at large. But more immediate and more important, it means that banks take far too many risks and go way out on a limb, often in correlated fashion. When their bets turn sour, as they did in 2007–09, everyone else pays the price.
  • And it’s not just the taxpayer cost of the bailout that stings. The financial disruption ends up throwing a lot of people out of work down the economic food chain, often for long periods. Furthermore, the Federal Reserve System has recapitalized major U.S. banks by paying interest on bank reserves and by keeping an unusually high interest rate spread, which allows banks to borrow short from Treasury at near-zero rates and invest in other higher-yielding assets and earn back lots of money rather quickly. In essence, we’re allowing banks to earn their way back by arbitraging interest rate spreads against the U.S. government. This is rarely called a bailout and it doesn’t count as a normal budget item, but it is a bailout nonetheless. This type of implicit bailout brings high social costs by slowing down economic recovery (the interest rate spreads require tight monetary policy) and by redistributing income from the Treasury to the major banks.
  • the “going short on volatility” strategy increases income inequality. In normal years the financial sector is flush with cash and high earnings. In implosion years a lot of the losses are borne by other sectors of society. In other words, financial crisis begets income inequality. Despite being conceptually distinct phenomena, the political economy of income inequality is, in part, the political economy of finance. Simon Johnson tabulates the numbers nicely: From 1973 to 1985, the financial sector never earned more than 16 percent of domestic corporate profits. In 1986, that figure reached 19 percent. In the 1990s, it oscillated between 21 percent and 30 percent, higher than it had ever been in the postwar period. This decade, it reached 41 percent. Pay rose just as dramatically. From 1948 to 1982, average compensation in the financial sector ranged between 99 percent and 108 percent of the average for all domestic private industries. From 1983, it shot upward, reaching 181 percent in 2007.7
  • There’s a second reason why the financial sector abets income inequality: the “moving first” issue. Let’s say that some news hits the market and that traders interpret this news at different speeds. One trader figures out what the news means in a second, while the other traders require five seconds. Still other traders require an entire day or maybe even a month to figure things out. The early traders earn the extra money. They buy the proper assets early, at the lower prices, and reap most of the gains when the other, later traders pile on. Similarly, if you buy into a successful tech company in the early stages, you are “moving first” in a very effective manner, and you will capture most of the gains if that company hits it big.
  • The moving-first phenomenon sums to a “winner-take-all” market. Only some relatively small number of traders, sometimes just one trader, can be first. Those who are first will make far more than those who are fourth or fifth. This difference will persist, even if those who are fourth come pretty close to competing with those who are first. In this context, first is first and it doesn’t matter much whether those who come in fourth pile on a month, a minute or a fraction of a second later. Those who bought (or sold, as the case may be) first have captured and locked in most of the available gains. Since gains are concentrated among the early winners, and the closeness of the runner-ups doesn’t so much matter for income distribution, asset-market trading thus encourages the ongoing concentration of wealth. Many investors make lots of mistakes and lose their money, but each year brings a new bunch of projects that can turn the early investors and traders into very wealthy individuals.
  • These two features of the problem—“going short on volatility” and “getting there first”—are related. Let’s say that Goldman Sachs regularly secures a lot of the best and quickest trades, whether because of its quality analysis, inside connections or high-frequency trading apparatus (it has all three). It builds up a treasure chest of profits and continues to hire very sharp traders and to receive valuable information. Those profits allow it to make “short on volatility” bets faster than anyone else, because if it messes up, it still has a large enough buffer to pad losses. This increases the odds that Goldman will repeatedly pull in spectacular profits.
  • Still, every now and then Goldman will go bust, or would go bust if not for government bailouts. But the odds are in any given year that it won’t because of the advantages it and other big banks have. It’s as if the major banks have tapped a hole in the social till and they are drinking from it with a straw. In any given year, this practice may seem tolerable—didn’t the bank earn the money fair and square by a series of fairly normal looking trades? Yet over time this situation will corrode productivity, because what the banks do bears almost no resemblance to a process of getting capital into the hands of those who can make most efficient use of it. And it leads to periodic financial explosions. That, in short, is the real problem of income inequality we face today. It’s what causes the inequality at the very top of the earning pyramid that has dangerous implications for the economy as a whole.
  • What about controlling bank risk-taking directly with tight government oversight? That is not practical. There are more ways for banks to take risks than even knowledgeable regulators can possibly control; it just isn’t that easy to oversee a balance sheet with hundreds of billions of dollars on it, especially when short-term positions are wound down before quarterly inspections. It’s also not clear how well regulators can identify risky assets. Some of the worst excesses of the financial crisis were grounded in mortgage-backed assets—a very traditional function of banks—not exotic derivatives trading strategies. Virtually any asset position can be used to bet long odds, one way or another. It is naive to think that underpaid, undertrained regulators can keep up with financial traders, especially when the latter stand to earn billions by circumventing the intent of regulations while remaining within the letter of the law.
  • For the time being, we need to accept the possibility that the financial sector has learned how to game the American (and UK-based) system of state capitalism. It’s no longer obvious that the system is stable at a macro level, and extreme income inequality at the top has been one result of that imbalance. Income inequality is a symptom, however, rather than a cause of the real problem. The root cause of income inequality, viewed in the most general terms, is extreme human ingenuity, albeit of a perverse kind. That is why it is so hard to control.
  • Another root cause of growing inequality is that the modern world, by so limiting our downside risk, makes extreme risk-taking all too comfortable and easy. More risk-taking will mean more inequality, sooner or later, because winners always emerge from risk-taking. Yet bankers who take bad risks (provided those risks are legal) simply do not end up with bad outcomes in any absolute sense. They still have millions in the bank, lots of human capital and plenty of social status. We’re not going to bring back torture, trial by ordeal or debtors’ prisons, nor should we. Yet the threat of impoverishment and disgrace no longer looms the way it once did, so we no longer can constrain excess financial risk-taking. It’s too soft and cushy a world.
  • Why don’t we simply eliminate the safety net for clueless or unlucky risk-takers so that losses equal gains overall? That’s a good idea in principle, but it is hard to put into practice. Once a financial crisis arrives, politicians will seek to limit the damage, and that means they will bail out major financial institutions. Had we not passed TARP and related policies, the United States probably would have faced unemployment rates of 25 percent or higher, as in the Great Depression. The political consequences would not have been pretty. Bank bailouts may sound quite interventionist, and indeed they are, but in relative terms they probably were the most libertarian policy we had on tap. It meant big one-time expenses, but, for the most part, it kept government out of the real economy (the General Motors bailout aside).
  • We probably don’t have any solution to the hazards created by our financial sector, not because plutocrats are preventing our political system from adopting appropriate remedies, but because we don’t know what those remedies are. Yet neither is another crisis immediately upon us. The underlying dynamic favors excess risk-taking, but banks at the current moment fear the scrutiny of regulators and the public and so are playing it fairly safe. They are sitting on money rather than lending it out. The biggest risk today is how few parties will take risks, and, in part, the caution of banks is driving our current protracted economic slowdown. According to this view, the long run will bring another financial crisis once moods pick up and external scrutiny weakens, but that day of reckoning is still some ways off.
  • Is the overall picture a shame? Yes. Is it distorting resource distribution and productivity in the meantime? Yes. Will it again bring our economy to its knees? Probably. Maybe that’s simply the price of modern society. Income inequality will likely continue to rise and we will search in vain for the appropriate political remedies for our underlying problems.
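The "going short on volatility" dynamic described in the notes above can be made concrete with a toy simulation: a strategy that collects a small premium in most years and takes a large loss in rare crash years looks profitable in almost every individual year, yet almost every long run contains a blow-up. This is a minimal, illustrative sketch with made-up parameters, not anything from the original essay.

```python
import random

def short_vol_year(rng, premium=0.04, crash_prob=0.05, crash_loss=0.50):
    """One simulated year of a toy 'short volatility' position: a small premium
    in most years, a large loss in rare crash years. All parameters are
    illustrative, not estimates of any real strategy."""
    return -crash_loss if rng.random() < crash_prob else premium

rng = random.Random(42)
horizon = 30        # years per simulated career
trials = 10_000     # number of simulated careers

profitable_years = 0
runs_with_blowup = 0
for _ in range(trials):
    returns = [short_vol_year(rng) for _ in range(horizon)]
    profitable_years += sum(r > 0 for r in returns)
    runs_with_blowup += any(r < 0 for r in returns)

print(f"share of individual years that look profitable: {profitable_years / (trials * horizon):.1%}")
print(f"share of 30-year runs with at least one blow-up: {runs_with_blowup / trials:.1%}")
```

With these arbitrary numbers, roughly 95 percent of individual years show a gain, while around four out of five 30-year runs include at least one large loss, which is the asymmetry the annotations describe.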
Weiye Loh

Skepticblog » A Creationist Challenge - 0 views

  • The commenter starts with some ad hominems, asserting that my post is biased and emotional. They provide no evidence or argument to support this assertion. And of course they don’t even attempt to counter any of the arguments I laid out. They then follow up with an argument from authority – he can link to a PhD creationist – so there.
  • The article that the commenter links to is by Henry M. Morris, founder of the Institute for Creation Research (ICR) – a young-earth creationist organization. Morris was (he died in 2006 following a stroke) a PhD – in civil engineering. This point is irrelevant to his actual arguments. I bring it up only to put the commenter’s argument from authority into perspective. No disrespect to engineers – but they are not biologists. They have no expertise relevant to the question of evolution – no more than my MD. So let’s stick to the arguments themselves.
  • The article by Morris is an overview of so-called Creation Science, of which Morris was a major architect. The arguments he presents are all old creationist canards, long deconstructed by scientists. In fact I address many of them in my original refutation. Creationists generally are not very original – they recycle old arguments endlessly, regardless of how many times they have been destroyed.
  • ...26 more annotations...
  • Morris also makes heavy use of the “taking a quote out of context” strategy favored by creationists. His quotes are often from secondary sources and are incomplete.
  • A more scholarly (i.e. intellectually honest) approach would be to cite actual evidence to support a point. If you are going to cite an authority, then make sure the quote is relevant, in context, and complete.
  • And even better, cite a number of sources to show that the opinion is representative. Rather we get single, partial, and often outdated quotes without context.
  • (nature is not, it turns out, cleanly divided into “kinds”, which have no operational definition). He also repeats this canard: Such variation is often called microevolution, and these minor horizontal (or downward) changes occur fairly often, but such changes are not true “vertical” evolution. This is the microevolution/macroevolution false dichotomy. It is only “often called” this by creationists – not by actual evolutionary scientists. There is no theoretical or empirical division between macro and micro evolution. There is just evolution, which can result in the full spectrum of change from minor tweaks to major changes.
  • Morris wonders why there are no “dats” – dog-cat transitional species. He misses the hierarchical nature of evolution. As evolution proceeds, and creatures develop a greater and greater evolutionary history behind them, they increasingly are committed to their body plan. This results in a nested hierarchy of groups – which is reflected in taxonomy (the naming scheme of living things).
  • once our distant ancestors developed the basic body plan of chordates, they were committed to that body plan. Subsequent evolution resulted in variations on that plan, each of which then developed further variations, etc. But evolution cannot go backward, undo evolutionary changes and then proceed down a different path. Once an evolutionary line has developed into a dog, evolution can produce variations on the dog, but it cannot go backwards and produce a cat.
  • Stephen J. Gould described this distinction as the difference between disparity and diversity. Disparity (the degree of morphological difference) actually decreases over evolutionary time, as lineages go extinct and the surviving lineages are committed to fewer and fewer basic body plans. Meanwhile, diversity (the number of variations on a body plan) within groups tends to increase over time.
  • the kind of evolutionary changes that were happening in the past, when species were relatively undifferentiated (compared to contemporary species) is indeed not happening today. Modern multi-cellular life has 600 million years of evolutionary history constraining their future evolution – which was not true of species at the base of the evolutionary tree. But modern species are indeed still evolving.
  • Here is a list of research documenting observed instances of speciation. The list is from 1995, and there are more recent examples to add to the list. Here are some more. And here is a good list with references of more recent cases.
  • Next Morris tries to convince the reader that there is no evidence for evolution in the past, focusing on the fossil record. He repeats the false claim (again, which I already dealt with) that there are no transitional fossils: Even those who believe in rapid evolution recognize that a considerable number of generations would be required for one distinct “kind” to evolve into another more complex kind. There ought, therefore, to be a considerable number of true transitional structures preserved in the fossils — after all, there are billions of non-transitional structures there! But (with the exception of a few very doubtful creatures such as the controversial feathered dinosaurs and the alleged walking whales), they are not there.
  • I deal with this question at length here, pointing out that there are numerous transitional fossils for the evolution of terrestrial vertebrates, mammals, whales, birds, turtles, and yes – humans from ape ancestors. There are many more examples, these are just some of my favorites.
  • Much of what follows (as you can see it takes far more space to correct the lies and distortions of Morris than it did to create them) is classic denialism – misinterpreting the state of the science, and confusing lack of information about the details of evolution with lack of confidence in the fact of evolution. Here are some examples – he quotes Niles Eldredge: “It is a simple ineluctable truth that virtually all members of a biota remain basically stable, with minor fluctuations, throughout their durations. . . .“ So how do evolutionists arrive at their evolutionary trees from fossils of organisms which didn’t change during their durations? Beware the “….” – that means that meaningful parts of the quote are being omitted. I happen to have the book (The Pattern of Evolution) from which Morris mined that particular quote. Here’s the rest of it: (Remember, by “biota” we mean the commonly preserved plants and animals of a particular geological interval, which occupy regions often as large as Roger Tory Peterson’s “eastern” region of North American birds.) And when these systems change – when the older species disappear, and new ones take their place – the change happens relatively abruptly and in lockstep fashion.”
  • Eldredge was one of the authors (with Gould) of punctuated equilibrium theory. This states that, if you look at the fossil record, what we see are species emerging, persisting with little change for a while, and then disappearing from the fossil record. They theorize that most species most of the time are at equilibrium with their environment, and so do not change much. But these periods of equilibrium are punctuated by disequilibrium – periods of change when species will have to migrate, evolve, or go extinct.
  • This does not mean that speciation does not take place. And if you look at the fossil record we see a pattern of descendant species emerging from ancestor species over time – in a nice evolutionary pattern. Morris gives a complete misrepresentation of Eldredge’s point – once again we see intellectual dishonesty of an astounding degree in his methods.
  • Regarding the atheism = religion comment, it reminds me of a great analogy that I first heard on twitter from Evil Eye. (paraphrase) “those that say atheism is a religion, is like saying ‘not collecting stamps’ is a hobby too.”
  • Morris next tackles the genetic evidence, writing: More often is the argument used that similar DNA structures in two different organisms proves common evolutionary ancestry. Neither argument is valid. There is no reason whatever why the Creator could not or would not use the same type of genetic code based on DNA for all His created life forms. This is evidence for intelligent design and creation, not evolution.
  • Here is an excellent summary of the multiple lines of molecular evidence for evolution. Basically, if we look at the sequence of DNA, the variations in trinucleotide codes for amino acids, and amino acids for proteins, and transposons within DNA we see a pattern that can only be explained by evolution (or a mischievous god who chose, for some reason, to make life look exactly as if it had evolved – a non-falsifiable notion).
  • The genetic code is essentially comprised of four letters (ACGT for DNA), and every triplet of three letters equates to a specific amino acid. There are 64 (4^3) possible three letter combinations, and 20 amino acids. A few combinations are used for housekeeping, like a code to indicate where a gene stops, but the rest code for amino acids. There are more combinations than amino acids, so most amino acids are coded for by multiple combinations. This means that a mutation that results in a one-letter change might alter from one code for a particular amino acid to another code for the same amino acid. This is called a silent mutation because it does not result in any change in the resulting protein.
  • It also means that there are very many possible codes for any individual protein. The question is – which codes out of the gazillions of possible codes do we find for each type of protein in different species. If each “kind” were created separately there would not need to be any relationship. Each kind could have its own variation, or they could all be identical if they were essentially copied (plus any mutations accruing since creation, which would be minimal). But if life evolved then we would expect that the exact sequence of DNA code would be similar in related species, but progressively different (through silent mutations) over evolutionary time.
  • This is precisely what we find – in every protein we have examined. This pattern is necessary if evolution were true. It cannot be explained by random chance (the probability is absurdly tiny – essentially zero). And it makes no sense from a creationist perspective. This same pattern (a branching hierarchy) emerges when we look at amino acid substitutions in proteins and other aspects of the genetic code.
  • Morris goes for the second law of thermodynamics again – in the exact way that I already addressed. He responds to scientists correctly pointing out that the Earth is an open system, by writing: This naive response to the entropy law is typical of evolutionary dissimulation. While it is true that local order can increase in an open system if certain conditions are met, the fact is that evolution does not meet those conditions. Simply saying that the earth is open to the energy from the sun says nothing about how that raw solar heat is converted into increased complexity in any system, open or closed. The fact is that the best known and most fundamental equation of thermodynamics says that the influx of heat into an open system will increase the entropy of that system, not decrease it. All known cases of decreased entropy (or increased organization) in open systems involve a guiding program of some sort and one or more energy conversion mechanisms.
  • Energy has to be transformed into a usable form in order to do the work necessary to decrease entropy. That’s right. That work is done by life. Plants take solar energy (again – I’m not sure what “raw solar heat” means) and convert it into food. That food fuels the processes of life, which include development and reproduction. Evolution emerges from those processes- therefore the conditions that Morris speaks of are met.
  • But Morris next makes a very confused argument: Evolution has neither of these. Mutations are not “organizing” mechanisms, but disorganizing (in accord with the second law). They are commonly harmful, sometimes neutral, but never beneficial (at least as far as observed mutations are concerned). Natural selection cannot generate order, but can only “sieve out” the disorganizing mutations presented to it, thereby conserving the existing order, but never generating new order.
  • The notion that evolution (as if it’s a thing) needs to use energy is hopelessly confused. Evolution is a process that emerges from the system of life – and life certainly can use solar energy to decrease its entropy, and by extension the entropy of the biosphere. Morris slips into what is often presented as an information argument.  (Yet again – already dealt with. The pattern here is that we are seeing a shuffling around of the same tired creationists arguments.) It is first not true that most mutations are harmful. Many are silent, and many of those that are not silent are not harmful. They may be neutral, they may be a mixed blessing, and their relative benefit vs harm is likely to be situational. They may be fatal. And they also may be simply beneficial.
  • Morris finishes with a long rambling argument that evolution is religion. Evolution is promoted by its practitioners as more than mere science. Evolution is promulgated as an ideology, a secular religion — a full-fledged alternative to Christianity, with meaning and morality . . . . Evolution is a religion. This was true of evolution in the beginning, and it is true of evolution still today. Morris ties evolution to atheism, which, he argues, makes it a religion. This assumes, of course, that atheism is a religion. That depends on how you define atheism and how you define religion – but it is mostly wrong. Atheism is a lack of belief in one particular supernatural claim – that does not qualify it as a religion.
  • But mutations are not “disorganizing” – that does not even make sense. It seems to be based on a purely creationist notion that species are in some privileged perfect state, and any mutation can only take them farther from that perfection. For those who actually understand biology, life is a kluge of compromises and variation. Mutations are mostly lateral moves from one chaotic state to another. They are not directional. But they do provide raw material, variation, for natural selection. Natural selection cannot generate variation, but it can select among that variation to provide differential survival. This is an old game played by creationists – mutations are not selective, and natural selection is not creative (does not increase variation). These are true but irrelevant, because mutations increase variation and information, and selection is a creative force that results in the differential survival of better adapted variation.
  •  
    One of my earlier posts on SkepticBlog was Ten Major Flaws in Evolution: A Refutation, published two years ago. Occasionally a creationist shows up to snipe at the post, like this one: “i read this and found it funny. It supposedly gives a scientific refutation, but it is full of more bias than fox news, and a lot of emotion as well. here’s a scientific case by an actual scientists, you know, one with a ph. D, and he uses statements by some of your favorite evolutionary scientists to insist evolution doesn’t exist. i challenge you to write a refutation on this one. http://www.icr.org/home/resources/resources_tracts_scientificcaseagainstevolution/” Challenge accepted.
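As a concrete illustration of the codon arithmetic discussed in the annotations above (a four-letter DNA alphabet read in triplets gives 4^3 = 64 possible codons for about 20 amino acids plus stop signals, so some point mutations are silent), here is a minimal sketch. The codon table below is only a small excerpt of the standard genetic code, included for illustration, not the full 64-entry mapping.

```python
from itertools import product

BASES = "ACGT"
codons = ["".join(triplet) for triplet in product(BASES, repeat=3)]
print(len(codons))  # 4**3 = 64 possible three-letter codons

# A small excerpt of the standard genetic code (not the full table),
# enough to show that several codons can encode the same amino acid.
CODON_TABLE = {
    "TTT": "Phe", "TTC": "Phe",
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "TAA": "STOP",
}

def is_silent(codon, mutated):
    """A point mutation is 'silent' if both codons encode the same amino acid."""
    return CODON_TABLE.get(codon) == CODON_TABLE.get(mutated)

print(is_silent("GCT", "GCC"))  # True: both encode alanine, so the protein is unchanged
print(is_silent("TTT", "CTT"))  # False: phenylalanine becomes leucine
```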
Weiye Loh

Science-Based Medicine » Skepticism versus nihilism about cancer and science-... - 0 views

  • I’m a John Ioannidis convert, and I accept that there is a lot of medical literature that is erroneous. (Just search for Dr. Ioannidis’ last name on this blog, and you’ll find copious posts praising him and discussing his work.) In fact, as I’ve pointed out, most medical researchers instinctively know that most new scientific findings will not hold up to scrutiny, which is why we rarely accept the results of a single study, except in unusual circumstances, as being enough to change practice. I also have pointed out many times that this is not necessarily a bad thing. Replication is key to verification of scientific findings, and more often than not provocative scientific findings are not replicated. Does that mean they shouldn’t be published?
  • As for pseudoscience, I’m half tempted to agree with Dr. Spector, but just not in the way he thinks. Unfortunately, over the last 20 years or so, there has been an increasing amount of pseudoscience in the medical literature in the form of “complementary and alternative medicine” (CAM) studies of highly improbable remedies or even virtually impossible ones (i.e., homeopathy). However, that does not appear to be what Dr. Spector is talking about, which is why I looked up his references. The second reference is to an SI article from 2009 entitled Science and Pseudoscience in Adult Nutrition Research and Practice. There, and only there, did I find out just what it is that Dr. Spector apparently means by “pseudoscience”: By pseudoscience, I mean the use of inappropriate methods that frequently yield wrong or misleading answers for the type of question asked. In nutrition research, such methods also often misuse statistical evaluations.
  • Dr. Spector doesn’t really know the difference between inadequately rigorous science and pseudoscience! Now, don’t get me wrong. I know that it’s not always easy to distinguish science from pseudoscience, especially at the fringes, but in general bad science has to go a lot further than Dr. Spector thinks to merit the term “pseudoscience.” It is clear (to me, at least) from his articles that Dr. Spector throws the term “pseudoscience” around rather more loosely than he should, using it as a pejorative for any clinical science less rigorous than a randomized, double-blind, placebo-controlled trial that meets FDA standards for approval of a drug (his pharma background coming to the fore, no doubt). Pseudoscience, Dr. Spector. You keep using that word. I do not think it means what you think it means. Indeed, I almost get the impression from his articles that Dr. Spector views any study that doesn’t reach FDA-level standards for drug approval to be pseudoscience.
  • ...4 more annotations...
  • Medical science, when it works well, tends to progress from basic science, to small pilot studies, to larger randomized studies, and then–only then–to those big, rigorous, insanely expensive randomized, double-blind, placebo-controlled trials. Dr. Spector mentions hierarchies of evidence, but he seems to fall into a false dichotomy, namely that if it’s not Level I evidence, it’s crap. The problem is, as Mark pointed out, in medicine we often don’t have Level I evidence for many questions. Indeed, for some questions, we will never have Level I evidence. Clinical medicine involves making decisions in the midst of uncertainty, sometimes extreme uncertainty.
  • Dr. Spector then proceeds to paint a picture of reckless physicians proceeding on crappy studies to pump women full of hormones. Actually, it was more than a bit more complicated than that. That was the time when I was in my medical training, and I remember the discussions we had regarding the strength (or lack thereof) of the epidemiological data and the lack of good RCTs looking at HRT. I also remember that nothing works as well to relieve menopausal symptoms as HRT, an observation we have been reminded of again since 2003, which is the year when the first big study came out implicating HRT in increasing the risk of breast cancer (more later).
  • I found a rather fascinating editorial in the New England Journal of Medicine from more than 20 years ago that discussed the state of the evidence back then with regard to estrogen and breast cancer: Evidence that estrogen increases the risk of breast cancer has been surprisingly difficult to obtain. Clinical and epidemiologic studies and studies in animals strongly suggest that endogenous estrogen plays a part in causing breast cancer. If so, exogenous estrogen should be a potent promoter of breast cancer. Although more than 20 case–control and prospective studies of the relation of breast cancer and noncontraceptive estrogen use have failed to demonstrate the expected association, relatively few women in these studies used estrogen for extended periods. Studies of the use of diethylstilbestrol and oral contraceptives suggest that a long exposure or latency may be necessary to show any association between hormone use and breast cancer. In the Swedish study, only six years of follow-up was needed to demonstrate an increased risk of breast cancer with the postmenopausal use of estradiol. It should be noted, however, that half the women in the subgroup that provided detailed data on the duration of hormone use had taken estrogen for many years before their base-line prescription status was defined. The duration of estrogen exposure in these women before the diagnosis of breast cancer was probably seriously underestimated; a short latency cannot be attributed to estradiol on the basis of these data. Other recent studies of the use of noncontraceptive estrogen suggest a slightly increased risk of breast cancer after 15 to 20 years’ use.
  • even now, the evidence is conflicting regarding HRT and breast cancer, with the preponderance of evidence suggesting that mixed HRT (estrogen and progestin) significantly increases the risk of breast cancer, while estrogen-alone HRT very well might not increase the risk of breast cancer at all or (more likely) only very little. Indeed, I was just at a conference all day Saturday where data demonstrating this very point were discussed by one of the speakers. None of this stops Dr. Spector from categorically labeling estrogen as a “carcinogen that causes breast cancers that kill women.” Maybe. Maybe not. It’s actually not that clear. The problem, of course, is that, consistent with the first primary reports of WHI results, the preponderance of evidence finding health risks due to HRT have indicted the combined progestin/estrogen combinations as unsafe.
Weiye Loh

The Epidemic of Mental Illness: Why? by Marcia Angell | The New York Review of Books - 0 views

  • Is the prevalence of mental illness really that high and still climbing? Particularly if these disorders are biologically determined and not a result of environmental influences, is it plausible to suppose that such an increase is real? Or are we learning to recognize and diagnose mental disorders that were always there? On the other hand, are we simply expanding the criteria for mental illness so that nearly everyone has one? And what about the drugs that are now the mainstay of treatment? Do they work? If they do, shouldn’t we expect the prevalence of mental illness to be declining, not rising?
  • after Prozac came to market in 1987 and was intensively promoted as a corrective for a deficiency of serotonin in the brain. The number of people treated for depression tripled in the following ten years, and about 10 percent of Americans over age six now take antidepressants.
  •  
    It seems that Americans are in the midst of a raging epidemic of mental illness, at least as judged by the increase in the numbers treated for it. The tally of those who are so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) increased nearly two and a half times between 1987 and 2007, from one in 184 Americans to one in seventy-six. For children, the rise is even more startling: a thirty-five-fold increase in the same two decades. Mental illness is now the leading cause of disability in children, well ahead of physical disabilities like cerebral palsy or Down syndrome, for which the federal programs were created.
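As a quick check on the arithmetic quoted above, going from one in 184 Americans to one in seventy-six is a ratio of about 2.4, i.e. "nearly two and a half times":

```python
rate_1987 = 1 / 184   # share of Americans receiving SSI/SSDI for mental disorders, 1987
rate_2007 = 1 / 76    # the same measure in 2007
print(f"increase: {rate_2007 / rate_1987:.2f}x")   # about 2.42, nearly two and a half times
```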
Weiye Loh

Roger Pielke Jr.'s Blog: A Decrease in Floods Around the World? - 0 views

  • Bouziotas et al. presented a paper at the EGU a few weeks ago (PDF) and concluded: Analysis of trends and of aggregated time series on climatic (30-year) scale does not indicate consistent trends worldwide. Despite common perception, in general, the detected trends are more negative (less intense floods in most recent years) than positive. Similarly, Svensson et al. (2005) and Di Baldassarre et al. (2010) did not find systematical change neither in flood increasing or decreasing numbers nor change in flood magnitudes in their analysis.
  • This finding is largely consistent with Kundzewicz et al. (2005) who find: Out of more than a thousand long time series made available by the Global Runoff Data Centre (GRDC) in Koblenz, Germany, a worldwide data set consisting of 195 long series of daily mean flow records was selected, based on such criteria as length of series, currency, lack of gaps and missing values, adequate geographical distribution, and priority to smaller catchments. The analysis of annual maximum flows does not support the hypothesis of ubiquitous growth of high flows. Although 27 cases of strong, statistically significant increase were identified by the Mann-Kendall test, there are 31 decreases as well, and most (137) time series do not show any significant changes (at the 10% level). Caution is advised in interpreting these results as flooding is a complex phenomenon, caused by a number of factors that can be associated with local, regional, and hemispheric climatic processes. Moreover, river flow has strong natural variability and exhibits long-term persistence which can confound the results of trend and significance tests.
  • Destructive floods observed in the last decade all over the world have led to record high material damage. The conventional belief is that the increasing cost of floods is associated with increasing human development on flood plains (Pielke & Downton, 2000). However, the question remains as to whether or not the frequency and/or magnitude of flooding is also increasing and, if so, whether it is in response to climate variability and change. Several scenarios of future climate indicate a likelihood of increased intense precipitation and flood hazard. However, observations to date provide no conclusive and general proof as to how climate change affects flood behaviour.
  • ...1 more annotation...
  • References:
    Bouziotas, D., G. Deskos, N. Mastrantonas, D. Tsaknias, G. Vangelidis, S.M. Papalexiou, and D. Koutsoyiannis, Long-term properties of annual maximum daily river discharge worldwide, European Geosciences Union General Assembly 2011, Geophysical Research Abstracts, Vol. 13, Vienna, EGU2011-1439, European Geosciences Union, 2011.
    Kundzewicz, Z.W., D. Graczyk, T. Maurer, I. Przymusińska, M. Radziejewski, C. Svensson and M. Szwed, 2005(a): Trend detection in river flow time-series: 1. annual maximum flow. Hydrol. Sci. J., 50(5): 797-810.
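The Kundzewicz et al. study quoted above applies the Mann-Kendall test to annual maximum flow series. For readers unfamiliar with it, here is a minimal, self-contained sketch of the standard statistic (normal approximation, no correction for ties or serial correlation) applied to a synthetic flow series. It is an illustration of the method, not the authors' code; real analyses handle ties and the long-term persistence mentioned above more carefully.

```python
import math
import random

def mann_kendall(series):
    """Basic Mann-Kendall trend test (normal approximation, no tie correction).

    Returns the S statistic, the z-score, and a two-sided p-value.
    """
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value
    return s, z, p

# Synthetic 50-year series of annual maximum flows with no imposed trend.
rng = random.Random(0)
flows = [1000 + rng.gauss(0, 150) for _ in range(50)]
s, z, p = mann_kendall(flows)
print(f"S = {s}, z = {z:.2f}, p = {p:.3f}")  # with no imposed trend, a significant result would be a chance fluke
```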
Weiye Loh

"The Particle-Emissions Dilemma" by Henning Rodhe | Project Syndicate - 0 views

  • according to the United Nations’ Intergovernmental Panel on Climate Change, the cooling effect of white particles may counteract as much as about half of the warming effect of carbon dioxide. So, if all white particles were removed from the atmosphere, global warming would increase considerably. The dilemma is that all particles, whether white or black, constitute a serious problem for human health. Every year, an estimated two million people worldwide die prematurely, owing to the effects of breathing polluted air. Furthermore, sulfur-rich white particles contribute to the acidification of soil and water.
  • Naturally, measures targeting soot and other short-lived particles must not undermine efforts to reduce CO2 emissions. In the long term, emissions of CO2 and other long-lived greenhouse gases constitute the main problem. But a reduction in emissions of soot (and other short-lived climate pollutants) could alleviate the pressures on the climate in the coming decades.
  • what do we do about white particles? How do we weigh improved health and reduced mortality rates for hundreds of thousands of people against the serious consequences of global warming? It is difficult to imagine that any country’s officials would knowingly submit their population to higher health risks by not acting to reduce white particles solely because they counteract global warming. On the contrary, sulfur emissions have been reduced over the last few decades in both Europe and North America, owing to a desire to promote health and counter acidification; and China, too, seems to be taking measures to reduce sulfur emissions and improve the country’s terrible air quality. But, in other parts of the world where industrialization is accelerating, sulfur emissions continue to increase.
  • ...2 more annotations...
  • Nobel laureate Paul Crutzen has suggested another solution: manipulate the climate by releasing white sulfur particles high up in the stratosphere, where they would remain for several years, exerting a proven cooling effect on Earth’s climate without affecting human health. In 1991, the eruption of Mount Pinatubo in the Philippines created a haze of sulfur in the higher atmosphere that cooled the entire planet approximately half a degree Celsius for two years afterwards.
  • Other methods of geoengineering – that is, consciously manipulating the climate – include painting the roofs of houses white in order to increase the reflection of sunlight, covering deserts with reflective plastic, and fertilizing the seas with iron in order to increase the absorption of CO2.
  •  
    Particle emissions into Earth's atmosphere affect both human health and the climate. So we should limit them, right? For health reasons, yes, we should indeed do that; but, paradoxically, limiting such emissions would cause global warming to increase
Weiye Loh

Skepticblog » Flaws in Creationist Logic - 0 views

  • making a false analogy here by confusing the origin of life with the later evolution of life. The watch analogy was specifically offered to say that something which is complex and displays design must have been created and designed by a creator. Therefore, since we see complexity and design in life it too must have had a creator. But all the life that we know – that life which is being pointed to as complex and designed – is the result of a process (evolution) that has worked over billions of years. Life can grow, reproduce, and evolve. Watches cannot – so it is not a valid analogy.
  • Life did emerge from non-living matter, but that is irrelevant to the point. There was likely a process of chemical evolution – but still the non-living precursors to life were just chemicals, they did not display the design or complexity apparent in a watch. Ankur’s attempt to rescue this false analogy fails. And before someone has a chance to point it out – yes, I said that life displays design. It displays bottom-up evolutionary design, not top-down intelligent design. This refers to another fallacy of creationists – the assumption that all design is top down. But nature demonstrates that this is a false assumption.
  • An increase in variation is an increase in information – it takes more information to describe the greater variety. By any actual definition of information – variation increases information. Also, as I argued, when you have gene duplication you are physically increasing the number of information carrying units – that is an increase in information. There is simply no way to avoid the mountain of genetic evidence that genetic information has increased over evolutionary time through evolutionary processes.
  •  
    FLAWS IN CREATIONIST LOGIC
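The point made in the annotations above, that greater variation takes more information to describe, can be illustrated with Shannon entropy, the standard measure of the information needed to specify which variant an individual carries. This sketch uses made-up sequences and is an illustration of that general claim, not an analysis from the original post.

```python
import math
from collections import Counter

def shannon_entropy(sequences):
    """Average number of bits needed to specify which variant an individual
    carries (Shannon entropy of the variant frequencies)."""
    counts = Counter(sequences)
    total = len(sequences)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

# A hypothetical population fixed on a single allele versus one carrying
# several variants (e.g. after duplication and divergence of a gene).
uniform_population = ["ATGGCT"] * 8
varied_population = ["ATGGCT", "ATGGCT", "ATGGCA", "ATGGCA",
                     "ATGACT", "ATGACT", "TTGGCT", "TTGGCT"]

print(shannon_entropy(uniform_population))  # 0.0 bits: no variation, nothing to specify
print(shannon_entropy(varied_population))   # 2.0 bits: four equally common variants
```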
Weiye Loh

RealClimate: Going to extremes - 0 views

  • There are two new papers in Nature this week that go right to the heart of the conversation about extreme events and their potential relationship to climate change.
  • Let’s start with some very basic, but oft-confused points: Not all extremes are the same. Discussions of ‘changes in extremes’ in general without specifying exactly what is being discussed are meaningless. A tornado is an extreme event, but one whose causes, sensitivity to change and impacts have nothing to do with those related to an ice storm, or a heat wave or cold air outbreak or a drought. There is no theory or result that indicates that climate change increases extremes in general. This is a corollary of the previous statement – each kind of extreme needs to be looked at specifically – and often regionally as well. Some extremes will become more common in future (and some less so). We will discuss the specifics below. Attribution of extremes is hard. There are limited observational data to start with, insufficient testing of climate model simulations of extremes, and (so far) limited assessment of model projections.
  • The two new papers deal with the attribution of a single flood event (Pall et al), and the attribution of increased intensity of rainfall across the Northern Hemisphere (Min et al). While these issues are linked, they are quite distinct, and the two approaches are very different too.
  • ...4 more annotations...
  • The aim of the Pall et al paper was to examine a specific event – floods in the UK in Oct/Nov 2000. Normally, with a single event there isn’t enough information to do any attribution, but Pall et al set up a very large ensemble of runs starting from roughly the same initial conditions to see how often the flooding event occurred. Note that flooding was defined as more than just intense rainfall – the authors tracked runoff and streamflow as part of their modelled setup. Then they repeated the same experiments with pre-industrial conditions (less CO2 and cooler temperatures). If the amount of times a flooding event would occur increased in the present-day setup, you can estimate how much more likely the event would have been because of climate change. The results gave varying numbers but in nine out of ten cases the chance increased by more than 20%, and in two out of three cases by more than 90%. This kind of fractional attribution (if an event is 50% more likely with anthropogenic effects, that implies it is 33% attributable) has been applied also to the 2003 European heatwave, and will undoubtedly be applied more often in future. One neat and interesting feature of these experiments was that they used the climateprediction.net set up to harness the power of the public’s idle screensaver time.
  • The second paper is a more standard detection and attribution study. By looking at the signatures of climate change in precipitation intensity and comparing that to the internal variability and the observation, the researchers conclude that the probability of intense precipitation on any given day has increased by 7 percent over the last 50 years – well outside the bounds of natural variability. This is a result that has been suggested before (i.e. in the IPCC report (Groisman et al, 2005), but this was the first proper attribution study (as far as I know). The signal seen in the data though, while coherent and similar to that seen in the models, was consistently larger, perhaps indicating the models are not sensitive enough, though the El Niño of 1997/8 may have had an outsize effect.
  • Both papers were submitted in March last year, prior to the 2010 floods in Pakistan, Australia, Brazil or the Philippines, and so did not deal with any of the data or issues associated with those floods. However, while questions of attribution come up whenever something weird happens to the weather, these papers demonstrate clearly that the instant pop-attributions we are always being asked for are just not very sensible. It takes an enormous amount of work to do these kinds of tests, and they just can’t be done instantly. As they are done more often though, we will develop a better sense for the kinds of events that we can say something about, and those we can’t.
  • There is always concern that the start and end points for any trend study are not appropriate (both sides are guilty on this IMO). I have read that precipitation studies are more difficult due to sparse data, and it seems we would have seen precipitation trend graphs a lot more often by now if it were straightforward. A 7% change seems too large not to have been noted (vocally) earlier; it seems like there is more to this story.
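The fractional attribution mentioned in the notes above follows from the standard fraction of attributable risk, FAR = 1 - 1/RR, where RR is how many times more likely the event is with the forcing than without. A minimal check of the numbers quoted above:

```python
def attributable_fraction(risk_ratio):
    """Fraction of attributable risk for an event whose probability is
    risk_ratio times higher with the forcing than without (FAR = 1 - 1/RR)."""
    return 1 - 1 / risk_ratio

print(f"{attributable_fraction(1.5):.0%}")  # 50% more likely -> 33% attributable (the example above)
print(f"{attributable_fraction(1.2):.0%}")  # 20% more likely -> about 17% attributable
print(f"{attributable_fraction(1.9):.0%}")  # 90% more likely -> about 47% attributable
```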
Weiye Loh

The Breakthrough Institute: ANALYSIS: Nuclear Moratorium in Germany Could Cause Spike i... - 0 views

  • The German government announced today that it will shut down seven of the country's seventeen nuclear power plants for an indefinite period, a decision taken in response to widespread protests and a German public increasingly fearful of nuclear power after a nuclear emergency in Japan. The decision places a moratorium on a law that would extend the lifespan of these plants, and is uncharacteristic of Angela Merkel, whose government previously overturned its predecessor's decision to phase nuclear out of Germany's energy supply.
  • The seven plants, each built before 1980, represent 30% of Germany's nuclear electricity generation and 24% of its gross installed nuclear capacity. Shutting down these plants, or even just placing an indefinite hold on their operation, would be a major loss of zero-emissions generation capacity for Germany. The country currently relies on nuclear power from its seventeen nuclear power plants for about a quarter of its electricity supply.
  • The long-term closure of these plants would therefore seriously challenge Germany's carbon emissions efforts, as they try to meet the goal of 40% reduction below 1990 carbon emissions rates by 2020.
  • ...4 more annotations...
  • if lost generation were made up for entirely by coal-fired plants, carbon emissions would increase annually by as much as 33 million tons. This would represent an overall 4% annual increase in carbon emissions for the country and an 8% increase in carbon emissions for the power sector alone.
  • The moratorium could cause a spike in CO2 emissions as Germany turns to its other, more carbon-intensive sources to supply its energy demand. Already, the country has been engaged in a "dash for coal", building dozens of new coal plants in response to the perverse incentives and intense lobbying from the coal industries made possible by the European Emissions Trading Scheme. (As previously reported by Breakthrough Europe).
  • Alternatively, should the country try to replace lost generation entirely with power from renewables, it would need to more than double generation of renewable energy, from where it currently stands at 97 billion kWh to about 237 billion kWh. As part of the country's low-carbon strategy, Germany has planned to deploy at least 20% renewable energy sources by 2020. If the nation now chooses to meet this goal by displacing nuclear plants, 2020 emissions levels would be higher than had the country otherwise phased out its carbon-intensive coal or natural gas plants.
  • *Carbon emissions factors used are those estimated by the World Bank in 2009 for new coal-fired power plants (0.795 t C02/MWh) and new gas-fired power plants (0.398 t C02/MWh) **Data from Carbon Monitoring For Action, European Nuclear Society Data, and US Energy Information Administration
  •  
    Carbon dioxide emissions in Germany may increase by 4 percent annually in response to a moratorium on seven of the country's oldest nuclear power plants, as power generation is shifted from nuclear power, a zero carbon source, to the other carbon-intensive energy sources that currently make up the country's energy supply.
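The emissions figures in the analysis above can be roughly reproduced from the numbers it quotes. The sketch below is a back-of-the-envelope check, not the Breakthrough Institute's actual calculation; in particular, the total nuclear generation figure is inferred from the renewables numbers quoted above (237 minus 97 billion kWh), which is an assumption rather than a stated input.

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumption (not stated directly in the excerpt): the 17-reactor fleet produces
# roughly the 140 TWh implied by the renewables figures (237 - 97 billion kWh),
# and the seven halted pre-1980 plants supply 30% of that.
fleet_generation_twh = 237 - 97           # about 140 TWh of nuclear generation per year
halted_share = 0.30                       # the seven oldest plants
lost_generation_mwh = fleet_generation_twh * halted_share * 1_000_000

coal_factor = 0.795   # t CO2 per MWh, World Bank figure cited in the footnote above
gas_factor = 0.398    # t CO2 per MWh

coal_emissions_mt = lost_generation_mwh * coal_factor / 1_000_000
gas_emissions_mt = lost_generation_mwh * gas_factor / 1_000_000
print(f"if replaced entirely by coal: ~{coal_emissions_mt:.0f} Mt CO2/year")  # about 33 Mt, as quoted
print(f"if replaced entirely by gas:  ~{gas_emissions_mt:.0f} Mt CO2/year")
```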
Weiye Loh

Freakonomics » Scientific Literacy Does Not Increase Concern Over Climate Cha... - 0 views

  • The conventional explanation for controversy over climate change emphasizes impediments to public understanding: Limited popular knowledge of science, the inability of ordinary citizens to assess technical information, and the resulting widespread use of unreliable cognitive heuristics to assess risk. A large survey of U.S. adults (N = 1540) found little support for this account. On the whole, the most scientifically literate and numerate subjects were slightly less likely, not more, to see climate change as a serious threat than the least scientifically literate and numerate ones. More importantly, greater scientific literacy and numeracy were associated with greater cultural polarization: Respondents predisposed by their values to dismiss climate change evidence became more dismissive, and those predisposed by their values to credit such evidence more concerned, as science literacy and numeracy increased. We suggest that this evidence reflects a conflict between two levels of rationality: The individual level, which is characterized by citizens’ effective use of their knowledge and reasoning capacities to form risk perceptions that express their cultural commitments; and the collective level, which is characterized by citizens’ failure to converge on the best available scientific evidence on how to promote their common welfare. Dispelling this, “tragedy of the risk-perception commons,” we argue, should be understood as the central aim of the science of science communication.
  •  
    A new study by the Cultural Cognition Project, a team headed up by Yale law professor Dan Kahan, shows that people who are more science- and math-literate tend to be more skeptical about the consequences of climate change. Increased scientific literacy also leads to higher polarization on climate-change issues:
yongernn teo

Eli Lilly Accused of Unethical Marketing of Zyprexa - 0 views

  •  
    Summary of the Unethical Marketing of Zyprexa by Eli Lilly:

    Eli Lilly is a global pharmaceutical company. In the year 2006, it was charged with unethical marketing of Zyprexa, the top-selling drug. It is approved only for the treatment of schizophrenia and bipolar disorder.
    Firstly, Eli Lilly in a report downplayed the risks of obesity and increased blood sugar associated with Zyprexa. Although Eli Lilly was aware of these risks for at least a decade, they went ahead without emphasizing the significance of these risks, in fear of jeopardizing their sales.
    Secondly, Eli Lilly held a promotional campaign called Viva Zyprexa, encouraging off-label usage of this drug in patients who had neither schizophrenia nor bipolar disorder. This campaign was targeted at the elderly who had dementia. However, this drug was not approved to treat dementia. In fact, it could increase the risk of death in older patients who had dementia-related psychosis.
    All these were done to boost the sale of Zyprexa and to bring in more revenue for Eli Lilly. Zyprexa could alone bring in $4 billion worth of sales annually.

    Ethical Question:
    To what extent should pharmaceutical companies go to inform potential consumers on the side-effects of their drugs?

    Ethical Problem:
    The information that is disseminated through marketing campaigns has to be true and transparent. There should not be any hidden agenda behind the amount of information being released. In this case, to prevent sales from plummeting, Eli Lilly downplayed the side-effects of Zyprexa. It also encouraged off-label usage.
    It is very important that pharmaceutical companies practice good ethics as this concerns the health of their consumers. While one drug may act as a remedy for a health problem, it could possibly lead to other health problems due to the side-effects. All these have to be conveyed to the consumer who exchanges his money for the product.
    Not being transparent and honest with the information of the pr
Weiye Loh

Skepticblog » ADHD and Genetics - 0 views

  • The main problem with media reporting is that they tend to oversimplify the concept of a genetic disorder. The worst offenders speak of “the gene” for whatever is being discussed, like ADHD. There are purely genetic disorders that are the result of a mutation in a single gene, but more often diseases and disorders that have a genetic component are the complex result of multiple genes and their interaction with the environment. Therefore there is no single gene for ADHD, autism, migraines, obesity or other complex condition.
  • What this study shows is an increased risk of copy number variants (CNVs) in people with a confirmed diagnosis of ADHD. A CNV is either a deletion or duplication of genetic material. The researchers found that 78 out of 1047 controls had such CNVs (7%), while 57 out of 366 subjects with ADHD did (15%). This was a statistically significant increase. Further, CNVs were more likely to occur on genes previously associated with both autism and schizophrenia (and therefore likely to be involved in brain development).
  • The authors conclude: “Our findings provide genetic evidence of an increased rate of large CNVs in individuals with ADHD and suggest that ADHD is not purely a social construct.”
  • ...1 more annotation...
  • they are saying that this study is evidence that ADHD is not purely social. They are not saying that it is purely or even mostly genetic. In fact, only 15% of subjects with ADHD demonstrated increased CNVs. This study is a proof of concept more than anything, demonstrating that genetic makeup can contribute, at least in some cases, to the clinical syndrome of ADHD.
  •  
    ADHD AND GENETICS
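The CNV counts quoted above (78 of 1,047 controls versus 57 of 366 ADHD subjects) can be checked with a standard two-proportion z-test on the summary numbers. This is a quick, illustrative reanalysis, not the statistical method used in the original paper.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test on summary counts (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1, p2, z, p_value

p_ctrl, p_adhd, z, p = two_proportion_z(78, 1047, 57, 366)
print(f"controls with large CNVs: {p_ctrl:.1%}")   # about 7%, as quoted above
print(f"ADHD subjects with CNVs:  {p_adhd:.1%}")   # about 15 to 16%
print(f"z = {z:.1f}, two-sided p = {p:.1e}")       # far below 0.05, i.e. statistically significant
```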
Weiye Loh

Rationally Speaking: On Utilitarianism and Consequentialism - 0 views

  • Utilitarianism and consequentialism are different, yet closely related philosophical positions. Utilitarians are usually consequentialists, and the two views mesh in many areas, but each rests on a different claim
  • Utilitarianism's starting point is that we all attempt to seek happiness and avoid pain, and therefore our moral focus ought to center on maximizing happiness (or, human flourishing generally) and minimizing pain for the greatest number of people. This is both about what our goals should be and how to achieve them.
  • Consequentialism asserts that determining the greatest good for the greatest number of people (the utilitarian goal) is a matter of measuring outcome, and so decisions about what is moral should depend on the potential or realized costs and benefits of a moral belief or action.
  • ...17 more annotations...
  • first question we can reasonably ask is whether all moral systems are indeed focused on benefiting human happiness and decreasing pain.
  • Jeremy Bentham, the founder of utilitarianism, wrote the following in his Introduction to the Principles of Morals and Legislation: “When a man attempts to combat the principle of utility, it is with reasons drawn, without his being aware of it, from that very principle itself.”
  • Michael Sandel discusses this line of thought in his excellent book, Justice: What’s the Right Thing to Do?, and sums up Bentham’s argument as such: “All moral quarrels, properly understood, are [for Bentham] disagreements about how to apply the utilitarian principle of maximizing pleasure and minimizing pain, not about the principle itself.”
  • But Bentham’s definition of utilitarianism is perhaps too broad: are fundamentalist Christians or Muslims really utilitarians, just with different ideas about how to facilitate human flourishing?
  • one wonders whether this makes the word so all-encompassing in meaning as to render it useless.
  • Yet, even if pain and happiness are the objects of moral concern, so what? As philosopher Simon Blackburn recently pointed out, “Every moral philosopher knows that moral philosophy is functionally about reducing suffering and increasing human flourishing.” But is that the central and sole focus of all moral philosophies? Don’t moral systems vary in their core focuses?
  • Consider the observation that religious belief makes humans happier, on average
  • Secularists would rightly resist the idea that religious belief is moral if it makes people happier. They would reject the very idea because deep down, they value truth – a value that is non-negotiable. Utilitarians would assert that truth is just another utility, for people can only value truth if they take it to be beneficial to human happiness and flourishing.
  • We might all agree that morality is “functionally about reducing suffering and increasing human flourishing,” as Blackburn says, but how do we achieve that? Consequentialism posits that we can get there by weighing the consequences of beliefs and actions as they relate to human happiness and pain. Sam Harris recently wrote: “It is true that many people believe that ‘there are non-consequentialist ways of approaching morality,’ but I think that they are wrong. In my experience, when you scratch the surface on any deontologist, you find a consequentialist just waiting to get out. For instance, I think that Kant's Categorical Imperative only qualifies as a rational standard of morality given the assumption that it will be generally beneficial (as J.S. Mill pointed out at the beginning of Utilitarianism). Ditto for religious morality.”
  • we might wonder about the elasticity of words, in this case consequentialism. Do fundamentalist Christians and Muslims count as consequentialists? Is consequentialism so empty of content that to be a consequentialist one need only think he or she is benefiting humanity in some way?
  • Harris’ argument is that one cannot adhere to a certain conception of morality without believing it is beneficial to society
  • This still seems somewhat obvious to me as a general statement about morality, but is it really the point of consequentialism? Not really. Consequentialism is much more focused than that. Consider the issue of corporal punishment in schools. Harris has stated that we would be forced to admit that corporal punishment is moral if studies showed that “subjecting children to ‘pain, violence, and public humiliation’ leads to ‘healthy emotional development and good behavior’ (i.e., it conduces to their general well-being and to the well-being of society). If it did, well then yes, I would admit that it was moral. In fact, it would appear moral to more or less everyone.” Harris is being rhetorical – he does not believe corporal punishment is moral – but the point stands.
  • An immediate pitfall of this approach is that it does not qualify corporal punishment as the best way to raise emotionally healthy children who behave well.
  • The virtue ethicists inside us would argue that we ought not to foster a society in which people beat and humiliate children, never mind the consequences. There is also a reasonable and powerful argument based on personal freedom. Don’t children have the right to be free from violence in the public classroom? Don’t children have the right not to suffer intentional harm without consent? Isn’t that part of their “moral well-being”?
  • If consequences were really at the heart of all our moral deliberations, we might live in a very different society.
  • what if economies based on slavery lead to an increase in general happiness and flourishing for their respective societies? Would we admit slavery was moral? I hope not, because we value certain ideas about human rights and freedom. Or, what if the death penalty truly deterred crime? And what if we knew everyone we killed was guilty as charged, meaning no need for The Innocence Project? I would still object, on the grounds that it is morally wrong for us to kill people, even if they have committed the crime of which they are accused. Certain things hold, no matter the consequences.
  • We all do care about increasing human happiness and flourishing, and decreasing pain and suffering, and we all do care about the consequences of our beliefs and actions. But we focus on those criteria to differing degrees, and we have differing conceptions of how to achieve the respective goals – making us perhaps utilitarians and consequentialists in part, but not in whole.
  •  
    Is everyone a utilitarian and/or consequentialist, whether or not they know it? That is what some people - from Jeremy Bentham and John Stuart Mill to Sam Harris - would have you believe. But there are good reasons to be skeptical of such claims.
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • ...24 more annotations...
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts
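To make the fertilizer example concrete, here is a minimal sketch of the procedure. The yield numbers are invented, and scipy's Welch t-test stands in for the field-trial analysis Fisher actually developed; only the logic of "compute P, compare to .05" is the point.

```python
# A minimal sketch of null-hypothesis testing in Fisher's style.
# The yield figures are invented; a t-test stands in for the analysis
# Fisher actually developed for agricultural field trials.
from scipy import stats

control_yields    = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]   # unfertilized plots
fertilized_yields = [4.6, 4.3, 4.9, 4.4, 4.7, 4.5]   # fertilized plots

# Null hypothesis: fertilizer has no effect, so any difference is chance.
t_stat, p_value = stats.ttest_ind(fertilized_yields, control_yields,
                                  equal_var=False)

print(f"P value: {p_value:.4f}")
if p_value < 0.05:
    print("Declared 'statistically significant' by the .05 convention")
else:
    print("Null hypothesis of 'no effect' is not rejected")
```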
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What  eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
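A small worked version of the dog analogy, with invented probabilities, shows why the transposed conditional misleads: knowing that a well-fed dog barks only 5 percent of the time does not make "hungry" 95 percent likely once a bark is heard.

```python
# Worked example of the "transposed conditional" with invented numbers.
# P(bark | well-fed) = 0.05 does NOT mean P(well-fed | bark) = 0.05.
p_hungry = 0.10                 # assumed prior: dog is hungry 10% of the time
p_bark_given_hungry = 0.90      # assumed: hungry dogs usually bark
p_bark_given_fed = 0.05         # the "5 percent" figure from the analogy

p_bark = (p_bark_given_hungry * p_hungry
          + p_bark_given_fed * (1 - p_hungry))

p_hungry_given_bark = p_bark_given_hungry * p_hungry / p_bark
print(f"P(hungry | bark) = {p_hungry_given_bark:.2f}")   # ~0.67, not 0.95
```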
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretation? 
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
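A rough sketch of that distinction, using invented cure rates and sample sizes: two extra cures per thousand patients can come out "statistically significant" while remaining clinically marginal.

```python
# Sketch: a tiny effect becomes "statistically significant" with a huge sample.
# Cure rates and sample sizes are invented for illustration.
from math import sqrt
from scipy.stats import norm

n = 1_000_000                              # patients per arm
cured_new, cured_old = 102_000, 100_000    # 10.2% vs 10.0% cure rates

p_new, p_old = cured_new / n, cured_old / n
p_pool = (cured_new + cured_old) / (2 * n)
se = sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p_new - p_old) / se
p_value = 2 * norm.sf(abs(z))              # two-sided test of equal rates

print(f"absolute difference: {p_new - p_old:.3%} (2 extra cures per 1,000)")
print(f"P value: {p_value:.2e}  -> 'significant', but clinically marginal")
```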
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
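The arithmetic behind the multiplicity problem is simple. A minimal sketch, assuming independent tests each run at the .05 level:

```python
# Chance of at least one false positive when m independent null hypotheses
# are each tested at the 0.05 level (all nulls assumed true).
alpha = 0.05
for m in (1, 5, 10, 20, 50, 100):
    p_any_false_positive = 1 - (1 - alpha) ** m
    print(f"{m:4d} tests -> P(at least one fluke) = {p_any_false_positive:.2f}")
# 20 tests already give roughly a 64% chance of at least one spurious "finding".
```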
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
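The excerpt does not say which procedure the researchers use; the Benjamini-Hochberg step-up rule is one standard way of controlling the false discovery rate, sketched here with invented P values.

```python
# Sketch of the Benjamini-Hochberg procedure, one standard way to control
# the false discovery rate. The P values below are invented.
def benjamini_hochberg(p_values, q=0.05):
    """Return the indices of hypotheses rejected at false discovery rate q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * q / m:
            k_max = rank          # largest rank whose P value clears the line
    return set(order[:k_max])

p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.2, 0.5, 0.7, 0.9]
print("rejected hypotheses:", sorted(benjamini_hochberg(p_vals)))
```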
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
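That shared math can be shown directly. Under the usual normal approximation, a 95 percent confidence interval for a mean excludes zero exactly when the two-sided P value falls below .05; a minimal sketch with invented sample values:

```python
# Sketch of the duality between a 95% confidence interval and the .05 cutoff:
# the interval excludes zero exactly when the two-sided P value is below .05.
# (Normal approximation; the sample values are invented.)
import statistics as st
from math import sqrt
from scipy.stats import norm

sample = [0.8, 1.3, -0.2, 0.9, 1.1, 0.4, 0.7, 1.0, 0.2, 0.6]
mean = st.mean(sample)
se = st.stdev(sample) / sqrt(len(sample))

ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se
p_value = 2 * norm.sf(abs(mean / se))

print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f})")
print(f"two-sided P value: {p_value:.4f}")
print("CI excludes 0  <->  P < .05:",
      (ci_low > 0 or ci_high < 0) == (p_value < 0.05))
```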
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
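A quick simulation of that point, with invented numbers (50 patients per arm, an unmeasured risk factor carried by 30 percent of patients): noticeable imbalance between arms is fairly common.

```python
# Simulation sketch: randomization does not guarantee balanced groups.
# Trial size, 30% prevalence of an unmeasured risk factor, and the
# 10-percentage-point imbalance threshold are all invented for illustration.
import random

random.seed(1)
n_per_arm, prevalence, trials = 50, 0.30, 10_000
imbalanced = 0
for _ in range(trials):
    patients = [random.random() < prevalence for _ in range(2 * n_per_arm)]
    random.shuffle(patients)
    arm_a, arm_b = patients[:n_per_arm], patients[n_per_arm:]
    if abs(sum(arm_a) - sum(arm_b)) / n_per_arm >= 0.10:   # >=10-point gap
        imbalanced += 1

print(f"trials with a >=10-point imbalance in the risk factor: "
      f"{imbalanced / trials:.0%}")
```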
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
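Taking the raw figures quoted above at face value, the naive arithmetic is straightforward; this ignores the between-trial structure that a proper meta-analysis has to model, which is precisely where the disagreement arises.

```python
# Relative risk computed naively from the raw counts quoted above,
# ignoring the between-trial structure a real meta-analysis must model.
avandia_rate    = 55 / 10_000
comparison_rate = 59 / 10_000
print(f"naive relative risk: {avandia_rate / comparison_rate:.2f}")  # ~0.93
```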
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
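A standard textbook illustration of that point, with invented test characteristics rather than figures from the article: even an accurate test mostly produces false positives when the disease is rare.

```python
# Bayes' theorem in a screening context. Sensitivity, specificity and
# prevalence are invented textbook-style numbers, not from the article.
prevalence  = 0.01    # 1% of the population has the disease
sensitivity = 0.99    # P(test positive | disease)
specificity = 0.95    # P(test negative | no disease)

p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")  # ~16.7%
```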
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
Weiye Loh

Effective media reporting of sea level rise projections: 1989-2009 - 0 views

  •  
    In the mass media, sea level rise is commonly associated with the impacts of climate change due to increasing atmospheric greenhouse gases. As this issue garners ongoing international policy attention, segments of the scientific community have expressed unease about how this has been covered by mass media. Therefore, this study examines how sea level rise projections (in IPCC Assessment Reports and a sample of the scientific literature) have been represented in seven prominent United States (US) and United Kingdom (UK) newspapers over the past two decades. The research found that, with few exceptions, journalists have accurately portrayed scientific research on sea level rise projections to 2100. Moreover, while coverage has predictably increased in the past 20 years, journalists have paid particular attention to the issue in years when an IPCC report is released or when major international negotiations take place, rather than when direct research is completed and specific projections are published. We reason that the combination of these factors has contributed to a perceived problem in the sea level rise reporting by the scientific community, although systematic empirical research shows none. In this contemporary high-stakes, high-profile and highly politicized arena of climate science and policy interactions, such results mark a particular bright spot in media representations of climate change. These findings can also contribute to more measured considerations of climate impacts and policy action at a critical juncture of international negotiations and everyday decision-making associated with the causes and consequences of climate change.