Group items tagged: Level of Measurement

Weiye Loh

The Inequality That Matters - Tyler Cowen - The American Interest Magazine

  • most of the worries about income inequality are bogus, but some are probably better grounded and even more serious than even many of their heralds realize.
  • In terms of immediate political stability, there is less to the income inequality issue than meets the eye. Most analyses of income inequality neglect two major points. First, the inequality of personal well-being is sharply down over the past hundred years and perhaps over the past twenty years as well. Bill Gates is much, much richer than I am, yet it is not obvious that he is much happier if, indeed, he is happier at all. I have access to penicillin, air travel, good cheap food, the Internet and virtually all of the technical innovations that Gates does. Like the vast majority of Americans, I have access to some important new pharmaceuticals, such as statins to protect against heart disease. To be sure, Gates receives the very best care from the world’s top doctors, but our health outcomes are in the same ballpark. I don’t have a private jet or take luxury vacations, and—I think it is fair to say—my house is much smaller than his. I can’t meet with the world’s elite on demand. Still, by broad historical standards, what I share with Bill Gates is far more significant than what I don’t share with him.
  • when average people read about or see income inequality, they don’t feel the moral outrage that radiates from the more passionate egalitarian quarters of society. Instead, they think their lives are pretty good and that they either earned through hard work or lucked into a healthy share of the American dream.
  • This is why, for example, large numbers of Americans oppose the idea of an estate tax even though the current form of the tax, slated to return in 2011, is very unlikely to affect them or their estates. In narrowly self-interested terms, that view may be irrational, but most Americans are unwilling to frame national issues in terms of rich versus poor. There’s a great deal of hostility toward various government bailouts, but the idea of “undeserving” recipients is the key factor in those feelings. Resentment against Wall Street gamesters hasn’t spilled over much into resentment against the wealthy more generally. The bailout for General Motors’ labor unions wasn’t so popular either—again, obviously not because of any bias against the wealthy but because a basic sense of fairness was violated. As of November 2010, congressional Democrats are of a mixed mind as to whether the Bush tax cuts should expire for those whose annual income exceeds $250,000; that is in large part because their constituents bear no animus toward rich people, only toward undeservedly rich people.
  • envy is usually local. At least in the United States, most economic resentment is not directed toward billionaires or high-roller financiers—not even corrupt ones. It’s directed at the guy down the hall who got a bigger raise. It’s directed at the husband of your wife’s sister, because the brand of beer he stocks costs $3 a case more than yours, and so on. That’s another reason why a lot of people aren’t so bothered by income or wealth inequality at the macro level. Most of us don’t compare ourselves to billionaires. Gore Vidal put it honestly: “Whenever a friend succeeds, a little something in me dies.”
  • Occasionally the cynic in me wonders why so many relatively well-off intellectuals lead the egalitarian charge against the privileges of the wealthy. One group has the status currency of money and the other has the status currency of intellect, so might they be competing for overall social regard? The high status of the wealthy in America, or for that matter the high status of celebrities, seems to bother our intellectual class most. That class composes a very small group, however, so the upshot is that growing income inequality won’t necessarily have major political implications at the macro level.
  • All that said, income inequality does matter—for both politics and the economy.
  • The numbers are clear: Income inequality has been rising in the United States, especially at the very top. The data show a big difference between two quite separate issues, namely income growth at the very top of the distribution and greater inequality throughout the distribution. The first trend is much more pronounced than the second, although the two are often confused.
  • When it comes to the first trend, the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.1
  • At the same time, wage growth for the median earner has slowed since 1973. But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.
  • The broader change in income distribution, the one occurring beneath the very top earners, can be deconstructed in a manner that makes nearly all of it look harmless. For instance, there is usually greater inequality of income among both older people and the more highly educated, if only because there is more time and more room for fortunes to vary. Since America is becoming both older and more highly educated, our measured income inequality will increase pretty much by demographic fiat. Economist Thomas Lemieux at the University of British Columbia estimates that these demographic effects explain three-quarters of the observed rise in income inequality for men, and even more for women.2
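The composition effect described in that annotation can be made concrete with a small simulation. The sketch below uses invented numbers, not Lemieux's data: each group's income distribution is held fixed, and only the population weights shift toward the higher-dispersion group, yet the measured Gini coefficient rises.

```python
import random

def gini(incomes):
    """Gini coefficient: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n, with x sorted ascending."""
    xs = sorted(incomes)
    n = len(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * sum(xs)) - (n + 1) / n

random.seed(0)
N = 20_000
# Two groups with the same median income but different dispersion (hypothetical figures).
low_dispersion  = [random.lognormvariate(10.0, 0.3) for _ in range(N)]  # e.g. younger, less educated
high_dispersion = [random.lognormvariate(10.0, 0.7) for _ in range(N)]  # e.g. older, more educated

for share_high in (0.3, 0.5, 0.7):
    k = int(share_high * N)
    population = low_dispersion[: N - k] + high_dispersion[:k]
    print(f"high-dispersion share {share_high:.0%}: Gini = {gini(population):.3f}")
```

Nothing changes within either group; measured inequality rises purely because the population shifts toward the group whose incomes naturally spread more.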
  • Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years.3 Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all. Robert J. Gordon, an economist from Northwestern University who is hardly known as a right-wing apologist, wrote in a recent paper that “there was no increase of inequality after 1993 in the bottom 99 percent of the population”, and that whatever overall change there was “can be entirely explained by the behavior of income in the top 1 percent.”4
  • And so we come again to the gains of the top earners, clearly the big story told by the data. It’s worth noting that over this same period of time, inequality of work hours increased too. The top earners worked a lot more and most other Americans worked somewhat less. That’s another reason why high earners don’t occasion more resentment: Many people understand how hard they have to work to get there. It also seems that most of the income gains of the top earners were related to performance pay—bonuses, in other words—and not wildly out-of-whack yearly salaries.5
  • It is also the case that any society with a lot of “threshold earners” is likely to experience growing income inequality. A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.
  • The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.
  • Why is the top 1 percent doing so well?
  • Steven N. Kaplan and Joshua Rauh have recently provided a detailed estimation of particular American incomes.6 Their data do not comprise the entire U.S. population, but from partial financial records they find a very strong role for the financial sector in driving the trend toward income concentration at the top. For instance, for 2004, nonfinancial executives of publicly traded companies accounted for less than 6 percent of the top 0.01 percent income bracket. In that same year, the top 25 hedge fund managers combined appear to have earned more than all of the CEOs from the entire S&P 500. The number of Wall Street investors earning more than $100 million a year was nine times higher than the public company executives earning that amount. The authors also relate that they shared their estimates with a former U.S. Secretary of the Treasury, one who also has a Wall Street background. He thought their estimates of earnings in the financial sector were, if anything, understated.
  • Many of the other high earners are also connected to finance. After Wall Street, Kaplan and Rauh identify the legal sector as a contributor to the growing spread in earnings at the top. Yet many high-earning lawyers are doing financial deals, so a lot of the income generated through legal activity is rooted in finance. Other lawyers are defending corporations against lawsuits, filing lawsuits or helping corporations deal with complex regulations. The returns to these activities are an artifact of the growing complexity of the law and government growth rather than a tale of markets per se. Finance aside, there isn’t much of a story of market failure here, even if we don’t find the results aesthetically appealing.
  • When it comes to professional athletes and celebrities, there isn’t much of a mystery as to what has happened. Tiger Woods earns much more, even adjusting for inflation, than Arnold Palmer ever did. J.K. Rowling, the first billionaire author, earns much more than did Charles Dickens. These high incomes come, on balance, from the greater reach of modern communications and marketing. Kids all over the world read about Harry Potter. There is more purchasing power to spend on children’s books and, indeed, on culture and celebrities more generally. For high-earning celebrities, hardly anyone finds these earnings so morally objectionable as to suggest that they be politically actionable. Cultural critics can complain that good schoolteachers earn too little, and they may be right, but that does not make celebrities into political targets. They’re too popular. It’s also pretty clear that most of them work hard to earn their money, by persuading fans to buy or otherwise support their product. Most of these individuals do not come from elite or extremely privileged backgrounds, either. They worked their way to the top, and even if Rowling is not an author for the ages, her books tapped into the spirit of their time in a special way. We may or may not wish to tax the wealthy, including wealthy celebrities, at higher rates, but there is no need to “cure” the structural causes of higher celebrity incomes.
  • to be sure, the high incomes in finance should give us all pause.
  • The first factor driving high returns is sometimes called by practitioners “going short on volatility.” Sometimes it is called “negative skewness.” In plain English, this means that some investors opt for a strategy of betting against big, unexpected moves in market prices. Most of the time investors will do well by this strategy, since big, unexpected moves are outliers by definition. Traders will earn above-average returns in good times. In bad times they won’t suffer fully when catastrophic returns come in, as sooner or later is bound to happen, because the downside of these bets is partly socialized onto the Treasury, the Federal Reserve and, of course, the taxpayers and the unemployed.
  • if you bet against unlikely events, most of the time you will look smart and have the money to validate the appearance. Periodically, however, you will look very bad. Does that kind of pattern sound familiar? It happens in finance, too. Betting against a big decline in home prices is analogous to betting against the Wizards [that is, betting that a weak team will keep losing]. Every now and then such a bet will blow up in your face, though in most years that trading activity will generate above-average profits and big bonuses for the traders and CEOs.
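A rough way to see the "going short on volatility" payoff profile is the toy simulation below. All parameters are invented for illustration: the strategy posts a steady premium in almost every year, and most ten-year careers never show a blow-up at all, even though the rare crash years drag the all-in average far below what the good years suggest.

```python
import random

random.seed(1)

PREMIUM    = 0.05   # return in a "normal" year (the premium for selling crash insurance)
CRASH_LOSS = -0.80  # return in the rare year when the unlikely event happens
CRASH_PROB = 0.04   # hypothetical yearly probability of that event

def career(years=10):
    """Yearly returns for one trader going short on volatility."""
    return [CRASH_LOSS if random.random() < CRASH_PROB else PREMIUM
            for _ in range(years)]

runs = [career() for _ in range(100_000)]
clean_careers = sum(all(r > 0 for r in run) for run in runs) / len(runs)
true_average  = sum(sum(run) for run in runs) / (10 * len(runs))

print(f"ten-year careers with no blow-up year: {clean_careers:.0%}")      # ~66% look uniformly brilliant
print(f"average yearly return including blow-ups: {true_average:+.3f}")   # ~+0.016, well below the +0.05 seen in good years
```

If the blow-up losses are partly socialized, as the annotation argues, the trader's private average stays close to the good-year premium even though the all-in average is much lower.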
  • To this mix we can add the fact that many money managers are investing other people’s money. If you plan to stay with an investment bank for ten years or less, most of the people playing this investing strategy will make out very well most of the time. Everyone’s time horizon is a bit limited and you will bring in some nice years of extra returns and reap nice bonuses. And let’s say the whole thing does blow up in your face? What’s the worst that can happen? Your bosses fire you, but you will still have millions in the bank and that MBA from Harvard or Wharton. For the people actually investing the money, there’s barely any downside risk other than having to quit the party early. Furthermore, if everyone else made more or less the same mistake (very surprising major events, such as a busted housing market, affect virtually everybody), you’re hardly disgraced. You might even get rehired at another investment bank, or maybe a hedge fund, within months or even weeks.
  • Moreover, smart shareholders will acquiesce to or even encourage these gambles. They gain on the upside, while the downside, past the point of bankruptcy, is borne by the firm’s creditors. And will the bondholders object? Well, they might have a difficult time monitoring the internal trading operations of financial institutions. Of course, the firm’s trading book cannot be open to competitors, and that means it cannot be open to bondholders (or even most shareholders) either. So what, exactly, will they have in hand to object to?
  • Perhaps more important, government bailouts minimize the damage to creditors on the downside. Neither the Treasury nor the Fed allowed creditors to take any losses from the collapse of the major banks during the financial crisis. The U.S. government guaranteed these loans, either explicitly or implicitly. Guaranteeing the debt also encourages equity holders to take more risk. While current bailouts have not in general maintained equity values, and while share prices have often fallen to near zero following the bust of a major bank, the bailouts still give the bank a lifeline. Instead of the bank being destroyed, sometimes those equity prices do climb back out of the hole. This is true of the major surviving banks in the United States, and even AIG is paying back its bailout. For better or worse, we’re handing out free options on recovery, and that encourages banks to take more risk in the first place.
  • there is an unholy dynamic of short-term trading and investing, backed up by bailouts and risk reduction from the government and the Federal Reserve. This is not good. “Going short on volatility” is a dangerous strategy from a social point of view. For one thing, in so-called normal times, the finance sector attracts a big chunk of the smartest, most hard-working and most talented individuals. That represents a huge human capital opportunity cost to society and the economy at large. But more immediate and more important, it means that banks take far too many risks and go way out on a limb, often in correlated fashion. When their bets turn sour, as they did in 2007–09, everyone else pays the price.
  • And it’s not just the taxpayer cost of the bailout that stings. The financial disruption ends up throwing a lot of people out of work down the economic food chain, often for long periods. Furthermore, the Federal Reserve System has recapitalized major U.S. banks by paying interest on bank reserves and by keeping an unusually high interest rate spread, which allows banks to borrow short from Treasury at near-zero rates and invest in other higher-yielding assets and earn back lots of money rather quickly. In essence, we’re allowing banks to earn their way back by arbitraging interest rate spreads against the U.S. government. This is rarely called a bailout and it doesn’t count as a normal budget item, but it is a bailout nonetheless. This type of implicit bailout brings high social costs by slowing down economic recovery (the interest rate spreads require tight monetary policy) and by redistributing income from the Treasury to the major banks.
  • the “going short on volatility” strategy increases income inequality. In normal years the financial sector is flush with cash and high earnings. In implosion years a lot of the losses are borne by other sectors of society. In other words, financial crisis begets income inequality. Despite being conceptually distinct phenomena, the political economy of income inequality is, in part, the political economy of finance. Simon Johnson tabulates the numbers nicely: From 1973 to 1985, the financial sector never earned more than 16 percent of domestic corporate profits. In 1986, that figure reached 19 percent. In the 1990s, it oscillated between 21 percent and 30 percent, higher than it had ever been in the postwar period. This decade, it reached 41 percent. Pay rose just as dramatically. From 1948 to 1982, average compensation in the financial sector ranged between 99 percent and 108 percent of the average for all domestic private industries. From 1983, it shot upward, reaching 181 percent in 2007.7
  • There’s a second reason why the financial sector abets income inequality: the “moving first” issue. Let’s say that some news hits the market and that traders interpret this news at different speeds. One trader figures out what the news means in a second, while the other traders require five seconds. Still other traders require an entire day or maybe even a month to figure things out. The early traders earn the extra money. They buy the proper assets early, at the lower prices, and reap most of the gains when the other, later traders pile on. Similarly, if you buy into a successful tech company in the early stages, you are “moving first” in a very effective manner, and you will capture most of the gains if that company hits it big.
  • The moving-first phenomenon sums to a “winner-take-all” market. Only some relatively small number of traders, sometimes just one trader, can be first. Those who are first will make far more than those who are fourth or fifth. This difference will persist, even if those who are fourth come pretty close to competing with those who are first. In this context, first is first and it doesn’t matter much whether those who come in fourth pile on a month, a minute or a fraction of a second later. Those who bought (or sold, as the case may be) first have captured and locked in most of the available gains. Since gains are concentrated among the early winners, and the closeness of the runner-ups doesn’t so much matter for income distribution, asset-market trading thus encourages the ongoing concentration of wealth. Many investors make lots of mistakes and lose their money, but each year brings a new bunch of projects that can turn the early investors and traders into very wealthy individuals.
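The winner-take-most flavor of "moving first" can be made concrete with a stylized price path (invented numbers): after news arrives, the price closes half of the remaining gap to the new fundamental value each second, so the gain available to a trader falls off sharply with reaction time.

```python
# Stylized sketch of the "moving first" effect. Numbers are hypothetical.
OLD_PRICE, NEW_VALUE = 100.0, 110.0

def price_after(seconds):
    """Price once traders have closed part of the gap to the new value."""
    closed = 1 - 0.5 ** seconds        # half the remaining gap closes each second
    return OLD_PRICE + closed * (NEW_VALUE - OLD_PRICE)

for delay in (0, 1, 2, 5, 60):
    entry = price_after(delay)
    print(f"reacts after {delay:>2}s: buys at {entry:6.2f}, gain {NEW_VALUE - entry:5.2f}")
```

The trader who reacts instantly captures the full move; someone a few seconds behind captures almost nothing, which is the concentration the annotation describes.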
  • These two features of the problem—“going short on volatility” and “getting there first”—are related. Let’s say that Goldman Sachs regularly secures a lot of the best and quickest trades, whether because of its quality analysis, inside connections or high-frequency trading apparatus (it has all three). It builds up a treasure chest of profits and continues to hire very sharp traders and to receive valuable information. Those profits allow it to make “short on volatility” bets faster than anyone else, because if it messes up, it still has a large enough buffer to pad losses. This increases the odds that Goldman will repeatedly pull in spectacular profits.
  • Still, every now and then Goldman will go bust, or would go bust if not for government bailouts. But the odds are that in any given year it won't, because of the advantages it and other big banks have. It's as if the major banks have tapped a hole in the social till and they are drinking from it with a straw. In any given year, this practice may seem tolerable—didn't the bank earn the money fair and square by a series of fairly normal looking trades? Yet over time this situation will corrode productivity, because what the banks do bears almost no resemblance to a process of getting capital into the hands of those who can make most efficient use of it. And it leads to periodic financial explosions. That, in short, is the real problem of income inequality we face today. It's what causes the inequality at the very top of the earning pyramid that has dangerous implications for the economy as a whole.
  • What about controlling bank risk-taking directly with tight government oversight? That is not practical. There are more ways for banks to take risks than even knowledgeable regulators can possibly control; it just isn’t that easy to oversee a balance sheet with hundreds of billions of dollars on it, especially when short-term positions are wound down before quarterly inspections. It’s also not clear how well regulators can identify risky assets. Some of the worst excesses of the financial crisis were grounded in mortgage-backed assets—a very traditional function of banks—not exotic derivatives trading strategies. Virtually any asset position can be used to bet long odds, one way or another. It is naive to think that underpaid, undertrained regulators can keep up with financial traders, especially when the latter stand to earn billions by circumventing the intent of regulations while remaining within the letter of the law.
  • For the time being, we need to accept the possibility that the financial sector has learned how to game the American (and UK-based) system of state capitalism. It’s no longer obvious that the system is stable at a macro level, and extreme income inequality at the top has been one result of that imbalance. Income inequality is a symptom, however, rather than a cause of the real problem. The root cause of income inequality, viewed in the most general terms, is extreme human ingenuity, albeit of a perverse kind. That is why it is so hard to control.
  • Another root cause of growing inequality is that the modern world, by so limiting our downside risk, makes extreme risk-taking all too comfortable and easy. More risk-taking will mean more inequality, sooner or later, because winners always emerge from risk-taking. Yet bankers who take bad risks (provided those risks are legal) simply do not end up with bad outcomes in any absolute sense. They still have millions in the bank, lots of human capital and plenty of social status. We’re not going to bring back torture, trial by ordeal or debtors’ prisons, nor should we. Yet the threat of impoverishment and disgrace no longer looms the way it once did, so we no longer can constrain excess financial risk-taking. It’s too soft and cushy a world.
  • Why don’t we simply eliminate the safety net for clueless or unlucky risk-takers so that losses equal gains overall? That’s a good idea in principle, but it is hard to put into practice. Once a financial crisis arrives, politicians will seek to limit the damage, and that means they will bail out major financial institutions. Had we not passed TARP and related policies, the United States probably would have faced unemployment rates of 25 percent of higher, as in the Great Depression. The political consequences would not have been pretty. Bank bailouts may sound quite interventionist, and indeed they are, but in relative terms they probably were the most libertarian policy we had on tap. It meant big one-time expenses, but, for the most part, it kept government out of the real economy (the General Motors bailout aside).
  • We probably don’t have any solution to the hazards created by our financial sector, not because plutocrats are preventing our political system from adopting appropriate remedies, but because we don’t know what those remedies are. Yet neither is another crisis immediately upon us. The underlying dynamic favors excess risk-taking, but banks at the current moment fear the scrutiny of regulators and the public and so are playing it fairly safe. They are sitting on money rather than lending it out. The biggest risk today is how few parties will take risks, and, in part, the caution of banks is driving our current protracted economic slowdown. According to this view, the long run will bring another financial crisis once moods pick up and external scrutiny weakens, but that day of reckoning is still some ways off.
  • Is the overall picture a shame? Yes. Is it distorting resource distribution and productivity in the meantime? Yes. Will it again bring our economy to its knees? Probably. Maybe that’s simply the price of modern society. Income inequality will likely continue to rise and we will search in vain for the appropriate political remedies for our underlying problems.
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
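A toy illustration of the code-book idea (the entries below are invented; Chappe's actual tables ran to roughly eight thousand phrases and names): a whole phrase collapses to two optical symbols instead of being spelled out sign by sign.

```python
# Hypothetical Chappe-style code book: two optical symbols per common phrase.
CODE_BOOK = {
    "the enemy fleet has sailed": ("12", "07"),
    "reinforcements are on the way": ("12", "31"),
    "napoleon": ("45", "02"),
}

phrase = "the enemy fleet has sailed"
spelled_out = len(phrase.replace(" ", ""))   # ~22 signals if sent letter by letter
encoded = len(CODE_BOOK[phrase])             # 2 signals with the code book
print(f"letter by letter: {spelled_out} signals; with code book: {encoded} signals")
```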
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick's book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
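A minimal sketch of that encoding, using a small subset of the standard International Morse table (the table is common knowledge, not quoted from Gleick's book):

```python
# Subset of the standard International Morse table: dots are short pulses, dashes long.
MORSE = {
    "A": ".-",  "E": ".",   "I": "..",  "M": "--", "N": "-.",
    "O": "---", "R": ".-.", "S": "...", "T": "-",
}

def encode(text):
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(encode("morse"))   # -- --- .-. ... .
```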
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
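The abstract quantity Shannon defined is measured in bits by his entropy formula, H = -sum(p * log2 p). The snippet below is standard textbook material, summarized here rather than quoted from the review:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))     # 1.0 bit: a fair coin toss
print(entropy([0.9, 0.1]))     # ~0.47 bits: a predictable source carries less information
print(entropy([1 / 26] * 26))  # ~4.7 bits: one of 26 equally likely letters
```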
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
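The factors quoted in that passage follow directly from the eighteen-month doubling period; a quick arithmetic check:

```python
# Doubling every 18 months, as stated in the passage.
per_decade   = 2 ** (10 / 1.5)   # doublings in ten years
per_45_years = 2 ** (45 / 1.5)   # doublings in forty-five years

print(f"factor per decade: {per_decade:.0f}")       # ~102, i.e. "a factor of a hundred every decade"
print(f"factor over 45 years: {per_45_years:.2e}")  # ~1.07e9, i.e. "a factor of a billion"
```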
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
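The unit conversion behind that comparison is straightforward arithmetic (the conversion, not a figure quoted from the review):

```python
# Shannon's 1949 estimate of the Library of Congress, converted to modern units.
bits = 100e12                  # one hundred trillion bits
terabytes = bits / 8 / 1e12    # 8 bits per byte, 10**12 bytes per terabyte
print(f"{terabytes:.1f} TB")   # 12.5 TB -- roughly one commodity disk drive today
```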
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
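A minimal demonstration of the redundancy idea. The sketch below uses a naive repetition code with majority voting; Shannon's theorem actually guarantees far more efficient codes, but the toy version shows the direction: more redundancy, fewer surviving errors, even over a very noisy channel.

```python
import random

random.seed(0)

def noisy(bit, p_flip=0.2):
    """Transmit one bit over a channel that flips it with probability p_flip."""
    return bit ^ (random.random() < p_flip)

def transmit(bits, repeats):
    """Send each bit `repeats` times and decode by majority vote."""
    return [int(sum(noisy(b) for _ in range(repeats)) > repeats / 2) for b in bits]

message = [random.randint(0, 1) for _ in range(10_000)]
for repeats in (1, 3, 9):
    received = transmit(message, repeats)
    errors = sum(a != b for a, b in zip(message, received))
    print(f"repeat x{repeats}: residual error rate {errors / len(message):.4f}")
```

In the Wikipedia analogy, the "repeats" are the many readers who check and correct each article; enough redundant scrutiny pushes the error rate down despite noisy individual contributions.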
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian.2 Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
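The standard textbook argument behind this "gravity reverses the cooking rule" behavior, not spelled out in the review, is the virial theorem for a bound, self-gravitating system:

```latex
% Virial theorem for a bound self-gravitating system: 2K + U = 0.
% The total energy is therefore
E = K + U = -K, \qquad K \propto T \quad \text{(mean kinetic energy sets the temperature)}
% so radiating energy away makes the body hotter:
\Delta E < 0 \;\Rightarrow\; \Delta K > 0 \;\Rightarrow\; \Delta T > 0,
\qquad C \equiv \frac{dE}{dT} < 0 .
```

The negative effective heat capacity is why the sun, losing energy by radiation, contracts and heats up, and why temperature differences in the astronomical universe tend to grow rather than even out.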
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941.3 Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition:We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.
Weiye Loh

Science, Strong Inference -- Proper Scientific Method

  • Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist's field and methods of study are as good as every other scientist's and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.
  • Why should there be such rapid advances in some fields and not in others? I think the usual explanations that we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of research contracts - are important but inadequate. I have begun to believe that the primary factor in scientific advance is an intellectual one. These rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name of "strong inference." I believe it is important to examine this method, its use and history and rationale, and to see whether other groups and individuals might learn to adopt it profitably in their own scientific and intellectual work. In its separate elements, strong inference is just the simple and old-fashioned method of inductive inference that goes back to Francis Bacon. The steps are familiar to every college student and are practiced, off and on, by every scientist. The difference comes in their systematic application. Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly: 1) Devising alternative hypotheses; 2) Devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses; 3) Carrying out the experiment so as to get a clean result; 4) Recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain, and so on.
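The four steps read naturally as a loop. The skeleton below is a schematic paraphrase of Platt's procedure, not code from the article; the experiment-designing and consistency-checking callables are left abstract, and the point is only that every pass through the loop is an exclusion.

```python
def strong_inference(hypotheses, design_crucial_experiment, run_experiment, refine):
    """Schematic of Platt's loop: each pass excludes hypotheses rather than merely confirming one."""
    while len(hypotheses) > 1:
        # Steps 1-2: devise alternatives and a crucial experiment whose possible
        # outcomes each exclude one or more of them.
        experiment = design_crucial_experiment(hypotheses)
        # Step 3: carry out the experiment so as to get a clean result.
        outcome = run_experiment(experiment)
        # Exclusion: keep only the hypotheses consistent with the outcome.
        hypotheses = [h for h in hypotheses if h.consistent_with(outcome)]
        # Step 4: recycle, spawning subhypotheses to refine what remains.
        hypotheses = refine(hypotheses, outcome)
    return hypotheses
```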
  • On any new problem, of course, inductive inference is not as simple and certain as deduction, because it involves reaching out into the unknown. Steps 1 and 2 require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome, and exclusion will be related in a rigorous syllogism; and the question of how to generate such inventions is one which has been extensively discussed elsewhere (2, 3). What the formal schema reminds us to do is to try to make these inventions, to take the next step, to proceed to the next fork, without dawdling or getting tied up in irrelevancies.
  • It is clear why this makes for rapid and powerful progress. For exploring the unknown, there is no faster method; this is the minimum sequence of steps. Any conclusion that is not an exclusion is insecure and must be rechecked. Any delay in recycling to the next set of hypotheses is only a delay. Strong inference, and the logical tree it generates, are to inductive reasoning what the syllogism is to deductive reasoning in that it offers a regular method for reaching firm inductive conclusions one after the other as rapidly as possible.
  • "But what is so novel about this?" someone will say. This is the method of science and always has been, why give it a special name? The reason is that many of us have almost forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method- oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations. We fail to teach our students how to sharpen up their inductive inferences. And we do not realize the added power that the regular and explicit use of alternative hypothesis and sharp exclusion could give us at every step of our research.
  • A distinguished cell biologist rose and said, "No two cells give the same properties. Biology is the science of heterogeneous systems." And he added privately: "You know there are scientists, and there are people in science who are just working with these over-simplified model systems - DNA chains and in vitro systems - who are not doing science at all. We need their auxiliary work: they build apparatus, they make minor studies, but they are not scientists." To which Cy Levinthal replied: "Well, there are two kinds of biologists, those who are looking to see if there is one thing that can be understood and those who keep saying it is very complicated and that nothing can be understood. . . . You must study the simplest system you think has the properties you are interested in."
  • At the 1958 Conference on Biophysics, at Boulder, there was a dramatic confrontation between the two points of view. Leo Szilard said: "The problems of how enzymes are induced, of how proteins are synthesized, of how antibodies are formed, are closer to solution than is generally believed. If you do stupid experiments, and finish one a year, it can take 50 years. But if you stop doing experiments for a little while and think how proteins can possibly be synthesized, there are only about 5 different ways, not 50! And it will take only a few experiments to distinguish these." One of the young men added: "It is essentially the old question: How small and elegant an experiment can you perform?" These comments upset a number of those present. An electron microscopist said: "Gentlemen, this is off the track. This is philosophy of science." Szilard retorted: "I was not quarreling with third-rate scientists: I was quarreling with first-rate scientists."
  • Any criticism or challenge to consider changing our methods strikes of course at all our ego-defenses. But in this case the analytical method offers the possibility of such great increases in effectiveness that it is unfortunate that it cannot be regarded more often as a challenge to learning rather than as a challenge to combat. Many of the recent triumphs in molecular biology have in fact been achieved on just such "oversimplified model systems," very much along the analytical lines laid down in the 1958 discussion. They have not fallen to the kind of men who justify themselves by saying "No two cells are alike," regardless of how true that may ultimately be. The triumphs are in fact triumphs of a new way of thinking.
  • the emphasis on strong inference is also partly due to the nature of the fields themselves. Biology, with its vast informational detail and complexity, is a "high-information" field, where years and decades can easily be wasted on the usual type of "low-information" observations or experiments if one does not think carefully in advance about what the most important and conclusive experiments would be. And in high-energy physics, both the "information flux" of particles from the new accelerators and the million-dollar costs of operation have forced a similar analytical approach. It pays to have a top-notch group debate every experiment ahead of time; and the habit spreads throughout the field.
  • Historically, I think, there have been two main contributions to the development of a satisfactory strong-inference method. The first is that of Francis Bacon (13). He wanted a "surer method" of "finding out nature" than either the logic-chopping or all-inclusive theories of the time or the laudable but crude attempts to make inductions "by simple enumeration." He did not merely urge experiments as some suppose, he showed the fruitfulness of interconnecting theory and experiment so that the one checked the other. Of the many inductive procedures he suggested, the most important, I think, was the conditional inductive tree, which proceeded from alternative hypotheses (possible "causes," as he calls them), through crucial experiments ("Instances of the Fingerpost"), to exclusion of some alternatives and adoption of what is left ("establishing axioms"). His Instances of the Fingerpost are explicitly at the forks in the logical tree, the term being borrowed "from the fingerposts which are set up where roads part, to indicate the several directions."
  • Here was a method that could separate off the empty theories! Bacon said the inductive method could be learned by anybody, just like learning to "draw a straighter line or more perfect circle . . . with the help of a ruler or a pair of compasses." "My way of discovering sciences goes far to level men's wit and leaves but little to individual excellence, because it performs everything by the surest rules and demonstrations." Even occasional mistakes would not be fatal. "Truth will sooner come out from error than from confusion."
  • Nevertheless there is a difficulty with this method. As Bacon emphasizes, it is necessary to make "exclusions." He says, "The induction which is to be available for the discovery and demonstration of sciences and arts, must analyze nature by proper rejections and exclusions, and then, after a sufficient number of negatives come to a conclusion on the affirmative instances." "[To man] it is granted only to proceed at first by negatives, and at last to end in affirmatives after exclusion has been exhausted." Or, as the philosopher Karl Popper says today, there is no such thing as proof in science - because some later alternative explanation may be as good or better - so that science advances only by disproofs. There is no point in making hypotheses that are not falsifiable because such hypotheses do not say anything; "it must be possible for an empirical scientific system to be refuted by experience" (14).
  • The difficulty is that disproof is a hard doctrine. If you have a hypothesis and I have another hypothesis, evidently one of them must be eliminated. The scientist seems to have no choice but to be either soft-headed or disputatious. Perhaps this is why so many tend to resist the strong analytical approach and why some great scientists are so disputatious.
  • Fortunately, it seems to me, this difficulty can be removed by the use of a second great intellectual invention, the "method of multiple hypotheses," which is what was needed to round out the Baconian scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the century, who is best known for his contribution to the Chamberlin-Moulton hypothesis of the origin of the solar system.
  • Chamberlin says our trouble is that when we make a single hypothesis, we become attached to it. "The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence, and as the explanation grows into a definite theory his parental affections cluster about his offspring and it grows more and more dear to him. . . . There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory..." "To avoid this grave danger, the method of multiple working hypotheses is urged. It differs from the simple working hypothesis in that it distributes the effort and divides the affections. . . . Each hypothesis suggests its own criteria, its own method of proof, its own method of developing the truth, and if a group of hypotheses encompass the subject on all sides, the total outcome of means and of methods is full and rich."
  • The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas. It becomes much easier then for each of us to aim every day at conclusive disproofs - at strong inference - without either reluctance or combativeness. In fact, when there are multiple hypotheses, which are not anyone's "personal property," and when there are crucial experiments to test them, the daily life in the laboratory takes on an interest and excitement it never had, and the students can hardly wait to get to work to see how the detective story will come out. It seems to me that this is the reason for the development of those distinctive habits of mind and the "complex thought" that Chamberlin described, the reason for the sharpness, the excitement, the zeal, the teamwork - yes, even international teamwork - in molecular biology and high-energy physics today. What else could be so effective?
  • Unfortunately, I think, there are other areas of science today that are sick by comparison, because they have forgotten the necessity for alternative hypotheses and disproof. Each man has only one branch - or none - on the logical tree, and it twists at random without ever coming to the need for a crucial decision at any point. We can see from the external symptoms that there is something scientifically wrong. The Frozen Method, The Eternal Surveyor, The Never Finished, The Great Man With a Single Hypothesis, The Little Club of Dependents, The Vendetta, The All-Encompassing Theory Which Can Never Be Falsified.
  • a "theory" of this sort is not a theory at all, because it does not exclude anything. It predicts everything, and therefore does not predict anything. It becomes simply a verbal formula which the graduate student repeats and believes because the professor has said it so often. This is not science, but faith; not theory, but theology. Whether it is hand-waving or number-waving, or equation-waving, a theory is not a theory unless it can be disproved. That is, unless it can be falsified by some possible experimental outcome.
  • the work methods of a number of scientists have been testimony to the power of strong inference. Is success not due in many cases to systematic use of Bacon's "surest rules and demonstrations" as much as to rare and unattainable intellectual power? Faraday's famous diary (16), or Fermi's notebooks (3, 17), show how these men believed in the effectiveness of daily steps in applying formal inductive methods to one problem after another.
  • Surveys, taxonomy, design of equipment, systematic measurements and tables, theoretical computations - all have their proper and honored place, provided they are parts of a chain of precise induction of how nature works. Unfortunately, all too often they become ends in themselves, mere time-serving from the point of view of real scientific advance, a hypertrophied methodology that justifies itself as a lore of respectability.
  • We speak piously of taking measurements and making small studies that will "add another brick to the temple of science." Most such bricks just lie around the brickyard (20). Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.
  • Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important. Strong inference redirects a man to problem-orientation, but it requires him to be willing repeatedly to put aside his last methods and teach himself new ones.
  • anyone who asks the question about scientific effectiveness will also conclude that much of the mathematizing in physics and chemistry today is irrelevant if not misleading. The great value of mathematical formulation is that when an experiment agrees with a calculation to five decimal places, a great many alternative hypotheses are pretty well excluded (though the Bohr theory and the Schrödinger theory both predict exactly the same Rydberg constant!). But when the fit is only to two decimal places, or one, it may be a trap for the unwary; it may be no better than any rule-of-thumb extrapolation, and some other kind of qualitative exclusion might be more rigorous for testing the assumptions and more important to scientific understanding than the quantitative fit.
  • Today we preach that science is not science unless it is quantitative. We substitute correlations for causal studies, and physical equations for organic reasoning. Measurements and equations are supposed to sharpen thinking, but, in my observation, they more often tend to make the thinking noncausal and fuzzy. They tend to become the object of scientific manipulation instead of auxiliary tests of crucial inferences.
  • Many - perhaps most - of the great issues of science are qualitative, not quantitative, even in physics and chemistry. Equations and measurements are useful when and only when they are related to proof; but proof or disproof comes first and is in fact strongest when it is absolutely convincing without any quantitative measurement.
  • you can catch phenomena in a logical box or in a mathematical box. The logical box is coarse but strong. The mathematical box is fine-grained but flimsy. The mathematical box is a beautiful way of wrapping up a problem, but it will not hold the phenomena unless they have been caught in a logical box to begin with.
  • Of course it is easy - and all too common - for one scientist to call the others unscientific. My point is not that my particular conclusions here are necessarily correct, but that we have long needed some absolute standard of possible scientific effectiveness by which to measure how well we are succeeding in various areas - a standard that many could agree on and one that would be undistorted by the scientific pressures and fashions of the times and the vested interests and busywork that they develop. It is not public evaluation I am interested in so much as a private measure by which to compare one's own scientific performance with what it might be. I believe that strong inference provides this kind of standard of what the maximum possible scientific effectiveness could be - as well as a recipe for reaching it.
  • The strong-inference point of view is so resolutely critical of methods of work and values in science that any attempt to compare specific cases is likely to sound both smug and destructive. Mainly one should try to teach it by example and by exhorting to self-analysis and self-improvement only in general terms.
  • one severe but useful private test - a touchstone of strong inference - that removes the necessity for third-person criticism, because it is a test that anyone can learn to carry with him for use as needed. It is our old friend the Baconian "exclusion," but I call it "The Question." Obviously it should be applied as much to one's own thinking as to others'. It consists of asking in your own mind, on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"; or, on hearing a scientific experiment described, "But sir, what hypothesis does your experiment disprove?"
  • It is not true that all science is equal; or that we cannot justly compare the effectiveness of scientists by any method other than a mutual-recommendation system. The man to watch, the man to put your money on, is not the man who wants to make "a survey" or a "more detailed study" but the man with the notebook, the man with the alternative hypotheses and the crucial experiments, the man who knows how to answer your Question of disproof and is already working on it.
  •  
    There is so much bad science and bad statistics in media reports, publications, and everyday conversation that I think it is important to understand facts, proofs, and the associated pitfalls. A small illustrative sketch of Platt's strong-inference loop follows.
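To make the excerpts above concrete, here is a minimal Python sketch of Platt's strong-inference cycle: keep a set of alternative hypotheses, run the crucial experiment that best discriminates among them, exclude every alternative whose prediction fails, and recycle. The hypotheses, experiments, and observed outcomes are invented for illustration and are not taken from Platt's paper.

```python
# Illustrative sketch of the strong-inference cycle (all hypotheses,
# experiments, and outcomes below are invented examples).

def strong_inference(hypotheses, experiments, run_experiment):
    """hypotheses: set of hypothesis names.
    experiments: dict {experiment name: {hypothesis: predicted outcome}}.
    run_experiment: callable returning the observed outcome of an experiment."""
    alive = set(hypotheses)
    while len(alive) > 1 and experiments:
        # Crude "Instance of the Fingerpost": prefer the experiment whose
        # predictions split the surviving hypotheses into the most groups.
        best = max(experiments, key=lambda name: len({experiments[name][h] for h in alive}))
        predictions = experiments.pop(best)
        observed = run_experiment(best)
        # Exclude every alternative whose prediction disagrees with observation.
        alive = {h for h in alive if predictions.get(h) == observed}
        print(f"{best}: observed {observed!r}; surviving hypotheses: {sorted(alive)}")
    return alive

# Invented toy problem: three candidate causes, two crucial experiments.
hypotheses = {"temperature", "pH", "contamination"}
experiments = {
    "hold temperature constant": {"temperature": "effect gone",
                                  "pH": "effect persists",
                                  "contamination": "effect persists"},
    "filter the sample":         {"temperature": "effect persists",
                                  "pH": "effect persists",
                                  "contamination": "effect gone"},
}
observed_outcomes = {"hold temperature constant": "effect persists",
                     "filter the sample": "effect gone"}

survivors = strong_inference(hypotheses, experiments, observed_outcomes.get)
print("Remaining alternative(s):", survivors or "none - time to devise new hypotheses")
```

The point is only the shape of the loop: every pass is designed to exclude something, which is exactly what "The Question" above demands of an experiment.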
Weiye Loh

A Brief Primer on Criminal Statistics « Canada « Skeptic North - 0 views

  • Whole Numbers versus Rates
  • Occurrences of crime are properly expressed as the number of incidences per 100,000 people. Total numbers are not informative on their own, and it is very easy to manipulate an argument by cherry-picking between a total number and a rate. Beware of claims about crime that use raw incidence numbers. When a change in whole incidence numbers is observed, this might not have any bearing on crime levels at all, because levels of crime are dependent on population (a worked example follows this list).
  • Reliability: Not every criminal statistic is equally reliable. Even though we have measures of incidences of crimes across types and subtypes, not every one of these statistics samples the actual incidence of these crimes in the same way. Indeed, very few measure the total incidences very reliably at all. The crime rates that you are most likely to encounter capture only crimes known and substantiated by police. These numbers are vulnerable to variances in how crimes become known and verified by police in the first place. Crimes very often go unreported or undiscovered. Some crimes are more likely to go unreported than others (such as sexual assaults and drug possession), and some crimes are more difficult to substantiate as having occurred than others.
  • ...9 more annotations...
  • Complicating matters further is the fact that these reporting patterns vary over time and are reflected in observed trends.   So, when a change in the police reported crime rate is observed from year to year or across a span of time we may be observing a “real” change, we may be observing a change in how these crimes come to the attention of police, or we may be seeing a mixture of both.
  • Generally, the most reliable criminal statistic is the homicide rate – it’s very difficult, though not impossible, to miss a dead body. In fact, homicides in Canada are counted in the year that they become known to police and not in the year that they occurred.  Our most reliable number is very, very close, but not infallible.
  • Crimes known to the police nearly always under-measure the true incidence of crime, so other measures are needed to better complete our understanding. The reported crimes measure is reported every year to Statistics Canada from data that makes up the Uniform Crime Reporting Survey. This is a very rich data set that measures police data very accurately but tells us nothing about unreported crime.
  • We do have some data on unreported crime available. Victims are interviewed (after self-identifying) via the General Social Survey. The survey is conducted every five years
  • This measure captures information in eight crime categories both reported, and not reported to police. It has its own set of interpretation problems and pathways to misuse. The survey relies on self-reporting, so the accuracy of the information will be open to errors due to faulty memories, willingness to report, recording errors etc.
  • From the last data set available, self-identified victims did not report 69% of violent victimizations (sexual assault, robbery and physical assault), 62% of household victimizations (break and enter, motor vehicle/parts theft, household property theft and vandalism), and 71% of personal property theft victimizations.
  • while people generally understand that crimes go unreported and unknown to police, they tend to be surprised and perhaps even shocked at the actual amounts that go unreported. These numbers sound scary. However, the most common reasons reported by victims of violent and household crime for not reporting were: believing the incident was not important enough (68%), believing the police couldn’t do anything about the incident (59%), and stating that the incident was dealt with in another way (42%).
  • Also, note that the survey indicated that 82% of violent incidents did not result in injuries to the victims. Do claims that we should do something about all this hidden crime make sense in light of what this crime looks like in the limited way we can understand it? How could you be reasonably certain that whatever intervention proposed would in fact reduce the actual amount of crime and not just reduce the amount that goes unreported?
  • Data is collected at all levels of the crime continuum with differing levels of accuracy and applicability. This is nicely reflected in the concept of “the crime funnel”. All criminal incidents that are ever committed are at the opening of the funnel. There is “loss” all along the way to the bottom where only a small sample of incidences become known with charges laid, prosecuted successfully and responded to by the justice system.  What goes into the top levels of the funnel affects what we can know at any other point later.
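As a small worked example of the rate arithmetic described in the first annotation above (all numbers invented), the following Python sketch shows how a rising raw count of incidents can coexist with a falling rate per 100,000 once population growth is taken into account.

```python
# Invented-numbers sketch: why a raw count of incidents can mislead while the
# rate per 100,000 residents tells a different story.

def rate_per_100k(incidents, population):
    return incidents / population * 100_000

# Hypothetical city across two years: more incidents in absolute terms,
# but the population grew faster, so the rate actually fell.
year_a = {"incidents": 520, "population": 480_000}
year_b = {"incidents": 545, "population": 530_000}

for label, data in (("Year A", year_a), ("Year B", year_b)):
    print(f"{label}: {data['incidents']} incidents, "
          f"{rate_per_100k(data['incidents'], data['population']):.1f} per 100,000")

# Output: roughly 108.3 vs 102.8 per 100,000. The headline "crime is up about
# 5%" is true of the raw count even though the crime rate fell.
```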
Weiye Loh

Jonathan Stray » Measuring and improving accuracy in journalism - 0 views

  • Accuracy is a hard thing to measure because it’s a hard thing to define. There are subjective and objective errors, and no standard way of determining whether a reported fact is true or false
  • The last big study of mainstream reporting accuracy found errors (defined below) in 59% of 4,800 stories across 14 metro newspapers. This level of inaccuracy — where about one in every two articles contains an error — has persisted for as long as news accuracy has been studied, over seven decades now.
  • With the explosion of available information, more than ever it’s time to get serious about accuracy, about knowing which sources can be trusted. Fortunately, there are emerging techniques that might help us to measure media accuracy cheaply, and then increase it.
  • ...7 more annotations...
  • We could continuously sample a news source’s output to produce ongoing accuracy estimates, and build social software to help the audience report and filter errors. Meticulously applied, this approach would give a measure of the accuracy of each information source, and a measure of the efficiency of their corrections process (currently only about 3% of all errors are corrected.)
  • Real world reporting isn’t always clearly “right” or “wrong,” so it will often be hard to decide whether something is an error or not. But we’re not going for ultimate Truth here,  just a general way of measuring some important aspect of the idea we call “accuracy.” In practice it’s important that the error counting method is simple, clear and repeatable, so that you can compare error rates of different times and sources.
  • Subjective errors, though by definition involving judgment, should not be dismissed as merely differences in opinion. Sources found such errors to be about as common as factual errors and often more egregious [as rated by the sources.] But subjective errors are a very complex category
  • One of the major problems with previous news accuracy metrics is the effort and time required to produce them. In short, existing accuracy measurement methods are expensive and slow. I’ve been wondering if we can do better, and a simple idea comes to mind: sampling. The core idea is this: news sources could take an ongoing random sample of their output and check it for accuracy — a fact check spot check
  • Standard statistical theory tells us what the error on that estimate will be for any given number of samples (if I’ve got this right, the relevant formula is the standard error of a population proportion estimate without replacement). At a sample rate of a few stories per day, daily estimates of error rate won’t be worth much. But weekly and monthly aggregates will start to produce useful accuracy estimates (a worked numerical sketch appears below).
  • the first step would be admitting how inaccurate journalism has historically been. Then we have to come up with standardized accuracy evaluation procedures, in pursuit of metrics that capture enough of what we mean by “true” to be worth optimizing. Meanwhile, we can ramp up the efficiency of our online corrections processes until we find as many useful, legitimate errors as possible with as little staff time as possible. It might also be possible to do data mining on types of errors and types of stories to figure out if there are patterns in how an organization fails to get facts right.
  • I’d love to live in a world where I could compare the accuracy of information sources, where errors got found and fixed with crowd-sourced ease, and where news organizations weren’t shy about telling me what they did and did not know. Basic factual accuracy is far from the only measure of good journalism, but perhaps it’s an improvement over the current sad state of affairs
  •  
    Professional journalism is supposed to be "factual," "accurate," or just plain true. Is it? Has news accuracy been getting better or worse in the last decade? How does it vary between news organizations, and how do other information sources rate? Is professional journalism more or less accurate than everything else on the internet? These all seem like important questions, so I've been poking around, trying to figure out what we know and don't know about the accuracy of our news sources. Meanwhile, the online news corrections process continues to evolve, which gives us hope that the news will become more accurate in the future.
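A hedged numerical sketch of the spot-check idea described in the annotations above: sample a fraction of a month's output at random, count the stories containing an error, and report the estimated error rate with a standard error that uses the finite-population correction (since stories are sampled without replacement). The story and error counts below are invented; the formula is the textbook one for a proportion estimate, not anything prescribed by the article.

```python
# Invented-numbers sketch of a "fact check spot check" with a finite-population
# standard error for the estimated error rate.

import math

def error_rate_estimate(errors_found, sample_size, total_stories):
    p_hat = errors_found / sample_size
    fpc = (total_stories - sample_size) / (total_stories - 1)  # finite-population correction
    se = math.sqrt(p_hat * (1 - p_hat) / sample_size * fpc)
    return p_hat, se

# Hypothetical month: 600 stories published, 90 spot-checked, 41 with an error.
p_hat, se = error_rate_estimate(41, 90, 600)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"Estimated error rate: {p_hat:.1%} (95% interval roughly {low:.1%} to {high:.1%})")

# A few checks per day are too noisy for daily estimates, but monthly
# aggregates like this one begin to be informative, as the article argues.
```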
Weiye Loh

The Matthew Effect § SEEDMAGAZINE.COM - 0 views

  • For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away. —Matthew 25:29
  • Sociologist Robert K. Merton was the first to publish a paper on the similarity between this phrase in the Gospel of Matthew and the realities of how scientific research is rewarded
  • Even if two researchers do similar work, the most eminent of the pair will get more acclaim, Merton observed—more praise within the community, more or better job offers, better opportunities. And it goes without saying that even if a graduate student publishes stellar work in a prestigious journal, their well-known advisor is likely to get more of the credit. 
  • ...7 more annotations...
  • Merton published his theory, called the “Matthew Effect,” in 1968. At that time, the average age of a biomedical researcher in the US receiving his or her first significant funding was 35 or younger. That meant that researchers who had little in terms of fame (at 35, they would have completed a PhD and a post-doc and would be just starting out on their own) could still get funded if they wrote interesting proposals. So Merton’s observation about getting credit for one’s work, however true in terms of prestige, wasn’t adversely affecting the funding of new ideas.
  • Over the last 40 years, the importance of fame in science has increased. The effect has compounded because famous researchers have gathered the smartest and most ambitious graduate students and post-docs around them, so that each notable paper from a high-wattage group bootstraps their collective power. The famous grow more famous, and the younger researchers in their coterie are able to use that fame to their benefit. The effect of this concentration of power has finally trickled down to the level of funding: The average age on first receipt of the most common “starter” grants at the NIH is now almost 42. This means younger researchers without the strength of a fame-based community are cut out of the funding process, and their ideas, separate from an older researcher’s sphere of influence, don’t get pursued. This causes a founder effect in modern science, where the prestigious few dictate the direction of research. It’s not only unfair—it’s also actively dangerous to science’s progress.
  • How can we fund science in a way that is fair? By judging researchers independently of their fame—in other words, not by how many times their papers have been cited. By judging them instead via new measures, measures that until recently have been too ephemeral to use.
  • Right now, the gold standard worldwide for measuring a scientist’s worth is the number of times his or her papers are cited, along with the importance of the journal where the papers were published. Decisions of funding, faculty positions, and eminence in the field all derive from a scientist’s citation history. But relying on these measures entrenches the Matthew Effect: Even when the lead author is a graduate student, the majority of the credit accrues to the much older principal investigator. And an influential lab can inflate its citations by referring to its own work in papers that themselves go on to be heavy-hitters.
  • what is most profoundly unbalanced about relying on citations is that the paper-based metric distorts the reality of the scientific enterprise. Scientists make data points, narratives, research tools, inventions, pictures, sounds, videos, and more. Journal articles are a compressed and heavily edited version of what happens in the lab.
  • We have the capacity to measure the quality of a scientist across multiple dimensions, not just in terms of papers and citations. Was the scientist’s data online? Was it comprehensible? Can I replicate the results? Run the code? Access the research tools? Use them to write a new paper? What ideas were examined and discarded along the way, so that I might know the reality of the research? What is the impact of the scientist as an individual, rather than the impact of the paper he or she wrote? When we can see the scientist as a whole, we’re less prone to relying on reputation alone to assess merit.
  • Multidimensionality is one of the only counters to the Matthew Effect we have available. In forums where this kind of meritocracy prevails over seniority, like Linux or Wikipedia, the Matthew Effect is much less pronounced. And we have the capacity to measure each of these individual factors of a scientist’s work, using the basic discourse of the Web: the blog, the wiki, the comment, the trackback. We can find out who is talented in a lab, not just who was smart enough to hire that talent. As we develop the ability to measure multiple dimensions of scientific knowledge creation, dissemination, and re-use, we open up a new way to recognize excellence. What we can measure, we can value.
  •  
    WHEN IT COMES TO SCIENTIFIC PUBLISHING AND FAME, THE RICH GET RICHER AND THE POOR GET POORER. HOW CAN WE BREAK THIS FEEDBACK LOOP?
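The rich-get-richer dynamic described above can be illustrated with a toy preferential-attachment simulation; it is not taken from the article, and the researcher count, citation count, and baseline weight are arbitrary choices. Each new citation goes to a researcher with probability proportional to the citations that researcher already holds.

```python
# Toy Matthew Effect simulation: citations awarded by preferential attachment
# versus uniformly at random. All parameters are invented for illustration.

import random

random.seed(1)

def top_decile_share(n_researchers=100, n_citations=10_000, preferential=True, baseline=1.0):
    counts = [0] * n_researchers
    for _ in range(n_citations):
        if preferential:
            weights = [c + baseline for c in counts]   # the already-cited are favoured
        else:
            weights = [1.0] * n_researchers            # every researcher equally likely
        winner = random.choices(range(n_researchers), weights=weights)[0]
        counts[winner] += 1
    counts.sort(reverse=True)
    return sum(counts[:n_researchers // 10]) / n_citations

print(f"Top 10% share, preferential attachment: {top_decile_share(preferential=True):.0%}")
print(f"Top 10% share, uniform chance:          {top_decile_share(preferential=False):.0%}")

# Under preferential attachment the top decile typically ends up with several
# times the share it gets under uniform allocation, even though the simulated
# researchers are identical in "talent".
```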
Weiye Loh

Rationally Speaking: The problem of replicability in science - 0 views

  • The problem of replicability in science from xkcdby Massimo Pigliucci
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • ...18 more annotations...
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant time of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet by my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which meant to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well known statistical phenomenon of the regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of the regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment to repeat itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated.
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well-known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, "sexy" results, which creates a powerful filter against negative or marginally significant results. What you see in science journals, in other words, isn’t a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty handed (a toy simulation of this filter appears below).
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPU’s — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, it turns out that about ⅓ of published studies is never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh
       
      Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change)
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh
       
      It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
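As a rough illustration of two mechanisms discussed in the annotations above - publication bias and regression toward the mean - the following sketch uses invented parameters: a small true effect, noisy individual studies, and a publication filter that only lets flashy initial results through. Replications, which face no such filter, come in much closer to the true value, producing a "decline effect" with no fraud and no change in nature.

```python
# Toy decline-effect simulation (all parameters invented): publication bias
# inflates first-published effect sizes; unfiltered replications regress back
# toward the true effect.

import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.10         # the real (small) effect size
NOISE_SD = 0.15            # sampling noise in any single study
PUBLISH_THRESHOLD = 0.30   # only flashy initial results get "published"

def run_study():
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# Initial literature: keep only studies whose observed effect clears the bar.
initial_published = [e for e in (run_study() for _ in range(1000)) if e >= PUBLISH_THRESHOLD]

# Replications of those published findings face no publication filter.
replications = [run_study() for _ in initial_published]

print(f"True effect:                   {TRUE_EFFECT:.2f}")
print(f"Mean effect, initial papers:   {statistics.mean(initial_published):.2f}")
print(f"Mean effect, replications:     {statistics.mean(replications):.2f}")
```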
Weiye Loh

The Black Swan of Cairo | Foreign Affairs - 0 views

  • It is both misguided and dangerous to push unobserved risks further into the statistical tails of the probability distribution of outcomes and allow these high-impact, low-probability "tail risks" to disappear from policymakers' fields of observation.
  • Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.
  • Seeking to restrict variability seems to be good policy (who does not prefer stability to chaos?), so it is with very good intentions that policymakers unwittingly increase the risk of major blowups. And it is the same misperception of the properties of natural systems that led to both the economic crisis of 2007-8 and the current turmoil in the Arab world. The policy implications are identical: to make systems robust, all risks must be visible and out in the open -- fluctuat nec mergitur (it fluctuates but does not sink) goes the Latin saying.
  • ...21 more annotations...
  • Just as a robust economic system is one that encourages early failures (the concepts of "fail small" and "fail fast"), the U.S. government should stop supporting dictatorial regimes for the sake of pseudostability and instead allow political noise to rise to the surface. Making an economy robust in the face of business swings requires allowing risk to be visible; the same is true in politics.
  • Both the recent financial crisis and the current political crisis in the Middle East are grounded in the rise of complexity, interdependence, and unpredictability. Policymakers in the United Kingdom and the United States have long promoted policies aimed at eliminating fluctuation -- no more booms and busts in the economy, no more "Iranian surprises" in foreign policy. These policies have almost always produced undesirable outcomes. For example, the U.S. banking system became very fragile following a succession of progressively larger bailouts and government interventions, particularly after the 1983 rescue of major banks (ironically, by the same Reagan administration that trumpeted free markets). In the United States, promoting these bad policies has been a bipartisan effort throughout. Republicans have been good at fragilizing large corporations through bailouts, and Democrats have been good at fragilizing the government. At the same time, the financial system as a whole exhibited little volatility; it kept getting weaker while providing policymakers with the illusion of stability, illustrated most notably when Ben Bernanke, who was then a member of the Board of Governors of the U.S. Federal Reserve, declared the era of "the great moderation" in 2004.
  • Washington stabilized the market with bailouts and by allowing certain companies to grow "too big to fail." Because policymakers believed it was better to do something than to do nothing, they felt obligated to heal the economy rather than wait and see if it healed on its own.
  • The foreign policy equivalent is to support the incumbent no matter what. And just as banks took wild risks thanks to Greenspan's implicit insurance policy, client governments such as Hosni Mubarak's in Egypt for years engaged in overt plunder thanks to similarly reliable U.S. support.
  • Those who seek to prevent volatility on the grounds that any and all bumps in the road must be avoided paradoxically increase the probability that a tail risk will cause a major explosion.
  • In the realm of economics, price controls are designed to constrain volatility on the grounds that stable prices are a good thing. But although these controls might work in some rare situations, the long-term effect of any such system is an eventual and extremely costly blowup whose cleanup costs can far exceed the benefits accrued. The risks of a dictatorship, no matter how seemingly stable, are no different, in the long run, from those of an artificially controlled price.
  • Such attempts to institutionally engineer the world come in two types: those that conform to the world as it is and those that attempt to reform the world. The nature of humans, quite reasonably, is to intervene in an effort to alter their world and the outcomes it produces. But government interventions are laden with unintended -- and unforeseen -- consequences, particularly in complex systems, so humans must work with nature by tolerating systems that absorb human imperfections rather than seek to change them.
  • What is needed is a system that can prevent the harm done to citizens by the dishonesty of business elites; the limited competence of forecasters, economists, and statisticians; and the imperfections of regulation, not one that aims to eliminate these flaws. Humans must try to resist the illusion of control: just as foreign policy should be intelligence-proof (it should minimize its reliance on the competence of information-gathering organizations and the predictions of "experts" in what are inherently unpredictable domains), the economy should be regulator-proof, given that some regulations simply make the system itself more fragile. Due to the complexity of markets, intricate regulations simply serve to generate fees for lawyers and profits for sophisticated derivatives traders who can build complicated financial products that skirt those regulations.
  • The life of a turkey before Thanksgiving is illustrative: the turkey is fed for 1,000 days and every day seems to confirm that the farmer cares for it -- until the last day, when confidence is maximal. The "turkey problem" occurs when a naive analysis of stability is derived from the absence of past variations. Likewise, confidence in stability was maximal at the onset of the financial crisis in 2007.
  • The turkey problem for humans is the result of mistaking one environment for another. Humans simultaneously inhabit two systems: the linear and the complex. The linear domain is characterized by its predictability and the low degree of interaction among its components, which allows the use of mathematical methods that make forecasts reliable. In complex systems, there is an absence of visible causal links between the elements, masking a high degree of interdependence and extremely low predictability. Nonlinear elements are also present, such as those commonly known, and generally misunderstood, as "tipping points." Imagine someone who keeps adding sand to a sand pile without any visible consequence, until suddenly the entire pile crumbles. It would be foolish to blame the collapse on the last grain of sand rather than the structure of the pile, but that is what people do consistently, and that is the policy error.
  • Engineering, architecture, astronomy, most of physics, and much of common science are linear domains. The complex domain is the realm of the social world, epidemics, and economics. Crucially, the linear domain delivers mild variations without large shocks, whereas the complex domain delivers massive jumps and gaps. Complex systems are misunderstood, mostly because humans' sophistication, obtained over the history of human knowledge in the linear domain, does not transfer properly to the complex domain. Humans can predict a solar eclipse and the trajectory of a space vessel, but not the stock market or Egyptian political events. All man-made complex systems have commonalities and even universalities. Sadly, deceptive calm (followed by Black Swan surprises) seems to be one of those properties.
  • The system is responsible, not the components. But after the financial crisis of 2007-8, many people thought that predicting the subprime meltdown would have helped. It would not have, since it was a symptom of the crisis, not its underlying cause. Likewise, Obama's blaming "bad intelligence" for his administration's failure to predict the crisis in Egypt is symptomatic of both the misunderstanding of complex systems and the bad policies involved.
  • Obama's mistake illustrates the illusion of local causal chains -- that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect. The final episode of the upheaval in Egypt was unpredictable for all observers, especially those involved. As such, blaming the CIA is as foolish as funding it to forecast such events. Governments are wasting billions of dollars on attempting to predict events that are produced by interdependent systems and are therefore not statistically understandable at the individual level.
  • Political and economic "tail events" are unpredictable, and their probabilities are not scientifically measurable. No matter how many dollars are spent on research, predicting revolutions is not the same as counting cards; humans will never be able to turn politics into the tractable randomness of blackjack.
  • Most explanations being offered for the current turmoil in the Middle East follow the "catalysts as causes" confusion. The riots in Tunisia and Egypt were initially attributed to rising commodity prices, not to stifling and unpopular dictatorships. But Bahrain and Libya are countries with high GDPs that can afford to import grain and other commodities. Again, the focus is wrong even if the logic is comforting. It is the system and its fragility, not events, that must be studied -- what physicists call "percolation theory," in which the properties of the terrain are studied rather than those of a single element of the terrain.
  • When dealing with a system that is inherently unpredictable, what should be done? Differentiating between two types of countries is useful. In the first, changes in government do not lead to meaningful differences in political outcomes (since political tensions are out in the open). In the second type, changes in government lead to both drastic and deeply unpredictable changes.
  • Humans fear randomness -- a healthy ancestral trait inherited from a different environment. Whereas in the past, which was a more linear world, this trait enhanced fitness and increased chances of survival, it can have the reverse effect in today's complex world, making volatility take the shape of nasty Black Swans hiding behind deceptive periods of "great moderation." This is not to say that any and all volatility should be embraced. Insurance should not be banned, for example.
  • But alongside the "catalysts as causes" confusion sit two mental biases: the illusion of control and the action bias (the illusion that doing something is always better than doing nothing). This leads to the desire to impose man-made solutions
  • Variation is information. When there is no variation, there is no information. This explains the CIA's failure to predict the Egyptian revolution and, a generation before, the Iranian Revolution -- in both cases, the revolutionaries themselves did not have a clear idea of their relative strength with respect to the regime they were hoping to topple. So rather than subsidize and praise as a "force for stability" every tin-pot potentate on the planet, the U.S. government should encourage countries to let information flow upward through the transparency that comes with political agitation. It should not fear fluctuations per se, since allowing them to be in the open, as Italy and Lebanon both show in different ways, creates the stability of small jumps.
  • As Seneca wrote in De clementia, "Repeated punishment, while it crushes the hatred of a few, stirs the hatred of all . . . just as trees that have been trimmed throw out again countless branches." The imposition of peace through repeated punishment lies at the heart of many seemingly intractable conflicts, including the Israeli-Palestinian stalemate. Furthermore, dealing with seemingly reliable high-level officials rather than the people themselves prevents any peace treaty signed from being robust. The Romans were wise enough to know that only a free man under Roman law could be trusted to engage in a contract; by extension, only a free people can be trusted to abide by a treaty. Treaties that are negotiated with the consent of a broad swath of the populations on both sides of a conflict tend to survive. Just as no central bank is powerful enough to dictate stability, no superpower can be powerful enough to guarantee solid peace alone.
  • As Jean-Jacques Rousseau put it, "A little bit of agitation gives motivation to the soul, and what really makes the species prosper is not peace so much as freedom." With freedom comes some unpredictable fluctuation. This is one of life's packages: there is no freedom without noise -- and no stability without volatility.
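The turkey problem above can be shown with a few lines of arithmetic. The sketch below (numbers invented) uses Laplace's rule of succession as a stand-in for any naive model that infers the chance of a shock from its absence in past data: the estimated risk shrinks with every quiet day, so confidence in stability peaks just before the blowup.

```python
# Toy "turkey problem" arithmetic: with zero shocks observed, a naive estimate
# of tomorrow's shock probability keeps falling as quiet days accumulate.

def naive_shock_probability(quiet_days_observed, shocks_observed=0):
    # Laplace's rule of succession: (shocks + 1) / (days + 2).
    return (shocks_observed + 1) / (quiet_days_observed + 2)

for day in (10, 100, 1000):
    p = naive_shock_probability(day)
    print(f"After {day:>4} quiet days, naive estimate of a shock tomorrow: {p:.4f}")

# After 1,000 quiet days the estimate is about 0.001 -- confidence is maximal --
# yet it says nothing about day 1,001, when the farmer finally arrives.
```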
Weiye Loh

Effective media reporting of sea level rise projections: 1989-2009 - 0 views

  •  
    In the mass media, sea level rise is commonly associated with the impacts of climate change due to increasing atmospheric greenhouse gases. As this issue garners ongoing international policy attention, segments of the scientific community have expressed unease about how this has been covered by mass media. Therefore, this study examines how sea level rise projections - in IPCC Assessment Reports and a sample of the scientific literature - have been represented in seven prominent United States (US) and United Kingdom (UK) newspapers over the past two decades. The research found that - with few exceptions - journalists have accurately portrayed scientific research on sea level rise projections to 2100. Moreover, while coverage has predictably increased in the past 20 years, journalists have paid particular attention to the issue in years when an IPCC report is released or when major international negotiations take place, rather than when direct research is completed and specific projections are published. We reason that the combination of these factors has contributed to a perceived problem in the sea level rise reporting by the scientific community, although systematic empirical research shows none. In this contemporary high-stakes, high-profile and highly politicized arena of climate science and policy interactions, such results mark a particular bright spot in media representations of climate change. These findings can also contribute to more measured considerations of climate impacts and policy action at a critical juncture of international negotiations and everyday decision-making associated with the causes and consequences of climate change.
Weiye Loh

Likert scale - Wikipedia, the free encyclopedia - 0 views

  • Whether individual Likert items can be considered as interval-level data, or whether they should be considered merely ordered-categorical data is the subject of disagreement. Many regard such items only as ordinal data, because, especially when using only five levels, one cannot assume that respondents perceive all pairs of adjacent levels as equidistant. On the other hand, often (as in the example above) the wording of response levels clearly implies a symmetry of response levels about a middle category; at the very least, such an item would fall between ordinal- and interval-level measurement; to treat it as merely ordinal would lose information. Further, if the item is accompanied by a visual analog scale, where equal spacing of response levels is clearly indicated, the argument for treating it as interval-level data is even stronger.
  • When treated as ordinal data, Likert responses can be collated into bar charts, central tendency summarised by the median or the mode (but some would say not the mean), dispersion summarised by the range across quartiles (but some would say not the standard deviation), or analyzed using non-parametric tests, e.g. chi-square test, Mann–Whitney test, Wilcoxon signed-rank test, or Kruskal–Wallis test.[4] Parametric analysis of ordinary averages of Likert scale data is also justifiable by the Central Limit Theorem, although some would disagree that ordinary averages should be used for Likert scale data.
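A minimal sketch of the two treatments discussed above, using an invented set of twenty responses to a single 5-point item: ordinal summaries (median, mode, quartiles), which require no equal-spacing assumption, alongside the mean, which assumes adjacent levels are equidistant.

```python
# Invented Likert responses (1 = strongly disagree ... 5 = strongly agree)
# summarised two ways: as ordinal data and as interval data.

import statistics

responses = [1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5]

# Ordinal treatment: order matters, but distances between adjacent categories
# are not assumed equal.
q1, q2, q3 = statistics.quantiles(responses, n=4)
print("median:", statistics.median(responses))
print("mode:  ", statistics.mode(responses))
print("interquartile range:", q1, "to", q3)

# Interval treatment: the mean assumes the response levels are equidistant,
# which is exactly the assumption the ordinal camp disputes.
print("mean:  ", statistics.mean(responses))
```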
Weiye Loh

Measuring Social Media: Who Has Access to the Firehose? - 0 views

  • The question that the audience member asked — and one that we tried to touch on a bit in the panel itself — was who has access to this raw data. Twitter doesn’t comment on who has full access to its firehose, but to Weil’s credit he was at least forthcoming with some of the names, including stalwarts like Microsoft, Google and Yahoo — plus a number of smaller companies.
  • In the case of Twitter, the company offers free access to its API for developers. The API can provide access and insight into information about tweets, replies and keyword searches, but as developers who work with Twitter — or any large scale social network — know, that data isn’t always 100% reliable. Unreliable data is a problem when talking about measurements and analytics, where the data is helping to influence decisions related to social media marketing strategies and allocations of resources.
  • One of the companies that has access to Twitter’s data firehose is Gnip. As we discussed in November, Twitter has entered into a partnership with Gnip that allows the social data provider to resell access to the Twitter firehose. This is great on one level, because it means that businesses and services can access the data. The problem, as noted by panelist Raj Kadam, the CEO of Viralheat, is that Gnip’s access can be prohibitively expensive.
  • ...3 more annotations...
  • The problems with reliable access to analytics and measurement information are by no means limited to Twitter. Facebook data is also tightly controlled. With Facebook, privacy controls built into the API are designed to prevent mass data scraping. This is absolutely the right decision. However, a reality of social media measurement is that Facebook Insights isn’t always reachable and the data collected from the tool is sometimes inaccurate. It’s no surprise there’s a disconnect between the data that marketers and community managers want and the data that can be reliably accessed. Twitter and Facebook were both designed as tools for consumers. It’s only been in the last two years that the platform ecosystem aimed at serving large brands and companies
  • The data that companies like Twitter, Facebook and Foursquare collect are some of their most valuable assets. It isn’t fair to expect a free ride or first-class access to the data by anyone who wants it. Having said that, more transparency about what data is available to services and brands is needed and necessary. We’re just scratching the surface of what social media monitoring, measurement and management tools can do. To get to the next level, it’s important that we all question who has access to the firehose.
  • We Need More Transparency for How to Access and Connect with Data
Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren... - 0 views

  • 1. Distributed cocreation moves into the mainstream. In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • ...10 more annotations...
  • 3. Collaboration at scale Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things' The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
  • 5. Experimentation and big data. Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing.
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams. (A minimal sketch of such an experiment appears after this list.)
  • 6. Wiring for a sustainable world. Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service. Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model. Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid. The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid. The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
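The "test and learn" approach described in trend 5 usually means controlled experiments such as A/B tests on customer-facing changes. Below is a minimal sketch of how one such experiment might be evaluated with a two-proportion z-test; the conversion counts and variable names are hypothetical illustrations, not figures from the article.

```python
# Minimal "test and learn" sketch: compare conversion rates between a control
# experience (A) and a variant (B) with a two-proportion z-test.
# All figures below are hypothetical.
from math import sqrt
from statistics import NormalDist

visitors_a, conversions_a = 10_000, 520   # control group (hypothetical)
visitors_b, conversions_b = 10_000, 585   # variant group (hypothetical)

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)

# Standard error under the null hypothesis that both rates are equal
se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

print(f"control {p_a:.2%}, variant {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
```

In practice a company running many experiments would also plan sample sizes and correct for multiple comparisons, but the core decision logic is the one above.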
Weiye Loh

Let's make science metrics more scientific : Article : Nature - 0 views

  • Measuring and assessing academic performance is now a fact of scientific life.
  • Yet current systems of measurement are inadequate. Widely used metrics, from the newly fashionable Hirsch index to the 50-year-old citation index, are of limited use. (A sketch of how the Hirsch index is computed appears after this list.)
  • Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
  • ...15 more annotations...
  • narrow or biased measures of scientific achievement can lead to narrow and biased science.
  • Global demand for, and interest in, metrics should galvanize stakeholders — national funding agencies, scientific research organizations and publishing houses — to combine forces. They can set an agenda and foster research that establishes sound scientific metrics: grounded in theory, built with high-quality data and developed by a community with strong incentives to use them.
  • Scientists are often reticent to see themselves or their institutions labelled, categorized or ranked. Although happy to tag specimens as one species or another, many researchers do not like to see themselves as specimens under a microscope — they feel that their work is too complex to be evaluated in such simplistic terms. Some argue that science is unpredictable, and that any metric used to prioritize research money risks missing out on an important discovery from left field.
    • Weiye Loh
       
      It is ironic that while scientists feel that their work is too complex to be evaluated in simplistic terms or metrics, they nevertheless feel it is fine to evaluate the world in simplistic terms.
  • It is true that good metrics are difficult to develop, but this is not a reason to abandon them. Rather it should be a spur to basing their development in sound science. If we do not press harder for better metrics, we risk making poor funding decisions or sidelining good scientists.
  • Metrics are data driven, so developing a reliable, joined-up infrastructure is a necessary first step.
  • We need a concerted international effort to combine, augment and institutionalize these databases within a cohesive infrastructure.
  • On an international level, the issue of a unique researcher identification system is one that needs urgent attention. There are various efforts under way in the open-source and publishing communities to create unique researcher identifiers using the same principles as the Digital Object Identifier (DOI) protocol, which has become the international standard for identifying unique documents. The ORCID (Open Researcher and Contributor ID) project, for example, was launched in December 2009 by parties including Thomson Reuters and Nature Publishing Group. The engagement of international funding agencies would help to push this movement towards an international standard.
  • if all funding agencies used a universal template for reporting scientific achievements, it could improve data quality and reduce the burden on investigators.
    • Weiye Loh
       
      So in future, we'll only have one robust metric to evaluate scientific contribution? Hmm...
  • Importantly, data collected for use in metrics must be open to the scientific community, so that metric calculations can be reproduced. This also allows the data to be efficiently repurposed.
  • As well as building an open and consistent data infrastructure, there is the added challenge of deciding what data to collect and how to use them. This is not trivial. Knowledge creation is a complex process, so perhaps alternative measures of creativity and productivity should be included in scientific metrics, such as the filing of patents, the creation of prototypes, and even the production of YouTube videos.
  • Perhaps publications in these different media should be weighted differently in different fields.
  • There needs to be a greater focus on what these data mean, and how they can be best interpreted.
  • This requires the input of social scientists, rather than just those more traditionally involved in data capture, such as computer scientists.
  • An international data platform supported by funding agencies could include a virtual 'collaboratory', in which ideas and potential solutions can be posited and discussed. This would bring social scientists together with working natural scientists to develop metrics and test their validity through wikis, blogs and discussion groups, thus building a community of practice. Such a discussion should be open to all ideas and theories and not restricted to traditional bibliometric approaches.
  • Far-sighted action can ensure that metrics goes beyond identifying 'star' researchers, nations or ideas, to capturing the essence of what it means to be a good scientist.
  •  
    Let's make science metrics more scientific. Julia Lane. Abstract: To capture the essence of good science, stakeholders must combine forces to create an open, sound and consistent system for measuring all the activities that make up academic productivity, says Julia Lane.
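As a concrete illustration of the Hirsch index mentioned above: a researcher has index h if h of their papers each have at least h citations. The sketch below computes it from a list of citation counts; the counts are made up for illustration.

```python
# Sketch of the Hirsch (h-) index: the largest h such that at least
# h papers have h or more citations each.
def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers
print(h_index([42, 18, 11, 9, 7, 3, 1, 0]))  # -> 5 (five papers with >= 5 citations)
```

The ease of this calculation is part of the article's point: a single integer cannot capture mentoring, blogging, prototypes or the other contributions listed above.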
Jody Poh

Australia's porn-blocking plan unveiled - 10 views

Elaine said: What are the standards put in place to determine whether something is of adult content? Who set those standards? Based on 'general' beliefs and what the government/"web police" think ...

Weiye Loh

The Problem with Climate Change | the kent ridge common - 0 views

  • what is climate change? From a scientific point of view, it is simply a statistical change in atmospheric variables (temperature, precipitation, humidity etc). It has been occurring ever since the Earth came into existence, far before humans even set foot on the planet: our climate has been fluctuating between warm periods and ice ages, with further variations within. In fact, we are living in a warm interglacial period in the middle of an ice age.
  • Global warming has often been portrayed in apocalyptic tones, whether from the mouth of the media or environmental groups: the daily news tell of natural disasters happening at a frightening pace, of crop failures due to strange weather, of mass extinctions and coral die-outs. When the devastating tsunami struck Southeast Asia years ago, some said it was the wrath of God against human mistreatment of the environment; when hurricane Katrina dealt out a catastrophe, others said it was because of (America’s) failure to deal with climate change. Science gives the figures and trends, and people take these to extremes.
  • One immediate problem with blaming climate change for every weather-related disaster or phenomenon is that it reduces humans' responsibility for mitigating or preventing it. If natural disasters are already, as their name suggests, natural, adding the tag 'global warming' or 'climate change' emphasizes the dominance of natural forces, and our inability to do anything about it. Surely, humans cannot undo climate change? Even at Cancun, amid the carbon cuts that have been promised, questions are being brought up on whether they are sufficient to reverse our actions and 'save' the planet. Yet the talk about this remote, omnipotent force known as climate change obscures the fact that we can, and always have been, thinking of ways to reduce the impact of natural hazards. Forecasting, building better infrastructure and coordinating more efficient responses – all these are far more desirable than wading in woe. For example, we will do better at preventing floods in Singapore by tackling the problems rather than singing in praise of God.
  • ...5 more annotations...
  • However, a greater concern lies in the notion of climate change itself. Climate change is in essence one kind of nature-society relationship, in which humans influence the climate through greenhouse gas (particularly CO2) emissions, and the climate strikes back by heating up and going crazy at times. This can be further simplified into a battle between humans and CO2: reducing CO2 guards against climate change, and increasing it aggravates the consequences. This view is anchored in scientists’ recommendation that a ‘safe’ level of CO2 should be at 350 parts per million (ppm) instead of the current 390. Already, the need to reduce CO2 is understood, as is evident in the push for greener fuels, more efficient means of production, the proliferation of ‘green’ products and companies, and most recently, the Cancun talks.
  • So can there be anything wrong with reducing CO2? No, there isn't, but singling out CO2 as the culprit of climate change or of the environmental problems we face prevents us from looking within. What do I mean? The enemy, CO2, is an 'other', an externality produced by our economic systems but never an inherent component of the systems. Thus, we can declare war on the gas or on climate change without taking a step back and questioning: is there anything wrong with the way we develop? Take Singapore for example: the government pledged to reduce carbon emissions by 16% under 'business as usual' standards, which says nothing about how 'business' is going to be changed other than having lower carbon emissions (in fact, it is questionable even that CO2 levels will decrease, as 'business as usual' standards project a steady increase in CO2 emissions each year). With the development of green technologies, decreases in carbon emissions will mainly be brought about by increased energy efficiency and a switch to alternative fuels (including the insidious nuclear energy).
  • Thus, the way we develop will hardly be changed. Nobody questions whether our neoliberal system of development, which relies heavily on consumption to drive economies, needs to be looked into. We assume that it is the right way to develop, and only tweak it for the amount of externalities produced. Whether or not we should be measuring development by the Gross Domestic Product (GDP) or if welfare is correlated to the amount of goods and services consumed is never considered. Even the UN-REDD (Reducing Emissions from Deforestation and Forest Degradation) scheme which aims to pay forest-rich countries for protecting their forests, ends up putting a price tag on them. The environment is being subsumed under the economy, when it should be that the economy is re-looked to take the environment into consideration.
  • when the world is celebrating after having held at bay the dangerous greenhouse gas, why would anyone bother rethinking the economy? Yet we should, simply because there are alternative nature-society relationships and discourses about nature that are of equal or greater importance than global warming. Annie Leonard's informative videos on The Story of Stuff and specific products like electronics, bottled water and cosmetics shed light on the dangers of our 'throw-away culture' on the planet and poorer countries. What if the enemy was instead consumerism? Doing so would force countries (especially richer ones) to fundamentally question the nature of development, instead of just applying a quick technological fix. This is so much more difficult (and less economically viable), alongside other issues like environmental injustices – e.g. pollution or dumping of waste by Trans-National Corporations in poorer countries and removal of indigenous land rights. It is no wonder that we choose to disregard internal problems and focus instead on an external enemy; when CO2 is the culprit, the solution is too simple and detached from the communities that are affected by changes in their environment.
  • We hence need to allow for a greater politics of the environment. What I am proposing is not to diminish our action to reduce carbon emissions, for I do believe that it is part of the environmental problem that we are facing. What instead should be done is to reduce our fixation on CO2 as the main or only driver of climate change, and of climate change as the most pertinent nature-society issue we are facing. We should understand that there are many other ways of thinking about the environment; 'developing' countries, for example, tend to have a closer relationship with their environment – it is not something 'out there' but constantly interacted with for food, water, regulating services and cultural value. Their views and the impact of the socio-economic forces (often from TNCs and multi-lateral organizations like IMF) that shape the environment must also be taken into account, as must alternative meanings of sustainable development. Thus, even as we pat ourselves on the back for having achieved something significant at Cancun, our action should not and must not end there. Even if climate change hogs the headlines now, we must embrace more plurality in environmental discourse, for nature is not, and never has been, as simple as climate change alone. And hopefully sometime in the future, alongside a multi-lateral conference on climate change, the world can have one which rethinks the meaning of development.
  •  
    Chen Jinwen
Weiye Loh

Mike Adams Remains True to Form « Alternative Medicine « Health « Skeptic North - 0 views

  • The 10:23 demonstrations and the CBC Marketplace coverage have elicited fascinating case studies in CAM professionalism. Rather than offering any new information or evidence about homeopathy itself, some homeopaths have spuriously accused skeptical groups of being malicious Big Pharma shills.
  • Mike Adams of the Natural News website
  • has decided to provide his own coverage of the 10:23 campaign
  • ...17 more annotations...
  • Mike’s thesis is essentially: Silly skeptics, it’s impossible to OD on homeopathy!
  • 1. “Notice that they never consume their own medicines in large doses? Chemotherapy? Statin drugs? Blood thinners? They wouldn’t dare drink those.”
  • Of course we wouldn’t. Steven Novella rightly points out that, though Mike thinks he’s being clever here, he’s actually demonstrating a lack of understanding for what the 10:23 campaign is about by using a straw man. Mike later issues a challenge for skeptics to drink their favourite medicines while he drinks homeopathy. Since no one will agree to that for the reasons explained above, he can claim some sort of victory — hence his smugness. But no one is saying that drugs aren’t harmful.
  • The difference between medicine and poison is in the dose. The vitamins and herbs promoted by the CAM industry are just as potentially harmful as any pharmaceutical drug, given enough of it. Would Adams be willing to OD on the vitamins or herbal remedies that he sells?
  • Even Adams’ favorite panacea, vitamin D, is toxic if you take enough of it (just ask Gary Null). Notice how skeptics don’t consume those either, because that is not the point they’re making.
  • The point of these demonstrations is that homeopathy has nothing in it, has no measurable physiological effects, and does not do what is advertised on the package.
  • 2. “Homeopathy, you see, isn’t a drug. It’s not a chemical.” Well, he’s got that right. “You know the drugs are kicking in when you start getting worse. Toxicity and conventional medicine go hand in hand.” [emphasis his]
  • Here I have to wonder if Adams knows any people with diabetes, AIDS, or any other illness that used to mean a death sentence before the significant medical advances of the 20th century that we now take for granted. So far he seems to be a firm believer in the false dichotomy that drugs are bad and natural products are good, regardless of what’s in them or how they’re used (as we know, natural products can have biologically active substances and effectively act as impure drugs – but leave it to Adams not to get bogged down with details). There is nothing to support the assertion that conventional medicine is nothing but toxic symptom-inducers.
  • 3-11. “But homeopathy isn’t a chemical. It’s a resonance. A vibration, or a harmony. It’s the restructuring of water to resonate with the particular energy of a plant or substance. We can get into the physics of it in a subsequent article, but for now it’s easy to recognize that even from a conventional physics point of view, liquid water has tremendous energy, and it’s constantly in motion, not just at the molecular level but also at the level of its subatomic particles and so-called “orbiting electrons” which aren’t even orbiting in the first place. Electrons are vibrations and not physical objects.” [emphasis his]
  • This is Star Trek-like technobabble – lots of sciency words
  • if something — anything — has an effect, then that effect is measurable by definition. Either something works or it doesn’t, regardless of mechanism. In any case, I’d like to see the well-documented series of research that conclusively proves this supposed mechanism. Actually, I’d like to see any credible research at all. I know what the answer will be to that: science can’t detect this yet. Well if you agree with that statement, reader, ask yourself this: then how does Adams know? Where did he get this information? Without evidence, he is guessing, and what is that really worth?
  • 13. “But getting back to water and vibrations, which isn’t magic but rather vibrational physics, you can’t overdose on a harmony. If you have one violin playing a note in your room, and you add ten more violins — or a hundred more — it’s all still the same harmony (with all its complex higher frequencies, too). There’s no toxicity to it.” [emphasis his]
  • Homeopathy has standard dosing regimes (they’re all the same), but there is no “dose” to speak of: the ingredients have usually been diluted out to nothing (see the dilution arithmetic sketched after this list). But Adams is also saying that homeopathy doesn’t work by dose at all, it works by the properties of “resonance” and “vibration”. Then why any dosing regimen? To maintain the resonance? How is this resonance measured? How long does the “resonance” last? Why does it wear off? Why does he think televisions can inactivate homeopathy? (I think I might know the answer to that last one, as electronic interference is a handy excuse for inefficacy.)
  • “These skeptics just want to kill themselves… and they wouldn’t mind taking a few of you along with them, too. Hence their promotion of vaccines, pharmaceuticals, chemotherapy and water fluoridation. We’ll title the video, “SKEPTICS COMMIT MASS SUICIDE BY DRINKING PHARMACEUTICALS AS IF THEY WERE KOOL-AID.” Jonestown, anyone?”
  • “Do you notice the irony here? The only medicines they’re willing to consume in large doses in public are homeopathic remedies! They won’t dare consume large quantities of the medicines they all say YOU should be taking! (The pharma drugs.)” [emphasis his]
  • what Adams seems to have missed is that the skeptics have no intention of killing themselves, so his bizarre claims that the 10:23 participants are psychopathic, self-loathing, and suicidal make not even a little bit of sense. Skeptics know they aren’t going to die with these demonstrations, because homeopathy has no active ingredients and no evidence of efficacy.
  • The inventor of homeopathy himself, Samuel Hahnemann believed that excessive doses of homeopathy could be harmful (see sections 275 and 276 of his Organon). Homeopaths are pros at retconning their own field to fit in with Hahnemann’s original ideas (inventing new mechanisms, such as water memory and resonance, in the face of germ theory). So how does Adams reconcile this claim?
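To put numbers on "diluted out to nothing": a common homeopathic potency is 30C, i.e. thirty successive 1:100 dilutions, a combined factor of 10^-60. The sketch below runs that arithmetic for a starting sample of one mole of active ingredient; it illustrates the scale involved and is not a claim about any particular product.

```python
# Rough arithmetic behind "diluted out to nothing".
# A 30C remedy is diluted 1:100 thirty times: a factor of (10**-2)**30 = 10**-60.
AVOGADRO = 6.022e23              # molecules in one mole of a substance
dilution_factor = (1e-2) ** 30   # 30C potency

starting_molecules = AVOGADRO               # assume a full mole of active ingredient
expected_remaining = starting_molecules * dilution_factor

print(f"expected molecules left after a 30C dilution: {expected_remaining:.1e}")
# -> roughly 6e-37, i.e. essentially zero chance a single molecule survives
```

The expected number of surviving molecules is about 6 × 10^-37, which is why the 10:23 campaign, named after Avogadro's number, treats its public 'overdoses' as pharmacologically risk-free.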
Weiye Loh

Information technology and economic change: The impact of the printing press | vox - Re... - 0 views

  • Despite the revolutionary technological advance of the printing press in the 15th century, there is precious little economic evidence of its benefits. Using data on 200 European cities between 1450 and 1600, this column finds that economic growth was higher by as much as 60 percentage points in cities that adopted the technology.
  • Historians argue that the printing press was among the most revolutionary inventions in human history, responsible for a diffusion of knowledge and ideas, “dwarfing in scale anything which had occurred since the invention of writing” (Roberts 1996, p. 220). Yet economists have struggled to find any evidence of this information technology revolution in measures of aggregate productivity or per capita income (Clark 2001, Mokyr 2005). The historical data thus present us with a puzzle analogous to the famous Solow productivity paradox – that, until the mid-1990s, the data on macroeconomic productivity showed no effect of innovations in computer-based information technology.
  • In recent work (Dittmar 2010a), I examine the revolution in Renaissance information technology from a new perspective by assembling city-level data on the diffusion of the printing press in 15th-century Europe. The data record each city in which a printing press was established 1450-1500 – some 200 out of over 1,000 historic cities (see also an interview on this site, Dittmar 2010b). The research emphasises cities for three principal reasons. First, the printing press was an urban technology, producing for urban consumers. Second, cities were seedbeds for economic ideas and social groups that drove the emergence of modern growth. Third, city sizes were historically important indicators of economic prosperity, and broad-based city growth was associated with macroeconomic growth (Bairoch 1988, Acemoglu et al. 2005).
  • ...8 more annotations...
  • Figure 1 summarises the data and shows how printing diffused from Mainz, 1450-1500. (Figure 1: The diffusion of the printing press.)
  • City-level data on the adoption of the printing press can be exploited to examine two key questions: Was the new technology associated with city growth? And, if so, how large was the association? I find that cities in which printing presses were established 1450-1500 had no prior growth advantage, but subsequently grew far faster than similar cities without printing presses. My work uses a difference-in-differences estimation strategy to document the association between printing and city growth. The estimates suggest early adoption of the printing press was associated with a population growth advantage of 21 percentage points 1500-1600, when mean city growth was 30 percentage points. The difference-in-differences model shows that cities that adopted the printing press in the late 1400s had no prior growth advantage, but grew at least 35 percentage points more than similar non-adopting cities from 1500 to 1600. (A sketch of this difference-in-differences setup appears after this list.)
  • The restrictions on diffusion meant that cities relatively close to Mainz were more likely to receive the technology other things equal. Printing presses were established in 205 cities 1450-1500, but not in 40 of Europe’s 100 largest cities. Remarkably, regulatory barriers did not limit diffusion. Printing fell outside existing guild regulations and was not resisted by scribes, princes, or the Church (Neddermeyer 1997, Barbier 2006, Brady 2009).
  • Historians observe that printing diffused from Mainz in “concentric circles” (Barbier 2006). Distance from Mainz was significantly associated with early adoption of the printing press, but neither with city growth before the diffusion of printing nor with other observable determinants of subsequent growth. The geographic pattern of diffusion thus arguably allows us to identify exogenous variation in adoption. Exploiting distance from Mainz as an instrument for adoption, I find large and significant estimates of the relationship between the adoption of the printing press and city growth. I find a 60 percentage point growth advantage between 1500-1600.
  • The importance of distance from Mainz is supported by an exercise using “placebo” distances. When I employ distance from Venice, Amsterdam, London, or Wittenberg instead of distance from Mainz as the instrument, the estimated print effect is statistically insignificant.
  • Cities that adopted print media benefitted from positive spillovers in human capital accumulation and technological change broadly defined. These spillovers exerted an upward pressure on the returns to labour, made cities culturally dynamic, and attracted migrants. In the pre-industrial era, commerce was a more important source of urban wealth and income than tradable industrial production. Print media played a key role in the development of skills that were valuable to merchants. Following the invention of printing, European presses produced a stream of math textbooks used by students preparing for careers in business.
  • These and hundreds of similar texts worked students through problem sets concerned with calculating exchange rates, profit shares, and interest rates. Broadly, print media was also associated with the diffusion of cutting-edge business practice (such as book-keeping), literacy, and the social ascent of new professionals – merchants, lawyers, officials, doctors, and teachers.
  • The printing press was one of the greatest revolutions in information technology. The impact of the printing press is hard to identify in aggregate data. However, the diffusion of the technology was associated with extraordinary subsequent economic dynamism at the city level. European cities were seedbeds of ideas and business practices that drove the transition to modern growth. These facts suggest that the printing press had very far-reaching consequences through its impact on the development of cities.
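Dittmar's headline figures come from a difference-in-differences comparison of adopting and non-adopting cities before and after 1500, with distance from Mainz used as an instrument in a further check. The sketch below shows the shape of the basic difference-in-differences step using statsmodels on hypothetical city-level data; the cities, growth figures and column names are illustrative stand-ins, not the author's dataset.

```python
# Sketch of a difference-in-differences estimate of the "print effect".
# The interaction term adopter:post is the extra growth (percentage points)
# for adopting cities after 1500. Data below are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "city":    ["Mainz", "Mainz", "Augsburg", "Augsburg", "SmallTown", "SmallTown"],
    "adopter": [1, 1, 1, 1, 0, 0],   # printing press established 1450-1500?
    "post":    [0, 1, 0, 1, 0, 1],   # 0 = growth 1400-1500, 1 = growth 1500-1600
    "growth":  [28.0, 55.0, 30.0, 52.0, 29.0, 31.0],  # made-up growth, in points
})

model = smf.ols("growth ~ adopter * post", data=df).fit()
print(model.params["adopter:post"])  # the difference-in-differences estimate
```

The instrumental-variables exercise described above would replace this OLS step with a two-stage least squares estimate using distance from Mainz, but the identification logic sketched here is the same.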
Weiye Loh

Basqueresearch.com: News - PhD thesis warns of risk of delegating to just a few teacher... - 0 views

  • the incorporation of Information and Communication Technologies into Primary Education brought with it positive changes in the role of the teacher and the student. Teachers and students stopped being mere transmitters and receptors, respectively. The former became mediators of information and the latter opted for learning through investigating, discovering and presenting ideas to classmates and teachers. In this way, students also have the opportunity to get to know the work of their classmates. Thus, the use of the Internet and ICTs reinforces participation and collaboration in the school. According to Dr Altuna, it also helps to boost learning models that are more constructivist, socio-constructivist and even connectivist.
  • Despite its educational possibilities the researcher warns that there are numerous factors that limit the incorporation of Internet into the teaching of the curricular subject in question. These involve aspects such as the time dedicated weekly, technological and computer facilities, accessibility and connection to Internet, the school curriculum and, above all, the knowledge, training and involvement of the teaching staff.
  • the thesis observed a tendency to delegate responsibility for ICT in the school to those teachers who were considered to be "computer experts". Dr Altuna warns of the risks of this practice, as the rest of the staff then remains untrained and unable to apply ICT and the Internet in activities undertaken within their curricular subject. It has to be stressed, therefore, that all should be responsible for the educational measures to be taken so that students acquire digital skills. The thesis also observed the need for a pedagogic approach to ICT that helps teaching staff understand and put into practice activities in educational innovation.
  • ...2 more annotations...
  • Dr Altuna counts among the limitations on incorporating ICT not only the lack of involvement of teaching staff but also that of the families. It was explained that families showed interest in the use of the Internet and ICTs as educational tools for their children, but that they, too, delegate excessively to the schools. The researcher stressed that the families also need guidance, as they are concerned about their children's use of the Internet but do not know how best to address the problem.
  • Educational psychologist Dr Jon Altuna has carried out a thorough study of the phenomenon of the school 2.0. Concretely, he has looked into the use and level of incorporation of Internet and of Information and Communication Technologies (ICT) into the third cycle of Primary Education, observing at the same time the attitudes of the teaching staff, and of the students and the families of the children in this regard. His PhD, defended at the University of the Basque Country (UPV/EHU), is entitled, Incorporation of Internet into the teaching of the subject Knowledge of the Environment during the third cycle of Primary Education: possibilities and analysis of the situation of a school. Dr Altuna’s research is based on a study of cases undertaken over eight years at a school where new activities involving ICT had been introduced into the curricular subject of Knowledge of the Environment, taught in the fifth and sixth year of Primary Education. The researcher gathered data from 837 students, 134 teachers and 190 families of this school. This study was completed with the experiences of ICT teachers from 21 schools.
Weiye Loh

Arsenic bacteria - a post-mortem, a review, and some navel-gazing | Not Exactly Rocket ... - 0 views

  • It was the big news that wasn't. Hyperbolic claims about the possible discovery of alien life, or a second branch of life on Earth, turned out to be nothing more than bacteria that can thrive on arsenic, using it in place of phosphorus in their DNA and other molecules. But after the initial layers of hype were peeled away, even this extraordinary claim began to fall apart under scrutiny.
  • This is a chronological roundup of the criticism against the science in the paper itself, ending with some personal reflections on my own handling of the story (skip to Friday, December 10th for that bit).
  • Thursday, December 2nd: Felisa Wolfe-Simon published a paper in Science, claiming to have found bacteria in California’s Mono Lake that can grow using arsenic instead of phosphorus. Given that phosphorus is meant to be one of six irreplaceable elements, this would have been a big deal, not least because the bacteria apparently used arsenic to build the backbones of their DNA molecules.
  • ...14 more annotations...
  • In my post, I mentioned some caveats. Wolfe-Simon isolated the arsenic-loving strain, known as GFAJ-1, by growing Mono Lake bacteria in ever-increasing concentrations of arsenic while diluting out the phosphorus. It is possible that the bacteria’s arsenic molecules were an adaptation to the harsh environments within the experiment, rather than Mono Lake itself. More importantly, there were still detectable levels of phosphorus left in the cells at the end of the experiment, although Wolfe-Simon claimed that the bacteria shouldn’t have been able to grow on such small amounts.
  • signs emerged that NASA weren’t going to engage with the criticisms. Dwayne Brown, their senior public affairs officer, highlighted the fact that the paper was published in one of the “most prestigious scientific journals” and deemed it inappropriate to debate the science using the same media and bloggers who they relied on for press coverage of the science. Wolfe-Simon herself tweeted that “discussion about scientific details MUST be within a scientific venue so that we can come back to the public with a unified understanding.”
  • Jonathan Eisen says that “they carried out science by press release and press conference” and “are now hypocritical if they say that the only response should be in the scientific literature.” David Dobbs calls the attitude “a return to pre-Enlightenment thinking”, and rightly noted that “Rosie Redfield is a peer, and her blog is peer review”.
  • Chris Rowan agreed, saying that what happens after publication is what he considers to be “real peer review”. Rowan said, “The pre-publication stuff is just a quality filter, a check that the paper is not obviously wrong – and an imperfect filter at that. The real test is what happens in the months and years after publication.”Grant Jacobs and others post similar thoughts, while Nature and the Columbia Journalism Review both cover the fracas.
  • Jack Gilbert at the University of Chicago said that impatient though he is, peer-reviewed journals are the proper forum for criticism. Others were not so kind. At the Guardian, Martin Robbins says that “at almost every stage of this story the actors involved were collapsing under the weight of their own slavish obedience to a fundamentally broken… well… ’system’” And Ivan Oransky noted that NASA failed to follow its own code of conduct when announcing the study.
  • Dr Isis said, "If question remains about the veracity of these authors' findings, then the only thing that is going to answer that doubt is data.  Data cannot be generated by blog discussion… Talking about digging a ditch never got it dug."
  • it is astonishing how quickly these events unfolded and the sheer number of bloggers and media outlets that became involved in the criticism. This is indeed a brave new world, and one in which we are all the infamous Third Reviewer.
  • I tried to quell the hype around the study as best I could. I had the paper and I think that what I wrote was a fair representation of it. But, of course, that’s not necessarily enough. I’ve argued before that journalists should not be merely messengers – we should make the best possible efforts to cut through what’s being said in an attempt to uncover what’s actually true. Arguably, that didn’t happen although to clarify, I am not saying that the paper is rubbish or untrue. Despite the criticisms, I want to see the authors respond in a thorough way or to see another lab attempt to replicate the experiments before jumping to conclusions.
  • the sheer amount of negative comment indicates that I could have been more critical of the paper in my piece. Others have been supportive in suggesting that this was more egg on the face of the peer reviewers and indeed, several practicing scientists took the findings on face value, speculating about everything from the implications for chemotherapy to whether the bacteria have special viruses. The counter-argument, which I have no good retort to, is that peer review is no guarantee of quality, and that writers should be able to see through the fog of whatever topic they write about.
  • my response was that we should expect people to make reasonable efforts to uncover truth and be skeptical, while appreciating that people can and will make mistakes.
  • it comes down to this: did I do enough? I was certainly cautious. I said that “there is room for doubt” and I brought up the fact that the arsenic-loving bacteria still contain measurable levels of phosphorus. But I didn’t run the paper past other sources for comment, which I typically do it for stories that contain extraordinary claims. There was certainly plenty of time to do so here and while there were various reasons that I didn’t, the bottom line is that I could have done more. That doesn’t always help, of course, but it was an important missed step. A lesson for next time.
  • I do believe that if you’re going to try to hold your profession to a higher standard, you have to be honest and open when you’ve made mistakes yourself. I also think that if you cover a story that turns out to be a bit dodgy, you have a certain responsibility in covering the follow-up
  • A basic problem here is the embargo. Specifically that journalists get early access, while peers – other specialists in the field – do not. It means that the journalist, like yourself, can rely only on the original authors, with no way of getting other views on the findings. And it means that peers can’t write about the paper when the journalists (who, inevitably, do positive-only coverage due to the lack of other viewpoints) do, but will be able to voice their views only after they’ve been able to digest the paper and formulate a response.
  • No, that’s not true. The embargo doesn’t preclude journalists from sending papers out to other authors for review and comment. I do this a lot and I have been critical about new papers as a result, but that’s the step that I missed for this story.
Weiye Loh

Roger Pielke Jr.'s Blog: Climate Science Turf Wars and Carbon Dioxide Myopia - 0 views

  • Presumably by "climate effect" Caldeira means the long-term consequences of human actions on the global climate system -- that is, climate change. Going unmentioned by Caldeira is the fact that there are also short-term climate effects, and among those, the direct health effects of non-carbon dioxide emissions on human health and agriculture.
  • There are a host of reasons to worry about the climatic effects of  non-CO2 forcings beyond long-term climate change.  Shindell explains this point: There is also a value judgement inherent in any suggestion that CO2 is the only real forcer that matters or that steps to reduce soot and ozone are ‘almost meaningless’. Based on CO2’s long residence time in the atmosphere, it dominates long-term committed forcing. However, climate changes are already happening and those alive today are feeling the effects now and will continue to feel them during the next few decades, but they will not be around in the 22nd century. These climate changes have significant impacts. When rainfall patterns shift, livelihoods in developing countries can be especially hard hit. I suspect that virtually all farmers in Africa and Asia are more concerned with climate change over the next 40 years than with those after 2050. Of course they worry about the future of their children and their children’s children, but providing for their families now is a higher priority. . . However, saying CO2 is the only thing that matters implies that the near-term climate impacts I’ve just outlined have no value at all, which I don’t agree with. What’s really meant in a comment like “if one’s goal is to limit climate change, one would always be better off spending the money on immediate reduction of CO2 emissions’ is ‘if one’s goal is limiting LONG-TERM climate change”. That’s a worthwhile goal, but not the only goal.
  • The UNEP report notes that action on carbon dioxide is not going to have a discernible influence on the climate system until perhaps mid-century (see the figure at the top of this post). Consequently, action on non-carbon dioxide forcings is very much independent of action on carbon dioxide -- they address climatic causes and consequences on very different timescales, and thus probably should not even be conflated to begin with. UNEP writes: "In essence, the near-term CH4 and BC measures examined in this Assessment are effectively decoupled from the CO2 measures both in that they target different source sectors and in that their impacts on climate change take place over different timescales." Advocates for action on carbon dioxide are quick to frame discussions narrowly in terms of long-term climate change and the primary role of carbon dioxide. Indeed, accumulating carbon dioxide is a very important issue (consider that my focus in The Climate Fix is carbon dioxide, but I also emphasize that the carbon dioxide issue is not the same thing as climate change), but it is not the only issue.
  • ...2 more annotations...
  • perhaps the difference in opinions on this subject expressed by Shindell and Caldeira is nothing more than an academic turf battle over what it means for policy makers to focus on "climate" -- with one wanting the term (and justifications for action invoking that term) to be reserved for long-term climate issues centered on carbon dioxide and the other focused on a broader definition of climate and its impacts.  If so, then it is important to realize that such turf battles have practical consequences. Shindell's breath of fresh air gets the last word with his explanation why it is that we must consider long- and short- term climate impacts at the same time, and how we balance them will reflect a host of non-scientific considerations: So rather than set one against the other, I’d view this as analogous to research on childhood leukemia versus Alzheimer’s. If you’re an advocate for child’s health, you may care more about the former, and if you’re a retiree you might care more about the latter. One could argue about which is most worthy based on number of cases, years of life lost, etc., but in the end it’s clear that both diseases are worth combating and any ranking of one over the other is a value judgement. Similarly, there is no scientific basis on which to decide which impacts of climate change are most important, and we can only conclude that both controls are worthwhile. The UNEP/WMO Assessment provides clear information on the benefits of short-lived forcer reductions so that decision-makers, and society at large, can decide how best to use limited resources.
  • "If we eliminated emissions of methane and black carbon, but did nothing about carbon dioxide we would have delayed..." This presupposes that CO2 emissions can be capped at current levels without economic devastation or that immediate economic devastation is warranted.
  •  
    Over at Dot Earth Andy Revkin has posted up two illuminating comments from climate scientists -- one from NASA's Drew Shindell and a response to it from Stanford's Ken Caldeira. Shindell's comment focuses on the impacts of action to mitigate the effects of black carbon, tropospheric ozone and other non-carbon dioxide human climate forcings, and comes from his perspective as lead author of an excellent UNEP report on the subject that is just out (here in PDF and the Economist has an excellent article here).  (Shindell's comment was apparently in response to an earlier Dot Earth comment by Raymond Pierrehumbert.) In contrast, Caldeira invokes long-term climate change to defend the importance of focusing on carbon dioxide: