New Media Ethics 2009 course / Group items tagged: Risk

Weiye Loh

The Inequality That Matters - Tyler Cowen - The American Interest Magazine

  • most of the worries about income inequality are bogus, but some are probably better grounded and even more serious than many of their heralds realize.
  • In terms of immediate political stability, there is less to the income inequality issue than meets the eye. Most analyses of income inequality neglect two major points. First, the inequality of personal well-being is sharply down over the past hundred years and perhaps over the past twenty years as well. Bill Gates is much, much richer than I am, yet it is not obvious that he is much happier if, indeed, he is happier at all. I have access to penicillin, air travel, good cheap food, the Internet and virtually all of the technical innovations that Gates does. Like the vast majority of Americans, I have access to some important new pharmaceuticals, such as statins to protect against heart disease. To be sure, Gates receives the very best care from the world’s top doctors, but our health outcomes are in the same ballpark. I don’t have a private jet or take luxury vacations, and—I think it is fair to say—my house is much smaller than his. I can’t meet with the world’s elite on demand. Still, by broad historical standards, what I share with Bill Gates is far more significant than what I don’t share with him.
  • when average people read about or see income inequality, they don’t feel the moral outrage that radiates from the more passionate egalitarian quarters of society. Instead, they think their lives are pretty good and that they either earned through hard work or lucked into a healthy share of the American dream.
  • This is why, for example, large numbers of Americans oppose the idea of an estate tax even though the current form of the tax, slated to return in 2011, is very unlikely to affect them or their estates. In narrowly self-interested terms, that view may be irrational, but most Americans are unwilling to frame national issues in terms of rich versus poor. There’s a great deal of hostility toward various government bailouts, but the idea of “undeserving” recipients is the key factor in those feelings. Resentment against Wall Street gamesters hasn’t spilled over much into resentment against the wealthy more generally. The bailout for General Motors’ labor unions wasn’t so popular either—again, obviously not because of any bias against the wealthy but because a basic sense of fairness was violated. As of November 2010, congressional Democrats are of a mixed mind as to whether the Bush tax cuts should expire for those whose annual income exceeds $250,000; that is in large part because their constituents bear no animus toward rich people, only toward undeservedly rich people.
  • envy is usually local. At least in the United States, most economic resentment is not directed toward billionaires or high-roller financiers—not even corrupt ones. It’s directed at the guy down the hall who got a bigger raise. It’s directed at the husband of your wife’s sister, because the brand of beer he stocks costs $3 a case more than yours, and so on. That’s another reason why a lot of people aren’t so bothered by income or wealth inequality at the macro level. Most of us don’t compare ourselves to billionaires. Gore Vidal put it honestly: “Whenever a friend succeeds, a little something in me dies.”
  • Occasionally the cynic in me wonders why so many relatively well-off intellectuals lead the egalitarian charge against the privileges of the wealthy. One group has the status currency of money and the other has the status currency of intellect, so might they be competing for overall social regard? The high status of the wealthy in America, or for that matter the high status of celebrities, seems to bother our intellectual class most. That class composes a very small group, however, so the upshot is that growing income inequality won’t necessarily have major political implications at the macro level.
  • All that said, income inequality does matter—for both politics and the economy.
  • The numbers are clear: Income inequality has been rising in the United States, especially at the very top. The data show a big difference between two quite separate issues, namely income growth at the very top of the distribution and greater inequality throughout the distribution. The first trend is much more pronounced than the second, although the two are often confused.
  • When it comes to the first trend, the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.1
  • At the same time, wage growth for the median earner has slowed since 1973. But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.
  • The broader change in income distribution, the one occurring beneath the very top earners, can be deconstructed in a manner that makes nearly all of it look harmless. For instance, there is usually greater inequality of income among both older people and the more highly educated, if only because there is more time and more room for fortunes to vary. Since America is becoming both older and more highly educated, our measured income inequality will increase pretty much by demographic fiat. Economist Thomas Lemieux at the University of British Columbia estimates that these demographic effects explain three-quarters of the observed rise in income inequality for men, and even more for women.2
  • Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years.3 Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all. Robert J. Gordon, an economist from Northwestern University who is hardly known as a right-wing apologist, wrote in a recent paper that “there was no increase of inequality after 1993 in the bottom 99 percent of the population”, and that whatever overall change there was “can be entirely explained by the behavior of income in the top 1 percent.”4
  • And so we come again to the gains of the top earners, clearly the big story told by the data. It’s worth noting that over this same period of time, inequality of work hours increased too. The top earners worked a lot more and most other Americans worked somewhat less. That’s another reason why high earners don’t occasion more resentment: Many people understand how hard they have to work to get there. It also seems that most of the income gains of the top earners were related to performance pay—bonuses, in other words—and not wildly out-of-whack yearly salaries.5
  • It is also the case that any society with a lot of “threshold earners” is likely to experience growing income inequality. A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.
  • The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.
  • Why is the top 1 percent doing so well?
  • Steven N. Kaplan and Joshua Rauh have recently provided a detailed estimation of particular American incomes.6 Their data do not comprise the entire U.S. population, but from partial financial records they find a very strong role for the financial sector in driving the trend toward income concentration at the top. For instance, for 2004, nonfinancial executives of publicly traded companies accounted for less than 6 percent of the top 0.01 percent income bracket. In that same year, the top 25 hedge fund managers combined appear to have earned more than all of the CEOs from the entire S&P 500. The number of Wall Street investors earning more than $100 million a year was nine times higher than the public company executives earning that amount. The authors also relate that they shared their estimates with a former U.S. Secretary of the Treasury, one who also has a Wall Street background. He thought their estimates of earnings in the financial sector were, if anything, understated.
  • Many of the other high earners are also connected to finance. After Wall Street, Kaplan and Rauh identify the legal sector as a contributor to the growing spread in earnings at the top. Yet many high-earning lawyers are doing financial deals, so a lot of the income generated through legal activity is rooted in finance. Other lawyers are defending corporations against lawsuits, filing lawsuits or helping corporations deal with complex regulations. The returns to these activities are an artifact of the growing complexity of the law and government growth rather than a tale of markets per se. Finance aside, there isn’t much of a story of market failure here, even if we don’t find the results aesthetically appealing.
  • When it comes to professional athletes and celebrities, there isn’t much of a mystery as to what has happened. Tiger Woods earns much more, even adjusting for inflation, than Arnold Palmer ever did. J.K. Rowling, the first billionaire author, earns much more than did Charles Dickens. These high incomes come, on balance, from the greater reach of modern communications and marketing. Kids all over the world read about Harry Potter. There is more purchasing power to spend on children’s books and, indeed, on culture and celebrities more generally. For high-earning celebrities, hardly anyone finds these earnings so morally objectionable as to suggest that they be politically actionable. Cultural critics can complain that good schoolteachers earn too little, and they may be right, but that does not make celebrities into political targets. They’re too popular. It’s also pretty clear that most of them work hard to earn their money, by persuading fans to buy or otherwise support their product. Most of these individuals do not come from elite or extremely privileged backgrounds, either. They worked their way to the top, and even if Rowling is not an author for the ages, her books tapped into the spirit of their time in a special way. We may or may not wish to tax the wealthy, including wealthy celebrities, at higher rates, but there is no need to “cure” the structural causes of higher celebrity incomes.
  • to be sure, the high incomes in finance should give us all pause.
  • The first factor driving high returns is sometimes called by practitioners “going short on volatility.” Sometimes it is called “negative skewness.” In plain English, this means that some investors opt for a strategy of betting against big, unexpected moves in market prices. Most of the time investors will do well by this strategy, since big, unexpected moves are outliers by definition. Traders will earn above-average returns in good times. In bad times they won’t suffer fully when catastrophic returns come in, as sooner or later is bound to happen, because the downside of these bets is partly socialized onto the Treasury, the Federal Reserve and, of course, the taxpayers and the unemployed. (A stylized numerical sketch of this payoff pattern appears after this list.)
  • if you bet against unlikely events, most of the time you will look smart and have the money to validate the appearance. Periodically, however, you will look very bad. Does that kind of pattern sound familiar? It happens in finance, too. Betting against a big decline in home prices is analogous to betting against the Wizards. Every now and then such a bet will blow up in your face, though in most years that trading activity will generate above-average profits and big bonuses for the traders and CEOs.
  • To this mix we can add the fact that many money managers are investing other people’s money. For people who plan to stay with an investment bank for ten years or less, this investing strategy will work out very well most of the time. Everyone’s time horizon is a bit limited and you will bring in some nice years of extra returns and reap nice bonuses. And let’s say the whole thing does blow up in your face? What’s the worst that can happen? Your bosses fire you, but you will still have millions in the bank and that MBA from Harvard or Wharton. For the people actually investing the money, there’s barely any downside risk other than having to quit the party early. Furthermore, if everyone else made more or less the same mistake (very surprising major events, such as a busted housing market, affect virtually everybody), you’re hardly disgraced. You might even get rehired at another investment bank, or maybe a hedge fund, within months or even weeks.
  • Moreover, smart shareholders will acquiesce to or even encourage these gambles. They gain on the upside, while the downside, past the point of bankruptcy, is borne by the firm’s creditors. And will the bondholders object? Well, they might have a difficult time monitoring the internal trading operations of financial institutions. Of course, the firm’s trading book cannot be open to competitors, and that means it cannot be open to bondholders (or even most shareholders) either. So what, exactly, will they have in hand to object to?
  • Perhaps more important, government bailouts minimize the damage to creditors on the downside. Neither the Treasury nor the Fed allowed creditors to take any losses from the collapse of the major banks during the financial crisis. The U.S. government guaranteed these loans, either explicitly or implicitly. Guaranteeing the debt also encourages equity holders to take more risk. While current bailouts have not in general maintained equity values, and while share prices have often fallen to near zero following the bust of a major bank, the bailouts still give the bank a lifeline. Instead of the bank being destroyed, sometimes those equity prices do climb back out of the hole. This is true of the major surviving banks in the United States, and even AIG is paying back its bailout. For better or worse, we’re handing out free options on recovery, and that encourages banks to take more risk in the first place.
  • there is an unholy dynamic of short-term trading and investing, backed up by bailouts and risk reduction from the government and the Federal Reserve. This is not good. “Going short on volatility” is a dangerous strategy from a social point of view. For one thing, in so-called normal times, the finance sector attracts a big chunk of the smartest, most hard-working and most talented individuals. That represents a huge human capital opportunity cost to society and the economy at large. But more immediate and more important, it means that banks take far too many risks and go way out on a limb, often in correlated fashion. When their bets turn sour, as they did in 2007–09, everyone else pays the price.
  • And it’s not just the taxpayer cost of the bailout that stings. The financial disruption ends up throwing a lot of people out of work down the economic food chain, often for long periods. Furthermore, the Federal Reserve System has recapitalized major U.S. banks by paying interest on bank reserves and by keeping an unusually high interest rate spread, which allows banks to borrow short from Treasury at near-zero rates and invest in other higher-yielding assets and earn back lots of money rather quickly. In essence, we’re allowing banks to earn their way back by arbitraging interest rate spreads against the U.S. government. This is rarely called a bailout and it doesn’t count as a normal budget item, but it is a bailout nonetheless. This type of implicit bailout brings high social costs by slowing down economic recovery (the interest rate spreads require tight monetary policy) and by redistributing income from the Treasury to the major banks.
  • the “going short on volatility” strategy increases income inequality. In normal years the financial sector is flush with cash and high earnings. In implosion years a lot of the losses are borne by other sectors of society. In other words, financial crisis begets income inequality. Although the two are conceptually distinct phenomena, the political economy of income inequality is, in part, the political economy of finance. Simon Johnson tabulates the numbers nicely: From 1973 to 1985, the financial sector never earned more than 16 percent of domestic corporate profits. In 1986, that figure reached 19 percent. In the 1990s, it oscillated between 21 percent and 30 percent, higher than it had ever been in the postwar period. This decade, it reached 41 percent. Pay rose just as dramatically. From 1948 to 1982, average compensation in the financial sector ranged between 99 percent and 108 percent of the average for all domestic private industries. From 1983, it shot upward, reaching 181 percent in 2007.7
  • There’s a second reason why the financial sector abets income inequality: the “moving first” issue. Let’s say that some news hits the market and that traders interpret this news at different speeds. One trader figures out what the news means in a second, while the other traders require five seconds. Still other traders require an entire day or maybe even a month to figure things out. The early traders earn the extra money. They buy the proper assets early, at the lower prices, and reap most of the gains when the other, later traders pile on. Similarly, if you buy into a successful tech company in the early stages, you are “moving first” in a very effective manner, and you will capture most of the gains if that company hits it big.
  • The moving-first phenomenon sums to a “winner-take-all” market. Only some relatively small number of traders, sometimes just one trader, can be first. Those who are first will make far more than those who are fourth or fifth. This difference will persist, even if those who are fourth come pretty close to competing with those who are first. In this context, first is first and it doesn’t matter much whether those who come in fourth pile on a month, a minute or a fraction of a second later. Those who bought (or sold, as the case may be) first have captured and locked in most of the available gains. Since gains are concentrated among the early winners, and the closeness of the runners-up doesn’t so much matter for income distribution, asset-market trading thus encourages the ongoing concentration of wealth. Many investors make lots of mistakes and lose their money, but each year brings a new bunch of projects that can turn the early investors and traders into very wealthy individuals.
  • These two features of the problem—“going short on volatility” and “getting there first”—are related. Let’s say that Goldman Sachs regularly secures a lot of the best and quickest trades, whether because of its quality analysis, inside connections or high-frequency trading apparatus (it has all three). It builds up a treasure chest of profits and continues to hire very sharp traders and to receive valuable information. Those profits allow it to make “short on volatility” bets faster than anyone else, because if it messes up, it still has a large enough buffer to pad losses. This increases the odds that Goldman will repeatedly pull in spectacular profits.
  • Still, every now and then Goldman will go bust, or would go bust if not for government bailouts. But the odds are in any given year that it won’t because of the advantages it and other big banks have. It’s as if the major banks have tapped a hole in the social till and they are drinking from it with a straw. In any given year, this practice may seem tolerable—didn’t the bank earn the money fair and square by a series of fairly normal looking trades? Yet over time this situation will corrode productivity, because what the banks do bears almost no resemblance to a process of getting capital into the hands of those who can make most efficient use of it. And it leads to periodic financial explosions. That, in short, is the real problem of income inequality we face today. It’s what causes the inequality at the very top of the earning pyramid that has dangerous implications for the economy as a whole.
  • What about controlling bank risk-taking directly with tight government oversight? That is not practical. There are more ways for banks to take risks than even knowledgeable regulators can possibly control; it just isn’t that easy to oversee a balance sheet with hundreds of billions of dollars on it, especially when short-term positions are wound down before quarterly inspections. It’s also not clear how well regulators can identify risky assets. Some of the worst excesses of the financial crisis were grounded in mortgage-backed assets—a very traditional function of banks—not exotic derivatives trading strategies. Virtually any asset position can be used to bet long odds, one way or another. It is naive to think that underpaid, undertrained regulators can keep up with financial traders, especially when the latter stand to earn billions by circumventing the intent of regulations while remaining within the letter of the law.
  • For the time being, we need to accept the possibility that the financial sector has learned how to game the American (and UK-based) system of state capitalism. It’s no longer obvious that the system is stable at a macro level, and extreme income inequality at the top has been one result of that imbalance. Income inequality is a symptom, however, rather than a cause of the real problem. The root cause of income inequality, viewed in the most general terms, is extreme human ingenuity, albeit of a perverse kind. That is why it is so hard to control.
  • Another root cause of growing inequality is that the modern world, by so limiting our downside risk, makes extreme risk-taking all too comfortable and easy. More risk-taking will mean more inequality, sooner or later, because winners always emerge from risk-taking. Yet bankers who take bad risks (provided those risks are legal) simply do not end up with bad outcomes in any absolute sense. They still have millions in the bank, lots of human capital and plenty of social status. We’re not going to bring back torture, trial by ordeal or debtors’ prisons, nor should we. Yet the threat of impoverishment and disgrace no longer looms the way it once did, so we no longer can constrain excess financial risk-taking. It’s too soft and cushy a world.
  • Why don’t we simply eliminate the safety net for clueless or unlucky risk-takers so that losses equal gains overall? That’s a good idea in principle, but it is hard to put into practice. Once a financial crisis arrives, politicians will seek to limit the damage, and that means they will bail out major financial institutions. Had we not passed TARP and related policies, the United States probably would have faced unemployment rates of 25 percent or higher, as in the Great Depression. The political consequences would not have been pretty. Bank bailouts may sound quite interventionist, and indeed they are, but in relative terms they probably were the most libertarian policy we had on tap. They meant big one-time expenses, but, for the most part, they kept government out of the real economy (the General Motors bailout aside).
  • We probably don’t have any solution to the hazards created by our financial sector, not because plutocrats are preventing our political system from adopting appropriate remedies, but because we don’t know what those remedies are. Yet neither is another crisis immediately upon us. The underlying dynamic favors excess risk-taking, but banks at the current moment fear the scrutiny of regulators and the public and so are playing it fairly safe. They are sitting on money rather than lending it out. The biggest risk today is how few parties will take risks, and, in part, the caution of banks is driving our current protracted economic slowdown. According to this view, the long run will bring another financial crisis once moods pick up and external scrutiny weakens, but that day of reckoning is still some ways off.
  • Is the overall picture a shame? Yes. Is it distorting resource distribution and productivity in the meantime? Yes. Will it again bring our economy to its knees? Probably. Maybe that’s simply the price of modern society. Income inequality will likely continue to rise and we will search in vain for the appropriate political remedies for our underlying problems.
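  A minimal sketch of the payoff asymmetry described above, with made-up numbers: the premium, crash probability, crash size and the share of losses that gets socialized are all illustrative assumptions, not figures from the article. The point it illustrates is that a trader collecting a small gain in most years, with rare crash losses largely falling on someone else, comes out ahead in expectation even when the strategy as a whole destroys value.

    import random

    def short_volatility_career(years=10, premium=0.1, crash_prob=0.02,
                                crash_loss=5.0, socialized_share=0.8, seed=0):
        # Stylized "short volatility" career: a small premium is collected in
        # most years; with small probability a crash wipes out several years of
        # gains, but most of that loss is assumed to be socialized (creditors,
        # taxpayers) rather than borne by the trader. All parameter values are
        # illustrative assumptions.
        rng = random.Random(seed)
        trader_pnl, social_cost = 0.0, 0.0
        for _ in range(years):
            if rng.random() < crash_prob:
                trader_pnl -= crash_loss * (1 - socialized_share)  # trader's slice of the loss
                social_cost += crash_loss * socialized_share       # loss pushed onto others
            else:
                trader_pnl += premium  # "normal" year: small, steady gain
        return trader_pnl, social_cost

    # Averaging over many simulated careers: the trader's expected P&L stays
    # positive (about +0.8 with these numbers) even though trader gains plus
    # socialized losses net out to roughly zero or worse, which is the
    # asymmetry the passage points at.
    runs = [short_volatility_career(seed=s) for s in range(10000)]
    print(sum(r[0] for r in runs) / len(runs))  # average trader P&L
    print(sum(r[1] for r in runs) / len(runs))  # average socialized loss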
Weiye Loh

Roger Pielke Jr.'s Blog: Blind Spots in Australian Flood Policies

  • better management of flood risks in Australia will depend upon better data on flood risk. However, collecting such data has proven problematic
  • As many Queenslanders affected by January’s floods are realising, riverine flood damage is commonly excluded from household insurance policies. And this is unlikely to change until councils – especially in Queensland – stop dragging their feet and actively assist in developing comprehensive data insurance companies can use.
  • Why? Because there is often little available information that would allow an insurer to adequately price this flood risk. Without this, there is little economic incentive for insurers to accept this risk. It would be irresponsible for insurers to cover riverine flood without quantifying and pricing the risk accordingly.
  • The first step in establishing risk-adjusted premiums is to know the likelihood of the depth of flooding at each address. This information has to be address-specific because the severity of flooding can vary widely over small distances, for example, from one side of a road to the other.
  • A litany of reasons is given for withholding data. At times it seems that refusal stems from a view that insurance is innately evil. This is ironic in view of the gratuitous advice sometimes offered by politicians and commentators in the aftermath of extreme events, exhorting insurers to pay claims even when no legal liability exists and riverine flood is explicitly excluded from policies.
  • Risk Frontiers is involved in jointly developing the National Flood Information Database (NFID) for the Insurance Council of Australia with Willis Re, a reinsurance broking intermediary. NFID is a five year project aiming to integrate flood information from all city councils in a consistent insurance-relevant form. The aim of NFID is to help insurers understand and quantify their risk. Unfortunately, obtaining the base data for NFID from some local councils is difficult and sometimes impossible despite the support of all state governments for the development of NFID. Councils have an obligation to assess their flood risk and to establish rules for safe land development. However, many are antipathetic to the idea of insurance. Some states and councils have been very supportive – in New South Wales and Victoria, particularly. Some states have a central repository – a library of all flood studies and digital terrain models (digital elevation data). Council reluctance to release data is most prevalent in Queensland, where, unfortunately, no central repository exists.
  • Second, models of flood risk are sometimes misused:
  • many councils only undertake flood modelling in order to create a single design flood level, usually the so-called one-in-100 year flood. (For reasons given later, a better term is the flood with a 1% annual likelihood of being exceeded.)
  • Inundation maps showing the extent of the flood with a 1% annual likelihood of exceedance are increasingly common on council websites, even in Queensland. Unfortunately these maps say little about the depth of water at an address or, importantly, how depth varies for less probable floods. Insurance claims usually begin when the ground is flooded and increase rapidly as water rises above the floor level. At Windsor in NSW, for example, the difference in the water depth between the flood with a 1% annual chance of exceedance and the maximum possible flood is nine metres. In other catchments this difference may be as small as ten centimetres. The risk of damage is quite different in both cases and an insurer needs this information if it is to provide coverage in these areas.
  • The ‘one-in-100 year flood’ term is misleading. To many it is something that happens regularly once every 100 years — with the reliability of a bus timetable. It is still possible, though unlikely, that a flood of similar or even greater magnitude could happen twice in one year or three times in successive years.
  • The calculations underpinning this are not straightforward but the probability that an address exposed to a 1-in-100 year flood will experience such an event or greater over the lifetime of the house – 50 years say – is around 40%. Over the lifetime of a typical home mortgage – 25 years – the probability of occurrence is 22%. These are not good odds. (A short check of this arithmetic appears after this list.)
  • John McAneney of Risk Frontiers at Macquarie University in Sydney identifies some opportunities for better flood policies in Australia.
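  A quick check of the 40% and 22% figures quoted above, under the standard simplifying assumptions that each year is independent and carries the same 1% chance of exceedance:

    def prob_at_least_one_exceedance(annual_prob=0.01, years=50):
        # Probability of at least one flood at or above the 1%-annual-chance
        # level over `years` years, assuming independent years with the same
        # exceedance probability.
        return 1 - (1 - annual_prob) ** years

    for years in (25, 50):
        p = prob_at_least_one_exceedance(0.01, years)
        print(years, round(p, 3))  # 25 -> 0.222 (~22%), 50 -> 0.395 (~40%)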
Weiye Loh

Want people to get on board with a shift to clean energy? Shield them from economic ins...

  • The reality is that a bold new energy and climate change policy would inevitably result in dislocations in certain industries and upset long-established ways of life in many regions; in addition, it would lead to higher prices for basic commodities such as gas, home heating oil, and food. In societies where there are strong social safety nets -- universal healthcare, universal preschool, strong support for new parents, significant investments in public transportation, and sustained support for higher education -- the changes wrought by a paradigm shift in energy will tend not to result in hugely destabilizing effects across whole towns and communities. In fact, with good planning and investments in critical infrastructure, strong environmental policies can result in overall improvements in the quality of life for nearly everyone. Throughout much of the developed world, citizens are willing to pay prices for gasoline that would lead to riots in American streets, because they know that the government revenue raised by high gas taxes is used for programs that directly benefit them. In other words, ten-dollar-a-gallon gas isn’t such a big deal when everyone has great healthcare, great public transportation, and free high-quality schooling.
  • Americans are so battered and anxious right now. Median wages are flat, unemployment is high, politics is paralyzed. Middle-class families are one health problem away from ruin, and when they fall, there's no net. That kind of insecurity, as much as anything, explains the American reticence to launch bold new social programs.
  • Michael Levi points to a fantastic piece by Nassim Taleb and Mark Blyth wherein they approach a similar subject from a seemingly contrary angle, arguing that government efforts to suppress social and economic volatility can backfire. Without the experience of adjusting to small shocks as they come, we won't be prepared when the big shocks arrive:
  • Complex systems that have artificially suppressed volatility tend to become extremely fragile, while at the same time exhibiting no visible risks. In fact, they tend to be too calm and exhibit minimal variability as silent risks accumulate beneath the surface. Although the stated intention of political leaders and economic policymakers is to stabilize the system by inhibiting fluctuations, the result tends to be the opposite. These artificially constrained systems become prone to "Black Swans" -- that is, they become extremely vulnerable to large-scale events that lie far from the statistical norm and were largely unpredictable to a given set of observers. Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.
  • If a society provides a basic measure of health and economic security for its citizens, its citizens will be more tolerant of a little volatility/risk/ambition in its social and economic policy.
  • This gets at why I think it's extremely difficult to reconcile modern-day conservatism and serious efforts to address climate change (and future resource shortages, and various other sources of long-term risk). The U.S. conservative political program is devoted to increasing economic and social insecurity for average people and decreasing it for wealthy business owners. That is roughly the opposite of the approach you'd want to take if you want to increase society's resilience to the dangers approaching.
  • First there's this extremely smart piece from economist Jason Scorse. It makes an argument that I wish had gotten much more attention during the fight over the climate bill, to wit: "people are much more willing to support environmental policies that come with large risks and disruptions to their way of life when other policies are in place to shield them from excessive risk and instability."
yongernn teo

Eli Lilly Accused of Unethical Marketing of Zyprexa

  • Summary of the Unethical Marketing of Zyprexa by Eli Lilly: Eli Lilly is a global pharmaceutical company. In 2006 it was charged with unethical marketing of Zyprexa, its top-selling drug, which is approved only for the treatment of schizophrenia and bipolar disorder.
    Firstly, Eli Lilly downplayed in a report the risks of obesity and increased blood sugar associated with Zyprexa. Although it had been aware of these risks for at least a decade, it did not emphasize their significance, for fear of jeopardizing sales.
    Secondly, Eli Lilly ran a promotional campaign called Viva Zyprexa that encouraged off-label use of the drug in patients who had neither schizophrenia nor bipolar disorder. The campaign targeted elderly patients with dementia, even though the drug was not approved to treat dementia and could in fact increase the risk of death in older patients with dementia-related psychosis.
    All this was done to boost sales of Zyprexa and bring in more revenue for Eli Lilly; Zyprexa alone could bring in $4 billion of sales annually.
    Ethical question: To what extent should pharmaceutical companies go to inform potential consumers of the side effects of their drugs?
    Ethical problem: The information disseminated through marketing campaigns has to be true and transparent. There should not be any hidden agenda behind the amount of information being released. In this case, to prevent sales from plummeting, Eli Lilly downplayed the side effects of Zyprexa and encouraged off-label use. It is very important that pharmaceutical companies practice good ethics, as this concerns the health of their consumers. While one drug may act as a remedy for a health problem, its side effects could lead to other health problems. All this has to be conveyed to the consumer who exchanges his money for the product. Not being transparent and honest with the information of the pr…
Weiye Loh

Science-Based Medicine » Skepticism versus nihilism about cancer and science-...

  • I’m a John Ioannidis convert, and I accept that there is a lot of medical literature that is erroneous. (Just search for Dr. Ioannidis’ last name on this blog, and you’ll find copious posts praising him and discussing his work.) In fact, as I’ve pointed out, most medical researchers instinctively know that most new scientific findings will not hold up to scrutiny, which is why we rarely accept the results of a single study, except in unusual circumstances, as being enough to change practice. I also have pointed out many times that this is not necessarily a bad thing. Replication is key to verification of scientific findings, and more often than not provocative scientific findings are not replicated. Does that mean they shouldn’t be published?
  • As for pseudoscience, I’m half tempted to agree with Dr. Spector, but just not in the way he thinks. Unfortunately, over the last 20 years or so, there has been an increasing amount of pseudoscience in the medical literature in the form of “complementary and alternative medicine” (CAM) studies of highly improbable remedies or even virtually impossible ones (i.e., homeopathy). However, that does not appear to be what Dr. Spector is talking about, which is why I looked up his references. The second reference is to an SI article from 2009 entitled Science and Pseudoscience in Adult Nutrition Research and Practice. There, and only there, did I find out just what it is that Dr. Spector apparently means by “pseudoscience”: By pseudoscience, I mean the use of inappropriate methods that frequently yield wrong or misleading answers for the type of question asked. In nutrition research, such methods also often misuse statistical evaluations.
  • Dr. Spector doesn’t really know the difference between inadequately rigorous science and pseudoscience! Now, don’t get me wrong. I know that it’s not always easy to distinguish science from pseudoscience, especially at the fringes, but in general bad science has to go a lot further than Dr. Spector thinks to merit the term “pseudoscience.” It is clear (to me, at least) from his articles that Dr. Spector throws the term “pseudoscience” around rather more loosely than he should, using it as a pejorative for any clinical science less rigorous than a randomized, double-blind, placebo-controlled trial that meets FDA standards for approval of a drug (his pharma background coming to the fore, no doubt). Pseudoscience, Dr. Spector. You keep using that word. I do not think it means what you think it means. Indeed, I almost get the impression from his articles that Dr. Spector views any study that doesn’t reach FDA-level standards for drug approval to be pseudoscience.
  • Medical science, when it works well, tends to progress from basic science, to small pilot studies, to larger randomized studies, and then–only then–to those big, rigorous, insanely expensive randomized, double-blind, placebo-controlled trials. Dr. Spector mentions hierarchies of evidence, but he seems to fall into a false dichotomy, namely that if it’s not Level I evidence, it’s crap. The problem is, as Mark pointed out, in medicine we often don’t have Level I evidence for many questions. Indeed, for some questions, we will never have Level I evidence. Clinical medicine involves making decisions in the midst of uncertainty, sometimes extreme uncertainty.
  • Dr. Spector then proceeds to paint a picture of reckless physicians proceeding on crappy studies to pump women full of hormones. Actually, it was more than a bit more complicated than that. That was the time when I was in my medical training, and I remember the discussions we had regarding the strength (or lack thereof) of the epidemiological data and the lack of good RCTs looking at HRT. I also remember that nothing works as well to relieve menopausal symptoms as HRT, an observation we have been reminded of again since 2003, which is the year when the first big study came out implicating HRT in increasing the risk of breast cancer (more later).
  • I found a rather fascinating editorial in the New England Journal of Medicine from more than 20 years ago that discussed the state of the evidence back then with regard to estrogen and breast cancer: Evidence that estrogen increases the risk of breast cancer has been surprisingly difficult to obtain. Clinical and epidemiologic studies and studies in animals strongly suggest that endogenous estrogen plays a part in causing breast cancer. If so, exogenous estrogen should be a potent promoter of breast cancer. Although more than 20 case–control and prospective studies of the relation of breast cancer and noncontraceptive estrogen use have failed to demonstrate the expected association, relatively few women in these studies used estrogen for extended periods. Studies of the use of diethylstilbestrol and oral contraceptives suggest that a long exposure or latency may be necessary to show any association between hormone use and breast cancer. In the Swedish study, only six years of follow-up was needed to demonstrate an increased risk of breast cancer with the postmenopausal use of estradiol. It should be noted, however, that half the women in the subgroup that provided detailed data on the duration of hormone use had taken estrogen for many years before their base-line prescription status was defined. The duration of estrogen exposure in these women before the diagnosis of breast cancer was probably seriously underestimated; a short latency cannot be attributed to estradiol on the basis of these data. Other recent studies of the use of noncontraceptive estrogen suggest a slightly increased risk of breast cancer after 15 to 20 years’ use.
  • even now, the evidence is conflicting regarding HRT and breast cancer, with the preponderance of evidence suggesting that mixed HRT (estrogen and progestin) significantly increases the risk of breast cancer, while estrogen-alone HRT very well might not increase the risk of breast cancer at all or (more likely) only very little. Indeed, I was just at a conference all day Saturday where data demonstrating this very point were discussed by one of the speakers. None of this stops Dr. Spector from categorically labeling estrogen as a “carcinogen that causes breast cancers that kill women.” Maybe. Maybe not. It’s actually not that clear. The problem, of course, is that, consistent with the first primary reports of WHI results, the preponderance of evidence finding health risks due to HRT has indicted the combined progestin/estrogen combinations as unsafe.
Weiye Loh

The Black Swan of Cairo | Foreign Affairs

  • It is both misguided and dangerous to push unobserved risks further into the statistical tails of the probability distribution of outcomes and allow these high-impact, low-probability "tail risks" to disappear from policymakers' fields of observation.
  • Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.
  • Seeking to restrict variability seems to be good policy (who does not prefer stability to chaos?), so it is with very good intentions that policymakers unwittingly increase the risk of major blowups. And it is the same misperception of the properties of natural systems that led to both the economic crisis of 2007-8 and the current turmoil in the Arab world. The policy implications are identical: to make systems robust, all risks must be visible and out in the open -- fluctuat nec mergitur (it fluctuates but does not sink) goes the Latin saying.
  • Just as a robust economic system is one that encourages early failures (the concepts of "fail small" and "fail fast"), the U.S. government should stop supporting dictatorial regimes for the sake of pseudostability and instead allow political noise to rise to the surface. Making an economy robust in the face of business swings requires allowing risk to be visible; the same is true in politics.
  • Both the recent financial crisis and the current political crisis in the Middle East are grounded in the rise of complexity, interdependence, and unpredictability. Policymakers in the United Kingdom and the United States have long promoted policies aimed at eliminating fluctuation -- no more booms and busts in the economy, no more "Iranian surprises" in foreign policy. These policies have almost always produced undesirable outcomes. For example, the U.S. banking system became very fragile following a succession of progressively larger bailouts and government interventions, particularly after the 1983 rescue of major banks (ironically, by the same Reagan administration that trumpeted free markets). In the United States, promoting these bad policies has been a bipartisan effort throughout. Republicans have been good at fragilizing large corporations through bailouts, and Democrats have been good at fragilizing the government. At the same time, the financial system as a whole exhibited little volatility; it kept getting weaker while providing policymakers with the illusion of stability, illustrated most notably when Ben Bernanke, who was then a member of the Board of Governors of the U.S. Federal Reserve, declared the era of "the great moderation" in 2004.
  • Washington stabilized the market with bailouts and by allowing certain companies to grow "too big to fail." Because policymakers believed it was better to do something than to do nothing, they felt obligated to heal the economy rather than wait and see if it healed on its own.
  • The foreign policy equivalent is to support the incumbent no matter what. And just as banks took wild risks thanks to Greenspan's implicit insurance policy, client governments such as Hosni Mubarak's in Egypt for years engaged in overt plunder thanks to similarly reliable U.S. support.
  • Those who seek to prevent volatility on the grounds that any and all bumps in the road must be avoided paradoxically increase the probability that a tail risk will cause a major explosion.
  • In the realm of economics, price controls are designed to constrain volatility on the grounds that stable prices are a good thing. But although these controls might work in some rare situations, the long-term effect of any such system is an eventual and extremely costly blowup whose cleanup costs can far exceed the benefits accrued. The risks of a dictatorship, no matter how seemingly stable, are no different, in the long run, from those of an artificially controlled price.
  • Such attempts to institutionally engineer the world come in two types: those that conform to the world as it is and those that attempt to reform the world. The nature of humans, quite reasonably, is to intervene in an effort to alter their world and the outcomes it produces. But government interventions are laden with unintended -- and unforeseen -- consequences, particularly in complex systems, so humans must work with nature by tolerating systems that absorb human imperfections rather than seek to change them.
  • What is needed is a system that can prevent the harm done to citizens by the dishonesty of business elites; the limited competence of forecasters, economists, and statisticians; and the imperfections of regulation, not one that aims to eliminate these flaws. Humans must try to resist the illusion of control: just as foreign policy should be intelligence-proof (it should minimize its reliance on the competence of information-gathering organizations and the predictions of "experts" in what are inherently unpredictable domains), the economy should be regulator-proof, given that some regulations simply make the system itself more fragile. Due to the complexity of markets, intricate regulations simply serve to generate fees for lawyers and profits for sophisticated derivatives traders who can build complicated financial products that skirt those regulations.
  • The life of a turkey before Thanksgiving is illustrative: the turkey is fed for 1,000 days and every day seems to confirm that the farmer cares for it -- until the last day, when confidence is maximal. The "turkey problem" occurs when a naive analysis of stability is derived from the absence of past variations. Likewise, confidence in stability was maximal at the onset of the financial crisis in 2007.
  • The turkey problem for humans is the result of mistaking one environment for another. Humans simultaneously inhabit two systems: the linear and the complex. The linear domain is characterized by its predictability and the low degree of interaction among its components, which allows the use of mathematical methods that make forecasts reliable. In complex systems, there is an absence of visible causal links between the elements, masking a high degree of interdependence and extremely low predictability. Nonlinear elements are also present, such as those commonly known, and generally misunderstood, as "tipping points." Imagine someone who keeps adding sand to a sand pile without any visible consequence, until suddenly the entire pile crumbles. It would be foolish to blame the collapse on the last grain of sand rather than the structure of the pile, but that is what people do consistently, and that is the policy error.
  • Engineering, architecture, astronomy, most of physics, and much of common science are linear domains. The complex domain is the realm of the social world, epidemics, and economics. Crucially, the linear domain delivers mild variations without large shocks, whereas the complex domain delivers massive jumps and gaps. Complex systems are misunderstood, mostly because humans' sophistication, obtained over the history of human knowledge in the linear domain, does not transfer properly to the complex domain. Humans can predict a solar eclipse and the trajectory of a space vessel, but not the stock market or Egyptian political events. All man-made complex systems have commonalities and even universalities. Sadly, deceptive calm (followed by Black Swan surprises) seems to be one of those properties.
  • The system is responsible, not the components. But after the financial crisis of 2007-8, many people thought that predicting the subprime meltdown would have helped. It would not have, since it was a symptom of the crisis, not its underlying cause. Likewise, Obama's blaming "bad intelligence" for his administration's failure to predict the crisis in Egypt is symptomatic of both the misunderstanding of complex systems and the bad policies involved.
  • Obama's mistake illustrates the illusion of local causal chains -- that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect. The final episode of the upheaval in Egypt was unpredictable for all observers, especially those involved. As such, blaming the CIA is as foolish as funding it to forecast such events. Governments are wasting billions of dollars on attempting to predict events that are produced by interdependent systems and are therefore not statistically understandable at the individual level.
  • Political and economic "tail events" are unpredictable, and their probabilities are not scientifically measurable. No matter how many dollars are spent on research, predicting revolutions is not the same as counting cards; humans will never be able to turn politics into the tractable randomness of blackjack.
  • Most explanations being offered for the current turmoil in the Middle East follow the "catalysts as causes" confusion. The riots in Tunisia and Egypt were initially attributed to rising commodity prices, not to stifling and unpopular dictatorships. But Bahrain and Libya are countries with high GDPs that can afford to import grain and other commodities. Again, the focus is wrong even if the logic is comforting. It is the system and its fragility, not events, that must be studied -- what physicists call "percolation theory," in which the properties of the terrain are studied rather than those of a single element of the terrain.
  • When dealing with a system that is inherently unpredictable, what should be done? Differentiating between two types of countries is useful. In the first, changes in government do not lead to meaningful differences in political outcomes (since political tensions are out in the open). In the second type, changes in government lead to both drastic and deeply unpredictable changes.
  • Humans fear randomness -- a healthy ancestral trait inherited from a different environment. Whereas in the past, which was a more linear world, this trait enhanced fitness and increased chances of survival, it can have the reverse effect in today's complex world, making volatility take the shape of nasty Black Swans hiding behind deceptive periods of "great moderation." This is not to say that any and all volatility should be embraced. Insurance should not be banned, for example.
  • But alongside the "catalysts as causes" confusion sit two mental biases: the illusion of control and the action bias (the illusion that doing something is always better than doing nothing). This leads to the desire to impose man-made solutions
  • Variation is information. When there is no variation, there is no information. This explains the CIA's failure to predict the Egyptian revolution and, a generation before, the Iranian Revolution -- in both cases, the revolutionaries themselves did not have a clear idea of their relative strength with respect to the regime they were hoping to topple. So rather than subsidize and praise as a "force for stability" every tin-pot potentate on the planet, the U.S. government should encourage countries to let information flow upward through the transparency that comes with political agitation. It should not fear fluctuations per se, since allowing them to be in the open, as Italy and Lebanon both show in different ways, creates the stability of small jumps.
  • As Seneca wrote in De clementia, "Repeated punishment, while it crushes the hatred of a few, stirs the hatred of all . . . just as trees that have been trimmed throw out again countless branches." The imposition of peace through repeated punishment lies at the heart of many seemingly intractable conflicts, including the Israeli-Palestinian stalemate. Furthermore, dealing with seemingly reliable high-level officials rather than the people themselves prevents any peace treaty signed from being robust. The Romans were wise enough to know that only a free man under Roman law could be trusted to engage in a contract; by extension, only a free people can be trusted to abide by a treaty. Treaties that are negotiated with the consent of a broad swath of the populations on both sides of a conflict tend to survive. Just as no central bank is powerful enough to dictate stability, no superpower can be powerful enough to guarantee solid peace alone.
  • As Jean-Jacques Rousseau put it, "A little bit of agitation gives motivation to the soul, and what really makes the species prosper is not peace so much as freedom." With freedom comes some unpredictable fluctuation. This is one of life's packages: there is no freedom without noise -- and no stability without volatility.
Weiye Loh

11.01.97 - Misconceptions about the causes of cancer lead to skewed priorities and wast... - 0 views

  • "One of the big misconceptions is that artificial chemicals such as pesticides have a lot to do with human cancer, but that's just not true," says Bruce N. Ames, professor of biochemistry and molecular biology at the University of California at Berkeley and co-author of a new review of what is known about environmental pollution and cancer. "Nevertheless, it's conventional wisdom and society spends billions on this each year." "We consume more carcinogens in one cup of coffee than we get from the pesticide residues on all the fruits and vegetables we eat in a year," he adds.
  • While there may be many excellent reasons for cleaning up pollution of our air, water and soil, the researchers say, prevention of cancer is not one of them.
  • "The problem is that lifestyle changes are tough," says Gold, director of the Carcinogenic Potency Project at UC Berkeley's National Institute for Environmental Health Sciences Center and a senior scientist in the cell and molecular biology division at Lawrence Berkeley National Laboratory. "But by targeting pesticide residues as a major problem, we risk making fruits and vegetables more expensive and indirectly increasing cancer risks, especially among the poor."
  • ...10 more annotations...
  • Whereas 99.9 percent of all the chemicals we ingest are natural, 78 percent of the chemicals tested are synthetic. So when more than half of all synthetic chemicals are found to cause cancer in rodents, it's not surprising that people link cancer with synthetic chemicals. But of the natural chemicals in our diet that have been tested in animals, half also cause cancer, Gold says.
  • "We need to recognize that there are far more carcinogens in the natural world than in the synthetic world, and go after the important things, such as lifestyle change."
  • Misconception: Cancer rates are soaring. In fact, the researchers say, if lung cancer due to smoking is excluded, overall cancer deaths in the U.S. have declined 16 percent since 1950.
  • Misconception: Reducing pesticide residues is an effective way to prevent diet-related cancer. Because fruits and vegetables are of major importance in reducing cancer, the unintended effect of requiring expensive efforts to reduce the amount of pesticides remaining on fruits and vegetables will be to increase their cost. This will lead to an increase in cancer among low income people who no longer will be able to afford to eat them.
  • Misconception: Human exposures to carcinogens and other potential hazards are primarily due to synthetic chemicals. Americans actually eat about 10,000 times more natural pesticides from fruits and vegetables than synthetic pesticide residues on food. Natural pesticides are chemicals that plants produce to defend themselves against fungi, insects, and other predators. And half of all natural pesticides tested in rodents turn out to be rodent carcinogens. In addition, we consume many other carcinogens in foods because of the chemicals produced in cooking. In a single cup of roasted coffee, for example, the natural chemicals known to be rodent carcinogens are about equal in weight to an entire year's worth of synthetic pesticide residues.
  • Misconception: Cancer risks to humans can be assessed by standard high-dose animal cancer tests. In cancer tests, animals are given very high, nearly toxic doses. The effect on humans at lower doses is extrapolated from these results, as if the relationship were a straight line from high dose to low dose. However, the fact that half of all chemicals tested, whether natural or synthetic, turn out to cause cancer in rodents implies that this is an artifact of using high doses. High doses of any chemical can chronically kill cells and wound tissue, a risk factor for cancer. "Our conclusion is that the scientific evidence shows that there are high-dose effects," Ames says. "But even though government regulatory agencies recognize this, they still decide which synthetic chemicals to regulate based on linear extrapolation of high dose cancer tests in animals." (A worked toy example of such a linear extrapolation follows this list of notes.)
  • Misconception: Synthetic chemicals pose greater carcinogenic hazards than natural chemicals. Naturally occurring carcinogens represent an enormous background compared to the low-dose exposures to residues of synthetic chemicals such as pesticides, the researchers conclude. These results call for a reevaluation of whether animal cancer tests are really useful guides for protecting the public against minor hypothetical risks.
  • Misconception: The toxicology of synthetic chemicals is different from that of natural chemicals. No evidence exists for this, but the assumption could lead to unfortunate tradeoffs between natural and synthetic pesticides. Recently, for example, when a new variety of highly insect-resistant celery was introduced on a farm, the workers handling the celery developed rashes when they were exposed to sunlight. The pest-resistant celery turned out to contain almost eight times more natural pesticide in the form of psoralens -- chemicals known to cause cancer and genetic mutations -- than common celery.
  • Misconception: Pesticides and other synthetic chemicals are disrupting human hormones. Claims that synthetic chemicals with hormonal activity contribute to cancer and reduced sperm count ignore the fact that natural chemicals have hormone-like activity millions of times greater than do traces of synthetic chemicals. Rather, lifestyle factors -- lack of exercise, obesity, alcohol use and reproductive history -- are known to lead to marked changes in hormone levels in the body.
  • Misconception: Regulating low, hypothetical risks advances public health. Society -- primarily the private sector -- will spend an estimated $140 billion to comply with environmental regulations this year, according to projections by the Environmental Protection Agency. Much of this is aimed at reducing low-level human exposure to chemicals solely because they are rodent carcinogens, despite the fact that this rationale is flawed. Our improved ability to detect even minuscule concentrations of chemicals makes regulation even more expensive.
  •  
    BERKELEY -- Despite a lack of convincing evidence that pollution is an important cause of human cancer, this misconception drives government policy today and results in billions of dollars spent to clean up minuscule amounts of synthetic chemicals, say two UC Berkeley researchers.
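The "linear extrapolation from high-dose animal tests" criticised in the notes above can be shown as simple arithmetic. The sketch below is a hypothetical illustration only: the dose, tumour rate and human exposure are invented round numbers, not figures from the researchers; the point is what the linear, no-threshold assumption does with them.

    # Hypothetical linear (no-threshold) extrapolation from a high-dose rodent
    # test to a low human exposure. All numbers are invented for illustration.
    rodent_dose_mg_per_kg = 500.0        # near-toxic lifetime dose given to rodents
    rodent_excess_tumour_rate = 0.25     # extra tumour incidence observed at that dose

    slope = rodent_excess_tumour_rate / rodent_dose_mg_per_kg   # assumed risk per mg/kg

    human_exposure_mg_per_kg = 0.001     # tiny residue-level exposure
    extrapolated_risk = slope * human_exposure_mg_per_kg
    print(f"Extrapolated lifetime excess risk: {extrapolated_risk:.1e}")   # about 5e-07

    # The objection in the notes: the 25% tumour rate may itself be an artifact of
    # cell-killing at the huge dose, so the slope -- and hence this tiny
    # extrapolated risk -- may not describe anything real at low doses.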
Weiye Loh

Secrecy in the age of WikiLeaks - 1 views

  •  
    As government agencies look to leverage new technologies to communicate with the public, move more citizen services online, share services amongst agencies, share intelligence for national security purposes and collaborate with other nations and private industry, they will need to take a more open stance to secrecy and information sharing. But to mitigate risks, they need to take a more solid security stance at the same time. It is imperative for leaders at all levels within government (agencies, departments, contractors, etc.) to weigh the risks and benefits of making information more accessible and, once decided, put strong safeguards in place to ensure only those who need access can get access. Information leaks imply failures across multiple areas, particularly risk management, access control and confidentiality. The ongoing WikiLeaks exposé clearly shows that the threat is not always from external groups; it can be far more insidious when it stems from trusted individuals within an organisation.
Weiye Loh

Roger Pielke Jr.'s Blog: Flood Disasters and Human-Caused Climate Change - 0 views

  • [UPDATE: Gavin Schmidt at Real Climate has a post on this subject that  -- surprise, surprise -- is perfectly consonant with what I write below.] [UPDATE 2: Andy Revkin has a great post on the representations of the precipitation paper discussed below by scientists and related coverage by the media.]  
  • Nature published two papers yesterday that discuss increasing precipitation trends and a 2000 flood in the UK.  I have been asked by many people whether these papers mean that we can now attribute some fraction of the global trend in disaster losses to greenhouse gas emissions, or even recent disasters such as in Pakistan and Australia.
  • I hate to pour cold water on a really good media frenzy, but the answer is "no."  Neither paper actually discusses global trends in disasters (one doesn't even discuss floods) or even individual events beyond a single flood event in the UK in 2000.  But still, can't we just connect the dots?  Isn't it just obvious?  And only deniers deny the obvious, right?
  • ...12 more annotations...
  • What seems obvious is sometimes just wrong.  This of course is why we actually do research.  So why is it that we shouldn't make what seems to be an obvious connection between these papers and recent disasters, as so many have already done?
  • First, the Min et al. paper seeks to identify a GHG signal in global precipitation over the period 1950-1999.  They focus on one-day and five-day measures of precipitation.  They do not discuss streamflow or damage.  For many years, an upwards trend in precipitation has been documented, and attributed to GHGs, even back to the 1990s (I co-authored a paper on precipitation and floods in 1999 that assumed a human influence on precipitation, PDF), so I am unsure what is actually new in this paper's conclusions.
  • However, accepting that precipitation has increased and can be attributed in some part to GHG emissions, there have not been shown corresponding increases in streamflow (floods)  or damage. How can this be?  Think of it like this -- Precipitation is to flood damage as wind is to windstorm damage.  It is not enough to say that it has become windier to make a connection to increased windstorm damage -- you need to show a specific increase in those specific wind events that actually cause damage. There are a lot of days that could be windier with no increase in damage; the same goes for precipitation.
  • My understanding of the literature on streamflow is that increases in peak streamflow commensurate with increases in precipitation have not been shown, and this is a robust finding across the literature.  For instance, one recent review concludes: Floods are of great concern in many areas of the world, with the last decade seeing major fluvial events in, for example, Asia, Europe and North America. This has focused attention on whether or not these are a result of a changing climate. River flows calculated from outputs from global models often suggest that high river flows will increase in a warmer, future climate. However, the future projections are not necessarily in tune with the records collected so far – the observational evidence is more ambiguous. A recent study of trends in long time series of annual maximum river flows at 195 gauging stations worldwide suggests that the majority of these flow records (70%) do not exhibit any statistically significant trends. Trends in the remaining records are almost evenly split between having a positive and a negative direction. (A toy sketch of this kind of station-by-station trend test appears after this list of notes.)
  • Absent an increase in peak streamflows, it is impossible to connect the dots between increasing precipitation and increasing floods.  There are of course good reasons why a linkage between increasing precipitation and peak streamflow would be difficult to make, such as the seasonality of the increase in rain or snow, the large variability of flooding and the human influence on river systems.  Those difficulties of course translate directly to a difficulty in connecting the effects of increasing GHGs to flood disasters.
  • Second, the Pall et al. paper seeks to quantify the increased risk of a specific flood event in the UK in 2000 due to greenhouse gas emissions.  It applies a methodology that was previously used with respect to the 2003 European heatwave. Taking the paper at face value, it clearly states that in England and Wales, there has not been an increasing trend in precipitation or floods.  Thus, floods in this region are not a contributor to the global increase in disaster costs.  Further, there has been no increase in Europe in normalized flood losses (PDF).  Thus, the Pall et al. paper is focused on attribution in the context of a single event, not on trend detection in the region that it examines, much less any broader context.
  • More generally, the paper utilizes a seasonal forecast model to assess risk probabilities.  Given the performance of seasonal forecast models in actual prediction mode, I would expect many scientists to remain skeptical of this approach to attribution. Of course, if this group can show an improvement in the skill of actual seasonal forecasts by using greenhouse gas emissions as a predictor, they will have a very convincing case.  That is a high hurdle.
  • In short, the new studies are interesting and add to our knowledge.  But they do not change the state of knowledge related to trends in global disasters and how they might be related to greenhouse gases.  But even so, I expect that many will still want to connect the dots between greenhouse gas emissions and recent floods.  Connecting the dots is fun, but it is not science.
  • Jessica Weinkle said...
  • The thing about the Nature articles is that Nature itself made the leap from the science findings to damages in the News piece by Q. Schiermeier through the decision to bring up the topic of insurance. (Not to mention that which is symbolically represented merely by the journal’s cover this week). With what I (maybe, naively) believe to be a particularly ballsy move, the article quoted Muir-Wood, an industry scientist. However, what he is quoted as saying is admirably clever. Initially it is stated that Dr. Muir-Wood backs the notion that one cannot put the blame for increased losses on climate change. Then, the article ends with a quote from him, “If there’s evidence that risk is changing, then this is something we need to incorporate in our models.”
  • This is a very slippery slope and a brilliant double-dog dare. Without doing anything but sitting back and watching the headlines, one can form the argument that “science” supports the remodeling of the hazard risk above the climatological average and is more important than the risks stemming from socioeconomic factors. The reinsurance industry itself has published that socioeconomic factors far outweigh changes in the hazard where losses are concerned. The point (and that which has particularly gotten my knickers in a knot) is that Nature, et al. may wish to consider what it is that they want to accomplish. Is it greater involvement of federal governments in the insurance/reinsurance industry on the premise that climate change is too great a loss risk for private industry alone regardless of the financial burden it imposes? The move of insurance mechanisms into all corners of the earth under the auspices of climate change adaptation? Or simply a move to bolster prominence, regardless of whose back it breaks - including their own, if any of them are proud owners of a home mortgage? How much faith does one have in their own model when they are told that hundreds of millions of dollars in the global economy are being bet against the odds that their models produce?
  • What Nature says matters to the world; what scientists say matters to the world- whether they care for the responsibility or not. That is after all, the game of fame and fortune (aka prestige).
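The review quoted in the notes above reports that about 70% of long annual-maximum flow records show no statistically significant trend. As a rough illustration of what a station-by-station trend test involves, the sketch below applies a Kendall rank correlation against time (via scipy) to synthetic flow records; the station count, record length and trend sizes are invented, and this is not the method of the cited study, just a toy of the same general kind.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_stations, n_years = 195, 60          # invented: number of stations and record length

    significant = 0
    for _ in range(n_stations):
        # Synthetic annual-maximum flows: mostly pure noise, occasionally a clear trend
        trend = rng.choice([0.0, 0.2, -0.2], p=[0.7, 0.15, 0.15])
        flows = 100 + trend * np.arange(n_years) + rng.normal(0, 10, n_years)
        tau, p_value = stats.kendalltau(np.arange(n_years), flows)
        if p_value < 0.05:
            significant += 1

    print(f"{significant} of {n_stations} synthetic records show a 'significant' trend")

Whatever the count turns out to be, the exercise makes the quoted finding easier to read: "no significant trend at 70% of stations" is a station-by-station statement about long records, not a claim about any single flood.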
Weiye Loh

Effect of alcohol on risk of coronary heart diseas... [Vasc Health Risk Manag. 2006] - ... - 0 views

  • Studies of the effects of alcohol consumption on health outcomes should recognise the methodological biases they are likely to face, and design, analyse and interpret their studies accordingly. While regular moderate alcohol consumption during middle-age probably does reduce vascular risk, care should be taken when making general recommendations about safe levels of alcohol intake. In particular, it is likely that any promotion of alcohol for health reasons would do substantially more harm than good.
  • The consistency in the vascular benefit associated with moderate drinking (compared with non-drinking) observed across different studies, together with the existence of credible biological pathways, strongly suggests that at least some of this benefit is real.
  • However, because of biases introduced by: choice of reference categories; reverse causality bias; variations in alcohol intake over time; and confounding, some of it is likely to be an artefact. For heavy drinking, different study biases have the potential to act in opposing directions, and as such, the true effects of heavy drinking on vascular risk are uncertain. However, because of the known harmful effects of heavy drinking on non-vascular mortality, the problem is an academic one.
  •  
    Studies of the effects of alcohol consumption on health outcomes should recognise the methodological biases they are likely to face, and design, analyse and interpret their studies accordingly. While regular moderate alcohol consumption during middle-age probably does reduce vascular risk, care should be taken when making general recommendations about safe levels of alcohol intake.
Weiye Loh

BioCentre - 0 views

  • Humanity’s End. The main premise of the book is that proposals that would supposedly promise to make us smarter like never before or add thousands of years to our lives seem rather far-fetched and the domain of mere fantasy. However, it is these very proposals which form the basis of many of the ideas and thoughts presented by advocates of radical enhancement and which are beginning to move from the sidelines to the centre of mainstream discussion. A variety of technologies and therapies are being presented to us as options to expand our capabilities and capacities in order for us to become something other than human.
  • Agar takes issue with this and argues against radical human enhancement. He structures his analysis and discussion by focusing on four key figures and their proposals which help to form the core of the case for radical enhancement debate.  First to be examined by Agar is Ray Kurzweil who argues that Man and Machine will become one as technology allows us to transcend our biology. Second, is Aubrey de Grey who is a passionate advocate and pioneer of anti-ageing therapies which allow us to achieve “longevity escape velocity”. Next is Nick Bostrom, a leading transhumanist who defends the morality and rationality of enhancement and finally James Hughes who is a keen advocate of a harmonious democracy of the enhanced and un-enhanced.
  • He avoids falling into any of the pitfalls of basing his argument solely upon the “playing God” question but instead seeks to posit a well founded argument in favour of the precautionary principle.
  • ...10 more annotations...
  • Agar directly tackles Hughes’ ideas of a “democratic transhumanism.” Here, as post-humans and humans live shoulder to shoulder in wonderful harmony, all persons have access to the technologies they want in order to promote their own flourishing.  Undergirding all of this is the belief that no human should feel pressurised to become enhanced. Agar finds no comfort in this and instead can foresee a situation where it would be very difficult for humans to ‘choose’ to remain human.  The pressure to radically enhance would be considerable given that the radically enhanced would no doubt occupy the positions of power in society and would consider the full use of enhancement techniques a moral imperative for the good of society.  For those able to withstand this pressure, a new underclass would no doubt emerge between the enhanced and the un-enhanced. This is precisely the kind of society which Hughes appears to be overly optimistic will not emerge, but which is more akin to Lee Silver’s prediction of the future with the distinction made between the “GenRich” and the “naturals”.  This being the case, the author proposes that we have two options: radical enhancement is either enforced across the board or banned outright. It is the latter option which Agar favours but crucially does not elaborate on further, so it is unclear how he would attempt such a ban given the complexity of the issue. This is disappointing, as any initial reflections the author felt able to offer would have added to the discussion and further strengthened his line of argument.
  • A Transhuman Manifesto: The final focus for Agar is James Hughes, who published his transhumanist manifesto Citizen Cyborg in 2004. Given the direct connection with politics and public policy, this for me was a particularly interesting read. The basic premise of Hughes’ argument is that once humans and post-humans recognise each other as citizens, this will mark the point at which they will be able to get along with each other.
  • Agar takes to task the argument Bostrom made with Toby Ord, concerning claims against enhancement. Bostrom and Ord argue that it boils down to a preference for the status quo; current human intellects and life spans are preferred and deemed best because they are what we have now and what we are familiar with (p. 134).  Agar discusses the fact that in his view, Bostrom falls into a focalism – focusing on and magnifying the positives whilst ignoring the negative implications.  Moreover, Agar goes on to develop and reiterate his earlier point that the sort of radical enhancements Bostrom et al. enthusiastically support and promote take us beyond what is human, so that those enhanced are no longer human. It therefore cannot be said to be human enhancement, given that the traits or capacities such enhancement affords us would be in many respects superior to ours, but they would not be ours.
  • With his law of accelerating returns and talk of the Singularity Ray Kurzweil proposes that we are speeding towards a time when our outdated systems of neurons and synapses will be traded for far more efficient electronic circuits, allowing us to become artificially super-intelligent and transferring our minds from brains into machines.
  • Having laid out the main ideas and thinking behind Kurzweil’s proposals, Agar makes the perceptive comment that despite the apparent appeal of greater processing power, the result would nevertheless no longer be human. Introducing chips into the human body and linking the human nervous system to computers, as per Ray Kurzweil’s proposals, will prove interesting, but it goes beyond merely creating a copy of us so that future replication and uploading can take place. Rather, it will constitute something more akin to an upgrade. The electrochemical signals the brain uses to achieve thought travel at 100 metres per second. This is impressive, but contrast it with the electrical signals in a computer, which travel at 300 million metres per second, and the distinction is clear. If the predictions are true, how will such radically enhanced and empowered beings live alongside the unenhanced, and what will their quality of life really be? In response, Agar favours something he calls “rational biological conservatism” (pg. 57), where we set limits on how intelligent we can become, in light of the fact that it will never be rational for human beings to completely upload their minds onto computers.
  • Agar then proceeds to argue that in the pursuit of the enhanced capacities and capabilities Kurzweil proposes, we might accidentally undermine capacities of equal value. This line of argument would find much sympathy from those who consider human organisms in “ecological” terms, representing a profound interconnectedness which, when interfered with, presents a series of unknown and unexpected consequences. In other words, our species-specific form of intelligence may well be linked to a species-specific form of desire. Thus, if we start building upon and enhancing our capacity to protect and promote deeply held convictions and beliefs, then, due to this interconnectedness, we may well affect and remove our desire to perform such activities (page 70). Agar’s subsequent discussion and reference to the work of Jerry Fodor, the philosopher and cognitive scientist, is particularly helpful in terms of the modular functioning of the mind and the implications of human-friendly AI versus human-unfriendly AI.
  • In terms of the author’s discussion of Aubrey de Grey, what is refreshing to read from the outset is the author’s clear grasp of Aubrey’s ideas and motivation. Some make the mistake of thinking he is the man who wants to live forever, when in actual fact this is not the case.  De Grey wants to reverse the ageing process - his Strategies for Engineered Negligible Senescence (SENS) - so that people live longer and healthier lives. Establishing this clear distinction affords the author the opportunity to offer more grounded critiques of de Grey’s ideas than some of his other critics manage. The author makes plain that de Grey’s immediate goal is to achieve longevity escape velocity (LEV), where anti-ageing therapies add years to life expectancy faster than age consumes them.
  • In weighing up the benefits of living significantly longer lives, Agar posits a compelling argument that I had not fully seen before. In terms of risk, those radically enhanced to live longer may actually be the most risk-averse and fearful people alive. Taking the example of driving a car, a forty-year-old senescing human being who gets into their car to drive to work and is involved in a fatal accident “stands to lose, at most, a few healthy, youthful years and a slightly larger number of years with reduced quality” (p.116). In stark contrast, a negligibly senescent being who drives a car and is involved in an accident resulting in their death stands to lose, on average, one thousand healthy, youthful years (p.116).
  • De Grey’s response to this seems a little flippant: with the end of ageing comes an increased sense of risk-aversion, so the desire for risky activities such as driving will no longer be prevalent. Moreover, because we are living for longer, we will not be in such a hurry to get to places!  Virtual reality comes into its own at this point as a means by which the negligibly senescent ‘adrenaline junkie’ can engage in such activities but without the associated risks. But surely the risk is part of the reason why they would want to engage in snowboarding, bungee jumping et al. in the first place. De Grey’s strategy seemingly fails to appreciate the extent to which human beings want “direct” contact with the “real” world.
  • Continuing this idea further, though, Agar’s subsequent discussion of the role of fire-fighters is an interesting one.  A negligibly senescent fire-fighter may stand to lose more when trapped in a burning inferno, but being negligibly senescent means they are a better fire-fighter by virtue of increased vitality. Having recently heard de Grey speak and had the privilege of discussing his ideas further with him, I found Agar’s discussion of de Grey a particular highlight of the book; it made for an engaging read. Whilst expressing concern and doubt in relation to de Grey’s ideas, Agar is nevertheless quick and gracious enough to acknowledge that if such therapies could be achieved, then de Grey is probably the best person to comment on and achieve them, given the depth of knowledge and understanding he has built up in this area.
Weiye Loh

Singapore M.D.: Whose "health" is it anyway? - 0 views

  • leaving aside the fact that from the figures given by Prof Feng, about 80 per cent of obese people are NOT "perfectly healthy with normal cholesterol and blood sugar", and 70 per cent of people who die suddenly of heart attacks ARE obese (see my take on the 'fat but fit' argument here), and that Prof Feng has written in a previous letter of obesity being "a serious medical problem and [that] studies in the United States show that obesity will be the No. 1 public health problem and cause of death in five years' time", I am amused by Prof Feng's definition of good health as "not a number... [but] a sense of well-being physically, mentally, socially and spiritually".
  • much of what we do in "medicine" today is about numbers. Your "weight, body mass index, how often you jog or the number of kilometres you run", your "cholesterol and blood sugar", your smoking, alcohol intake, exercise, sexual behaviour, diet and family history are all quantified and studied, because they give us an idea of your risk for certain diseases. Our interventions, pharmacological or otherwise, aim to modify or reduce these risks. These are numbers that translate to concrete events in real life. You may argue that one can have bad risk factors and still have a sense of "physical, mental, social and spiritual well-being", in which case you don't need a doctor or drugs to make you feel better - but that doesn't mean you are not going to die of a heart attack at 40 either.
  • The problem with using the term "well-being" in defining something as important as healthcare or medicine is that it is a vague term (a weasel word, I like to call it) that allows quacks to ply their trade, and for people to medicalise their problems of living - and that is something Prof Feng disapproved of, isn't it? Do I have a better definition for "health"? Well, not yet - but I certainly don't think my job is only about giving people "a sense of well-being".
  •  
    Whose "health" is it anyway? Friday, July 30, 2010 Posted by admin at 12:37 PM | The problem with us doctors is, we can't quite make up our minds on what constitute "health" or "real medicine".
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • ...24 more annotations...
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What  eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2) (A small simulation after this list of notes illustrates the same confusion with P values.)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics, but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretation?
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong Science fails to face the shortcomings of statistics
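The confusion of conditionals described in these notes (the "transposed conditional") can be made concrete with a small simulation. The sketch below is illustrative only: it assumes hypothetical numbers (1,000 independent studies, 10% of which test a real effect, 50 subjects per arm, a modest true effect), uses numpy and scipy's two-sample t-test, and none of its parameters come from the article.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies = 1000          # hypothetical independent studies
    true_effect_rate = 0.10   # assume only 10% of tested hypotheses are real
    n_per_arm = 50            # subjects per group in each study
    effect_size = 0.3         # modest true effect, in standard-deviation units

    false_pos = true_pos = 0
    for _ in range(n_studies):
        has_effect = rng.random() < true_effect_rate
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect_size if has_effect else 0.0, 1.0, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        if p < 0.05:
            if has_effect:
                true_pos += 1
            else:
                false_pos += 1

    print(f"'Significant' findings: {true_pos + false_pos}")
    print(f"Share of those that are flukes: {false_pos / (true_pos + false_pos):.0%}")

Each individual test keeps its false-positive rate at 5 percent, yet under these assumptions roughly half of the "statistically significant" findings come from studies where no effect exists -- which is exactly the gap between P(result given no effect) and P(no effect given result) that the notes describe.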
Weiye Loh

Climate change and extreme flooding linked by new evidence | George Monbiot | Environme... - 0 views

  • Two studies suggest for the first time a clear link between global warming and extreme precipitation
  • There's a sound rule for reporting weather events that may be related to climate change. You can't say that a particular heatwave or a particular downpour – or even a particular freeze – was definitely caused by human emissions of greenhouse gases. But you can say whether these events are consistent with predictions, or that their likelihood rises or falls in a warming world.
  • Weather is a complex system. Long-running trends, natural fluctuations and random patterns are fed into the global weather machine, and it spews out a series of events. All these events will be influenced to some degree by global temperatures, but it's impossible to say with certainty that any of them would not have happened in the absence of man-made global warming.
  • ...5 more annotations...
  • over time, as the data build up, we begin to see trends which suggest that rising temperatures are making a particular kind of weather more likely to occur. One such trend has now become clearer. Two new papers, published by Nature, should make us sit up, as they suggest for the first time a clear link between global warming and extreme precipitation (precipitation means water falling out of the sky in any form: rain, hail or snow).
  • We still can't say that any given weather event is definitely caused by man-made global warming. But we can say, with an even higher degree of confidence than before, that climate change makes extreme events more likely to happen.
  • One paper, by Seung-Ki Min and others, shows that rising concentrations of greenhouse gases in the atmosphere have caused an intensification of heavy rainfall events over some two-thirds of the weather stations on land in the northern hemisphere. The climate models appear to have underestimated the contribution of global warming to extreme rainfall: it's worse than we thought it would be.
  • The other paper, by Pardeep Pall and others, shows that man-made global warming is very likely to have increased the probability of severe flooding in England and Wales, and could well have been behind the extreme events in 2000. The researchers ran thousands of simulations of the weather in autumn 2000 (using idle time on computers made available by a network of volunteers) with and without the temperature rises caused by man-made global warming. They found that, in nine out of 10 cases, man-made greenhouse gases increased the risks of flooding. This is probably as solid a signal as simulations can produce, and it gives us a clear warning that more global heating is likely to cause more floods here.
  • As Richard Allan points out, also in Nature, the warmer the atmosphere is, the more water vapour it can carry. There's even a formula which quantifies this: 6-7% more moisture in the air for every degree of warming near the Earth's surface. But both models and observations also show changes in the distribution of rainfall, with moisture concentrating in some parts of the world and fleeing from others: climate change is likely to produce both more floods and more droughts. (A small worked example of how this rate compounds with warming follows these notes.)
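The 6-7% per degree figure quoted above compounds with the amount of warming. The short sketch below just works that arithmetic for a few illustrative warming amounts; the warming values are arbitrary choices, not projections from the papers.

    # Moisture-holding capacity rises roughly 6-7% per degree C of near-surface
    # warming; compounding that rate for a few illustrative amounts of warming:
    for rate in (0.06, 0.07):
        for warming_c in (1, 2, 4):
            increase = (1 + rate) ** warming_c - 1
            print(f"{rate:.0%}/degree C, +{warming_c} C: about {increase:.0%} more moisture")

At 7% per degree, for example, four degrees of warming corresponds to roughly 30% more moisture available to fall as rain, hail or snow.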
Weiye Loh

Alzheimer's Studies Find New Genetic Links - NYTimes.com - 0 views

  • The two largest studies of Alzheimer’s disease have led to the discovery of no fewer than five genes that provide intriguing new clues to why the disease strikes and how it progresses.
  • For years, there have been unproven but persistent hints that cholesterol and inflammation are part of the disease process. People with high cholesterol are more likely to get the disease. Strokes and head injuries, which make Alzheimer’s more likely, also cause brain inflammation. Now, some of the newly discovered genes appear to bolster this line of thought, because some are involved with cholesterol and others are linked to inflammation or the transport of molecules inside cells.
  • By themselves, the genes are not nearly as important a factor as APOE, a gene discovered in 1995 that greatly increases risk for the disease: by 400 percent if a person inherits a copy from one parent, by 1,000 percent if from both parents. (A short arithmetic note after this list shows how these percentages translate into relative risks.)
  • ...7 more annotations...
  • In contrast, each of the new genes increases risk by no more than 10 to 15 percent; for that reason, they will not be used to decide if a person is likely to develop Alzheimer’s. APOE, which is involved in metabolizing cholesterol, “is in a class of its own,” said Dr. Rudolph Tanzi, a neurology professor at Harvard Medical School and an author of one of the papers.
  • But researchers say that even a slight increase in risk helps them in understanding the disease and developing new therapies. And like APOE, some of the newly discovered genes appear to be involved with cholesterol.
  • The other paper is by researchers in Britain, France and other European countries with contributions from the United States. They confirmed the genes found by the American researchers and added one more gene.
  • The American study got started about three years ago when Gerard D. Schellenberg, a pathology professor at the University of Pennsylvania, went to the National Institutes of Health with a complaint and a proposal. Individual research groups had been doing their own genome studies but not having much success, because no one center had enough subjects. In an interview, Dr. Schellenberg said that he had told Dr. Richard J. Hodes, director of the National Institute on Aging, the small genomic studies had to stop, and that Dr. Hodes had agreed. These days, Dr. Hodes said, “the old model in which researchers jealously guarded their data is no longer applicable.”
  • So Dr. Schellenberg set out to gather all the data he could on Alzheimer’s patients and on healthy people of the same ages. The idea was to compare one million positions on each person’s genome to determine whether some genes were more common in those who had Alzheimer’s. “I spent a lot of time being nice to people on the phone,” Dr. Schellenberg said. He got what he wanted: nearly every Alzheimer’s center and Alzheimer’s geneticist in the country cooperated. Dr. Schellenberg and his colleagues used the mass of genetic data to do an analysis and find the genes and then, using two different populations, to confirm that the same genes were conferring the risk. That helped assure the investigators that they were not looking at a chance association. It was a huge effort, Dr. Mayeux said. Many medical centers had Alzheimer’s patients’ tissue sitting in freezers. They had to extract the DNA and do genome scans.
  • “One of my jobs was to make sure the Alzheimer’s cases really were cases — that they had used some reasonable criteria” for diagnosis, Dr. Mayeux said. “And I had to be sure that people who were unaffected really were unaffected.”
  • Meanwhile, the European group, led by Dr. Julie Williams of the School of Medicine at Cardiff University, was engaged in a similar effort. Dr. Schellenberg said the two groups compared their results and were reassured that they were largely finding the same genes. “If there were mistakes, we wouldn’t see the same things,” he added. Now the European and American groups are pooling their data to do an enormous study, looking for genes in the combined samples. “We are upping the sample size,” Dr. Schellenberg said. “We are pretty sure more stuff will pop out.”
  •  
    Gene Study Yields
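The percentages quoted in the notes above are easier to read as relative risks: "increases risk by X percent" means multiplying the baseline risk by 1 + X/100. The sketch below simply restates the quoted figures that way; the baseline is an arbitrary placeholder used only for comparison.

    # "Increases risk by 400%" means 5 times the baseline risk, not a 4-percentage-point
    # bump. Percentages are the ones quoted in the notes; the baseline is arbitrary.
    baseline = 1.0
    for label, pct_increase in [("APOE, one copy", 400),
                                ("APOE, two copies", 1000),
                                ("each newly found variant (upper bound)", 15)]:
        relative_risk = baseline * (1 + pct_increase / 100)
        print(f"{label}: {relative_risk:.2f}x baseline risk")

On that reading, a single APOE copy means roughly 5 times the baseline risk and two copies roughly 11 times, while each newly discovered gene adds at most about 1.15 times -- which is why the notes say the new genes will not be used for individual prediction.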
Jody Poh

Online data privacy - 12 views

I think another question has to be brought up: Is the information meant to be public or private? If the information is public, then users should know they are risking identity theft by putting in...

privacy

Weiye Loh

Skepticblog » Further Thoughts on the Ethics of Skepticism - 0 views

  • My recent post “The War Over ‘Nice’” (describing the blogosphere’s reaction to Phil Plait’s “Don’t Be a Dick” speech) has topped out at more than 200 comments.
  • Many readers appear to object (some strenuously) to the very ideas of discussing best practices, seeking evidence of efficacy for skeptical outreach, matching strategies to goals, or encouraging some methods over others. Some seem to express anger that a discussion of best practices would be attempted at all. 
  • No Right or Wrong Way? The milder forms of these objections run along these lines: “Everyone should do their own thing.” “Skepticism needs all kinds of approaches.” “There’s no right or wrong way to do skepticism.” “Why are we wasting time on these abstract meta-conversations?”
  • ...12 more annotations...
  • More critical, in my opinion, is the implication that skeptical research and communication happens in an ethical vacuum. That just isn’t true. Indeed, it is dangerous for a field which promotes and attacks medical treatments, accuses people of crimes, opines about law enforcement practices, offers consumer advice, and undertakes educational projects to pretend that it is free from ethical implications — or obligations.
  • there is no monolithic “one true way to do skepticism.” No, the skeptical world does not break down to nice skeptics who get everything right, and mean skeptics who get everything wrong. (I’m reminded of a quote: “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”) No one has all the answers. Certainly I don’t, and neither does Phil Plait. Nor has anyone actually proposed a uniform, lockstep approach to skepticism. (No one has any ability to enforce such a thing, in any event.)
  • However, none of that implies that all approaches to skepticism are equally valid, useful, or good. As in other fields, various skeptical practices do more or less good, cause greater or lesser harm, or generate various combinations of both at the same time. For that reason, skeptics should strive to find ways to talk seriously about the practices and the ethics of our field. Skepticism has blossomed into something that touches a lot of lives — and yet it is an emerging field, only starting to come into its potential. We need to be able to talk about that potential, and about the pitfalls too.
  • All of the fields from which skepticism borrows (such as medicine, education, psychology, journalism, history, and even arts like stage magic and graphic design) have their own standards of professional ethics. In some cases those ethics are well-explored professional fields in their own right (consider medical ethics, a field with its own academic journals and doctoral programs). In other cases those ethical guidelines are contested, informal, vague, or honored more in the breach. But in every case, there are serious conversations about the ethical implications of professional practice, because those practices impact people’s lives. Why would skepticism be any different?
  • Skeptrack speaker Barbara Drescher (a cognitive psychologist who teaches research methodology) described the complexity of research ethics in her own field. Imagine, she said, that a psychologist were to ask research subjects a question like, “Do your parents like the color red?” Asking this may seem trivial and harmless, but it is nonetheless an ethical trade-off with associated risks (however small) that psychological researchers are ethically obliged to confront. What harm might that question cause if a research subject suffers from erythrophobia, or has a sick parent — or saw their parents stabbed to death?
  • When skeptics undertake scientific, historical, or journalistic research, we should (I argue) consider ourselves bound by some sort of research ethics. For now, we’ll ignore the deeper, detailed question of what exactly that looks like in practical terms (when can skeptics go undercover or lie to get information? how much research does due diligence require? and so on). I’d ask only that we agree on the principle that skeptical research is not an ethical free-for-all.
  • when skeptics communicate with the public, we take on further ethical responsibilities — as do doctors, journalists, and teachers. We all accept that doctors are obliged to follow some sort of ethical code, not only of due diligence and standard of care, but also in their confidentiality, manner, and the factual information they disclose to patients. A sentence that communicates a diagnosis, prescription, or piece of medical advice (“you have cancer” or “undertake this treatment”) is not a contextless statement, but a weighty, risky, ethically serious undertaking that affects people’s lives. It matters what doctors say, and it matters how they say it.
  • Grassroots Ethics: It happens that skepticism is my professional field. It’s natural that I should feel bound by the central concerns of that field. How can we gain reliable knowledge about weird things? How can we communicate that knowledge effectively? And, how can we pursue that practice ethically?
  • At the same time, most active skeptics are not professionals. To what extent should grassroots skeptics feel obligated to consider the ethics of skeptical activism? Consider my own status as a medical amateur. I almost need super-caps-lock to explain how much I am not a doctor. My medical training began and ended with a couple First Aid courses (and those way back in the day). But during those short courses, the instructors drummed into us the ethical considerations of our minimal training. When are we obligated to perform first aid? When are we ethically barred from giving aid? What if the injured party is unconscious or delirious? What if we accidentally kill or injure someone in our effort to give aid? Should we risk exposure to blood-borne illnesses? And so on. In a medical context, ethics are determined less by professional status, and more by the harm we can cause or prevent by our actions.
  • police officers are barred from perjury, and journalists from libel — and so are the lay public. We expect schoolteachers not to discuss age-inappropriate topics with our young children, or to persuade our children to adopt their religion; when we babysit for a neighbor, we consider ourselves bound by similar rules. I would argue that grassroots skeptics take on an ethical burden as soon as they speak out on medical matters, legal matters, or other matters of fact, whether from platforms as large as network television, or as small as a dinner party. The size of that burden must depend somewhat on the scale of the risks: the number of people reached, the certainty expressed, the topics tackled.
  • tu-quoque argument.
  • How much time are skeptics going to waste, arguing in a circular firing squad about each other’s free speech? Like it or not, there will always be confrontational people. You aren’t going to get a group of people as varied as skeptics are, and make them all agree to “be nice”. It’s a pipe dream, and a waste of time.
  •  
    FURTHER THOUGHTS ON THE ETHICS OF SKEPTICISM
Weiye Loh

Epiphenom: Religion and suicide - a patchy global picture - 0 views

  • The main objective of this study is to understand the factors that contribute to suicide in different countries, and what can be done to reduce suicide rates. In each country, people who have attempted suicide are brought into the study and given a questionnaire to fill out. Another group of people, randomly chosen, are given the same questionnaire. That allows the team to compare religious affiliation, involvement in organised religion, and individual religiosity in suicide attempters and the general population. When they looked at the data, and adjusted them for a host of factors known to affect suicide risk (age, gender, marital status, employment, and education), a complex picture emerged. A toy version of this kind of adjusted comparison is sketched after this entry.
  • In Iran, religion was highly protective, whether religion was measured as the rate of mosque attendance or as whether the individual thought of themselves as a religious person. In Brazil, going to religious services and personal religiosity were both highly protective. Bizarrely, however, religious affiliation was not. That might be because being Protestant was linked to greater risk, and Catholicism to lower risk. Put the two together, and it may balance out. In Estonia, suicides were lower in those who were affiliated to a religion, and those who said they were religious. They were also a bit lower in those who attended religious services. In India, there wasn't much effect of religion at all - a bit lower in those who go to religious services at least occasionally. Vietnam was similar. Those who went to religious services yearly were less likely to have attempted suicide, but no other measure of religion had any effect. In Sri Lanka, going to religious services had no protective effect, but subjective religiosity did. In South Africa, those who go to Church were no less likely to attempt suicide. In fact, those who said they were religious were actually nearly three times more likely to attempt suicide, and those who were affiliated to a religion were an incredible six times more likely!
  • In Brazil, religious people are six times less likely to attempt suicide than the non-religious. In South Africa, they are three times more likely. How to explain these national differences?
  • ...5 more annotations...
  • Part of it might be differences in the predominant religion. The protective effect of religion seems to be higher in monotheistic countries, and it's particularly high in the most fervently monotheistic country, Iran. In India, Sri Lanka, and Vietnam, the protective effect is smaller or non-existent.
  • But that doesn't explain South Africa. South Africa is unusual in that it is a highly diverse country, fractured by ethnic, social and religious boundaries. The researchers think that this might be a factor: South Africa has been described as ‘‘The Rainbow Nation’’ because of its cultural diversity. There are a variety of ethnic groups and a greater variety of cultures within each of these groups. While cultural diversity is seen as a national asset, the interaction of cultures results in the blurring of cultural norms and boundaries at the individual, family and cultural group levels. Subsequently, there is a large diversity of religious denominations and this does not seem favorable in terms of providing protection against attempted suicide.
  • earlier studies have shown that religious homogeneity is linked to lower suicide rates, and they suggest that the reverse might well be happening in South Africa.
  • this also could explain why, in Brazil, Protestants have a higher suicide rate than the unaffiliated. That too could be linked to their status as a religious minority.
  • we've got a study showing the double-edged nature of religion. For those inside the group, it provides support and comfort. But once fractures appear, religion just seems to turn up the heat!
  •  
     Religion and suicide
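
The design described in the first annotation above (suicide attempters versus randomly chosen community controls, adjusted for age, gender, marital status, employment, and education) is the kind of comparison typically estimated as a logistic regression with adjusted odds ratios. The sketch below uses made-up data, hypothetical variable names, and Python's statsmodels library; it illustrates the method, not the study's actual data or model.

    # Toy sketch: adjusted odds ratios for religion measures on attempted suicide,
    # estimated on synthetic data. Odds ratios below 1 suggest a protective
    # association, above 1 an elevated risk, after the listed covariates.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "attends_services": rng.integers(0, 2, n),   # any regular attendance
        "religious_person": rng.integers(0, 2, n),   # subjective religiosity
        "age": rng.integers(18, 80, n),
        "female": rng.integers(0, 2, n),
        "married": rng.integers(0, 2, n),
        "employed": rng.integers(0, 2, n),
        "years_education": rng.integers(4, 18, n),
    })
    # Hypothetical data-generating process in which attendance is protective.
    logit_p = -1.5 - 0.7 * df["attends_services"]
    p_attempt = 1 / (1 + np.exp(-logit_p.to_numpy()))
    df["attempted"] = rng.binomial(1, p_attempt)

    model = smf.logit(
        "attempted ~ attends_services + religious_person"
        " + age + female + married + employed + years_education",
        data=df,
    ).fit(disp=False)
    print(np.exp(model.params))       # adjusted odds ratios
    print(np.exp(model.conf_int()))   # 95% confidence intervals

Fitting the same specification separately for each country is what would produce country-by-country odds ratios like those summarised in the annotations above.
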
Weiye Loh

Investments worth trillions at risk from climate change: study | Reuters - 0 views

  • (Reuters) - Climate change could put trillions of investment dollars at risk over the next 20 years, a global study released on Wednesday said, calling for pension funds and other investors to overhaul how they allocate funds.
Weiye Loh

TODAYonline | Commentary | Science, shaken, must take stock - 0 views

  • Japan's part-natural, part-human disaster is an extraordinary event. As well as dealing with the consequences of an earthquake and tsunami, rescuers are having to evacuate thousands of people from the danger zone around Fukushima. In addition, the country is blighted by blackouts from the shutting of 10 or more nuclear plants. It is a textbook case of how technology can increase our vulnerability through unintended side-effects.
  • Yet there had been early warnings from scientists. In 2006, Professor Katsuhiko Ishibashi resigned from a Japanese nuclear power advisory panel, saying the policy of building in earthquake zones could lead to catastrophe, and that design standards for proofing them against damage were too lax. Further back, the seminal study of accidents in complex technologies was Professor Charles Perrow's Normal Accidents, published in 1984.
  • Things can go wrong with design, equipment, procedures, operators, supplies and the environment. Occasionally two or more will have problems simultaneously; in a complex technology such as a nuclear plant, the potential for this is ever-present. A back-of-the-envelope illustration of how quickly coincident failures accumulate follows this entry.
  • ...9 more annotations...
  • in complex systems, "no matter how effective conventional safety devices are, there is a form of accident that is inevitable" - hence the term "normal accidents".
  • system accidents occur with many technologies: Take the example of a highway blow-out leading to a pile-up. This may have disastrous consequences for those involved but cannot be described as a disaster. The latter only happens when the technologies involved have the potential to affect many innocent bystanders. This "dread factor" is why the nuclear aspect of Japan's ordeal has come to dominate headlines, even though the tsunami has had much greater immediate impact on lives.
  • It is simply too early to say what precisely went wrong at Fukushima, and it has been surprising to see commentators speak with such speed and certainty. Most people accept that they will only ever have a rough understanding of the facts. But they instinctively ask if they can trust those in charge and wonder why governments support particular technologies so strongly.
  • Industry and governments need to be more straightforward with the public. The pretence of knowledge is deeply unscientific; a more humble approach where officials are frank about the unknowns would paradoxically engender greater trust.
  • Likewise, nuclear's opponents need to adopt a measured approach. We need a fuller democratic debate about the choices we are making. Catastrophic potential needs to be a central criterion in decisions about technology. Advice from experts is useful but the most significant questions are ethical in character.
  • If technologies can potentially have disastrous effects on large numbers of innocent bystanders, someone needs to represent their interests. We might expect this to be the role of governments, yet they have generally become advocates of nuclear power because it is a relatively low-carbon technology that reduces reliance on fossil fuels. Unfortunately, this commitment seems to have reduced their ability to be seen to act as honest brokers, something acutely felt at times like these, especially since there have been repeated scandals in Japan over the covering-up of information relating to accidents at reactors.
  • Post-Fukushima, governments in Germany, Switzerland and Austria already appear to be shifting their policies. Rational voices, such as Britain's chief scientific adviser John Beddington, are saying quite logically that we should not compare the events in Japan with the situation in Britain, which does not have the same earthquake risk. Unfortunately, such arguments are unlikely to prevail in the politics of risky technologies.
  • firms and investors involved in nuclear power have often failed to take regulatory and political risk into account; history shows that nuclear accidents can lead to tighter regulations, which in turn can increase nuclear costs. Further ahead, the proponents of hazardous technologies need to bear the full costs of their products, including insurance liabilities and the cost of independent monitoring of environmental and health effects. As it currently stands, taxpayers would pay for any future nuclear incident.
  • Critics of technology are often dubbed in policy circles as anti-science. Yet critical thinking is central to any rational decision-making process - it is less scientific to support a technology uncritically. Accidents happen with all technologies, and are regrettable but not disastrous so long as the technology does not have catastrophic potential; this raises significant questions about whether we want to adopt technologies that do have such potential.
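
The "normal accidents" argument in the annotations above is, at bottom, a claim about how fast the probability of coincident failures accumulates in a tightly coupled system. The sketch below uses invented failure rates purely for illustration and assumes independent failure domains, which is optimistic: common-cause events, such as an earthquake that also triggers a tsunami, make the real numbers worse.

    # Back-of-the-envelope sketch with hypothetical rates: even if each failure
    # domain rarely has a problem, the chance that two or more have problems in
    # the same period stops being negligible over a plant's operating life.

    def p_two_or_more(k, p):
        """Probability that 2+ of k independent domains fail in one period."""
        p_none = (1 - p) ** k
        p_exactly_one = k * p * (1 - p) ** (k - 1)
        return 1 - p_none - p_exactly_one

    def p_ever(k, p, periods):
        """Probability of at least one coincident (2+) failure over many periods."""
        return 1 - (1 - p_two_or_more(k, p)) ** periods

    # Hypothetical plant: six failure domains (design, equipment, procedures,
    # operators, supplies, environment), each with a 1-in-1,000 chance of a
    # problem in any given week, operated for roughly 40 years (~2,080 weeks).
    print(p_two_or_more(6, 1e-3))      # tiny in any single week
    print(p_ever(6, 1e-3, 2080))       # a few percent over the plant's lifetime

Multiplied across a fleet of plants, the point that some coincident failure is eventually expected becomes easier to see; none of this speaks to Fukushima's actual failure rates, which, as the commentary notes, are not yet known.
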