
New Media Ethics 2009 course / Group items tagged Probability


Weiye Loh

Odds Are, It's Wrong - Science News

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients who had the syndrome with a group of 650 (matched for sex and age) who didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
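A concrete illustration of the P value described above, as a minimal permutation-test sketch in Python. The fertilizer framing follows the article; the yield numbers and the permutation approach are assumptions for illustration only.

```python
# Minimal sketch (made-up data): estimate the probability of a yield
# difference at least as large as the observed one if fertilizer had
# no real effect (the "null" hypothesis).
import numpy as np

rng = np.random.default_rng(0)
fertilized = np.array([21.3, 22.1, 20.8, 23.0, 22.5])  # hypothetical yields
control = np.array([20.1, 19.8, 21.0, 20.5, 19.9])

observed = fertilized.mean() - control.mean()
pooled = np.concatenate([fertilized, control])

n_perm = 20_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)                 # random relabelling = "no effect" world
    diff = pooled[:5].mean() - pooled[5:].mean()
    if diff >= observed:
        count += 1

p_value = count / n_perm
print(f"one-sided P value ~ {p_value:.4f}")
# p_value < .05 earns the label "statistically significant", but as the
# article stresses, that alone cannot say whether the effect is real or
# the data are merely an improbable fluke.
```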
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What  eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics, but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretation?
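To make the "transposed conditional" concrete, here is a minimal numeric sketch of the barking-dog analogy above. The 5 percent bark rate comes from the excerpt; the prior and the bark rate when hungry are invented for illustration.

```python
# Toy numbers: knowing P(bark | well-fed) = 5% does not, by itself,
# tell you P(well-fed | bark) or P(hungry | bark).
p_hungry = 0.10                  # assumed prior: dog is hungry 10% of the time
p_bark_given_hungry = 0.80       # assumed
p_bark_given_wellfed = 0.05      # the "5 percent" rate from the analogy

p_wellfed = 1 - p_hungry
p_bark = (p_bark_given_hungry * p_hungry +
          p_bark_given_wellfed * p_wellfed)

# Bayes' rule: probability the dog is actually hungry, given a bark
p_hungry_given_bark = p_bark_given_hungry * p_hungry / p_bark
print(f"P(bark | well-fed) = {p_bark_given_wellfed:.2f}")
print(f"P(hungry | bark)   = {p_hungry_given_bark:.2f}")   # ~0.64, not 0.95
```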
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
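A small sketch of that distinction, using hypothetical numbers: with half a million patients per arm, a drug that cures only about two extra people per thousand still clears the p < .05 bar comfortably.

```python
# Hypothetical two-arm trial: tiny absolute benefit, huge sample,
# "statistically significant" result.
import math

n = 500_000                         # patients per arm (assumed)
cure_old, cure_new = 0.100, 0.102   # assumed cure rates: 0.2-point gap

p_pool = (cure_old + cure_new) / 2
se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
z = (cure_new - cure_old) / se
print(f"z statistic = {z:.2f}")     # about 3.3, i.e. p well below .05
print(f"extra cures per 1,000 treated = {(cure_new - cure_old) * 1000:.0f}")
```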
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
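A quick sketch of why multiplicity inflates false positives, assuming independent tests at the .05 level (real studies are messier, but the arithmetic makes the point).

```python
# With no real effects anywhere, the chance of at least one "significant"
# fluke grows rapidly with the number of tests.
alpha = 0.05
for m in (1, 5, 20, 100):
    p_any_fluke = 1 - (1 - alpha) ** m
    print(f"{m:>3} tests: P(at least one false positive) = {p_any_fluke:.2f}")
# 20 tests: ~0.64; 100 tests: ~0.99. Corrections such as Bonferroni
# (testing each hypothesis at alpha/m) or false-discovery-rate control,
# mentioned in the next excerpt, try to rein this in.
```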
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
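A toy simulation of the coin-flip point above; the trial count and the "lopsided" cutoff are arbitrary choices for illustration.

```python
# Randomization forbids nothing: a run of 10 heads has probability ~0.001,
# and across thousands of trials some allocations will be noticeably skewed.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_patients = 3000, 50          # assumed for illustration

print(f"P(10 heads in a row) = {0.5 ** 10:.4f}")

# Assign 50 patients per trial by coin flip; count trials where one arm
# ends up with 32 or more of the 50 patients.
treated = rng.integers(0, 2, size=(n_trials, n_patients)).sum(axis=1)
lopsided = int(np.sum((treated >= 32) | (treated <= 18)))
print(f"{lopsided} of {n_trials} simulated trials were noticeably unbalanced")
```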
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
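A minimal sketch of the diagnostic-test point, with invented prevalence, sensitivity, and specificity figures: the prior (how common the disease is) largely determines what a positive result means.

```python
# Hypothetical screening test applied to a rare disease.
prevalence = 0.01        # assumed: 1% of the population has the disease
sensitivity = 0.99       # assumed: P(test positive | disease)
specificity = 0.95       # assumed: P(test negative | no disease)

p_positive = (sensitivity * prevalence +
              (1 - specificity) * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {p_disease_given_positive:.2f}")  # ~0.17
# Despite a 99%-sensitive test, most positives are false because the
# disease is rare -- the Bayesian prior doing its work.
```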
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
Weiye Loh

7 Essential Skills You Didn't Learn in College | Magazine

shared by Weiye Loh on 15 Oct 10
  • Statistical Literacy. Why take this course? We are misled by numbers and by our misunderstanding of probability.
  • Our world is shaped by widespread statistical illiteracy. We fear things that probably won’t kill us (terrorist attacks) and ignore things that probably will (texting while driving). We buy lottery tickets. We fall prey to misleading gut instincts, which lead to biases like loss aversion—an inability to gauge risk against potential gain. The effects play out in the grocery store, the office, and the voting booth (not to mention the bedroom: People who are more risk-averse are less successful in love).
  • We are now 53 percent more likely than our parents to trust polls of dubious merit. (That figure is totally made up. See?) Where do all these numbers that we remember so easily and cite so readily come from? How are they calculated, and by whom? How do we misuse them to make them say what we want them to? We’ll explore all of these questions in a sequence on sourcing statistics.
  • probabilistic intuition. We’ll learn to judge what’s likely and unlikely—and what’s impossible to know. We’ll learn about distorting habits of mind like selection bias—and how to guard against them. We’ll gamble. We’ll read The Art of Probability for Scientists and Engineers by Richard Hamming, Expert Political Judgment by Philip Tetlock, and How to Cheat Your Friends at Poker by Penn Jillette and Mickey Lynn.
  • Post-State Diplomacy. Why take this course? As the world becomes ever more atomized, understanding the new leaders and constituencies becomes increasingly important.
  • tribal insurgents to multinational corporations, private charities to pirate gangs, religious movements to armies for hire, a range of organizations now compete with (and sometimes eclipse) the nation-states in which they reside. Without capitals or traditional constituencies, they can’t be persuaded or deterred by traditional tactics.
  • that doesn’t mean diplomacy is dead; quite the opposite. Negotiating with these parties requires the same skills as dealing with belligerent nations—understanding the shareholders and alliances they must answer to, the cultures that inform how they behave, and the religious, economic, and political interests they must address.
  • Power has always depended on who can provide justice, commerce, and stability.
  • Remix Culture. Why take this course? Modern artists don’t start with a blank page or empty canvas. They start with preexisting works. What you’ll learn: How to analyze—and create—artworks made out of other artworks.
  • philosophical roots of remix culture and study seminal works like Robert Rauschenberg’s Monogram and Jorge Luis Borges’ Pierre Menard, Author of Don Quixote. And we’ll examine modern-day exemplars from DJ Shadow’s Endtroducing to Auto-Tune the News.
  • Applied Cognition. Why take this course? You have to know the brain to train the brain. What you’ll learn: How the mind works and how you can make it work for you.
  • Writing for New Forms. Why take this course? You can write a cogent essay, but can you write it in 140 characters or less? What you’ll learn: How to adapt your message to multiple formats and audiences—human and machine.
Weiye Loh

Study: Airport Security Should Stop Racial Profiling | Smart Journalism. Real Solutions...

  • Plucking out of line most of the vaguely Middle Eastern-looking men at the airport for heightened screening is no more effective at catching terrorists than randomly sampling everyone. It may even be less effective. Press stumbled across this counterintuitive concept — sometimes the best way to find something is not to weight it by probability — in the unrelated context of computational biology. The parallels to airport security struck him when a friend mentioned he was constantly being pulled out of line at the airport.
  • Racial profiling, in other words, doesn’t work because it devotes heightened resources to innocent people — and then devotes those resources to them repeatedly even after they’ve been cleared as innocent the first time. The actual terrorists, meanwhile, may sneak through while Transportation Security Administration agents are focusing their limited attention on the wrong passengers.
  • Press tested the theory in a series of probability equations (the ambitious can check his math here and here).
  • Sampling based on profiling is mathematically no more effective than uniform random sampling. The optimal equation, rather, turns out to be something called “square-root sampling,” a compromise between the other two methods.
  • “Crudely,” Press writes of his findings in the journal Significance, if certain people are “nine times as likely to be the terrorist, we pull out only three times as many of them for special checks. Surprisingly, and bizarrely, this turns out to be the most efficient way of catching the terrorist.”
  • Square-root sampling, though, still represents a kind of profiling, and, Press adds, not one that could be realistically implemented at airports today. Square-root sampling only works if the profile probabilities are accurate in the first place — if we are able to say with mathematical certainty that some types of people are “nine times as likely to be the terrorist” compared to others. TSA agents in a crowded holiday terminal making snap judgments about facial hair would be far from this standard. “The nice thing about uniform sampling is there’s nothing to be inaccurate about, you don’t need any data, it never can be worse than you expect,” Press said. “As soon as you use profile probabilities, if the profile probabilities are just wrong, then the strong profiling just does worse than the random sampling.”
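A small sketch of the square-root rule quoted above. The "profile" weights are invented; the only point is the scaling Press describes, in which a ninefold risk ratio translates into threefold screening.

```python
# Compare uniform, proportional ("strong profiling"), and square-root
# sampling allocations for hypothetical profile weights.
import numpy as np

relative_risk = np.array([9.0, 1.0, 1.0, 1.0, 1.0])   # assumed weights

def allocation(weights):
    w = np.asarray(weights, dtype=float)
    return w / w.sum()

uniform      = allocation(np.ones_like(relative_risk))
proportional = allocation(relative_risk)
square_root  = allocation(np.sqrt(relative_risk))

for name, a in [("uniform", uniform), ("proportional", proportional),
                ("square-root", square_root)]:
    print(f"{name:>12}: {np.round(a, 3)}")
# Under square-root sampling the first traveller is checked 3x as often as
# each of the others (sqrt(9) = 3), matching the "nine times as likely ->
# three times as many checks" rule in the excerpt.
```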
Weiye Loh

The Inequality That Matters - Tyler Cowen - The American Interest Magazine

  • most of the worries about income inequality are bogus, but some are probably better grounded and even more serious than even many of their heralds realize.
  • In terms of immediate political stability, there is less to the income inequality issue than meets the eye. Most analyses of income inequality neglect two major points. First, the inequality of personal well-being is sharply down over the past hundred years and perhaps over the past twenty years as well. Bill Gates is much, much richer than I am, yet it is not obvious that he is much happier if, indeed, he is happier at all. I have access to penicillin, air travel, good cheap food, the Internet and virtually all of the technical innovations that Gates does. Like the vast majority of Americans, I have access to some important new pharmaceuticals, such as statins to protect against heart disease. To be sure, Gates receives the very best care from the world’s top doctors, but our health outcomes are in the same ballpark. I don’t have a private jet or take luxury vacations, and—I think it is fair to say—my house is much smaller than his. I can’t meet with the world’s elite on demand. Still, by broad historical standards, what I share with Bill Gates is far more significant than what I don’t share with him.
  • when average people read about or see income inequality, they don’t feel the moral outrage that radiates from the more passionate egalitarian quarters of society. Instead, they think their lives are pretty good and that they either earned through hard work or lucked into a healthy share of the American dream.
  • This is why, for example, large numbers of Americans oppose the idea of an estate tax even though the current form of the tax, slated to return in 2011, is very unlikely to affect them or their estates. In narrowly self-interested terms, that view may be irrational, but most Americans are unwilling to frame national issues in terms of rich versus poor. There’s a great deal of hostility toward various government bailouts, but the idea of “undeserving” recipients is the key factor in those feelings. Resentment against Wall Street gamesters hasn’t spilled over much into resentment against the wealthy more generally. The bailout for General Motors’ labor unions wasn’t so popular either—again, obviously not because of any bias against the wealthy but because a basic sense of fairness was violated. As of November 2010, congressional Democrats are of a mixed mind as to whether the Bush tax cuts should expire for those whose annual income exceeds $250,000; that is in large part because their constituents bear no animus toward rich people, only toward undeservedly rich people.
  • envy is usually local. At least in the United States, most economic resentment is not directed toward billionaires or high-roller financiers—not even corrupt ones. It’s directed at the guy down the hall who got a bigger raise. It’s directed at the husband of your wife’s sister, because the brand of beer he stocks costs $3 a case more than yours, and so on. That’s another reason why a lot of people aren’t so bothered by income or wealth inequality at the macro level. Most of us don’t compare ourselves to billionaires. Gore Vidal put it honestly: “Whenever a friend succeeds, a little something in me dies.”
  • Occasionally the cynic in me wonders why so many relatively well-off intellectuals lead the egalitarian charge against the privileges of the wealthy. One group has the status currency of money and the other has the status currency of intellect, so might they be competing for overall social regard? The high status of the wealthy in America, or for that matter the high status of celebrities, seems to bother our intellectual class most. That class composes a very small group, however, so the upshot is that growing income inequality won’t necessarily have major political implications at the macro level.
  • All that said, income inequality does matter—for both politics and the economy.
  • The numbers are clear: Income inequality has been rising in the United States, especially at the very top. The data show a big difference between two quite separate issues, namely income growth at the very top of the distribution and greater inequality throughout the distribution. The first trend is much more pronounced than the second, although the two are often confused.
  • When it comes to the first trend, the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.1
  • At the same time, wage growth for the median earner has slowed since 1973. But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.
  • The broader change in income distribution, the one occurring beneath the very top earners, can be deconstructed in a manner that makes nearly all of it look harmless. For instance, there is usually greater inequality of income among both older people and the more highly educated, if only because there is more time and more room for fortunes to vary. Since America is becoming both older and more highly educated, our measured income inequality will increase pretty much by demographic fiat. Economist Thomas Lemieux at the University of British Columbia estimates that these demographic effects explain three-quarters of the observed rise in income inequality for men, and even more for women.2
  • Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years.3 Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all. Robert J. Gordon, an economist from Northwestern University who is hardly known as a right-wing apologist, wrote in a recent paper that “there was no increase of inequality after 1993 in the bottom 99 percent of the population”, and that whatever overall change there was “can be entirely explained by the behavior of income in the top 1 percent.”4
  • And so we come again to the gains of the top earners, clearly the big story told by the data. It’s worth noting that over this same period of time, inequality of work hours increased too. The top earners worked a lot more and most other Americans worked somewhat less. That’s another reason why high earners don’t occasion more resentment: Many people understand how hard they have to work to get there. It also seems that most of the income gains of the top earners were related to performance pay—bonuses, in other words—and not wildly out-of-whack yearly salaries.5
  • It is also the case that any society with a lot of “threshold earners” is likely to experience growing income inequality. A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.
  • The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.
  • Why is the top 1 percent doing so well?
  • Steven N. Kaplan and Joshua Rauh have recently provided a detailed estimation of particular American incomes.6 Their data do not comprise the entire U.S. population, but from partial financial records they find a very strong role for the financial sector in driving the trend toward income concentration at the top. For instance, for 2004, nonfinancial executives of publicly traded companies accounted for less than 6 percent of the top 0.01 percent income bracket. In that same year, the top 25 hedge fund managers combined appear to have earned more than all of the CEOs from the entire S&P 500. The number of Wall Street investors earning more than $100 million a year was nine times higher than the public company executives earning that amount. The authors also relate that they shared their estimates with a former U.S. Secretary of the Treasury, one who also has a Wall Street background. He thought their estimates of earnings in the financial sector were, if anything, understated.
  • Many of the other high earners are also connected to finance. After Wall Street, Kaplan and Rauh identify the legal sector as a contributor to the growing spread in earnings at the top. Yet many high-earning lawyers are doing financial deals, so a lot of the income generated through legal activity is rooted in finance. Other lawyers are defending corporations against lawsuits, filing lawsuits or helping corporations deal with complex regulations. The returns to these activities are an artifact of the growing complexity of the law and government growth rather than a tale of markets per se. Finance aside, there isn’t much of a story of market failure here, even if we don’t find the results aesthetically appealing.
  • When it comes to professional athletes and celebrities, there isn’t much of a mystery as to what has happened. Tiger Woods earns much more, even adjusting for inflation, than Arnold Palmer ever did. J.K. Rowling, the first billionaire author, earns much more than did Charles Dickens. These high incomes come, on balance, from the greater reach of modern communications and marketing. Kids all over the world read about Harry Potter. There is more purchasing power to spend on children’s books and, indeed, on culture and celebrities more generally. For high-earning celebrities, hardly anyone finds these earnings so morally objectionable as to suggest that they be politically actionable. Cultural critics can complain that good schoolteachers earn too little, and they may be right, but that does not make celebrities into political targets. They’re too popular. It’s also pretty clear that most of them work hard to earn their money, by persuading fans to buy or otherwise support their product. Most of these individuals do not come from elite or extremely privileged backgrounds, either. They worked their way to the top, and even if Rowling is not an author for the ages, her books tapped into the spirit of their time in a special way. We may or may not wish to tax the wealthy, including wealthy celebrities, at higher rates, but there is no need to “cure” the structural causes of higher celebrity incomes.
  • to be sure, the high incomes in finance should give us all pause.
  • The first factor driving high returns is sometimes called by practitioners “going short on volatility.” Sometimes it is called “negative skewness.” In plain English, this means that some investors opt for a strategy of betting against big, unexpected moves in market prices. Most of the time investors will do well by this strategy, since big, unexpected moves are outliers by definition. Traders will earn above-average returns in good times. In bad times they won’t suffer fully when catastrophic returns come in, as sooner or later is bound to happen, because the downside of these bets is partly socialized onto the Treasury, the Federal Reserve and, of course, the taxpayers and the unemployed.
  • if you bet against unlikely events, most of the time you will look smart and have the money to validate the appearance. Periodically, however, you will look very bad. Does that kind of pattern sound familiar? It happens in finance, too. Betting against a big decline in home prices is analogous to betting against the Wizards (a long-shot sports bet Cowen describes earlier in the essay). Every now and then such a bet will blow up in your face, though in most years that trading activity will generate above-average profits and big bonuses for the traders and CEOs.
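A toy simulation of the payoff profile being described, with all parameters invented: small, steady gains in most years and an occasional large loss that the good years conceal until it arrives.

```python
# "Going short on volatility" as a caricature: above-average returns most
# years, a rare blowup that erases them, bonuses paid annually regardless.
import numpy as np

rng = np.random.default_rng(42)
years = 20
blowup_prob = 0.05          # assumed probability of the rare bad year
good_year_return = 0.12     # assumed: looks like skill in normal times
blowup_return = -0.80       # assumed tail loss

wealth = 1.0
for year in range(1, years + 1):
    r = blowup_return if rng.random() < blowup_prob else good_year_return
    wealth *= 1 + r
    print(f"year {year:>2}: return {r:+.0%}, cumulative wealth {wealth:.2f}")
# In most simulated paths the trader looks brilliant for years before a
# single bad year wipes out the accumulated gains -- and much of that
# downside, the essay argues, is borne by others.
```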
  • To this mix we can add the fact that many money managers are investing other people’s money. If you plan to stay with an investment bank for ten years or less, most of the people playing this investing strategy will make out very well most of the time. Everyone’s time horizon is a bit limited and you will bring in some nice years of extra returns and reap nice bonuses. And let’s say the whole thing does blow up in your face? What’s the worst that can happen? Your bosses fire you, but you will still have millions in the bank and that MBA from Harvard or Wharton. For the people actually investing the money, there’s barely any downside risk other than having to quit the party early. Furthermore, if everyone else made more or less the same mistake (very surprising major events, such as a busted housing market, affect virtually everybody), you’re hardly disgraced. You might even get rehired at another investment bank, or maybe a hedge fund, within months or even weeks.
  • Moreover, smart shareholders will acquiesce to or even encourage these gambles. They gain on the upside, while the downside, past the point of bankruptcy, is borne by the firm’s creditors. And will the bondholders object? Well, they might have a difficult time monitoring the internal trading operations of financial institutions. Of course, the firm’s trading book cannot be open to competitors, and that means it cannot be open to bondholders (or even most shareholders) either. So what, exactly, will they have in hand to object to?
  • Perhaps more important, government bailouts minimize the damage to creditors on the downside. Neither the Treasury nor the Fed allowed creditors to take any losses from the collapse of the major banks during the financial crisis. The U.S. government guaranteed these loans, either explicitly or implicitly. Guaranteeing the debt also encourages equity holders to take more risk. While current bailouts have not in general maintained equity values, and while share prices have often fallen to near zero following the bust of a major bank, the bailouts still give the bank a lifeline. Instead of the bank being destroyed, sometimes those equity prices do climb back out of the hole. This is true of the major surviving banks in the United States, and even AIG is paying back its bailout. For better or worse, we’re handing out free options on recovery, and that encourages banks to take more risk in the first place.
  • there is an unholy dynamic of short-term trading and investing, backed up by bailouts and risk reduction from the government and the Federal Reserve. This is not good. “Going short on volatility” is a dangerous strategy from a social point of view. For one thing, in so-called normal times, the finance sector attracts a big chunk of the smartest, most hard-working and most talented individuals. That represents a huge human capital opportunity cost to society and the economy at large. But more immediate and more important, it means that banks take far too many risks and go way out on a limb, often in correlated fashion. When their bets turn sour, as they did in 2007–09, everyone else pays the price.
  • And it’s not just the taxpayer cost of the bailout that stings. The financial disruption ends up throwing a lot of people out of work down the economic food chain, often for long periods. Furthermore, the Federal Reserve System has recapitalized major U.S. banks by paying interest on bank reserves and by keeping an unusually high interest rate spread, which allows banks to borrow short from Treasury at near-zero rates and invest in other higher-yielding assets and earn back lots of money rather quickly. In essence, we’re allowing banks to earn their way back by arbitraging interest rate spreads against the U.S. government. This is rarely called a bailout and it doesn’t count as a normal budget item, but it is a bailout nonetheless. This type of implicit bailout brings high social costs by slowing down economic recovery (the interest rate spreads require tight monetary policy) and by redistributing income from the Treasury to the major banks.
  • the “going short on volatility” strategy increases income inequality. In normal years the financial sector is flush with cash and high earnings. In implosion years a lot of the losses are borne by other sectors of society. In other words, financial crisis begets income inequality. Despite being conceptually distinct phenomena, the political economy of income inequality is, in part, the political economy of finance. Simon Johnson tabulates the numbers nicely: From 1973 to 1985, the financial sector never earned more than 16 percent of domestic corporate profits. In 1986, that figure reached 19 percent. In the 1990s, it oscillated between 21 percent and 30 percent, higher than it had ever been in the postwar period. This decade, it reached 41 percent. Pay rose just as dramatically. From 1948 to 1982, average compensation in the financial sector ranged between 99 percent and 108 percent of the average for all domestic private industries. From 1983, it shot upward, reaching 181 percent in 2007.7
  • There’s a second reason why the financial sector abets income inequality: the “moving first” issue. Let’s say that some news hits the market and that traders interpret this news at different speeds. One trader figures out what the news means in a second, while the other traders require five seconds. Still other traders require an entire day or maybe even a month to figure things out. The early traders earn the extra money. They buy the proper assets early, at the lower prices, and reap most of the gains when the other, later traders pile on. Similarly, if you buy into a successful tech company in the early stages, you are “moving first” in a very effective manner, and you will capture most of the gains if that company hits it big.
  • The moving-first phenomenon sums to a “winner-take-all” market. Only some relatively small number of traders, sometimes just one trader, can be first. Those who are first will make far more than those who are fourth or fifth. This difference will persist, even if those who are fourth come pretty close to competing with those who are first. In this context, first is first and it doesn’t matter much whether those who come in fourth pile on a month, a minute or a fraction of a second later. Those who bought (or sold, as the case may be) first have captured and locked in most of the available gains. Since gains are concentrated among the early winners, and the closeness of the runner-ups doesn’t so much matter for income distribution, asset-market trading thus encourages the ongoing concentration of wealth. Many investors make lots of mistakes and lose their money, but each year brings a new bunch of projects that can turn the early investors and traders into very wealthy individuals.
  • These two features of the problem—“going short on volatility” and “getting there first”—are related. Let’s say that Goldman Sachs regularly secures a lot of the best and quickest trades, whether because of its quality analysis, inside connections or high-frequency trading apparatus (it has all three). It builds up a treasure chest of profits and continues to hire very sharp traders and to receive valuable information. Those profits allow it to make “short on volatility” bets faster than anyone else, because if it messes up, it still has a large enough buffer to pad losses. This increases the odds that Goldman will repeatedly pull in spectacular profits.
  • Still, every now and then Goldman will go bust, or would go bust if not for government bailouts. But the odds are in any given year that it won’t because of the advantages it and other big banks have. It’s as if the major banks have tapped a hole in the social till and they are drinking from it with a straw. In any given year, this practice may seem tolerable—didn’t the bank earn the money fair and square by a series of fairly normal looking trades? Yet over time this situation will corrode productivity, because what the banks do bears almost no resemblance to a process of getting capital into the hands of those who can make most efficient use of it. And it leads to periodic financial explosions. That, in short, is the real problem of income inequality we face today. It’s what causes the inequality at the very top of the earning pyramid that has dangerous implications for the economy as a whole.
  • What about controlling bank risk-taking directly with tight government oversight? That is not practical. There are more ways for banks to take risks than even knowledgeable regulators can possibly control; it just isn’t that easy to oversee a balance sheet with hundreds of billions of dollars on it, especially when short-term positions are wound down before quarterly inspections. It’s also not clear how well regulators can identify risky assets. Some of the worst excesses of the financial crisis were grounded in mortgage-backed assets—a very traditional function of banks—not exotic derivatives trading strategies. Virtually any asset position can be used to bet long odds, one way or another. It is naive to think that underpaid, undertrained regulators can keep up with financial traders, especially when the latter stand to earn billions by circumventing the intent of regulations while remaining within the letter of the law.
  • For the time being, we need to accept the possibility that the financial sector has learned how to game the American (and UK-based) system of state capitalism. It’s no longer obvious that the system is stable at a macro level, and extreme income inequality at the top has been one result of that imbalance. Income inequality is a symptom, however, rather than a cause of the real problem. The root cause of income inequality, viewed in the most general terms, is extreme human ingenuity, albeit of a perverse kind. That is why it is so hard to control.
  • Another root cause of growing inequality is that the modern world, by so limiting our downside risk, makes extreme risk-taking all too comfortable and easy. More risk-taking will mean more inequality, sooner or later, because winners always emerge from risk-taking. Yet bankers who take bad risks (provided those risks are legal) simply do not end up with bad outcomes in any absolute sense. They still have millions in the bank, lots of human capital and plenty of social status. We’re not going to bring back torture, trial by ordeal or debtors’ prisons, nor should we. Yet the threat of impoverishment and disgrace no longer looms the way it once did, so we no longer can constrain excess financial risk-taking. It’s too soft and cushy a world.
  • Why don’t we simply eliminate the safety net for clueless or unlucky risk-takers so that losses equal gains overall? That’s a good idea in principle, but it is hard to put into practice. Once a financial crisis arrives, politicians will seek to limit the damage, and that means they will bail out major financial institutions. Had we not passed TARP and related policies, the United States probably would have faced unemployment rates of 25 percent or higher, as in the Great Depression. The political consequences would not have been pretty. Bank bailouts may sound quite interventionist, and indeed they are, but in relative terms they probably were the most libertarian policy we had on tap. It meant big one-time expenses, but, for the most part, it kept government out of the real economy (the General Motors bailout aside).
  • We probably don’t have any solution to the hazards created by our financial sector, not because plutocrats are preventing our political system from adopting appropriate remedies, but because we don’t know what those remedies are. Yet neither is another crisis immediately upon us. The underlying dynamic favors excess risk-taking, but banks at the current moment fear the scrutiny of regulators and the public and so are playing it fairly safe. They are sitting on money rather than lending it out. The biggest risk today is how few parties will take risks, and, in part, the caution of banks is driving our current protracted economic slowdown. According to this view, the long run will bring another financial crisis once moods pick up and external scrutiny weakens, but that day of reckoning is still some ways off.
  • Is the overall picture a shame? Yes. Is it distorting resource distribution and productivity in the meantime? Yes. Will it again bring our economy to its knees? Probably. Maybe that’s simply the price of modern society. Income inequality will likely continue to rise and we will search in vain for the appropriate political remedies for our underlying problems.
yongernn teo

Ethics and Values Case Study - Mercy Killing, Euthanasia

  •  
    THE ETHICAL PROBLEM: Allowing someone to die, mercy death, and mercy killing, Euthanasia: A 24-year-old man named Robert who has a wife and child is paralyzed from the neck down in a motorcycle accident. He has always been very active and hates the idea of being paralyzed. He also is in a great deal of pain, and he has asked his doctors and other members of his family to "put him out of his misery." After several days of such pleading, his brother comes into Robert's hospital ward and asks him if he is sure he still wants to be put out of his misery. Robert says yes and pleads with his brother to kill him. The brother kisses and blesses Robert, then takes out a gun and shoots him, killing him instantly. The brother later is tried for murder and acquitted by reason of temporary insanity. Was what Robert's brother did moral? Do you think he should have been brought to trial at all? Do you think he should have been acquitted? Would you do the same for a loved one if you were asked? THE DISCUSSION: In my opinion, the most dubious part of the case is Robert pleading with his brother, asking his brother to kill him. This could be his brother's own account of the incident and may or may not have been an actual plea by Robert. 1) With the assumption that Robert indeed pleaded with his brother to kill him, an ethical analysis as such could be derived: Robert's brother was only respecting Robert's choice and killed him because he wanted to relieve him of his misery. This could be argued to be ethical using a teleological framework, where the focus is on the end result and the consequences that the action entails. Here, although the act of killing per se may be wrong and illegal, Robert was relieved of his pain and suffering. 2) With the assumption that Robert did not plead with his brother to kill him and that it was his brother's own decision to relieve Robert of all suffering: In this case, the b
  •  
    I find euthanasia to be a very interesting ethical dilemma. Even I myself am caught in the middle. Euthanasia has been termed 'mercy killing' and even 'happy death'. Others may simply term it as being 'evil'. Is it right to end someone's life even when he or she pleads with you to do so? In the first place, is it even right to commit suicide? Once someone pulls off the main support that's keeping the person alive, such as the feeding tube, there is no turning back. Hmm.. Come to think of it, technology is kind of unethical by being made available, for in the past, when someone was dying, they had the right to die naturally. Now, scientific technology is 'forcing' us to stay alive and cling on to a life that may be deemed worthless if we were standing outside our bodies looking at our comatose selves. Then again, this may just be MY personal standpoint. But I have to argue, who gave technology the right to make me a worthless vegetable! (and here I am, attaching a value/judgement onto an immobile human being..) Hence, being incompetent in making decisions for my unconscious self (or perhaps even brain dead), who should take responsibility for my life, for my existence? And on what basis are they allowed to help me out? Taking the other side of the argument, against euthanasia, we can say that the act of ending someone else's life is the act of destroying societal respect for life. Based on the utilitarian perspective, we are not thinking of the overall beneficence for society and disregarding the moral considerations encompassed within the state's interest to preserve the sanctity of all life. It has been said that life in itself takes priority over all other values. We should let the person live so as to give him/her a chance to wake up or hope for recovery (think comatose patients). But then again we can also argue that life is not the top of the hierarchy! A life without rights is as if not living a life at all? By removing the patient
  •  
    As a human being, you supposedly have a right to live, whether you are mobile or immobile. However, I think that, in the case of euthanasia, you 'give up' your rights when you "show" that you are no longer able to serve the pre-requisites of having the right. For example, if "living" rights are equated to being able to talk, walk, etc., then obviously the opposite means you are no longer able to perform up to the expectations of that right. Then again, it is very subjective as to who gets to set those criteria!
  •  
    Hmm, interesting.. However, a question I have is: who decides, and when, that this "right" can be "given up"? Say I am a victim in a car accident, I have lost the ability to breathe and walk, and I may need months to recover. I am unconscious and the doctor is unable to determine when I will regain consciousness. When should my parents decide that I can no longer have any living rights? And taking Elaine's point into consideration, is committing suicide even 'right'? If it is legally not right, then when I ask someone to take my life and write a letter saying that it was because I wanted to die, does that make it committing suicide, only at the hands of others?
  •  
    Similarly, I question the 'rights' that you have to 'give up' when you no longer 'serve the pre-requisites of having the right'. If living rights mean being able to talk and walk, then where does that leave infants? Where does it leave people who may be handicapped? Have they lost their rights to living?
Weiye Loh

The Black Swan of Cairo | Foreign Affairs

  • It is both misguided and dangerous to push unobserved risks further into the statistical tails of the probability distribution of outcomes and allow these high-impact, low-probability "tail risks" to disappear from policymakers' fields of observation.
  • Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.
  • Seeking to restrict variability seems to be good policy (who does not prefer stability to chaos?), so it is with very good intentions that policymakers unwittingly increase the risk of major blowups. And it is the same misperception of the properties of natural systems that led to both the economic crisis of 2007-8 and the current turmoil in the Arab world. The policy implications are identical: to make systems robust, all risks must be visible and out in the open -- fluctuat nec mergitur (it fluctuates but does not sink) goes the Latin saying.
  • Just as a robust economic system is one that encourages early failures (the concepts of "fail small" and "fail fast"), the U.S. government should stop supporting dictatorial regimes for the sake of pseudostability and instead allow political noise to rise to the surface. Making an economy robust in the face of business swings requires allowing risk to be visible; the same is true in politics.
  • Both the recent financial crisis and the current political crisis in the Middle East are grounded in the rise of complexity, interdependence, and unpredictability. Policymakers in the United Kingdom and the United States have long promoted policies aimed at eliminating fluctuation -- no more booms and busts in the economy, no more "Iranian surprises" in foreign policy. These policies have almost always produced undesirable outcomes. For example, the U.S. banking system became very fragile following a succession of progressively larger bailouts and government interventions, particularly after the 1983 rescue of major banks (ironically, by the same Reagan administration that trumpeted free markets). In the United States, promoting these bad policies has been a bipartisan effort throughout. Republicans have been good at fragilizing large corporations through bailouts, and Democrats have been good at fragilizing the government. At the same time, the financial system as a whole exhibited little volatility; it kept getting weaker while providing policymakers with the illusion of stability, illustrated most notably when Ben Bernanke, who was then a member of the Board of Governors of the U.S. Federal Reserve, declared the era of "the great moderation" in 2004.
  • Washington stabilized the market with bailouts and by allowing certain companies to grow "too big to fail." Because policymakers believed it was better to do something than to do nothing, they felt obligated to heal the economy rather than wait and see if it healed on its own.
  • The foreign policy equivalent is to support the incumbent no matter what. And just as banks took wild risks thanks to Greenspan's implicit insurance policy, client governments such as Hosni Mubarak's in Egypt for years engaged in overt plunder thanks to similarly reliable U.S. support.
  • Those who seek to prevent volatility on the grounds that any and all bumps in the road must be avoided paradoxically increase the probability that a tail risk will cause a major explosion.
  • In the realm of economics, price controls are designed to constrain volatility on the grounds that stable prices are a good thing. But although these controls might work in some rare situations, the long-term effect of any such system is an eventual and extremely costly blowup whose cleanup costs can far exceed the benefits accrued. The risks of a dictatorship, no matter how seemingly stable, are no different, in the long run, from those of an artificially controlled price.
  • Such attempts to institutionally engineer the world come in two types: those that conform to the world as it is and those that attempt to reform the world. The nature of humans, quite reasonably, is to intervene in an effort to alter their world and the outcomes it produces. But government interventions are laden with unintended -- and unforeseen -- consequences, particularly in complex systems, so humans must work with nature by tolerating systems that absorb human imperfections rather than seek to change them.
  • What is needed is a system that can prevent the harm done to citizens by the dishonesty of business elites; the limited competence of forecasters, economists, and statisticians; and the imperfections of regulation, not one that aims to eliminate these flaws. Humans must try to resist the illusion of control: just as foreign policy should be intelligence-proof (it should minimize its reliance on the competence of information-gathering organizations and the predictions of "experts" in what are inherently unpredictable domains), the economy should be regulator-proof, given that some regulations simply make the system itself more fragile. Due to the complexity of markets, intricate regulations simply serve to generate fees for lawyers and profits for sophisticated derivatives traders who can build complicated financial products that skirt those regulations.
  • The life of a turkey before Thanksgiving is illustrative: the turkey is fed for 1,000 days and every day seems to confirm that the farmer cares for it -- until the last day, when confidence is maximal. The "turkey problem" occurs when a naive analysis of stability is derived from the absence of past variations. Likewise, confidence in stability was maximal at the onset of the financial crisis in 2007. (A numerical sketch of the turkey problem appears after this list.)
  • The turkey problem for humans is the result of mistaking one environment for another. Humans simultaneously inhabit two systems: the linear and the complex. The linear domain is characterized by its predictability and the low degree of interaction among its components, which allows the use of mathematical methods that make forecasts reliable. In complex systems, there is an absence of visible causal links between the elements, masking a high degree of interdependence and extremely low predictability. Nonlinear elements are also present, such as those commonly known, and generally misunderstood, as "tipping points." Imagine someone who keeps adding sand to a sand pile without any visible consequence, until suddenly the entire pile crumbles. It would be foolish to blame the collapse on the last grain of sand rather than the structure of the pile, but that is what people do consistently, and that is the policy error.
  • Engineering, architecture, astronomy, most of physics, and much of common science are linear domains. The complex domain is the realm of the social world, epidemics, and economics. Crucially, the linear domain delivers mild variations without large shocks, whereas the complex domain delivers massive jumps and gaps. Complex systems are misunderstood, mostly because humans' sophistication, obtained over the history of human knowledge in the linear domain, does not transfer properly to the complex domain. Humans can predict a solar eclipse and the trajectory of a space vessel, but not the stock market or Egyptian political events. All man-made complex systems have commonalities and even universalities. Sadly, deceptive calm (followed by Black Swan surprises) seems to be one of those properties.
  • The system is responsible, not the components. But after the financial crisis of 2007-8, many people thought that predicting the subprime meltdown would have helped. It would not have, since it was a symptom of the crisis, not its underlying cause. Likewise, Obama's blaming "bad intelligence" for his administration's failure to predict the crisis in Egypt is symptomatic of both the misunderstanding of complex systems and the bad policies involved.
  • Obama's mistake illustrates the illusion of local causal chains -- that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect. The final episode of the upheaval in Egypt was unpredictable for all observers, especially those involved. As such, blaming the CIA is as foolish as funding it to forecast such events. Governments are wasting billions of dollars on attempting to predict events that are produced by interdependent systems and are therefore not statistically understandable at the individual level.
  • Political and economic "tail events" are unpredictable, and their probabilities are not scientifically measurable. No matter how many dollars are spent on research, predicting revolutions is not the same as counting cards; humans will never be able to turn politics into the tractable randomness of blackjack.
  • Most explanations being offered for the current turmoil in the Middle East follow the "catalysts as causes" confusion. The riots in Tunisia and Egypt were initially attributed to rising commodity prices, not to stifling and unpopular dictatorships. But Bahrain and Libya are countries with high GDPs that can afford to import grain and other commodities. Again, the focus is wrong even if the logic is comforting. It is the system and its fragility, not events, that must be studied -- what physicists call "percolation theory," in which the properties of the terrain are studied rather than those of a single element of the terrain.
  • When dealing with a system that is inherently unpredictable, what should be done? Differentiating between two types of countries is useful. In the first, changes in government do not lead to meaningful differences in political outcomes (since political tensions are out in the open). In the second type, changes in government lead to both drastic and deeply unpredictable changes.
  • Humans fear randomness -- a healthy ancestral trait inherited from a different environment. Whereas in the past, which was a more linear world, this trait enhanced fitness and increased chances of survival, it can have the reverse effect in today's complex world, making volatility take the shape of nasty Black Swans hiding behind deceptive periods of "great moderation." This is not to say that any and all volatility should be embraced. Insurance should not be banned, for example.
  • But alongside the "catalysts as causes" confusion sit two mental biases: the illusion of control and the action bias (the illusion that doing something is always better than doing nothing). This leads to the desire to impose man-made solutions
  • Variation is information. When there is no variation, there is no information. This explains the CIA's failure to predict the Egyptian revolution and, a generation before, the Iranian Revolution -- in both cases, the revolutionaries themselves did not have a clear idea of their relative strength with respect to the regime they were hoping to topple. So rather than subsidize and praise as a "force for stability" every tin-pot potentate on the planet, the U.S. government should encourage countries to let information flow upward through the transparency that comes with political agitation. It should not fear fluctuations per se, since allowing them to be in the open, as Italy and Lebanon both show in different ways, creates the stability of small jumps.
  • As Seneca wrote in De clementia, "Repeated punishment, while it crushes the hatred of a few, stirs the hatred of all . . . just as trees that have been trimmed throw out again countless branches." The imposition of peace through repeated punishment lies at the heart of many seemingly intractable conflicts, including the Israeli-Palestinian stalemate. Furthermore, dealing with seemingly reliable high-level officials rather than the people themselves prevents any peace treaty signed from being robust. The Romans were wise enough to know that only a free man under Roman law could be trusted to engage in a contract; by extension, only a free people can be trusted to abide by a treaty. Treaties that are negotiated with the consent of a broad swath of the populations on both sides of a conflict tend to survive. Just as no central bank is powerful enough to dictate stability, no superpower can be powerful enough to guarantee solid peace alone.
  • As Jean-Jacques Rousseau put it, "A little bit of agitation gives motivation to the soul, and what really makes the species prosper is not peace so much as freedom." With freedom comes some unpredictable fluctuation. This is one of life's packages: there is no freedom without noise -- and no stability without volatility.
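A minimal numerical sketch of the turkey problem described in the excerpts above: the 1,000-day horizon comes from the article, but the naive estimator (Laplace's rule of succession) and every printed number are my own illustrative choices, not anything computed in the piece.

```python
# Toy "turkey problem": a naive observer estimates the daily risk of a
# catastrophe purely from the absence of past catastrophes, so the estimate
# keeps shrinking right up to the day the hidden risk materialises.
# (Illustrative only; the estimator and numbers are not from the article.)

def naive_risk_estimate(bad_days_seen, days_observed):
    # Laplace's rule of succession: (bad days + 1) / (days observed + 2)
    return (bad_days_seen + 1) / (days_observed + 2)

for day in (10, 100, 500, 1000):
    est = naive_risk_estimate(0, day)
    print(f"after {day:4d} calm days, naive daily risk estimate = {est:.4f}")

# The estimate is smallest, and confidence highest, on day 1,000, the day
# before Thanksgiving, even though the true risk never changed.
```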
Weiye Loh

CancerGuide: The Median Isn't the Message - 0 views

  • Statistics recognizes different measures of an "average," or central tendency. The mean is our usual concept of an overall average - add up the items and divide them by the number of sharers
  • The median, a different measure of central tendency, is the half-way point.
  • A politician in power might say with pride, "The mean income of our citizens is $15,000 per year." The leader of the opposition might retort, "But half our citizens make less than $10,000 per year." Both are right, but neither cites a statistic with impassive objectivity. The first invokes a mean, the second a median. (Means are higher than medians in such cases because one millionaire may outweigh hundreds of poor people in setting a mean; but he can balance only one mendicant in calculating a median).
  • ...7 more annotations...
  • The larger issue that creates a common distrust or contempt for statistics is more troubling. Many people make an unfortunate and invalid separation between heart and mind, or feeling and intellect. In some contemporary traditions, abetted by attitudes stereotypically centered on Southern California, feelings are exalted as more "real" and the only proper basis for action - if it feels good, do it - while intellect gets short shrift as a hang-up of outmoded elitism. Statistics, in this absurd dichotomy, often become the symbol of the enemy. As Hilaire Belloc wrote, "Statistics are the triumph of the quantitative method, and the quantitative method is the victory of sterility and death."
  • This is a personal story of statistics, properly interpreted, as profoundly nurturant and life-giving. It declares holy war on the downgrading of intellect by telling a small story about the utility of dry, academic knowledge about science. Heart and head are focal points of one body, one personality.
  • We still carry the historical baggage of a Platonic heritage that seeks sharp essences and definite boundaries. (Thus we hope to find an unambiguous "beginning of life" or "definition of death," although nature often comes to us as irreducible continua.) This Platonic heritage, with its emphasis in clear distinctions and separated immutable entities, leads us to view statistical measures of central tendency wrongly, indeed opposite to the appropriate interpretation in our actual world of variation, shadings, and continua. In short, we view means and medians as the hard "realities," and the variation that permits their calculation as a set of transient and imperfect measurements of this hidden essence. If the median is the reality and variation around the median just a device for its calculation, the "I will probably be dead in eight months" may pass as a reasonable interpretation.
  • But all evolutionary biologists know that variation itself is nature's only irreducible essence. Variation is the hard reality, not a set of imperfect measures for a central tendency. Means and medians are the abstractions. Therefore, I looked at the mesothelioma statistics quite differently - and not only because I am an optimist who tends to see the doughnut instead of the hole, but primarily because I know that variation itself is the reality. I had to place myself amidst the variation. When I learned about the eight-month median, my first intellectual reaction was: fine, half the people will live longer; now what are my chances of being in that half. I read for a furious and nervous hour and concluded, with relief: damned good. I possessed every one of the characteristics conferring a probability of longer life: I was young; my disease had been recognized in a relatively early stage; I would receive the nation's best medical treatment; I had the world to live for; I knew how to read the data properly and not despair.
  • Another technical point then added even more solace. I immediately recognized that the distribution of variation about the eight-month median would almost surely be what statisticians call "right skewed." (In a symmetrical distribution, the profile of variation to the left of the central tendency is a mirror image of variation to the right. In skewed distributions, variation to one side of the central tendency is more stretched out - left skewed if extended to the left, right skewed if stretched out to the right.) The distribution of variation had to be right skewed, I reasoned. After all, the left of the distribution contains an irrevocable lower boundary of zero (since mesothelioma can only be identified at death or before). Thus, there isn't much room for the distribution's lower (or left) half - it must be scrunched up between zero and eight months. But the upper (or right) half can extend out for years and years, even if nobody ultimately survives. The distribution must be right skewed, and I needed to know how long the extended tail ran - for I had already concluded that my favorable profile made me a good candidate for that part of the curve. (A small sketch of such a right-skewed distribution appears after this list.)
  • The distribution was indeed, strongly right skewed, with a long tail (however small) that extended for several years above the eight month median. I saw no reason why I shouldn't be in that small tail, and I breathed a very long sigh of relief. My technical knowledge had helped. I had read the graph correctly. I had asked the right question and found the answers. I had obtained, in all probability, the most precious of all possible gifts in the circumstances - substantial time.
  • One final point about statistical distributions. They apply only to a prescribed set of circumstances - in this case to survival with mesothelioma under conventional modes of treatment. If circumstances change, the distribution may alter. I was placed on an experimental protocol of treatment and, if fortune holds, will be in the first cohort of a new distribution with high median and a right tail extending to death by natural causes at advanced old age.
  •  
    The Median Isn't the Message by Stephen Jay Gould
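Gould's two statistical points, the mean/median gap under skew and a right-skewed survival distribution with an eight-month median, can be reproduced in a few lines. The income figures and the log-normal survival model below are invented stand-ins chosen only to mimic the shapes he describes, not his actual data.

```python
import random
import statistics

random.seed(1)

# 1. The politician example: one very large income pulls the mean above
#    $15,000 while half the citizens still earn under $10,000.
#    (Invented figures chosen to echo the quoted numbers.)
incomes = [9_500] * 501 + [13_000] * 498 + [4_000_000]
print("mean income  :", round(statistics.mean(incomes)))   # roughly 15,000
print("median income:", statistics.median(incomes))         # 9,500

# 2. A right-skewed survival distribution: bounded below by zero, median
#    near eight months, long upper tail. A log-normal is one convenient
#    stand-in; the real mesothelioma data need not follow it.
survival_months = sorted(random.lognormvariate(2.08, 0.9) for _ in range(100_000))
print("median survival (months):", round(statistics.median(survival_months), 1))
print("mean survival (months)  :", round(statistics.mean(survival_months), 1))
print("95th percentile (months):", round(survival_months[95_000], 1))
```

The mean exceeds the median in both cases, and the 95th percentile of the skewed survival distribution sits several years beyond the eight-month median, which is exactly the tail Gould describes placing himself in.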
Weiye Loh

Research integrity: Sabotage! : Nature News - 0 views

  • University of Michigan in Ann Arbor
  • Vipul Bhrigu, a former postdoc at the university's Comprehensive Cancer Center, wears a dark-blue three-buttoned suit and a pinched expression as he cups his pregnant wife's hand in both of his. When Pollard Hines calls Bhrigu's case to order, she has stern words for him: "I was inclined to send you to jail when I came out here this morning."
  • Bhrigu, over the course of several months at Michigan, had meticulously and systematically sabotaged the work of Heather Ames, a graduate student in his lab, by tampering with her experiments and poisoning her cell-culture media. Captured on hidden camera, Bhrigu confessed to university police in April and pleaded guilty to malicious destruction of personal property, a misdemeanour that apparently usually involves cars: in the spaces for make and model on the police report, the arresting officer wrote "lab research" and "cells". Bhrigu has said on multiple occasions that he was compelled by "internal pressure" and had hoped to slow down Ames's work. Speaking earlier this month, he was contrite. "It was a complete lack of moral judgement on my part," he said.
  • ...16 more annotations...
  • Bhrigu's actions are surprising, but probably not unique. There are few firm numbers showing the prevalence of research sabotage, but conversations with graduate students, postdocs and research-misconduct experts suggest that such misdeeds occur elsewhere, and that most go unreported or unpoliced. In this case, the episode set back research, wasted potentially tens of thousands of dollars and terrorized a young student. More broadly, acts such as Bhrigu's — along with more subtle actions to hold back or derail colleagues' work — have a toxic effect on science and scientists. They are an affront to the implicit trust between scientists that is necessary for research endeavours to exist and thrive.
  • Despite all this, there is little to prevent perpetrators re-entering science.
  • federal bodies that provide research funding have limited ability and inclination to take action in sabotage cases because they aren't interpreted as fitting the federal definition of research misconduct, which is limited to plagiarism, fabrication and falsification of research data.
  • In Bhrigu's case, administrators at the University of Michigan worked with police to investigate, thanks in part to the persistence of Ames and her supervisor, Theo Ross. "The question is, how many universities have such procedures in place that scientists can go and get that kind of support?" says Christine Boesz, former inspector-general for the US National Science Foundation in Arlington, Virginia, and now a consultant on scientific accountability. "Most universities I was familiar with would not necessarily be so responsive."
  • Some labs are known to be hyper-competitive, with principal investigators pitting postdocs against each other. But Ross's lab is a small, collegial place. At the time that Ames was noticing problems, it housed just one other graduate student, a few undergraduates doing projects, and the lab manager, Katherine Oravecz-Wilson, a nine-year veteran of the lab whom Ross calls her "eyes and ears". And then there was Bhrigu, an amiable postdoc who had joined the lab in April 2009.
  • Some people whom Ross consulted with tried to convince her that Ames was hitting a rough patch in her work and looking for someone else to blame. But Ames was persistent, so Ross took the matter to the university's office of regulatory affairs, which advises on a wide variety of rules and regulations pertaining to research and clinical care. Ray Hutchinson, associate dean of the office, and Patricia Ward, its director, had never dealt with anything like it before. After several meetings and two more instances of alcohol in the media, Ward contacted the department of public safety — the university's police force — on 9 March. They immediately launched an investigation — into Ames herself. She endured two interrogations and a lie-detector test before investigators decided to look elsewhere.
  • At 4:00 a.m. on Sunday 18 April, officers installed two cameras in the lab: one in the cold room where Ames's blots had been contaminated, and one above the refrigerator where she stored her media. Ames came in that day and worked until 5:00 p.m. On Monday morning at around 10:15, she found that her medium had been spiked again. When Ross reviewed the tapes of the intervening hours with Richard Zavala, the officer assigned to the case, she says that her heart sank. Bhrigu entered the lab at 9:00 a.m. on Monday and pulled out the culture media that he would use for the day. He then returned to the fridge with a spray bottle of ethanol, usually used to sterilize lab benches. With his back to the camera, he rummaged through the fridge for 46 seconds. Ross couldn't be sure what he was doing, but it didn't look good. Zavala escorted Bhrigu to the campus police department for questioning. When he told Bhrigu about the cameras in the lab, the postdoc asked for a drink of water and then confessed. He said that he had been sabotaging Ames's work since February. (He denies involvement in the December and January incidents.)
  • Misbehaviour in science is nothing new — but its frequency is difficult to measure. Daniele Fanelli at the University of Edinburgh, UK, who studies research misconduct, says that overtly malicious offences such as Bhrigu's are probably infrequent, but other forms of indecency and sabotage are likely to be more common. "A lot more would be the kind of thing you couldn't capture on camera," he says. Vindictive peer review, dishonest reference letters and withholding key aspects of protocols from colleagues or competitors can do just as much to derail a career or a research project as vandalizing experiments. These are just a few of the questionable practices that seem quite widespread in science, but are not technically considered misconduct. In a meta-analysis of misconduct surveys, published last year (D. Fanelli PLoS ONE 4, e5738; 2009), Fanelli found that up to one-third of scientists admit to offences that fall into this grey area, and up to 70% say that they have observed them.
  • Some say that the structure of the scientific enterprise is to blame. The big rewards — tenured positions, grants, papers in stellar journals — are won through competition. To get ahead, researchers need only be better than those they are competing with. That ethos, says Brian Martinson, a sociologist at HealthPartners Research Foundation in Minneapolis, Minnesota, can lead to sabotage. He and others have suggested that universities and funders need to acknowledge the pressures in the research system and try to ease them by means of education and rehabilitation, rather than simply punishing perpetrators after the fact.
  • Bhrigu says that he felt pressure in moving from the small college at Toledo to the much bigger one in Michigan. He says that some criticisms he received from Ross about his incomplete training and his work habits frustrated him, but he doesn't blame his actions on that. "In any kind of workplace there is bound to be some pressure," he says. "I just got jealous of others moving ahead and I wanted to slow them down."
  • At Washtenaw County Courthouse in July, having reviewed the case files, Pollard Hines delivered Bhrigu's sentence. She ordered him to pay around US$8,800 for reagents and experimental materials, plus $600 in court fees and fines — and to serve six months' probation, perform 40 hours of community service and undergo a psychiatric evaluation.
  • But the threat of a worse sentence hung over Bhrigu's head. At the request of the prosecutor, Ross had prepared a more detailed list of damages, including Bhrigu's entire salary, half of Ames's, six months' salary for a technician to help Ames get back up to speed, and a quarter of the lab's reagents. The court arrived at a possible figure of $72,000, with the final amount to be decided upon at a restitution hearing in September.
  • Ross, though, is happy that the ordeal is largely over. For the month-and-a-half of the investigation, she became reluctant to take on new students or to hire personnel. She says she considered packing up her research programme. She even questioned her own sanity, worrying that she was the one sabotaging Ames's work via "an alternate personality". Ross now wonders if she was too trusting, and urges other lab heads to "realize that the whole spectrum of humanity is in your lab. So, when someone complains to you, take it seriously."
  • She also urges others to speak up when wrongdoing is discovered. After Bhrigu pleaded guilty in June, Ross called Trempe at the University of Toledo. He was shocked, of course, and for more than one reason. His department at Toledo had actually re-hired Bhrigu. Bhrigu says that he lied about the reason he left Michigan, blaming it on disagreements with Ross. Toledo let Bhrigu go in July, not long after Ross's call.
  • Now that Bhrigu is in India, there is little to prevent him from getting back into science. And even if he were in the United States, there wouldn't be much to stop him. The National Institutes of Health in Bethesda, Maryland, through its Office of Research Integrity, will sometimes bar an individual from receiving federal research funds for a time if they are found guilty of misconduct. But Bhrigu probably won't face that prospect because his actions don't fit the federal definition of misconduct, a situation Ross finds strange. "All scientists will tell you that it's scientific misconduct because it's tampering with data," she says.
  • Ames says that the experience shook her trust in her chosen profession. "I did have doubts about continuing with science. It hurt my idea of science as a community that works together, builds upon each other's work and collaborates."
  •  
    Research integrity: Sabotage! Postdoc Vipul Bhrigu destroyed the experiments of a colleague in order to get ahead.
Weiye Loh

Roger Pielke Jr.'s Blog: The Flip Side of Extreme Event Attribution - 0 views

  • It is just logical that one cannot make the claim that action on climate change will influence future extreme events without first being able to claim that greenhouse gas emissions have a discernible influence on those extremes. This probably helps to explain why there is such a push to classify the attribution issue as settled. But this is just piling on one bad argument on top of another.
  • Even if you believe that attribution has been achieved, these are bad arguments for the simple fact that detecting the effects on the global climate system of emissions reductions would take many, many (many!) decades. For instance, for an aggressive climate policy that would stabilize carbon dioxide at 450 ppm, detecting a change in average global temperatures would necessarily occur in the second half of this century. Detection of changes in extreme events would take even longer. (A rough detection-time calculation appears after this list.)
  • To suggest that action on greenhouse gas emissions is a mechanism for modulating the impacts of extreme events remains a highly misleading argument.  There are better justifications for action on carbon dioxide that do not depend on contorting the state of the science.
  •  
    It is just logical that one cannot make the claim that action on climate change will influence future extreme events without first being able to claim that greenhouse gas emissions have a discernible influence on those extremes. This probably helps to explain why there is such a push to classify the attribution issue as settled. But this is just piling on one bad argument on top of another.
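A back-of-the-envelope sketch (mine, not Pielke's) of why detection takes so long: with independent year-to-year noise of standard deviation sigma, the standard error of an ordinary least-squares trend over n years is sigma / sqrt(n(n^2 - 1)/12), so a small trend difference between an emissions scenario and business-as-usual only clears a two-sigma threshold after decades. The 0.005 and 0.002 degC/yr trend differences and the 0.15 degC noise level are assumed illustrative values, not figures from the post.

```python
import math

def years_to_detect(trend_diff_per_year, annual_noise_sd, z=2.0):
    """Smallest n for which an OLS trend of the given size exceeds z standard
    errors, assuming independent annual noise (a deliberately crude model)."""
    n = 3
    while True:
        sxx = n * (n * n - 1) / 12.0   # sum of squared deviations of t = 1..n
        if trend_diff_per_year * math.sqrt(sxx) / annual_noise_sd >= z:
            return n
        n += 1

print(years_to_detect(0.005, 0.15))   # ~36 years for a 0.005 degC/yr difference
print(years_to_detect(0.002, 0.15))   # ~65 years for a 0.002 degC/yr difference
```

Year-to-year persistence in the real climate system would stretch these times further, which only strengthens the point about extremes being even harder to detect.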
Weiye Loh

Miss Malaysia Toy Boy - 7 views

Yes, commodification has led to liberation. After all, capitalism is all about creating new markets for more production and consumption. Beauty has all along been commodified since the oldest trade...

Weiye Loh

Roger Pielke Jr.'s Blog: Blind Spots in Australian Flood Policies - 0 views

  • better management of flood risks in Australia will depend up better data on flood risk.  However, collecting such data has proven problematic
  • As many Queenslanders affected by January’s floods are realising, riverine flood damage is commonly excluded from household insurance policies. And this is unlikely to change until councils – especially in Queensland – stop dragging their feet and actively assist in developing comprehensive data insurance companies can use.
  • Why? Because there is often little available information that would allow an insurer to adequately price this flood risk. Without this, there is little economic incentive for insurers to accept this risk. It would be irresponsible for insurers to cover riverine flood without quantifying and pricing the risk accordingly.
  • ...8 more annotations...
  • The first step in establishing risk-adjusted premiums is to know the likelihood of the depth of flooding at each address. This information has to be address-specific because the severity of flooding can vary widely over small distances, for example, from one side of a road to the other.
  • A litany of reasons is given for withholding data. At times it seems that refusal stems from a view that insurance is innately evil. This is ironic in view of the gratuitous advice sometimes offered by politicians and commentators in the aftermath of extreme events, exhorting insurers to pay claims even when no legal liability exists and riverine flood is explicitly excluded from policies.
  • Risk Frontiers is involved in jointly developing the National Flood Information Database (NFID) for the Insurance Council of Australia with Willis Re, a reinsurance broking intermediary. NFID is a five year project aiming to integrate flood information from all city councils in a consistent insurance-relevant form. The aim of NFID is to help insurers understand and quantify their risk. Unfortunately, obtaining the base data for NFID from some local councils is difficult and sometimes impossible despite the support of all state governments for the development of NFID. Councils have an obligation to assess their flood risk and to establish rules for safe land development. However, many are antipathetic to the idea of insurance. Some states and councils have been very supportive – in New South Wales and Victoria, particularly. Some states have a central repository – a library of all flood studies and digital terrain models (digital elevation data). Council reluctance to release data is most prevalent in Queensland, where, unfortunately, no central repository exists.
  • Second, models of flood risk are sometimes misused:
  • many councils only undertake flood modelling in order to create a single design flood level, usually the so-called one-in-100 year flood. (For reasons given later, a better term is the flood with a 1% annual likelihood of being exceeded.)
  • Inundation maps showing the extent of the flood with a 1% annual likelihood of exceedance are increasingly common on council websites, even in Queensland. Unfortunately these maps say little about the depth of water at an address or, importantly, how depth varies for less probable floods. Insurance claims usually begin when the ground is flooded and increase rapidly as water rises above the floor level. At Windsor in NSW, for example, the difference in the water depth between the flood with a 1% annual chance of exceedance and the maximum possible flood is nine metres. In other catchments this difference may be as small as ten centimetres. The risk of damage is quite different in both cases and an insurer needs this information if they are to provide coverage in these areas.
  • The ‘one-in-100 year flood’ term is misleading. To many it is something that happens regularly once every 100 years — with the reliability of a bus timetable. It is still possible, though unlikely, that a flood of similar magnitude or even greater flood could happen twice in one year or three times in successive years.
  • The calculations underpinning this are not straightforward but the probability that an address exposed to a 1-in-100 year flood will experience such an event or greater over the lifetime of the house – 50 years say – is around 40%. Over the lifetime of a typical home mortgage – 25 years – the probability of occurrence is 22%. These are not good odds. (The calculation is sketched after this list.)
  •  
    John McAneney of Risk Frontiers at Macquarie University in Sydney identifies some opportunities for better flood policies in Australia.
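The 40% and 22% figures quoted above follow directly from treating each year as an independent 1% chance of exceedance; the independence assumption is the standard actuarial reading, not something stated in the piece.

```python
# Chance that a flood with a 1% annual likelihood of exceedance occurs
# at least once over a given horizon, assuming independent years.

def prob_at_least_one(annual_prob, years):
    return 1.0 - (1.0 - annual_prob) ** years

for years in (25, 50):
    print(f"over {years} years: {prob_at_least_one(0.01, years):.1%}")

# over 25 years: 22.2%  (a typical mortgage)
# over 50 years: 39.5%  (a typical house lifetime; the "around 40%" in the excerpt)
```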
Weiye Loh

Effect of alcohol on risk of coronary heart diseas... [Vasc Health Risk Manag. 2006] - ... - 0 views

  • Studies of the effects of alcohol consumption on health outcomes should recognise the methodological biases they are likely to face, and design, analyse and interpret their studies accordingly. While regular moderate alcohol consumption during middle-age probably does reduce vascular risk, care should be taken when making general recommendations about safe levels of alcohol intake. In particular, it is likely that any promotion of alcohol for health reasons would do substantially more harm than good.
  • The consistency in the vascular benefit associated with moderate drinking (compared with non-drinking) observed across different studies, together with the existence of credible biological pathways, strongly suggests that at least some of this benefit is real.
  • However, because of biases introduced by: choice of reference categories; reverse causality bias; variations in alcohol intake over time; and confounding, some of it is likely to be an artefact. For heavy drinking, different study biases have the potential to act in opposing directions, and as such, the true effects of heavy drinking on vascular risk are uncertain. However, because of the known harmful effects of heavy drinking on non-vascular mortality, the problem is an academic one. (A toy simulation of one such bias, reverse causality, appears after this list.)
  •  
    Studies of the effects of alcohol consumption on health outcomes should recognise the methodological biases they are likely to face, and design, analyse and interpret their studies accordingly. While regular moderate alcohol consumption during middle-age probably does reduce vascular risk, care should be taken when making general recommendations about safe levels of alcohol intake.
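One of the biases listed above, reverse causality (the "sick quitter" effect, in which people who are already ill give up alcohol and then swell the non-drinking reference group), can be made concrete with a toy simulation. Every number below is invented and carries no clinical meaning; in this model alcohol has no true effect at all, yet moderate drinkers still look healthier.

```python
import random

random.seed(0)
N = 200_000

# Toy model: vascular risk depends only on pre-existing illness, never on
# alcohol. The ill are far more likely to have quit drinking, so they pile
# up in the "non-drinker" reference category. (All parameters are invented.)
tallies = {"non-drinker": [0, 0], "moderate drinker": [0, 0]}  # [events, people]

for _ in range(N):
    ill = random.random() < 0.10                     # 10% have prior illness
    true_risk = 0.30 if ill else 0.05                # unaffected by drinking
    p_nondrinker = 0.80 if ill else 0.30             # the sick tend to have quit
    group = "non-drinker" if random.random() < p_nondrinker else "moderate drinker"
    tallies[group][1] += 1
    tallies[group][0] += random.random() < true_risk

for group, (events, people) in tallies.items():
    print(f"{group:16s}: observed event rate = {events / people:.3f}")

# Non-drinkers come out markedly worse even though drinking does nothing in
# this model: an artefact of who ends up in the reference category.
```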
Weiye Loh

Joe Queenan: My 6,128 Favorite Books - WSJ.com - 0 views

  •  
    "If you have read 6,000 books in your lifetime, or even 600, it's probably because at some level you find "reality" a bit of a disappointment. People in the 19th century fell in love with "Ivanhoe" and "The Count of Monte Cristo" because they loathed the age they were living through. Women in our own era read "Pride and Prejudice" and "Jane Eyre" and even "The Bridges of Madison County"-a dimwit, hayseed reworking of "Madame Bovary"-because they imagine how much happier they would be if their husbands did not spend quite so much time with their drunken, illiterate golf buddies down at Myrtle Beach."
Jianwei Tan

Dominic Utton: How to scam a scammer |From the Guardian |The Guardian - 0 views

  •  
    Summary: Some people may have heard of the Nigerian 419 scams that were infamous a few years back. These scammers, who supposedly operated out of Nigeria, created elaborate stories and solicited help through e-mails. Although the initial intention of the e-mail is to ask for help, subsequent correspondence usually results in the scammer requesting monetary aid through wire transfer. This person, Mike, has taken it upon himself to declare war on these scammers, baiting them into believing that he will send money when in fact he plays pranks on them. The pranks range from telling silly stories and wasting the scammer's time to persuading the scammer to get tattooed in order to get the money. Question: Scams are, without a doubt, unethical and probably criminal activities. However, is the act of scamming a would-be scammer an ethical thing to do? Problem: Let's imagine a situation where the scammer and the scambaiter (the person scamming the scammer) are from the same country or even the same state, so both parties would be subject to the same laws. If the scammer were to try to launch a scam and instead was scambaited into severe consequences (I think getting tattooed is quite severe), should the scambaiter be prosecuted by the legal system?
Weiye Loh

Do avatars have digital rights? - 20 views

Hi Weiye, I agree with you that this brings in the topic of representation. Maybe you should try taking Media and Representation by Dr. Ingrid to discuss this further. Going back to your questio...

avatars

juliet huang

Go slow with Net law - 4 views

Article: Go slow with tech law. Published: 23 Aug 2009. Source: Straits Times. Background: When Singapore signed a free trade agreement with the USA in 2003, intellectual property rights was a ...

sim lim square

started by juliet huang on 26 Aug 09 no follow-up yet
Weiye Loh

Titans of science: David Attenborough meets Richard Dawkins | Science | The Guardian - 0 views

  • What is the one bit of science from your field that you think everyone should know?
    David Attenborough: The unity of life.
    Richard Dawkins: The unity of life that comes about through evolution, since we're all descended from a single common ancestor. It's almost too good to be true, that on one planet this extraordinary complexity of life should have come about by what is pretty much an intelligible process. And we're the only species capable of understanding it.
  • RD: I know you're working on a programme about Cambrian and pre-Cambrian fossils, David. A lot of people might think, "These are very old animals, at the beginning of evolution; they weren't very good at what they did." I suspect that isn't the case?
    DA: They were just as good, but as generalists, most were ousted from the competition.
    RD: So it probably is true there's a progressive element to evolution in the short term but not in the long term – that when a lineage branches out, it gets better for about five million years but not 500 million years. You wouldn't see progressive improvement over that kind of time scale.
    DA: No, things get more and more specialised. Not necessarily better.
    RD: The "camera" eyes of any modern animal would be better than what had come before.
    DA: Certainly... but they don't elaborate beyond function. When I listen to a soprano sing a Handel aria with an astonishing coloratura from that particular larynx, I say to myself, there has to be a biological reason that was useful at some stage. The larynx of a human being did not evolve without having some function. And the only function I can see is sexual attraction.
    RD: Sexual selection is important and probably underrated.
    DA: What I like to think is that if I think the male bird of paradise is beautiful, my appreciation of it is precisely the same as a female bird of paradise.
    • Weiye Loh
       
      Is survivability really all about sex and the reproduction of future generations?
  • People say Richard Feynman had one of these extraordinary minds that could grapple with ideas of which I have no concept. And you hear all the ancillary bits – like he was a good bongo player – that make him human. So I admire this man who could not only deal with string theory but also play the bongos. But he is beyond me. I have no idea what he was talking of.
  • ...6 more annotations...
  • RD: There does seem to be a sense in which physics has gone beyond what human intuition can understand. We shouldn't be too surprised about that because we're evolved to understand things that move at a medium pace at a medium scale. We can't cope with the very tiny scale of quantum physics or the very large scale of relativity.
  • DA: A physicist will tell me that this armchair is made of vibrations and that it's not really here at all. But when Samuel Johnson was asked to prove the material existence of reality, he just went up to a big stone and kicked it. I'm with him.
  • RD: It's intriguing that the chair is mostly empty space and the thing that stops you going through it is vibrations or energy fields. But it's also fascinating that, because we're animals that evolved to survive, what solidity is to most of us is something you can't walk through.
  • the science of the future may be vastly different from the science of today, and you have to have the humility to admit when you don't know. But instead of filling that vacuum with goblins or spirits, I think you should say, "Science is working on it."
  • DA: Yes, there was a letter in the paper [about Stephen Hawking's comments on the nonexistence of God] saying, "It's absolutely clear that the function of the world is to declare the glory of God." I thought, what does that sentence mean?!
  • What is the most difficult ethical dilemma facing science today?
    DA: How far do you go to preserve individual human life?
    RD: That's a good one, yes.
    DA: I mean, what are we to do with the NHS? How can you put a value in pounds, shillings and pence on an individual's life? There was a case with a bowel cancer drug – if you gave that drug, which costs several thousand pounds, it continued life for six weeks on. How can you make that decision?
  •  
    Of mind and matter: David Attenborough meets Richard Dawkins. We paired up Britain's most celebrated scientists to chat about the big issues: the unity of life, ethics, energy, Handel - and the joy of riding a snowmobile.
Weiye Loh

Why Did 17 Million Students Go to College? - Innovations - The Chronicle of Higher Educ... - 0 views

  • Over 317,000 waiters and waitresses have college degrees (over 8,000 of them have doctoral or professional degrees), along with over 80,000 bartenders, and over 18,000 parking lot attendants. All told, some 17,000,000 Americans with college degrees are doing jobs that the BLS says require less than the skill levels associated with a bachelor’s degree.
  • Charles Murray’s thesis that an increasing number of people attending college do not have the cognitive abilities or other attributes usually necessary for success at higher levels of learning. As more and more try to attend colleges, either college degrees will be watered down (something already happening I suspect) or drop-out rates will rise.
  • An interesting new study was posted on the Web site of America's most prestigious economic-research organization, the National Bureau of Economic Research. Three highly regarded economists (one of whom has won the Nobel Prize in Economic Science) have produced "Estimating Marginal Returns to Education," Working Paper 16474 of the NBER. After very sophisticated and elaborate analysis, the authors conclude "In general, marginal and average returns to college are not the same." (p. 28)
  • ...8 more annotations...
  • even if on average, an investment in higher education yields a good, say 10 percent, rate of return, it does not follow that adding to existing investments will yield that return, partly for reasons outlined above. The authors (Pedro Carneiro, James Heckman, and Edward Vytlacil) make that point explicitly, stating "Some marginal expansions of schooling produce gains that are well below average returns, in general agreement with the analysis of Charles Murray."  (p.29) (A small numerical illustration of the marginal/average distinction appears after this list.)
  • Once the economy improves, and history tells us it will improve within our lifetimes, those who already have a college degree under their belts will be better equipped to take advantage of new employment opportunities than those who don’t. Perhaps not because of the actual knowledge obtained through their degrees, but definitely as an offset to the social stigma that still exists for those who do not attend college. A college degree may not help a young person secure professional work immediately – so new graduates spend a few years waiting tables until the right opportunity comes along. So what? It’s probably good for them. But they have 40-50 years in the workforce ahead of them and need to be forward-thinking if they don’t want to wait tables for that entire time. If we stop encouraging all young people to view college as both a goal and a possibility, and start weeding out those whose “prior academic records suggest little likelihood of academic success” which, let’s face it, will happen in larger proportions in poorer schools, then in 20 years we’ll find that efforts to reduce socioeconomic gaps between minorities and non-minorities have been seriously undermined.
  • Bet you a lot of those janitors with PhDs are from the humanities, in particular ethnic studies, film studies,… basket weaving courses… or non-economics social sciences, e.g., sociology, anthropology of some never-heard-of country…. There should be a buyer-beware warning on all those non-quantitative majors that make people into sophisticated malcontent complainers!
  • This article also presumes that the purpose of higher education is merely to train one for a career path and enhance future income. This devalues the university, turning it into a vocational training institution. There’s nothing in this data that suggests that they are “sophisticated complainers”; that’s an unwarranted inference.
  • it was mentioned that the Bill and Melinda Gates Foundation would like 80% of American youth to attend and graduate from college. It is a nice thought in many ways. As a teacher and professor, intellectually I am all for it (if the university experience is a serious one, which these days, I don’t know).
  • students’ expectations in attending college are not just intellectual; they are careerist (probably far more so)
  • This employment issue has more to do with levels of training and subsequent levels of expectation. When a Korean student emerges from 20 years of intense study with a university degree, he or she reasonably expects a “good” job — which is to say, a well-paying professional or managerial job with good forward prospects. But here’s the problem. There does not exist, nor will there ever exist, a society in which 80% of the available jobs are professional, managerial, comfortable, and well-paid. No way.
  • Korea has a number of other jobs, but some are low-paid service work, and many others — in factories, farming, fishing — are scorned as 3-D jobs (difficult, dirty, and dangerous). Educated Koreans don’t want them. So the country is importing labor in droves — from China, Vietnam, Cambodia, the Philippines, even Uzbekistan. In the countryside, rural Korean men are having such a difficult time finding prospective wives to share their agricultural lifestyle that fully 40% of rural marriages are to poor women from those other Asian countries, who are brought in by match-makers and marriage brokers.
  •  
    Why Did 17 Million Students Go to College?
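The marginal-versus-average distinction the NBER authors draw, quoted above, is easy to see with a deliberately crude numerical example; the return figures below are invented, not estimates from Carneiro, Heckman, and Vytlacil.

```python
# 100 hypothetical students sorted from best-matched to college (high
# individual return) to least-matched. Expanding enrolment adds students
# from the low end, so the marginal return sits well below the average.
# (Invented numbers for illustration only.)

individual_returns = [0.15] * 30 + [0.10] * 30 + [0.06] * 20 + [0.01] * 20

def average_return(enrolled):
    cohort = individual_returns[:enrolled]
    return sum(cohort) / len(cohort)

def marginal_return(enrolled, extra=10):
    newcomers = individual_returns[enrolled:enrolled + extra]
    return sum(newcomers) / len(newcomers)

print("average return with 60 enrolled:", round(average_return(60), 3))   # 0.125
print("return of the next 10 students :", round(marginal_return(60), 3))  # 0.06
print("return of students 81-90       :", round(marginal_return(80), 3))  # 0.01
```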
Weiye Loh

Rationally Speaking: The problem of replicability in science - 0 views

  • The problem of replicability in science (image from xkcd), by Massimo Pigliucci
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • ...18 more annotations...
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant time of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet by my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which meant to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well known statistical phenomenon of the regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of the regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment to repeat itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated.
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, "sexy" results, which creates a powerful filter against negative or marginally significant results. What you see in science journals, in other words, isn't a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty handed. (A toy simulation of this publication filter appears after this list.)
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPU’s — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, it turns out that about ⅓ of published studies is never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh
       
      Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change)
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh
       
      It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies surely exist, but they should evolve rather than be imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advancement and funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
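A minimal, hypothetical sketch (in Python; not taken from any of the articles or comments above) of the selection effect described in the first note: simulate many two-group experiments in which there is no real effect, compute an approximate two-sided p value (using a normal approximation rather than an exact t test), and then "publish" only the results that cross the conventional p < .05 threshold. The function and variable names here are illustrative assumptions, not anyone's actual analysis code.

    import math
    import random
    import statistics

    random.seed(1)

    def null_experiment(n=30):
        # Two groups drawn from the same distribution: any apparent "effect" is a fluke.
        a = [random.gauss(0.0, 1.0) for _ in range(n)]
        b = [random.gauss(0.0, 1.0) for _ in range(n)]
        se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
        t = (statistics.mean(a) - statistics.mean(b)) / se
        # Two-sided p value from a normal approximation to the t statistic.
        return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))

    p_values = [null_experiment() for _ in range(1000)]
    published = [p for p in p_values if p < 0.05]  # the selective-reporting filter

    print("experiments run:         ", len(p_values))
    print("'significant' (p < .05): ", len(published))  # roughly 5% by chance alone

Out of 1,000 null experiments, roughly 50 clear the p < .05 bar by chance alone; if only those are written up and submitted, the resulting "literature" consists entirely of false positives, even though every individual analysis was performed correctly.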
Weiye Loh

Small answers to the big questions - Chris Blattman - 0 views

  • A reporter emailed me this morning to see if I could answer a few questions about poverty. Sure, I said. The emailed questions that followed? Is it realistic to think that poverty can one day end? What, in your view, are the best global solutions? How urgent is it to act (in the context of climate change)?
  • My first reaction: thanks for asking the easy questions, lady. Was this serious? How can one possibly answer the grand questions of development in a few sentences?
  • Is it realistic to think that poverty can one day end? In America, you can be poor but own a car, a television, and have food on the table every day. In northern Uganda, that would make you a very wealthy man. Do I see a world where nearly every household has its basic needs covered, plus some of the comforts of life? Absolutely. I imagine most places on the planet will get to what we now think of as middle-income status, perhaps $8,000 to $14,000 per head in 2011 dollars and purchasing ability. The poorest nations will probably be in the places least advantageous to trade (the landlocked, for instance) and where cultures or political systems restrict innovation and freedoms. But poverty is a relative measure, and short of a Star Trek world where you can summon food and items out of a wall unit, there will always be people who struggle to keep up.
  • ...5 more annotations...
  • What, in your view, are the best global solutions?
  • There are plenty of aid programs that seem to work, from de-worming to small business grants to incentives to send children to school. But none of these programs are likely to have transformative effects.
  • The difference between a country with $1,500 of income a head and one with $15,000 is simple: industry. All the microfinance and microenterprise programs in the world are not going to build large firms and import technology and provide most people with what they really want: a stable job, regular wages, and a decent work environment.
  • How you get these firms is the tricky question. Only a few firms will be home-grown; most will be firms that spread across borders, because they have the markets and know-how. Probably we'll need to see wages rise in China and India before manufacturing ever spreads to the poorest places on the planet, like Central Asia and Africa. The countries that will get them first are the ones that are close to trade routes, have stable political climates, make it easy to get finance, are open to trade, have large domestic markets, have able and educated workforces (i.e. secondary education), and have leaders in charge who don't see the industrial sector as either a threat to their power or a garden from which they get to select the sweetest fruits for themselves.
  • How urgent is it to act (in the context of climate change)? The short answer: I wouldn't know. The US, China, Europe, and India must change, because if they don't, nothing will. For the Ugandas or Uzbekistans or Bolivias of the world, I can't see it making a difference. Let them develop as green as possible, but let's not impede their growth because of it, and rob them of the opportunity we took ourselves.