New Media Ethics 2009 course / Group items tagged: third

Weiye Loh

The Decline Effect and the Scientific Method: The New Yorker

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology. (A small simulation of this significance filter appears after this list.)
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. (A sketch of such a funnel plot follows this list.)
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
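
The two explanations sketched in these excerpts, regression to the mean plus a filter that publishes mainly significant results, can be made concrete with a toy simulation. The Python sketch below is not from the article and uses invented parameters (a true effect of 0.2, twenty subjects per group, five thousand studies); it only illustrates the mechanism: studies that happen to clear the five-per-cent threshold overstate the true effect, and plain re-runs of those same studies come out smaller, which reads as a decline.

```python
# A toy simulation of the "significance filter" (all parameters invented):
# many small studies of the same true effect are run, only those that clear
# p < 0.05 are "published", and each published study is then re-run once.
# Published effects overstate the truth; the replications look like a decline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2    # assumed true standardized mean difference
N_PER_GROUP = 20     # small samples make the filter's distortion visible
N_STUDIES = 5000

def run_study(effect, n):
    """Simulate one two-group study; return (estimated effect, p-value)."""
    treated = rng.normal(effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    _, p_value = stats.ttest_ind(treated, control)
    return treated.mean() - control.mean(), p_value

studies = np.array([run_study(TRUE_EFFECT, N_PER_GROUP) for _ in range(N_STUDIES)])
published = studies[studies[:, 1] < 0.05]                  # the significance filter
replications = np.array([run_study(TRUE_EFFECT, N_PER_GROUP)[0]
                         for _ in published])              # unfiltered re-runs

print(f"true effect:                 {TRUE_EFFECT:.2f}")
print(f"mean effect, all studies:    {studies[:, 0].mean():.2f}")
print(f"mean effect, published only: {published[:, 0].mean():.2f}")   # inflated
print(f"mean effect, replications:   {replications.mean():.2f}")      # the 'decline'
```

With these assumed numbers only a small fraction of simulated studies clears the threshold, and the ones that do overstate the effect several-fold, so their replications inevitably look like a loss of effect.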
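
Palmer's funnel graph can be illustrated the same way. The sketch below, again on synthetic data with made-up parameters, plots simulated effect estimates against sample size twice: once for all studies, which gives the symmetric funnel described above, and once after small studies are reported only when they reach significance, which skews the small-sample side toward positive results.

```python
# A toy funnel plot in the spirit of the one Palmer describes: effect
# estimates from many simulated studies plotted against their sample size.
# With honest reporting the points form a funnel around the true value; if
# small studies are reported only when significant, the small-sample side
# of the funnel skews toward large effects. Everything here is synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
TRUE_EFFECT = 0.2

sample_sizes = rng.integers(10, 400, size=1500)
std_errors = np.sqrt(2.0 / sample_sizes)                 # SE of a mean difference
estimates = rng.normal(TRUE_EFFECT, std_errors)
significant = np.abs(estimates) > 1.96 * std_errors      # two-sided p < 0.05

# "Reported" studies: large studies always appear, small ones only if significant
reported = (sample_sizes >= 100) | significant

fig, (ax_all, ax_reported) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
ax_all.scatter(estimates, sample_sizes, s=6)
ax_all.set_title("All studies (symmetric funnel)")
ax_reported.scatter(estimates[reported], sample_sizes[reported], s=6)
ax_reported.set_title("Selectively reported (skewed)")
for ax in (ax_all, ax_reported):
    ax.axvline(TRUE_EFFECT, linestyle="--")
    ax.set_xlabel("estimated effect")
ax_all.set_ylabel("sample size")
plt.tight_layout()
plt.show()
```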
Weiye Loh

Rationally Speaking: The problem of replicability in science

  • The problem of replicability in science (image from xkcd), by Massimo Pigliucci
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant time of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet by my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which meant to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well-known statistical phenomenon of the regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of the regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment repeating itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated. (A small simulation after this list illustrates the effect.)
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well-known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, “sexy” results. Which creates a powerful filter against negative, or marginally significant results. What you see in science journals, in other words, isn’t a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty handed.
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPU’s — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, it turns out that about ⅓ of published studies is never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh
       
      Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change).
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh
       
      It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
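
Paulos's regression-to-the-mean point lends itself to a short simulation. The Python sketch below uses invented numbers and is only illustrative: each performer (or study) has a fixed underlying level plus one-off luck, and if we pick the best results from a first round and measure the same group again, the second-round average slides back toward the mean even though nothing underlying has changed.

```python
# Regression to the mean in miniature: each "performer" (or study) has a
# stable underlying level plus one-off luck. Selecting the best results
# from round one, their round-two results drift back toward the average,
# with no change in the underlying levels at all. Numbers are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
underlying = rng.normal(0.0, 1.0, n)                # stable "skill" / true effect
round_one = underlying + rng.normal(0.0, 1.0, n)    # skill + luck
round_two = underlying + rng.normal(0.0, 1.0, n)    # same skill, fresh luck

top = round_one > np.quantile(round_one, 0.95)      # the "cover of the magazine" group

print(f"top group, round one mean: {round_one[top].mean():.2f}")
print(f"top group, round two mean: {round_two[top].mean():.2f}")   # lower: regression
print(f"everyone, round two mean:  {round_two.mean():.2f}")
```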
Weiye Loh

When Insurers Put Profits Before People - NYTimes.com

  • Late in 2007
  • A 17-year-old girl named Nataline Sarkisyan was in desperate need of a transplant after receiving aggressive treatment that cured her recurrent leukemia but caused her liver to fail. Without a new organ, she would die in a matter of days; with one, she had a 65 percent chance of surviving. Her doctors placed her on the liver transplant waiting list.
  • She was critically ill, as close to death as one could possibly be while technically still alive, and her fate was inextricably linked to another’s. Somewhere, someone with a compatible organ had to die in time for Nataline to live.
  • But even when the perfect liver became available a few days after she was put on the list, doctors could not operate. What made Nataline different from most transplant patients, and what eventually brought her case to the attention of much of the country, was that her survival did not depend on the availability of an organ or her clinicians or even the quality of care she received. It rested on her health insurance company.
  • Cigna had denied the initial request to cover the costs of the liver transplant. And the insurer persisted in its refusal, claiming that the treatment was “experimental” and unproven, and despite numerous pleas from Nataline’s physicians to the contrary.
  • But as relatives and friends organized campaigns to draw public attention to Nataline’s plight, the insurance conglomerate found itself embroiled in a public relations nightmare, one that could jeopardize its very existence. The company reversed its decision. But the change came too late. Nataline died just a few hours after Cigna authorized the transplant.
  • Mr. Potter was the head of corporate communications at two major insurers, first at Humana and then at Cigna. Now Mr. Potter has written a fascinating book that details the methods he and his colleagues used to manipulate public opinion
  • Mr. Potter goes on to describe the myth-making he did, interspersing descriptions of front groups, paid spies and jiggered studies with a deft retelling of the convoluted (and usually eye-glazing) history of health care insurance policies.
  • We learn that executives at Cigna worried that Nataline’s situation would only add fire to the growing public discontent with a health care system anchored by private insurance. As the case drew more national attention, the threat of a legislative overhaul that would ban for-profit insurers became real, and Mr. Potter found himself working on the biggest P.R. campaign of his career.
  • Cigna hired a large international law firm and a P.R. firm already well known to them from previous work aimed at discrediting Michael Moore and his film “Sicko.” Together with Cigna, these outside firms waged a campaign that would eventually include the aggressive placement of articles with friendly “third party” reporters, editors and producers who would “disabuse the media, politicians and the public of the notion that Nataline would have gotten the transplant if she had lived in Canada or France or England or any other developed country.” A “spy” was dispatched to Nataline’s funeral; and when the Sarkisyan family filed a lawsuit against the insurer, a team of lawyers was assigned to keep track of actions and comments by the family’s lawyer.
  • In the end, however, Nataline’s death proved to be the final straw for Mr. Potter. “It became clearer to me than ever that I was part of an industry that would do whatever it took to perpetuate its extraordinarily profitable existence,” he writes. “I had sold my soul.” He left corporate public relations for good less than six months after her death.
  • “I don’t mean to imply that all people who work for health insurance companies are greedier or more evil than other Americans,” he writes. “In fact, many of them feel — and justifiably so — that they are helping millions of people get the care they need.” The real problem, he says, lies in the fact that the United States “has entrusted one of the most important societal functions, providing health care, to private health insurance companies.” Therefore, the top executives of these companies become beholden not to the patients they have pledged to cover, but to the shareholders who hold them responsible for the bottom line.
Weiye Loh

Why Do Intellectuals Oppose Capitalism?

  • Not all intellectuals are on the "left."
  • But in their case, the curve is shifted and skewed to the political left.
  • By intellectuals, I do not mean all people of intelligence or of a certain level of education, but those who, in their vocation, deal with ideas as expressed in words, shaping the word flow others receive. These wordsmiths include poets, novelists, literary critics, newspaper and magazine journalists, and many professors. It does not include those who primarily produce and transmit quantitatively or mathematically formulated information (the numbersmiths) or those working in visual media, painters, sculptors, cameramen. Unlike the wordsmiths, people in these occupations do not disproportionately oppose capitalism. The wordsmiths are concentrated in certain occupational sites: academia, the media, government bureaucracy.
  • Wordsmith intellectuals fare well in capitalist society; there they have great freedom to formulate, encounter, and propagate new ideas, to read and discuss them. Their occupational skills are in demand, their income much above average. Why then do they disproportionately oppose capitalism? Indeed, some data suggest that the more prosperous and successful the intellectual, the more likely he is to oppose capitalism. This opposition to capitalism is mainly "from the left" but not solely so. Yeats, Eliot, and Pound opposed market society from the right.
  • We can distinguish two types of explanation for the relatively high proportion of intellectuals in opposition to capitalism. One type finds a factor unique to the anti-capitalist intellectuals. The second type of explanation identifies a factor applying to all intellectuals, a force propelling them toward anti-capitalist views. Whether it pushes any particular intellectual over into anti-capitalism will depend upon the other forces acting upon him. In the aggregate, though, since it makes anti-capitalism more likely for each intellectual, such a factor will produce a larger proportion of anti-capitalist intellectuals. Our explanation will be of this second type. We will identify a factor which tilts intellectuals toward anti-capitalist attitudes but does not guarantee it in any particular case.
  • Intellectuals now expect to be the most highly valued people in a society, those with the most prestige and power, those with the greatest rewards. Intellectuals feel entitled to this. But, by and large, a capitalist society does not honor its intellectuals. Ludwig von Mises explains the special resentment of intellectuals, in contrast to workers, by saying they mix socially with successful capitalists and so have them as a salient comparison group and are humiliated by their lesser status.
  • Why then do contemporary intellectuals feel entitled to the highest rewards their society has to offer and resentful when they do not receive this? Intellectuals feel they are the most valuable people, the ones with the highest merit, and that society should reward people in accordance with their value and merit. But a capitalist society does not satisfy the principle of distribution "to each according to his merit or value." Apart from the gifts, inheritances, and gambling winnings that occur in a free society, the market distributes to those who satisfy the perceived market-expressed demands of others, and how much it so distributes depends on how much is demanded and how great the alternative supply is. Unsuccessful businessmen and workers do not have the same animus against the capitalist system as do the wordsmith intellectuals. Only the sense of unrecognized superiority, of entitlement betrayed, produces that animus.
  • What factor produced feelings of superior value on the part of intellectuals? I want to focus on one institution in particular: schools. As book knowledge became increasingly important, schooling--the education together in classes of young people in reading and book knowledge--spread. Schools became the major institution outside of the family to shape the attitudes of young people, and almost all those who later became intellectuals went through schools. There they were successful. They were judged against others and deemed superior. They were praised and rewarded, the teacher's favorites. How could they fail to see themselves as superior? Daily, they experienced differences in facility with ideas, in quick-wittedness. The schools told them, and showed them, they were better.
  • We have refined the hypothesis somewhat. It is not simply formal schools but formal schooling in a specified social context that produces anti-capitalist animus in (wordsmith) intellectuals. No doubt, the hypothesis requires further refining. But enough. It is time to turn the hypothesis over to the social scientists, to take it from armchair speculations in the study and give it to those who will immerse themselves in more particular facts and data. We can point, however, to some areas where our hypothesis might yield testable consequences and predictions. First, one might predict that the more meritocratic a country's school system, the more likely its intellectuals are to be on the left. (Consider France.) Second, those intellectuals who were "late bloomers" in school would not have developed the same sense of entitlement to the very highest rewards; therefore, a lower percentage of the late-bloomer intellectuals will be anti-capitalist than of the early bloomers. Third, we limited our hypothesis to those societies (unlike Indian caste society) where the successful student plausibly could expect further comparable success in the wider society. In Western society, women have not heretofore plausibly held such expectations, so we would not expect the female students who constituted part of the academic upper class yet later underwent downward mobility to show the same anti-capitalist animus as male intellectuals. We might predict, then, that the more a society is known to move toward equality in occupational opportunity between women and men, the more its female intellectuals will exhibit the same disproportionate anti-capitalism its male intellectuals show.
Weiye Loh

Himalayan glaciers not melting because of climate change, report finds - Telegraph

  • Himalayan glaciers are actually advancing rather than retreating, claims the first major study since a controversial UN report said they would be melted within quarter of a century.
  • Researchers have discovered that contrary to popular belief half of the ice flows in the Karakoram range of the mountains are actually growing rather than shrinking.
  • The discovery adds a new twist to the row over whether global warming is causing the world's highest mountain range to lose its ice cover.
  • It further challenges claims made in a 2007 report by the UN's Intergovernmental Panel on Climate Change that the glaciers would be gone by 2035.
  • Although the head of the panel Dr Rajendra Pachauri later admitted the claim was an error gleaned from unchecked research, he maintained that global warming was melting the glaciers at "a rapid rate", threatening floods throughout north India.
  • The new study by scientists at the Universities of California and Potsdam has found that half of the glaciers in the Karakoram range, in the northwestern Himalaya, are in fact advancing and that global warming is not the deciding factor in whether a glacier survives or melts.
  • Dr Bodo Bookhagen, Dirk Scherler and Manfred Strecker studied 286 glaciers from the Hindu Kush on the Afghan-Pakistan border to Bhutan, taking in six areas. Their report, published in the journal Nature Geoscience, found the key factor affecting their advance or retreat is the amount of debris – rocks and mud – strewn on their surface, not the general nature of climate change.
  • Glaciers surrounded by high mountains and covered with more than two centimetres of debris are protected from melting. Debris-covered glaciers are common in the rugged central Himalaya, but they are almost absent in subdued landscapes on the Tibetan Plateau, where retreat rates are higher.
  • In contrast, more than 50 per cent of observed glaciers in the Karakoram region in the northwestern Himalaya are advancing or stable.
  • "Our study shows that there is no uniform response of Himalayan glaciers to climate change and highlights the importance of debris cover for understanding glacier retreat, an effect that has so far been neglected in predictions of future water availability or global sea level," the authors concluded.
  • Dr Bookhagen said their report had shown "there is no stereotypical Himalayan glacier" in contrast to the UN's climate change report which, he said, "lumps all Himalayan glaciers together."
  • Dr Pachauri, head of the Nobel prize-winning UN Intergovernmental Panel on Climate Change, has remained silent on the matter since he was forced to admit his report's claim that the Himalayan glaciers would melt by 2035 was an error and had not been sourced from a peer-reviewed scientific journal. It came from a World Wildlife Fund report.
  • this latest tawdry addition to the pathetic lies of the Reality Deniers. If you go to a proper source which quotes the full study such as: http://www.sciencedaily.com/re... you discover that the findings of this study are rather different to those portrayed here.
  • The only way to consistently maintain a lie is to refuse point-blank to publish ALL the findings of a study, but to cherry-pick the bits which are consistent with the ongoing lie, while ignoring the rest.
  • Bookhagen noted that glaciers in the Karakoram region of Northwestern Himalaya are mostly stagnating. However, glaciers in the Western, Central, and Eastern Himalaya are retreating, with the highest retreat rates -- approximately 8 meters per year -- in the Western Himalayan Mountains. The authors found that half of the studied glaciers in the Karakoram region are stable or advancing, whereas about two-thirds are in retreat elsewhere throughout High Asia
  • glaciers in the steep Himalaya are not only affected by temperature and precipitation, but also by debris coverage, and have no uniform and less predictable response, explained the authors. The debris coverage may be one of the missing links to creating a more coherent picture of glacial behavior throughout all mountains. The scientists contrast this Himalayan glacial study with glaciers from the gently dipping, low-relief Tibetan Plateau that have no debris coverage. Those glaciers behave in a different way, and their frontal changes can be explained by temperature and precipitation changes.
Weiye Loh

Debris on certain Himalayan glaciers may prevent melting

  • ScienceDaily (Jan. 25, 2011) — A new scientific study shows that debris coverage -- pebbles, rocks, and debris from surrounding mountains -- may be a missing link in the understanding of the decline of glaciers. Debris is distinct from soot and dust, according to the scientists.
  • Experts have stated that global warming is a key element in the melting of glaciers worldwide.
  • "With the aid of new remote-sensing methods and satellite images, we identified debris coverage to be an important contributor to glacial advance and retreat behaviors," said Bookhagen. "This parameter has been almost completely neglected in previous Himalayan and other mountainous region studies, although its impact has been known for some time."
  • "There is no 'stereotypical' Himalayan glacier," said Bookhagen. "This is in clear contrast to the IPCC reports that lumps all Himalayan glaciers together."
  • Bookhagen noted that glaciers in the Karakoram region of Northwestern Himalaya are mostly stagnating. However, glaciers in the Western, Central, and Eastern Himalaya are retreating, with the highest retreat rates -- approximately 8 meters per year -- in the Western Himalayan Mountains. The authors found that half of the studied glaciers in the Karakoram region are stable or advancing, whereas about two-thirds are in retreat elsewhere throughout High Asia. This is in contrast to the prevailing notion that all glaciers in the tropics are retreating.
  • debris cover has the opposite effect of soot and dust on glaciers. Debris coverage thickness above 2 centimeters, or about half an inch, 'shields' the glacier and prevents melting. This is the case for many Himalayan glaciers that are surrounded by towering mountains that almost continuously shed pebbles, debris, and rocks onto the glacier.
  • glaciers in the steep Himalaya are not only affected by temperature and precipitation, but also by debris coverage, and have no uniform and less predictable response, explained the authors. The debris coverage may be one of the missing links to creating a more coherent picture of glacial behavior throughout all mountains. The scientists contrast this Himalayan glacial study with glaciers from the gently dipping, low-relief Tibetan Plateau that have no debris coverage. Those glaciers behave in a different way, and their frontal changes can be explained by temperature and precipitation changes.
Weiye Loh

BBC News - Should victims have a say in sentencing criminals?

  • If someone does you wrong, should you have a say in their punishment?
  • Should victims have a say in sentencing criminals? That partly depends upon what you mean by "have a say". A weak form of involvement would have a judge listen to a statement from victims, but ensure the judge alone does the sentencing. A slightly stronger form would be when the impact on victims is considered as part of assessing the moral seriousness of the crime. The strongest form would be when victims have a direct say in the type of sentence. So which is the more just?
  • A utilitarian approach, which seeks people's greatest happiness and is associated with the British philosopher Jeremy Bentham, can provide one reason why victims should, in part, play judge. It can be called the therapeutic argument.
  • However, this might backfire. Given the choice, many victims might desire longer sentences than the judiciary would allow. When that desire is not satisfied, their anguish might be exacerbated. The therapeutic argument has also been called the "Oprahisation" of sentencing.
  • The second, Kantian approach emphasises reason and rights.
  • It stresses the law should be rational, and that includes keeping careful tabs on the irrational feelings that are inevitably present during legal proceedings. This would be harder to do, the more the voice of victims is heard.
  • More seriously still, strong forms of victim sentencing would reflect the capabilities of the victim. A victim who could powerfully express their feelings might win a longer sentence. That would be irrational because it would suggest that a crime is more serious if the victim is more articulate.
  • Taking considerations of moral seriousness into account would fit within a third approach, the one that stresses the common good and virtue and is associated with Aristotle. Would you want to meet the person who did this to you? Understanding the moral seriousness of a crime is important because it helps the criminal to take responsibility for what they've done. Victim feelings are also a crucial component in so-called restorative justice, in which the criminal is confronted with their crime, perhaps by meeting the victim.
  • A virtue ethics approach would be concerned with the moral state of the victim too. Victims may need to forgive those who have wronged them, in order that they might flourish in the future. An impersonal legal system that does not allow victims a say might actually help with that, as it ensures objectivity.
Weiye Loh

Roger Pielke Jr.'s Blog: Good News on Poverty

  • The Brookings Institution has a new report out by Lawrence Chandy and Geoffrey Gertz (here in PDF) on trends in global poverty.
  • The new estimates of global poverty presented in this brief serve as a reminder of just how powerful high growth can be in freeing people from poverty. In the span of a decade, the share of the world’s population living in poverty could be cut by two-thirds, the number of countries where more than 1 in 6 people live in poverty could drop from 60 to 35, and 19 countries are poised to eliminate poverty altogether.
  • Of course, it is far too early to declare success in the fight against poverty. To begin with, our estimates are just that; they are not hard numbers that can be calculated in real time, and the gains we imagine might not be realized if projections of future consumption growth turn out to be overly optimistic or if the poor do not share in this growth. Moreover, even if our figures are broadly accurate, in 2015 there will still be close to 600 million people—twice the population of the United States—living on less than the meager sum of $1.25 a day. Their fates are far from secure and represent a strategic and moral failure for the rest of the world, arguably all the more so as millions of others escape poverty.
  • if a key factor in reducing poverty is economic growth, then it necessarily is the case that efforts to limit climate change by reducing growth (or as some even argue, putting growth into reverse) are in direct opposition to efforts to reduce poverty.  It is this tension that sets up what I call in The Climate Fix the "iron law" of climate policy.
  • Setting aside climate change, the trends in poverty reduction are of profound importance, as the report suggests: Over the past half century, the developing world, including many of the world’s poorest countries, has seen dramatic improvements in virtually all non-income measures of well-being: since 1960, global infant mortality has dropped by more than 50 percent, for example, and the share of the world’s children enrolled in primary school increased from less than half to nearly 90 percent between 1950 and today. Likewise there have been impressive gains in gender equality, access to justice and civil and political rights. Yet, through most of this period, the incomes of rich and poor countries diverged, and income poverty has proven a more persistent challenge than other measures of wellbeing. The rapid decline in global poverty now underway—and the early achievement of the MDG1a target—marks a break from these trends, and could come to be seen as a turning point in the history of global development.
Weiye Loh

Information technology and economic change: The impact of the printing press | vox - Re...

  • Despite the revolutionary technological advance of the printing press in the 15th century, there is precious little economic evidence of its benefits. Using data on 200 European cities between 1450 and 1600, this column finds that economic growth was higher by as much as 60 percentage points in cities that adopted the technology.
  • Historians argue that the printing press was among the most revolutionary inventions in human history, responsible for a diffusion of knowledge and ideas, “dwarfing in scale anything which had occurred since the invention of writing” (Roberts 1996, p. 220). Yet economists have struggled to find any evidence of this information technology revolution in measures of aggregate productivity or per capita income (Clark 2001, Mokyr 2005). The historical data thus present us with a puzzle analogous to the famous Solow productivity paradox – that, until the mid-1990s, the data on macroeconomic productivity showed no effect of innovations in computer-based information technology.
  • In recent work (Dittmar 2010a), I examine the revolution in Renaissance information technology from a new perspective by assembling city-level data on the diffusion of the printing press in 15th-century Europe. The data record each city in which a printing press was established 1450-1500 – some 200 out of over 1,000 historic cities (see also an interview on this site, Dittmar 2010b). The research emphasises cities for three principal reasons. First, the printing press was an urban technology, producing for urban consumers. Second, cities were seedbeds for economic ideas and social groups that drove the emergence of modern growth. Third, city sizes were historically important indicators of economic prosperity, and broad-based city growth was associated with macroeconomic growth (Bairoch 1988, Acemoglu et al. 2005).
  • ...8 more annotations...
  • Figure 1 summarises the data and shows how printing diffused from Mainz 1450-1500. [Figure 1. The diffusion of the printing press]
  • City-level data on the adoption of the printing press can be exploited to examine two key questions: Was the new technology associated with city growth? And, if so, how large was the association? I find that cities in which printing presses were established 1450-1500 had no prior growth advantage, but subsequently grew far faster than similar cities without printing presses. My work uses a difference-in-differences estimation strategy to document the association between printing and city growth. The estimates suggest early adoption of the printing press was associated with a population growth advantage of 21 percentage points 1500-1600, when mean city growth was 30 percentage points. The difference-in-differences model shows that cities that adopted the printing press in the late 1400s had no prior growth advantage, but grew at least 35 percentage points more than similar non-adopting cities from 1500 to 1600.
  • The restrictions on diffusion meant that cities relatively close to Mainz were more likely to receive the technology, other things being equal. Printing presses were established in 205 cities 1450-1500, but not in 40 of Europe’s 100 largest cities. Remarkably, regulatory barriers did not limit diffusion. Printing fell outside existing guild regulations and was not resisted by scribes, princes, or the Church (Neddermeyer 1997, Barbier 2006, Brady 2009).
  • Historians observe that printing diffused from Mainz in “concentric circles” (Barbier 2006). Distance from Mainz was significantly associated with early adoption of the printing press, but neither with city growth before the diffusion of printing nor with other observable determinants of subsequent growth. The geographic pattern of diffusion thus arguably allows us to identify exogenous variation in adoption. Exploiting distance from Mainz as an instrument for adoption, I find large and significant estimates of the relationship between the adoption of the printing press and city growth: a 60 percentage point growth advantage over 1500-1600. (A schematic code sketch of this difference-in-differences and instrumental-variables logic follows these notes.)
  • The importance of distance from Mainz is supported by an exercise using “placebo” distances. When I employ distance from Venice, Amsterdam, London, or Wittenberg instead of distance from Mainz as the instrument, the estimated print effect is statistically insignificant.
  • Cities that adopted print media benefitted from positive spillovers in human capital accumulation and technological change broadly defined. These spillovers exerted an upward pressure on the returns to labour, made cities culturally dynamic, and attracted migrants. In the pre-industrial era, commerce was a more important source of urban wealth and income than tradable industrial production. Print media played a key role in the development of skills that were valuable to merchants. Following the invention of printing, European presses produced a stream of math textbooks used by students preparing for careers in business.
  • These and hundreds of similar texts worked students through problem sets concerned with calculating exchange rates, profit shares, and interest rates. Broadly, print media was also associated with the diffusion of cutting-edge business practice (such as book-keeping), literacy, and the social ascent of new professionals – merchants, lawyers, officials, doctors, and teachers.
  • The printing press was one of the greatest revolutions in information technology. The impact of the printing press is hard to identify in aggregate data. However, the diffusion of the technology was associated with extraordinary subsequent economic dynamism at the city level. European cities were seedbeds of ideas and business practices that drove the transition to modern growth. These facts suggest that the printing press had very far-reaching consequences through its impact on the development of cities.
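The two estimation ideas summarised in these notes – comparing adopter and non-adopter city growth before and after 1500, and using distance from Mainz as an instrument for adoption – can be sketched in a few lines of code. The sketch below is purely illustrative: the city counts, distances, growth rates and adoption rule are invented, and it is not Dittmar's actual specification, which uses full regression models with controls.

```python
# Illustrative sketch of (1) a difference-in-differences comparison of adopter
# and non-adopter city growth before and after 1500, and (2) a simple Wald/IV
# estimate using proximity to Mainz as an instrument for adoption.
# All numbers are made up; this is not Dittmar's data or code.
import numpy as np

rng = np.random.default_rng(0)
n_cities = 200

# Hypothetical city-level variables.
dist_mainz = rng.uniform(0, 1500, n_cities)  # km from Mainz (fake)
adopt_prob = np.clip(1.2 - dist_mainz / 1000, 0.05, 0.95)
adopted = (rng.uniform(0, 1, n_cities) < adopt_prob).astype(float)

growth_1450_1500 = rng.normal(0.10, 0.05, n_cities)                   # pre-period growth
growth_1500_1600 = rng.normal(0.30, 0.10, n_cities) + 0.21 * adopted  # post-period growth

# (1) Difference-in-differences: adopters' growth change minus non-adopters'.
did = ((growth_1500_1600[adopted == 1].mean() - growth_1450_1500[adopted == 1].mean())
       - (growth_1500_1600[adopted == 0].mean() - growth_1450_1500[adopted == 0].mean()))
print(f"DiD estimate of the adoption effect: {did:.3f}")

# (2) Wald/IV estimate with a binary instrument: "relatively close to Mainz".
z = (dist_mainz < dist_mainz.mean()).astype(float)
iv = ((growth_1500_1600[z == 1].mean() - growth_1500_1600[z == 0].mean())
      / (adopted[z == 1].mean() - adopted[z == 0].mean()))
print(f"Wald/IV estimate of the adoption effect: {iv:.3f}")
```

The point of the Wald/IV step is that, if distance from Mainz affects later growth only through adoption of the press, dividing the growth gap between near and far cities by their adoption gap recovers the effect of the press itself.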
Weiye Loh

TPM: The Philosophers' Magazine | Is morality relative? Depends on your personality - 0 views

  • no real evidence is ever offered for the original assumption that ordinary moral thought and talk has this objective character. Instead, philosophers tend simply to assert that people’s ordinary practice is objectivist and then begin arguing from there.
  • If we really want to go after these issues in a rigorous way, it seems that we should adopt a different approach. The first step is to engage in systematic empirical research to figure out how the ordinary practice actually works. Then, once we have the relevant data in hand, we can begin looking more deeply into the philosophical implications – secure in the knowledge that we are not just engaging in a philosophical fiction but rather looking into the philosophical implications of people’s actual practices.
  • in the past few years, experimental philosophers have been gathering a wealth of new data on these issues, and we now have at least the first glimmerings of a real empirical research program here
  • ...8 more annotations...
  • when researchers took up these questions experimentally, they did not end up confirming the traditional view. They did not find that people overwhelmingly favoured objectivism. Instead, the results consistently point to a more complex picture. There seems to be a striking degree of conflict even in the intuitions of ordinary folks, with some people under some circumstances offering objectivist answers, while other people under other circumstances offer more relativist views. And that is not all. The experimental results seem to be giving us an ever deeper understanding of why it is that people are drawn in these different directions, what it is that makes some people move toward objectivism and others toward more relativist views.
  • consider a study by Adam Feltz and Edward Cokely. They were interested in the relationship between belief in moral relativism and the personality trait openness to experience. Accordingly, they conducted a study in which they measured both openness to experience and belief in moral relativism. To get at people’s degree of openness to experience, they used a standard measure designed by researchers in personality psychology. To get at people’s agreement with moral relativism, they told participants about two characters – John and Fred – who held opposite opinions about whether some given act was morally bad. Participants were then asked whether one of these two characters had to be wrong (the objectivist answer) or whether it could be that neither of them was wrong (the relativist answer). What they found was a quite surprising result. It just wasn’t the case that participants overwhelmingly favoured the objectivist answer. Instead, people’s answers were correlated with their personality traits. The higher a participant was in openness to experience, the more likely that participant was to give a relativist answer.
  • Geoffrey Goodwin and John Darley pursued a similar approach, this time looking at the relationship between people’s belief in moral relativism and their tendency to approach questions by considering a whole variety of possibilities. They proceeded by giving participants mathematical puzzles that could only be solved by looking at multiple different possibilities. Thus, participants who considered all these possibilities would tend to get these problems right, whereas those who failed to consider all the possibilities would tend to get the problems wrong. Now comes the surprising result: those participants who got these problems right were significantly more inclined to offer relativist answers than were those participants who got the problems wrong.
  • Shaun Nichols and Tricia Folds-Bennett looked at how people’s moral conceptions develop as they grow older. Research in developmental psychology has shown that as children grow up, they develop different understandings of the physical world, of numbers, of other people’s minds. So what about morality? Do people have a different understanding of morality when they are twenty years old than they do when they are only four years old? What the results revealed was a systematic developmental difference. Young children show a strong preference for objectivism, but as they grow older, they become more inclined to adopt relativist views. In other words, there appears to be a developmental shift toward increasing relativism as children mature. (In an exciting new twist on this approach, James Beebe and David Sackris have shown that this pattern eventually reverses, with middle-aged people showing less inclination toward relativism than college students do.)
  • People are more inclined to be relativists when they score highly in openness to experience, when they have an especially good ability to consider multiple possibilities, when they have matured past childhood (but not when they get to be middle-aged). Looking at these various effects, my collaborators and I thought that it might be possible to offer a single unifying account that explained them all. Specifically, our thought was that people might be drawn to relativism to the extent that they open their minds to alternative perspectives. There could be all sorts of different factors that lead people to open their minds in this way (personality traits, cognitive dispositions, age), but regardless of the instigating factor, researchers seemed always to be finding the same basic effect. The more people have a capacity to truly engage with other perspectives, the more they seem to turn toward moral relativism.
  • To really put this hypothesis to the test, Hagop Sarkissian, Jennifer Wright, John Park, David Tien and I teamed up to run a series of new studies. Our aim was to actually manipulate the degree to which people considered alternative perspectives. That is, we wanted to randomly assign people to different conditions in which they would end up thinking in different ways, so that we could then examine the impact of these different conditions on their intuitions about moral relativism.
  • The results of the study showed a systematic difference between conditions. In particular, as we moved toward more distant cultures, we found a steady shift toward more relativist answers – with people in the first condition tending to agree with the statement that at least one of them had to be wrong, people in the second being pretty evenly split between the two answers, and people in the third tending to reject the statement quite decisively.
  • If we learn that people’s ordinary practice is not an objectivist one – that it actually varies depending on the degree to which people take other perspectives into account – how can we then use this information to address the deeper philosophical issues about the true nature of morality? The answer here is in one way very complex and in another very simple. It is complex in that one can answer such questions only by making use of very sophisticated and subtle philosophical methods. Yet, at the same time, it is simple in that such methods have already been developed and are being continually refined and elaborated within the literature in analytic philosophy. The trick now is just to take these methods and apply them to working out the implications of an ordinary practice that actually exists.
Weiye Loh

Climate change and extreme flooding linked by new evidence | George Monbiot | Environme... - 0 views

  • Two studies suggest for the first time a clear link between global warming and extreme precipitation
  • There's a sound rule for reporting weather events that may be related to climate change. You can't say that a particular heatwave or a particular downpour – or even a particular freeze – was definitely caused by human emissions of greenhouse gases. But you can say whether these events are consistent with predictions, or that their likelihood rises or falls in a warming world.
  • Weather is a complex system. Long-running trends, natural fluctuations and random patterns are fed into the global weather machine, and it spews out a series of events. All these events will be influenced to some degree by global temperatures, but it's impossible to say with certainty that any of them would not have happened in the absence of man-made global warming.
  • ...5 more annotations...
  • over time, as the data build up, we begin to see trends which suggest that rising temperatures are making a particular kind of weather more likely to occur. One such trend has now become clearer. Two new papers, published by Nature, should make us sit up, as they suggest for the first time a clear link between global warming and extreme precipitation (precipitation means water falling out of the sky in any form: rain, hail or snow).
  • We still can't say that any given weather event is definitely caused by man-made global warming. But we can say, with an even higher degree of confidence than before, that climate change makes extreme events more likely to happen.
  • One paper, by Seung-Ki Min and others, shows that rising concentrations of greenhouse gases in the atmosphere have caused an intensification of heavy rainfall events over some two-thirds of the weather stations on land in the northern hemisphere. The climate models appear to have underestimated the contribution of global warming to extreme rainfall: it's worse than we thought it would be.
  • The other paper, by Pardeep Pall and others, shows that man-made global warming is very likely to have increased the probability of severe flooding in England and Wales, and could well have been behind the extreme events in 2000. The researchers ran thousands of simulations of the weather in autumn 2000 (using idle time on computers made available by a network of volunteers) with and without the temperature rises caused by man-made global warming. They found that, in nine out of 10 cases, man-made greenhouse gases increased the risks of flooding. This is probably as solid a signal as simulations can produce, and it gives us a clear warning that more global heating is likely to cause more floods here.
  • As Richard Allan points out, also in Nature, the warmer the atmosphere is, the more water vapour it can carry. There's even a formula which quantifies this: 6-7% more moisture in the air for every degree of warming near the Earth's surface. But both models and observations also show changes in the distribution of rainfall, with moisture concentrating in some parts of the world and fleeing from others: climate change is likely to produce both more floods and more droughts.
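As a rough back-of-the-envelope reading of the 6-7% figure quoted above (this is only the Clausius-Clapeyron scaling of the atmosphere's water-holding capacity, with e_s the saturation vapour pressure, not a result from either Nature paper):

```latex
\frac{e_s(T+\Delta T)}{e_s(T)} \approx (1.07)^{\Delta T}
\qquad\Longrightarrow\qquad
(1.07)^{2} \approx 1.14, \qquad (1.07)^{4} \approx 1.31
```

So roughly 14% more moisture for 2°C of near-surface warming and roughly 31% for 4°C, which is why warming loads the dice toward heavier downpours even though no single flood can be attributed to it.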
Weiye Loh

Happiness: Do we have a choice? » Scienceline - 0 views

  • “Objective choices make a difference to happiness over and above genetics and personality,” said Bruce Headey, a psychologist at Melbourne University in Australia. Headey and his colleagues analyzed annual self-reports of life satisfaction from over 20,000 Germans who have been interviewed every year since 1984. He compared five-year averages of people’s reported life satisfaction, and plotted their relative happiness on a percentile scale from 1 to 100. Headey found that as time went on, more and more people recorded substantial changes in their life satisfaction. By 2008, more than a third had moved up or down on the happiness scale by at least 25 percent, compared to where they had started in 1984.
  • Headey’s findings, published in the October 19th issue of Proceedings of the National Academy of Sciences, run contrary to what is known as the happiness set-point theory — the idea that even if you win the lottery or become a paraplegic, you’ll revert back to the same fixed level of happiness within a year or two. This psychological theory was widely accepted in the 1990s because it explained why happiness levels seemed to remain stable over the long term: They were mainly determined early in life by genetic factors including personality traits.
  • But even this dynamic choice-driven picture does not fully capture the nuance of what it means to be happy, said Jerome Kagan, a Harvard University developmental psychologist. He warns against conflating two distinct dimensions of happiness: everyday emotional experience (an assessment of how you feel at the moment) and life evaluation (a judgment of how satisfied you are with your life). It’s the difference between “how often did you smile yesterday?” and “how does your life compare to the best possible life you can imagine?”
  • ...4 more annotations...
  • Kagan suggests that we may have more choice over the latter, because life evaluation is not a function of how we currently feel — it is a comparison of our life to what we decide the good life should be.
  • Kagan has found that young children differ biologically in the ease with which they can feel happy, or tense, or distressed, or sad — what he calls temperament. People establish temperament early in life and have little capacity to change it. But they can change their life evaluation, which Kagan describes as an ethical concept synonymous with “how good of a life have I led?” The answer will depend on individual choices and the purpose they create for themselves. A painter who is constantly stressed and moody (unhappy in the moment) may still feel validation in creating good artwork and may be very satisfied with his life (happy as a judgment).
  • when it comes to happiness, our choices may matter — but it depends on what the choices are about, and how we define what we want to change.
  • Graham thinks that people may evaluate their happiness based on whichever dimension — happiness at the moment, or life evaluation — they have a choice over.
  • Instead of existing as a stable equilibrium, Headey suggests that happiness is much more dynamic, and that individual choices - about one's partner, working hours, social participation and lifestyle - make substantial and permanent changes to reported happiness levels. For example, doing more or fewer paid hours of work than you want, or exercising regularly, can have just as much impact on life satisfaction as having an extroverted personality.
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books - 0 views

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • ...27 more annotations...
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood. (The arithmetic behind these factors is checked in a short note after these annotations.)
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works. (A toy illustration of this redundancy idea appears after these notes.)
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects would result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian.2 Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • The cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past. (A one-line virial-theorem derivation of this reversal is sketched after these notes.)
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941.3 Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
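As a quick check of the arithmetic in the Moore's Law note above, doubling every eighteen months compounds to

```latex
2^{10/1.5} = 2^{6.7} \approx 10^{2}, \qquad 2^{45/1.5} = 2^{30} \approx 1.07 \times 10^{9}
```

that is, roughly a factor of a hundred per decade and roughly a billion – nine powers of ten, since 2^30 is about 10^9 – over the forty-five years since 1965.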
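The note above on Shannon's law of reliable communication says that enough redundancy lets accurate information survive a noisy channel. The toy sketch below illustrates the idea with the crudest possible scheme – repeat every bit and take a majority vote – which is far weaker than the codes Shannon's theorem actually guarantees; the bit-flip probability and repetition factors are arbitrary choices for illustration.

```python
# Toy illustration of redundancy beating noise: send each bit several times
# over a channel that flips symbols with probability p, then decode by
# majority vote. This is only a cartoon of Shannon's idea, not his coding
# theorem; repetition codes are far from optimal.
import random

def transmit(bits, p, repeat):
    """Repeat each bit `repeat` times and flip each copy with probability p."""
    return [[b ^ (random.random() < p) for _ in range(repeat)] for b in bits]

def decode(noisy):
    """Majority vote over the repeated copies of each bit."""
    return [int(sum(copies) > len(copies) / 2) for copies in noisy]

random.seed(1)
message = [random.randint(0, 1) for _ in range(10_000)]
p = 0.2  # the channel corrupts 20% of transmitted symbols

for repeat in (1, 3, 9, 21):
    received = decode(transmit(message, p, repeat))
    errors = sum(m != r for m, r in zip(message, received))
    print(f"repeat={repeat:2d}  residual error rate={errors / len(message):.4f}")
```

With a channel that corrupts 20% of symbols, the residual error rate falls from about 0.2 with no redundancy to roughly 0.1 with three copies and to well under one percent with twenty-one, at the cost of sending far more symbols. Shannon's result is that much cleverer codes achieve reliability without such heavy overhead, provided the transmission rate stays below the channel capacity.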
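The reversal of the "cooking rule" for self-gravitating objects described above can be made quantitative with one line of the virial theorem. This is a standard textbook argument, sketched here rather than taken from Dyson's text: for a self-gravitating system in equilibrium,

```latex
2K + U = 0 \;\Longrightarrow\; E = K + U = -K \;\Longrightarrow\; dE < 0 \ \Rightarrow\ dK = -dE > 0
```

Because the internal temperature tracks the kinetic energy K, radiating energy away (dE < 0) makes the object hotter, so its effective heat capacity is negative – exactly the behaviour that defeats Lord Kelvin's heat-death argument.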
Weiye Loh

'There Is No Values-Free Form Of Education,' Says U.S. Philosopher - Radio Fr... - 0 views

  • from the earliest years, education should be based primarily on exploration, understanding in depth, and the development of logical, critical thinking. Such an emphasis, she says, not only produces a citizenry capable of recognizing and rooting out political jingoism and intolerance. It also produces people capable of questioning authority and perceived wisdom in ways that enhance innovation and economic competitiveness. Nussbaum warns against a narrow educational focus on technical competence.
  • a successful, long-term democracy depends on a citizenry with certain qualities that can be fostered by education.
  • The first is the capacity we associate in the Western tradition with Socrates, but it certainly appears in all traditions -- that is, the ability to think critically about proposals that are brought your way, to analyze an argument, to distinguish a good argument from a bad argument. And just in general, to lead what Socrates called “the examined life.” Now that’s, of course, important because we know that people are very prone to go along with authority, with fashion, with peer pressure. And this kind of critical enlivened citizenry is the only thing that can keep democracy vital.
  • ...15 more annotations...
  • it can be trained from very early in a child’s education. There’re ways that you can get quite young children to recognize what’s a good argument and what’s a bad argument. And as children grow older, it can be done in a more and more sophisticated form until by the time they’re undergraduates in universities they would be studying Plato’s dialogues for example and really looking at those tricky arguments and trying to figure out how to think. And this is important not just for the individual thinking about society, but it’s important for the way people talk to each other. In all too many public discussions people just throw out slogans and they throw out insults. And what democracy needs is listening. And respect. And so when people learn how to analyze an argument, then they look at what the other person’s saying differently. And they try to take it apart, and they think: “Well, do I share some of those views and where do I differ here?” and so on. And this really does produce a much more deliberative, respectful style of public interaction.
  • The second [quality] is what I call “the ability to think as a citizen of the whole world.” We’re all narrow and this is again something that we get from our animal heritage. Most non-human animals just think about the group. But, of course, in this world we need to think, first of all, our whole nation -- its many different groups, minority and majority. And then we need to think outside the nation, about how problems involving, let’s say, the environment or global economy and so on need cooperative resolution that brings together people from many different nations.
  • That’s complicated and it requires learning a lot of history, and it means learning not just to parrot some facts about history but to think critically about how to assess historical evidence. It means learning how to think about the global economy. And then I think particularly important in this era, it means learning something about the major world religions. Learning complicated, nonstereotypical accounts of those religions because there’s so much fear that’s circulating around in every country that’s based usually on just inadequate stereotypes of what Muslims are or whatever. So knowledge can at least begin to address that.
  • the third thing, which I think goes very closely with the other two, is what I call “the narrative imagination,” which is the ability to put yourself in the shoes of another person to have some understanding of how the world looks from that point of view. And to really have that kind of educated sympathy with the lives of others. Now again this is something we come into the world with. Psychologists have now found that babies less than a year old are able to take up the perspective of another person and do things, see things from that perspective. But it’s very narrow and usually people learn how to think about what their parents are thinking and maybe other family members but we need to extend that and develop it, and learn how the world looks from the point of view of minorities in our own culture, people outside our culture, and so on.
  • since we can’t go to all the places that we need to understand -- it’s accomplished by reading narratives, reading literature, drama, participating through the arts in the thought processes of another culture. So literature and the arts are the major ways we would develop and extend that capacity.
  • For many years, the leading model of development ... used by economists and international agencies measuring welfare was simply that for a country to develop means to increase [its] gross domestic product per capita. Now, in recent years, there has been a backlash to that because people feel that it just doesn’t ask enough about what goods are really doing for people, what can people really do and be.
  • so since the 1990s the United Nations’ development program has produced annually what’s called a “Human Development Report” that looks at things like access to education, access to health care. In other words, a much richer menu of human chances and opportunities that people have. And at the theoretical end I’ve worked for about 20 years now with economist Amartya Sen, who won the Nobel Prize in 1998 for economics. And we’ve developed this as an account of -- so for us what it is for a country to do better is to enhance the set of capabilities, meaning substantial opportunities that people have to lead meaningful, fruitful lives. And then I go on to focus on a certain core group of those capabilities that I think ought to be protected by constitutional law in every country.
  • Life; health; bodily integrity; the development of senses, imagination, and thought; the development of practical reason; opportunities to have meaningful affiliations both friendly and political with other people; the ability to have emotional health -- not to be in other words dominated by overwhelming fear and so on; the ability to have a productive relationship with the environment and the world of nature; the ability to play and have leisure time, which is something that I think people don’t think enough about; and then, finally, control over one’s material and social environment, some measure of control. Now of course, each of these is very abstract, and I specify them further. Although I also think that each country needs to finally specify them with its own particular circumstances in view.
  • when kids learn in a classroom that just makes them sit in a chair, well, they can take in something in their heads, but it doesn’t make them competent at negotiating in the world. And so starting, at least, with Jean Jacques Rousseau in the 18th century, people thought: “Well, if we really want people to be independent citizens in a democracy that means that we can’t have whole classes of people who don’t know how to do anything, who are just simply sitting there waiting to be waited on in practical matters.” And so the idea that children should participate in their practical environment came out of the initial democratizing tendencies that went running through the 18th century.
  • even countries who absolutely do not want that kind of engaged citizenry see that for the success of business these abilities are pretty important. Both Singapore and China have conducted mass education reforms over the last five years because they realized that their business cultures don’t have enough imagination and they also don’t have enough critical thinking, because you can have awfully corrupt business culture if no one is willing to say the unpleasant word or make a criticism.
  • So they have striven to introduce more critical thinking and more imagination into their curricula. But, of course, for them, they want to cordon it off -- they want to do it in the science classroom, in the business classroom, but not in the politics classroom. Well, we’ll see -- can they do that? Can they segment it that way? I think democratic thinking is awfully hard to segment as current events in the Middle East are showing us. It does have the tendency to spread.
  • so maybe the people in Singapore and China will not like the end result of what they tried to do or maybe the reform will just fail, which is equally likely -- I mean the educational reform.
  • if you really don’t want democracy, this is not the education for you. It had its origins in the ancient Athenian democracy which was a very, very strong participatory democracy and it is most at home in really true democracy, where our whole goal is to get each and every person involved and to get them thinking about things. So, of course, if politicians have ambivalence about that goal they may well not want this kind of education.
  • when we bring up children in the family or in the school, we are always engineering. I mean, there is no values-free form of education in the world. Even an education that just teaches you a list of facts has values built into it. Namely, it gives a negative value to imagination and to the critical faculties and a very high value to a kind of rote, technical competence. So, you can't avoid shaping children.
  • Increasingly the child should be in control and should become free. And that's what the critical thinking is all about -- it's about promoting freedom as the child goes on. So, the end product should be an adult who is really thinking for him- or herself about the direction of society. But you don't get freedom just by saying, "Oh, you are free." Progressive educators that simply stopped teaching found out very quickly that that didn't produce freedom. Even some of the very extreme forms of progressive school where children were just allowed to say every day what it was they wanted to learn, they found that didn't give the child the kind of mastery of self and of the world that you really need to be a free person.
Weiye Loh

Can a group of scientists in California end the war on climate change? | Science | The ... - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • ...20 more annotations...
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups was enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. "You can think of statisticians as the keepers of the scientific method," Brillinger told me. "Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for."
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
  • The wealth of data Rohde has collected so far – and some dates back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, of itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperature records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will instead take temperature records from individual stations and weight them according to how reliable they are. (A minimal sketch of the grid-averaging approach appears at the end of these notes.)
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in celsius instead of fahrenheit. The times at which measurements are taken vary, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station more from wind and sun from one year to the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human intervention (a toy example of such an automatic adjustment appears after these excerpts). When the team publishes its results, this is where the scrutiny will be most intense.
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
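The compilation step Rohde is described as doing above (trawling many sources, converting formats, de-duplicating and merging) can be made concrete with a minimal sketch. Everything below is an assumption for illustration: the source-format names, the record fields and the decision to average overlapping reports are invented, and none of it is the actual Berkeley Earth software.

```python
# Minimal sketch of normalising, de-duplicating and merging monthly station
# records from heterogeneous sources. Formats and field names are hypothetical.
from collections import defaultdict

def fahrenheit_to_celsius(f):
    return (f - 32.0) * 5.0 / 9.0

def normalise(record, source_format):
    """Convert one raw record into (station_id, year, month, temp_in_celsius)."""
    if source_format == "us_monthly_f":        # hypothetical source reporting Fahrenheit
        return (record["id"], record["yr"], record["mo"],
                fahrenheit_to_celsius(record["t_f"]))
    if source_format == "wmo_monthly_c":       # hypothetical source already in Celsius
        return (record["station"], record["year"], record["month"], record["temp"])
    raise ValueError(f"unknown source format: {source_format}")

def merge_records(sources):
    """sources: iterable of (source_format, list_of_raw_records)."""
    merged = defaultdict(list)
    for fmt, records in sources:
        for raw in records:
            station, year, month, temp = normalise(raw, fmt)
            merged[(station, year, month)].append(temp)
    # De-duplicate: where the same station-month appears in several datasets,
    # collapse the overlapping reports into a single value (here, their mean).
    return {key: sum(vals) / len(vals) for key, vals in merged.items()}

# Example with two invented records describing the same station-month:
records = [
    ("us_monthly_f", [{"id": "ST001", "yr": 1998, "mo": 7, "t_f": 68.0}]),
    ("wmo_monthly_c", [{"station": "ST001", "year": 1998, "month": 7, "temp": 20.0}]),
]
print(merge_records(records))   # {('ST001', 1998, 7): 20.0}
```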
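The two averaging strategies mentioned above, the big three's grid average and the reliability-weighted station average Muller's team proposes, are also easy to illustrate. The 5-degree cell size, the reliability weights and the station fields are invented for the example, and neither function reproduces the actual Nasa, Noaa, Met Office or Berkeley algorithms.

```python
# Sketch of two ways to turn station temperatures into one number: a simple
# latitude/longitude grid average, and a reliability-weighted station average.
import math
from collections import defaultdict

def grid_average(stations, cell_degrees=5.0):
    """Average stations within lat/lon cells, then average the cell means.
    A real analysis would also weight each cell by its area (roughly the cosine
    of its latitude); that refinement is omitted here for brevity."""
    cells = defaultdict(list)
    for s in stations:
        key = (math.floor(s["lat"] / cell_degrees), math.floor(s["lon"] / cell_degrees))
        cells[key].append(s["temp"])
    cell_means = [sum(v) / len(v) for v in cells.values()]
    return sum(cell_means) / len(cell_means)

def weighted_station_average(stations):
    """Skip the grid and weight each station directly by a 0-1 reliability score."""
    total = sum(s["weight"] for s in stations)
    return sum(s["temp"] * s["weight"] for s in stations) / total

stations = [
    {"lat": 51.5, "lon": -0.1, "temp": 11.2, "weight": 0.9},   # London
    {"lat": 53.5, "lon": -2.2, "temp": 10.4, "weight": 0.7},   # Manchester
    {"lat": 40.7, "lon": -74.0, "temp": 12.8, "weight": 0.8},  # New York
]
print(grid_average(stations), weighted_station_average(stations))
```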
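Finally, a toy version of the kind of automatic correction alluded to above: detecting a step change in a station's series (as a station move or instrument swap might cause) by comparison with the average of its neighbours, then removing the offset. The 1.0 C threshold and the use of a plain neighbour mean are simplifying assumptions; the real Berkeley Earth algorithms are considerably more sophisticated.

```python
# Toy automatic adjustment: find the largest break in the station-minus-neighbours
# difference series and subtract the offset from everything after the break.

def detect_step(station, neighbours, threshold=1.0):
    """Return (index, offset) of the largest break, or None if it is below `threshold`."""
    diffs = [s - n for s, n in zip(station, neighbours)]
    best_idx, best_shift = None, 0.0
    for i in range(1, len(diffs)):
        before = sum(diffs[:i]) / i
        after = sum(diffs[i:]) / (len(diffs) - i)
        if abs(after - before) > abs(best_shift):
            best_idx, best_shift = i, after - before
    return (best_idx, best_shift) if abs(best_shift) >= threshold else None

def remove_step(station, step):
    """Subtract the detected offset from every reading after the break point."""
    if step is None:
        return list(station)
    idx, shift = step
    return [t - shift if i >= idx else t for i, t in enumerate(station)]

station = [10.1, 10.0, 10.2, 12.1, 12.0, 12.2]      # apparent 2 C jump halfway through
neighbours = [10.0, 10.1, 10.1, 10.0, 10.1, 10.2]   # nearby stations show no jump
print(remove_step(station, detect_step(station, neighbours)))
```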
Weiye Loh

Largest Protests in Wisconsin's History | the kent ridge common - 0 views

  • American mainstream media (the big news channels and newspapers) are not reporting these protests. (Note that the Sydney Morning Herald comes in at third place on the Google News search.) A quick web tour of Fox News, the New York Times and CNN: all three lead with headlines about the Japanese nuclear reactors in the wake of the earthquake. The NYT had zero articles on the protests on its main page; Fox News had one at the bottom – "Wisconsin Union Fight Not Over Yet" – and CNN had one iReport linked from its main page, consisting of 10 black-and-white photos, none of them giving a bird's-eye view to show the massive turnout. A web commenter had this to say:
Weiye Loh

Report: Piracy a "global pricing problem" with only one solution - 0 views

  • Over the last three years, 35 researchers contributed to the Media Piracy Project, released last week by the Social Science Research Council. Their mission was to examine media piracy in emerging economies, which account for most of the world's population, and to find out just how and why piracy operates in places like Russia, Mexico, and India.
  • Their conclusion is not that citizens of such piratical societies are somehow morally deficient or opposed to paying for content. Instead, they write that “high prices for media goods, low incomes, and cheap digital technologies are the main ingredients of global media piracy. If piracy is ubiquitous in most parts of the world, it is because these conditions are ubiquitous.”
  • When the prices of legitimate CDs, DVDs, and computer software are five to ten times higher (relative to local incomes) than they are in the US and Europe, simply ratcheting up copyright enforcement won't do enough to fix the problem. In the view of the report's authors, the only real solution is the creation of local companies that “actively compete on price and services for local customers” as they sell movies, music, and more.
  • ...7 more annotations...
  • Some markets have local firms that compete on price to offer legitimate content (think the US, which has companies like Hulu, Netflix, Apple, and Microsoft that compete to offer legal video content). But the authors conclude that, in most of the world, legitimate copyrighted goods are only distributed by huge multinational corporations whose dominant goals are not to service a large part of local markets but to “protect the pricing structure in the high-income countries that generate most of their profits.”
  • This might increase profits globally, but it has led to disaster in many developing economies, where piracy may run north of 90 percent. Given access to cheap digital tools, but charged terrific amounts of money for legitimate versions of content, users choose piracy.
  • In Russia, for instance, researchers noted that legal versions of the film The Dark Knight went for $15. That price, akin to what a US buyer would pay, might sound reasonable until you realize that Russians earn far less in a year than US workers do. As a percentage of their wages (the implied income gap is roughly fivefold), that $15 price is actually equivalent to a US consumer dropping $75 on the film. Pirate versions can be had for one-third the price.
  • Simple crackdowns on pirate behavior won't work in the absence of pricing and other reforms, say the report's authors (who also note that even "developed" economies routinely pirate TV shows and movies that are not made legally available to them for days, weeks, or months after they originally appear elsewhere).
  • The "strong moralization of the debate” makes it difficult to discuss issues beyond enforcement, however, and the authors slam the content companies for lacking any credible "endgame" to their constant requests for more civil and police powers in the War on Piracy.
  • Piracy is a “signal of unmet consumer demand.”
  • Our studies raise concerns that it may be a long time before such accommodations to reality reach the international policy arena. Hardline enforcement positions may be futile at stemming the tide of piracy, but the United States bears few of the costs of such efforts, and US companies reap most of the modest benefits. This is a recipe for continued US pressure on developing countries, very possibly long after media business models in the United States and other high-income countries have changed.
  • A major new report from a consortium of academic researchers concludes that media piracy can't be stopped through "three strikes" Internet disconnections, Web censorship, more police powers, higher statutory damages, or tougher criminal penalties. That's because the piracy of movies, music, video games, and software is "better described as a global pricing problem." And the only way to solve it is by changing the price.
Weiye Loh

Singapore has priced honesty correctly - 0 views

  • Second, the turnover of ministers in comparable countries is much faster than in Singapore, where a minister can stay in the Cabinet for decades with no fixed term limit.
  • Third, the candidates who enter politics, as is often seen in countries like the United States, may have already made their fortune, as in the case of former US treasury secretary Hank Paulson, who was chief executive officer of premier investment house Goldman Sachs before he joined the White House Cabinet. Such holders of high office can afford to serve their country out of conviction alone.
  • Singapore does not offer such luxuries because we are a small country with a small pool of talent that can be considered for key government positions.
Weiye Loh

Roger Pielke Jr.'s Blog: The Guardian on Difficult Energy Choices - 0 views

  • For all the emotive force of events in Japan, though, this is one issue where there is a pressing need to listen to what our heads say about the needs of the future, as opposed to subjecting ourselves to jittery whims of the heart. One of the few solid lessons to emerge from the aged Fukushima plant is that the tendency in Britain and elsewhere to postpone politically painful choices about building new nuclear stations by extending the life-spans of existing ones is dangerous. Beyond that, with or without Fukushima, the undisputed nastiness of nuclear – the costs, the risks and the waste – still needs to be carefully weighed in the balance against the different poisons pumped out by coal, which remains the chief economic alternative. Most of the easy third ways are illusions. Energy efficiency has been improving for over 200 years, but it has worked to increase, not curb, demand. Off-shore wind remains so costly that market forces would simply push pollution overseas if it were taken up in a big way. A massive expansion of shale gas may yet pave the way to a plausible non-nuclear future, and it certainly warrants close examination. The fundamentals of the difficult decisions ahead, however, have not moved with the Earth.
  • The Guardian hits the right note on energy policy choices in the aftermath of the still unfolding Japanese nuclear crisis:
Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren... - 0 views

  • 1. Distributed cocreation moves into the mainstream. In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • For every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation. In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • ...10 more annotations...
  • 3. Collaboration at scale. Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things’. The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics (a toy pricing sketch appears after this list of trends). Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
  • 5. Experimentation and big data. Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience (a minimal experiment sketch appears after this list of trends). In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing.
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams.
  • 6. Wiring for a sustainable world. Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service. Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model. Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid. The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid. The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
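The usage-based insurance pricing mentioned under trend 4 ("new pricing models that base charges for risk on driving behavior") can be sketched very simply. The base premium, the weights and the sensor-derived features below are all invented for illustration; real insurers fit far richer actuarial models to their telematics data.

```python
# Toy usage-based premium computed from telematics data (see trend 4 above).
# All constants and features are invented assumptions, not any insurer's model.

def usage_based_premium(base_premium, km_per_year, hard_brakes_per_100km, night_share):
    """Scale a flat base premium by simple driving-behaviour risk factors."""
    exposure = 1.0 + km_per_year / 100_000          # more driving, more exposure
    braking = 1.0 + 0.05 * hard_brakes_per_100km    # harsh braking as a risk proxy
    night = 1.0 + 0.2 * night_share                 # share of driving done at night
    return base_premium * exposure * braking * night

# Example: 12,000 km a year, 3 hard brakes per 100 km, 10% night driving
print(round(usage_based_premium(500.0, 12_000, 3.0, 0.10), 2))   # 656.88
```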
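Trend 5's "rigorous, constant business experimentation" usually boils down to controlled comparisons of the kind sketched below. The conversion numbers are invented, and the two-proportion z-test with a 1.96 cut-off (95% confidence) is a textbook choice rather than anything prescribed by the article.

```python
# Minimal controlled experiment of the kind trend 5 describes: compare conversion
# rates for a control (A) and a variant (B) with a two-proportion z-test.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented example: variant B converts 230/10,000 visitors against 200/10,000 for A
z = two_proportion_z(200, 10_000, 230, 10_000)
print(f"z = {z:.2f}, significant at the 95% level: {abs(z) > 1.96}")
```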