Home/ New Media Ethics 2009 course/ Group items tagged Economist


Weiye Loh

How should we use data to improve our lives? - By Michael Agger - Slate Magazine

  • The Swiss economists Bruno Frey and Alois Stutzer argue that people do not appreciate the real cost of a long commute. And especially when that commute is unpredictable, it takes a toll on our daily well-being.
  • imagine if we shared our commuting information so that we could calculate the average commute from various locations around a city. When the growing family of four pulls up to a house for sale in New Jersey, the listing would indicate not only the price and the number of bathrooms but also the rush-hour commute time to Midtown Manhattan. That would be valuable information to have, since buyers could realistically weigh the tradeoff of remaining in a smaller space closer to work against moving to a larger space and taking on a longer commute.
  • In a cover story for the New York Times Magazine, the writer Gary Wolf documented the followers of “The Data-Driven Life,” programmers, students, and self-described geeks who track various aspects of their lives. Seth Roberts does a daily math exercise to measure small changes in his mental acuity. Kiel Gilleade is a "Body Blogger" who shares his heart rate via Twitter. On the more extreme end, Mark Carranza has a searchable database of every idea he's had since 1984. They're not alone. This community continues to thrive, and its efforts are chronicled at a blog called the Quantified Self, co-founded by Wolf and Kevin Kelly.
  • If you've ever asked Nike+ to log your runs or given Google permission to keep your search history, you've participated in a bit of self-tracking. Now that more people have location-aware smartphones and the Web has made data easy to share, personal data is poised to become an important tool to understand how we live, and how we all might live better. One great example of this phenomenon in action is the site Cure Together, which allows you to enter your symptoms—for, say, "anxiety" or "insomnia"—and the various remedies you've tried to feel better. One thing the site does is aggregate this information and present the results in chart form. Here is the chart for depression:
  • Instead of being isolated in your own condition, you can now see what has worked for others. The same principle is at work at the site Fuelly, where you can "track, share, and compare" your miles per gallon and see how efficient certain makes and models really are.
  • Businesses are also using data tracking to spur their employees to accomplish companywide goals: Wal-Mart partnered with Zazengo to help employees track their "personal sustainability" actions such as making a home-cooked meal or buying local produce. The app Rescue Time, which records all of the activity on your computer, gives workers an easy way to account for their time. And that comes in handy when you want to show the boss how efficient telecommuting can be.
  •  
    Data for a better planet
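The commute-listing idea above is, at bottom, simple aggregation: pool reported commute times by origin and attach the average to each address. A minimal sketch in Python (the towns and times below are invented for illustration, not real data):

```python
from statistics import mean

# Hypothetical crowd-sourced reports of rush-hour commute times to
# Midtown Manhattan, in minutes, keyed by origin town (invented data).
reports = {
    "Maplewood, NJ": [48, 55, 61, 52],
    "Hoboken, NJ": [25, 30, 28],
}

def average_commute(town):
    """Average reported rush-hour commute for a town, in minutes."""
    return round(mean(reports[town]), 1)

for town in reports:
    print(town, average_commute(town))
```

A real listing service would also want the spread, not just the mean, since Frey and Stutzer's point is that an unpredictable commute takes its own toll; the same per-town lists would yield a standard deviation just as easily.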

Times Higher Education - Unconventional thinkers or recklessly dangerous minds?

  • The origin of Aids denialism lies with one man. Peter Duesberg has spent the whole of his academic career at the University of California, Berkeley. In the 1970s he performed groundbreaking work that helped show how mutated genes cause cancer, an insight that earned him a well-deserved international reputation.
  • in the early 1980s, something changed. Duesberg attempted to refute his own theories, claiming that it was not mutated genes but rather environmental toxins that are cancer's true cause. He dismissed the studies of other researchers who had furthered his original work. Then, in 1987, he published a paper that extended his new train of thought to Aids.
  • Initially many scientists were open to Duesberg's ideas. But as evidence linking HIV to Aids mounted - crucially the observation that ARVs brought Aids sufferers who were on the brink of death back to life - the vast majority concluded that the debate was over. Nonetheless, Duesberg persisted with his arguments, and in doing so attracted a cabal of supporters
  • In 1999, denialism secured its highest-profile advocate: Thabo Mbeki, who was then president of South Africa. Having studied denialist literature, Mbeki decided that the consensus on Aids sounded too much like a "biblical absolute truth" that couldn't be questioned. The following year he set up a panel of advisers, nearly half of whom were Aids denialists, including Duesberg. The resultant health policies cut funding for clinics distributing ARVs, withheld donor medication and blocked international aid grants. Meanwhile, Mbeki's health minister, Manto Tshabalala-Msimang, promoted the use of alternative Aids remedies, such as beetroot and garlic.
  • In 2007, Nicoli Nattrass, an economist and director of the Aids and Society Research Unit at the University of Cape Town, estimated that, between 1999 and 2007, Mbeki's Aids denialist policies led to more than 340,000 premature deaths. Later, scientists Max Essex, Pride Chigwedere and other colleagues at the Harvard School of Public Health arrived at a similar figure.
  • "I don't think it's hyperbole to say the (Mbeki regime's) Aids policies do not fall short of a crime against humanity," says Kalichman. "The science behind these medications was irrefutable, and yet they chose to buy into pseudoscience and withhold life-prolonging, if not life-saving, medications from the population. I just don't think there's any question that it should be looked into and investigated."
  • In fairness, there was a reason to have faint doubts about HIV treatment in the early days of Mbeki's rule.
  • some individual cases had raised questions about their reliability on mass rollout. In 2002, for example, Sarah Hlalele, a South African HIV patient and activist from a settlement background, died from "lactic acidosis", a side-effect of her drugs combination. Today doctors know enough about mixing ARVs not to make the same mistake, but at the time her death terrified the medical community.
  • any trial would be futile because of the uncertainties over ARVs that existed during Mbeki's tenure and the fact that others in Mbeki's government went along with his views (although they have since renounced them). "Mbeki was wrong, but propositions we had established then weren't as incontestably established as they are now ... So I think these calls (for genocide charges or criminal trials) are misguided, and I think they're a sideshow, and I don't support them."
  • Regardless of the culpability of politicians, the question remains whether scientists themselves should be allowed to promote views that go wildly against the mainstream consensus. The history of science is littered with offbeat ideas that were ridiculed by the scientific communities of the time. Most of these ideas missed the textbooks and went straight into the waste-paper basket, but a few - continental drift, the germ basis of disease or the Earth's orbit around the Sun, for instance - ultimately proved to be worth more than the paper they were written on. In science, many would argue, freedom of expression is too important to throw away.
  • Such an issue is engulfing the Elsevier journal Medical Hypotheses. Last year the journal, which is not peer reviewed, published a paper by Duesberg and others claiming that the South African Aids death-toll estimates were inflated, while reiterating the argument that there is "no proof that HIV causes Aids". That prompted several Aids scientists to complain to Elsevier, which responded by retracting the paper and asking the journal's editor, Bruce Charlton, to implement a system of peer review. Having refused to change the editorial policy, Charlton faces the sack
  • There are people who would like the journal to keep its current format and continue accepting controversial papers, but for Aids scientists, Duesberg's paper was a step too far. Although it was deleted from both the journal's website and the Medline database, its existence elsewhere on the internet drove Chigwedere and Essex to publish a peer-reviewed rebuttal earlier this year in AIDS and Behavior, lest any readers be "hoodwinked" into thinking there was genuine debate about the causes of Aids.
  • Duesberg believes he is being "censored", although he has found other outlets. In 1991, he helped form "The Group for the Scientific Reappraisal of the HIV/Aids Hypothesis" - now called Rethinking Aids, or simply The Group - to publicise denialist information. Backed by his Berkeley credentials, he regularly promotes his views in media articles and films. Meanwhile, his closest collaborator, David Rasnick, tells "anyone who asks" that "HIV drugs do more harm than good".
  • "Is academic freedom such a precious concept that scientists can hide behind it while betraying the public so blatantly?" asked John Moore, an Aids scientist at Cornell University, on a South African health news website last year. Moore suggested that universities could put in place a "post-tenure review" system to ensure that their researchers act within accepted bounds of scientific practice. "When the facts are so solidly against views that kill people, there must be a price to pay," he added.
  • Now it seems Duesberg may have to pay that price since it emerged last month that his withdrawn paper has led to an investigation at Berkeley for misconduct. Yet for many in the field, chasing fellow scientists comes second to dealing with the Aids pandemic.
  •  
    6 May 2010. Aids denialism is estimated to have killed many thousands. Jon Cartwright asks if scientists should be held accountable, while overleaf Bruce Charlton defends his decision to publish the work of an Aids sceptic, which sparked a row that has led to his being sacked and his journal abandoning its raison d'être: presenting controversial ideas for scientific debate

Meet the Ethical Placebo: A Story that Heals | NeuroTribes

  • In modern medicine, placebos are associated with another form of deception — a kind that has long been thought essential for conducting randomized clinical trials of new drugs, the statistical rock upon which the global pharmaceutical industry was built. One group of volunteers in an RCT gets the novel medication; another group (the “control” group) gets pills or capsules that look identical to the allegedly active drug, but contain only an inert substance like milk sugar. These faux drugs are called placebos.
  • Inevitably, the health of some people in both groups improves, while the health of others grows worse. Symptoms of illness fluctuate for all sorts of reasons, including regression to the mean.
  • Since the goal of an RCT, from Big Pharma’s perspective, is to demonstrate the effectiveness of a new drug, the return to robust health of a volunteer in the control group is considered a statistical distraction. If too many people in the trial get better after downing sugar pills, the real drug will look worse by comparison — sometimes fatally so for the purpose of earning approval from the Food and Drug Administration.
  • For a complex and somewhat mysterious set of reasons, it is becoming increasingly difficult for experimental drugs to prove their superiority to sugar pills in RCTs
  • in recent years, however, has it become obvious that the abatement of symptoms in control-group volunteers — the so-called placebo effect — is worthy of study outside the context of drug trials, and is in fact profoundly good news to anyone but investors in Pfizer, Roche, and GlaxoSmithKline.
  • The emerging field of placebo research has revealed that the body’s repertoire of resilience contains a powerful self-healing network that can help reduce pain and inflammation, lower the production of stress chemicals like cortisol, and even tame high blood pressure and the tremors of Parkinson’s disease.
  • more and more studies each year — by researchers like Fabrizio Benedetti at the University of Turin, author of a superb new book called The Patient’s Brain, and neuroscientist Tor Wager at the University of Colorado — demonstrate that the placebo effect might be useful in treating a wide range of ills. Why, then, aren’t doctors supposed to use it?
  • The medical establishment’s ethical problem with placebo treatment boils down to the notion that for fake drugs to be effective, doctors must lie to their patients. It has been widely assumed that if a patient discovers that he or she is taking a placebo, the mind/body password will no longer unlock the network, and the magic pills will cease to do their job.
  • For “Placebos Without Deception,” the researchers tracked the health of 80 volunteers with irritable bowel syndrome for three weeks as half of them took placebos and the other half didn’t.
  • In a previous study published in the British Medical Journal in 2008, Kaptchuk and Kirsch demonstrated that placebo treatment can be highly effective for alleviating the symptoms of IBS. This time, however, instead of the trial being “blinded,” it was “open.” That is, the volunteers in the placebo group knew that they were getting only inert pills — which they were instructed to take religiously, twice a day. They were also informed that, just as Ivan Pavlov trained his dogs to drool at the sound of a bell, the body could be trained to activate its own built-in healing network by the act of swallowing a pill.
  • In other words, in addition to the bogus medication, the volunteers were given a true story — the story of the placebo effect. They also received the care and attention of clinicians, which have been found in many other studies to be crucial for eliciting placebo effects. The combination of the story and a supportive clinical environment were enough to prevail over the knowledge that there was really nothing in the pills. People in the placebo arm of the trial got better — clinically, measurably, significantly better — on standard scales of symptom severity and overall quality of life. In fact, the volunteers in the placebo group experienced improvement comparable to patients taking a drug called alosetron, the standard of care for IBS. Meet the ethical placebo: a powerfully effective faux medication that meets all the standards of informed consent.
  • The study is hardly the last word on the subject, but more like one of the first. Its modest sample size and brief duration leave plenty of room for followup research. (What if “ethical” placebos wear off more quickly than deceptive ones? Does the fact that most of the volunteers in this study were women have any bearing on the outcome? Were any of the volunteers skeptical that the placebo effect is real, and did that affect their response to treatment?) Before some eager editor out there composes a tweet-baiting headline suggesting that placebos are about to drive Big Pharma out of business, he or she should appreciate the fact that the advent of AMA-approved placebo treatments would open numerous cans of fascinatingly tangled worms. For example, since the precise nature of placebo effects is shaped largely by patients’ expectations, would the advertised potency and side effects of theoretical products like Placebex and Therastim be subject to change by Internet rumors, requiring perpetual updating?
  • It’s common to use the word “placebo” as a synonym for “scam.” Economists talk about placebo solutions to our economic catastrophe (tax cuts for the rich, anyone?). Online skeptics mock the billion-dollar herbal-medicine industry by calling it Big Placebo. The fact that our brains and bodies respond vigorously to placebos given in warm and supportive clinical environments, however, turns out to be very real.
  • We’re also discovering that the power of narrative is embedded deeply in our physiology.
  • in the real world of doctoring, many physicians prescribe medications at dosages too low to have an effect on their own, hoping to tap into the body’s own healing resources — though this is mostly acknowledged only in whispers, as a kind of trade secret.
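The "regression to the mean" mentioned above is easy to see in a toy simulation: if patients enroll in a trial only when their fluctuating symptoms are unusually bad, their next measurement tends to be lower even when nothing at all is done to them. All numbers here are invented for illustration; no treatment is modeled:

```python
import random

random.seed(42)

def symptom(baseline):
    """One noisy symptom measurement around a patient's true baseline."""
    return baseline + random.gauss(0, 10)

enrolled = []
for _ in range(10_000):
    baseline = random.uniform(40, 60)          # each patient's typical severity
    at_enrollment = symptom(baseline)
    if at_enrollment > 60:                     # only those feeling unusually bad enroll
        enrolled.append((at_enrollment, symptom(baseline)))

avg_before = sum(b for b, _ in enrolled) / len(enrolled)
avg_after = sum(a for _, a in enrolled) / len(enrolled)
print(round(avg_before, 1), round(avg_after, 1))  # the "after" average is lower
```

The apparent "improvement" in the second measurement is pure selection effect, which is exactly why a control group is needed to distinguish a real drug (or placebo) response from statistical drift.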

Book Review: Future Babble by Dan Gardner « Critical Thinking « Skeptic North

  • I predict that you will find this review informative. If you do, you will congratulate my foresight. If you don’t, you’ll forget I was wrong.
  • My playful intro summarizes the main thesis of Gardner’s excellent book, Future Babble: Why Expert Predictions Fail – and Why We Believe Them Anyway.
  • In Future Babble, the research area explored is the validity of expert predictions, and the primary researcher examined is Philip Tetlock. In the early 1980s, Tetlock set out to better understand the accuracy of predictions made by experts by conducting a methodologically sound large-scale experiment.
  • Gardner presents Tetlock’s experimental design in an excellent way, making it accessible to the lay person. Concisely, Tetlock examined 27,450 judgments in which 284 experts were presented with clear questions whose answers could later be shown to be true or false (e.g., “Will the official unemployment rate be higher, lower or the same a year from now?”). For each prediction, the expert had to answer clearly and express his or her degree of certainty as a percentage (e.g., dead certain = 100%). The use of precise numbers broadens the statistical options and removes the complications of vague or ambiguous language.
  • Tetlock found the surprising and disturbing truth “that experts’ predictions were no more accurate than random guesses.” (p. 26) An important caveat is that there was a wide range of capability, with some experts being completely out of touch, and others able to make successful predictions.
  • “What distinguishes the impressive few from the borderline delusional is not whether they’re liberal or conservative. Tetlock’s data showed political beliefs made no difference to an expert’s accuracy. The same is true of optimists and pessimists. It also made no difference if experts had a doctorate, extensive experience, or access to classified information. Nor did it make a difference if experts were political scientists, historians, journalists, or economists.” (p. 26)
  • The experts who did poorly were not comfortable with complexity and uncertainty, and tended to reduce most problems to some core theoretical theme. It was as if they saw the world through one lens or had one big idea that everything else had to fit into. Alternatively, the experts who did decently were self-critical, used multiple sources of information and were more comfortable with uncertainty and correcting their errors. Their thinking style almost results in a paradox: “The experts who were more accurate than others tended to be less confident they were right.” (p.27)
  • Gardner then introduces the terms ‘Hedgehog’ and ‘Fox’ to refer to bad and good predictors respectively. Hedgehogs are the ones you see pushing the same idea, while Foxes are more likely to be in the background, questioning the very possibility of prediction while making cautious proposals. Foxes are more likely to be correct. Unfortunately, it is Hedgehogs that we see on the news.
  • one of Tetlock’s findings was that “the bigger the media profile of an expert, the less accurate his predictions.” (p.28)
  • Chapter 2 – The Unpredictable World: an exploration into how many events in the world are simply unpredictable. Gardner discusses chaos theory and necessary and sufficient conditions for events to occur. He supports the idea of actually saying “I don’t know,” which many experts are reluctant to do.
  • Chapter 3 – In the Minds of Experts: a more detailed examination of Hedgehogs and Foxes. Gardner discusses randomness and the illusion of control while using narratives to illustrate his points à la Gladwell. This chapter provides a lot of context and background information that should be very useful to those less initiated.
  • Chapter 6 – Everyone Loves a Hedgehog: more about predictions and how the media picks up hedgehog stories and talking points without much investigation into their underlying source or concern for accuracy. It is a good demolition of the absurdity of so many news “discussion shows.” Gardner demonstrates how the media prefer a show where Hedgehogs square off against each other, and it is important that these commentators not be challenged lest they become exposed and, by association, implicate the flawed structure of the program/network. Gardner really singles out certain people, like Paul Ehrlich, and shows how they have been wrong many times and yet can still get an audience.
  • “An assertion that cannot be falsified by any conceivable evidence is nothing more than dogma. It can’t be debated. It can’t be proven or disproven. It’s just something people choose to believe or not for reasons that have nothing to do with fact and logic. And dogma is what predictions become when experts and their followers go to ridiculous lengths to dismiss clear evidence that they failed.”
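Forecasts of the kind Tetlock collected (a stated probability plus an eventual yes/no outcome) are conventionally scored with measures such as the Brier score, which rewards both accuracy and honest uncertainty. This sketch uses invented forecasts to show how a "dead certain" Hedgehog can score far worse than a hedging Fox:

```python
def brier(forecasts):
    """Mean squared gap between stated probability and outcome
    (coded 1 if the event occurred, 0 if not). Lower is better;
    always answering 50% scores exactly 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Invented forecasts: (stated probability the event happens, actual outcome).
hedgehog = [(1.0, 1), (1.0, 0), (1.0, 0), (0.0, 1)]  # "dead certain", often wrong
fox = [(0.7, 1), (0.6, 0), (0.4, 0), (0.6, 1)]       # hedged, roughly calibrated

print(brier(hedgehog))  # 0.75 -- worse than random 50% guessing
print(brier(fox))       # about 0.19
```

Note how the confident-but-wrong forecaster lands above the 0.25 coin-flip benchmark, matching Tetlock's finding that the loudest experts can underperform random guesses.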

Roger Pielke Jr.'s Blog: Outlier

  •  
    POSTED BY ROGER PIELKE, JR. AT 2/17/2011 12:57:00 PM

The world through language » Scienceline

  • If you know only one language, you live only once. A man who knows two languages is worth two men. He who loses his language loses his world. (Czech, French and Gaelic proverbs.)
  • The hypothesis first put forward fifty years ago by linguist Benjamin Lee Whorf—that our language significantly affects our experience of the world—is making a comeback in various forms, and with it no shortage of debate.
  • The idea that language shapes thought was taboo for a long time, said Dan Slobin, a psycholinguist at the University of California, Berkeley. “Now the ice is breaking.” The taboo, according to Slobin, was largely due to the widespread acceptance of the ideas of Noam Chomsky, one of the most influential linguists of the 20th century. Chomsky proposed that the human brain comes equipped at birth with a set of rules—or universal grammar—that organizes language. As he likes to say, a visiting Martian would conclude that everyone on Earth speaks mutually unintelligible dialects of a single language.
  • Chomsky is hesitant to accept the recent claims of language’s profound influence on thought. “I’m rather skeptical about all of this, though there probably are some marginal effects,” he said.
  • Some advocates of the Whorfian view find support in studies of how languages convey spatial orientation. English and Dutch speakers describe orientation from an egocentric frame of reference (to my left or right). Mayan speakers use a geocentric frame of reference (to the north or south).
  • Does this mean they think about space in fundamentally different ways? Not exactly, said Lila Gleitman, a psychologist from the University of Pennsylvania. Since we ordinarily assume that others talk like us, she explained, vague instructions like “arrange it the same way” will be interpreted in whatever orientation (egocentric or geocentric) is most common in our language. “That’s going to influence how you solve an ambiguous problem, but it doesn’t mean that’s the way you think, or must think,” said Gleitman. In fact, she repeated the experiment with unambiguous instructions, providing cues to indicate whether objects should be arranged north-south or left-right. She found that people in both languages are just as good at arranging objects in either orientation.
  • Similarly, Anna Papafragou, a psychologist at the University of Delaware, thinks that the extent of language’s effect on thought has been somewhat exaggerated.
  • Papafragou compared how long Greek and English speakers paid attention to clip-art animation sequences, for example, a man skating towards a snowman. By measuring their eye movements, Papafragou was able to tell which parts of the scene held their gaze the longest. Because English speakers generally use verbs that describe manner of motion, like slide and skip, she predicted they would pay more attention to what was moving (the skates). Since Greeks use verbs that describe path, like approach and ascend, they should pay more attention to the endpoint of the motion (the snowman). She found that this was true only when people had to describe the scene; when asked to memorize it, attention patterns were nearly identical. According to Papafragou, when people need to speak about what they see, they’ll focus on the parts relevant for planning sentences. Otherwise, language does not show much of an effect on attention.
  • “Each language is a bright transparent medium through which our thoughts may pass, relatively undistorted,” said Gleitman.
  • Others think that language does, in fact, introduce some distortion. Linguist Guy Deutscher of the University of Manchester in the U.K. suggests that while language can’t prevent you from thinking anything, it does compel you to think in specific ways. Language forces you to habitually pay attention to different aspects of the world.
  • For example, many languages assign genders to nouns (“bridge” is feminine in German and masculine in Spanish). A study by cognitive psychologist Lera Boroditsky of Stanford University found that German speakers were more likely to describe “bridge” with feminine terms like elegant and slender, while Spanish speakers picked words like sturdy and towering. Having to constantly keep track of gender, Deutscher suggests, may subtly change the way native speakers imagine objects’ characteristics.
  • However, this falls short of the extreme view some ascribe to Whorf: that language actually determines thought. According to Steven Pinker, an experimental psychologist and linguist at Harvard University, three things have to hold for the Whorfian hypothesis to be true: speakers of one language should find it nearly impossible to think like speakers of another language; the differences in language should affect actual reasoning; and the differences should be caused by language, not just correlated with it. Otherwise, we may just be dealing with a case of “crying Whorf.”
  • But even mild claims may reveal complexities in the relationship between language and thought. “You can’t actually separate language, thought and perception,” said Debi Roberson, a psychologist at the University of Essex in the U.K. “All of these processes are going on, not just in parallel, but interactively.”
  • Language may not, as the Gaelic proverb suggests, form our entire world. But it will continue to provide insights into our thoughts—whether as a window, a looking glass, or a distorted mirror.

Roger Pielke Jr.'s Blog: The Fall of Karl-Theodor zu Guttenberg

  • The German defense minister, Karl-Theodor zu Guttenberg, has resigned following the exposure of plagiarism on a massive scale in his PhD dissertation.  The figure above shows the results of a page-by-page Wiki effort to "audit" his dissertation.  The black and red colors indicate text that was directly (black) or partially (red) copied from other sources.  The white parts were judged OK and the blue represents the front and back matter.
  • Guttenberg's defense of his actions, which were supported by Chancellor Angela Merkel, sought to focus attention on those critiquing him in an effort to downplay the significance of the academic misconduct
  • But in the end, it appears that the pressures brought to bear from Germany's substantial academic community made continuation for Guttenberg impossible.
  • Even so, I expect that we will again see Karl-Theodor zu Guttenberg in German politics, and Germany will then re-engage a debate over science, politics, trust and legitimacy

LRB · Jim Holt · Smarter, Happier, More Productive

  • There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.
  • The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.
  • But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.
  • Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic.
  • Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser.
  • Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.
  • The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.
  • It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise in avoiding it altogether.
  • empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’
Weiye Loh

Can a group of scientists in California end the war on climate change? | Science | The ... - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • ...20 more annotations...
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups was enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. "You can think of statisticians as the keepers of the scientific method," Brillinger told me. "Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for."
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
  • The wealth of data Rohde has collected so far – some of which dates back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, of itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperature records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will take temperature records from individual stations and weight them according to how reliable they are.
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in celsius instead of fahrenheit. The times at which measurements are taken vary, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station more from wind and sun from one year to the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human interference. When the team publishes its results, this is where the scrutiny will be most intense.
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
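The grid-and-average approach described in the annotations above is simple enough to sketch in code. The following is a toy illustration only: the stations, coordinates and readings are invented, and real agencies use far more elaborate area weighting than this.

```python
from collections import defaultdict

# Hypothetical monthly readings: (latitude, longitude, temperature in C).
readings = [
    (51.5, -0.1, 11.2),   # e.g. London
    (53.4, -2.2, 10.1),   # e.g. Manchester
    (52.2,  0.1, 10.8),   # e.g. Cambridge
    (40.7, -74.0, 13.5),  # e.g. New York
    (41.8, -87.6, 12.9),  # e.g. Chicago
]

def grid_average(readings, cell_size=5.0):
    """Average all readings that fall in the same cell of an imaginary
    lat/lon grid, so station-dense regions don't dominate the mean."""
    cells = defaultdict(list)
    for lat, lon, temp in readings:
        key = (int(lat // cell_size), int(lon // cell_size))
        cells[key].append(temp)
    return {key: sum(ts) / len(ts) for key, ts in cells.items()}

cell_means = grid_average(readings)
global_mean = sum(cell_means.values()) / len(cell_means)
print(len(cell_means), "cells, mean of cell means:", round(global_mean, 2))
```

Muller's station-weighting variant would replace the plain per-cell average with a weighted one, with weights reflecting each station's estimated reliability.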
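The systematic biases are the harder part. Below is a minimal sketch of one automated homogenisation idea: detecting a single step change (say, from a station move) and aligning the two segments. This is a toy of my own, not the Berkeley team's actual algorithm, and real records contain many overlapping inhomogeneities.

```python
import statistics

def find_step(series):
    """Return the index that best splits the series into two segments
    with different means (a crude single-changepoint detector)."""
    best_k, best_score = None, float("inf")
    for k in range(2, len(series) - 2):
        left, right = series[:k], series[k:]
        # Total within-segment variance: low when each half is
        # internally consistent even though their means differ.
        score = (statistics.pvariance(left) * len(left)
                 + statistics.pvariance(right) * len(right))
        if score < best_score:
            best_k, best_score = k, score
    return best_k

def remove_step(series):
    k = find_step(series)
    offset = statistics.mean(series[k:]) - statistics.mean(series[:k])
    # Shift the later segment so both halves share a common baseline.
    return series[:k] + [x - offset for x in series[k:]], k

# A fake station record: steady readings with a +1.5 C jump at index 10,
# as if the instrument had been replaced or moved.
record = [14.8, 15.1, 14.9, 15.0, 15.2, 14.9, 15.1, 15.0, 14.8, 15.1,
          16.5, 16.4, 16.6, 16.7, 16.5, 16.4, 16.6, 16.5, 16.7, 16.4]
cleaned, k = remove_step(record)
print("step detected at index", k, "corrected mean:", round(statistics.mean(cleaned), 2))
```

This illustrates why Muller favours algorithmic correction: the detection and adjustment involve no human judgement about any particular station.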
Weiye Loh

Sharing Information Corrupts Wisdom of Crowds | Wired Science | Wired.com - 0 views

  • The effect — perhaps better described as the accuracy of crowds, since it best applies to questions involving quantifiable estimates — has been described for decades, beginning with Francis Galton’s 1907 account of fairgoers guessing an ox’s weight. It reached mainstream prominence with economist James Surowiecki’s 2004 bestseller, The Wisdom of Crowds.
  • As Surowiecki explained, certain conditions must be met for crowd wisdom to emerge. Members of the crowd ought to have a variety of opinions, and to arrive at those opinions independently.
  • Take those away, and crowd intelligence fails, as evidenced in some market bubbles. Computer modeling of crowd behavior also hints at dynamics underlying crowd breakdowns, with the balance between information flow and diverse opinions becoming skewed.
  • When people can learn what others think, the wisdom of crowds may veer towards ignorance. In a new study of crowd wisdom - the statistical phenomenon by which individual biases cancel each other out, distilling hundreds or thousands of individual guesses into uncannily accurate average answers - researchers told test participants about their peers' guesses. As a result, their group insight went awry.
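The statistical mechanism at issue is easy to simulate. In this toy model (my own sketch, not the study's actual protocol), independent guessers with symmetric errors average out to near the truth, while guessers who anchor on the running consensus lose the diversity of opinion that makes the average accurate.

```python
import random
import statistics

random.seed(42)
TRUE_WEIGHT = 1200  # hypothetical ox weight in pounds, a la Galton

def independent_guesses(n):
    # Each guess is noisy, but the errors are independent and symmetric,
    # so they largely cancel in the crowd's average.
    return [random.gauss(TRUE_WEIGHT, 150) for _ in range(n)]

def social_guesses(n, influence=0.8):
    # Each guesser sees the running average of earlier guesses and pulls
    # their own estimate toward it, shrinking the spread of opinions.
    guesses = []
    for _ in range(n):
        own = random.gauss(TRUE_WEIGHT, 150)
        if guesses:
            consensus = statistics.mean(guesses)
            own = influence * consensus + (1 - influence) * own
        guesses.append(own)
    return guesses

indep, social = independent_guesses(1000), social_guesses(1000)
print("independent: error", round(abs(statistics.mean(indep) - TRUE_WEIGHT), 1),
      "spread", round(statistics.pstdev(indep), 1))
print("social:      error", round(abs(statistics.mean(social) - TRUE_WEIGHT), 1),
      "spread", round(statistics.pstdev(social), 1))
```

The collapse in spread is the key symptom: the social crowd looks far more confident while its average is anchored to whatever the first few guessers happened to say.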
Weiye Loh

Roger Pielke Jr.'s Blog: Faith-Based Education and a Return to Shop Class - 0 views

  • In the United States, nearly a half century of research, application of new technologies and development of new methods and policies has failed to translate into improved reading abilities for the nation’s children.
  • the reasons why progress has been so uneven point to three simple rules for anticipating when more research and development (R&D) could help to yield rapid social progress. In a world of limited resources, the trick is distinguishing problems amenable to technological fixes from those that are not. Our rules provide guidance in making this distinction...
  • unlike vaccines, the textbooks and software used in education do not embody the essence of what needs to be done. That is, they don’t provide the basic ‘go’ of teaching and learning. That depends on the skills of teachers and on the attributes of classrooms and students. Most importantly, the effectiveness of a vaccine is largely independent of who gives or receives it, and of the setting in which it is given.
  • ...5 more annotations...
  • The three rules for a technological fix proposed by Sarewitz and Nelson are: I. The technology must largely embody the cause–effect relationship connecting problem to solution. II. The effects of the technological fix must be assessable using relatively unambiguous or uncontroversial criteria. III. Research and development is most likely to contribute decisively to solving a social problem when it focuses on improving a standardized technical core that already exists.
  • technology in the classroom fails with respect to each of the three criteria: (a) technology is not a causal factor in learning in the sense that more technology means more learning, (b) assessment of educational outcomes is itself difficult and contested, much less disentangling various causal factors, and (c) the lack of evidence that technology leads to improved educational outcomes means that there is no such standardized technological core.
  • This conundrum calls into question one of the most significant contemporary educational movements. Advocates for giving schools a major technological upgrade — which include powerful educators, Silicon Valley titans and White House appointees — say digital devices let students learn at their own pace, teach skills needed in a modern economy and hold the attention of a generation weaned on gadgets. Some backers of this idea say standardized tests, the most widely used measure of student performance, don’t capture the breadth of skills that computers can help develop. But they also concede that for now there is no better way to gauge the educational value of expensive technology investments.
  • absent clear proof, schools are being motivated by a blind faith in technology and an overemphasis on digital skills — like using PowerPoint and multimedia tools — at the expense of math, reading and writing fundamentals. They say the technology advocates have it backward when they press to upgrade first and ask questions later.
  • [D]emand for educated labour is being reconfigured by technology, in much the same way that the demand for agricultural labour was reconfigured in the 19th century and that for factory labour in the 20th. Computers can not only perform repetitive mental tasks much faster than human beings. They can also empower amateurs to do what professionals once did: why hire a flesh-and-blood accountant to complete your tax return when Turbotax (a software package) will do the job at a fraction of the cost? And the variety of jobs that computers can do is multiplying as programmers teach them to deal with tone and linguistic ambiguity. Several economists, including Paul Krugman, have begun to argue that post-industrial societies will be characterised not by a relentless rise in demand for the educated but by a great “hollowing out”, as mid-level jobs are destroyed by smart machines and high-level job growth slows. David Autor, of the Massachusetts Institute of Technology (MIT), points out that the main effect of automation in the computer era is not that it destroys blue-collar jobs but that it destroys any job that can be reduced to a routine. Alan Blinder, of Princeton University, argues that the jobs graduates have traditionally performed are if anything more “offshorable” than low-wage ones. A plumber or lorry-driver’s job cannot be outsourced to India.
  • In 2008 Dick Nelson and Dan Sarewitz had a commentary in Nature (here in PDF) that eloquently summarized why we should not expect technology in the classroom to result in better educational outcomes in the way we should for a technology like vaccines.
Weiye Loh

Taking On Climate Skepticism as a Field of Study - NYTimes.com - 0 views

  • Q. The debate over climate science has involved very complex physical models and rarefied areas of scientific knowledge. What role do you think social scientists have to play, given the complexity of the actual physical science?
  • A. We have to think about the process by which something, an idea, develops scientific consensus and a second process by which a social and political consensus develops. The first part is the domain of data and models and physical science. The second is very much a social and political process. And that brings to the fore a whole host of value-based, worldview-based, cognitive and cultural dimensions that need to be addressed.
  • Social scientists, beyond economists, have a lot to say on cognition, perceptions, values, social movements and political processes that are very important for understanding whether the public accepts the conclusions of a scientific body.
  • ...13 more annotations...
  • So when I hear scientists say, “The data speak for themselves,” I cringe. Data never speak. And data generally and most often are politically and socially inflected. They have import for people’s lives. To ignore that is to ignore the social and cultural dimensions within which this science is taking place.
  • I do think that there is a process by which, for example, the connection between cigarette smoking and cancer for decades had a scientific consensus that this was an issue, then a social process begins, and then it becomes accepted.
  • The interesting thing with climate change, I find, is that positioning on climate change is strikingly predictable based on someone’s political leanings. One-third of Republicans and three-quarters of Democrats think that climate change is real. That to me speaks to the political, ideological and cultural dimensions of this debate.
  • It’s interesting because it wasn’t always so. In 1997 with the Kyoto treaty, with the development of regulations that would impact economic and political interests, sides started to be drawn. We’ve reached the stage today that climate change has become part of the culture wars, the same as health care, abortion, gun control and evolution.
  • There are many who distrust the peer-review process and distrust scientists. So that can be step one. I think a lot of people will be uncomfortable accepting a scientific conclusion if it necessarily leads to outcomes they find objectionable. People will be hesitant to accept the notion of climate change if that leads directly towards ideas that are at variance with values that they hold dear.
  • do you trust the scientific process? Do you trust scientists? The faith-and-reason debate has been around for centuries. I just read a book that I thought was prescient, “Anti-Intellectualism in American Life,” about this suspicion people have about intellectuals who are working on issues that are inaccessible, opaque to them, yielding conclusions that alter the way we structure our society, the way we live our lives.
  • There’s a certain helpless frustration people have: Who are these cultural elites, these intellectual elites who can make these conclusions in the ivory tower of academia or other scientific institutions and tell me how to live my life?
  • And we can’t leave out power. There are certain powerful interests out there that will not accept the conclusions this will yield to, therefore they will not accept the definition of the problem if they are not going to accept the solutions that follow it. I’m speaking of certain industry sectors that stand to lose in a carbon-constrained world.
  • Also, if you can’t define solutions on climate change and you’re asking me to accept it, you’re asking me to accept basically a pretty dismal reality that I refuse to accept. And many climate proponents fall into this when they give these horrific, apocalyptic predictions of cities under water and ice ages and things like that. That tends to get people to dig their heels in even harder.
  • Some people look at this as just a move for more government, more government bureaucracy. And I think importantly fear or resist the idea of world government. Carbon dioxide is part of the economy of every country on earth. This is a global cooperation challenge the likes of which we have never seen before.
  • Do you trust the message and do you trust the messenger? If I am inclined to resist the notion of global cooperation — which is a nice way to put what others may see as a one-world government — and if the scientific body that came to that conclusion represents that entity, I will be less inclined to believe it. People will accept a message from someone that they think shares their values and beliefs. And for a lot of people, environmentalists are not that kind of person. There’s a segment of the population that sees environmentalists as socialists, trying to control people’s lives.
  • In our society today, I think people have more faith in economic institutions than they do in scientific institutions. Scientists can talk until they are blue in the face about climate change. But if businesses are paying money to address this issue, then people will say: It must be true, because they wouldn’t be throwing their money away.
  • what I’m laying out is that this is very much a value- and culture-based debate. And to ignore that – you will never resolve it and you will end up in what I have described a logic schism, where the two sides talk about completely different things, completely different issues, demonizing the other, only looking for things that confirm their opinion. And we get nowhere.
Weiye Loh

The messy business of cleaning up carbon policy (and how to sell it to the electorate) ... - 0 views

  • 1. Putting a price on carbon is not only about the climate. Yes, humans are affecting the climate and reducing carbon dioxide emissions is a key commitment of this government, and indeed the stated views of the opposition. But there are other reasons to price carbon, primarily to put Australia at the forefront of a global energy technology revolution that is already underway. In future years and decades the world is going to need vastly more energy that is secure, reliable, clean and affordable. Achieving these outcomes will require an energy technology revolution. The purpose of pricing carbon is to raise the revenues needed to invest in this future, just as we invest in health, agriculture and defence.
  • 2. A price on carbon raises revenues to invest in stimulating that energy technology revolution. Australia emits almost 400 million tonnes of carbon dioxide into the atmosphere every year. In round numbers, every dollar carbon tax per tonne on those emissions would raise about A$400 million. A significant portion of the proceeds from a carbon tax should be used to invest in energy technology innovation, using today’s energy economy to build a bridge to tomorrow’s economy. This is exactly the strategy that India has adopted with a small levy on coal and Germany has adopted with a tax on nuclear fuel rods, with proceeds in both instances invested into energy innovation.
  • 3. The purpose of a carbon tax is not to make energy, food, petrol or consumer goods appreciably more expensive. Just as scientists are in broad agreement that humans are affecting the global climate, economists and other experts are in broad agreement that we cannot revolutionise our energy economy through pricing mechanisms alone. Thus, we propose starting with a low carbon tax - one that has broad political support - and then committing to increasing it in a predictable manner over time. The Coalition has proposed a “direct action plan” on carbon policy that would cost A$30 billion over the next 8 years, which is the equivalent of about a $2.50 per tonne carbon tax. The question to be put to the Coalition is not whether we should be investing in a carbon policy, as we agree on that point, but how much and how it should be paid for. The Coalition’s plans leave unanswered how they would pay for their plan. A carbon tax offers a responsible and effective manner to raise funds without harming the economy or jobs. In fact, to the extent that investments in energy innovation bear fruit, new markets will be opened and new jobs will be created. The Coalition’s plan is not focused on energy technology innovation. The question for the Coalition should thus be, at what level would you set a carbon tax (or what other taxes would you raise?), and how would you invest the proceeds in a manner that accelerates energy technology innovation?
  • ...1 more annotation...
  • 4. Even a low carbon tax will make some goods cost a bit more, so it is important to help those who are most affected. Our carbon tax proposal is revenue neutral in the sense that we will lower other taxes in direct proportion to the impact, however modest, of a low carbon tax. We will do this with particular attention to those who may be most directly affected by a price on carbon. In addition, some portion of the revenue raised by a carbon tax will be returned to the public. But not all. It is important to invest in tomorrow’s energy technologies today and a carbon tax provides the mechanism for doing so.
Weiye Loh

Did file-sharing cause recording industry collapse? Economists say no - 0 views

  • a 2007 Journal of Political Economy study found that most downloaders would not buy that content, even if they couldn't share it. "Downloads have an effect on sales that is statistically indistinguishable from zero," the authors flatly concluded then. "Our estimates are inconsistent with claims that file sharing is the primary reason for the decline in music sales during our study period."
  • But a later 2010 meta-study by the same authors concluded that piracy did, in fact, account for a bit of the decline in music sales—around 20 percent. The other 80 percent could be chalked up to the sale of digital singles rather than whole albums and the rise of other media options like video games.
  • "Downward pressure on leisure expenditure is likely to continue to increase due to rising costs of living and unemployment and drastic rises in the costs of (public) services," says the report. Having less money for entertainment has played a huge role in the decline of items like CDs. A 2004 US Consumer Expenditure Survey showed that even spending on CDs by people who had no computer (and were therefore unlikely to download and use BitTorrent) dropped by over 40 percent from 1999 through 2004. "Household budgets for entertainment are relatively inelastic as competition for spending on culture and entertainment increases and there are shifts in household expenditure as well," the LSE study notes.
  • ...3 more annotations...
  • Content industry analyses of the file sharing phenomenon tend to downplay key sources of income for musicians, the LSE report charges, most notably revenue from live concert performances.
  • Legal file sharing also grew by nine percent globally in 2009, along with an eight percent increase in performance rights revenue.
  • So what is emerging is an increasingly "ephemeral" global music culture based not upon the purchasing of discrete physical packages of music, but on the discovery and subsequent promotion of musicians through file sharing. The big winner in this model is not the digital music file seller, but the touring band, whose music is easily discoverable on the 'Net. As with so much of the rest of the emerging world economy, the shift is away from buying things and towards purchasing services—in this case tickets to concerts and related activities.
Weiye Loh

The Black Swan of Cairo | Foreign Affairs - 0 views

  • It is both misguided and dangerous to push unobserved risks further into the statistical tails of the probability distribution of outcomes and allow these high-impact, low-probability "tail risks" to disappear from policymakers' fields of observation.
  • Such environments eventually experience massive blowups, catching everyone off-guard and undoing years of stability or, in some cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm in both economic and political systems.
  • Seeking to restrict variability seems to be good policy (who does not prefer stability to chaos?), so it is with very good intentions that policymakers unwittingly increase the risk of major blowups. And it is the same misperception of the properties of natural systems that led to both the economic crisis of 2007-8 and the current turmoil in the Arab world. The policy implications are identical: to make systems robust, all risks must be visible and out in the open -- fluctuat nec mergitur (it fluctuates but does not sink) goes the Latin saying.
  • ...21 more annotations...
  • Just as a robust economic system is one that encourages early failures (the concepts of "fail small" and "fail fast"), the U.S. government should stop supporting dictatorial regimes for the sake of pseudostability and instead allow political noise to rise to the surface. Making an economy robust in the face of business swings requires allowing risk to be visible; the same is true in politics.
  • Both the recent financial crisis and the current political crisis in the Middle East are grounded in the rise of complexity, interdependence, and unpredictability. Policymakers in the United Kingdom and the United States have long promoted policies aimed at eliminating fluctuation -- no more booms and busts in the economy, no more "Iranian surprises" in foreign policy. These policies have almost always produced undesirable outcomes. For example, the U.S. banking system became very fragile following a succession of progressively larger bailouts and government interventions, particularly after the 1983 rescue of major banks (ironically, by the same Reagan administration that trumpeted free markets). In the United States, promoting these bad policies has been a bipartisan effort throughout. Republicans have been good at fragilizing large corporations through bailouts, and Democrats have been good at fragilizing the government. At the same time, the financial system as a whole exhibited little volatility; it kept getting weaker while providing policymakers with the illusion of stability, illustrated most notably when Ben Bernanke, who was then a member of the Board of Governors of the U.S. Federal Reserve, declared the era of "the great moderation" in 2004.
  • Washington stabilized the market with bailouts and by allowing certain companies to grow "too big to fail." Because policymakers believed it was better to do something than to do nothing, they felt obligated to heal the economy rather than wait and see if it healed on its own.
  • The foreign policy equivalent is to support the incumbent no matter what. And just as banks took wild risks thanks to Greenspan's implicit insurance policy, client governments such as Hosni Mubarak's in Egypt for years engaged in overt plunder thanks to similarly reliable U.S. support.
  • Those who seek to prevent volatility on the grounds that any and all bumps in the road must be avoided paradoxically increase the probability that a tail risk will cause a major explosion.
  • In the realm of economics, price controls are designed to constrain volatility on the grounds that stable prices are a good thing. But although these controls might work in some rare situations, the long-term effect of any such system is an eventual and extremely costly blowup whose cleanup costs can far exceed the benefits accrued. The risks of a dictatorship, no matter how seemingly stable, are no different, in the long run, from those of an artificially controlled price.
  • Such attempts to institutionally engineer the world come in two types: those that conform to the world as it is and those that attempt to reform the world. The nature of humans, quite reasonably, is to intervene in an effort to alter their world and the outcomes it produces. But government interventions are laden with unintended -- and unforeseen -- consequences, particularly in complex systems, so humans must work with nature by tolerating systems that absorb human imperfections rather than seek to change them.
  • What is needed is a system that can prevent the harm done to citizens by the dishonesty of business elites; the limited competence of forecasters, economists, and statisticians; and the imperfections of regulation, not one that aims to eliminate these flaws. Humans must try to resist the illusion of control: just as foreign policy should be intelligence-proof (it should minimize its reliance on the competence of information-gathering organizations and the predictions of "experts" in what are inherently unpredictable domains), the economy should be regulator-proof, given that some regulations simply make the system itself more fragile. Due to the complexity of markets, intricate regulations simply serve to generate fees for lawyers and profits for sophisticated derivatives traders who can build complicated financial products that skirt those regulations.
  • The life of a turkey before Thanksgiving is illustrative: the turkey is fed for 1,000 days and every day seems to confirm that the farmer cares for it -- until the last day, when confidence is maximal. The "turkey problem" occurs when a naive analysis of stability is derived from the absence of past variations. Likewise, confidence in stability was maximal at the onset of the financial crisis in 2007.
  • The turkey problem for humans is the result of mistaking one environment for another. Humans simultaneously inhabit two systems: the linear and the complex. The linear domain is characterized by its predictability and the low degree of interaction among its components, which allows the use of mathematical methods that make forecasts reliable. In complex systems, there is an absence of visible causal links between the elements, masking a high degree of interdependence and extremely low predictability. Nonlinear elements are also present, such as those commonly known, and generally misunderstood, as "tipping points." Imagine someone who keeps adding sand to a sand pile without any visible consequence, until suddenly the entire pile crumbles. It would be foolish to blame the collapse on the last grain of sand rather than the structure of the pile, but that is what people do consistently, and that is the policy error.
  • Engineering, architecture, astronomy, most of physics, and much of common science are linear domains. The complex domain is the realm of the social world, epidemics, and economics. Crucially, the linear domain delivers mild variations without large shocks, whereas the complex domain delivers massive jumps and gaps. Complex systems are misunderstood, mostly because humans' sophistication, obtained over the history of human knowledge in the linear domain, does not transfer properly to the complex domain. Humans can predict a solar eclipse and the trajectory of a space vessel, but not the stock market or Egyptian political events. All man-made complex systems have commonalities and even universalities. Sadly, deceptive calm (followed by Black Swan surprises) seems to be one of those properties.
  • The system is responsible, not the components. But after the financial crisis of 2007-8, many people thought that predicting the subprime meltdown would have helped. It would not have, since it was a symptom of the crisis, not its underlying cause. Likewise, Obama's blaming "bad intelligence" for his administration's failure to predict the crisis in Egypt is symptomatic of both the misunderstanding of complex systems and the bad policies involved.
  • Obama's mistake illustrates the illusion of local causal chains -- that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect. The final episode of the upheaval in Egypt was unpredictable for all observers, especially those involved. As such, blaming the CIA is as foolish as funding it to forecast such events. Governments are wasting billions of dollars on attempting to predict events that are produced by interdependent systems and are therefore not statistically understandable at the individual level.
  • Political and economic "tail events" are unpredictable, and their probabilities are not scientifically measurable. No matter how many dollars are spent on research, predicting revolutions is not the same as counting cards; humans will never be able to turn politics into the tractable randomness of blackjack.
  • Most explanations being offered for the current turmoil in the Middle East follow the "catalysts as causes" confusion. The riots in Tunisia and Egypt were initially attributed to rising commodity prices, not to stifling and unpopular dictatorships. But Bahrain and Libya are countries with high GDPs that can afford to import grain and other commodities. Again, the focus is wrong even if the logic is comforting. It is the system and its fragility, not events, that must be studied -- what physicists call "percolation theory," in which the properties of the terrain are studied rather than those of a single element of the terrain.
  • When dealing with a system that is inherently unpredictable, what should be done? Differentiating between two types of countries is useful. In the first, changes in government do not lead to meaningful differences in political outcomes (since political tensions are out in the open). In the second type, changes in government lead to both drastic and deeply unpredictable changes.
  • Humans fear randomness -- a healthy ancestral trait inherited from a different environment. Whereas in the past, which was a more linear world, this trait enhanced fitness and increased chances of survival, it can have the reverse effect in today's complex world, making volatility take the shape of nasty Black Swans hiding behind deceptive periods of "great moderation." This is not to say that any and all volatility should be embraced. Insurance should not be banned, for example.
  • But alongside the "catalysts as causes" confusion sit two mental biases: the illusion of control and the action bias (the illusion that doing something is always better than doing nothing). This leads to the desire to impose man-made solutions.
  • Variation is information. When there is no variation, there is no information. This explains the CIA's failure to predict the Egyptian revolution and, a generation before, the Iranian Revolution -- in both cases, the revolutionaries themselves did not have a clear idea of their relative strength with respect to the regime they were hoping to topple. So rather than subsidize and praise as a "force for stability" every tin-pot potentate on the planet, the U.S. government should encourage countries to let information flow upward through the transparency that comes with political agitation. It should not fear fluctuations per se, since allowing them to be in the open, as Italy and Lebanon both show in different ways, creates the stability of small jumps.
  • As Seneca wrote in De clementia, "Repeated punishment, while it crushes the hatred of a few, stirs the hatred of all . . . just as trees that have been trimmed throw out again countless branches." The imposition of peace through repeated punishment lies at the heart of many seemingly intractable conflicts, including the Israeli-Palestinian stalemate. Furthermore, dealing with seemingly reliable high-level officials rather than the people themselves prevents any peace treaty signed from being robust. The Romans were wise enough to know that only a free man under Roman law could be trusted to engage in a contract; by extension, only a free people can be trusted to abide by a treaty. Treaties that are negotiated with the consent of a broad swath of the populations on both sides of a conflict tend to survive. Just as no central bank is powerful enough to dictate stability, no superpower can be powerful enough to guarantee solid peace alone.
  • As Jean-Jacques Rousseau put it, "A little bit of agitation gives motivation to the soul, and what really makes the species prosper is not peace so much as freedom." With freedom comes some unpredictable fluctuation. This is one of life's packages: there is no freedom without noise -- and no stability without volatility.
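The sand-pile image in the excerpt above has a standard formalization, the Bak-Tang-Wiesenfeld sandpile model, which is not part of the original article but makes the "deceptive calm" point concrete: grains are dropped one at a time, most cause nothing visible, and occasionally a single grain triggers a system-wide avalanche. The grid size, toppling threshold of four, and drop count below are illustrative assumptions, a minimal sketch rather than a faithful reproduction of any published simulation.

```python
import random

def topple(grid, size):
    """Relax the pile: any cell holding 4+ grains sheds one grain to each
    neighbour (grains falling off an edge are lost). Returns the number of
    topplings, i.e. the avalanche size triggered by the last dropped grain."""
    avalanche = 0
    unstable = [(r, c) for r in range(size) for c in range(size) if grid[r][c] >= 4]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:
            continue  # may already have been relaxed via a duplicate entry
        grid[r][c] -= 4
        avalanche += 1
        if grid[r][c] >= 4:
            unstable.append((r, c))  # still over threshold, topples again
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
    return avalanche

def simulate(size=20, drops=5000, seed=42):
    """Drop grains on random cells and record the avalanche size each causes."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(drops):
        r, c = random.randrange(size), random.randrange(size)
        grid[r][c] += 1
        avalanches.append(topple(grid, size))
    return avalanches

avalanches = simulate()
quiet = sum(1 for a in avalanches if a == 0)  # grains with no visible consequence
print(f"{quiet / len(avalanches):.0%} of grains caused no toppling at all")
print(f"largest single avalanche: {max(avalanches)} topplings")
```

The output illustrates the policy error the article describes: blaming the last grain (the "catalyst") rather than the accumulated structure of the pile (the "cause"), since the grains that precede a collapse look identical to the thousands that did nothing.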