New Media Ethics 2009 course
Group items matching "blogger" in title, tags, annotations or url
Weiye Loh

How should we use data to improve our lives? - By Michael Agger - Slate Magazine - 0 views

  • The Swiss economists Bruno Frey and Alois Stutzer argue that people do not appreciate the real cost of a long commute. And especially when that commute is unpredictable, it takes a toll on our daily well-being.
  • Imagine if we shared our commuting information so that we could calculate the average commute from various locations around a city. When the growing family of four pulls up to a house for sale in New Jersey, the listing would indicate not only the price and the number of bathrooms but also the rush-hour commute time to Midtown Manhattan. That would be valuable information to have, since buyers could realistically weigh the tradeoff between staying in a smaller space closer to work and moving to a larger space with a longer commute. [A minimal sketch of this kind of aggregation appears after these annotations.]
  • In a cover story for the New York Times Magazine, the writer Gary Wolf documented the followers of “The Data-Driven Life,” programmers, students, and self-described geeks who track various aspects of their lives. Seth Roberts does a daily math exercise to measure small changes in his mental acuity. Kiel Gilleade is a "Body Blogger" who shares his heart rate via Twitter. On the more extreme end, Mark Carranza has a searchable database of every idea he's had since 1984. They're not alone. This community continues to thrive, and its efforts are chronicled at a blog called the Quantified Self, co-founded by Wolf and Kevin Kelly.
  • ...3 more annotations...
  • If you've ever asked Nike+ to log your runs or given Google permission to keep your search history, you've participated in a bit of self-tracking. Now that more people have location-aware smartphones and the Web has made data easy to share, personal data is poised to become an important tool to understand how we live, and how we all might live better. One great example of this phenomenon in action is the site CureTogether, which allows you to enter your symptoms—for, say, "anxiety" or "insomnia"—and the various remedies you've tried to feel better. One thing the site does is aggregate this information and present the results in chart form. [The article reproduces the site's chart for depression.]
  • Instead of being isolated in your own condition, you can now see what has worked for others. The same principle is at work at the site Fuelly, where you can "track, share, and compare" your miles per gallon and see how efficient certain makes and models really are.
  • Businesses are also using data tracking to spur their employees toward companywide goals: Wal-Mart partnered with Zazengo to help employees track their "personal sustainability" actions, such as making a home-cooked meal or buying local produce. The app RescueTime, which records all of the activity on your computer, gives workers an easy way to account for their time. And that comes in handy when you want to show the boss how efficient telecommuting can be.
  • Data for a better planet
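To make the commute-sharing idea in the first annotation concrete, here is a minimal sketch of the kind of aggregation such a listing site could run. Everything in it — the listing IDs, the reported minutes, the field layout — is invented for illustration.

```python
from statistics import mean

# Hypothetical crowd-shared commute reports: (listing_id, rush-hour
# minutes to Midtown Manhattan). IDs and numbers are invented.
reports = [
    ("maplewood-nj-123", 52), ("maplewood-nj-123", 61), ("maplewood-nj-123", 48),
    ("hoboken-nj-456", 33), ("hoboken-nj-456", 41),
]

def average_commutes(reports):
    """Group shared commute times by listing and average them."""
    by_listing = {}
    for listing_id, minutes in reports:
        by_listing.setdefault(listing_id, []).append(minutes)
    return {lid: mean(times) for lid, times in by_listing.items()}

for listing, avg in sorted(average_commutes(reports).items()):
    print(f"{listing}: ~{avg:.0f} min rush-hour commute")
```

With enough reports per listing, the same grouping step could also expose the spread of commute times — the unpredictability that Frey and Stutzer identify as the real cost of a commute.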
Weiye Loh

Rationally Speaking: The problem of replicability in science - 0 views

  • The problem of replicability in science, by Massimo Pigliucci (cartoon from xkcd)
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • ...18 more annotations...
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant part of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention issues such as the fact that few scientists ever actually bother to replicate someone else's results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet from my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which set out to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well-known statistical phenomenon of regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you've probably heard of some athletes' and other celebrities' fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings "bad luck," meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment repeating itself are low, so next year's performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated.
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well-known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, "sexy" results, which creates a powerful filter against negative or marginally significant results. What you see in science journals, in other words, isn't a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty-handed. [A toy simulation of this filter, combined with regression to the mean, appears after these annotations.]
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the "publish or perish" culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPUs — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure-track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, about ⅓ of published studies are never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh
       
      Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science's already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change).
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh
       
      It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
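As flagged in the annotations above, a toy simulation shows how publication bias and regression to the mean jointly produce a decline effect. This is an illustrative sketch only — the true effect size, noise level and significance cutoff are invented, and it does not model any particular study: many noisy studies estimate a true effect, journals "publish" only the nominally significant ones, and unfiltered replications fall back toward the truth.

```python
import random

random.seed(42)
TRUE_EFFECT = 0.2   # the real underlying effect size (invented)
SE = 0.15           # standard error of each study's estimate (invented)
N_STUDIES = 10_000

# Each study yields a noisy estimate of the true effect.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Publication filter: only nominally "significant" results get printed,
# i.e. estimates more than ~1.96 standard errors above zero.
published = [e for e in estimates if e > 1.96 * SE]

# Each published finding is replicated once, with no filter applied.
replications = [random.gauss(TRUE_EFFECT, SE) for _ in published]

print(f"true effect:                {TRUE_EFFECT:.2f}")
print(f"mean of all studies:        {sum(estimates) / len(estimates):.2f}")
print(f"mean of published studies:  {sum(published) / len(published):.2f}")
print(f"mean of their replications: {sum(replications) / len(replications):.2f}")
```

The published mean comes out well above the true 0.2 while the replications average close to it: the effect "wears off" with no fraud and no change in the underlying phenomenon.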
Weiye Loh

Studying the politics of online science « through the looking glass - 0 views

  • Mendick, H. and Moreau, M. (2010). Monitoring the presence and representation of women in SET occupations in UK-based online media. Bradford: The UKRC.
  • Mendick and Moreau considered the representation of women on eight ‘SET’ (science, engineering and technology) websites: New Scientist, Bad Science, the Science Museum, the Natural History Museum, Neuroskeptic, Science: So What, Watt’s Up With That and RichardDawkins.net. They also monitored SET content across eight more general sites: the BBC, Channel 4, Sky, the Guardian, the Daily Mail, Wikipedia, YouTube and Twitter.
  • Their results suggest online science informational content is male-dominated, in that far more men than women are present. On some websites, they found no SET women. All of the 14 people in SET identified on the sampled pages of the RichardDawkins.net website were men, and so were all 29 of those mentioned on the sampled pages of the Channel 4 website (Mendick & Moreau, 2010: 11).
  • ...8 more annotations...
  • They found less hyperlinking of women’s than men’s names (Mendick & Moreau, 2010: 7). Personally, I’d have really liked some detail as to how they came up with this, and what constituted ‘hyperlinking of women’s names’ precisely. It’s potentially an interesting finding, but I can’t quite get a grip on what they are saying.
  • They also note that the women who did appear were often peripheral to the main story, or 'subject to muting' (i.e. seen but not heard). They also noted many instances where women were pictured but remained anonymous, as if they were used to illustrate a piece – for 'ornamental' purposes – and give the example of the Wikipedia entry on scientists, which includes a picture of a woman as an example, but stress that she is anonymous (Mendick & Moreau, 2010: 12).
  • Echoing findings of earlier research on science in the media (e.g. the Bimbo or Boffin paper), they noted that women, when represented, tended to be associated with 'feminine' attributes and activities, demonstrating empathy with children and animals, etc. They also noted a clustering in specific fields. For example, in the pages they'd sampled of the Guardian, they found seven mentions of women scientists compared with twenty-eight of men, and three of these women were in a single article, about Jane Goodall (Mendick & Moreau, 2010: 12-13).
  • The women presented were often discussed in terms of appearance, personality, sexuality and personal circumstances, again echoing previous research. They also noted that women scientists, when present, tended to be younger than the men, and there was a striking lack of ethnic diversity (Mendick & Moreau, 2010: 14).
  • I’m going to be quite critical of this research. It’s not actively bad, it just seems to lack depth and precision. I suspect Mendick and Moreau were doing their best with low resources and an overly-broad brief. I also think that we are still feeling our way in terms of working out how to study online science media, and so can learn something from such a critique.
  • Problem number one: it’s a small study, and yet a ginormous topic. I’d much rather they had looked at less, but made more of it. At times I felt like I was reading a cursory glance at online science.
  • Problem number two: the methodological script seemed a bit stuck in the print era. I felt the study lacked a feel for the variety of routes people take through online science. It lacked a sense of online science’s communities and cliques, its cultures and sub-cultures, its history and its people. It lacked context. Most of all, it lacked a sense of what I think sits at the center of online communication: the link.
  • It tries to look at too much, too quickly. We're told that of the blog entries sampled from Bad Science, three out of four of the women mentioned were associated with 'bad science', compared to 12 out of 27 of the men. They follow this up with a note that Goldacre has appeared on television critiquing Greenfield, a clip of which is on his site (Mendick & Moreau, 2010: 17-18). OK, but 'bad' needs unpacking here, as does the gendered nature of the area Goldacre takes aim at. As for Susan Greenfield, she is a very complex character when it comes to the politics of science and gender (one I'd say it is dangerous to treat representations of simplistically). Moreover, this is a very small sample, without much feel for the broader media context the Bad Science blog works within, including not only other platforms for Ben Goldacre's voice but comment threads, forums and a whole community of other 'bad science bloggers' (and their relationships with each other).
Weiye Loh

Roger Pielke Jr.'s Blog: Flood Disasters and Human-Caused Climate Change - 0 views

  • [UPDATE: Gavin Schmidt at Real Climate has a post on this subject that  -- surprise, surprise -- is perfectly consonant with what I write below.] [UPDATE 2: Andy Revkin has a great post on the representations of the precipitation paper discussed below by scientists and related coverage by the media.]  
  • Nature published two papers yesterday that discuss increasing precipitation trends and a 2000 flood in the UK.  I have been asked by many people whether these papers mean that we can now attribute some fraction of the global trend in disaster losses to greenhouse gas emissions, or even recent disasters such as in Pakistan and Australia.
  • I hate to pour cold water on a really good media frenzy, but the answer is "no."  Neither paper actually discusses global trends in disasters (one doesn't even discuss floods) or even individual events beyond a single flood event in the UK in 2000.  But still, can't we just connect the dots?  Isn't it just obvious?  And only deniers deny the obvious, right?
  • ...12 more annotations...
  • What seems obvious is sometimes just wrong. This of course is why we actually do research. So why is it that we shouldn't make what seems to be an obvious connection between these papers and recent disasters, as so many have already done?
  • First, the Min et al. paper seeks to identify a GHG signal in global precipitation over the period 1950-1999.  They focus on one-day and five-day measures of precipitation.  They do not discuss streamflow or damage.  For many years, an upwards trend in precipitation has been documented, and attributed to GHGs, even back to the 1990s (I co-authored a paper on precipitation and floods in 1999 that assumed a human influence on precipitation, PDF), so I am unsure what is actually new in this paper's conclusions.
  • However, accepting that precipitation has increased and can be attributed in some part to GHG emissions, corresponding increases in streamflow (floods) or damage have not been shown. How can this be? Think of it like this -- Precipitation is to flood damage as wind is to windstorm damage. It is not enough to say that it has become windier to make a connection to increased windstorm damage -- you need to show a specific increase in those specific wind events that actually cause damage. There are a lot of days that could be windier with no increase in damage; the same goes for precipitation.
  • My understanding of the literature on streamflow is that increases in peak streamflow commensurate with increases in precipitation have not been shown, and this is a robust finding across the literature. For instance, one recent review concludes: Floods are of great concern in many areas of the world, with the last decade seeing major fluvial events in, for example, Asia, Europe and North America. This has focused attention on whether or not these are a result of a changing climate. River flows calculated from outputs from global models often suggest that high river flows will increase in a warmer, future climate. However, the future projections are not necessarily in tune with the records collected so far – the observational evidence is more ambiguous. A recent study of trends in long time series of annual maximum river flows at 195 gauging stations worldwide suggests that the majority of these flow records (70%) do not exhibit any statistically significant trends. Trends in the remaining records are almost evenly split between having a positive and a negative direction. [A sketch of how such a trend test works appears after these annotations.]
  • Absent an increase in peak streamflows, it is impossible to connect the dots between increasing precipitation and increasing floods.  There are of course good reasons why a linkage between increasing precipitation and peak streamflow would be difficult to make, such as the seasonality of the increase in rain or snow, the large variability of flooding and the human influence on river systems.  Those difficulties of course translate directly to a difficulty in connecting the effects of increasing GHGs to flood disasters.
  • Second, the Pall et al. paper seeks to quantify the increased risk of a specific flood event in the UK in 2000 due to greenhouse gas emissions. It applies a methodology that was previously used with respect to the 2003 European heatwave. Taking the paper at face value, it clearly states that in England and Wales, there has not been an increasing trend in precipitation or floods. Thus, floods in this region are not a contributor to the global increase in disaster costs. Further, there has been no increase in Europe in normalized flood losses (PDF). Thus, the Pall et al. paper is focused on attribution in the context of a single event, not on trend detection in the region that it examines, much less any broader context.
  • More generally, the paper utilizes a seasonal forecast model to assess risk probabilities.  Given the performance of seasonal forecast models in actual prediction mode, I would expect many scientists to remain skeptical of this approach to attribution. Of course, if this group can show an improvement in the skill of actual seasonal forecasts by using greenhouse gas emissions as a predictor, they will have a very convincing case.  That is a high hurdle.
  • In short, the new studies are interesting and add to our knowledge.  But they do not change the state of knowledge related to trends in global disasters and how they might be related to greenhouse gases.  But even so, I expect that many will still want to connect the dots between greenhouse gas emissions and recent floods.  Connecting the dots is fun, but it is not science.
  • Jessica Weinkle said...
  • The thing about the Nature articles is that Nature itself made the leap from the science findings to damages in the News piece by Q. Schiermeier, through the decision to bring up the topic of insurance (not to mention what is symbolically represented by the journal's cover this week). With what I (maybe naively) believe to be a particularly ballsy move, the article quoted Muir-Wood, an industry scientist. However, what he is quoted as saying is admirably clever. Initially it is stated that Dr. Muir-Wood backs the notion that one cannot put the blame for increased losses on climate change. Then the article ends with a quote from him: "If there's evidence that risk is changing, then this is something we need to incorporate in our models."
  • This is a very slippery slope and a brilliant double-dog dare. Without doing anything but sitting back and watching the headlines, one can form the argument that "science" supports the remodeling of the hazard risk above the climatological average and that this is more important than the risks stemming from socioeconomic factors. The reinsurance industry itself has published that socioeconomic factors far outweigh changes in the hazard where losses are concerned. The point (and that which has particularly gotten my knickers in a knot) is that Nature, et al. may wish to consider what it is that they want to accomplish. Is it greater involvement of federal governments in the insurance/reinsurance industry on the premise that climate change is too great a loss risk for private industry alone, regardless of the financial burden it imposes? The move of insurance mechanisms into all corners of the earth under the auspices of climate change adaptation? Or simply a move to bolster prominence, regardless of whose back it breaks – including their own, if any of them are proud owners of a home mortgage? How much faith does one have in one's own model when one is told that hundreds of millions of dollars in the global economy are being bet against the odds that those models produce?
  • What Nature says matters to the world; what scientists say matters to the world – whether they care for the responsibility or not. That is, after all, the game of fame and fortune (aka prestige).
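On the trend-detection point in the streamflow annotation above: the cited review asks whether annual-maximum flow records show statistically significant trends. As a sketch of how one such test works — the Mann-Kendall test is a standard non-parametric choice for hydrological series, though the review does not say which test the underlying study used — here it is applied to invented flow data:

```python
import math

def mann_kendall(series):
    """Two-sided Mann-Kendall trend test (no correction for tied values).

    Returns (S, z, p): S > 0 suggests an upward trend, S < 0 a downward
    one; p is the two-sided p-value from the normal approximation.
    """
    n = len(series)
    # S counts concordant minus discordant pairs across all i < j.
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return s, z, p

# Invented annual-maximum flows (m^3/s) for a single gauging station.
flows = [412, 530, 388, 472, 455, 610, 398, 501, 466, 540,
         430, 487, 519, 402, 560, 441, 476, 505, 390, 528]
s, z, p = mann_kendall(flows)
print(f"S={s}, z={z:.2f}, p={p:.2f}")  # p > 0.05: no significant trend
```

Run over 195 stations, a tally of how many series give p < 0.05 — and in which direction — is exactly the kind of summary the quoted review reports.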
Weiye Loh

LRB · Jim Holt · Smarter, Happier, More Productive - 0 views

  • There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.
  • The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.
  • But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.
  • ...6 more annotations...
  • Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic.
  • Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser.
  • Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.
  • The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.
  • It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise in avoiding it altogether.
  • Empirical support for Carr's conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that 'hypertexts' impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen's story 'The Demon Lover' festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned 'linear' text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn't stop people feeling that this is so – one medical blogger quoted by Carr laments, 'I can't read War and Peace any more.'
Weiye Loh

Can a group of scientists in California end the war on climate change? | Science | The Guardian - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • ...20 more annotations...
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups was enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. "You can think of statisticians as the keepers of the scientific method," Brillinger told me. "Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for."
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
  • The wealth of data Rohde has collected so far – and some dates back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, of itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperature records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will take temperature records from individual stations and weight them according to how reliable they are. [A toy version of this gridding scheme is sketched after these annotations.]
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in celsius instead of fahrenheit. The time of day at which measurements are taken varies, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station more from wind and sun from one year to the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human interference. When the team publishes its results, this is where the scrutiny will be most intense.
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
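As a rough illustration of the grid-and-average scheme described in the annotations above (the station data are made up, and Berkeley Earth's actual reliability weighting and error correction are far more elaborate), one can bin stations into latitude-longitude cells, average within each cell so dense regions don't dominate, then combine cells with cosine-of-latitude area weights:

```python
import math
from collections import defaultdict

# Made-up station records: (latitude, longitude, temperature anomaly in C).
stations = [
    (51.5, 0.4, 0.42), (52.2, 1.1, 0.38),     # two stations sharing one cell
    (40.7, -74.0, 0.55), (-33.9, 151.2, 0.31),
]

CELL = 5.0  # grid cell size in degrees

def global_mean_anomaly(stations, cell=CELL):
    """Average stations within each lat-lon cell, then area-weight cells."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        cells[(math.floor(lat / cell), math.floor(lon / cell))].append((lat, anom))
    weighted_sum = total_weight = 0.0
    for members in cells.values():
        cell_mean = sum(a for _, a in members) / len(members)
        mean_lat = sum(lat for lat, _ in members) / len(members)
        weight = math.cos(math.radians(mean_lat))  # cells shrink toward the poles
        weighted_sum += weight * cell_mean
        total_weight += weight
    return weighted_sum / total_weight

print(f"global mean anomaly: {global_mean_anomaly(stations):+.2f} C")
```

Averaging within cells before weighting is what stops 20 stations in one region from counting 20 times — the oversampling issue Peter Thorne raises above.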
Weiye Loh

Join Us | Save the Internet - 0 views

  • The SavetheInternet.com Coalition is two million everyday people who have banded together with thousands of nonprofit organizations, businesses and bloggers to protect Internet freedom. The Coalition believes that the Internet is a crucial engine for economic growth, civic engagement and free speech. We're working together to preserve Net Neutrality, the First Amendment of the Internet, which ensures that the Internet remains open to new ideas, innovation and voices. Because of Net Neutrality, the Internet has always been a level playing field. People everywhere can have their voices heard by thousands, even millions, of others online. The SavetheInternet.com Coalition wants our leaders in Washington to pass strong Net Neutrality protections. We're calling on the president, Congress and the Federal Communications Commission to stand with the public and keep the Internet open.
Weiye Loh

Real Climate faces libel suit | Environment | guardian.co.uk - 0 views

  • Gavin Schmidt, a climate modeller and Real Climate member based at Nasa's Goddard Institute for Space Studies in New York, has claimed that Energy & Environment (E&E) has "effectively dispensed with substantive peer review for any papers that follow the editor's political line." The journal denies the claim, and, according to Schmidt, has threatened to take further action unless he retracts it.
  • Every paper that is submitted to the journal is vetted by a number of experts, said E&E's editor, Sonja Boehmer-Christiansen. But she did not deny that she allows her political agenda to influence which papers are published in the journal. "I'm not ashamed to say that I deliberately encourage the publication of papers that are sceptical of climate change," said Boehmer-Christiansen, who does not believe in man-made climate change.
  • Simon Singh, a science writer who last year won a major libel battle with the British Chiropractic Association (BCA), said: "A libel threat is potentially catastrophic. It can lead to a journalist going bankrupt or a blogger losing his house. A lot of journalists and scientists will understandably react to the threat of libel by retracting their articles, even if they are confident they are correct. So I'm delighted that Gavin Schmidt is going to stand up for what he has written." During the case with the BCA, Singh also received a libel threat in response to an article he had written about climate change, but Singh stood by what he had written and the threat was not carried through.
  • ...7 more annotations...
  • Schmidt has refused to retract his comments and maintains that the majority of papers published in the journal are "dross"."I would personally not credit any article that was published there with any useful contribution to the science," he told the Guardian. "Saying a paper was published in E&E has become akin to immediately discrediting it." He also describes the journal as a "backwater" of poorly presented and incoherent contributions that "anyone who has done any science can see are fundamentally flawed from the get-go."
  • Schmidt points to an E&E paper that claimed that the Sun is made of iron. "The editor sent it out for review, where it got trashed (as it should have been), and [Boehmer-Christiansen] published it anyway," he says.
  • The journal also published a much-maligned analysis suggesting that levels of the greenhouse gas carbon dioxide could go up and down by 100 parts per million in a year or two, prompting marine biologist Ralph Keeling at the Scripps Institution of Oceanography in La Jolla, California, to write a response to the journal, in which he asked: "Is it really the intent of E&E to provide a forum for laundering pseudo-science?"
  • Schmidt and Keeling are not alone in their criticisms. Roger Pielke Jr, a professor of environmental studies at the University of Colorado, said he regrets publishing a paper in the journal in 2000 – one year after it was established and before he had time to realise that it was about to become a fringe platform for climate sceptics. "[E&E] has published a number of low-quality papers, and the editor's political agenda has clearly undermined the legitimacy of the outlet," Pielke says. "If I had a time machine I'd go back and submit our paper elsewhere."
  • Any paper published in E&E is now ignored by the broader scientific community, according to Pielke. "In some cases perhaps that is justified, but I would argue that it provided a convenient excuse to ignore our paper on that basis alone, and not on the merits of its analysis," he said. In the long run, Pielke is confident that good ideas will win out over bad ideas. "But without care to the legitimacy of our science institutions – including journals and peer review – that long run will be a little longer," he says.
  • She has no intention of changing the way she runs E&E – which is not listed on the ISI Journal Master list, an official list of academic journals – in response to his latest criticisms.
  • Schmidt is unsurprised. "You would need a new editor, new board of advisors, and a scrupulous adherence to real peer review, perhaps ... using an open review process," he said. "But this is very unlikely to happen since their entire raison d'être is political, not scientific."
Weiye Loh

Libel Chill and Me « Skepticism « Critical Thinking « Skeptic North - 0 views

  • Skeptics may by now be very familiar with recent attempts in Canada to ban wifi from public schools and libraries.  In short: there is no valid scientific reason to be worried about wifi.  It has also been revealed that the chief scientists pushing the wifi bans have been relying on poor data and even poorer studies.  By far the vast majority of scientific data that currently exists supports the conclusion that wifi and cell phone signals are perfectly safe.
  • So I wrote about that particular topic in the summer.  It got some decent coverage, but the fear mongering continued. I wrote another piece after I did a little digging into one of the main players behind this, one Rodney Palmer, and I discovered some decidedly pseudo-scientific tendencies in his past, as well as some undisclosed collusion.
  • One night I came home after a long day at work, a long commute, and a phone call that a beloved family pet was dying and would soon be in significant pain. That is the state I was in when I read the news about Palmer and the Parliamentary committee.
  • ...18 more annotations...
  • That’s when I wrote my last significant piece for Skeptic North.  Titled, “Rodney Palmer: When Pseudoscience and Narcissism Collide,” it was a fiery take-down of every claim I heard Palmer speak before the committee, as well as reiterating some of his undisclosed collusion, unethical media tactics, and some reasons why he should not be considered an expert.
  • This time, the article got a lot more reader eyeballs than anything I had ever written for this blog (or my own) and it also caught the attention of someone on a school board which was poised to vote on wifi.  In these regards: Mission very accomplished.  I finally thought that I might be able to see some people in the media start to look at Palmer’s claims with a more critical eye than they had been previously, and I was flattered at the mountain of kind words, re-tweets, reddit comments and Facebook “likes.”
  • The comments section was mostly supportive of my article, and it was one of the few things that kept me from hiding in a hole for six weeks. There were a few comments in opposition to what I wrote, some sensible, most incoherent rambling (one commenter, when asked for evidence, actually linked to a YouTube video which they referred to as "peer reviewed").
  • One commenter was none other than the titular subject of the post, Rodney Palmer himself. [The post includes a screen shot of the libel/slander threat, not reproduced in this excerpt.]
  • Knowing full well the story of the libel threat against Simon Singh, I’ve always thought that if ever a threat like that came my way, I’d happily beat it back with the righteous fury and good humour of a person with the facts on their side.  After all, if I’m wrong, you’d be able to prove me wrong, rather than try to shut me up with a threat of a lawsuit.  Indeed, I’ve been through a similar situation once before, so I should be an old hat at this! Let me tell you friends, it’s not that easy.  In fact, it’s awful.  Outside observers could easily identify that Palmer had no case against me, but that was still cold comfort to me.  It is a very stressful situation to find yourself in.
  • The state of libel and slander laws in this country is such that a person can threaten a lawsuit without actually threatening a lawsuit. There is no need to hire a lawyer to investigate the claims, look into who I am, where I live, where I work, and issue a carefully worded threatening letter demanding compliance. All a person has to say is some version of "Libel. Slander. Hmmmm…," and that's enough to spook a lot of people into backing off. It's a modern-day bogeyman. They don't have to prove it. They don't have to act on it. A person or organization just has to say "BOO!" with sufficient seriousness, and unless you've got a good deal of editorial and financial support, discussion goes out the window. Libel Chill refers to the 'chilling effect' that the possibility of a libel/slander lawsuit has. If a person is scared they might get sued, then they won't even comment on a piece at all. In my case, I had already commented three times on the wifi scaremongering, but this bogus threat against me was surely a major contributing factor in my not commenting again.
  • I ceased to discuss anything in the comment thread of the original article, and even shied away from other comment threads calling me out.  I learned a great deal about the wifi/EMF issue after I wrote the article, but I did not comment on any of it, because I knew that Palmer and his supporters were watching me like a hawk (sorry to stretch the simile), and would likely try to silence me again.  I couldn’t risk a lawsuit.  Even though I knew there was no case against me, I couldn’t afford a lawyer just to prove that I hadn’t done anything illegal.
  • The Libel and Slander Act of Ontario, 1990 hasn’t really caught up with the internet.  There isn’t a clear precedent that defines a blog post, Twitter feed or Facebook post as falling under the umbrella of “broadcast,” which is what the Act addresses.  If I had written the original article in print, Palmer would have had six weeks to file suit against me.  But the internet is only sort of considered ‘broadcast.’  So it could be just six weeks, but he could also have up to two years to get a lawyer after me.  The truth is, there’s no clear demarcation point in our Canadian legal system.
  • Libel laws in Canada sit somewhere between the plaintiff-favoured UK system and the defendant-favoured US system.  On the one hand, if Palmer chose to incur the expense and time to hire a lawyer and file suit against me, the burden of proof would be on me to prove that I did not act with malice.  Easy peasy.  On the other hand, I would have a strong case that I acted in the best interests of Canadians, which would fall under the recent Supreme Court of Canada decision protecting what has been termed “responsible communication.”  That decision does not grant bloggers immunity from libel and slander suits, but it is a welcome dose of freedom to discuss issues of importance to Canadians.
  • Palmer himself did not specify anything against me in his threat.  There was nothing in particular that he complained about; he just said a version of “Libel and Slander!” at me.  He may as well have said “Boo!”
  • This is not a DBAD discussion (although I wholeheartedly agree with Phil Plait there). 
  • If you’d like to boil my lessons down to an acronym, I suppose the best one would be DBRBC: Don’t Be Reckless; Be Careful.
  • I wrote a piece that, although it was not incorrect in any measurable way, was written with fire and brimstone, piss and vinegar.  I stand by my piece, but I caution others to be a little more careful with the language they use.  Not because I think it is any less or more tactically advantageous (because I’m not sure anyone can conclusively demonstrate that being an aggressive jerk is an inherently better or worse communication tool), but because the risks aren’t always worth it.
  • I’m not saying don’t go after a person.  There are egomaniacs out there who deserve to be called out and taken down (verbally, of course).  But be very careful with what you say.
  • ask yourself some questions first: 1) What goal(s) are you trying to accomplish with this piece? Are you trying to convince people that there is a scientific misunderstanding here?  Are you trying to attract the attention of the mainstream media to a particular facet of the issue?  Are you really just pissed off and want to vent a little bit?  Is this article a catharsis, or is it communicative?  Be brutally honest with your intentions; it’s not as easy as you think.  Venting is okay.  So is vicious venting, but be careful what you dress it up as.
  • 2) In order to attain your goals, did you use data, or personalities?  If the former, are you citing the best, most current data you have available to you? Have you made a reasonable effort to check your data against any conflicting data that might be out there? If the latter, are you providing a mountain of evidence, and not just projecting onto personalities?  There is nothing inherently immoral or incorrect with going after the personalities.  But it is a very risky undertaking. You have to be damn sure you know what you’re talking about, and damn ready to defend yourself.  If you’re even a little loose with your claims, you will be called out for it, and a legal threat is very serious and stressful. So if you’re going after a personality, is it worth it?
  • 3) Are you letting the science speak for itself?  Are you editorializing?  Are you pointing out what part of your piece is data and what part is your opinion?
  • 4) If this piece was written in anger, frustration, or otherwise motivated by a powerful emotion, take a day.  Let your anger subside.  It will.  There are many cathartic enterprises out there, and you don’t need to react to the first one that comes your way.  Let someone else read your work before you share it with the internet.  Cooler heads definitely do think more clearly.
Weiye Loh

TOC - selective censorship? | The Online Citizen - 0 views

  • A recent article on Temasek Review has raised the issue of TOC’s moderation policy again. Titled ‘TOC: The overkill censor’, the article’s main contention was that TOC practises selective censorship, especially with regard to ‘Western-style social issues’. Specifically, it points to the discussion on an article regarding LGBT issues as an example of how TOC tries to skew the discussion towards its stance.
  • We make no apologies for being stricter with our moderation on LGBT issues, not only because past experience has shown that such discussions can easily degenerate into name-calling (words like ‘fags’ are disallowed) and derogatory remarks from both sides, but also because the topic touches on religion. We have taken pains to ensure that no one’s religion is derided simply because that person opposes LGBT rights. We have also made sure that no religious scriptures are referred to, as we feel that discussions on theology and interpretations of scripture are best held separately elsewhere.  As such, we have moderated references to scripture, be it from people who are for, or against, LGBT rights.
  • There were other allegations made against TOC as well, especially whenever we publish articles on LGBT issues: that TOC is pro-gay. Actually, TOC is pro-a-lot-of-things.  TOC is a platform for the disenfranchised. And this includes gay people who are fighting for rights – the same way the anti-death-penalty folks are, or groups like TWC2 and HOME are fighting for migrant rights. So, really, it is not that TOC supports the gay community per se, but that it supports what they are fighting for. There is a difference which people who discriminate against LGBTs do not seem to understand. We understand that this may not be a popular stance. However, it would be far more hypocritical not to speak up on the LGBT issue simply for fear of losing readership.
  • ...2 more annotations...
  • As for the allegation in the article that TOC seems more concerned with ‘Western social issues’, we suggest that readers do a count of the number of articles on LGBT issues as opposed to the articles we have done on the daily concerns of the average Singaporean. It is also inaccurate to suggest that we have not campaigned on these issues. We have held a Speakers’ Corner event to protest fare hikes. We have, in our individual capacities, written letters to the mainstream press on several issues, such as homelessness, some of which were published. Ironically, the one thing that TOC has not held a Speakers’ Corner event for is LGBT rights!
  • There are those who have accused us of being anti-Christian or anti-religious.  That is untrue. The TOC team and its contributors consist of Christians, Catholics, Muslims, Buddhists, Taoists, atheists, agnostics, etc. TOC has survived all these accusations for one simple reason – it continues to tell the stories of the disenfranchised, and it lets readers be the judge.
Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business trends to watch - 0 views

  • 1. Distributed cocreation moves into the mainstream: In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation: In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • 3. Collaboration at scale: Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things’: The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions. (A toy sketch of this behaviour-based pricing idea appears after this list.)
  • 5. Experimentation and big data: Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing. (A minimal sketch of such a controlled experiment also follows this list.)
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams.
  • 6. Wiring for a sustainable world: Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service: Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product. (A toy metering sketch of this pay-per-use model follows this list.)
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model: Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid: The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid: The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
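On trend 4, a short sketch may help make the "behaviour, not demographics" pricing idea concrete. It is a hypothetical illustration only: the trip fields, risk weights and premium formula below are invented for this sketch, not drawn from any real insurer's model.

    # Hypothetical usage-based insurance pricing from telematics sensor data.
    # All weights and the premium formula are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class TripSummary:
        km_driven: float
        hard_brakes: int      # sudden decelerations detected by the sensor
        night_km: float       # kilometres driven between 11pm and 5am
        avg_speed_kmh: float

    def risk_score(trips):
        """Combine telematics readings into a behaviour-based risk score."""
        total_km = sum(t.km_driven for t in trips) or 1.0
        brakes_per_100km = 100 * sum(t.hard_brakes for t in trips) / total_km
        night_share = sum(t.night_km for t in trips) / total_km
        over_speed = sum(max(0, t.avg_speed_kmh - 100) for t in trips) / max(len(trips), 1)
        # Behaviour, not demographics, drives the score (invented weights).
        return 0.05 * brakes_per_100km + 0.5 * night_share + 0.01 * over_speed

    def monthly_premium(base, trips):
        """Scale a base premium by observed driving behaviour."""
        return round(base * (1 + risk_score(trips)), 2)

    trips = [TripSummary(420, 3, 15, 92), TripSummary(180, 9, 60, 118)]
    print(monthly_premium(80.0, trips))  # -> 100.2

The point of the sketch is only that every input is a reading from the vehicle itself, so two drivers with identical demographics can end up paying very different premiums.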
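On trend 5, the "test and learn" mind-set comes down to routine controlled experiments on live data. As a minimal sketch (invented figures; a real programme would add power analysis and multiple-testing corrections), this is the arithmetic behind deciding whether a product variant actually moved a conversion metric:

    # Minimal "test and learn" sketch: a two-proportion z-test comparing a
    # control (A) and a variant (B). The conversion counts are invented.
    from math import sqrt, erf

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Return (absolute lift, two-sided p-value) for B versus A."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
        return p_b - p_a, p_value

    lift, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=552, n_b=10_000)
    print(f"lift={lift:.4f}, p={p:.3f}")  # ship B only if p is convincingly small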
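And on trend 7, "paying only for what you use" ultimately reduces to metering usage events and pricing them per unit. A toy sketch, with invented rates and usage records:

    # Toy "anything as a service" metering: bill fine-grained usage instead
    # of selling the asset outright. Rates and events are invented.
    from collections import defaultdict

    RATES = {"compute_hours": 0.12, "gb_stored": 0.02, "api_calls": 0.0001}

    def monthly_bill(usage_events):
        """usage_events: iterable of (customer, metric, quantity) tuples."""
        totals = defaultdict(lambda: defaultdict(float))
        for customer, metric, qty in usage_events:
            totals[customer][metric] += qty
        return {c: round(sum(RATES[m] * q for m, q in metrics.items()), 2)
                for c, metrics in totals.items()}

    events = [("acme", "compute_hours", 730), ("acme", "gb_stored", 500),
              ("acme", "api_calls", 2_000_000), ("globex", "compute_hours", 24)]
    print(monthly_bill(events))  # {'acme': 297.6, 'globex': 2.88}

The customer sees a variable cost that tracks consumption, which is exactly why B2B buyers prefer it to a large capital purchase.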
Weiye Loh

Twitter unmasks anonymous British user in landmark legal battle | Technology | The Guardian - 0 views

  • Giggs brought the lawsuit at the high court in London and the move to use California courts is likely to be seen as a landmark moment in the internet privacy battle. Ahmed Khan, the South Tyneside councillor accused of being the author of the pseudonymous Twitter accounts, described the council's move as "Orwellian". Khan received an email from Twitter earlier this month informing him that the site had handed over his personal information. He denies being the author of the allegedly defamatory material.
  • Khan said the information Twitter handed over was "just a great long list of numbers". The subpoena ordered Twitter to hand over 30 pieces of information relating to several Twitter accounts, including @fatcouncillor and @ahmedkhan01. "I don't fully understand it but it all relates to my Twitter account and it not only breaches my human rights, but it potentially breaches the human rights of anyone who has ever sent me a message on Twitter."
  • He added: "I was never even told they were taking this case to court in California. The first I heard was when Twitter contacted me. I had just 14 days to defend the case and I was expected to fly 6,000 miles and hire my own lawyer – all at my expense. "Even if they unmask this blogger, what does the council hope to achieve? The person or persons concerned is simply likely to declare bankruptcy and the council won't recover any money it has spent."
Weiye Loh

Skepticblog » Global Warming Skeptic Changes His Tune - by Doing the Science Himself - 0 views

  • To the global warming deniers, Muller had been an important scientific figure with good credentials who had expressed doubt about the temperature data used to track the last few decades of global warming. Muller was influenced by Anthony Watts, a former TV weatherman (not a trained climate scientist) and blogger who has argued that the data set is mostly from large cities, where the “urban heat island” effect might bias the overall pool of worldwide temperature data. Climate scientists have pointed out that they have accounted for this possible effect already, but Watts and Muller were unconvinced. With $150,000 (25% of their funding) from the Koch brothers (the nation’s largest supporters of climate denial research), as well as the Getty Foundation (their wealth largely based on oil money) and other funding sources, Muller set out to reanalyze all the temperature data by setting up the Berkeley Earth Surface Temperature Project.
  • Although only 2% of the data had been analyzed by last month, the Republican climate deniers in Congress called him to testify in their March 31 hearing to attack global warming science, expecting him to give them scientific data supporting their biases. To their dismay, Muller behaved like a real scientist and not an ideologue—he followed his data and told them the truth, not what they wanted to hear. Muller pointed out that his analysis of the data set almost exactly tracked what the National Oceanic and Atmospheric Administration (NOAA), the Goddard Institute for Space Studies (GISS), and the Climatic Research Unit at the University of East Anglia in the UK had already published (see figure). (A toy sketch of what such a trend estimate involves appears after these notes.)
  • Muller testified before the House Committee that: The Berkeley Earth Surface Temperature project was created to make the best possible estimate of global temperature change using as complete a record of measurements as possible and by applying novel methods for the estimation and elimination of systematic biases. We see a global warming trend that is very similar to that previously reported by the other groups. The world temperature data has sufficient integrity to be used to determine global temperature trends. Despite potential biases in the data, methods of analysis can be used to reduce bias effects well enough to enable us to measure long-term Earth temperature changes. Data integrity is adequate. Based on our initial work at Berkeley Earth, I believe that some of the most worrisome biases are less of a problem than I had previously thought.
  • The right-wing ideologues were sorely disappointed, and reacted viciously in the political sphere by attacking their own scientist, but Muller’s scientific integrity overcame any biases he might have harbored at the beginning. He “called ‘em as he saw ‘em” and told truth to power.
  • it speaks well of the scientific process when a prominent skeptic like Muller does his job properly and admits that his original biases were wrong. As reported in the Los Angeles Times: Ken Caldeira, an atmospheric scientist at the Carnegie Institution for Science, which contributed some funding to the Berkeley effort, said Muller’s statement to Congress was “honorable” in recognizing that “previous temperature reconstructions basically got it right…. Willingness to revise views in the face of empirical data is the hallmark of the good scientific process.”
  • This is the essence of the scientific method at its best. There may be biases in our perceptions, and we may want to find data that fits our preconceptions about the world, but if science is done properly, we get a real answer, often one we did not expect or didn’t want to hear. That’s the true test of when science is giving us a reality check: when it tells us “an inconvenient truth”, something we do not like, but is inescapable if one follows the scientific method and analyzes the data honestly.
  • Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses nature leads, or you shall learn nothing.
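As a rough illustration of what a surface-temperature trend estimate involves at its very simplest, here is a toy sketch: average station anomalies into yearly global means, then fit a least-squares slope. The station records below are synthetic, and the Berkeley Earth project's actual methods (spatial interpolation, systematic-bias estimation) are far more elaborate than this.

    # Toy global-temperature trend estimate. Station anomalies are synthetic;
    # real reconstructions use thousands of stations plus bias corrections.
    def yearly_global_mean(records):
        """records: list of (year, station_id, anomaly_in_C) tuples."""
        by_year = {}
        for year, _station, anomaly in records:
            by_year.setdefault(year, []).append(anomaly)
        return {y: sum(v) / len(v) for y, v in sorted(by_year.items())}

    def trend_per_decade(series):
        """Ordinary least-squares slope of {year: value}, in C per decade."""
        xs, ys = list(series), list(series.values())
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return 10 * num / den

    records = [(1980, "A", 0.10), (1980, "B", 0.14), (2000, "A", 0.42),
               (2000, "B", 0.38), (2010, "A", 0.55), (2010, "B", 0.61)]
    print(f"{trend_per_decade(yearly_global_mean(records)):.2f} C/decade")  # 0.15

The point Muller made is the same at any scale: independent groups analysing the data honestly converge on essentially the same warming trend.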
Weiye Loh

TODAYonline | Commentary | For the info-rich and time-poor, digital curators to the rescue? - 0 views

  • digital "curators" choose and present things related to a specific topic and context. They "curate", as opposed to "aggregate", which implies plain collecting with little or no value add. Viewed in this context, Google search does the latter, not the former. So, who curates? The Huffington Post, or HuffPo, is one high-profile example and, it appears, a highly-valued one too, going by AOL numbers-crunchers who forked out US$315 million (S$396.9 million) to acquire it. Accolades have also come in for Arianna Huffington's team of contributors and more than 3,000 bloggers - from politicians to celebrities to think-tankers. The website was named second among the 25 best blogs of 2009 by Time magazine, and most powerful blog in the world by The Observer.
  • By sifting, sorting and presenting news and views - yes, "curating" - HuffPo makes itself useful in an age of too much information and too many opinions. (Strictly speaking, HuffPo is both a creator and curator.) If what HuffPo is doing seems deja vu, it is hardly surprising. Remember the good old "curated" news of the pre-Internet days when newspapers decided what news was published and what we read? Then, the Editor was the Curator with the capital "C".
  • But with the arrival of the Internet and the uploading of news and views by organisations and netizens, the bits and bytes have turned into a tsunami. Aggregators like Google search threw us some life buoys, using text and popularity to filter the content. But with millions of new articles and videos added to the Internet daily, the "right" content has become that proverbial needle in the haystack. Hence the need for curation. (A toy sketch contrasting aggregation with curation follows these notes.)
  • Inundated by the deluge of information, and with little time on our hands, some of us turn to social media networks. Sometimes, postings by friends are useful. But often, the typically self-indulgent musings are not. It's "curators" to the rescue.
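The aggregate-versus-curate distinction above can even be sketched in a few lines of code. This is a toy model with invented data and scoring, not how Google or HuffPo actually work: the aggregator filters by text match and ranks by raw popularity, while the curator hand-picks items and adds context.

    # Toy contrast between aggregation and curation. Data and scoring are
    # invented for illustration.
    articles = [
        {"title": "Ten wifi myths",        "matches_query": True,  "clicks": 90_000},
        {"title": "EMF: what studies say", "matches_query": True,  "clicks": 4_000},
        {"title": "Celebrity gossip",      "matches_query": False, "clicks": 500_000},
    ]

    def aggregate(items, limit=2):
        """Aggregator: filter by text match, rank purely by popularity."""
        hits = [a for a in items if a["matches_query"]]
        return sorted(hits, key=lambda a: a["clicks"], reverse=True)[:limit]

    def curate(items, editor_picks, notes):
        """Curator: a human selects items and adds context -- the value add."""
        picked = [a for a in items if a["title"] in editor_picks]
        return [{**a, "editor_note": notes[a["title"]]} for a in picked]

    print(aggregate(articles))
    print(curate(articles, {"EMF: what studies say"},
                 {"EMF: what studies say": "The best evidence summary this month."}))

The aggregator surfaces whatever is popular and textually relevant; the curator's output is smaller but carries judgment and context, which is the "value add" the columnist describes.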
Weiye Loh

Response to Guardian's Article on Singapore Elections | the kent ridge common - 0 views

  • The first reductive move made by the writer occurs here: “Singapore is known worldwide for censorship and corporal punishment.” This is the Western media’s favourite trope of our island-nation. A whole political context and dynamic society gets reduced to these two ‘dirty’ words, at least for a ‘Western’ world that prides itself on ‘freedom’ and believes itself to be on a moral high ground because of this veritable self-image. (One could argue that censorship in the ‘West’ exists but in a different form – there, capitalist hegemons control media companies which quite effectively draw the boundaries of public debate.)
  • The writer first makes the observation that lots of people have started to speak up and speak out against the "clan" that has ruled Singapore for almost 50 years. The People's Action Party is, for Ms Hodal, not a political party but a "clan" – a word which harks back to tribal societies, to tribalism.
  • Out of all these unsuitable candidates, the writer chose the Arab Spring as the comparative situation of choice for Singapore, despite the fact that the Arab Spring movements did not occur at a time of elections, that much of the physical 'protesting' in Singapore was witnessed at political rallies, and that there was no bottom-up movement of 'revolt'. It is the time of the elections; it is a nationally licensed period of political behaviour and action, for society to perform a cathartic release, for the Bakhtinian carnivalesque to unfold.
  • The reductive move is completed in this next sentence: “Parallels with the Arab spring are striking, even if revolution is not just around the corner.”
  • She writes, “Most murmurs of discontent can be found online: fears of reprisal are diminished for anonymous bloggers. On internet forums, blogs, Facebook and Twitter, grumblings about high housing prices, the widening gap between rich and poor, immigration laws and the salaries of government ministers (among the highest in the world) are hot topics.” The most popular online newspapers, barring the Temasek Review, are The Online Citizen, mr. brown, Mr. Wang Says So, Yawningbread, etc. All these are run by people who publicly reveal their names, which increases the credibility of these sites and also instills a sense of responsibility in their writings. This is part of the reason for their enduring popularity.
Weiye Loh

FleetStreetBlues: Independent columnist Johann Hari admits copying and pasting interview quotes - 0 views

  • this isn't just a case of referencing something the interviewee has written previously - 'As XXX has written before...', or suchlike. No, Hari adds dramatic context to quotes which were never said - the following paragraph, for instance, is one of the quotes from the Levy interview which seems to have appeared elsewhere before. After saying this, he falls silent, and we stare at each other for a while. Then he says, in a quieter voice: “The facts are clear. Israel has no real intention of quitting the territories or allowing the Palestinian people to exercise their rights. No change will come to pass in the complacent, belligerent, and condescending Israel of today. This is the time to come up with a rehabilitation programme for Israel.”
  • So how does Hari justify it? Well, his post on 'Interview etiquette', as he calls it, is so stunningly brazen about playing fast-and-loose with quotes
  • When I’ve interviewed a writer, it’s quite common that they will express an idea or sentiment to me that they have expressed before in their writing – and, almost always, they’ve said it more clearly in writing than in speech. (I know I write much more clearly than I speak – whenever I read a transcript of what I’ve said, it always seems less clear and more clotted. I think we’ve all had that sensation in one form or another). So occasionally, at the point in the interview where the subject has expressed an idea, I’ve quoted the idea as they expressed it in writing, rather than how they expressed it in speech. It’s a way of making sure the reader understands the point that (say) Gideon Levy wants to make as clearly as possible, while retaining the directness of the interview. Since my interviews are intellectual portraits that I hope explain how a person thinks, it seemed the most thorough way of doing it...
  • ...3 more annotations...
  • ...I’m a bit bemused to find one blogger considers this “plagiarism”. Who’s being plagiarized? Plagiarism is passing off somebody else’s intellectual work as your own – whereas I’m always making it clear that (say) Gideon Levy’s thought is Gideon Levy’s thought. I’m also a bit bemused to find that some people consider this “churnalism”. Churnalism is a journalist taking a press release and mindlessly recycling it – not a journalist carefully reading over all a writer’s books and selecting parts of it to accurately quote at certain key moments to best reflect how they think.
  • I called round a few other interviewers for British newspapers and they said what I did was normal practice and they had done it themselves from time to time. My test for journalism is always – would the readers mind you did this, or prefer it? Would they rather I quoted an unclear sentence expressing a thought, or a clear sentence expressing the same thought by the same person very recently? Both give an accurate sense of what a person is like, but one makes their ideas as accessible as possible for the reader while also being an accurate portrait of the person.
  • The Independent's top columnist and interviewer has just admitted that he routinely adds things his interviewees have written at some point in the past to their quotes, and then deliberately passes these statements off as though they were said to him in the course of an interview. The main art of being an interviewer is to be skilled at eliciting the right quotes from your subject. If Johann Hari wants to write 'intellectual portraits', he should go and write fiction. Do his editors really know that the copy they're printing ('we stare at each other for a while. Then he says in a quieter voice...') is essentially made up? What would Jayson Blair make of it all? Astonishing.
  • In the last few days, a couple of blogs have been scrutinising the work of Johann Hari, the multiple award-winning Independent columnist and interviewer. A week ago on Friday, the political DSG blog pointed out an eerie series of similarities between the quotes in Hari's interview with Toni Negri in 2004 and quotes in the book Negri on Negri, published in 2003. Brian Whelan, an editor with Yahoo! Ireland and a regular FleetStreetBlues contributor, spotted this and got in touch to suggest perhaps this wasn't the only time quotes in Hari's interviews had appeared elsewhere before. We ummed and ahhed slightly about running the piece based on one analysis from a self-proclaimed leftist blog - so Brian went away and did some analysis of his own. And found that a number of quotes in Hari's interview with Gideon Levy in the Independent last year had also been copied from elsewhere. So far, so scurrilous. But what's really astonishing is that Johann Hari has now responded to the blog accusations. And cheerfully admitted that he regularly includes in interviews quotes which the interviewee never actually said to him.
Weiye Loh

Interview etiquette : Johann Hari - 0 views

  • occasionally, at the point in the interview where the subject has expressed an idea, I’ve quoted the idea as they expressed it in writing, rather than how they expressed it in speech. It’s a way of making sure the reader understands the point that (say) Gideon Levy wants to make as clearly as possible, while retaining the directness of the interview.
  • if somebody interviewed me and asked my views of Martin Amis, instead of quoting me as saying “Um, I think, you know, he got the figures for, uh, how many Muslims there are in Europe upside down”, they could quote instead what I’d written more cogently about him a month before, as a more accurate representation of my thoughts. I stress: I have only ever done this where the interviewee was making the same or very similar point to me in the interview that they had already made more clearly in print.
  • after doing what must be over fifty interviews, none of my interviewees have ever said they had been misquoted, even when they feel I’ve been very harsh on them in other ways.
  • Gideon Levy said, after my interview with him was published, that it was “the most accurate take on me anyone has written” and “profoundly moved him” – which hardly fits with the idea it was an inaccurate or misleading picture.
  • one blogger considers this “plagiarism”. Who’s being plagiarized? Plagiarism is passing off somebody else’s intellectual work as your own – whereas I’m always making it clear that (say) Gideon Levy’s thought is Gideon Levy’s thought. I’m also a bit bemused to find that some people consider this “churnalism”. Churnalism is a journalist taking a press release and mindlessly recycling it – not a journalist carefully reading over all a writer’s books and selecting parts of it to accurately quote at certain key moments to best reflect how they think.
  • I called round a few other interviewers for British newspapers and they said what I did was normal practice and they had done it themselves from time to time. My test for journalism is always – would the readers mind you did this, or prefer it? Would they rather I quoted an unclear sentence expressing a thought, or a clear sentence expressing the same thought by the same person very recently? Both give an accurate sense of what a person is like, but one makes their ideas as accessible as possible for the reader while also being an accurate portrait of the person.