
New Media Ethics 2009 course: Group items tagged "support"


Weiye Loh

Information technology and economic change: The impact of the printing press | vox - Re... - 0 views

  • Despite the revolutionary technological advance of the printing press in the 15th century, there is precious little economic evidence of its benefits. Using data on 200 European cities between 1450 and 1600, this column finds that economic growth was higher by as much as 60 percentage points in cities that adopted the technology.
  • Historians argue that the printing press was among the most revolutionary inventions in human history, responsible for a diffusion of knowledge and ideas, “dwarfing in scale anything which had occurred since the invention of writing” (Roberts 1996, p. 220). Yet economists have struggled to find any evidence of this information technology revolution in measures of aggregate productivity or per capita income (Clark 2001, Mokyr 2005). The historical data thus present us with a puzzle analogous to the famous Solow productivity paradox – that, until the mid-1990s, the data on macroeconomic productivity showed no effect of innovations in computer-based information technology.
  • In recent work (Dittmar 2010a), I examine the revolution in Renaissance information technology from a new perspective by assembling city-level data on the diffusion of the printing press in 15th-century Europe. The data record each city in which a printing press was established 1450-1500 – some 200 out of over 1,000 historic cities (see also an interview on this site, Dittmar 2010b). The research emphasises cities for three principal reasons. First, the printing press was an urban technology, producing for urban consumers. Second, cities were seedbeds for economic ideas and social groups that drove the emergence of modern growth. Third, city sizes were historically important indicators of economic prosperity, and broad-based city growth was associated with macroeconomic growth (Bairoch 1988, Acemoglu et al. 2005).
  • Figure 1 summarises the data and shows how printing diffused from Mainz 1450-1500. Figure 1. The diffusion of the printing press
  • City-level data on the adoption of the printing press can be exploited to examine two key questions: Was the new technology associated with city growth? And, if so, how large was the association? I find that cities in which printing presses were established 1450-1500 had no prior growth advantage, but subsequently grew far faster than similar cities without printing presses. My work uses a difference-in-differences estimation strategy to document the association between printing and city growth. The estimates suggest early adoption of the printing press was associated with a population growth advantage of 21 percentage points 1500-1600, when mean city growth was 30 percentage points. The difference-in-differences model shows that cities that adopted the printing press in the late 1400s had no prior growth advantage, but grew at least 35 percentage points more than similar non-adopting cities from 1500 to 1600.
  • The restrictions on diffusion meant that cities relatively close to Mainz were more likely to receive the technology, other things equal. Printing presses were established in 205 cities 1450-1500, but not in 40 of Europe’s 100 largest cities. Remarkably, regulatory barriers did not limit diffusion. Printing fell outside existing guild regulations and was not resisted by scribes, princes, or the Church (Neddermeyer 1997, Barbier 2006, Brady 2009).
  • Historians observe that printing diffused from Mainz in “concentric circles” (Barbier 2006). Distance from Mainz was significantly associated with early adoption of the printing press, but neither with city growth before the diffusion of printing nor with other observable determinants of subsequent growth. The geographic pattern of diffusion thus arguably allows us to identify exogenous variation in adoption. Exploiting distance from Mainz as an instrument for adoption, I find large and significant estimates of the relationship between the adoption of the printing press and city growth. I find a 60 percentage point growth advantage between 1500 and 1600. [A minimal sketch of these difference-in-differences and instrumental-variable strategies follows these excerpts.]
  • The importance of distance from Mainz is supported by an exercise using “placebo” distances. When I employ distance from Venice, Amsterdam, London, or Wittenberg instead of distance from Mainz as the instrument, the estimated print effect is statistically insignificant.
  • Cities that adopted print media benefitted from positive spillovers in human capital accumulation and technological change broadly defined. These spillovers exerted an upward pressure on the returns to labour, made cities culturally dynamic, and attracted migrants. In the pre-industrial era, commerce was a more important source of urban wealth and income than tradable industrial production. Print media played a key role in the development of skills that were valuable to merchants. Following the invention of printing, European presses produced a stream of math textbooks used by students preparing for careers in business.
  • These and hundreds of similar texts worked students through problem sets concerned with calculating exchange rates, profit shares, and interest rates. Broadly, print media was also associated with the diffusion of cutting-edge business practice (such as book-keeping), literacy, and the social ascent of new professionals – merchants, lawyers, officials, doctors, and teachers.
  • The printing press was one of the greatest revolutions in information technology. The impact of the printing press is hard to identify in aggregate data. However, the diffusion of the technology was associated with extraordinary subsequent economic dynamism at the city level. European cities were seedbeds of ideas and business practices that drove the transition to modern growth. These facts suggest that the printing press had very far-reaching consequences through its impact on the development of cities.
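The difference-in-differences and instrumental-variable strategies described in these excerpts can be illustrated with a minimal sketch on synthetic city data. Everything below (the invented cities, the assumed 35-point adoption effect, the simple two-period setup) is an illustrative assumption, not Dittmar's actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic cities; every number here is an illustrative assumption

# Instrument: distance from Mainz (km). Adoption gets less likely with distance,
# mimicking the "concentric circles" diffusion pattern described above.
dist_mainz = rng.uniform(50, 1500, n)
adopt = (rng.uniform(size=n) < np.clip(1.2 - dist_mainz / 1000, 0.05, 0.95)).astype(float)

# City growth in percentage points: adopters get an assumed boost only after 1500.
growth_pre = 20 + 10 * rng.standard_normal(n)                # 1450-1500
growth_post = 30 + 35 * adopt + 10 * rng.standard_normal(n)  # 1500-1600

# Difference-in-differences: change for adopters minus change for non-adopters.
did = ((growth_post[adopt == 1] - growth_pre[adopt == 1]).mean()
       - (growth_post[adopt == 0] - growth_pre[adopt == 0]).mean())
print(f"Diff-in-diff estimate: {did:.1f} percentage points")

def ols(y, X):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Two-stage least squares, using distance from Mainz as an instrument for adoption.
Z = np.column_stack([np.ones(n), dist_mainz])
adopt_hat = Z @ ols(adopt, Z)  # first stage: predicted adoption
beta_iv = ols(growth_post, np.column_stack([np.ones(n), adopt_hat]))[1]
print(f"IV (2SLS) estimate: {beta_iv:.1f} percentage points")

# Placebo check: a distance unrelated to adoption should give a weak first stage.
dist_placebo = rng.uniform(50, 1500, n)
print(f"Placebo first-stage correlation: {np.corrcoef(dist_placebo, adopt)[0, 1]:.2f}")
```

The placebo line mirrors the "placebo distances" check quoted above: an instrument that does not predict adoption cannot deliver a meaningful second-stage estimate.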
Weiye Loh

Anonymous speaks: the inside story of the HBGary hack - 0 views

  • It has been an embarrassing week for security firm HBGary and its HBGary Federal offshoot. HBGary Federal CEO Aaron Barr thought he had unmasked the hacker hordes of Anonymous and was preparing to name and shame those responsible for co-ordinating the group's actions, including the denial-of-service attacks that hit MasterCard, Visa, and other perceived enemies of WikiLeaks late last year.
  • When Barr told one of those he believed to be an Anonymous ringleader about his forthcoming exposé, the Anonymous response was swift and humiliating. HBGary's servers were broken into, its e-mails pillaged and published to the world, its data destroyed, and its website defaced. As an added bonus, a second site owned and operated by Greg Hoglund, owner of HBGary, was taken offline and the user registration database published.
  • HBGary and HBGary Federal position themselves as experts in computer security. The companies offer both software and services to both the public and private sectors. On the software side, HBGary has a range of computer forensics and malware analysis tools to enable the detection, isolation, and analysis of worms, viruses, and trojans. On the services side, it offers expertise in implementing intrusion detection systems and secure networking, and performs vulnerability assessment and penetration testing of systems and software. A variety of three letter agencies, including the NSA, appeared to be in regular contact with the HBGary companies, as did Interpol, and HBGary also worked with well-known security firm McAfee. At one time, even Apple expressed an interest in the company's products or services.
  • One might think that such an esteemed organization would prove an insurmountable challenge for a bunch of disaffected kids to hack. World-renowned, government-recognized experts against Anonymous? HBGary should be able to take their efforts in stride. Unfortunately for HBGary, neither the characterization of Anonymous nor the assumption of competence on the security company's part is accurate, as the story of how HBGary was hacked will make clear. Anonymous is a diverse bunch: though they tend to be younger rather than older, their age group spans decades. Some may still be in school, but many others are gainfully employed office workers, software developers, or IT support technicians, among other things. With that diversity in age and experience comes a diversity of expertise and ability.
Weiye Loh

Roger Pielke Jr.'s Blog: Flood Disasters and Human-Caused Climate Change - 0 views

  • [UPDATE: Gavin Schmidt at Real Climate has a post on this subject that  -- surprise, surprise -- is perfectly consonant with what I write below.] [UPDATE 2: Andy Revkin has a great post on the representations of the precipitation paper discussed below by scientists and related coverage by the media.]  
  • Nature published two papers yesterday that discuss increasing precipitation trends and a 2000 flood in the UK.  I have been asked by many people whether these papers mean that we can now attribute some fraction of the global trend in disaster losses to greenhouse gas emissions, or even recent disasters such as in Pakistan and Australia.
  • I hate to pour cold water on a really good media frenzy, but the answer is "no."  Neither paper actually discusses global trends in disasters (one doesn't even discuss floods) or even individual events beyond a single flood event in the UK in 2000.  But still, can't we just connect the dots?  Isn't it just obvious?  And only deniers deny the obvious, right?
  • What seems obvious is sometimes just wrong.  This of course is why we actually do research.  So why is it that we shouldn't make what seems to be an obvious connection between these papers and recent disasters, as so many have already done?
  • First, the Min et al. paper seeks to identify a GHG signal in global precipitation over the period 1950-1999.  They focus on one-day and five-day measures of precipitation.  They do not discuss streamflow or damage.  For many years, an upwards trend in precipitation has been documented, and attributed to GHGs, even back to the 1990s (I co-authored a paper on precipitation and floods in 1999 that assumed a human influence on precipitation, PDF), so I am unsure what is actually new in this paper's conclusions.
  • However, accepting that precipitation has increased and can be attributed in some part to GHG emissions, no corresponding increases in streamflow (floods) or damage have been shown. How can this be?  Think of it like this -- Precipitation is to flood damage as wind is to windstorm damage.  It is not enough to say that it has become windier to make a connection to increased windstorm damage -- you need to show a specific increase in those specific wind events that actually cause damage. There are a lot of days that could be windier with no increase in damage; the same goes for precipitation.
  • My understanding of the literature on streamflow is that increases in peak streamflow commensurate with increases in precipitation have not been shown, and this is a robust finding across the literature.  For instance, one recent review concludes: Floods are of great concern in many areas of the world, with the last decade seeing major fluvial events in, for example, Asia, Europe and North America. This has focused attention on whether or not these are a result of a changing climate. River flows calculated from outputs from global models often suggest that high river flows will increase in a warmer, future climate. However, the future projections are not necessarily in tune with the records collected so far – the observational evidence is more ambiguous. A recent study of trends in long time series of annual maximum river flows at 195 gauging stations worldwide suggests that the majority of these flow records (70%) do not exhibit any statistically significant trends. Trends in the remaining records are almost evenly split between having a positive and a negative direction. [A minimal sketch of this kind of station-by-station trend test follows these excerpts.]
  • Absent an increase in peak streamflows, it is impossible to connect the dots between increasing precipitation and increasing floods.  There are of course good reasons why a linkage between increasing precipitation and peak streamflow would be difficult to make, such as the seasonality of the increase in rain or snow, the large variability of flooding and the human influence on river systems.  Those difficulties of course translate directly to a difficulty in connecting the effects of increasing GHGs to flood disasters.
  • Second, the Pall et al. paper seeks to quantify the increased risk of a specific flood event in the UK in 2000 due to greenhouse gas emissions.  It applies a methodology that was previously used with respect to the 2003 European heatwave. Taking the paper at face value, it clearly states that in England and Wales, there has not been an increasing trend in precipitation or floods.  Thus, floods in this region are not a contributor to the global increase in disaster costs.  Further, there has been no increase in Europe in normalized flood losses (PDF).  Thus, the Pall et al. paper is focused on attribution in the context of a single event, not on trend detection in the region it examines, much less any broader context.
  • More generally, the paper utilizes a seasonal forecast model to assess risk probabilities.  Given the performance of seasonal forecast models in actual prediction mode, I would expect many scientists to remain skeptical of this approach to attribution. Of course, if this group can show an improvement in the skill of actual seasonal forecasts by using greenhouse gas emissions as a predictor, they will have a very convincing case.  That is a high hurdle.
  • In short, the new studies are interesting and add to our knowledge.  But they do not change the state of knowledge related to trends in global disasters and how they might be related to greenhouse gases.  But even so, I expect that many will still want to connect the dots between greenhouse gas emissions and recent floods.  Connecting the dots is fun, but it is not science.
  • Jessica Weinkle said...
  • The thing about the Nature articles is that Nature itself made the leap from the science findings to damages in the News piece by Q. Schiermeier through the decision to bring up the topic of insurance. (Not to mention that which is symbolically represented merely by the journal’s cover this week). With what I (maybe, naively) believe to be a particularly ballsy move, the article quoted Muir-Wood, an industry scientist. However, what he is quoted as saying is admirably clever. Initially it is stated that Dr. Muir-Wood backs the notion that one cannot put the blame of increased losses on climate change. Then, the article ends with a quote from him, “If there’s evidence that risk is changing, then this is something we need to incorporate in our models.”
  • This is a very slippery slope and a brilliant double-dog dare. Without doing anything but sitting back and watching the headlines, one can form the argument that “science” supports the remodeling of the hazard risk above the climatological average and is more important than the risks stemming from socioeconomic factors. The reinsurance industry itself has published that socioeconomic factors far outweigh changes in the hazard where losses are concerned. The point (and that which has particularly gotten my knickers in a knot) is that Nature, et al. may wish to consider what it is that they want to accomplish. Is it greater involvement of federal governments in the insurance/reinsurance industry on the premise that climate change is too great a loss risk for private industry alone regardless of the financial burden it imposes? The move of insurance mechanisms into all corners of the earth under the auspices of climate change adaptation? Or simply a move to bolster prominence, regardless of whose back it breaks - including their own, if any of them are proud owners of a home mortgage? How much faith does one have in their own model when they are told that hundreds of millions of dollars in the global economy are being bet against the odds that their models produce?
  • What Nature says matters to the world; what scientists say matters to the world - whether they care for the responsibility or not. That is, after all, the game of fame and fortune (aka prestige).
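The review quoted earlier, which reports that about 70% of long annual-maximum flow records show no statistically significant trend, rests on running a trend test on each station's series. Below is a minimal sketch of one common choice for this kind of data, the non-parametric Mann-Kendall test, applied to an invented 50-year record; the flow numbers are made up and the cited study did not necessarily use this exact test.

```python
import numpy as np
from math import erfc, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction) on a 1-D series.

    Returns the S statistic and a two-sided p-value from the normal
    approximation; a small p-value suggests a monotonic trend.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    p = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return s, p

# Synthetic 50-year record of annual maximum flows (m^3/s) with no imposed trend.
rng = np.random.default_rng(1)
annual_max_flow = 400 + 60 * rng.standard_normal(50)

s, p = mann_kendall(annual_max_flow)
print(f"S = {s:.0f}, p = {p:.2f}")  # p > 0.05 here: no statistically significant trend
```

Run station by station over a few hundred such records, a tally of how many p-values fall below 0.05, and in which direction S points, is essentially the kind of summary the quoted review reports.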
Weiye Loh

The world through language » Scienceline - 0 views

  • If you know only one language, you live only once. A man who knows two languages is worth two men. He who loses his language loses his world. (Czech, French and Gaelic proverbs.)
  • The hypothesis first put forward fifty years ago by linguist Benjamin Lee Whorf—that our language significantly affects our experience of the world—is making a comeback in various forms, and with it no shortage of debate.
  • The idea that language shapes thought was taboo for a long time, said Dan Slobin, a psycholinguist at the University of California, Berkeley. “Now the ice is breaking.” The taboo, according to Slobin, was largely due to the widespread acceptance of the ideas of Noam Chomsky, one of the most influential linguists of the 20th century. Chomsky proposed that the human brain comes equipped at birth with a set of rules—or universal grammar—that organizes language. As he likes to say, a visiting Martian would conclude that everyone on Earth speaks mutually unintelligible dialects of a single language.
  • Chomsky is hesitant to accept the recent claims of language’s profound influence on thought. “I’m rather skeptical about all of this, though there probably are some marginal effects,” he said.
  • Some advocates of the Whorfian view find support in studies of how languages convey spatial orientation. English and Dutch speakers describe orientation from an egocentric frame of reference (to my left or right). Mayan speakers use a geocentric frame of reference (to the north or south).
  • Does this mean they think about space in fundamentally different ways? Not exactly, said Lila Gleitman, a psychologist from the University of Pennsylvania. Since we ordinarily assume that others talk like us, she explained, vague instructions like “arrange it the same way” will be interpreted in whatever orientation (egocentric or geocentric) is most common in our language. “That’s going to influence how you solve an ambiguous problem, but it doesn’t mean that’s the way you think, or must think,” said Gleitman. In fact, she repeated the experiment with unambiguous instructions, providing cues to indicate whether objects should be arranged north-south or left-right. She found that people in both languages are just as good at arranging objects in either orientation.
  • Similarly, Anna Papafragou, a psychologist at the University of Delaware, thinks that the extent of language’s effect on thought has been somewhat exaggerated.
  • Papafragou compared how long Greek and English speakers paid attention to clip-art animation sequences, for example, a man skating towards a snowman. By measuring their eye movements, Papafragou was able to tell which parts of the scene held their gaze the longest. Because English speakers generally use verbs that describe manner of motion, like slide and skip, she predicted they would pay more attention to what was moving (the skates). Since Greeks use verbs that describe path, like approach and ascend, they should pay more attention to the endpoint of the motion (the snowman). She found that this was true only when people had to describe the scene; when asked to memorize it, attention patterns were nearly identical. According to Papafragou, when people need to speak about what they see, they’ll focus on the parts relevant for planning sentences. Otherwise, language does not show much of an effect on attention.
  • “Each language is a bright transparent medium through which our thoughts may pass, relatively undistorted,” said Gleitman.
  • Others think that language does, in fact, introduce some distortion. Linguist Guy Deutscher of the University of Manchester in the U.K. suggests that while language can’t prevent you from thinking anything, it does compel you to think in specific ways. Language forces you to habitually pay attention to different aspects of the world.
  • For example, many languages assign genders to nouns (“bridge” is feminine in German and masculine in Spanish). A study by cognitive psychologist Lera Boroditsky of Stanford University found that German speakers were more likely to describe “bridge” with feminine terms like elegant and slender, while Spanish speakers picked words like sturdy and towering. Having to constantly keep track of gender, Deutscher suggests, may subtly change the way native speakers imagine an object’s characteristics.
  • However, this falls short of the extreme view some ascribe to Whorf: that language actually determines thought. According to Steven Pinker, an experimental psychologist and linguist at Harvard University, three things have to hold for the Whorfian hypothesis to be true: speakers of one language should find it nearly impossible to think like speakers of another language; the differences in language should affect actual reasoning; and the differences should be caused by language, not just correlated with it. Otherwise, we may just be dealing with a case of “crying Whorf.”
  • But even mild claims may reveal complexities in the relationship between language and thought. “You can’t actually separate language, thought and perception,” said Debi Roberson, a psychologist at the University of Essex in the U.K. “All of these processes are going on, not just in parallel, but interactively.”
  • Language may not, as the Gaelic proverb suggests, form our entire world. But it will continue to provide insights into our thoughts—whether as a window, a looking glass, or a distorted mirror.
Weiye Loh

takchek (读书 ): When Scientific Research and Higher Education become just Poli... - 0 views

  • A mere two years after the passage of the economic stimulus package, the now Republican-controlled House of Representatives has started swinging its budget-cutting axe at scientific research and higher education. One point stood out in the midst of all this "fiscal responsibility" talk: The House bill does not specify cuts to five of the Office of Science's six programs, namely, basic energy sciences, high-energy physics, nuclear physics, fusion energy sciences, and advanced scientific computing. However, it explicitly whacks funding for the biological and environmental research program from $588 million to $302 million, a 49% reduction that would effectively zero out the program for the remainder of the year. The program supports much of DOE's climate and bioenergy research and in the past has funded much of the federal government's work on decoding the human genome. - Science, 25 February 2011: Vol. 331 no. 6020 pp. 997-998, DOI: 10.1126/science.331.6020.997. Do the terms Big Oil and Creationism/Intelligent Design come to your mind?
  • In other somewhat related news, tenure rights are being weakened in Louisiana and state legislatures are trying to have greater control over how colleges are run. It is hard not to see that there seems to be a coordinated assault on academia (presumably since many academics are seen by the Republican right as leftist liberals.) Lawmakers are inserting themselves even more directly into the classroom in South Carolina, where a proposal would require professors to teach a minimum of nine credit hours per semester. "I think we need to have professors in the classroom and not on sabbatical and out researching and doing things to that effect," State Rep. Murrell G. Smith Jr., a Republican, told the Associated Press. I think they are attempting to turn research universities into trade/vocational schools.
Weiye Loh

Roger Pielke Jr.'s Blog: The Fall of Karl-Theodor zu Guttenberg - 0 views

  • The German defense minister, Karl-Theodor zu Guttenberg, has resigned following the exposure of plagiarism on a massive scale in his PhD dissertation.  The figure above shows the results of a page-by-page Wiki effort to "audit" his dissertation.  The black and red colors indicate text that was directly (black) or partially (red) copied from other sources.  The white parts were judged OK and the blue represents the front and back matter.
  • Guttenberg's defense of his actions, which were supported by Chancellor Angela Merkel, sought to focus attention on those critiquing him in an effort to downplay the significance of the academic misconduct.
  • But in the end, it appears that the pressures brought to bear from Germany's substantial academic community made continuation for Guttenberg impossible.
  • Even so, I expect that we will again see Karl-Theodor zu Guttenberg in German politics, and Germany will then re-engage a debate over science, politics, trust and legitimacy.
Weiye Loh

LRB · Jim Holt · Smarter, Happier, More Productive - 0 views

  • There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.
  • The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.
  • But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.
  • Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic.
  • Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser.
  • Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.
  • The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.
  • It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise in avoiding it altogether.
  • Empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’
Weiye Loh

Mike Adams Remains True to Form « Alternative Medicine « Health « Skeptic North - 0 views

  • The 10:23 demonstrations and the CBC Marketplace coverage have elicited fascinating case studies in CAM professionalism. Rather than offering any new information or evidence about homeopathy itself, some homeopaths have spuriously accused skeptical groups of being malicious Big Pharma shills.
  • Mike Adams of the Natural News website
  • has decided to provide his own coverage of the 10:23 campaign
  • Mike’s thesis is essentially: Silly skeptics, it’s impossible to OD on homeopathy!
  • 1. “Notice that they never consume their own medicines in large doses? Chemotherapy? Statin drugs? Blood thinners? They wouldn’t dare drink those.”
  • Of course we wouldn’t. Steven Novella rightly points out that, though Mike thinks he’s being clever here, he’s actually demonstrating a lack of understanding for what the 10:23 campaign is about by using a straw man. Mike later issues a challenge for skeptics to drink their favourite medicines while he drinks homeopathy. Since no one will agree to that for the reasons explained above, he can claim some sort of victory — hence his smugness. But no one is saying that drugs aren’t harmful.
  • The difference between medicine and poison is in the dose. The vitamins and herbs promoted by the CAM industry are just as potentially harmful as any pharmaceutical drug, given enough of it. Would Adams be willing to OD on the vitamins or herbal remedies that he sells?
  • Even Adams’ favorite panacea, vitamin D, is toxic if you take enough of it (just ask Gary Null). Notice how skeptics don’t consume those either, because that is not the point they’re making.
  • The point of these demonstrations is that homeopathy has nothing in it, has no measurable physiological effects, and does not do what is advertised on the package. [A back-of-envelope dilution calculation follows these excerpts.]
  • 2. “Homeopathy, you see, isn’t a drug. It’s not a chemical.” Well, he’s got that right. “You know the drugs are kicking in when you start getting worse. Toxicity and conventional medicine go hand in hand.” [emphasis his]
  • Here I have to wonder if Adams knows any people with diabetes, AIDS, or any other illness that used to mean a death sentence before the significant medical advances of the 20th century that we now take for granted. So far he seems to be a firm believer in the false dichotomy that drugs are bad and natural products are good, regardless of what’s in them or how they’re used (as we know, natural products can have biologically active substances and effectively act as impure drugs – but leave it to Adams not to get bogged down with details). There is nothing to support the assertion that conventional medicine is nothing but toxic symptom-inducers.
  • 3-11. “But homeopathy isn’t a chemical. It’s a resonance. A vibration, or a harmony. It’s the restructuring of water to resonate with the particular energy of a plant or substance. We can get into the physics of it in a subsequent article, but for now it’s easy to recognize that even from a conventional physics point of view, liquid water has tremendous energy, and it’s constantly in motion, not just at the molecular level but also at the level of its subatomic particles and so-called “orbiting electrons” which aren’t even orbiting in the first place. Electrons are vibrations and not physical objects.” [emphasis his]
  • This is Star Trek-like technobabble – lots of sciency words
  • if something — anything — has an effect, then that effect is measurable by definition. Either something works or it doesn’t, regardless of mechanism. In any case, I’d like to see the well-documented series of research that conclusively proves this supposed mechanism. Actually, I’d like to see any credible research at all. I know what the answer will be to that: science can’t detect this yet. Well if you agree with that statement, reader, ask yourself this: then how does Adams know? Where did he get this information? Without evidence, he is guessing, and what is that really worth?
  • 13. “But getting back to water and vibrations, which isn’t magic but rather vibrational physics, you can’t overdose on a harmony. If you have one violin playing a note in your room, and you add ten more violins — or a hundred more — it’s all still the same harmony (with all its complex higher frequencies, too). There’s no toxicity to it.” [emphasis his]
  • Homeopathy has standard dosing regimes (they’re all the same), but there is no “dose” to speak of: the ingredients have usually been diluted out to nothing. But Adams is also saying that homeopathy doesn’t work by dose at all, it works by the properties of “resonance” and “vibration”. Then why any dosing regimen? To maintain the resonance? How is this resonance measured? How long does the “resonance” last? Why does it wear off? Why does he think televisions can inactivate homeopathy? (I think I might know the answer to that last one, as electronic interference is a handy excuse for inefficacy.)
  • “These skeptics just want to kill themselves… and they wouldn’t mind taking a few of you along with them, too. Hence their promotion of vaccines, pharmaceuticals, chemotherapy and water fluoridation. We’ll title the video, “SKEPTICS COMMIT MASS SUICIDE BY DRINKING PHARMACEUTICALS AS IF THEY WERE KOOL-AID.” Jonestown, anyone?”
  • “Do you notice the irony here? The only medicines they’re willing to consume in large doses in public are homeopathic remedies! They won’t dare consume large quantities of the medicines they all say YOU should be taking! (The pharma drugs.)” [emphasis his]
  • What Adams seems to have missed is that the skeptics have no intention of killing themselves, so his bizarre claims that the 10:23 participants are psychopathic, self-loathing, and suicidal make not even a little bit of sense. Skeptics know they aren’t going to die with these demonstrations, because homeopathy has no active ingredients and no evidence of efficacy.
  • The inventor of homeopathy himself, Samuel Hahnemann, believed that excessive doses of homeopathy could be harmful (see sections 275 and 276 of his Organon). Homeopaths are pros at retconning their own field to fit in with Hahnemann’s original ideas (inventing new mechanisms, such as water memory and resonance, in the face of germ theory). So how does Adams reconcile this claim?
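As flagged above, the "nothing in it" point can be made concrete with a back-of-envelope calculation for a typical 30C remedy. The starting quantity below is a deliberately generous assumption chosen for illustration.

```python
# Back-of-envelope check on "diluted out to nothing" for a 30C homeopathic remedy.
AVOGADRO = 6.022e23        # molecules per mole

moles_of_active = 1.0      # generous assumption: start with a full mole of active ingredient
dilution_steps = 30        # "30C" means thirty successive 1:100 dilutions
total_dilution = 100 ** dilution_steps   # a factor of 10^60

expected_molecules = moles_of_active * AVOGADRO / total_dilution
print(f"Expected molecules of active ingredient remaining: {expected_molecules:.1e}")
# ~6e-37, i.e. the chance that even one molecule of the original substance
# survives in a dose is effectively zero, which is the 10:23 campaign's point.
```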
Weiye Loh

RealClimate: E&E threatens a libel suit - 0 views

  • From: Bill Hughes
    Cc: Sonja Boehmer-Christiansen
    Subject: E&E libel
    Date: 02/18/11 10:48:01

    Gavin, your comment about Energy & Environment which you made on RealClimate has been brought to my attention: “The evidence for this is in precisely what happens in venues like E&E that have effectively dispensed with substantive peer review for any papers that follow the editor’s political line.” To assert, without knowing, as you cannot possibly know, not being connected with the journal yourself, that an academic journal does not bother with peer review, is a terribly damaging charge, and one I’m really quite surprised that you’re prepared to make. And to further assert that peer review is abandoned precisely in order to let the editor publish papers which support her political position, is even more damaging, not to mention being completely ridiculous. At the moment, I’m prepared to settle merely for a retraction posted on RealClimate. I’m quite happy to work with you to find a mutually satisfactory form of words: I appreciate you might find it difficult. I look forward to hearing from you.

    With best wishes
    Bill Hughes
    Director
    Multi-Science Publsihing [sic] Co Ltd
  • The comment in question was made in the post “From blog to Science”
  • The point being that if the ‘peer-review’ bar gets lowered, the result is worse submissions, less impact and a declining reputation. Something that fits E&E in spades. This conclusion is based on multiple years of evidence of shoddy peer-review at E&E and, obviously, on the statements of the editor, Sonja Boehmer-Christiansen. She was quoted by Richard Monastersky in the Chronicle of Higher Education (3 Sep 2003) in the wake of the Soon and Baliunas fiasco: The journal’s editor, Sonja Boehmer-Christiansen, a reader in geography at the University of Hull, in England, says she sometimes publishes scientific papers challenging the view that global warming is a problem, because that position is often stifled in other outlets. “I’m following my political agenda — a bit, anyway,” she says. “But isn’t that the right of the editor?”
  • The claim that ‘an editor publishes papers based on her political position’, while certainly ‘terribly damaging’ to the journal’s reputation, is, unfortunately, far from ridiculous.
  • Other people have investigated the peer-review practices of E&E and found them wanting. Greenfyre, dissecting a list of supposedly ‘peer-reviewed’ papers from E&E found that: A given paper in E&E may have been peer reviewed (but unlikely). If it was, the review process might have been up to the normal standards for science (but unlikely). Hence E&E’s exclusion from the ISI Journal Master list, and why many (including Scopus) do not consider E&E a peer reviewed journal at all. Further, even the editor states that it is not a science journal and that it is politically motivated/influenced. Finally, at least some of what it publishes is just plain loony.
  • Also, see comments from John Hunter and John Lynch. Nexus6 claimed to have found the worst climate paper ever published in its pages, and that one doesn’t even appear to have been proof-read (a little like Bill’s email). A one-time author, Roger Pielke Jr, said “…had we known then how that outlet would evolve beyond 1999 we certainly wouldn’t have published there,” and Ralph Keeling once asked, “Is it really the intent of E&E to provide a forum for laundering pseudo-science?”. We report, you decide.
  • We are not surprised to find that Bill Hughes (the publisher) is concerned about his journal’s evidently appalling reputation. However, perhaps the way to fix that is to start applying a higher level of quality control rather than by threatening libel suits against people who publicly point out the problems?
Weiye Loh

nanopolitan: Plagiarism Derails German (Ex) Minister - 0 views

  • The outcry has taken several forms, including Guttenberg being dubbed zu Googleberg and, even worse, Germany's Sarah Palin! The most substantive protest is through this letter to Chancellor Merkel, signed by over 20,000 academics, post-docs, and students. Here's an excerpt: ... When it is no longer an important value to protect ideas in our society, then we have gambled away our future. We don't expect thankfulness for our scientific work, but we expect respect, we expect that our work be taken seriously. By handling the case of zu Guttenberg as a trifle, Germany's position in world science, its credibility as the "Land of Ideas", suffers.
  • A second line of attack -- which probably clinched the issue -- targeted his leadership of defence academies, especially since it came not from political adversaries but from coalition partners: "Should he continue to allow the circumstances of his dissertation to remain so unclear, I think that he, as minister and as the top official of two Bundeswehr universities, is no longer acceptable," Martin Neumann, parliamentary spokesman for academic issues for the business-friendly Free Democratic Party (FDP), Merkel's junior coalition partner, told the Financial Times Deutschland newspaper.
Weiye Loh

If climate scientists are in it for the money, they're doing it wrong - 0 views

  • Since it doesn't have a lot of commercial appeal, most of the people working in the area, and the vast majority of those publishing the scientific literature, work in academic departments or at government agencies. Penn State, home of noted climatologists Richard Alley and Michael Mann, has a strong geosciences department and, conveniently, makes the department's salary information available. It's easy to check, and find that the average tenured professor earned about $120,000 last year, and a new hire a bit less than $70,000.
  • That's a pretty healthy salary by many standards, but it's hardly a racket. Penn State appears to be on the low end of similar institutions, and is outdone by two other institutions in its own state (based on this report). But, more significantly for the question at hand, we can see that Earth Sciences faculty aren't paid especially well. Sure, they do much better than the Arts faculty, but they're somewhere in the middle of the pack, and get stomped on by professors in the Business and IT departments.
  • This is all, of course, ignoring what someone who can do the sort of data analysis or modeling of complex systems that climatologists perform might make if they went to Wall Street.
  • It's also worth pointing out what they get that money for, as exemplified by a fairly typical program announcement for NSF grants. Note that it calls for studies of past climate change and its impact on the weather. This sort of research could support the current consensus view, but it just as easily might not. And here's the thing: it's impossible to tell before the work's done. Even a study looking at the flow of carbon into and out of the atmosphere, which would seem to be destined to focus on anthropogenic climate influences, might identify a previously unknown or underestimated sink or feedback. So, even if the granting process were biased (and there's been no indication that it is), there is no way for it to prevent people from obtaining contrary data. The granting system is also set up to induce people to publish it, since a grant that doesn't produce scientific papers can make it impossible for a professor to obtain future funding.
  • Maybe the money is in the perks that come with grants, which provide for travel and lab toys. Unfortunately, there's no indication that there's lots of money out there for the taking, either from the public or private sector. For the US government, spending on climate research across 13 different agencies (from the Department of State to NASA) is tracked by the US Climate Change Science Program. The group has tracked the research budget since 1989, but not everything was brought under its umbrella until 1991. That year, according to CCSP figures, about $1.45 billion was spent on climate research (all figures are in 2007 dollars). Funding peaked back in 1995 at $2.4 billion, then bottomed out in 2006 at only $1.7 billion.
  • Funding has gone up a bit over the last couple of years, and some stimulus money went into related programs. But, in general, the trend has been a downward one for 15 years; it's not an area you'd want to go into if you were looking for a rich source of grant money. If you were, you would target medical research, for which the NIH had a $31 billion budget plus another $10 billion in stimulus money.
  • Not all of this money went to researchers anyway; part of the budget goes to NASA, and includes some of that agency's (rather pricey) hardware. For example, the Orbiting Carbon Observatory cost roughly $200 million, but failed to go into orbit; its replacement is costing another $170 million.
  • Might the private sector make up for the lack of government money? Pretty unlikely. For starters, it's tough to identify many companies that have a vested interest in the scientific consensus. Renewable energy companies would seem to be the biggest winners, but they're still relatively tiny. Neither the largest wind nor the largest photovoltaic manufacturer (Vestas and First Solar) appears in the Financial Times' list of the world's 500 largest companies. In contrast, there are 16 oil companies in the top 100, and they occupy the top two spots. Exxon's profits in 2010 were nearly enough to buy both Vestas and First Solar, given their market valuations in late February.
  • Climate researchers are scrambling for a piece of a shrinking government-funded pie, and the resources of the private sector are far, far more likely to go to groups that oppose their conclusions.
  • If you were paying careful attention to that last section, you would have noticed something funny: the industry that seems most likely to benefit from taking climate change seriously produces renewable energy products. However, those companies don't employ any climatologists. They probably have plenty of space for engineers, materials scientists, and maybe a quantum physicist or two, but there's not much that a photovoltaic company would do with a climatologist. Even by convincing the public of their findings—namely, climate change is real, and could have serious impacts—the scientists are not doing themselves any favors in terms of job security or alternative careers.
  • But, surely, by convincing the public, or at least the politicians, that there's something serious here, they ensure their own funding? That's arguably not true either, and the stimulus package demonstrates that nicely. The US CCSP programs, in total, got a few hundred million dollars from the stimulus. In contrast, the Department of Energy got a few billion. Carbon capture and sequestration alone received $2.4 billion, more than the entire CCSP budget.
  • The problem is that climatologists are well equipped to identify potential problems, but very poorly equipped to solve them; it would be a bit like expecting an astronomer to know how to destroy a threatening asteroid.
  • The solutions to problems related to climate change are going to come in areas like renewable energy, carbon sequestration, and efficiency measures; that's where most of the current administration's efforts have focused. None of these are areas where someone studying the climate is likely to have a whole lot to add. So, when they advocate that the public take them seriously, they're essentially asking the public to send money to someone else.
Weiye Loh

Can a group of scientists in California end the war on climate change? | Science | The ... - 0 views

  • Muller calls his latest obsession the Berkeley Earth project. The aim is so simple that the complexity and magnitude of the undertaking is easy to miss. Starting from scratch, with new computer tools and more data than has ever been used, they will arrive at an independent assessment of global warming. The team will also make every piece of data it uses – 1.6bn data points – freely available on a website. It will post its workings alongside, including full information on how more than 100 years of data from thousands of instruments around the world are stitched together to give a historic record of the planet's temperature.
  • Muller is fed up with the politicised row that all too often engulfs climate science. By laying all its data and workings out in the open, where they can be checked and challenged by anyone, the Berkeley team hopes to achieve something remarkable: a broader consensus on global warming. In no other field would Muller's dream seem so ambitious, or perhaps, so naive.
  • "We are bringing the spirit of science back to a subject that has become too argumentative and too contentious," Muller says, over a cup of tea. "We are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find." Why does Muller feel compelled to shake up the world of climate change? "We are doing this because it is the most important project in the world today. Nothing else comes close," he says.
  • There are already three heavyweight groups that could be considered the official keepers of the world's climate data. Each publishes its own figures that feed into the UN's Intergovernmental Panel on Climate Change. Nasa's Goddard Institute for Space Studies in New York City produces a rolling estimate of the world's warming. A separate assessment comes from another US agency, the National Oceanic and Atmospheric Administration (Noaa). The third group is based in the UK and led by the Met Office. They all take readings from instruments around the world to come up with a rolling record of the Earth's mean surface temperature. The numbers differ because each group uses its own dataset and does its own analysis, but they show a similar trend. Since pre-industrial times, all point to a warming of around 0.75C.
  • You might think three groups was enough, but Muller rolls out a list of shortcomings, some real, some perceived, that he suspects might undermine public confidence in global warming records. For a start, he says, warming trends are not based on all the available temperature records. The data that is used is filtered and might not be as representative as it could be. He also cites a poor history of transparency in climate science, though others argue many climate records and the tools to analyse them have been public for years.
  • Then there is the fiasco of 2009 that saw roughly 1,000 emails from a server at the University of East Anglia's Climatic Research Unit (CRU) find their way on to the internet. The fuss over the messages, inevitably dubbed Climategate, gave Muller's nascent project added impetus. Climate sceptics had already attacked James Hansen, head of the Nasa group, for making political statements on climate change while maintaining his role as an objective scientist. The Climategate emails fuelled their protests. "With CRU's credibility undergoing a severe test, it was all the more important to have a new team jump in, do the analysis fresh and address all of the legitimate issues raised by sceptics," says Muller.
  • This latest point is where Muller faces his most delicate challenge. To concede that climate sceptics raise fair criticisms means acknowledging that scientists and government agencies have got things wrong, or at least could do better. But the debate around global warming is so highly charged that open discussion, which science requires, can be difficult to hold in public. At worst, criticising poor climate science can be taken as an attack on science itself, a knee-jerk reaction that has unhealthy consequences. "Scientists will jump to the defence of alarmists because they don't recognise that the alarmists are exaggerating," Muller says.
  • The Berkeley Earth project came together more than a year ago, when Muller rang David Brillinger, a statistics professor at Berkeley and the man Nasa called when it wanted someone to check its risk estimates of space debris smashing into the International Space Station. He wanted Brillinger to oversee every stage of the project. Brillinger accepted straight away. Since the first meeting he has advised the scientists on how best to analyse their data and what pitfalls to avoid. "You can think of statisticians as the keepers of the scientific method, " Brillinger told me. "Can scientists and doctors reasonably draw the conclusions they are setting down? That's what we're here for."
  • For the rest of the team, Muller says he picked scientists known for original thinking. One is Saul Perlmutter, the Berkeley physicist who found evidence that the universe is expanding at an ever faster rate, courtesy of mysterious "dark energy" that pushes against gravity. Another is Art Rosenfeld, the last student of the legendary Manhattan Project physicist Enrico Fermi, and something of a legend himself in energy research. Then there is Robert Jacobsen, a Berkeley physicist who is an expert on giant datasets; and Judith Curry, a climatologist at Georgia Institute of Technology, who has raised concerns over tribalism and hubris in climate science.
  • Robert Rohde, a young physicist who left Berkeley with a PhD last year, does most of the hard work. He has written software that trawls public databases, themselves the product of years of painstaking work, for global temperature records. These are compiled, de-duplicated and merged into one huge historical temperature record. The data, by all accounts, are a mess. There are 16 separate datasets in 14 different formats and they overlap, but not completely. Muller likens Rohde's achievement to Hercules's enormous task of cleaning the Augean stables.
  • The wealth of data Rohde has collected so far – and some dates back to the 1700s – makes for what Muller believes is the most complete historical record of land temperatures ever compiled. It will, of itself, Muller claims, be a priceless resource for anyone who wishes to study climate change. So far, Rohde has gathered records from 39,340 individual stations worldwide.
  • Publishing an extensive set of temperature records is the first goal of Muller's project. The second is to turn this vast haul of data into an assessment on global warming.
  • The big three groups – Nasa, Noaa and the Met Office – work out global warming trends by placing an imaginary grid over the planet and averaging temperature records in each square. So for a given month, all the records in England and Wales might be averaged out to give one number. Muller's team will take temperature records from individual stations and weight them according to how reliable they are (a rough illustrative sketch of the contrast appears after these excerpts).
  • This is where the Berkeley group faces its toughest task by far and it will be judged on how well it deals with it. There are errors running through global warming data that arise from the simple fact that the global network of temperature stations was never designed or maintained to monitor climate change. The network grew in a piecemeal fashion, starting with temperature stations installed here and there, usually to record local weather.
  • Among the trickiest errors to deal with are so-called systematic biases, which skew temperature measurements in fiendishly complex ways. Stations get moved around, replaced with newer models, or swapped for instruments that record in celsius instead of fahrenheit. The times at which measurements are taken vary, from say 6am to 9pm. The accuracy of individual stations drifts over time, and even changes in the surroundings, such as growing trees, can shield a station from wind and sun more in one year than the next. Each of these interferes with a station's temperature measurements, perhaps making it read too cold, or too hot. And these errors combine and build up.
  • This is the real mess that will take a Herculean effort to clean up. The Berkeley Earth team is using algorithms that automatically correct for some of the errors, a strategy Muller favours because it doesn't rely on human interference. When the team publishes its results, this is where the scrutiny will be most intense.
  • Despite the scale of the task, and the fact that world-class scientific organisations have been wrestling with it for decades, Muller is convinced his approach will lead to a better assessment of how much the world is warming. "I've told the team I don't know if global warming is more or less than we hear, but I do believe we can get a more precise number, and we can do it in a way that will cool the arguments over climate change, if nothing else," says Muller. "Science has its weaknesses and it doesn't have a stranglehold on the truth, but it has a way of approaching technical issues that is a closer approximation of truth than any other method we have."
  • It might not be a good sign that one prominent climate sceptic contacted by the Guardian, Canadian economist Ross McKitrick, had never heard of the project. Another, Stephen McIntyre, whom Muller has defended on some issues, hasn't followed the project either, but said "anything that [Muller] does will be well done". Phil Jones at the University of East Anglia was unclear on the details of the Berkeley project and didn't comment.
  • Elsewhere, Muller has qualified support from some of the biggest names in the business. At Nasa, Hansen welcomed the project, but warned against over-emphasising what he expects to be the minor differences between Berkeley's global warming assessment and those from the other groups. "We have enough trouble communicating with the public already," Hansen says. At the Met Office, Peter Stott, head of climate monitoring and attribution, was in favour of the project if it was open and peer-reviewed.
  • Peter Thorne, who left the Met Office's Hadley Centre last year to join the Co-operative Institute for Climate and Satellites in North Carolina, is enthusiastic about the Berkeley project but raises an eyebrow at some of Muller's claims. The Berkeley group will not be the first to put its data and tools online, he says. Teams at Nasa and Noaa have been doing this for many years. And while Muller may have more data, they add little real value, Thorne says. Most are records from stations installed from the 1950s onwards, and then only in a few regions, such as North America. "Do you really need 20 stations in one region to get a monthly temperature figure? The answer is no. Supersaturating your coverage doesn't give you much more bang for your buck," he says. They will, however, help researchers spot short-term regional variations in climate change, something that is likely to be valuable as climate change takes hold.
  • Despite his reservations, Thorne says climate science stands to benefit from Muller's project. "We need groups like Berkeley stepping up to the plate and taking this challenge on, because it's the only way we're going to move forwards. I wish there were 10 other groups doing this," he says.
  • Muller's project is organised under the auspices of Novim, a Santa Barbara-based non-profit organisation that uses science to find answers to the most pressing issues facing society and to publish them "without advocacy or agenda". Funding has come from a variety of places, including the Fund for Innovative Climate and Energy Research (funded by Bill Gates), and the Department of Energy's Lawrence Berkeley Lab. One donor has had some climate bloggers up in arms: the man behind the Charles G Koch Charitable Foundation owns, with his brother David, Koch Industries, a company Greenpeace called a "kingpin of climate science denial". On this point, Muller says the project has taken money from right and left alike.
  • No one who spoke to the Guardian about the Berkeley Earth project believed it would shake the faith of the minority who have set their minds against global warming. "As new kids on the block, I think they will be given a favourable view by people, but I don't think it will fundamentally change people's minds," says Thorne. Brillinger has reservations too. "There are people you are never going to change. They have their beliefs and they're not going to back away from them."
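A minimal, hypothetical sketch of the contrast flagged above, between grid-square averaging (the big three) and station-level weighting (Berkeley). All readings, grid squares and reliability weights below are invented for illustration; this is not either group's actual method.

```python
# "Big three" style: lay an imaginary grid over the planet, average the
# records inside each grid square, then average the squares.
grid_squares = {
    "England/Wales": [10.2, 10.5, 9.8],   # monthly readings in one square (invented)
    "Northern France": [11.0, 11.3],
}
square_means = [sum(v) / len(v) for v in grid_squares.values()]
gridded_mean = sum(square_means) / len(square_means)

# Berkeley-style, as described in the article: keep individual stations and
# weight each record by an assumed reliability score instead of gridding.
stations = [  # (reading, assumed reliability weight)
    (10.2, 0.9), (10.5, 0.7), (9.8, 0.4), (11.0, 1.0), (11.3, 0.8),
]
weighted_mean = sum(t * w for t, w in stations) / sum(w for _, w in stations)

print(f"gridded mean: {gridded_mean:.2f}, weighted mean: {weighted_mean:.2f}")
```

Either way the result is a single average per month; the difference is whether an unreliable station is simply diluted inside its grid square or explicitly down-weighted.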
Weiye Loh

Skepticblog » Kirsten Sanford - 0 views

  • This Sunday before game-time you might want to set your Tivos to record Dateline. This week, supposedly, Matt Lauer interviews Dr. Andrew Wakefield and several other affiliates of the Thoughtful House Center for Children, along with Dr. Paul Offit and journalist Brian Deer.
  • Please, Matt… don’t go Jenny McCarthy on us. Don’t do the usual journalistic job of being “fair-and-balanced”. This is not a “he said, she said” issue. This is science. Do tell the world what the science supports.
  • Depending on how this major media outlet writes the script, it could either be a major affirmation of what many within the science community already know, or it could increase the divide between anti-vax’ers and science.
Weiye Loh

The Irrationality of the Anti-Sex Lobby - 0 views

  • with so little ethical and credible research on children in this area, the case is far from closed. See, for instance, the recent Scottish Executive report on the topic, with indications that both children’s and parents’ understanding of sexualised imagery is rather more nuanced than the media and government give them credit for. [i] However, as far as the public are concerned, there is no debate to be had. And so the endless ‘childhood in crisis’ nonsense is trotted out again and again.
  • when it comes down to Facts vs. Fear Related To Your Kids, most people will choose the fear option “just to be on the safe side”.
  • So what are the options? Basically, to find the trigger issues that will help people understand why restricting adult access to adult materials is in no-one’s interest, why it is important to support the rights of sex workers to work, and why deciding what children are and are not exposed to is a job for families and communities, not governments.
Weiye Loh

Scientists Are Cleared of Misuse of Data - NYTimes.com - 0 views

  • The inquiry, by the Commerce Department’s inspector general, focused on e-mail messages between climate scientists that were stolen and circulated on the Internet in late 2009 (NOAA is part of the Commerce Department). Some of the e-mails involved scientists from NOAA.
  • Climate change skeptics contended that the correspondence showed that scientists were manipulating or withholding information to advance the theory that the earth is warming as a result of human activity.
  • In a report dated Feb. 18 and circulated by the Obama administration on Thursday, the inspector general said, “We did not find any evidence that NOAA inappropriately manipulated data.”
  • ...6 more annotations...
  • The finding comes at a critical moment for NOAA as some newly empowered Republican House members seek to rein in the Environmental Protection Agency’s plans to regulate greenhouse gas emissions, often contending that the science underpinning global warming is flawed. NOAA is the federal agency tasked with monitoring climate data.
  • The inquiry into NOAA’s conduct was requested last May by Senator James M. Inhofe, Republican of Oklahoma, who has challenged the science underlying human-induced climate change. Mr. Inhofe was acting in response to the controversy over the e-mail messages, which were stolen from the Climatic Research Unit at the University of East Anglia in England, a major hub of climate research. Mr. Inhofe asked the inspector general of the Commerce Department to investigate how NOAA scientists responded internally to the leaked e-mails. Of 1,073 messages, 289 were exchanges with NOAA scientists.
  • The inspector general reviewed the 1,073 e-mails, and interviewed Dr. Lubchenco and staff members about their exchanges. The report did not find scientific misconduct; it did, however, challenge the agency over its handling of some Freedom of Information Act requests in 2007. And it noted the inappropriateness of a collage cartoon, e-mailed between two NOAA scientists, depicting Senator Inhofe and five other climate skeptics marooned on a melting iceberg.
  • The report was not a review of the climate data itself. It joins a series of investigations by the British House of Commons, Pennsylvania State University, the InterAcademy Council and the National Research Council into the leaked e-mails that have exonerated the scientists involved of scientific wrongdoing.
  • But Mr. Inhofe said the report was far from a clean bill of health for the agency and that contrary to its executive summary, showed that the scientists “engaged in data manipulation.”
  • “It also appears that one senior NOAA employee possibly thwarted the release of important federal scientific information for the public to assess and analyze,” he said, referring to an employee’s failure to provide material related to work for the Intergovernmental Panel on Climate Change, a different body that compiles research, in response to a Freedom of Information request.
Weiye Loh

Hiding the Decline | Climate Etc. - 0 views

  • we need to understand the magnitude and characteristics and causes of natural climate variability over the current interglacial, particularly the last 2000 years.  I’m more interested in the handle than the blade of the hockey stick.  I also view understanding regional climate variations as much more important than trying to use some statistical model to create global average anomalies (which I personally regard as pointless, given the sampling issue).
  • I am really hoping that the AR5 will do a better job of providing a useful analysis and assessment of the paleodata for the last millennium.  However I am not too optimistic. There was another Workshop in Lisbon this past year (Sept 2010), on the Medieval Warm Period.  The abstracts for the presentations are found here.  No surprises, many of the usual people doing the usual things.
  • This raises the issue as to whether there is any value at all in the tree ring analyses for this application, and whether these paleoreconstructions can tell us anything.  Apart from the issue of the proxies not matching the observations from the current period of warming (which is also the period of best historical data), there is the further issue as to whether these hemispheric or global temperature analyses make any sense at all because of the sampling issue.  I am personally having a difficult time in seeing how this stuff has any credibility at the level of “likely” confidence levels reported in the TAR and AR4.
  • ...5 more annotations...
  • There is no question that the diagrams and accompanying text in the IPCC TAR, AR4 and WMO 1999 are misleading.  I was misled.  Upon considering the material presented in these reports, it did not occur to me that recent paleo data was not consistent with the historical record.  The one statement in AR4 (put in after McIntyre’s insistence as a reviewer) that mentions the divergence problem is weak tea.
  • It is obvious that there has been deletion of adverse data in figures shown in IPCC AR3 and AR4, and the 1999 WMO document.  Not only is this misleading, but it is dishonest (I agree with Muller on this one).  The authors defend themselves by stating that there has been no attempt to hide the divergence problem in the literature, and that the relevant paper was referenced.  I infer then that there is something in the IPCC process or the authors’ interpretation of the IPCC process (i.e. don’t dilute the message) that corrupted the scientists into deleting the adverse data in these diagrams.
  • McIntyre’s analysis is sufficiently well documented that it is difficult to imagine that his analysis is incorrect in any significant way.  If his analysis is incorrect, it should be refuted.  I would like to know what the heck Mann, Briffa, Jones et al. were thinking when they did this and why they did this, and how they can defend this, although the emails provide pretty strong clues.  Does the IPCC regard this as acceptable?  I sure don’t.
  • paleoproxies are outside the arena of my personal research expertise, and I find my eyes glaze over when I start reading about bristlecones, etc.  However, two things this week have changed my mind, and I have decided to take on one aspect of this issue: the infamous “hide the decline.” The first thing that contributed to my mind change was this post at Bishop Hill entitled “Will Sir John condemn hide the decline?”, related to Sir John Beddington’s statement:  It is time the scientific community became proactive in challenging misuse of scientific evidence.
  • The second thing was this youtube clip of physicist Richard Muller (Director of the Berkeley Earth Project), where he discusses “hide the decline” and vehemently refers to this as “dishonest,” and says “you are not allowed to do this,” and further states that he intends not to read further papers by these authors (note “hide the decline” appears around minute 31 into the clip).  While most of his research is in physics, Muller has also published important papers on paleoclimate, including a controversial paper that supported McIntyre and McKitrick’s analysis.
Weiye Loh

Asia Times Online :: Southeast Asia news and business from Indonesia, Philippines, Thai... - 0 views

  • Internet-based news websites and the growing popularity of social media have broken the mainstream media's monopoly on news - though not completely. Singapore's PAP-led government was one of the first in the world to devise content regulations for the Internet, issuing restrictions on topics it deemed as sensitive as early as 1996.
  • While political parties are broadly allowed to use the Internet to campaign, they were previously prohibited from employing some of the medium's most powerful features, including live audio and video streaming and so-called "viral marketing". Websites not belonging to political parties or candidates but registered as political sites have been banned from activities that could be considered online electioneering.
  • George argued that despite the growing influence of online media, it would be naive to conclude that the PAP's days of domination are numbered. "While the government appears increasingly liberal towards individual self-expression, it continues to intervene strategically at points at which such expression may become politically threatening," he said. "It is safe to assume that the government's digital surveillance capabilities far outstrip even its most technologically competent opponent's evasive abilities."
  • ...2 more annotations...
  • consistent with George's analysis, authorities last week relaxed past regulations that limited the use of the Internet and social media for election campaigning. Political parties and candidates will be allowed to use a broader range of new media platforms, including blogs, micro-blogs, online photo-sharing platforms, social networking sites and electronic media applications used on mobile phones, for election advertising. The loosening, however, only applies to political party-run websites, chat rooms and online discussion forums. Candidates must declare the new media content they intend to use within 12 hours after the start of the election campaign period. George warned in a recent blog entry that the new declaration requirements could open the way for PAP-led defamation suits against opposition politicians who use new media. PAP leaders have historically relied on expensive litigation to suppress opposition and media criticism. "The PAP won't subject everyone's postings to legal scrutiny. But if it decides that a particular opposition politician needs to be utterly demolished, you can bet that no tweet of his would be too tiny, no Facebook update too fleeting ... in order to build the case against the individual," George warned in a journalism blog.
  • While opposition politicians will rely more on new than mainstream media to communicate with voters, they already recognize that the use of social media will not necessarily translate into votes. "[Online support] can give a too rosy a picture and false degree of comfort," said the RP's Jeyaretnam. "People who [interact with] us online are those who are already convinced with our messages anyway."
Weiye Loh

Asia Times Online :: Southeast Asia news and business from Indonesia, Philippines, Thai... - 0 views

  • rather than being forced to wait for parliament to meet to air their dissent, opposition parties can now post their criticisms online pre-emptively, shifting the time and space of Singapore's political debate
  • Singapore's People's Action Party (PAP)-dominated politics are increasingly being contested online and over social media like blogs, Facebook and Twitter. Pushed by the perceived pro-PAP bias of the mainstream media, Singapore's opposition parties are using various new media to communicate with voters and express dissenting views. Alternative news websites, including The Online Citizen and Temasek Review, have won strong followings by presenting more opposition views in their news mix.
  • Despite its democratic veneer, Singapore rates poorly in global press freedom rankings due to a deeply entrenched culture of self-censorship and a pro-state bias in the mainstream media. Reporters Without Borders, a France-based press freedom advocacy group, recently ranked Singapore 136th in its global press freedom rankings, scoring below repressive countries like Iraq and Zimbabwe. The country's main media publishing house, Singapore Press Holdings, is owned by the state and its board of directors is made up largely of PAP members or other government-linked executives. Senior newspaper editors, including at the Straits Times, must be vetted and approved by the PAP-led government.
  • ...3 more annotations...
  • The local papers have a long record of publicly endorsing the PAP-led government's position, according to Tan Tarn How, a research fellow at the Institute of Policy Studies (IPS) and himself a former journalist. In his research paper "Singapore's print media policy - a national success?" published last year he quoted Leslie Fong, a former editor of the Straits Times, saying that the press "should resist the temptation to arrogate itself the role of a watchdog, or permanent critic, of the government of the day".
  • With regularly briefed and supportive editors, there is no need for pre-publication censorship, according to Tan. When the editors are perceived to get things "wrong", the government frequently takes to task, either publicly or privately, the newspaper's editors or individual journalists, he said.
  • The country's main newspaper, the Straits Times, has consistently stood by its editorial decision-making. Editor Han Fook Kwang said last year: "Our circulation is 380,000 and we have a readership of 1.4 million - these are people who buy the paper every day. We're aware people say we're a government mouthpiece or that we are biased but the test is if our readers believe in the paper and continue to buy it."
Weiye Loh

A lesson in citing irrelevant statistics | The Online Citizen - 0 views

  • Statistics that are quoted, by themselves, may be quite meaningless, unless they are on a comparative basis. To illustrate this, if we want to say that Group A (poorer kids) is not significantly worse off than Group B (richer kids), then it may be pointless to just cite the statistics for Group A, without Group B’s.
  • “How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE” “One in six scores in the top one-third at PSLE” What we need to know for comparative purposes is the percentage of richer kids who score in the top two-thirds too (a short worked comparison appears after these notes).
  • “… one in five scores in the top 30% at O and A levels… One in five goes to university and polys” What’s the data for richer kids? Since the proportion of the entire population going to university and polys has increased substantially, this clearly shows that poorer kids are worse off!
  • ...4 more annotations...
  • The Minister was quoted as saying: “My parents had six children.  My first home as a young boy was a rental flat in Zion Road.  We shared it as tenants with other families.” Citing individuals who made it may be of no “statistical” relevance; what we need are statistics on the proportion of poorer kids versus richer kids who get scholarships, relative to their representation in the population.
  • “More spent on primary and secondary/JC schools.  This means having significantly more and better teachers, and having more programmes to meet children’s specific needs” What has spending more money, which is what most countries do, got to do with the argument about whether poorer kids are disadvantaged?
  • Straits Times journalist Li XueYing put the crux of the debate in the right perspective: “Dr Ng had noted that ensuring social mobility “cannot mean equal outcomes, because students are inherently different”. But can it be that those from low-income families are consistently “inherently different” to such an extent?”
  • Relevant statistics: Perhaps the most damning statistic showing that poorer kids are disadvantaged was the chart from the Ministry of Education (provided by the Straits Times), which showed that the percentage of Primary 1 pupils who lived in 1- to 3-room HDB flats and subsequently progressed to University and/or Polytechnic has been declining since around 1986.
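A short worked comparison of the missing-baseline point flagged above. The only quoted figure used is “one in two scores in the top two-thirds”; the baseline is definitional, and nothing here comes from the Ministry's actual data.

```python
# Hypothetical worked comparison: by construction, two-thirds of ALL pupils
# score in the top two-thirds at PSLE, so that is the baseline any quoted
# share for poorer kids should be set against.
baseline = 2 / 3      # share of the whole cohort in the top two-thirds
poorer_kids = 1 / 2   # quoted share of poorer kids in the top two-thirds

print(f"whole cohort: {baseline:.0%}, poorer kids: {poorer_kids:.0%}")
# Quoted alone, "one in two" sounds reassuring; against the ~67% baseline
# (or against the unpublished figure for richer kids), poorer kids are
# clearly under-represented.
```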
Weiye Loh

Roger Pielke Jr.'s Blog: It Is Always the Media's Fault - 0 views

  • Last summer NCAR issued a dramatic press release announcing that oil from the Gulf spill would soon be appearing on the beaches of the Atlantic Ocean.  I discussed it here. Here are the first four paragraphs of that press release: BOULDER—A detailed computer modeling study released today indicates that oil from the massive spill in the Gulf of Mexico might soon extend along thousands of miles of the Atlantic coast and open ocean as early as this summer. The modeling results are captured in a series of dramatic animations produced by the National Center for Atmospheric Research (NCAR) and collaborators. The research was supported in part by the National Science Foundation, NCAR’s sponsor. The results were reviewed by scientists at NCAR and elsewhere, although not yet submitted for peer-review publication. “I’ve had a lot of people ask me, ‘Will the oil reach Florida?’” says NCAR scientist Synte Peacock, who worked on the study. “Actually, our best knowledge says the scope of this environmental disaster is likely to reach far beyond Florida, with impacts that have yet to be understood.” The computer simulations indicate that, once the oil in the uppermost ocean has become entrained in the Gulf of Mexico’s fast-moving Loop Current, it is likely to reach Florida's Atlantic coast within weeks. It can then move north as far as about Cape Hatteras, North Carolina, with the Gulf Stream, before turning east. Whether the oil will be a thin film on the surface or mostly subsurface due to mixing in the uppermost region of the ocean is not known.
  • A few weeks ago NCAR's David Hosansky, who presumably wrote that press release, asked whether NCAR got it wrong.  His answer?  No, not really: During last year’s crisis involving the massive release of oil into the Gulf of Mexico, NCAR issued a much-watched animation projecting that the oil could reach the Atlantic Ocean. But detectable amounts of oil never made it to the Atlantic, at least not in an easily visible form on the ocean surface. Not surprisingly, we’ve heard from a few people asking whether NCAR got it wrong. These events serve as a healthy reminder of a couple of things: the difference between a projection and an actual forecast, and the challenges of making short-term projections of natural processes that can act chaotically, such as ocean currents
  • What then went wrong? First, the projection. Scientists from NCAR, the Department of Energy’s Los Alamos National Laboratory, and IFM-GEOMAR in Germany did not make a forecast of where the oil would go. Instead, they issued a projection. While there’s not always a clear distinction between the two, forecasts generally look only days or hours into the future and are built mostly on known elements (such as the current amount of humidity in the atmosphere). Projections tend to look further into the future and deal with a higher number of uncertainties (such as the rate at which oil degrades in open waters and the often chaotic movements of ocean currents). Aware of the uncertainties, the scientific team projected the likely path of the spill with a computer model of a liquid dye. They used dye rather than actual oil, which undergoes bacterial breakdown, because a reliable method to simulate that breakdown was not available. As it turned out, the oil in the Gulf broke down quickly due to exceptionally strong bacterial action and, to some extent, the use of chemical dispersants.
  • ...3 more annotations...
  • Second, the challenges of short-term behavior. The Gulf's Loop Current acts as a conveyor belt, moving from the Yucatan through the Florida Straits into the Atlantic. Usually, the current curves northward near the Louisiana and Mississippi coasts—a configuration that would have put it on track to pick up the oil and transport it into open ocean. However, the current’s short-term movements over a few weeks or even months are chaotic and impossible to predict. Sometimes small eddies, or mini-currents, peel off, shifting the position and strength of the main current. To determine the threat to the Atlantic, the research team studied averages of the Loop Current’s past behavior in order to simulate its likely course after the spill and ran several dozen computer simulations under various scenarios. Fortunately for the East Coast, the Loop Current did not behave in its usual fashion but instead remained farther south than usual, which kept it far from the Louisiana and Mississippi coast during the crucial few months before the oil degraded and/or was dispersed with chemical treatments.
  • The Loop Current typically goes into a southern configuration about every 6 to 19 months, although it rarely remains there for very long. NCAR scientist Synte Peacock, who worked on the projection, explains that part of the reason the current is unpredictable is “no two cycles of the Loop Current are ever exactly the same." She adds that the cycles are influenced by such variables as how large the eddy is, where the current detaches and moves south, and how long it takes for the current to reform. Computer models can simulate the currents realistically, she adds. But they cannot predict when the currents will change over to a new cycle. The scientists were careful to explain that their simulations were a suite of possible trajectories demonstrating what was likely to happen, but not a definitive forecast of what would happen. They reiterated that point in a peer-reviewed study on the simulations that appeared last August in Environmental Research Letters. 
  • So who was at fault?  According to Hosansky, it was those dummies in the media: "These caveats, however, got lost in much of the resulting media coverage." Another perspective is that having some of these caveats in the press release might have been a good idea.