Home/ New Media Ethics 2009 course/ Group items tagged Japan


Weiye Loh

Roger Pielke Jr.'s Blog: Japan's New Emissions Math - 0 views

  • Japan currently gets 24% of its energy needs from nuclear power. Replacing the additional 26% that was supposed to come from nuclear (to reach 50%) implies 78,000 (!) 2.5 MW wind turbines (see TCF, p. 144, Table 4.4). The Japanese Wind Energy Association optimistically foresees 11.1 GW of capacity by 2020, less than half of what would have been needed to reach the 5% reduction target. Abandoning nuclear does not make the emissions reduction targets easier to meet; it makes them far, far more difficult. 
  • I have argued that Japan's 2020 emissions reduction target of a 25% reduction was always far out of reach.  I don't think that the phrase "even more impossible" makes much sense, but perhaps Japan's new political context will at least make its emissions reductions commitments "even more obviously impossible." 
  •  
    Earlier this week the Japanese Prime Minister Naoto Kan announced that Japan was no longer seeking to source 50% of its energy needs from nuclear power and terminated plans for 14 new nuclear facilities.  What might this decision mean for Japan's ability to meet its current carbon dioxide emissions reduction target of 25% below 1990 levels?
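The arithmetic in the first annotation can be sanity-checked directly. A minimal sketch, using only the figures quoted in the excerpt; the linear scaling from the 26% gap down to the 5% target is an illustrative assumption:

```python
# Figures as quoted in the excerpt (TCF, p. 144, Table 4.4)
turbines_for_26_pct = 78_000   # 2.5 MW turbines to cover the lost 26%
turbine_mw = 2.5
jwea_2020_gw = 11.1            # JWEA's optimistic 2020 wind forecast

# Implied nameplate capacity of the replacement wind fleet
nameplate_gw = turbines_for_26_pct * turbine_mw / 1000
print(f"wind capacity to replace 26%: {nameplate_gw:.0f} GW")   # 195 GW

# Scale linearly to the 5% reduction target (illustrative assumption)
gw_for_5_pct = nameplate_gw * 5 / 26
print(f"capacity for the 5% target:   {gw_for_5_pct:.1f} GW")   # 37.5 GW

# The excerpt's claim: JWEA's forecast is less than half of even that
print(jwea_2020_gw < gw_for_5_pct / 2)   # True
```

The point survives the rough arithmetic: even the optimistic wind forecast falls far short of the nuclear capacity being abandoned.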
Meenatchi

RIAJ push for mobile phone DRM across Japan - 2 views

Article Summary: http://www.geek.com/articles/mobile/riaj-push-for-mobile-phone-drm-across-japan-20090915/ The article talks about the Recording Industry Association of Japan (RIAJ) attempting t...

Digital Rights DRM

started by Meenatchi on 16 Sep 09 no follow-up yet
Weiye Loh

Breakthrough Europe: Emerging Economies Reassert Commitment to Nuclear Power - 0 views

  • Nearly half a billion of India's 1.2 billion citizens continue to live in energy poverty. According to the Chairman of the Indian Atomic Energy Commission, Srikumar Banerjee, "ours is a very power-hungry country. It is essential for us to have further electricity generation." The Chinese have cited similar concerns in sticking to their plans for a major expansion of their nuclear energy sector. At its current rate of GDP growth, China's electricity demand rises an average of 12 percent per year.
  • the Japanese nuclear crisis demonstrates the vast chasm in political priorities between the developing world and the post-material West.
  • Other regions that have reiterated their plans to stick to nuclear energy are Eastern Europe and the Middle East. The Prime Minister of Poland, the fastest growing country in the EU, has said that "fears of a nuclear disaster in Japan following last Friday's earthquake and tsunami would not disturb Poland's own plans to develop two nuclear plants." Russia and the Czech Republic have also restated their commitment to further nuclear development, while the Times reports that "across the Middle East, countries have been racing to build up nuclear power, as a growth and population boom has created unprecedented demand for energy." The United Arab Emirates is building four plants that will generate roughly a quarter of its power by 2020.
  • ...1 more annotation...
  • Some European leaders, including Angela Merkel, may be backtracking fast on their commitment to nuclear power, but despite yesterday's escalation of the ongoing crisis in Fukushima, Japan, there appear to be no signs that India, China and other emerging economies will succumb to a similar backlash. For the emerging economies, overcoming poverty and insecurity of supply remain the overriding priorities of energy policy.
  •  
    As the New York Times reports: The Japanese disaster has led some energy officials in the United States and in industrialized European nations to think twice about nuclear expansion. And if a huge release of radiation worsens the crisis, even big developing nations might reconsider their ambitious plans. But for now, while acknowledging the need for safety, they say their unmet energy needs give them little choice but to continue investing in nuclear power.
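The 12 percent annual growth rate quoted above for China's electricity demand implies a strikingly short doubling time. A quick check; the growth rate comes from the excerpt, and the rule-of-72 comparison is added here only for illustration:

```python
import math

growth_rate = 0.12   # annual electricity-demand growth quoted in the excerpt

# Exact doubling time under compound growth
doubling_years = math.log(2) / math.log(1 + growth_rate)
print(f"{doubling_years:.1f} years")   # 6.1 years

# The rule-of-72 mental shortcut gives nearly the same answer
rule_of_72_years = 72 / 12
print(f"{rule_of_72_years:.1f} years")   # 6.0 years
```

At that rate, demand doubles roughly every six years, which helps explain why emerging economies treat supply expansion as the overriding priority.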
Weiye Loh

Diary of A Singaporean Mind: Nuclear Crisis : Separating Hyperbole from Reality.... - 0 views

  • the media and pundits stepped on the "fear creation accelerator" focussing on the possibility of disastrous outcomes while ignoring possible solutions and options.
  • Nobody can say for sure where this crisis is headed. As of today, the risk of a total meltdown has been reduced. However, if one was listening to some segments of the media earlier this week, disaster was the only possible outcome. Fear and panic themselves could have caused a disaster: imagine the mess created by millions fleeing Tokyo in a haphazard manner, with the sick, the old and the invalid left behind and food and water distribution disrupted. That would have led to far more deaths than the worst-case meltdown, which would have ended with the reactors being entombed. It also shows us the importance of leadership we can trust: the Japanese minister Yukio Edano held five press conferences every day [Link] to update the nation on the dynamic situation (compare that with the initial handling of the SARS outbreak).
  • I hope the Japanese succeed in getting the nuclear reactors under control. An extraordinary crisis requires extraordinary leadership, extraordinary sacrifice and extraordinary courage. In the confusion and fear, it is hard for people not to panic and flee, but most of the Japanese in Tokyo stayed calm despite all sorts of scares. If another group of people were put through such a crisis, the response might be completely different.
  • ...1 more annotation...
  • there is a tendency to conclude that govts with the best expert advice have made this decision because there is a real danger of something sinister happening. But remember that govts are also under pressure to act because they are made up of politicians; they may be making precautionary moves because they have little to lose and have to be seen as being pro-active. How real is the danger of harmful radiation reaching Tokyo, and should you leave if you're in Tokyo? Many people were doing a "wait and see" before Wednesday, but once the US and UK govts called for a pull-out, the fear factor rose several notches; if you're a Japanese in Tokyo watching all the foreigners "abandoning" your city, you start to feel some anxiety and later panic. One EU official used the word "apocalypse" [Link] to describe the situation in Japan and the fear index hit the roof. Then a whole herd of experts came out to paint more dire scenarios, saying the Japanese had lost all control of the nuclear plants. All this led the public to think that calamity is the most likely outcome of the unfolding saga; if you make your decision from all this, you will just run for the exits if you're in Tokyo. All this is happening while the Japanese govt is trying to calm the people and prevent pandemonium after the triple disaster hit the country. In China, people have emptied supermarket shelves of iodized salt because of media reports that consuming iodine can stop radioactive iodine from being absorbed by the thyroid gland, where it causes thyroid cancer. There are also reports of people falling ill after ingesting iodine pills out of fear of radiation.
Weiye Loh

In Japan, a Culture That Promotes Nuclear Dependency - NYTimes.com - 0 views

  • look no further than the Fukada Sports Park, which serves the 7,500 mostly older residents here with a baseball diamond, lighted tennis courts, a soccer field and a $35 million gymnasium with indoor pool and Olympic-size volleyball arena. The gym is just one of several big public works projects paid for with the hundreds of millions of dollars this community is receiving for accepting the nuclear plant.
  • the aid has enriched rural communities that were rapidly losing jobs and people to the cities. With no substantial reserves of oil or coal, Japan relies on nuclear power for the energy needed to drive its economic machine. But critics contend that the largess has also made communities dependent on central government spending — and thus unwilling to rock the boat by pushing for robust safety measures.
  • Tsuneyoshi Adachi, a 63-year-old fisherman, joined the huge protests in the 1970s and 1980s against the plant’s No. 2 reactor. He said many fishermen were angry then because chlorine from the pumps of the plant’s No. 1 reactor, which began operating in 1974, was killing seaweed and fish in local fishing grounds. However, Mr. Adachi said, once compensation payments from the No. 2 reactor began to flow in, neighbors began to give him cold looks and then ignore him. By the time the No. 3 reactor was proposed in the early 1990s, no one, including Mr. Adachi, was willing to speak out against the plant. He said that there was the same peer pressure even after the accident at Fukushima, which scared many here because they live within a few miles of the Shimane plant. “Sure, we are all worried in our hearts about whether the same disaster could happen at the Shimane nuclear plant,” Mr. Adachi said. However, “the town knows it can no longer survive economically without the nuclear plant.”
  • ...1 more annotation...
  • Much of this flow of cash was the product of the Three Power Source Development Laws, a sophisticated system of government subsidies created in 1974 by Kakuei Tanaka, the powerful prime minister who shaped Japan’s nuclear power landscape and used big public works projects to build postwar Japan’s most formidable political machine. The law required all Japanese power consumers to pay, as part of their utility bills, a tax that was funneled to communities with nuclear plants. Officials at the Ministry of Economy, Trade and Industry, which regulates the nuclear industry, and oversees the subsidies, refused to specify how much communities have come to rely on those subsidies. “This is money to promote the locality’s acceptance of a nuclear plant,” said Tatsumi Nakano of the ministry’s Agency for Natural Resources and Energy.
Weiye Loh

Fukushima: The End of the Nuclear Renaissance? - Ecocentric - TIME.com - 0 views

  •  
    The environmental movement has a strange historical relationship with nuclear power. In many countries, opposition to nuclear reactors in the wake of Chernobyl gave rise to major Green political parties. Many environmentalists still oppose nuclear power--Greenpeace, for example, still calls for the phase-out of all reactors. But as climate change has taken over the Green agenda, other environmentalists have come to favor nuclear as part of a low-carbon energy mix. It was this confluence of factors--fading memories of Chernobyl and increased concern about greenhouse gases--that gave the nuclear industry such confidence just a few years ago. That confidence will have been deeply shaken by events in Japan.
Weiye Loh

After Egypt, now with tsunami news, CNA again a disgrace « Yawning Bread on W... - 0 views

  • Flicking from one channel to another, I often had to go past Channel NewsAsia (CNA). On two occasions, I stopped for a while to see for myself how they were reporting the Egyptian uprising compared to the others. It was pathetic. Their reports were not timely, nor did they have any depth. Where Al Jazeera and the BBC had leading figures like Mohamed ElBaradei and Amr Moussa on camera, together with regular on-scene interviews or phone interviews with the protestors themselves, and even CNN had the Facebook organiser Wael Ghonim, all CNA had was an unknown lecturer in Middle Eastern Studies from some institute or other in Singapore giving a thoroughly theoretical take, not on unfolding events, but on the background. And in a stiff studio setting.
  • This weekend, the bad news is the Richter 8.9 earthquake off the coast of Miyagi prefecture of Japan that produced a tsunami that was 10 metres high in places.
  • when I was at my father’s place, I wanted an update. All we had was CNA, and so I turned to it for the eleven o’clock news. They had a reporter reporting from Tokyo about how transport systems in the capital city were paralysed last night and people walked for hours to get home. This topic was already covered on last night’s news; it is being covered again tonight. No other news agency with any self-respect is making “walking home” such a big news story (or any news story at all) when people are dying. CNA then followed that up with reports from Changi airport about flights cancelled and how passengers were inconvenienced. Thirdly, they had an earth scientist on air to explain what causes tsunamis. To soak up the time, he then had to field about four questions from the host repeatedly asking him whether tsunamis could be predicted — as if this was the burning issue at the moment.
  • ...1 more annotation...
  • In the entire news bulletin, almost nothing was mentioned about the areas where the earthquake was most severe and the tsunami most devastating (i.e. the Sendai area). There was hardly any footage, no on-the-spot reporting, no casualty figures, nothing about how victims are putting up. OK, to be fair there were a few seconds showing people queuing up to get food and drinking water at one shop. Not a word about 10,000 people missing from Minamisanriku. Not even about rescue teams struggling to get to the worst areas. Amazingly, not a word was said, either, about the nuclear plants with overheating cores, or the hurried evacuations (that I learnt about online), at first a 3 km radius, then 10 km, and now 20 km... suggesting that the situation is probably out of control and may be becoming critical. To CNA, it is apparently not news. What was news was how horrid it was that middle-class Singaporeans were stuck at the airport, unable to go on holiday.
Weiye Loh

TODAYonline | Commentary | Science, shaken, must take stock - 0 views

  • Japan's part-natural, part-human disaster is an extraordinary event. As well as dealing with the consequences of an earthquake and tsunami, rescuers are having to evacuate thousands of people from the danger zone around Fukushima. In addition, the country is blighted by blackouts from the shutting of 10 or more nuclear plants. It is a textbook case of how technology can increase our vulnerability through unintended side-effects.
  • Yet there had been early warnings from scientists. In 2006, Professor Katsuhiko Ishibashi resigned from a Japanese nuclear power advisory panel, saying the policy of building in earthquake zones could lead to catastrophe, and that design standards for proofing them against damage were too lax. Further back, the seminal study of accidents in complex technologies was Professor Charles Perrow's Normal Accidents, published in 1984.
  • Things can go wrong with design, equipment, procedures, operators, supplies and the environment. Occasionally two or more will have problems simultaneously; in a complex technology such as a nuclear plant, the potential for this is ever-present.
  • ...9 more annotations...
  • in complex systems, "no matter how effective conventional safety devices are, there is a form of accident that is inevitable" - hence the term "normal accidents".
  • system accidents occur with many technologies: Take the example of a highway blow-out leading to a pile-up. This may have disastrous consequences for those involved but cannot be described as a disaster. The latter only happens when the technologies involved have the potential to affect many innocent bystanders. This "dread factor" is why the nuclear aspect of Japan's ordeal has come to dominate headlines, even though the tsunami has had much greater immediate impact on lives.
  • It is simply too early to say what precisely went wrong at Fukushima, and it has been surprising to see commentators speak with such speed and certainty. Most people accept that they will only ever have a rough understanding of the facts. But they instinctively ask if they can trust those in charge and wonder why governments support particular technologies so strongly.
  • Industry and governments need to be more straightforward with the public. The pretence of knowledge is deeply unscientific; a more humble approach where officials are frank about the unknowns would paradoxically engender greater trust.
  • Likewise, nuclear's opponents need to adopt a measured approach. We need a fuller democratic debate about the choices we are making. Catastrophic potential needs to be a central criterion in decisions about technology. Advice from experts is useful but the most significant questions are ethical in character.
  • If technologies can potentially have disastrous effects on large numbers of innocent bystanders, someone needs to represent their interests. We might expect this to be the role of governments, yet they have generally become advocates of nuclear power because it is a relatively low-carbon technology that reduces reliance on fossil fuels. Unfortunately, this commitment seems to have reduced their ability to be seen to act as honest brokers, something acutely felt at times like these, especially since there have been repeated scandals in Japan over the covering-up of information relating to accidents at reactors.
  • Post-Fukushima, governments in Germany, Switzerland and Austria already appear to be shifting their policies. Rational voices, such as Britain's chief scientific adviser John Beddington, are saying quite logically that we should not compare the events in Japan with the situation in Britain, which does not have the same earthquake risk. Unfortunately, such arguments are unlikely to prevail in the politics of risky technologies.
  • firms and investors involved in nuclear power have often failed to take regulatory and political risk into account; history shows that nuclear accidents can lead to tighter regulations, which in turn can increase nuclear costs. Further ahead, the proponents of hazardous technologies need to bear the full costs of their products, including insurance liabilities and the cost of independent monitoring of environmental and health effects. As it currently stands, taxpayers would pay for any future nuclear incident.
  • Critics of technology are often dubbed in policy circles as anti-science. Yet critical thinking is central to any rational decision-making process - it is less scientific to support a technology uncritically. Accidents happen with all technologies, and are regrettable but not disastrous so long as the technology does not have catastrophic potential; this raises significant questions about whether we want to adopt technologies that do have such potential.
Weiye Loh

Japan's devastation goes viral - Japan Earthquake - Salon.com - 0 views

  • Why is there such an appetite for such terrible images? There is, after all, very little satisfaction to be gained in watching a wall of water cut a swath across the coastland.
  • There may be a fair portion of the population that can't separate a truly ruinous tragedy from the kind of explosive spectacle you'd normally pay $11 a ticket for
  • but as videos of the frantic shock of the earthquake continue to roll in to YouTube, it's clear that horror and horrible entertainment value don't stand in tidy opposition to each other. There's too much that's real and human suddenly at stake.
  • ...1 more annotation...
  • "I demand slow motion footage of this atrocity," it's not just gruesome curiosity, or an unempathetic OMG factor, at work here. There is something profoundly humbling about seeing how fragile we truly are, how swiftly and easily everything can be wiped out. The footage from Japan is indeed spectacular. It's also a wrenching memento mori, a reminder of our mortality. Because with each breath we take, all of us are just little boats in the whirlpool, hoping to hang on through the storm. 
Weiye Loh

nanopolitan: From the latest issue of Current Science: Scientometric Analysis of Indian... - 0 views

  • We have carried out a three-part study comparing the research performance of Indian institutions with that of other international institutions. In the first part, the publication profiles of various Indian institutions were examined and ranked based on the h-index and p-index. We found that the institutions of national importance contributed the highest in terms of publications and citations per institution. In the second part of the study, we looked at the publication profiles of various Indian institutions in the high-impact journals and compared these profiles against that of the top Asian and US universities. We found that the number of papers in these journals from India was minuscule compared to the US universities. Recognizing that the publication profiles of various institutions depend on the field/departments, we studied [in Part III] the publication profiles of many science and engineering departments at the Indian Institute of Science (IISc), Bangalore, the Indian Institutes of Technology, as well as top Indian universities. Because the number of faculty in each department varies widely, we have computed the publications and citations per faculty per year for each department. We have also compared this with other departments in various Asian and US universities. We found that the top Indian institution based on various parameters in various disciplines was IISc, but overall even the top Indian institutions do not compare favourably with the top US or Asian universities.
  • The comparison groups of institutions include MIT, UMinn, Purdue, PSU, MSU, OSU, Caltech, UCB, UTexas (all from the US), National University of Singapore, Tsinghua University (China), Seoul National University (South Korea), National Taiwan University (Taiwan), Kyushu University (Japan) and Chinese Academy of Sciences.
  • ... [T]he number of papers in these [high impact] journals from India was minuscule compared to [that from] the US universities. ... [O]verall even the top Indian institutions do not compare favourably with the top US or Asian universities.
  •  
    Scientometric analysis of some disciplines: Comparison of Indian institutions with other international institutions
Weiye Loh

Short Sharp Science: Computer beats human at Japanese chess for first time - 0 views

  • A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time.
  • computers have been beating humans at western chess for years, and when IBM's Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity. That hasn't happened yet, but then western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, though, offering about 10^224 possible games.
  • Japan's national broadcaster, NHK, reported that Akara "aggressively pursued Shimizu from the beginning". It's the first time a computer has beaten a professional human player.
  • ...2 more annotations...
  • The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu's defeat was the first since a simpler computer system was beaten by a (male) champion, Akira Watanabe, in 2007.
  • Perhaps the association doesn't mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player.
  •  
    Computer beats human at Japanese chess for first time
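The game counts quoted in the annotation (about 10^123 for western chess, about 10^224 for shogi) are easiest to appreciate as a ratio. A small sketch using the excerpt's rough figures:

```python
# Rough game counts as quoted in the excerpt
chess_games = 10 ** 123   # western chess
shogi_games = 10 ** 224   # shogi

# Shogi's game tree is larger by 101 orders of magnitude
ratio = shogi_games // chess_games
print(ratio == 10 ** 101)   # True
```

In other words, shogi is not "a bit" more complex: its game tree dwarfs that of western chess by a factor of 10^101, which is why the shogi milestone came 13 years after Deep Blue.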
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • ...30 more annotations...
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
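The statistical machinery running through these annotations – Fisher's five-per-cent boundary, publication bias, and the funnel-graph signature of selective reporting – can be illustrated with a toy Monte Carlo sketch. This is a hedged illustration, not anyone's actual analysis: all numbers are invented, every simulated study has a true effect of exactly zero, and the normal approximation to the t-test is an assumption made to keep the sketch self-contained.

```python
import math
import random

random.seed(42)

def p_value(sample):
    """Two-sided p-value for 'mean differs from zero' (normal approximation)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return math.erfc(abs(mean) / math.sqrt(var / n) / math.sqrt(2))

# Run many studies of an effect whose true size is exactly zero.
studies = []
for _ in range(4000):
    n = random.choice([10, 40, 160])            # small, medium, large samples
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    studies.append((n, sum(sample) / n, p_value(sample)))

# 1. Fisher's boundary: chance alone clears p < 0.05 roughly 5% of the time.
fp_rate = sum(p < 0.05 for _, _, p in studies) / len(studies)

# 2. Publication bias: suppose journals print only significant positive results.
published = [(n, e) for n, e, p in studies if p < 0.05 and e > 0]

# 3. Funnel-graph signature: among published studies, small samples show
#    inflated effects while large samples cluster near the (zero) truth.
small_effects = [e for n, e in published if n == 10]
large_effects = [e for n, e in published if n == 160]

print(round(fp_rate, 3))                         # near Fisher's 5% boundary
print(round(sum(small_effects) / len(small_effects), 2))   # inflated
print(round(sum(large_effects) / len(large_effects), 2))   # near zero
```

Because the true effect is zero throughout, every "significant" published result here is noise – yet the published record, especially at small sample sizes, looks like a literature full of real effects, which is exactly the distortion Palmer's funnel graphs expose.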
Weiye Loh

The Breakthrough Institute: ANALYSIS: Nuclear Moratorium in Germany Could Cause Spike i... - 0 views

  • The German government announced today that it will shut down seven of the country's seventeen nuclear power plants for an indefinite period, a decision taken in response to widespread protests and a German public increasingly fearful of nuclear power after a nuclear emergency in Japan. The decision places a moratorium on a law that would extend the lifespan of these plants, and is uncharacteristic of Angela Merkel, whose government previously overturned its predecessor's decision to phase nuclear out of Germany's energy supply.
  • The seven plants, each built before 1980, represent 30% of Germany's nuclear electricity generation and 24% of its gross installed nuclear capacity. Shutting down these plants, or even just placing an indefinite hold on their operation, would be a major loss of zero-emissions generation capacity for Germany. The country currently relies on nuclear power from its seventeen nuclear power plants for about a quarter of its electricity supply.
  • The long-term closure of these plants would therefore seriously challenge Germany's carbon emissions efforts, as the country tries to meet its goal of a 40% reduction below 1990 carbon emissions levels by 2020.
  • The moratorium could cause a spike in CO2 emissions as Germany turns to its other, more carbon-intensive sources to supply its energy demand. Already, the country has been engaged in a "dash for coal", building dozens of new coal plants in response to the perverse incentives and intense lobbying from the coal industries made possible by the European Emissions Trading Scheme. (As previously reported by Breakthrough Europe).
  • if lost generation were made up for entirely by coal-fired plants, carbon emissions would increase annually by as much as 33 million tons. This would represent an overall 4% annual increase in carbon emissions for the country and an 8% increase in carbon emissions for the power sector alone.
  • Alternatively, should the country try to replace lost generation entirely with power from renewables, it would need to more than double generation of renewable energy, from where it currently stands at 97 billion kWh to about 237 billion kWh. As part of the country's low-carbon strategy, Germany has planned to deploy at least 20% renewable energy sources by 2020. If the nation now chooses to meet this goal by displacing nuclear plants, 2020 emissions levels would be higher than had the country otherwise phased out its carbon-intensive coal or natural gas plants.
  • *Carbon emissions factors used are those estimated by the World Bank in 2009 for new coal-fired power plants (0.795 t CO2/MWh) and new gas-fired power plants (0.398 t CO2/MWh). **Data from Carbon Monitoring For Action, European Nuclear Society Data, and US Energy Information Administration
  •  
    Carbon dioxide emissions in Germany may increase by 4 percent annually in response to a moratorium on seven of the country's oldest nuclear power plants, as power generation is shifted from nuclear power, a zero carbon source, to the other carbon-intensive energy sources that currently make up the country's energy supply.
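The excerpt's headline numbers can be checked with back-of-the-envelope arithmetic using the World Bank coal factor quoted in the footnote. The generation and total-emissions figures below are outside assumptions, not stated in the excerpt: German nuclear output was roughly 140 TWh per year around 2010, against total national emissions of roughly 830 Mt CO2 per year.

```python
# Rough check of the "33 million tons" and "4% annual increase" claims.
# Assumptions (not from the excerpt): ~140 TWh/yr of German nuclear
# generation around 2010, and ~830 Mt CO2/yr of total German emissions.
annual_nuclear_twh = 140.0
moratorium_share = 0.30            # the seven idled plants: 30% of nuclear output
coal_factor_t_per_mwh = 0.795      # World Bank 2009 figure for new coal plants

lost_generation_mwh = annual_nuclear_twh * 1e6 * moratorium_share
added_co2_mt = lost_generation_mwh * coal_factor_t_per_mwh / 1e6
print(round(added_co2_mt, 1))      # 33.4 -- consistent with "as much as 33 million tons"

total_emissions_mt = 830.0
increase_pct = 100 * added_co2_mt / total_emissions_mt
print(round(increase_pct, 1))      # 4.0 -- consistent with the "4% annual increase"
```

Replacing the same lost generation with gas at the footnote's 0.398 t CO2/MWh would roughly halve the added emissions, which is why the article quotes both factors as bounding cases.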
Weiye Loh

Camera prettifies subjects, even adds makeup | Reuters - 0 views

  •  
    The LUMIX FX77, released last Friday, has a "beauty re-touch" function that will whiten your teeth, increase the translucency of your skin, remove dark eye circles, make your face look smaller and even magnify the size of your eyes. For the final touch, it will apply rouge, lipstick and even eye shadow.
Weiye Loh

Radiation Chart « xkcd - 0 views

  • I figured a broad comparison of different types of dosages might be good anyway. I don’t include too much about the Fukushima reactor because the situation seems to be changing by the hour, but I hope the chart provides some helpful context. (Click to view full)
Weiye Loh

Let There Be More Efficient Light - NYTimes.com - 0 views

  • LAST week Michele Bachmann, a Republican representative from Minnesota, introduced a bill to roll back efficiency standards for light bulbs, which include a phasing out of incandescent bulbs in favor of more energy-efficient bulbs. The “government has no business telling an individual what kind of light bulb to buy,” she declared.
  • But this opposition ignores another, more important bit of American history: the critical role that government-mandated standards have played in scientific and industrial innovation.
  • inventions alone weren’t enough to guarantee progress. Indeed, at the time the lack of standards for everything from weights and measures to electricity — even the gallon, for example, had eight definitions — threatened to overwhelm industry and consumers with a confusing array of incompatible choices.
  • This wasn’t the case everywhere. Germany’s standards agency, established in 1887, was busy setting rules for everything from the content of dyes to the process for making porcelain; other European countries soon followed suit. Higher-quality products, in turn, helped the growth in Germany’s trade exceed that of the United States in the 1890s. America finally got its act together in 1894, when Congress standardized the meaning of what are today common scientific measures, including the ohm, the volt, the watt and the henry, in line with international metrics. And, in 1901, the United States became the last major economic power to establish an agency to set technological standards. The result was a boom in product innovation in all aspects of life during the 20th century. Today we can go to our hardware store and choose from hundreds of light bulbs that all conform to government-mandated quality and performance standards.
  • Technological standards not only promote innovation — they also can help protect one country’s industries from falling behind those of other countries. Today China, India and other rapidly growing nations are adopting standards that speed the deployment of new technologies. Without similar requirements to manufacture more technologically advanced products, American companies risk seeing the overseas markets for their products shrink while innovative goods from other countries flood the domestic market. To prevent that from happening, America needs not only to continue developing standards, but also to devise a strategy to apply them consistently and quickly.
  • The best approach would be to borrow from Japan, whose Top Runner program sets energy-efficiency standards by identifying technological leaders in a particular industry — say, washing machines — and mandating that the rest of the industry keep up. As technologies improve, the standards change as well, enabling a virtuous cycle of improvement. At the same time, the government should work with businesses to devise multidimensional standards, so that consumers don’t balk at products because they sacrifice, say, brightness and cost for energy efficiency.
  • This is not to say that innovation doesn’t bring disruption, and American policymakers can’t ignore the jobs that are lost when government standards sweep older technologies into the dustbin of history. An effective way forward on light bulbs, then, would be to apply standards only to those manufacturers that produce or import in large volume. Meanwhile, smaller, legacy light-bulb producers could remain, cushioning the blow to workers and meeting consumer demand.
  • Technologies and the standards that guide their deployment have revolutionized American society. They’ve been so successful, in fact, that the role of government has become invisible — so much so that even members of Congress should be excused for believing the government has no business mandating your choice of light bulbs.
Weiye Loh

Have you heard of the Koch Brothers? | the kent ridge common - 0 views

  • I return to the Guardian online site expressly to search for those elusive articles on Wisconsin. The main page has none. I click on News – US, and there are none. I click on 'Commentary is Free' – US, and find one article on protests in Ohio. I go to the New York Times online site. Earlier, on my phone, I had seen one article at the bottom of the main page on Wisconsin. By the time I managed to get on my computer to find it again, however, the NYT main page was quite devoid of any articles on the protests at all. I am stumped; clearly, I have to reconfigure my daily news sources and reading diet.
  • It is not that the media is not covering the protests in Wisconsin at all – but effective media coverage in the US at least, in my view, is as much about volume as it is about substantive coverage. That week, more prime-time slots and the bulk of the US national attention were given to Charlie Sheen and his crazy antics (whatever they were about, I am still not too sure) than to Libya and the rest of the Middle East, or more significantly, to a pertinent domestic issue, the teacher protests – not just in Wisconsin but also in other cities in the north-eastern part of the US.
  • In the March 2nd episode of The Colbert Report, it was shown that the Fox News coverage of the Wisconsin protests had re-used footage from more violent protests in California (the palm trees in the background gave Fox News away). Bill O’Reilly at Fox News had apparently issued an apology – but how many viewers who had seen the footage and believed it to be on-the-ground footage of Wisconsin would have followed-up on the report and the apology? And anyway, why portray the teacher protests as violent?
  • In this New York Times’ article, “Teachers Wonder, Why the scorn?“, the writer notes the often scathing comments from counter-demonstrators – “Oh you pathetic teachers, read the online comments and placards of counterdemonstrators. You are glorified baby sitters who leave work at 3 p.m. You deserve minimum wage.” What had begun as an ostensibly ‘economic reform’ targeted at teachers’ unions has gradually transmogrified into a kind of “character attack” to this section of American society – teachers are people who wage violent protests (thanks to borrowed footage from the West Coast) and they are undeserving of their economic benefits, and indeed treat these privileges as ‘rights’. The ‘war’ is waged on multiple fronts, economic, political, social, psychological even — or at least one gets this sort of picture from reading these articles.
  • as Singaporeans with a uniquely Singaporean work ethic, we may perceive functioning ‘trade unions’ as those institutions in the so-called “West” where they amass lots of membership, then hold the government ‘hostage’ in order to negotiate higher wages and benefits. Think of trade unions in the Singaporean context, and I think of SIA pilots. And of LKY’s various firm and stern comments on those issues. Think of trade unions and I think of strikes in France, in South Korea, when I was younger, and of my mum saying, “How irresponsible!” before flipping the TV channel.
  • The reason why I think the teachers’ protests should not be seen solely as an issue about trade-unions, and evaluated myopically and naively in terms of whether trade unions are ‘good’ or ‘bad’ is because the protests feature in a larger political context with the billionaire Koch brothers at the helm, financing and directing much of what has transpired in recent weeks. Or at least according to certain articles which I present here.
  • In this NYT article entitled “Billionaire Brothers’ Money Plays Role in Wisconsin Dispute“, the writer noted that Koch Industries had been “one of the biggest contributors to the election campaign of Gov. Scott Walker of Wisconsin, a Republican who has championed the proposed cuts.” Further, the president of Americans for Prosperity, a nonprofit group financed by the Koch brothers, had reportedly addressed counter-demonstrators last Saturday saying that “the cuts were not only necessary, but they also represented the start of a much-needed nationwide move to slash public-sector union benefits.” and in his own words -“ ‘We are going to bring fiscal sanity back to this great nation’ ”. All this rhetoric would be more convincing to me if they weren’t funded by the same two billionaires who financially enabled Walker’s governorship.
  • I now refer you to a long piece by Jane Mayer for The New Yorker titled, “Covert Operations: The billionaire brothers who are waging a war against Obama“. According to her, “The Kochs are longtime libertarians who believe in drastically lower personal and corporate taxes, minimal social services for the needy, and much less oversight of industry—especially environmental regulation. These views dovetail with the brothers’ corporate interests.”
  • Their libertarian modus operandi involves great expenses in lobbying, in political contributions and in setting up think tanks. From 2006-2010, Koch Industries have led energy companies in political contributions; “[i]n the second quarter of 2010, David Koch was the biggest individual contributor to the Republican Governors Association, with a million-dollar donation.” More statistics, or at least those of the non-anonymous donation records, can be found on page 5 of Mayer’s piece.
  • Naturally, the Democrats also have their billionaire donors, most notably in the form of George Soros. Mayer writes that he has made ‘generous private contributions to various Democratic campaigns, including Obama’s.” Yet what distinguishes him from the Koch brothers here is, as Michael Vachon, his spokesman, argued, ‘that Soros’s giving is transparent, and that “none of his contributions are in the service of his own economic interests.” ‘ Of course, this must be taken with a healthy dose of salt, but I will note here that in Charles Ferguson’s documentary Inside Job, which was about the 2008 financial crisis, George Soros was one of those interviewed who was not portrayed negatively. (My review of it is here.)
  • Of the Koch brothers’ political investments, what interested me more was the US’ “first libertarian thinktank”, the Cato Institute. Mayer writes, ‘When President Obama, in a 2008 speech, described the science on global warming as “beyond dispute,” the Cato Institute took out a full-page ad in the Times to contradict him. Cato’s resident scholars have relentlessly criticized political attempts to stop global warming as expensive, ineffective, and unnecessary. Ed Crane, the Cato Institute’s founder and president, told [Mayer] that “global-warming theories give the government more control of the economy.” ‘
  • K Street refers to a major street in Washington, D.C. where major think tanks, lobbyists and advocacy groups are located.
  • with recent developments as the Citizens United case where corporations are now ‘persons’ and have no caps in political contributions, the Koch brothers are ever better-positioned to take down their perceived big, bad government and carry out their ideological agenda as sketched in Mayer’s piece
  • with much important news around the world jostling for our attention – earthquake in Japan, Middle East revolutions – the passing of an anti-union bill (which finally happened today, for better or for worse) in an American state is unlikely to make a headline able to compete with natural disasters and revolutions. Then, to quote Wisconsin Governor Scott Walker during that prank call conversation, “Sooner or later the media stops finding it [the teacher protests] interesting.”
  • What remains more puzzling for me is why the American public seems to buy into the Koch-funded libertarian rhetoric. Mayer writes, ‘ “Income inequality in America is greater than it has been since the nineteen-twenties, and since the seventies the tax rates of the wealthiest have fallen more than those of the middle class. Yet the brothers’ message has evidently resonated with voters: a recent poll found that fifty-five per cent of Americans agreed that Obama is a socialist.” I suppose that not knowing who is funding the political rhetoric makes it easier for the public to imbibe it.
Weiye Loh

Roger Pielke Jr.'s Blog: The Guardian on Difficult Energy Choices - 0 views

  • For all the emotive force of events in Japan, though, this is one issue where there is a pressing need to listen to what our heads say about the needs of the future, as opposed to subjecting ourselves to jittery whims of the heart. One of the few solid lessons to emerge from the aged Fukushima plant is that the tendency in Britain and elsewhere to postpone politically painful choices about building new nuclear stations by extending the life-spans of existing ones is dangerous. Beyond that, with or without Fukushima, the undisputed nastiness of nuclear – the costs, the risks and the waste – still need to be carefully weighed in the balance against the different poisons pumped out by coal, which remains the chief economic alternative. Most of the easy third ways are illusions. Energy efficiency has been improving for over 200 years, but it has worked to increase not curb demand. Off-shore wind remains so costly that market forces would simply push pollution overseas if it were taken up in a big way. A massive expansion of shale gas may yet pave the way to a plausible non-nuclear future, and it certainly warrants close examination. The fundamentals of the difficult decisions ahead, however, have not moved with the Earth.
  •  
    The Guardian hits the right note on energy policy choices in the aftermath of the still unfolding Japanese nuclear crisis:
Weiye Loh

Skepticblog » Litigation gone wild! A geologist's take on the Italian seismol... - 0 views

  • Apparently, an Italian lab technician named Giampaolo Giuliani made a prediction about a month before the quake, based on elevated levels of radon gas. However, seismologists have known for a long time that radon levels, like any other “magic bullet” precursor, are unreliable because no two quakes are alike, and no two quakes give the same precursors. Nevertheless, his prediction caused a furor before the quake actually happened. The Director of the Civil Defence, Guido Bertolaso, forced him to remove his findings from the Internet (old versions are still on line). Giuliani was also reported to the police for “causing fear” with his predictions about a quake near Sulmona, which was far from where the quake actually struck. Enzo Boschi, the head of the Italian National Geophysics Institute, declared: “Every time there is an earthquake there are people who claim to have predicted it. As far as I know nobody predicted this earthquake with precision. It is not possible to predict earthquakes.” Most of the geological and geophysical organizations around the world made similar statements in support of the proper scientific procedures adopted by the Italian geophysical community. They condemned Giuliani for scaring people using a method that has not been shown to be reliable.
  • most of the press coverage I have read (including many cited above) took the sensationalist approach, and cast Giuliani as the little “David” fighting against the “Goliath” of “Big Science”
  • none of the reporters bothered to do any real background research, or consult with any other legitimate seismologist who would confirm that there is no reliable way to predict earthquakes in the short term, and that Giuliani is misleading people when he claims otherwise. Giuliani’s “prediction” was sheer luck, and if he had failed, no one would have mentioned it again.
  • ...4 more annotations...
  • Even though he believes in his method, he ignores the huge body of evidence that shows radon gas is no more reliable than any other “predictor”.
  • If the victims insist on suing someone, they should leave the seismologists alone and look into the construction of some of those buildings. The stories out of L’Aquila suggest that the death toll was much higher because of official corruption and shoddy construction, as happens in many countries both before and after big quakes.
  • much of the construction is apparently Mafia-controlled in that area—good luck suing them! Sadly, the ancient medieval buildings that crumbled were the most vulnerable because they were made of unreinforced masonry, the worst possible construction material in earthquake country
  • what does this imply for scientists who are working in a field that might have predictive power? In a litigious society like Italy or the U.S., this is a serious question. If a reputable seismologist does make a prediction and fails, he’s liable, because people will panic and make foolish decisions and then blame the seismologist for their losses. Now the Italian courts are saying that (despite world scientific consensus) seismologists are liable if they don’t predict quakes. They’re damned if they do, and damned if they don’t. In some societies where seismologists work hard at prediction and preparation (such as China and Japan), there is no precedent for suing scientists for doing their jobs properly, and the society and court system does not encourage people to file frivolous suits. But in litigious societies, the system is counterproductive, and stifles research that we would like to see developed. What seismologist would want to work on earthquake prediction if they can be sued? I know of many earth scientists with brilliant ideas not only about earthquake prediction but even ways to defuse earthquakes, slow down global warming, or many other incredible but risky brainstorms—but they dare not propose the idea seriously or begin to implement it for fear of being sued.
    In the case of most natural disasters, people usually regard such events as "acts of God" and try to get on with their lives as best they can. No human cause is responsible for great earthquakes, tsunamis, volcanic eruptions, tornadoes, hurricanes, or floods. But in the bizarre world of the Italian legal system, six seismologists and a public official have been charged with manslaughter for NOT predicting the quake! My colleagues in the earth science community were incredulous and staggered at this news. Seismologists and geologists have been saying for decades (at least since the 1970s) that short-term earthquake prediction (within minutes to hours of the event) is impossible, and anyone who claims otherwise is lying. As Charles Richter himself said, "Only fools, liars, and charlatans predict earthquakes." How could anyone then go to court and sue seismologists for following proper scientific procedures?
Weiye Loh

English: Who speaks English? | The Economist - 0 views

  • This was not a statistically controlled study: the subjects took a free test online and of their own accord.  They were by definition connected to the internet and interested in testing their English; they will also be younger and more urban than the population at large.
  • But Philip Hult, the boss of EF, says that his sample shows results similar to a more scientifically controlled but smaller study by the British Council.
  • Wealthy countries do better overall. But smaller wealthy countries do better still: the larger the number of speakers of a country’s main language, the worse that country tends to be at English. This is one reason Scandinavians do so well: what use is Swedish outside Sweden?  It may also explain why Spain was the worst performer in western Europe, and why Latin America was the worst-performing region: Spanish’s role as an international language in a big region dampens incentives to learn English.
  • ...4 more annotations...
  • Export dependency is another correlate with English. Countries that export more are better at English (though it’s not clear which factor causes which).  Malaysia, the best English-performer in Asia, is also the sixth-most export-dependent country in the world.  (Singapore was too small to make the list, or it probably would have ranked similarly.) This is perhaps surprising, given a recent trend towards anti-colonial and anti-Western sentiment in Malaysia’s politics. The study’s authors surmise that English has become seen as a mere tool, divorced in many minds from its associations with Britain and America.
  • Teaching plays a role, too. Starting young, while it seems a good idea, may not pay off: children between eight and 12 learn foreign languages faster than younger ones, so each class hour on English is better spent on a 10-year-old than on a six-year-old.
  • Between 1984 and 2000, the study's authors say, the Netherlands and Denmark began English-teaching between 10 and 12, while Spain and Italy began between eight and 11, with considerably worse results. Mr Hult reckons that poor methods, particularly the rote learning he sees in Japan, can be responsible for poor results despite strenuous efforts.
  • one surprising result is that China and India are next to each other (29th and 30th of 44) in the rankings, despite India’s reputation as more Anglophone. Mr Hult says that the Chinese have made a broad push for English (they're "practically obsessed with it”). But efforts like this take time to marinade through entire economies, and so may have avoided notice by outsiders. India, by contrast, has long had well-known Anglophone elites, but this is a narrow slice of the population in a country considerably poorer and less educated than China. English has helped India out-compete China in services, while China has excelled in manufacturing. But if China keeps up the push for English, the subcontinental neighbour's advantage may not last.