New Media Ethics 2009 course: Group items tagged "times"

Weiye Loh

Have you heard of the Koch Brothers? | the kent ridge common - 0 views

  • I return to the Guardian online site expressly to search for those elusive articles on Wisconsin. The main page has none. I click on News – US, and there are none. I click on ‘Comment is free’ – US, and find one article on protests in Ohio. I go to the New York Times online site. Earlier, on my phone, I had seen one article at the bottom of the main page on Wisconsin. By the time I managed to get on my computer to find it again, however, the NYT main page was quite devoid of any articles on the protests at all. I am stumped; clearly, I have to reconfigure my daily news sources and reading diet.
  • It is not that the media is not covering the protests in Wisconsin at all – but effective media coverage in the US at least, in my view, is as much about volume as it is about substantive coverage. That week, more prime-time slots and the bulk of the US national attention were given to Charlie Sheen and his crazy antics (whatever they were about, I am still not too sure) than to Libya and the rest of the Middle East, or more significantly, to a pertinent domestic issue, the teacher protests  - not just in Wisconsin but also in other cities in the north-eastern part of the US.
  • In the March 2nd episode of The Colbert Report, it was shown that the Fox News coverage of the Wisconsin protests had re-used footage from more violent protests in California (the palm trees in the background gave Fox News away). Bill O’Reilly at Fox News had apparently issued an apology – but how many viewers who had seen the footage and believed it to be on-the-ground footage of Wisconsin would have followed up on the report and the apology? And anyway, why portray the teacher protests as violent?
  • ...12 more annotations...
  • In this New York Times’ article, “Teachers Wonder, Why the Scorn?”, the writer notes the often scathing comments from counter-demonstrators – “Oh you pathetic teachers, read the online comments and placards of counterdemonstrators. You are glorified baby sitters who leave work at 3 p.m. You deserve minimum wage.” What had begun as an ostensibly ‘economic reform’ targeted at teachers’ unions has gradually transmogrified into a kind of “character attack” on this section of American society – teachers are people who wage violent protests (thanks to borrowed footage from the West Coast), they are undeserving of their economic benefits, and indeed they treat these privileges as ‘rights’. The ‘war’ is waged on multiple fronts, economic, political, social, even psychological — or at least one gets this sort of picture from reading these articles.
  • as Singaporeans with a uniquely Singaporean work ethic, we may perceive functioning ‘trade unions’ as those institutions in the so-called “West” where they amass lots of membership, then hold the government ‘hostage’ in order to negotiate higher wages and benefits. Think of trade unions in the Singaporean context, and I think of SIA pilots. And of LKY’s various firm and stern comments on those issues. Think of trade unions and I think of strikes in France, in South Korea, when I was younger, and of my mum saying, “How irresponsible!” before flipping the TV channel.
  • The reason why I think the teachers’ protests should not be seen solely as an issue about trade-unions, and evaluated myopically and naively in terms of whether trade unions are ‘good’ or ‘bad’ is because the protests feature in a larger political context with the billionaire Koch brothers at the helm, financing and directing much of what has transpired in recent weeks. Or at least according to certain articles which I present here.
  • In this NYT article entitled “Billionaire Brothers’ Money Plays Role in Wisconsin Dispute“, the writer noted that Koch Industries had been “one of the biggest contributors to the election campaign of Gov. Scott Walker of Wisconsin, a Republican who has championed the proposed cuts.” Further, the president of Americans for Prosperity, a nonprofit group financed by the Koch brothers, had reportedly addressed counter-demonstrators last Saturday saying that “the cuts were not only necessary, but they also represented the start of a much-needed nationwide move to slash public-sector union benefits.” and in his own words -“ ‘We are going to bring fiscal sanity back to this great nation’ ”. All this rhetoric would be more convincing to me if they weren’t funded by the same two billionaires who financially enabled Walker’s governorship.
  • I now refer you to a long piece by Jane Mayer for The New Yorker titled, “Covert Operations: The billionaire brothers who are waging a war against Obama“. According to her, “The Kochs are longtime libertarians who believe in drastically lower personal and corporate taxes, minimal social services for the needy, and much less oversight of industry—especially environmental regulation. These views dovetail with the brothers’ corporate interests.”
  • Their libertarian modus operandi involves great expense in lobbying, political contributions and the setting up of think tanks. From 2006 to 2010, Koch Industries has led energy companies in political contributions; “[i]n the second quarter of 2010, David Koch was the biggest individual contributor to the Republican Governors Association, with a million-dollar donation.” More statistics, or at least those of the non-anonymous donation records, can be found on page 5 of Mayer’s piece.
  • Naturally, the Democrats also have their billionaire donors, most notably in the form of George Soros. Mayer writes that he has made “generous private contributions to various Democratic campaigns, including Obama’s.” Yet what distinguishes him from the Koch brothers here is, as Michael Vachon, his spokesman, argued, that Soros’s giving is transparent, and that “none of his contributions are in the service of his own economic interests.” Of course, this must be taken with a healthy dose of salt, but I will note here that in Charles Ferguson’s documentary Inside Job, which was about the 2008 financial crisis, George Soros was one of those interviewed who was not portrayed negatively. (My review of it is here.)
  • Of the Koch brothers’ political investments, what interested me more was the US’ “first libertarian thinktank”, the Cato Institute. Mayer writes, ‘When President Obama, in a 2008 speech, described the science on global warming as “beyond dispute,” the Cato Institute took out a full-page ad in the Times to contradict him. Cato’s resident scholars have relentlessly criticized political attempts to stop global warming as expensive, ineffective, and unnecessary. Ed Crane, the Cato Institute’s founder and president, told [Mayer] that “global-warming theories give the government more control of the economy.” ‘
  • K Street refers to a major street in Washington, D.C. where major think tanks, lobbyists and advocacy groups are located.
  • with recent developments such as the Citizens United case, where corporations are now ‘persons’ and face no caps on political contributions, the Koch brothers are ever better positioned to take down their perceived big, bad government and carry out the ideological agenda sketched in Mayer’s piece.
  • with much important news around the world jostling for our attention – earthquake in Japan, Middle East revolutions – the passing of an anti-union bill (which finally happened today, for better or for worse) in an American state is unlikely to make a headline able to compete with natural disasters and revolutions. Then, to quote Wisconsin Governor Scott Walker during that prank call conversation, “Sooner or later the media stops finding it [the teacher protests] interesting.”
  • What remains more puzzling for me is why the American public seems to buy into the Koch-funded libertarian rhetoric. Mayer writes: “Income inequality in America is greater than it has been since the nineteen-twenties, and since the seventies the tax rates of the wealthiest have fallen more than those of the middle class. Yet the brothers’ message has evidently resonated with voters: a recent poll found that fifty-five per cent of Americans agreed that Obama is a socialist.” I suppose that not knowing who is funding the political rhetoric makes it easier for the public to imbibe it.
Weiye Loh

Do peer reviewers get worse with experience? Plus a poll « Retraction Watch - 0 views

  • We’re not here to defend peer review against its many critics. We have the same feelings about it that Churchill did about democracy, aka the worst form of government except for all those others that have been tried. Of course, a good number of the retractions we write about are due to misconduct, and it’s not clear how peer review, no matter how good, would detect out-and-out fraud.
  • With that in mind, a paper published last week in the Annals of Emergency Medicine caught our eye. Over 14 years, 84 editors at the journal rated close to 15,000 reviews by about 1,500 reviewers. Highlights of their findings: …92% of peer reviewers deteriorated during 14 years of study in the quality and usefulness of their reviews (as judged by editors at the time of decision), at rates unrelated to the length of their service (but moderately correlated with their mean quality score, with better-than-average reviewers decreasing at about half the rate of those below average). Only 8% improved, and those by a very small amount.
  • The average reviewer in our study would have taken 12.5 years to reach this threshold; only 3% of reviewers whose quality decreased would have reached it in less than 5 years, and even the worst would take 3.2 years. Another 35% of all reviewers would reach the threshold in 5 to 10 years, 28% in 10 to 15 years, 12% in 15 to 20 years, and 22% in 20 years or more. So the decline was slow. Still, the results, note the authors, were surprising: Such a negative overall trend is contrary to most editors’ and reviewers’ intuitive expectations and beliefs about reviewer skills and the benefits of experience.
  • ...4 more annotations...
  • What could account for this decline? The study’s authors say it might be the same sort of decline you generally see as people get older. This is well-documented in doctors, so why shouldn’t it be true of doctors — and others — who peer review?
  • Other than the well-documented cognitive decline of humans as they age, there are other important possible causes of deterioration of performance that may play a role among scientific reviewers. Examples include premature closure of decisionmaking, less compliance with formal structural review requirements, and decay of knowledge base with time (ie, with aging more of the original knowledge base acquired in training becomes out of date). Most peer reviewers say their reviews have changed with experience, becoming shorter and focusing more on methods and larger issues; only 25% think they have improved.
  • Decreased cognitive performance capability may not be the only or even chief explanation. Competing career activities and loss of motivation as tasks become too familiar may contribute as well, by decreasing the time and effort spent on the task. Some research has concluded that the decreased productivity of scientists as they age is due not to different attributes or access to resources but to “investment motivation.” This is another way of saying that competition for the reviewer’s time (which is usually uncompensated) increases with seniority, as they develop (more enticing) opportunities for additional peer review, research, administrative, and leadership responsibilities and rewards. However, from the standpoint of editors and authors (or patients), whether the cause of the decrease is decreasing intrinsic cognitive ability or diminished motivation and effort does not matter. The result is the same: a less rigorous review by which to judge articles
  • What can be done? The authors recommend “deliberate practice,” which involves assessing one’s skills, accurately identifying areas of relative weakness, performing specific exercises designed to improve and extend those weaker skills, and investing high levels of concentration and hundreds or thousands of hours in the process. A key component of deliberate practice is immediate feedback on one’s performance. There’s a problem: But acting on prompt feedback (to guide deliberate practice) would be almost impossible for peer reviewers, who typically get no feedback (and qualitative research reveals this is one of their chief complaints).
Olivia Chang

Boys in blue raid Perth's Sunday Times - 2 views

Story URL: http://www.somebodythinkofthechildren.com/boys-in-blue-raid-perths-sunday-times/ Summary of the article: In February, Sunday Times reporter Paul Lampathakis wrote an article about a ...

censorship free press

started by Olivia Chang on 02 Sep 09 no follow-up yet
Weiye Loh

Skepticblog » Further Thoughts on the Ethics of Skepticism - 0 views

  • My recent post “The War Over ‘Nice’” (describing the blogosphere’s reaction to Phil Plait’s “Don’t Be a Dick” speech) has topped out at more than 200 comments.
  • Many readers appear to object (some strenuously) to the very ideas of discussing best practices, seeking evidence of efficacy for skeptical outreach, matching strategies to goals, or encouraging some methods over others. Some seem to express anger that a discussion of best practices would be attempted at all. 
  • No Right or Wrong Way? The milder forms of these objections run along these lines: “Everyone should do their own thing.” “Skepticism needs all kinds of approaches.” “There’s no right or wrong way to do skepticism.” “Why are we wasting time on these abstract meta-conversations?”
  • ...12 more annotations...
  • More critical, in my opinion, is the implication that skeptical research and communication happens in an ethical vacuum. That just isn’t true. Indeed, it is dangerous for a field which promotes and attacks medical treatments, accuses people of crimes, opines about law enforcement practices, offers consumer advice, and undertakes educational projects to pretend that it is free from ethical implications — or obligations.
  • there is no monolithic “one true way to do skepticism.” No, the skeptical world does not break down to nice skeptics who get everything right, and mean skeptics who get everything wrong. (I’m reminded of a quote: “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”) No one has all the answers. Certainly I don’t, and neither does Phil Plait. Nor has anyone actually proposed a uniform, lockstep approach to skepticism. (No one has any ability to enforce such a thing, in any event.)
  • However, none of that implies that all approaches to skepticism are equally valid, useful, or good. As in other fields, various skeptical practices do more or less good, cause greater or lesser harm, or generate various combinations of both at the same time. For that reason, skeptics should strive to find ways to talk seriously about the practices and the ethics of our field. Skepticism has blossomed into something that touches a lot of lives — and yet it is an emerging field, only starting to come into its potential. We need to be able to talk about that potential, and about the pitfalls too.
  • All of the fields from which skepticism borrows (such as medicine, education, psychology, journalism, history, and even arts like stage magic and graphic design) have their own standards of professional ethics. In some cases those ethics are well-explored professional fields in their own right (consider medical ethics, a field with its own academic journals and doctoral programs). In other cases those ethical guidelines are contested, informal, vague, or honored more in the breach. But in every case, there are serious conversations about the ethical implications of professional practice, because those practices impact people’s lives. Why would skepticism be any different?
  • Skeptrack speaker Barbara Drescher (a cognitive psychologist who teaches research methodology) described the complexity of research ethics in her own field. Imagine, she said, that a psychologist were to ask research subjects a question like, “Do your parents like the color red?” Asking this may seem trivial and harmless, but it is nonetheless an ethical trade-off with associated risks (however small) that psychological researchers are ethically obliged to confront. What harm might that question cause if a research subject suffers from erythrophobia, or has a sick parent — or saw their parents stabbed to death?
  • When skeptics undertake scientific, historical, or journalistic research, we should (I argue) consider ourselves bound by some sort of research ethics. For now, we’ll ignore the deeper, detailed question of what exactly that looks like in practical terms (when can skeptics go undercover or lie to get information? how much research does due diligence require? and so on). I’d ask only that we agree on the principle that skeptical research is not an ethical free-for-all.
  • when skeptics communicate with the public, we take on further ethical responsibilities — as do doctors, journalists, and teachers. We all accept that doctors are obliged to follow some sort of ethical code, not only of due diligence and standard of care, but also in their confidentiality, manner, and the factual information they disclose to patients. A sentence that communicates a diagnosis, prescription, or piece of medical advice (“you have cancer” or “undertake this treatment”) is not a contextless statement, but a weighty, risky, ethically serious undertaking that affects people’s lives. It matters what doctors say, and it matters how they say it.
  • Grassroots Ethics: It happens that skepticism is my professional field. It’s natural that I should feel bound by the central concerns of that field. How can we gain reliable knowledge about weird things? How can we communicate that knowledge effectively? And, how can we pursue that practice ethically?
  • At the same time, most active skeptics are not professionals. To what extent should grassroots skeptics feel obligated to consider the ethics of skeptical activism? Consider my own status as a medical amateur. I almost need super-caps-lock to explain how much I am not a doctor. My medical training began and ended with a couple First Aid courses (and those way back in the day). But during those short courses, the instructors drummed into us the ethical considerations of our minimal training. When are we obligated to perform first aid? When are we ethically barred from giving aid? What if the injured party is unconscious or delirious? What if we accidentally kill or injure someone in our effort to give aid? Should we risk exposure to blood-borne illnesses? And so on. In a medical context, ethics are determined less by professional status, and more by the harm we can cause or prevent by our actions.
  • police officers are barred from perjury, and journalists from libel — and so are the lay public. We expect schoolteachers not to discuss age-inappropriate topics with our young children, or to persuade our children to adopt their religion; when we babysit for a neighbor, we consider ourselves bound by similar rules. I would argue that grassroots skeptics take on an ethical burden as soon as they speak out on medical matters, legal matters, or other matters of fact, whether from platforms as large as network television, or as small as a dinner party. The size of that burden must depend somewhat on the scale of the risks: the number of people reached, the certainty expressed, the topics tackled.
  • tu-quoque argument.
  • How much time are skeptics going to waste, arguing in a circular firing squad about each other’s free speech? Like it or not, there will always be confrontational people. You aren’t going to get a group of people as varied as skeptics are, and make them all agree to “be nice”. It’s a pipe dream, and a waste of time.
Weiye Loh

Our conflicted relationship with animals - Pets. Animals. - Salon.com - 0 views

  • In his fascinating new book, "Some We Love, Some We Hate, Some We Eat," Hal Herzog looks at the wild, tortured paradoxes in our relationship with the weaker, if sometimes more adorable, species.
  • it's the human-meat relationship. The fact is, very few people are vegetarians; even most vegetarians eat meat. There have been several studies, including a very large one by the Department of Agriculture, where they asked people one day: Describe your diet. And 5 percent said they were vegetarians. Well, then they called the same people back a couple of days later and asked them about what they ate in the last 24 hours. And over 60 percent of these vegetarians had eaten meat. And so, the fact is, the campaign for moralized meat has been a failure. We actually kill three times as many animals for their flesh as we did when Peter Singer wrote "Animal Liberation" [in 1975]. We eat probably 20 percent more meat than we did when he wrote that book. Even though people are more concerned about animals, it seems like that's been occurring. The question is, why?
  • What was it about the two giant viral videos of the past few weeks -- the London woman, Mary Bale, who tried to trash that cat; the Bosnian woman who threw puppies from a bridge
  • ...8 more annotations...
  • The bigger thing is they're both pet species, though. I've been thinking about this. I just went back this morning, and I uncovered a piece in the New York Times from 1877. And it's actually fascinating. They had a stray dog population, so what they did is they rounded up 750 stray dogs. They took them to the East River, and they had a large metal cage -- it took them all day to do this -- they would put 50 dogs at a time, 48 dogs at a time in this metal, iron cage, and lower it into the East River with a crane.
  • they both involved women. And this is a little bit of an anomaly, because if you look at animal cruelty trials and (data), I think it's that 90 to 95 percent are men behind them. So that's one reason why this went viral; it's the surprising idea of women being cruel in this way.
  • drowning animals was actually an acceptable way of dealing with pet overpopulation in 1877. Now it seems horrifying. I watched that girl toss those puppies into the river, and it was just horrifying.
  • rooster fighters had a fairly intricate moral framework in which cockfighting not only becomes not bad, it becomes actually good: a moral model for your children, something to be desired.
  • the most common rationale is the same one that you hear from chicken eaters: It's natural. It's really funny, I was telling a woman one time about these cockfighters, and she was telling me how disgusting it was and somehow it came around to eating chicken. I said, "Whoa, you eat chicken, how do you feel about that?" and she said, "Well, that's different because that's natural." That's exactly what the rooster fighters told me.
  • the cockfighters take good care of them, as opposed to the chicken we eat, which usually live very short, very miserable lives.
  • the fact is, there is actually less harm done by rooster fighting than there is by eating chicken.
  • There's a number of people that are bitten by pets every year. There's a shocking number of people that trip over their pet and wind up in the hospital. There's the fact that pets are the biggest source of conflict between neighbors
  • Our conflicted relationship with animals: Why do we get so angry with animal abusers, but eat more animals than ever before? An expert provides some clues.
Weiye Loh

Short Sharp Science: Computer beats human at Japanese chess for first time - 0 views

  • A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time.
  • computers have been beating humans at western chess for years, and when IBM's Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity. That hasn't happened yet, but then western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, though, offering about 10^224 possible games.
  • Japan's national broadcaster, NHK, reported that Akara "aggressively pursued Shimizu from the beginning". It's the first time a computer has beaten a professional human player.
  • ...2 more annotations...
  • The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu's match was the first sanctioned one since a simpler computer system was beaten by a (male) champion, Akira Watanabe, in 2007.
  • Perhaps the association doesn't mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player.
Weiye Loh

The Data-Driven Life - NYTimes.com - 0 views

  • Humans make errors. We make errors of fact and errors of judgment. We have blind spots in our field of vision and gaps in our stream of attention.
  • These weaknesses put us at a disadvantage. We make decisions with partial information. We are forced to steer by guesswork. We go with our gut.
  • Others use data.
  • ...3 more annotations...
  • Others use data. A timer running on Robin Barooah’s computer tells him that he has been living in the United States for 8 years, 2 months and 10 days. At various times in his life, Barooah — a 38-year-old self-employed software designer from England who now lives in Oakland, Calif. — has also made careful records of his work, his sleep and his diet.
  • A few months ago, Barooah began to wean himself from coffee. His method was precise. He made a large cup of coffee and removed 20 milliliters weekly. This went on for more than four months, until barely a sip remained in the cup. He drank it and called himself cured. Unlike his previous attempts to quit, this time there were no headaches, no extreme cravings. Still, he was tempted, and on Oct. 12 last year, while distracted at his desk, he told himself that he could probably concentrate better if he had a cup. Coffee may have been bad for his health, he thought, but perhaps it was good for his concentration. Barooah wasn’t about to try to answer a question like this with guesswork. He had a good data set that showed how many minutes he spent each day in focused work. With this, he could do an objective analysis. Barooah made a chart with dates on the bottom and his work time along the side. Running down the middle was a big black line labeled “Stopped drinking coffee.” On the left side of the line, low spikes and narrow columns. On the right side, high spikes and thick columns. The data had delivered their verdict, and coffee lost.
  • “People have such very poor sense of time,” Barooah says, and without good time calibration, it is much harder to see the consequences of your actions. If you want to replace the vagaries of intuition with something more reliable, you first need to gather data. Once you know the facts, you can live by them.
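The chart Barooah describes above takes only a few lines to reproduce. Everything in this sketch is invented for illustration (the day count, the minute values, the cutoff day); only the shape, low spikes and narrow columns before the black line, high spikes and thick columns after, follows his description.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
days = np.arange(60)              # 60 days of hypothetical work logs
cutoff = 30                       # the (invented) day coffee stopped

# Lower, noisier output before the cutoff; higher after.
minutes = np.where(days < cutoff,
                   rng.normal(120, 25, size=60),
                   rng.normal(210, 30, size=60))

plt.bar(days, minutes, color="0.6")
plt.axvline(cutoff, color="black", linewidth=2,
            label="Stopped drinking coffee")
plt.xlabel("Day of log")
plt.ylabel("Focused work (minutes)")
plt.legend()
plt.show()
```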
Weiye Loh

Did Mark Zuckerberg Deserve to Be Named Person of the Year? No - 0 views

  • First, Time carried out a reader poll, in which individuals got the chance to vote and rate their favorite nominees. Zuckerberg ended up in 10th place with 18,353 votes and an average rating of 52, behind renowned individuals such as Lady Gaga, Julian Assange, Jon Stewart and Stephen Colbert, Barack Obama, Steve Jobs, et cetera. On the other end of the spectrum, Julian Assange managed to grab first place with a whopping 382,026 votes and an average rating of 92. It turns out that the poll had no point or purpose at all. Time clearly did not take into account its readers’ opinion on the matter.
  • Julian Assange should have been named Person of the Year. His contribution to the world and history — whether you see it as positive or negative — has been more controversial and life-changing than that of Zuckerberg. Assange and his non-profit organization have changed the way we look at various governments around the world. Especially the U.S. government. There’s a reason why hundreds of thousands of individuals voted for Assange and not Zuckerberg.
  • even other nominees deserve the title more than Zuckerberg. For instance, Lady Gaga has become a huge influence in the music scene. She’s also done a lot of charitable work for LGBT [lesbian, gay, bisexual, and transgender] individuals and supports equal rights. Even though I’m not a fan, Apple CEO Steve Jobs has also done more than Zuckerberg. His opinion and mandate at Apple have completely revolutionized the tech industry.
  • ...1 more annotation...
  • Facebook as a company and social network deserves the title more than its CEO does
Weiye Loh

Search Optimization and Its Dirty Little Secrets - NYTimes.com - 0 views

  • When you read the enormous list of sites with Penney links, the landscape of the Internet acquires a whole new topography. It starts to seem like a city with a few familiar, well-kept buildings, surrounded by millions of hovels kept upright for no purpose other than the ads that are painted on their walls.
  • Exploiting those hovels for links is a Google no-no. The company’s guidelines warn against using tricks to improve search engine rankings, including what it refers to as “link schemes.” The penalty for getting caught is a pair of virtual concrete shoes: the company sinks in Google’s results.
  • In 2006, Google announced that it had caught BMW using a black-hat strategy to bolster the company’s German Web site, BMW.de. That site was temporarily given what the BBC at the time called “the death penalty,” stating that it was “removed from search results.”
  • ...9 more annotations...
  • BMW acknowledged that it had set up “doorway pages,” which exist just to attract search engines and then redirect traffic to a different site. The company at the time said it had no intention of deceiving users, adding “if Google says all doorway pages are illegal, we have to take this into consideration.”
  • The Times sent Google the evidence it had collected about the links to JCPenney.com. Google promptly set up an interview with Matt Cutts, the head of the Webspam team at Google, and a man whose every speech, blog post and Twitter update is parsed like papal encyclicals by players in the search engine world.
  • He said Google had detected previous guidelines violations related to JCPenney.com on three occasions, most recently last November. Each time, steps were taken that reduced Penney’s search results — Mr. Cutts avoids the word “punished” — but Google did not later “circle back” to the company to see if it was still breaking the rules, he said.
  • He and his team had missed this recent campaign of paid links, which he said had been up and running for the last three to four months. “Do I wish our system had detected things sooner? I do,” he said. “But given the one billion queries that Google handles each day, I think we do an amazing job.”
  • You get the sense that Mr. Cutts and his colleagues are acutely aware of the singular power they wield as judge, jury and appeals panel, and they’re eager to project an air of maturity and judiciousness.
  • Mr. Cutts sounded remarkably upbeat and unperturbed during this conversation, which was a surprise given that we were discussing a large, sustained effort to snooker his employer. Asked about his zenlike calm, he said the company strives not to act out of anger.
  • PENNEY reacted to this instant reversal of fortune by, among other things, firing its search engine consulting firm, SearchDex. Executives there did not return e-mail or phone calls.
  • “Am I happy this happened?” he later asked. “Absolutely not. Is Google going to take strong corrective action? We absolutely will.” And the company did. On Wednesday evening, Google began what it calls a “manual action” against Penney, essentially demotions specifically aimed at the company.
  • At 7 p.m. Eastern time on Wednesday, J. C. Penney was still the No. 1 result for “Samsonite carry on luggage.” Two hours later, it was at No. 71.
Weiye Loh

Climate change and extreme flooding linked by new evidence | George Monbiot | Environme... - 0 views

  • Two studies suggest for the first time a clear link between global warming and extreme precipitation
  • There's a sound rule for reporting weather events that may be related to climate change. You can't say that a particular heatwave or a particular downpour – or even a particular freeze – was definitely caused by human emissions of greenhouse gases. But you can say whether these events are consistent with predictions, or that their likelihood rises or falls in a warming world.
  • Weather is a complex system. Long-running trends, natural fluctuations and random patterns are fed into the global weather machine, and it spews out a series of events. All these events will be influenced to some degree by global temperatures, but it's impossible to say with certainty that any of them would not have happened in the absence of man-made global warming.
  • ...5 more annotations...
  • over time, as the data build up, we begin to see trends which suggest that rising temperatures are making a particular kind of weather more likely to occur. One such trend has now become clearer. Two new papers, published by Nature, should make us sit up, as they suggest for the first time a clear link between global warming and extreme precipitation (precipitation means water falling out of the sky in any form: rain, hail or snow).
  • We still can't say that any given weather event is definitely caused by man-made global warming. But we can say, with an even higher degree of confidence than before, that climate change makes extreme events more likely to happen.
  • One paper, by Seung-Ki Min and others, shows that rising concentrations of greenhouse gases in the atmosphere have caused an intensification of heavy rainfall events over some two-thirds of the weather stations on land in the northern hemisphere. The climate models appear to have underestimated the contribution of global warming to extreme rainfall: it's worse than we thought it would be.
  • The other paper, by Pardeep Pall and others, shows that man-made global warming is very likely to have increased the probability of severe flooding in England and Wales, and could well have been behind the extreme events in 2000. The researchers ran thousands of simulations of the weather in autumn 2000 (using idle time on computers made available by a network of volunteers) with and without the temperature rises caused by man-made global warming. They found that, in nine out of 10 cases, man-made greenhouse gases increased the risks of flooding. This is probably as solid a signal as simulations can produce, and it gives us a clear warning that more global heating is likely to cause more floods here.
  • As Richard Allan points out, also in Nature, the warmer the atmosphere is, the more water vapour it can carry. There's even a formula which quantifies this: 6-7% more moisture in the air for every degree of warming near the Earth's surface. But both models and observations also show changes in the distribution of rainfall, with moisture concentrating in some parts of the world and fleeing from others: climate change is likely to produce both more floods and more droughts.
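The 6–7% formula Allan refers to is the Clausius–Clapeyron relation, which sets how fast the saturation vapour pressure of air grows with temperature. The article does not spell it out; a standard approximate form, with typical values substituted, is:

```latex
\frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} \;=\; \frac{L_v}{R_v T^2}
\;\approx\; \frac{2.5\times10^{6}}{461\times(288)^2} \;\approx\; 0.065\ \mathrm{K}^{-1}
```

Here e_s is the saturation vapour pressure, L_v the latent heat of vaporisation and R_v the gas constant for water vapour; near-surface temperatures around 288 K give roughly 6.5% more moisture per degree of warming, squarely inside the quoted range.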
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books - 0 views

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • ...27 more annotations...
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
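Chappe's code books were, in modern terms, a lookup table: a common phrase collapses to a pair of symbols, while anything not in the book must be spelled out slowly. A toy sketch of the idea follows; the phrases and symbol numbers are invented, and the real books ran to eight thousand entries.

```python
# Hypothetical Chappe-style codebook: frequent phrases compress to two
# optical symbols; everything else falls back to symbol-per-letter spelling.
CODEBOOK = {
    "attack at dawn": (17, 82),   # symbol numbers are made up
    "hold position": (17, 83),
    "calais": (44, 2),
}

def encode(message: str) -> list:
    phrase = message.lower().strip()
    if phrase in CODEBOOK:
        return list(CODEBOOK[phrase])      # two symbols: fast
    return [ord(c) for c in phrase]        # one symbol per letter: slow

print(encode("Attack at dawn"))   # [17, 82]
print(encode("retreat"))          # seven symbols, one per letter
```

The gain is the one described above: speed for common messages, plus a measure of secrecy, since reading the code requires the book.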
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
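Morse's code itself is small enough to sketch. Below is a fragment of International Morse Code (Morse's original 1844 code differed in some details); note how common letters such as E and T get the shortest pulse sequences.

```python
# Fragment of International Morse Code; frequent letters get short codes.
MORSE = {
    "E": ".",   "T": "-",   "A": ".-",  "I": "..",
    "N": "-.",  "M": "--",  "S": "...", "O": "---",
}

def to_morse(text: str) -> str:
    # One space between letters; characters outside this fragment are skipped.
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(to_morse("atoms"))  # .- - --- -- ...
```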
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
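The abstract quantity Shannon defined is entropy: the average number of bits a source emits per symbol, given the symbol probabilities p_i:

```latex
H \;=\; -\sum_{i} p_i \log_2 p_i \quad \text{bits per symbol}
```

A fair coin gives H = 1 bit per toss; heavily skewed probabilities give much less, which is why redundant text such as written French (Chappe's problem above) can be compressed.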
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
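Dyson's conversions follow directly from compound doubling, as a quick check of the arithmetic shows:

```python
# Doubling every 18 months, compounded.
per_decade = 2 ** (120 / 18)          # 120 months in a decade
per_45_years = 2 ** (45 * 12 / 18)    # the 45 years since 1965

print(round(per_decade))      # 102       -- "a factor of a hundred every decade"
print(f"{per_45_years:.2e}")  # 1.07e+09  -- "a factor of a billion"
```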
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
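Shannon's figure converts neatly to modern units, which is the point of Dyson's thousand-dollar disc drive:

```python
bits = 1e14                   # Shannon's 1949 estimate for the Library of Congress
terabytes = bits / 8 / 1e12   # 8 bits per byte, 10^12 bytes per terabyte
print(terabytes)              # 12.5 -- within a single commodity drive today
```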
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
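The law invoked here is Shannon's noisy-channel coding theorem: transmit below the channel's capacity and redundancy can push the error rate arbitrarily low. For the classic band-limited channel with bandwidth B and signal-to-noise ratio S/N (the textbook special case, not stated in the review), the capacity is:

```latex
C \;=\; B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second}
```

On this analogy, Wikipedia's crowd of editors is the redundant code: many noisy transmissions, with errors corrected by readers who know better.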
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
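Dyson's reversal of the cooking rule can be stated compactly with the virial theorem. For a self-gravitating system in equilibrium (a standard result, not spelled out in the review), kinetic energy K and potential energy U satisfy 2K + U = 0, so the total energy is:

```latex
E \;=\; K + U \;=\; -K \qquad\Longrightarrow\qquad \mathrm{d}K \;=\; -\,\mathrm{d}E \;>\; 0 \ \ \text{when } \mathrm{d}E < 0
```

Since temperature tracks kinetic energy, a star that radiates energy away (dE < 0) gets hotter: gravitating systems have negative heat capacity, which is exactly why temperature differences in the astronomical universe grow rather than decay.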
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
Weiye Loh

Leong Sze Hian stands corrected? | The Online Citizen - 0 views

  • In your article, you make the argument that the Straits Times Forum Editor “was merely amending his (my) letter to cite the correct statistics. For example, the Education Minister said ‘How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE’ - but Mr Samuel Wee wrote ‘His statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination’.” Kind sir, the statistics state that 1 in 2 are in the top 66.6% (which, incidentally, includes the top fifth of the bottom 50%!). Does it not stand to reason, then, that if 50% are in the top 66.6%, the remaining 50% are in the bottom 33.3%, as I stated in my letter?
  • Also, perhaps you were not aware of the existence of this resource, but here is a graph from the Straits Times illustrating the fact that only 10% of children from one-to-three room flats make it to university – which is to say, 90% of them don’t. http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf I look forward to your reply, Mr Leong. Thank you for taking the time to read this message.
  • we should, wherever possible, try to agree to disagree, as it is healthy to have and to encourage different viewpoints.
    • Weiye Loh
       
      Does that mean that every viewpoint can and should be accepted as correct to encourage differences? 
  • If I say I think it is fair in Singapore because half of the bottom one-third of the people make it to the top two-thirds, it does not mean that someone can quote me and say that I said what I said because half the bottom one-third of people did not make it. I think it is alright to say that I do not agree entirely with what was said - because does it also mean, on the flip side, that half of the bottom one-third of the people did not make it? This is what I mean by quoting someone out of context: using statistics that I did not cite and implying that I did, or arguing by innuendo.
  • Moreover, depending on the methodology, definition, sampling, etc., half of the bottom one-third of the people making it does not necessarily mean that half did not make it, because some may not be in the population for various reasons - emigration, not turning up, transfer, or whether adjustments are made for the mobility of people up or down the social strata over time. If I did not use a particular statistic to state my case, I don’t think it is appropriate to quote me and say that you agree with me by citing statistics from a third-party source, like the MOE chart in the Straits Times article, instead of quoting the statistics that I actually cited.
  • I cannot find anything in any of the media reports to say with certainty that the Minister backed up his remarks with direct reference to the MOE chart. There is also nothing in the narrative saying that only 10 per cent of children from one-to-three room flats make it to university - which is to say, 90 per cent of them don’t. The ’90 per cent’ cannot be attributed to what the minister said; at best it is the writer’s interpretation of the MOE chart.
  • Interesting exchange of letters. Samuel’s interpretation of the statistics provided by Ng Eng Hen and ST is correct. There is little doubt about it. While I can see where Leong Sze Hian is coming from, I don’t totally agree with him. Specifically, Samuel’s first statement (only ~10% of students living in 1-3 room flats make it to university) is directed at ST’s report that education is a good social leveller, not at Ng. It is therefore a valid point to make.
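As an aside, here is a small arithmetic sketch of the complement reasoning the two letters argue over. The 50% premise is the figure quoted above; the overlap calculation at the end is an editorial illustration, not a claim made in either letter.

```python
# The quoted premise: 1 in 2 children from the bottom third by
# socio-economic background score in the top two-thirds at PSLE.
in_top_two_thirds = 0.5

# Samuel Wee's complement step: if the whole group is ranked, whoever is
# not in the top two-thirds is in the bottom third.
in_bottom_third = 1.0 - in_top_two_thirds
print(f"share of the group in the bottom third: {in_bottom_third:.0%}")  # 50%

# Aside on the parenthetical "top fifth of the bottom 50%": the top
# two-thirds spans percentiles 33.3-100, so its overlap with the bottom
# half (percentiles 0-50) is percentiles 33.3-50 -- the top *third* of
# the bottom half, not the top fifth.
overlap_share_of_bottom_half = (50 - 100 / 3) / 50
print(f"overlap as a share of the bottom half: {overlap_share_of_bottom_half:.1%}")
```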
Weiye Loh

Basqueresearch.com: News - PhD thesis warns of risk of delegating to just a few teacher... - 0 views

  • the incorporation of Information and Communication Technologies into Primary Education brought with it positive changes in the roles of teacher and student. Teachers and students stopped being mere transmitters and receptors, respectively: the former became mediators of information, and the latter opted for learning by investigating, discovering and presenting ideas to classmates and teachers, gaining at the same time the opportunity to get to know the work of other students. Thus, the use of the Internet and ICTs reinforces participation and collaboration in the school. According to Dr Altuna, it also helps to promote learning models that are more constructivist, socio-constructivist and even connectivist.
  • Despite its educational possibilities, the researcher warns that numerous factors limit the incorporation of the Internet into the teaching of the curricular subject in question. These involve aspects such as the time dedicated weekly, technological and computing facilities, accessibility and connection to the Internet, the school curriculum and, above all, the knowledge, training and involvement of the teaching staff.
  • the thesis observed a tendency to delegate responsibility for ICT in the school to those teachers considered to be “computer experts”. Dr Altuna warns of the risks of this practice: the rest of the staff remain untrained and unable to apply ICT and the Internet in activities within their own curricular subjects. It has to be stressed, therefore, that all staff should be responsible for the educational measures taken so that students acquire digital skills. Also observed was the need for a pedagogic approach to ICT that guides teaching staff in understanding and putting into practice educationally innovative activities.
  • Dr Altuna includes among the limitations on incorporating ICT not only the lack of involvement of teaching staff but also that of the families. Families showed interest in the use of the Internet and ICTs as educational tools for their children, but they, too, delegate excessively to the schools. The researcher stressed that families also need guidance: they are concerned about their children’s use of the Internet but do not know the best way to address the problem.
  • Educational psychologist Dr Jon Altuna has carried out a thorough study of the school 2.0 phenomenon. Specifically, he has looked into the use and level of incorporation of the Internet and of Information and Communication Technologies (ICT) in the third cycle of Primary Education, observing at the same time the attitudes of the teaching staff, the students and the children’s families in this regard. His PhD, defended at the University of the Basque Country (UPV/EHU), is entitled Incorporation of the Internet into the teaching of the subject Knowledge of the Environment during the third cycle of Primary Education: possibilities and analysis of the situation of a school. Dr Altuna’s research is based on a case study undertaken over eight years at a school where new ICT activities had been introduced into the curricular subject Knowledge of the Environment, taught in the fifth and sixth years of Primary Education. The researcher gathered data from 837 students, 134 teachers and 190 families at this school, and completed the study with the experiences of ICT teachers from 21 schools.
Weiye Loh

Science, Strong Inference -- Proper Scientific Method - 0 views

  • Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist's field and methods of study are as good as every other scientist's and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.
  • Why should there be such rapid advances in some fields and not in others? I think the usual explanations that we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of research contracts - are important but inadequate. I have begun to believe that the primary factor in scientific advance is an intellectual one. These rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name of "strong inference." I believe it is important to examine this method, its use and history and rationale, and to see whether other groups and individuals might learn to adopt it profitably in their own scientific and intellectual work. In its separate elements, strong inference is just the simple and old-fashioned method of inductive inference that goes back to Francis Bacon. The steps are familiar to every college student and are practiced, off and on, by every scientist. The difference comes in their systematic application. Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly: 1) devising alternative hypotheses; 2) devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses; 3) carrying out the experiment so as to get a clean result; 4) recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain; and so on. (A loop sketch of this cycle appears at the end of this list of annotations.)
  • On any new problem, of course, inductive inference is not as simple and certain as deduction, because it involves reaching out into the unknown. Steps 1 and 2 require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome, and exclusion will be related in a rigorous syllogism; and the question of how to generate such inventions is one which has been extensively discussed elsewhere (2, 3). What the formal schema reminds us to do is to try to make these inventions, to take the next step, to proceed to the next fork, without dawdling or getting tied up in irrelevancies.
  • It is clear why this makes for rapid and powerful progress. For exploring the unknown, there is no faster method; this is the minimum sequence of steps. Any conclusion that is not an exclusion is insecure and must be rechecked. Any delay in recycling to the next set of hypotheses is only a delay. Strong inference, and the logical tree it generates, are to inductive reasoning what the syllogism is to deductive reasoning in that it offers a regular method for reaching firm inductive conclusions one after the other as rapidly as possible.
  • "But what is so novel about this?" someone will say. This is the method of science and always has been, why give it a special name? The reason is that many of us have almost forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method- oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations. We fail to teach our students how to sharpen up their inductive inferences. And we do not realize the added power that the regular and explicit use of alternative hypothesis and sharp exclusion could give us at every step of our research.
  • A distinguished cell biologist rose and said, "No two cells give the same properties. Biology is the science of heterogeneous systems." And he added privately: "You know there are scientists, and there are people in science who are just working with these over-simplified model systems - DNA chains and in vitro systems - who are not doing science at all. We need their auxiliary work: they build apparatus, they make minor studies, but they are not scientists." To which Cy Levinthal replied: "Well, there are two kinds of biologists, those who are looking to see if there is one thing that can be understood and those who keep saying it is very complicated and that nothing can be understood. . . . You must study the simplest system you think has the properties you are interested in."
  • At the 1958 Conference on Biophysics, at Boulder, there was a dramatic confrontation between the two points of view. Leo Szilard said: "The problems of how enzymes are induced, of how proteins are synthesized, of how antibodies are formed, are closer to solution than is generally believed. If you do stupid experiments, and finish one a year, it can take 50 years. But if you stop doing experiments for a little while and think how proteins can possibly be synthesized, there are only about 5 different ways, not 50! And it will take only a few experiments to distinguish these." One of the young men added: "It is essentially the old question: How small and elegant an experiment can you perform?" These comments upset a number of those present. An electron microscopist said, "Gentlemen, this is off the track. This is philosophy of science." Szilard retorted: "I was not quarreling with third-rate scientists; I was quarreling with first-rate scientists."
  • Any criticism or challenge to consider changing our methods strikes of course at all our ego-defenses. But in this case the analytical method offers the possibility of such great increases in effectiveness that it is unfortunate that it cannot be regarded more often as a challenge to learning rather than as a challenge to combat. Many of the recent triumphs in molecular biology have in fact been achieved on just such "oversimplified model systems," very much along the analytical lines laid down in the 1958 discussion. They have not fallen to the kind of men who justify themselves by saying "No two cells are alike," regardless of how true that may ultimately be. The triumphs are in fact triumphs of a new way of thinking.
  • the emphasis on strong inference is also partly due to the nature of the fields themselves. Biology, with its vast informational detail and complexity, is a "high-information" field, where years and decades can easily be wasted on the usual type of "low-information" observations or experiments if one does not think carefully in advance about what the most important and conclusive experiments would be. And in high-energy physics, both the "information flux" of particles from the new accelerators and the million-dollar costs of operation have forced a similar analytical approach. It pays to have a top-notch group debate every experiment ahead of time; and the habit spreads throughout the field.
  • Historically, I think, there have been two main contributions to the development of a satisfactory strong-inference method. The first is that of Francis Bacon (13). He wanted a "surer method" of "finding out nature" than either the logic-chopping or all-inclusive theories of the time or the laudable but crude attempts to make inductions "by simple enumeration." He did not merely urge experiments, as some suppose; he showed the fruitfulness of interconnecting theory and experiment so that the one checked the other. Of the many inductive procedures he suggested, the most important, I think, was the conditional inductive tree, which proceeded from alternative hypotheses (possible "causes," as he calls them), through crucial experiments ("Instances of the Fingerpost"), to exclusion of some alternatives and adoption of what is left ("establishing axioms"). His Instances of the Fingerpost are explicitly at the forks in the logical tree, the term being borrowed "from the fingerposts which are set up where roads part, to indicate the several directions."
  • Here was a method that could separate off the empty theories! Bacon said the inductive method could be learned by anybody, just like learning to "draw a straighter line or more perfect circle . . . with the help of a ruler or a pair of compasses." "My way of discovering sciences goes far to level men's wit and leaves but little to individual excellence, because it performs everything by the surest rules and demonstrations." Even occasional mistakes would not be fatal. "Truth will sooner come out from error than from confusion."
  • Nevertheless there is a difficulty with this method. As Bacon emphasizes, it is necessary to make "exclusions." He says, "The induction which is to be available for the discovery and demonstration of sciences and arts, must analyze nature by proper rejections and exclusions, and then, after a sufficient number of negatives, come to a conclusion on the affirmative instances." "[To man] it is granted only to proceed at first by negatives, and at last to end in affirmatives after exclusion has been exhausted." Or, as the philosopher Karl Popper says today, there is no such thing as proof in science - because some later alternative explanation may be as good or better - so that science advances only by disproofs. There is no point in making hypotheses that are not falsifiable, because such hypotheses do not say anything; "it must be possible for an empirical scientific system to be refuted by experience" (14).
  • The difficulty is that disproof is a hard doctrine. If you have a hypothesis and I have another hypothesis, evidently one of them must be eliminated. The scientist seems to have no choice but to be either soft-headed or disputatious. Perhaps this is why so many tend to resist the strong analytical approach and why some great scientists are so disputatious.
  • Fortunately, it seems to me, this difficulty can be removed by the use of a second great intellectual invention, the "method of multiple hypotheses," which is what was needed to round out the Baconian scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the century, who is best known for his contribution to the Chamberlin-Moulton hypothesis of the origin of the solar system.
  • Chamberlin says our trouble is that when we make a single hypothesis, we become attached to it. "The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence, and as the explanation grows into a definite theory his parental affections cluster about his offspring and it grows more and more dear to him. . . . There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory..." "To avoid this grave danger, the method of multiple working hypotheses is urged. It differs from the simple working hypothesis in that it distributes the effort and divides the affections. . . . Each hypothesis suggests its own criteria, its own method of proof, its own method of developing the truth, and if a group of hypotheses encompass the subject on all sides, the total outcome of means and of methods is full and rich."
  • The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas. It becomes much easier then for each of us to aim every day at conclusive disproofs - at strong inference - without either reluctance or combativeness. In fact, when there are multiple hypotheses, which are not anyone's "personal property," and when there are crucial experiments to test them, the daily life in the laboratory takes on an interest and excitement it never had, and the students can hardly wait to get to work to see how the detective story will come out. It seems to me that this is the reason for the development of those distinctive habits of mind and the "complex thought" that Chamberlin described, the reason for the sharpness, the excitement, the zeal, the teamwork - yes, even international teamwork - in molecular biology and high-energy physics today. What else could be so effective?
  • Unfortunately, I think, there are other areas of science today that are sick by comparison, because they have forgotten the necessity for alternative hypotheses and disproof. Each man has only one branch - or none - on the logical tree, and it twists at random without ever coming to the need for a crucial decision at any point. We can see from the external symptoms that there is something scientifically wrong. The Frozen Method, The Eternal Surveyor, The Never Finished, The Great Man With a Single Hypothesis, The Little Club of Dependents, The Vendetta, The All-Encompassing Theory Which Can Never Be Falsified.
  • a "theory" of this sort is not a theory at all, because it does not exclude anything. It predicts everything, and therefore does not predict anything. It becomes simply a verbal formula which the graduate student repeats and believes because the professor has said it so often. This is not science, but faith; not theory, but theology. Whether it is hand-waving or number-waving, or equation-waving, a theory is not a theory unless it can be disproved. That is, unless it can be falsified by some possible experimental outcome.
  • the work methods of a number of scientists have been testimony to the power of strong inference. Is success not due in many cases to systematic use of Bacon's "surest rules and demonstrations" as much as to rare and unattainable intellectual power? Faraday's famous diary (16), or Fermi's notebooks (3, 17), show how these men believed in the effectiveness of daily steps in applying formal inductive methods to one problem after another.
  • Surveys, taxonomy, design of equipment, systematic measurements and tables, theoretical computations - all have their proper and honored place, provided they are parts of a chain of precise induction of how nature works. Unfortunately, all too often they become ends in themselves, mere time-serving from the point of view of real scientific advance, a hypertrophied methodology that justifies itself as a lore of respectability.
  • We speak piously of taking measurements and making small studies that will "add another brick to the temple of science." Most such bricks just lie around the brickyard (20). Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.
  • Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important. Strong inference redirects a man to problem-orientation, but it requires him to be willing repeatedly to put aside his last methods and teach himself new ones.
  • anyone who asks the question about scientific effectiveness will also conclude that much of the mathematizing in physics and chemistry today is irrelevant if not misleading. The great value of mathematical formulation is that when an experiment agrees with a calculation to five decimal places, a great many alternative hypotheses are pretty well excluded (though the Bohr theory and the Schrödinger theory both predict exactly the same Rydberg constant!). But when the fit is only to two decimal places, or one, it may be a trap for the unwary; it may be no better than any rule-of-thumb extrapolation, and some other kind of qualitative exclusion might be more rigorous for testing the assumptions and more important to scientific understanding than the quantitative fit.
  • Today we preach that science is not science unless it is quantitative. We substitute correlations for causal studies, and physical equations for organic reasoning. Measurements and equations are supposed to sharpen thinking, but, in my observation, they more often tend to make the thinking noncausal and fuzzy. They tend to become the object of scientific manipulation instead of auxiliary tests of crucial inferences.
  • Many - perhaps most - of the great issues of science are qualitative, not quantitative, even in physics and chemistry. Equations and measurements are useful when and only when they are related to proof; but proof or disproof comes first and is in fact strongest when it is absolutely convincing without any quantitative measurement.
  • you can catch phenomena in a logical box or in a mathematical box. The logical box is coarse but strong. The mathematical box is fine-grained but flimsy. The mathematical box is a beautiful way of wrapping up a problem, but it will not hold the phenomena unless they have been caught in a logical box to begin with.
  • Of course it is easy - and all too common - for one scientist to call the others unscientific. My point is not that my particular conclusions here are necessarily correct, but that we have long needed some absolute standard of possible scientific effectiveness by which to measure how well we are succeeding in various areas - a standard that many could agree on and one that would be undistorted by the scientific pressures and fashions of the times and the vested interests and busywork that they develop. It is not public evaluation I am interested in so much as a private measure by which to compare one's own scientific performance with what it might be. I believe that strong inference provides this kind of standard of what the maximum possible scientific effectiveness could be - as well as a recipe for reaching it.
  • The strong-inference point of view is so resolutely critical of methods of work and values in science that any attempt to compare specific cases is likely to sound both smug and destructive. Mainly one should try to teach it by example and by exhorting to self-analysis and self-improvement only in general terms.
  • one severe but useful private test - a touchstone of strong inference - that removes the necessity for third-person criticism, because it is a test that anyone can learn to carry with him for use as needed. It is our old friend the Baconian "exclusion," but I call it "The Question." Obviously it should be applied as much to one's own thinking as to others'. It consists of asking in your own mind, on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"; or, on hearing a scientific experiment described, "But sir, what hypothesis does your experiment disprove?"
  • It is not true that all science is equal; or that we cannot justly compare the effectiveness of scientists by any method other than a mutual-recommendation system. The man to watch, the man to put your money on, is not the man who wants to make "a survey" or a "more detailed study" but the man with the notebook, the man with the alternative hypotheses and the crucial experiments, the man who knows how to answer your Question of disproof and is already working on it.
  •  
    There is so much bad science and bad statistics in media reports, publications, and everyday conversation that I think it is important to understand facts and proofs and the associated pitfalls.
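To make Platt's four-step cycle concrete, here is a minimal sketch of strong inference written as a loop. The function names and the toy example are illustrative stand-ins; nothing here comes from Platt's paper.

```python
# Strong inference as a loop: devise alternatives, run a crucial
# experiment that excludes some of them, refine the survivors, repeat.

def strong_inference(hypotheses, crucial_experiment, refine, max_rounds=10):
    """Iteratively exclude hypotheses until at most one survives."""
    for _ in range(max_rounds):
        if len(hypotheses) <= 1:
            break
        survivors = crucial_experiment(hypotheses)  # steps 2-3: exclude
        if set(survivors) == set(hypotheses):
            # Platt's "Question": an experiment that excludes nothing
            # was not a crucial experiment -- redesign it.
            raise ValueError("experiment excluded no hypothesis")
        # Step 4: recycle, refining each survivor into sub-hypotheses.
        hypotheses = [sub for h in survivors for sub in refine(h)]
    return hypotheses

# Toy run: three candidate mechanisms, an experiment that always rules
# out the lexicographically last candidate, and no further refinement.
result = strong_inference(
    ["mechanism A", "mechanism B", "mechanism C"],
    crucial_experiment=lambda hs: sorted(hs)[:-1],
    refine=lambda h: [h],
)
print(result)  # ['mechanism A']
```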
Weiye Loh

There Is Such A Thing As A Free Coffee | The Utopianist - Think Bigger - 0 views

  • Overall, the ratio of people taking versus giving is 2 to 1. Stark has a truly grand vision: “It’s literally giving people hope. Ultimately the goal is for more people to do this kind of thing. I admit it seems a little frivolous to give away coffee to people with iPhones. But imagine if you had a CVS card and you could give someone $10 for their Alzheimer’s medication. The concept of frictionless social giving is very attractive. And this is just the beginning of that.” It’s easy enough to text a number to make a donation during times of disaster, and many do it, but the concern may still exist over “where” the money is going; systems with re-loadable cards are straightforward and in some way more transparent (after all, the users probably have their own, personal, cards), serving to spur people into donating even more. I say let’s expand this — I cannot wait to see it act elsewhere — some sort of school card, perhaps? Download the full-sized card here; before you go, check the balance on Twitter — updated every couple of minutes; Stark wrote the program himself. “Like” Jonathan’s Starbucks Card on Facebook to spread the word; and when you want to donate, simply log on to the Starbucks website and reload card number 6061006913522430.
  •  
    Programmer Jonathan Stark, vice president of Mobiquity, has begun a truly cool experiment: sharing his Starbucks card with the world. While researching ways one can pay by mobile, Stark took an interesting perspective on Starbucks' system. He realized there was (at the time) no app for Android users, so he simply took a picture of his card and posted it online. He loaded it with $30 and then encouraged others to use it - and reload it, if they saw fit. Not surprisingly, people took him up on it. Since that initial $30, the card has seen over $9,000 worth of anonymous donations. Stark says that "every time the balance gets really high, it brings out the worst in people: Someone goes down to Starbucks and makes a huge purchase. I don't know if they are buying coffee beans or mugs, or transferring money to their own card or what. But as long as the balance stays low, say $20 to $30, it seems like it manages itself. I haven't put any money on it in a while. All the money going through the card right now is the kindness of strangers."
Weiye Loh

Do Androids Dream of Origami Unicorns? | Institute For The Future - 0 views

  • rep.licants is the work I did for my master's thesis. During my studies, I developed an interest in the way most people use social networks, and also in the differences between someone's real identity and their digital one.
  • Back to rep.licants - when I began to think about a project for my master's thesis, I really wanted to work on those two themes (the mix between digital and real identity, and a kind of study of how users use social networks), with the aim of raising discussion about them.
  • the negative responses are mainly from people who thought rep.licants is a real and serious web service giving away, for free, high-performing bots able to replicate the user almost perfectly. If that is what they expected, I understand their disappointment, because my bot is far from high-performing! Some were negative because people found it kind of scary to ask a bot to manage your own digital identity, so they rejected the idea.
  • The positive responses are mainly from people who understood that rep.licants is not about giving away high-performing bots but is more of an experiment (and also a kind of critique of how most users use social networks) in which users can mix themselves with a bot and see what happens. Because even if my bots are crap, they can sometimes be surprising.
  • But I was kind of surprised that so many people would really expect a real bot to manage their social network accounts. Twitter never responded, and Facebook responded by banning, three times already, the Facebook application that manages and runs all the Facebook bots.
  • some people use the bot: a. Just as an experiment: they want to see what the bot can do and whether it can really improve their virtual social influence, or they experiment with how long they can keep a bot on their account without their friends noticing it is run by a bot. b. A few times I saw in my database, which stores information about the users, that some of them have a Twitter name like "renthouseUSA", so I guess they are using rep.licants to maintain a presence on social networks without managing anything, with a commercial goal. c. This is feedback I have had many times, and it is the reason I use rep.licants on my own Twitter account: if you are precise with the keywords that you give the bot, it will sometimes find very interesting content related to your interests. My bot has made me discover a lot of interesting things, by posting them on Twitter, that I would never have found without him. New information arrives so fast and in such quantities that it becomes really difficult to deal with. For example, just on Twitter I follow 80 people (which is not a lot), all of them because I know they might tweet interesting things related to my interests. But maybe 10 of those 80 tweet quite a lot (perhaps 1-2 tweets per hour), and as I check my Twitter feed only once per day, I sometimes lose more than an hour finding the interesting tweets among everything those 80 people posted. And this is only Twitter! I really think we need more and more personal robots to filter information for us. This is a very positive point I found about having a bot, one I could never have imagined when beginning my project. (A minimal filtering sketch appears after this list.)
  • One surprising bug was when the Twitter bots began to talk to themselves. It may be boring for some users to see their own account talk to itself once a day, but when I discovered the bug I found it very funny, so I decided to keep it!
  • this video of a chatbot having a conversation with itself went viral – perhaps in part because the conversation immediately turned towards more existentialist questions and responses. The conversation was recorded at the Cornell Creative Machines Lab, where the faculty are researching how to make helper bots.
  • The questions that rep.licants poses are deep human and social ones – laced with uncertainties about the kinds of interactions we count as normal and the responsibilities we owe to ourselves and each other. Seeing these bots carry out conversations with themselves and with human counterparts (much less other non-human counterparts) allows us to take traditional social and technological research into a different territory – asking not only what it means to be human, but also what it means to be non-human.
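As a footnote to point (c) above, here is a minimal sketch of that kind of keyword filtering, assuming posts arrive as plain strings. The sample timeline and keywords are invented; this is not rep.licants' actual code.

```python
# Surface only the posts that match the user's declared interests,
# the way the developer describes his bot filtering Twitter for him.

def filter_by_keywords(posts, keywords):
    """Return posts containing at least one keyword, case-insensitively."""
    lowered = [k.lower() for k in keywords]
    return [p for p in posts if any(k in p.lower() for k in lowered)]

timeline = [
    "New paper on digital identity and social networks",
    "What I had for lunch today",
    "Dataset released for studying online identity",
]
interests = ["identity", "bots"]
for post in filter_by_keywords(timeline, interests):
    print(post)  # prints the first and third posts
```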
Weiye Loh

Freakonomics » The Economics of Happiness, Part 1: Reassessing the Easterlin ... - 0 views

  • Arguably the most important finding from the emerging economics of happiness has been the Easterlin Paradox. What is this paradox? It is the juxtaposition of three observations: 1) Within a society, rich people tend to be much happier than poor people. 2) But, rich societies tend not to be happier than poor societies (or not by much). 3) As countries get richer, they do not get happier.
  • Easterlin offered an appealing resolution to his paradox, arguing that only relative income matters to happiness. Other explanations suggest a “hedonic treadmill,” in which we must keep consuming more just to stay at the same level of happiness.
  • We have re-analyzed all of the relevant post-war data, and also analyzed the particularly interesting new data from the Gallup World Poll. Last Thursday we presented our research at the latest Brookings Panel on Economic Activity, and we have arrived at a rather surprising conclusion: There is no Easterlin Paradox. The facts about income and happiness turn out to be much simpler than first realized: 1) Rich people are happier than poor people. 2) Richer countries are happier than poorer countries. 3) As countries get richer, they tend to get happier. (See the toy comparison after this list.)
  • What explains these new findings? The key turns out to be an accumulation of data over recent decades. Thirty years ago it was difficult to make convincing international comparisons because there were few datasets comparing rich and poor countries. Instead, researchers were forced to make comparisons based on a handful of moderately-rich and very-rich countries. These data just didn’t lend themselves to strong conclusions.
  • Moreover, repeated happiness surveys around the world have allowed us to observe the evolution of G.D.P. and happiness through time — both over a longer period, and for more countries. On balance, G.D.P. and happiness have tended to move together.
  • There is a second issue here that has led to mistaken inferences: a tendency to confuse absence of evidence for a proposition with evidence of its absence. Thus, when early researchers could not isolate a statistically reliable association between G.D.P. and happiness, they inferred that this meant the two were unrelated, and a paradox was born.
  • Our complete analysis is available here. An excellent summary is available in today’s New York Times, here, with a very cool graphic, and readers’ comments. Other commentary is available in the F.T. (here and here), and Time Magazine.
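As an illustration of the comparison at stake, here is a toy calculation asking whether the happiness-income slope within a country resembles the slope across countries. The numbers are invented; the log-income specification is the standard one in this literature, and nothing here reproduces Stevenson and Wolfers' actual data.

```python
import math

# Invented (income, happiness) pairs for individuals within one country...
within = [(10_000, 5.0), (20_000, 5.7), (40_000, 6.4), (80_000, 7.1)]
# ...and invented (average income, average happiness) pairs across countries.
across = [(5_000, 4.6), (15_000, 5.7), (45_000, 6.8)]

def slope_vs_log_income(pairs):
    """OLS slope of happiness on log(income), computed by hand."""
    xs = [math.log(income) for income, _ in pairs]
    ys = [happiness for _, happiness in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# If the two slopes are similar, the within-country and cross-country
# relationships agree -- the heart of the "no paradox" conclusion.
print(f"within-country slope: {slope_vs_log_income(within):.2f}")
print(f"across-country slope: {slope_vs_log_income(across):.2f}")
```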
Weiye Loh

TODAYonline | Commentary | For the info-rich and time-poor, digital curators to the res... - 0 views

  • digital "curators" choose and present things related to a specific topic and context. They "curate", as opposed to "aggregate", which implies plain collecting with little or no value add. Viewed in this context, Google search does the latter, not the former. So, who curates? The Huffington Post, or HuffPo, is one high-profile example and, it appears, a highly-valued one too, going by AOL numbers-crunchers who forked out US$315 million (S$396.9 million) to acquire it. Accolades have also come in for Arianna Huffington's team of contributors and more than 3,000 bloggers - from politicians to celebrities to think-tankers. The website was named second among the 25 best blogs of 2009 by Time magazine, and most powerful blog in the world by The Observer.
  • By sifting, sorting and presenting news and views - yes, "curating" - HuffPo makes itself useful in an age of too much information and too many opinions. (Strictly speaking, HuffPo is both a creator and curator.) If what HuffPo is doing seems deja vu, it is hardly surprising. Remember the good old "curated" news of the pre-Internet days when newspapers decided what news was published and what we read? Then, the Editor was the Curator with the capital "C".
  • But with the arrival of the Internet and the uploading of news and views by organisations and netizens, the bits and bytes have turned into a tsunami. Aggregators like Google search threw us some life buoys, using text and popularity to filter the content. But with millions of new articles and videos added to the Internet daily, the "right" content has become that proverbial needle in the haystack. Hence the need for curation.
  •  
    Inundated by the deluge of information, and with little time on our hands, some of us turn to social media networks. Sometimes, postings by friends are useful. But often, the typically self-indulgent musings are not. It's "curators" to the rescue.
Weiye Loh

Read Aubrey McClendon's response to "misleading" New York Times article (1) - 0 views

  • Since the shale gas revolution and resulting confirmation of enormous domestic gas reserves, there has been a relatively small group of analysts and geologists who have doubted the future of shale gas.  Their doubts have become very convenient to the environmental activists I mentioned earlier. This particular NYT reporter has apparently sought out a few of the doubters to fashion together a negative view of the U.S. natural gas industry. We also believe certain media outlets, especially the once venerable NYT, are being manipulated by those whose environmental or economic interests are being threatened by abundant natural gas supplies. We have seen for example today an email from a leader of a group called the Environmental Working Group who claimed today’s articles as this NYT reporter’s "second great story" (the first one declaring that produced water disposal from shale gas wells was unsafe) and that “we've been working with him for over 8 months. Much more to come. . .”
  • this reporter’s claim of impending scarcity of natural gas supply contradicts the facts and the scientific extrapolation of those facts by the most sophisticated reservoir engineers and geoscientists in the world. Not just at Chesapeake, but by experts at many of the world’s leading energy companies that have made multi-billion-dollar, long-term investments in U.S. shale gas plays, with us and many other companies. Notable examples of these companies, besides the leading independents such as Chesapeake, Devon, Anadarko, EOG, EnCana, Talisman and others, include these leading global energy giants:  Exxon, Shell, BP, Chevron, Conoco, Statoil, BHP, Total, CNOOC, Marathon, BG, KNOC, Reliance, PetroChina, Mitsui, Mitsubishi and ENI, among others.  Is it really possible that all of these companies, with a combined market cap of almost $2 trillion, know less about shale gas than a NYT reporter, a few environmental activists and a handful of shale gas doubters?
  •  
    Administrator's Note: This email was sent to all Chesapeake employees from CEO Aubrey McClendon, in response to a Sunday New York Times piece by Ian Urbina entitled "Insiders Sound an Alarm Amid a Natural Gas Rush."   FW: CHK's response to 6.26.11 NYT article on shale gas   From: Aubrey McClendon Sent: Sunday, June 26, 2011 8:37 PM To: All Employees   Dear CHK Employees:  By now many of you may have read or heard about a story in today's New York Times (NYT) that questioned the productive capacity and economic quality of U.S. natural gas shale reserves, as well as energy reserve accounting practices used by E&P companies, including Chesapeake.  The story is misleading, at best, and is the latest in a series of articles produced by this publication that obviously have an anti-industry bias.  We know for a fact that today's NYT story is the handiwork of the same group of environmental activists who have been the driving force behind the NYT's ongoing series of negative articles about the use of fracking and its importance to the US natural gas supply growth revolution - which is changing the future of our nation for the better in multiple areas.  It is not clear to me exactly what these environmental activists are seeking to offer as their alternative energy plan, but most that I have talked to continue to naively presume that our great country need only rely on wind and solar energy to meet our current and future energy needs. They always seem to forget that wind and solar produce less than 2% of America's electricity today and are completely non-economic without ongoing government and ratepayer subsidies.
Weiye Loh

FleetStreetBlues: Independent columnist Johann Hari admits copying and pasting intervie... - 0 views

  • this isn't just a case of referencing something the interviewee has written previously - 'As XXX has written before...', or such like. No, Hari adds dramatic context to quotes which were never said - the following paragraph, for instance, is one of the quotes from the Levy interview which seems to have appeared elsewhere before. After saying this, he falls silent, and we stare at each other for a while. Then he says, in a quieter voice: “The facts are clear. Israel has no real intention of quitting the territories or allowing the Palestinian people to exercise their rights. No change will come to pass in the complacent, belligerent, and condescending Israel of today. This is the time to come up with a rehabilitation programme for Israel.”
  • So how does Hari justify it? Well, his post on 'Interview etiquette', as he calls it, is so stunningly brazen about playing fast-and-loose with quotes
  • When I’ve interviewed a writer, it’s quite common that they will express an idea or sentiment to me that they have expressed before in their writing – and, almost always, they’ve said it more clearly in writing than in speech. (I know I write much more clearly than I speak – whenever I read a transcript of what I’ve said, it always seems less clear and more clotted. I think we’ve all had that sensation in one form or another). So occasionally, at the point in the interview where the subject has expressed an idea, I’ve quoted the idea as they expressed it in writing, rather than how they expressed it in speech. It’s a way of making sure the reader understands the point that (say) Gideon Levy wants to make as clearly as possible, while retaining the directness of the interview. Since my interviews are intellectual portraits that I hope explain how a person thinks, it seemed the most thorough way of doing it...
  • ...I’m a bit bemused to find one blogger considers this “plagiarism”. Who’s being plagiarized? Plagiarism is passing off somebody else’s intellectual work as your own – whereas I’m always making it clear that (say) Gideon Levy’s thought is Gideon Levy’s thought. I’m also a bit bemused to find that some people consider this “churnalism”. Churnalism is a journalist taking a press release and mindlessly recycling it – not a journalist carefully reading over all a writer’s books and selecting parts of it to accurately quote at certain key moments to best reflect how they think.
  • I called round a few other interviewers for British newspapers and they said what I did was normal practice and they had done it themselves from time to time. My test for journalism is always – would the readers mind you did this, or prefer it? Would they rather I quoted an unclear sentence expressing a thought, or a clear sentence expressing the same thought by the same person very recently? Both give an accurate sense of what a person is like, but one makes their ideas as accessible as possible for the reader while also being an accurate portrait of the person.
  • The Independent's top columnist and interviewer has just admitted that he routinely adds things his interviewees have written at some point in the past to their quotes, and then deliberately passes these statements off as though they were said to him in the course of an interview. The main art of being an interviewer is to be skilled at eliciting the right quotes from your subject. If Johann Hari wants to write 'intellectual portraits', he should go and write fiction. Do his editors really know that the copy they're printing ('we stare at each other for a while. Then he says in a quieter voice...') is essentially made up? What would Jayson Blair make of it all? Astonishing.
  •  
    In the last few days, a couple of blogs have been scrutinising the work of Johann Hari, the multiple award-winning Independent columnist and interviewer. A week ago on Friday the political DSG blog pointed out an eerie series of similarities between the quotes in Hari's interview with Toni Negri in 2004, and quotes in the book Negri on Negri, published in 2003. Brian Whelan, an editor with Yahoo! Ireland and a regular FleetStreetBlues contributor, spotted this and got in touch to suggest perhaps this wasn't the only time quotes in Hari's interviews had appeared elsewhere before. We ummed and ahhed slightly about running the piece based on one analysis from a self-proclaimed leftist blog - so Brian went away and did some analysis of his own, and found that a number of quotes in Hari's interview with Gideon Levy in the Independent last year had also been copied from elsewhere. So far, so scurrilous. But what's really astonishing is that Johann Hari has now responded to the blog accusations - and cheerfully admitted that he regularly includes in interviews quotes which the interviewee never actually said to him.