
New Media Ethics 2009 course: Group items tagged Public Relations


Weiye Loh

True Enough: CJR - 0 views

  • The dangers are clear. As PR becomes ascendant, private and government interests become more able to generate, filter, distort, and dominate the public debate, and to do so without the public knowing it. “What we are seeing now is the demise of journalism at the same time we have an increasing level of public relations and propaganda,” McChesney said. “We are entering a zone that has never been seen before in this country.”
  • Michael Schudson, a journalism professor at Columbia University, CJR contributor, and author of Discovering the News, said modern public relations started when Ivy Lee, a minister’s son and a former reporter at the New York World, tipped reporters to an accident on the Pennsylvania Railroad. Before then, railroads had done everything they could to cover up accidents. But Lee figured that crashes, which tend to leave visible wreckage, were hard to hide. So it was better to get out in front of the inevitable story. The press release was born. Schudson said the rise of the “publicity agent” created deep concern among the nation’s leaders, who distrusted a middleman inserting itself between government and the public and shaping messages. Congress was so concerned that it attached amendments to bills in 1908 and 1913 that said no money could be appropriated for preparing newspaper articles or hiring publicity agents.
  • But World War I pushed those concerns to the side. The government needed to rally the public behind a deeply unpopular war. Suddenly, publicity agents did not seem so bad.
  • “After the war, PR becomes a very big deal,” Schudson said. “It was partly stimulated by the war and the idea of journalists and others being employed by the government as propagandists.” Many who worked for the massive wartime propaganda apparatus found an easy transition into civilian life.
  • People “became more conscious that they were not getting direct access, that it was being screened for them by somebody else,” Schudson said. But there was no turning back. PR had become a fixture of public life. Concern about the invisible filter of public relations became a steady drumbeat in the press.
  • When public relations began its ascent in the early twentieth century, journalism was rising alongside it. The period saw the ferocious work of the muckrakers, the development of the great newspaper chains, and the dawn of radio and, later, television. Journalism of the day was not perfect; sometimes it was not even good. But it was an era of expansion that eventually led to the powerful press of the mid to late century.
  • Now, during a second rise of public relations, we are in an era of massive contraction in traditional journalism. Bureaus have closed, thousands of reporters have been laid off, once-great newspapers like the Rocky Mountain News have died. The Pew Center took a look at the impact of these changes last year in a study of the Baltimore news market. The report, “How News Happens,” found that while new online outlets had increased the demand for news, the number of original stories spread out among those outlets had declined. In one example, Pew found that area newspapers wrote one-third the number of stories about state budget cuts as they did the last time the state made similar cuts in 1991. In 2009, Pew said, The Baltimore Sun produced 32 percent fewer stories than it did in 1999.
  • even original reporting often bore the fingerprints of government and private public relations. Mark Jurkowitz, associate director of the Pew Center, said the Baltimore report concentrated on six major story lines: state budget cuts, shootings of police officers, the University of Maryland’s efforts to develop a vaccine, the auction of the Senator Theater, the installation of listening devices on public buses, and developments in juvenile justice. It found that 63 percent of the news about those subjects was generated by the government, 23 percent came from interest groups or public relations, and 14 percent started with reporters.
  • The Internet makes it easy for public relations people to reach out directly to the audience and bypass the press, via websites and blogs, social media and videos on YouTube, and targeted e-mail.
  • Some experts have argued that in the digital age, new forms of reporting will eventually fill the void left by traditional newsrooms. But few would argue that such a point has arrived, or is close to arriving. “There is the overwhelming sense that the void that is created by the collapse of traditional journalism is not being filled by new media, but by public relations,” said John Nichols, a Nation correspondent and McChesney’s co-author. Nichols said reporters usually make some calls and check facts. But the ability of government or private public relations to generate stories grows as reporters have less time to seek out stories on their own. That gives outside groups more power to set the agenda.
  • In their recent book, The Death and Life of American Journalism, Robert McChesney and John Nichols tracked the number of people working in journalism since 1980 and compared it to the numbers for public relations. Using data from the US Bureau of Labor Statistics, they found that the number of journalists has fallen drastically while public relations people have multiplied at an even faster rate. In 1980, there were about 0.45 PR workers per one hundred thousand population, compared with 0.36 journalists. In 2008, there were 0.90 PR people per one hundred thousand, compared with 0.25 journalists. That is a ratio of more than three to one, and the PR side is better equipped and better financed.
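The arithmetic behind that ratio is worth making explicit. A minimal sketch in Python, using only the per-100,000-population figures quoted above:

```python
# McChesney/Nichols PR-to-journalist ratio, computed from the
# per-100,000-population figures quoted in the excerpt above.
rates = {
    1980: (0.45, 0.36),  # (PR workers, journalists) per 100,000 people
    2008: (0.90, 0.25),
}

for year, (pr, journalists) in rates.items():
    print(f"{year}: {pr / journalists:.2f} PR workers per journalist")
# 1980: 1.25 PR workers per journalist
# 2008: 3.60 PR workers per journalist -- the "more than three-to-one"
```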
Weiye Loh

Why do we care where we publish? - 0 views

  • being both a working scientist and a science writer gives me a unique perspective on science, scientific publications, and the significance of scientific work. The final disclosure should be that I have never published in any of the top rank physics journals or in Science, Nature, or PNAS. I don't believe I have an axe to grind about that, but I am also sure that you can ascribe some of my opinions to PNAS envy.
  • If you asked most scientists what their goals were, the answer would boil down to the generation of new knowledge. But, at some point, science and scientists have to interact with money and administrators, which has significant consequences for science. For instance, when trying to employ someone to do a job, you try to objectively decide if the skills set of the prospective employee matches that required to do the job. In science, the same question has to be asked—instead of being asked once per job interview, however, this question gets asked all the time.
  • Because science requires funding, and no one gets a lifetime dollop-o-cash to explore their favorite corner of the universe. So, the question gets broken down to "how competent is the scientist?" "Is the question they want to answer interesting?" "Do they have the resources to do what they say they will?" We will ignore the last question and focus on the first two.
  • How can we assess the competence of a scientist? Past performance is, realistically, the only way to judge future performance. Past performance can only be assessed by looking at their publications. Were they in a similar area? Are they considered significant? Are they numerous? Curiously, though, the second question is also answered by looking at publications—if a topic is considered significant, then there will be lots of publications in that area, and those publications will be of more general interest, and so end up in higher ranking journals.
  • So we end up in the situation that the editors of major journals are in the position to influence the direction of scientific funding, meaning that there is a huge incentive for everyone to make damn sure that their work ends up in Science or Nature. But why are Science, Nature, and PNAS considered the place to put significant work? Why isn't a new optical phenomenon, published in Optics Express, as important as a new optical phenomenon published in Science?
  • The big three try to be general; they will, in principle, publish reports from any discipline, and they anticipate readership from a range of disciplines. This explicit generality means that the scientific results must not only be of general interest, but also highly significant. The remaining journals become more specialized, covering perhaps only physics, or optics, or even just optical networking. However, they all claim to only publish work that is highly original in nature.
  • Are standards really so different? Naturally, the more specialized a journal is, the fewer people it appeals to. However, the major difference in determining originality is one of degree and referee. A more specialized journal has more detailed articles, so the differences between experiments stand out more obviously, while appealing to general interest changes the emphasis of the article away from details toward broad conclusions.
  • as the audience becomes broader, more technical details get left by the wayside. Note that none of the gene sequences published in Science have the actual experimental and analysis details. What ends up published is really a broad-brush description of the work, with the important details either languishing as supplemental information, or even published elsewhere, in a more suitable journal. Yet, the high profile paper will get all the citations, while the more detailed—the unkind would say accurate—description of the work gets no attention.
  • And that is how journals are ranked. Count the number of citations for each journal per volume, run it through a magic number generator, and the impact factor jumps out (make your checks out to ISI Thomson please). That leaves us with the following formula: grants require high impact publications, high impact publications need citations, and that means putting research in a journal that gets lots of citations. Grants follow the concepts that appear to be currently significant, and that's decided by work that is published in high impact journals.
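The "magic number generator" is less mysterious than the author lets on: a journal's two-year impact factor is the citations received this year by its previous two years of papers, divided by the number of citable items published in those two years. A sketch with invented numbers (no real journal data):

```python
def impact_factor(citations_to_last_two_years: float,
                  citable_items_last_two_years: int) -> float:
    """Two-year impact factor: this year's citations to the journal's
    previous two years of papers, divided by the number of citable
    items it published in those two years."""
    return citations_to_last_two_years / citable_items_last_two_years

# Hypothetical journal: 12,000 citations in 2010 to its 2008-09 papers,
# of which there were 400. Both numbers are invented for illustration.
print(impact_factor(12_000, 400))  # 30.0
```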
  • This system would be fine if it did not ignore the fact that performing science and reporting scientific results are two very different skills, and not everyone has both in equal quantity. The difference between a Nature-worthy finding and a not-Nature-worthy finding is often in the quality of the writing. How skillfully can I relate this bit of research back to general or topical interests? It really is this simple. Over the years, I have seen quite a few physics papers with exaggerated claims of significance (or even results) make it into top flight journals, and the only differences I can see between those works and similar works published elsewhere is that the presentation and level of detail are different.
  • articles from the big three are much easier to cover on Nobel Intent than articles from, say, Physical Review D. Nevertheless, when we do cover them, sometimes the researchers suddenly realize that they could have gotten a lot more mileage out of their work. It changes their approach to reporting their results, which I see as evidence that writing skill counts for as much as scientific quality.
  • If that observation is generally true, then it raises questions about the whole process of evaluating a researcher's competence and a field's significance, because good writers corrupt the process by publishing less significant work in journals that only publish significant findings. In fact, I think it goes further than that, because Science, Nature, and PNAS actively promote themselves as scientific compasses. Want to find the most interesting and significant research? Read PNAS.
  • The publishers do this by extensively publicizing science that appears in their own journals. Their news sections primarily summarize work published in the same issue of the same magazine. This lets them create a double-whammy of scientific significance—not only was the work published in Nature, they also summarized it in their News and Views section.
  • Furthermore, the top three work very hard at getting other journalists to cover their articles. This is easy to see by simply looking at Nobel Intent's coverage. Most of the work we discuss comes from Science and Nature. Is this because we only read those two publications? No, but they tell us ahead of time what is interesting in their upcoming issue. They even provide short summaries of many papers that practically guide people through writing the story, meaning reporter Jim at the local daily doesn't need a science degree to cover the science beat.
  • Very few of the other journals do this. I don't get early access to the Physical Review series, even though I love reporting from them. In fact, until this year, they didn't even highlight interesting papers for their own readers. This makes it incredibly hard for a science reporter to cover science outside of the major journals. The knock-on effect is that Applied Physics Letters never appears in the news, which means you can't evaluate recent news coverage to figure out what's of general interest, leaving you with... well, the big three journals again, which mostly report on themselves. On the other hand, if a particular scientific topic does start to receive some press attention, it is much more likely that similar work will suddenly be acceptable in the big three journals.
  • That said, I should point out that judging the significance of scientific work is a process fraught with difficulty. Why do you think it takes around 10 years from the publication of first results through to obtaining a Nobel Prize? Because it can take that long for the implications of the results to sink in—or, more commonly, sink without trace.
  • I don't think that we can reasonably expect journal editors and peer reviewers to accurately assess the significance (general or otherwise) of a new piece of research. There are, of course, exceptions: the first genome sequences, the first observation that the rate of the expansion of the universe is changing. But the point is that these are exceptions, and most work's significance is far more ambiguous, and even goes unrecognized (or over-celebrated) by scientists in the field.
  • The conclusion is that the top three journals are significantly gamed by scientists who are trying to get ahead in their careers—citations always lag a few years behind, so a PNAS paper with fewer than ten citations can look good for quite a few years, even compared to an Optics Letters paper with 50 citations. The top three journals overtly encourage this, because it is to their advantage if everyone agrees that they are the source of the most interesting science. Consequently, scientists who are more honest in self-assessing their work, or who simply aren't word-smiths, end up losing out.
  • scientific competence should not be judged by how many citations the author's work has received or where it was published. Instead, we should consider using a mathematical graph analysis to look at the networks of publications and citations, which should help us judge how central to a field a particular researcher is. This would have the positive influence of a publication mattering less than who thought it was important.
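The author leaves the graph analysis unspecified. One plausible reading, and it is only our assumption, is an eigenvector-style measure such as PageRank over the citation network, where a citation from a central paper counts for more than one from a peripheral paper. A toy sketch using networkx, with invented papers:

```python
import networkx as nx

# Toy citation network: an edge A -> B means paper A cites paper B.
# All papers and edges are invented for illustration.
G = nx.DiGraph()
G.add_edges_from([
    ("p1", "p0"), ("p2", "p0"), ("p3", "p0"),  # p0 collects raw citations
    ("p4", "p1"), ("p5", "p1"), ("p5", "p2"),
    ("p0", "p6"),                              # p6 is cited once, but by p0
])

# PageRank weighs *who* cites you, not just how many do: p6, cited a
# single time by the central paper p0, can outrank papers with more
# citations from the periphery.
for paper, score in sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1]):
    print(paper, round(score, 3))
```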
  • Science and Nature should either eliminate their News and Views section, or implement a policy of not reporting on their own articles. This would open up one of the major sources of "science news for scientists" to stories originating in other journals.
Weiye Loh

Roger Pielke Jr.'s Blog: Science Impact - 0 views

  • The Guardian has a blog post up by three neuroscientists decrying the state of hype in the media related to their field, which is fueled in part by their colleagues seeking "impact." 
  • Anyone who has followed recent media reports that electrical brain stimulation "sparks bright ideas" or "unshackles the genius within" could be forgiven for believing that we stand on the frontier of a brave new world. As James Gallagher of the BBC put it, "Are we entering the era of the thinking cap – a device to supercharge our brains?" The answer, we would suggest, is a categorical no. Such speculations begin and end in the colourful realm of science fiction. But we are also in danger of entering the era of the "neuro-myth", where neuroscientists sensationalise and distort their own findings in the name of publicity. The tendency for scientists to over-egg the cake when dealing with the media is nothing new, but recent examples are striking in their disregard for accurate reporting to the public. We believe the media and academic community share a collective responsibility to prevent pseudoscience from masquerading as neuroscience.
  • They identify an "unacceptable gulf between, on the one hand, the evidence-bound conclusions reached in peer-reviewed scientific journals, and on the other, the heavy spin applied by scientists to achieve publicity in the media. Are we as neuroscientists so unskilled at communicating with the public, or so low in our estimation of the public's intelligence, that we see no alternative but to mislead and exaggerate?"
  • Somewhere down the line, achieving an impact in the media seems to have become the goal in itself, rather than what it should be: a way to inform and engage the public with clarity and objectivity, without bias or prejudice. Our obsession with impact is not one-sided. The craving of scientists for publicity is fuelled by a hurried and unquestioning media, an academic community that disproportionately rewards publication in "high impact" journals such as Nature, and by research councils that emphasise the importance of achieving "impact" while at the same time delivering funding cuts. Academics are now pushed to attend media training courses, instructed about "pathways to impact", required to include detailed "impact summaries" when applying for grant funding, and constantly reminded about the importance of media engagement to further their careers. Yet where in all of this strategising and careerism is it made clear why public engagement is important? Where is it emphasised that the most crucial consideration in our interactions with the media is that we are accurate, honest and open about the limitations of our research?
Weiye Loh

How drug companies' PR tactics skew the presentation of medical research | Science | gu... - 0 views

  • Drug companies exert this hold on knowledge through publication planning agencies, an obscure subsection of the pharmaceutical industry that has ballooned in size in recent years, and is now a key lever in the commercial machinery that gets drugs sold. The planning companies are paid to implement high-impact publication strategies for specific drugs. They target the most influential academics to act as authors, draft the articles, and ensure that these include clearly-defined branding messages and appear in the most prestigious journals.
  • In selling their services to drug companies, the agencies explain their work in frank language. Current Medical Directions, a medical communications company based in New York, promises to create "scientific content in support of our clients' messages". A rival firm from Macclesfield, Complete HealthVizion, describes what it does as "a fusion of evidence and inspiration."
  • There are now at least 250 different companies engaged in the business of planning clinical publications for the pharmaceutical industry, according to the International Society for Medical Publication Professionals, which said it has over 1000 individual members. Many firms are based in the UK and the east coast of the United States in traditional "pharma" centres like Pennsylvania and New Jersey. Precise figures are hard to pin down because publication planning is widely dispersed and is only beginning to be recognized as something like a discrete profession.
  • the standard approach to article preparation is for planners to work hand-in-glove with drug companies to create a first draft. "Key messages" laid out by the drug companies are accommodated to the extent that they can be supported by available data. Planners combine scientific information about a drug with two kinds of message that help create a "drug narrative". "Environmental" messages are intended to forge the sense of a gap in available medicine within a specific clinical field, while "product" messages show how the new drug meets this need.
  • In a flow-chart drawn up by Eric Crown, publications manager at Merck (the company that sold the controversial painkiller Vioxx), the determination of authorship appears as the fourth stage of the article preparation procedure. That is, only after company employees have presented clinical study data, discussed the findings, finalised "tactical plans" and identified where the article should be published. Perhaps surprisingly to the casual observer, under guidelines tightened up in recent years by the International Committee of Medical Journal Editors (ICMJE), Crown's approach, typical among pharmaceutical companies, does not constitute ghostwriting.
  • What publication planners understand by the term is precise but it is also quite distinct from the popular interpretation.
  • "We may have written a paper, but the people we work with have to have some input and approve it."
  • "I feel that we're doing something good for mankind in the long-run," said Kimberly Goldin, head of the International Society for Medical Publication Professionals (ISMPP). "We want to influence healthcare in a very positive, scientifically sound way.""The profession grew out of a marketing umbrella, but has moved under the science umbrella," she said.But without the window of court documents to show how publication planning is being carried out today, the public simply cannot know if reforms the industry says it has made are genuine.
  • Dr Leemon McHenry, a medical ethicist at California State University, says nothing has changed. "They've just found more clever ways of concealing their activities. There's a whole army of hidden scribes. It's an epistemological morass where you can't trust anything." Alastair Matheson is a British medical writer who has worked extensively for medical communication agencies. He dismisses the planners' claims to having reformed as "bullshit". "The new guidelines work very nicely to permit the current system to continue as it has been," he said. "The whole thing is a big lie. They are promoting a product."
Weiye Loh

When Insurers Put Profits Before People - NYTimes.com - 0 views

  • Late in 2007
  • A 17-year-old girl named Nataline Sarkisyan was in desperate need of a transplant after receiving aggressive treatment that cured her recurrent leukemia but caused her liver to fail. Without a new organ, she would die in a matter of days; with one, she had a 65 percent chance of surviving. Her doctors placed her on the liver transplant waiting list.
  • She was critically ill, as close to death as one could possibly be while technically still alive, and her fate was inextricably linked to another’s. Somewhere, someone with a compatible organ had to die in time for Nataline to live.
  • But even when the perfect liver became available a few days after she was put on the list, doctors could not operate. What made Nataline different from most transplant patients, and what eventually brought her case to the attention of much of the country, was that her survival did not depend on the availability of an organ or her clinicians or even the quality of care she received. It rested on her health insurance company.
  • Cigna had denied the initial request to cover the costs of the liver transplant. And the insurer persisted in its refusal, claiming that the treatment was “experimental” and unproven, and despite numerous pleas from Nataline’s physicians to the contrary.
  • But as relatives and friends organized campaigns to draw public attention to Nataline’s plight, the insurance conglomerate found itself embroiled in a public relations nightmare, one that could jeopardize its very existence. The company reversed its decision. But the change came too late. Nataline died just a few hours after Cigna authorized the transplant.
  • Mr. Potter was the head of corporate communications at two major insurers, first at Humana and then at Cigna. Now Mr. Potter has written a fascinating book that details the methods he and his colleagues used to manipulate public opinion.
  • Mr. Potter goes on to describe the myth-making he did, interspersing descriptions of front groups, paid spies and jiggered studies with a deft retelling of the convoluted (and usually eye-glazing) history of health care insurance policies.
  • We learn that executives at Cigna worried that Nataline’s situation would only add fire to the growing public discontent with a health care system anchored by private insurance. As the case drew more national attention, the threat of a legislative overhaul that would ban for-profit insurers became real, and Mr. Potter found himself working on the biggest P.R. campaign of his career.
  • Cigna hired a large international law firm and a P.R. firm already well known to them from previous work aimed at discrediting Michael Moore and his film “Sicko.” Together with Cigna, these outside firms waged a campaign that would eventually include the aggressive placement of articles with friendly “third party” reporters, editors and producers who would “disabuse the media, politicians and the public of the notion that Nataline would have gotten the transplant if she had lived in Canada or France or England or any other developed country.” A “spy” was dispatched to Nataline’s funeral; and when the Sarkisyan family filed a lawsuit against the insurer, a team of lawyers was assigned to keep track of actions and comments by the family’s lawyer.
  • In the end, however, Nataline’s death proved to be the final straw for Mr. Potter. “It became clearer to me than ever that I was part of an industry that would do whatever it took to perpetuate its extraordinarily profitable existence,” he writes. “I had sold my soul.” He left corporate public relations for good less than six months after her death.
  • “I don’t mean to imply that all people who work for health insurance companies are greedier or more evil than other Americans,” he writes. “In fact, many of them feel — and justifiably so — that they are helping millions of people get the care they need.” The real problem, he says, lies in the fact that the United States “has entrusted one of the most important societal functions, providing health care, to private health insurance companies.” Therefore, the top executives of these companies become beholden not to the patients they have pledged to cover, but to the shareholders who hold them responsible for the bottom line.
Weiye Loh

Before Assange there was Jayakumar: Context, realpolitik, and the public inte... - 0 views

  • Singapore Ministry of Foreign Affairs spokesman’s remarks in the Wall Street Journal Asia piece, “Leaked cable spooks some U.S. sources” dated 3 Dec 2010. The paragraph in question went like this: “Others laid blame not on working U.S. diplomats, but on Wikileaks. Singapore’s Ministry of Foreign Affairs said it had “deep concerns about the damaging action of Wikileaks.” It added, ‘it is critical to protect the confidentiality of diplomatic and official correspondence.’” (emphasis my own)
  • on 25 Jan 2003, the then Singapore Minister of Foreign Affairs and current Senior Minister without portfolio, Professor S Jayakumar, in an unprecedented move, unilaterally released all diplomatic and official correspondence relating to confidential discussions on water negotiations between Singapore and Malaysia from the year 2000. In a parliamentary speech that would have had Julian Assange smiling from ear to ear, Jayakumar said, “We therefore have no choice but to set the record straight by releasing these documents for people to judge for themselves the truth of the matter.” The parliamentary reason for the unprecedented release of information was the misrepresentations made by Malaysia over the price of water, amongst others.
  • The then Malaysian Prime Minister, Mahathir’s response to Singapore’s pre-Wikileak wikileak was equally quote-worthy, “I don’t feel nice. You write a letter to your girlfriend. And your girlfriend circulates it to all her boyfriends. I don’t think I’ll get involved with that girl.”
  • Mahathir did not leave it at that. He foreshadowed the Wikileak-chastised countries of today saying what William, the Singapore Ministry of Foreign Affairs, the US and Iran today, amongst others, must agree with, “It’s very difficult now for us to write letters at all because we might as well negotiate through the media.”
  • I proceeded to the Ministry of Foreign Affairs homepage to search for the full press release. As I anticipated, there was a caveat. This is the press release in full: In response to media queries on the WikiLeaks release of confidential and secret-graded US diplomatic correspondence, the MFA Spokesman expressed deep concerns about the damaging action of WikiLeaks. It is critical to protect the confidentiality of diplomatic and official correspondence, which is why Singapore has the Officials Secrets Act. In particular, the selective release of documents, especially when taken out of context, will only serve to sow confusion and fail to provide a complete picture of the important issues that were being discussed amongst leaders in the strictest of confidentiality.
  • The final sentence of the release seems to posit that the selective release of documents can be legitimised if released documents are not taken out of context. If this interpretation is true, then one can account for the political decision to release confidential correspondence covering the Singapore and Malaysia water talks referred to above. In parallel, one can imagine Assange or his supporters arguing that the lies about weapons of mass destruction in Iraq and the advent of abject two-faced politics today are sufficient grounds to justify the actions of Wikileaks. As for the arguments about confidentiality and official correspondence, the events in parliament in 2003 tell us no one should underestimate the ability of nation-states to do an Assange if it befits their purpose – be it directly, as Jayakumar did, or indirectly, through the media or some other medium of influence.
  • Timothy Garton Ash put out the dilemma perfectly when he said, “There is a public interest in understanding how the world works and what is done in our name. There is a public interest in the confidential conduct of foreign policy. The two public interests conflict.”
  • the advent of technology will only further blur the lines between these two public interests, if it has not already. Quite apart from technology, the absence of transparent and accountable institutions may also serve to guarantee the prospect of more of such embarrassing leaks in future.
  • In August 2009, there was considerable interest in Singapore about the circumstances behind the departure of Chip Goodyear, former CEO of the Australian mining giant BHP Billiton, from the national sovereign wealth fund, Temasek Holdings. Before that, all the public knew was – in the name of leadership renewal – Chip Goodyear had been carefully chosen and apparently hand-picked to replace Ho Ching as CEO of Temasek Holdings. In response to Chip’s untimely departure, Finance Minister Tharman Shanmugaratnam was quoted, “People do want to know, there is curiosity, it is a matter of public interest. That is not sufficient reason to disclose information. It is not sufficient that there be curiosity and interest that you want to disclose information.”
  • Overly secretive and furtive politicians operating in a parliamentary democracy are unlikely to inspire confidence among an educated citizenry either, only serving to paradoxically fuel public cynicism and conspiracy theories.
  • I believe that government officials and politicians who perform their jobs honourably have nothing to fear from Wikileaks. I would admit that there is an inherent naivety and idealism in this position. But if the lesson from the Wikileaks episode portends a higher standard of ethical conduct, encourages transparency and accountability – all of which promote good governance, realpolitik notwithstanding – then it is perhaps a lesson all politicians and government officials should pay keen attention to.
  • Post-script: “These disclosures are largely of analysis and high-grade gossip. Insofar as they are sensational, it is in showing the corruption and mendacity of those in power, and the mismatch between what they claim and what they do….If American spies are breaking United Nations rules by seeking the DNA biometrics of the UN director general, he is entitled to hear of it. British voters should know what Afghan leaders thought of British troops. American (and British) taxpayers might question, too, how most of the billions of dollars going in aid to Afghanistan simply exits the country at Kabul airport.” –Simon Jenkins, Guardian
Weiye Loh

Censorship of War News Undermines Public Trust - 20 views

I posted a bookmark on something related to this issue. http://www.todayonline.com/World/EDC090907-0000047/The-photo-thats-caused-a-stir AP decided to publish a photo of a fatally wounded young ...

censorship PR

Weiye Loh

Religion as a catalyst of rationalization « The Immanent Frame - 0 views

  • For Habermas, religion has been a continuous concern precisely because it is related to both the emergence of reason and the development of a public space of reason-giving. Religious ideas, according to Habermas, are never mere irrational speculation. Rather, they possess a form, a grammar or syntax, that unleashes rational insights, even arguments; they contain, not just specific semantic contents about God, but also a particular structure that catalyzes rational argumentation.
  • in his earliest, anthropological-philosophical stage, Habermas approaches religion from a predominantly philosophical perspective. But as he undertakes the task of “transforming historical materialism” that will culminate in his magnum opus, The Theory of Communicative Action, there is a shift from philosophy to sociology and, more generally, social theory. With this shift, religion is treated, not as a germinal for philosophical concepts, but instead as the source of the social order.
  • What is noteworthy about this juncture in Habermas’s writings is that secularization is explained as “pressure for rationalization” from “above,” which meets the force of rationalization from below, from the realm of technical and practical action oriented to instrumentalization. Additionally, secularization here is not simply the process of the profanation of the world—that is, the withdrawal of religious perspectives as worldviews and the privatization of belief—but, perhaps most importantly, religion itself becomes the means for the translation and appropriation of the rational impetus released by its secularization.
  • religion becomes its own secular catalyst, or, rather, secularization itself is the result of religion. This approach will mature in the most elaborate formulation of what Habermas calls the “linguistification of the sacred,” in volume two of The Theory of Communicative Action. There, basing himself on Durkheim and Mead, Habermas shows how ritual practices and religious worldviews release rational imperatives through the establishment of a communicative grammar that conditions how believers can and should interact with each other, and how they relate to the idea of a supreme being. Habermas writes: “worldviews function as a kind of drive belt that transforms the basic religious consensus into the energy of social solidarity and passes it on to social institutions, thus giving them a moral authority. [...] Whereas ritual actions take place at a pregrammatical level, religious worldviews are connected with full-fledged communicative actions.”
  • The thrust of Habermas’s argumentation in this section of The Theory of Communicative Action is to show that religion is the source of the normative binding power of ethical and moral commandments. Yet there is an ambiguity here. While the contents of worldviews may be sublimated into the normative binding of social systems, it is not entirely clear that the structure, or the grammar, of religious worldviews is itself exhausted. Indeed, in “A Genealogical Analysis of the Cognitive Content of Morality,” Habermas resolves this ambiguity by claiming that the horizontal relationship among believers and the vertical relationship between each believer and God shape the structure of our moral relationship to our neighbour, but now under two corresponding aspects: that of solidarity and that of justice. Here, the grammar of one’s religious relationship to God and the corresponding community of believers are like the exoskeleton of a magnificent species, which, once the religious worldviews contained in them have desiccated under the impact of the forces of secularization, leave behind a casing to be used as a structuring shape for other contents.
  • Metaphysical thinking, which for Habermas has become untenable by the very logic of philosophical development, is characterized by three aspects: identity thinking, or the philosophy of origins that postulates the correspondence between being and thought; the doctrine of ideas, which becomes the foundation for idealism, which in turn postulates a tension between what is perceived and what can be conceptualized; and a concomitant strong concept of theory, where the bios theoretikos takes on a quasi-sacred character, and where philosophy becomes the path to salvation through dedication to a life of contemplation. By “postmetaphysical” Habermas means the new self-understanding of reason that we are able to obtain after the collapse of the Hegelian idealist system—the historicization of reason, or the de-substantivation that turns it into a procedural rationality, and, above all, its humbling. It is noteworthy that one of the main aspects of the new postmetaphysical constellation is that in the wake of the collapse of metaphysics, philosophy is forced to recognize that it must co-exist with religious practices and language: Philosophy, even in its postmetaphysical form, will be able neither to replace nor to repress religion as long as religious language is the bearer of semantic content that is inspiring and even indispensable, for this content eludes (for the time being?) the explanatory force of philosophical language and continues to resist translation into reasoning discourses.
  • metaphysical thinking either surrendered philosophy to religion or sought to eliminate religion altogether. In contrast, postmetaphysical thinking recognizes that philosophy can neither replace nor dismissively reject religion, for religion continues to articulate a language whose syntax and content elude philosophy, but from which philosophy continues to derive insights into the universal dimensions of human existence.
  • Habermas claims that even moral discourse cannot translate religious language without something being lost: “Secular languages which only eliminate the substance once intended leave irritations. When sin was converted to culpability, and the breaking of divine commands to an offence against human laws, something was lost.” Still, Habermas’s concern with religion is no longer solely philosophical, nor merely socio-theoretical, but has taken on political urgency. Indeed, he now asks whether modern rule of law and constitutional democracies can generate the motivational resources that nourish them and make them durable. In a series of essays, now gathered in Between Naturalism and Religion, as well as in his Europe: The Faltering Project, Habermas argues that as we have become members of a world society (Weltgesellschaft), we have also been forced to adopt a societal “post-secular self-consciousness.” By this term Habermas does not mean that secularization has come to an end, and even less that it has to be reversed. Instead, he now clarifies that secularization refers very specifically to the secularization of state power and to the general dissolution of metaphysical, overarching worldviews (among which religious views are to be counted). Additionally, as members of a world society that has, if not a fully operational, at least an incipient global public sphere, we have been forced to witness the endurance and vitality of religion. As members of this emergent global public sphere, we are also forced to recognize the plurality of forms of secularization. Secularization did not occur in one form, but in a variety of forms and according to different chronologies.
  • through a critical reading of Rawls, Habermas has begun to translate the postmetaphysical orientation of modern philosophy into a postsecular self-understanding of modern rule of law societies in such a way that religious citizens as well as secular citizens can co-exist, not just by force of a modus vivendi, but out of a sincere mutual respect. “Mutual recognition implies, among other things, that religious and secular citizens are willing to listen and to learn from each other in public debates. The political virtue of treating each other civilly is an expression of distinctive cognitive attitudes.” The cognitive attitudes Habermas is referring to here are the very cognitive competencies that are distinctive of modern, postconventional social agents. Habermas’s recent work on religion, then, is primarily concerned with rescuing for the modern liberal state those motivational and moral resources that it cannot generate or provide itself. At the same time, his recent work is concerned with foregrounding the kind of ethical and moral concerns, preoccupations, and values that can guide us between the Scylla of a society administered from above by the system imperatives of a global economy and political power and the Charybdis of a technological frenzy that places us on the slippery slope of a liberally sanctioned eugenics.
  • Religion in the public sphere: Religion as a catalyst of rationalization, posted by Eduardo Mendieta.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
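The standard regression-to-the-mean account is easy to reproduce in simulation: select studies for a striking first result, and the replications, drawn from the same underlying effect, drift back down. A minimal sketch with invented parameters:

```python
import random

random.seed(1)
TRUE_EFFECT = 0.2    # the real underlying effect size (invented)
NOISE = 0.3          # per-study sampling error (invented)
N_LABS = 10_000

first = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_LABS)]
second = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_LABS)]

# Keep only the labs whose first study looked striking...
striking = [(f, s) for f, s in zip(first, second) if f > 0.6]

print(f"initial (selected): {sum(f for f, _ in striking) / len(striking):.2f}")
print(f"replication:        {sum(s for _, s in striking) / len(striking):.2f}")
# The selected first results sit far above 0.2; the replications fall
# back toward it -- a "decline" with no change in the true effect.
```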
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
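Sterling's observation can be reproduced in a few lines: simulate many small studies of a modest real effect, publish only those that clear Fisher's five-per-cent bar, and the published literature overstates the effect. A sketch, with all parameters invented:

```python
import random
import statistics

random.seed(2)
TRUE_EFFECT, N, STUDIES = 0.1, 30, 5_000  # standardized units; invented

published = []
for _ in range(STUDIES):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    if abs(mean / se) > 1.96:          # crude two-sided test at p < 0.05
        published.append(mean)

print(f"true effect:            {TRUE_EFFECT}")
print(f"studies reaching print: {len(published) / STUDIES:.0%}")
print(f"mean published effect:  {statistics.fmean(published):.2f}")
# The filter mostly passes lucky overestimates, so the published
# average lands well above the true effect.
```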
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
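A funnel graph is straightforward to draw. Under unbiased reporting the estimates scatter symmetrically around the true value, with the spread narrowing as sample size grows; Palmer's skew appears when the small null studies go missing. A sketch with simulated data (not Palmer's):

```python
import random
import matplotlib.pyplot as plt

random.seed(3)
TRUE_EFFECT = 0.0
studies = []
for _ in range(400):
    n = random.randint(10, 500)   # study sample size
    se = 1 / n ** 0.5             # sampling error shrinks with n
    studies.append((random.gauss(TRUE_EFFECT, se), n))

# Unbiased reporting gives a symmetric funnel around the true value.
# To mimic selective reporting, drop the small null studies, e.g.
#   studies = [(e, n) for e, n in studies if n > 100 or e > 0.1]
# and the base of the funnel loses one side.
plt.scatter(*zip(*studies), s=8)
plt.xlabel("estimated effect size")
plt.ylabel("sample size")
plt.title("Funnel plot (simulated data)")
plt.show()
```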
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
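To put a number on how wide that discrepancy is: if positive trials really occurred at the Western rate, 47 positives out of 47 would be essentially impossible. A back-of-the-envelope check (the test choice is ours, not the article's):

```python
from scipy.stats import binomtest

# Odds of 47 positive trials out of 47 if the underlying positive rate
# were the 56% reported for the US/Sweden/UK studies quoted above.
result = binomtest(k=47, n=47, p=0.56, alternative="greater")
print(f"p = {result.pvalue:.1e}")  # roughly 0.56**47, on the order of 1e-12
```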
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • For Ioannidis, the main problem is that too many researchers engage in “significance chasing,” or finding ways to interpret the data so that they pass the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
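A minimal simulation, under invented parameters, of what significance chasing yields when every true effect is zero: a researcher who tests twenty unrelated variables and keeps whichever one crosses the five-per-cent threshold will "find" something in roughly two runs out of three.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_variables, n_subjects, n_sims = 20, 30, 2000  # invented parameters

runs_with_hit = 0
for _ in range(n_sims):
    for _ in range(n_variables):
        # Two groups drawn from the same distribution: the true effect is zero
        a = rng.normal(size=n_subjects)
        b = rng.normal(size=n_subjects)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            runs_with_hit += 1
            break

# With 20 shots at p < 0.05, noise alone "succeeds" about 1 - 0.95**20, or 64%, of the time
print(f"runs with at least one significant result: {runs_with_hit / n_sims:.0%}")
```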
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
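That selection dynamic can be simulated directly. In the sketch below (invented parameters, not Crabbe's data), thousands of underpowered studies measure a small real effect; only the significant, dramatic-looking results pass the "publication filter," and unfiltered replications then fall back toward the true value. A decline effect emerges from nothing but noise plus selection.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented parameters: a small true effect measured by underpowered studies
true_effect, n_per_group, n_studies = 0.2, 20, 5000

published, replications = [], []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:  # the publication filter: only striking results appear
        published.append(treated.mean() - control.mean())
        # An unfiltered replication of the exact same experiment
        replications.append(rng.normal(true_effect, 1.0, n_per_group).mean()
                            - rng.normal(0.0, 1.0, n_per_group).mean())

print(f"true effect:           {true_effect:.2f}")
print(f"mean published effect: {np.mean(published):.2f}")     # inflated several-fold
print(f"mean replication:      {np.mean(replications):.2f}")  # back near the truth
```

Under these assumptions the published literature overstates the effect several-fold, and the first replications look like a mysterious decline even though nothing about the phenomenon changed.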
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
Weiye Loh

If climate scientists are in it for the money, they're doing it wrong - 0 views

  • Since climate science doesn't have a lot of commercial appeal, most of the people working in the area, and the vast majority of those publishing the scientific literature, work in academic departments or at government agencies. Penn State, home of noted climatologists Richard Alley and Michael Mann, has a strong geosciences department and, conveniently, makes the department's salary information available. It's easy to check and find that the average tenured professor earned about $120,000 last year, and a new hire a bit less than $70,000.
  • That's a pretty healthy salary by many standards, but it's hardly a racket. Penn State appears to be on the low end of similar institutions, and is outdone by two other institutions in its own state (based on this report). But, more significantly for the question at hand, we can see that Earth Sciences faculty aren't paid especially well. Sure, they do much better than the Arts faculty, but they're somewhere in the middle of the pack, and get stomped on by professors in the Business and IT departments.
  • This is all, of course, ignoring what someone who can do the sort of data analysis or modeling of complex systems that climatologists perform might make if they went to Wall Street.
  • It's also worth pointing out what they get that money for, as exemplified by a fairly typical program announcement for NSF grants. Note that it calls for studies of past climate change and its impact on the weather. This sort of research could support the current consensus view, but it just as easily might not. And here's the thing: it's impossible to tell before the work's done. Even a study looking at the flow of carbon into and out of the atmosphere, which would seem to be destined to focus on anthropogenic climate influences, might identify a previously unknown or underestimated sink or feedback. So, even if the granting process were biased (and there's been no indication that it is), there is no way for it to prevent people from obtaining contrary data. The granting system is also set up to induce people to publish it, since a grant that doesn't produce scientific papers can make it impossible for a professor to obtain future funding.
  • Maybe the money is in the perks that come with grants, which provide for travel and lab toys. Unfortunately, there's no indication that there's lots of money out there for the taking, either from the public or private sector. For the US government, spending on climate research across 13 different agencies (from the Department of State to NASA) is tracked by the US Climate Change Science Program. The group has tracked the research budget since 1989, but not everything was brought under its umbrella until 1991. That year, according to CCSP figures, about $1.45 billion was spent on climate research (all figures are in 2007 dollars). Funding peaked back in 1995 at $2.4 billion, then bottomed out in 2006 at only $1.7 billion.
  • Funding has gone up a bit over the last couple of years, and some stimulus money went into related programs. But, in general, the trend has been a downward one for 15 years; it's not an area you'd want to go into if you were looking for a rich source of grant money. If you were, you would target medical research, for which the NIH had a $31 billion budget plus another $10 billion in stimulus money.
  • Not all of this money went to researchers anyway; part of the budget goes to NASA, and includes some of that agency's (rather pricey) hardware. For example, the Orbiting Carbon Observatory cost roughly $200 million, but failed to go into orbit; its replacement is costing another $170 million.
  • Might the private sector make up for the lack of government money? Pretty unlikely. For starters, it's tough to identify many companies that have a vested interest in the scientific consensus. Renewable energy companies would seem to be the biggest winners, but they're still relatively tiny. Neither the largest wind nor the largest photovoltaic manufacturer (Vestas and First Solar, respectively) appears in the Financial Times' list of the world's 500 largest companies. In contrast, there are 16 oil companies in the top 100, and they occupy the top two spots. Exxon's profits in 2010 were nearly enough to buy both Vestas and First Solar, given their market valuations in late February.
  • climate researchers are scrambling for a piece of a shrinking government-funded pie, and the resources of the private sector are far, far more likely to go to groups that oppose their conclusions.
  • If you were paying careful attention to that last section, you would have noticed something funny: the industry that seems most likely to benefit from taking climate change seriously produces renewable energy products. However, those companies don't employ any climatologists. They probably have plenty of space for engineers, materials scientists, and maybe a quantum physicist or two, but there's not much that a photovoltaic company would do with a climatologist. Even by convincing the public of their findings—namely, climate change is real, and could have serious impacts—the scientists are not doing themselves any favors in terms of job security or alternative careers.
  • But, surely, by convincing the public, or at least the politicians, that there's something serious here, they ensure their own funding? That's arguably not true either, and the stimulus package demonstrates that nicely. The US CCSP programs, in total, got a few hundred million dollars from the stimulus. In contrast, the Department of Energy got a few billion. Carbon capture and sequestration alone received $2.4 billion, more than the entire CCSP budget.
  • The problem is that climatologists are well equipped to identify potential problems, but very poorly equipped to solve them; it would be a bit like expecting an astronomer to know how to destroy a threatening asteroid.
  • The solutions to problems related to climate change are going to come in areas like renewable energy, carbon sequestration, and efficiency measures; that's where most of the current administration's efforts have focused. None of these are areas where someone studying the climate is likely to have a whole lot to add. So, when they advocate that the public take them seriously, they're essentially asking the public to send money to someone else.
Weiye Loh

Should technical science journals have plain language translation? - Capital Weather Ga... - 0 views

  • Given that the future of the Earth depends on the public having a clearer understanding of Earth science, it seems to me there is something unethical in our insular behavior as scientists. Here is my proposal. I suggest authors must submit for review, and scientific societies be obliged to publish, two versions of every journal. One would be the standard journal in scientific English for their scientific club. The second would be a parallel open-access summary translation into plain English of the relevance and significance of each paper for everyone else. A translation that educated citizens, businesses and law-makers can understand. Remember that they are funding this research, and some really want to understand what is happening to the Earth.
  • A short essay in the Bulletin of the American Meteorological Society, entitled “A Proposal for Communicating Science,” caught my attention today. Written by atmospheric scientist Alan Betts, it advocates that technical journal articles related to Earth science be complemented by a mandatory non-technical version for the lay public. What a refreshing idea!
Jody Poh

Subtitles, Lip Synching and Covers on YouTube - 13 views

I think that companies' concern over this issue, due to the loss of potential income, constitutes egoism. They mainly want to defend their interests without considering the beneficial impact of the ...

copyright youtube parody

Weiye Loh

Skepticblog » Investing in Basic Science - 0 views

  • A recent editorial in the New York Times by Nicholas Wade raises some interesting points about the nature of basic science research – primarily that it's risky.
  • As I have pointed out about the medical literature, researcher John Ioannidis has explained why most published studies turn out in retrospect to be wrong. The same is true of most basic science research – and the underlying reason is the same. The world is complex, and most of our guesses about how it might work turn out to be either flat-out wrong, incomplete, or superficial. And so most of our probing and prodding of the natural world, looking for the path to the actual answer, turns out to miss the target.
  • research costs considerable resources of time, space, money, opportunity, and people-hours. There may also be some risk involved (such as to subjects in a clinical trial). Further, negative studies are actually valuable (more so than terrible pictures). They still teach us something about the world – they teach us what is not true. At the very least this narrows the field of possibilities. But the analogy holds in so far as the goal of scientific research is to improve our understanding of the world and to provide practical applications that make our lives better. Wade writes mostly about how we fund research, and this relates to our objectives. Most of the corporate research money is interested in the latter – practical (and profitable) applications. If this is your goal, then basic science research is a bad bet. Most investments will be losers, and for most companies this will not be offset by the big payoffs of the rare winners. So many companies will allow others to do the basic science (government, universities, start-up companies), then raid the winners by using their resources to buy them out, and then bring them the final steps to a marketable application. There is nothing wrong or unethical about this. It's a good business model.
  • What, then, is the role of public (government) funding of research? Primarily, Wade argues (and I agree), to provide infrastructure for expensive research programs, such as building large colliders.
  • the more the government invests in basic science and infrastructure, the more winners will emerge that private industry can then capitalize on. This is a good way to build a competitive dynamic economy.
  • But there is a pitfall – prematurely picking winners and losers. Wade gives the example of California investing specifically in developing stem cell treatments. He argues that stem cells, while promising, do not hold a guarantee of eventual success, and perhaps there are other technologies that will work and are being neglected. The history of science and technology has clearly demonstrated that it is wickedly difficult to predict the future (and all those who try are destined to be mocked by future generations with the benefit of perfect hindsight). Prematurely committing to one technology therefore carries a high risk of wasting a great deal of limited resources, and missing other, perhaps more fruitful, opportunities.
  • The underlying concept is that science research is a long-term game. Many avenues of research will not pan out, and those that do will take time to inspire specific applications. The media, however, likes catchy headlines. That means when they are reporting on basic science research journalists ask themselves – why should people care? What is the application of this that the average person can relate to? This seems reasonable from a journalistic point of view, but with basic science reporting it leads to wild speculation about a distant possible future application. The public is then left with the impression that we are on the verge of curing the common cold or cancer, or developing invisibility cloaks or flying cars, or replacing organs and having household robot servants. When a few years go by and we don’t have our personal android butlers, the public then thinks that the basic science was a bust, when in fact there was never a reasonable expectation that it would lead to a specific application anytime soon. But it still may be on track for interesting applications in a decade or two.
  • this also means that the government, generally, should not be in the game of picking winners and losers – putting their thumb on the scale, as it were. Rather, they will get the most bang for the research buck if they simply invest in science infrastructure, and also fund scientists in broad areas.
  • The same is true of technology – don’t pick winners and losers. The much-hyped “hydrogen economy” comes to mind. Let industry and the free market sort out what will work. If you have to invest in infrastructure before a technology is mature, then at least hedge your bets and keep funding flexible. Fund “alternative fuel” as a general category, and reassess on a regular basis how funds should be allocated. But don’t get too specific.
  • Funding research but leaving the details to scientists may be optimal.
  • The scientific community can do their part by getting better at communicating with the media and the public. Try to avoid the temptation to overhype your own research, just because it is the most interesting thing in the world to you personally and you feel hype will help your funding. Don't make it easy for the media to sensationalize your research – you should be the ones trying to hold back the reins. Perhaps this is too much to hope for – market forces conspire too much to promote sensationalism.
Weiye Loh

Roger Pielke Jr.'s Blog: Wanted: Less Spin, More Informed Debate - 0 views

  • The rejection of proposals that suggest starting with a low carbon price is thus a pretty good guarantee against any carbon pricing at all.  It is rather remarkable to see advocates for climate action arguing against a policy that recommends implementing a carbon price, simply because it does not start high enough for their tastes.  For some, idealism trumps pragmatism, even if it means no action at all.
  • Ward writes: . . . climate change is the result of a number of market failures, the largest of which arises from the fact that the prices of products and services involving emissions of greenhouse gases do not reflect the true costs of the damage caused through impacts on the climate. . . All serious economic analyses of how to tackle climate change identify the need to correct this market failure through a carbon price, which can be implemented, for instance, through cap and trade schemes or carbon taxes. . . A carbon price can be usefully supplemented by improvements in innovation policies, but it needs to be at the core of action on climate change, as this paper by Carolyn Fischer and Richard Newell points out.
  • First, the criticism is off target. A low and rising carbon price is in fact a central element to the policy recommendations advanced by the Hartwell Group in Climate Pragmatism, the Hartwell Paper, and as well, in my book The Climate Fix.  In Climate Pragmatism, we approvingly cite Japan's low-but-rising fossil fuels tax and discuss a range of possible fees or taxes on fossil fuels, implemented, not to penalize energy use or price fossil fuels out of the market, but rather to ensure that as we benefit from today’s energy resources we are setting aside the funds necessary to accelerate energy innovation and secure the nation’s energy future.
  • Here is another debating lesson -- before engaging in public, one should read not only the materials one is critiquing but also the materials one cites in support of one's own arguments. This is not the first time that Bob Ward has put out misleading information related to my work. Ever since we debated in public at the Royal Institution, Bob has adopted guerrilla tactics, lobbing nonsense into the public arena and then hiding when challenged to support or defend his views. As readers here know, I am all for open and respectful debate over these important topics. Why is it that, instead, all we get is poorly informed misdirection and spin? Despite the attempts at spin, I'd welcome Bob's informed engagement on this topic. Perhaps he might start by explaining which of the 10 statements that I put up on the mathematics and logic underlying climate pragmatism is incorrect.
  • In comments to another blog, I've identified Bob as a PR flack. I see no reason to change that assessment. In fact, his actions only confirm it. Where does he fit into a scientific debate?
  • Thanks for the comment, but I'll take the other side ;-) First, this is a policy debate that involves various scientific, economic, political analyses coupled with various values commitments including monied interests -- and as such, PR guys are as welcome as anyone else. That said, the problem here is not that Ward is a PR guy, but that he is trying to make his case via spin and misrepresentation. That gets noticed pretty quickly by anyone paying attention and is easily shot down.
Weiye Loh

Spinning the News of the World Scandal at Fox News » Sociological Images - 0 views

  • So how does Fox report on this scandal? Rob Beschizza, writing for BoingBoing, highlighted a segment on Fox News in which the host and guest agree that "hacking scandals" are a "serious… problem" and imply that, in this instance, News of the World was the victim, not the perpetrator. More, the guest "expert" is not a politician, scholar, or even a pundit, he's actually a public relations professional who specializes in spinning scandals to obviate the negative consequences for corporations. Says James Fallows at The Atlantic: He is Robert Dilenschneider, former head of Hill and Knowlton and now head of the Dilenschneider Group, who recently was featured in an interview, "How to Manage a PR Disaster."
Weiye Loh

The Inequality That Matters - Tyler Cowen - The American Interest Magazine - 0 views

  • most of the worries about income inequality are bogus, but some are probably better grounded, and even more serious, than many of their heralds realize.
  • In terms of immediate political stability, there is less to the income inequality issue than meets the eye. Most analyses of income inequality neglect two major points. First, the inequality of personal well-being is sharply down over the past hundred years and perhaps over the past twenty years as well. Bill Gates is much, much richer than I am, yet it is not obvious that he is much happier if, indeed, he is happier at all. I have access to penicillin, air travel, good cheap food, the Internet and virtually all of the technical innovations that Gates does. Like the vast majority of Americans, I have access to some important new pharmaceuticals, such as statins to protect against heart disease. To be sure, Gates receives the very best care from the world’s top doctors, but our health outcomes are in the same ballpark. I don’t have a private jet or take luxury vacations, and—I think it is fair to say—my house is much smaller than his. I can’t meet with the world’s elite on demand. Still, by broad historical standards, what I share with Bill Gates is far more significant than what I don’t share with him.
  • when average people read about or see income inequality, they don’t feel the moral outrage that radiates from the more passionate egalitarian quarters of society. Instead, they think their lives are pretty good and that they either earned through hard work or lucked into a healthy share of the American dream.
  • This is why, for example, large numbers of Americans oppose the idea of an estate tax even though the current form of the tax, slated to return in 2011, is very unlikely to affect them or their estates. In narrowly self-interested terms, that view may be irrational, but most Americans are unwilling to frame national issues in terms of rich versus poor. There’s a great deal of hostility toward various government bailouts, but the idea of “undeserving” recipients is the key factor in those feelings. Resentment against Wall Street gamesters hasn’t spilled over much into resentment against the wealthy more generally. The bailout for General Motors’ labor unions wasn’t so popular either—again, obviously not because of any bias against the wealthy but because a basic sense of fairness was violated. As of November 2010, congressional Democrats are of a mixed mind as to whether the Bush tax cuts should expire for those whose annual income exceeds $250,000; that is in large part because their constituents bear no animus toward rich people, only toward undeservedly rich people.
  • envy is usually local. At least in the United States, most economic resentment is not directed toward billionaires or high-roller financiers—not even corrupt ones. It’s directed at the guy down the hall who got a bigger raise. It’s directed at the husband of your wife’s sister, because the brand of beer he stocks costs $3 a case more than yours, and so on. That’s another reason why a lot of people aren’t so bothered by income or wealth inequality at the macro level. Most of us don’t compare ourselves to billionaires. Gore Vidal put it honestly: “Whenever a friend succeeds, a little something in me dies.”
  • Occasionally the cynic in me wonders why so many relatively well-off intellectuals lead the egalitarian charge against the privileges of the wealthy. One group has the status currency of money and the other has the status currency of intellect, so might they be competing for overall social regard? The high status of the wealthy in America, or for that matter the high status of celebrities, seems to bother our intellectual class most. That class composes a very small group, however, so the upshot is that growing income inequality won’t necessarily have major political implications at the macro level.
  • All that said, income inequality does matter—for both politics and the economy.
  • The numbers are clear: Income inequality has been rising in the United States, especially at the very top. The data show a big difference between two quite separate issues, namely income growth at the very top of the distribution and greater inequality throughout the distribution. The first trend is much more pronounced than the second, although the two are often confused.
  • When it comes to the first trend, the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.1
  • At the same time, wage growth for the median earner has slowed since 1973. But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.
  • The broader change in income distribution, the one occurring beneath the very top earners, can be deconstructed in a manner that makes nearly all of it look harmless. For instance, there is usually greater inequality of income among both older people and the more highly educated, if only because there is more time and more room for fortunes to vary. Since America is becoming both older and more highly educated, our measured income inequality will increase pretty much by demographic fiat. Economist Thomas Lemieux at the University of British Columbia estimates that these demographic effects explain three-quarters of the observed rise in income inequality for men, and even more for women.2
  • Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years.3 Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all. Robert J. Gordon, an economist from Northwestern University who is hardly known as a right-wing apologist, wrote in a recent paper that “there was no increase of inequality after 1993 in the bottom 99 percent of the population”, and that whatever overall change there was “can be entirely explained by the behavior of income in the top 1 percent.”4
  • And so we come again to the gains of the top earners, clearly the big story told by the data. It’s worth noting that over this same period of time, inequality of work hours increased too. The top earners worked a lot more and most other Americans worked somewhat less. That’s another reason why high earners don’t occasion more resentment: Many people understand how hard they have to work to get there. It also seems that most of the income gains of the top earners were related to performance pay—bonuses, in other words—and not wildly out-of-whack yearly salaries.5
  • It is also the case that any society with a lot of “threshold earners” is likely to experience growing income inequality. A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.
  • The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.
  • Why is the top 1 percent doing so well?
  • Steven N. Kaplan and Joshua Rauh have recently provided a detailed estimation of particular American incomes.6 Their data do not comprise the entire U.S. population, but from partial financial records they find a very strong role for the financial sector in driving the trend toward income concentration at the top. For instance, for 2004, nonfinancial executives of publicly traded companies accounted for less than 6 percent of the top 0.01 percent income bracket. In that same year, the top 25 hedge fund managers combined appear to have earned more than all of the CEOs from the entire S&P 500. The number of Wall Street investors earning more than $100 million a year was nine times higher than the public company executives earning that amount. The authors also relate that they shared their estimates with a former U.S. Secretary of the Treasury, one who also has a Wall Street background. He thought their estimates of earnings in the financial sector were, if anything, understated.
  • Many of the other high earners are also connected to finance. After Wall Street, Kaplan and Rauh identify the legal sector as a contributor to the growing spread in earnings at the top. Yet many high-earning lawyers are doing financial deals, so a lot of the income generated through legal activity is rooted in finance. Other lawyers are defending corporations against lawsuits, filing lawsuits or helping corporations deal with complex regulations. The returns to these activities are an artifact of the growing complexity of the law and government growth rather than a tale of markets per se. Finance aside, there isn’t much of a story of market failure here, even if we don’t find the results aesthetically appealing.
  • When it comes to professional athletes and celebrities, there isn’t much of a mystery as to what has happened. Tiger Woods earns much more, even adjusting for inflation, than Arnold Palmer ever did. J.K. Rowling, the first billionaire author, earns much more than did Charles Dickens. These high incomes come, on balance, from the greater reach of modern communications and marketing. Kids all over the world read about Harry Potter. There is more purchasing power to spend on children’s books and, indeed, on culture and celebrities more generally. For high-earning celebrities, hardly anyone finds these earnings so morally objectionable as to suggest that they be politically actionable. Cultural critics can complain that good schoolteachers earn too little, and they may be right, but that does not make celebrities into political targets. They’re too popular. It’s also pretty clear that most of them work hard to earn their money, by persuading fans to buy or otherwise support their product. Most of these individuals do not come from elite or extremely privileged backgrounds, either. They worked their way to the top, and even if Rowling is not an author for the ages, her books tapped into the spirit of their time in a special way. We may or may not wish to tax the wealthy, including wealthy celebrities, at higher rates, but there is no need to “cure” the structural causes of higher celebrity incomes.
  • to be sure, the high incomes in finance should give us all pause.
  • The first factor driving high returns is sometimes called by practitioners “going short on volatility.” Sometimes it is called “negative skewness.” In plain English, this means that some investors opt for a strategy of betting against big, unexpected moves in market prices. Most of the time investors will do well by this strategy, since big, unexpected moves are outliers by definition. Traders will earn above-average returns in good times. In bad times they won’t suffer fully when catastrophic returns come in, as sooner or later is bound to happen, because the downside of these bets is partly socialized onto the Treasury, the Federal Reserve and, of course, the taxpayers and the unemployed.
  • if you bet against unlikely events, most of the time you will look smart and have the money to validate the appearance. Periodically, however, you will look very bad. Does that kind of pattern sound familiar? It happens in finance, too. Betting against a big decline in home prices is analogous to betting against a long shot like the Wizards winning a championship. Every now and then such a bet will blow up in your face, though in most years that trading activity will generate above-average profits and big bonuses for the traders and CEOs.
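A stylized Monte Carlo version of that bet, with numbers invented for illustration rather than taken from any real market, shows why the strategy looks brilliant for most of a career even when its expected value is slightly negative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented payoff: collect a 5% premium in a normal year; lose everything in a
# crash year, which arrives with 5% probability. Not calibrated to real markets.
premium, p_crash = 0.05, 0.05
years, n_traders = 10, 100_000

crash = rng.random((n_traders, years)) < p_crash
yearly = np.where(crash, -1.0, premium)
wealth = (1.0 + yearly).prod(axis=1)

expected = p_crash * -1.0 + (1 - p_crash) * premium
print(f"expected one-year return: {expected:+.4f}")                          # -0.0025
print(f"ten-year careers with no crash: {(~crash).all(axis=1).mean():.0%}")  # ~60%
print(f"median terminal wealth multiple: {np.median(wealth):.2f}")           # ~1.63
```

Under these made-up numbers, roughly three in five ten-year careers never see the bad year, and those traders compound steady bonuses; the losses, when they finally land, are borne partly by others.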
  • To this mix we can add the fact that many money managers are investing other people’s money. If you plan to stay with an investment bank for ten years or less, most of the people playing this investing strategy will make out very well most of the time. Everyone’s time horizon is a bit limited and you will bring in some nice years of extra returns and reap nice bonuses. And let’s say the whole thing does blow up in your face? What’s the worst that can happen? Your bosses fire you, but you will still have millions in the bank and that MBA from Harvard or Wharton. For the people actually investing the money, there’s barely any downside risk other than having to quit the party early. Furthermore, if everyone else made more or less the same mistake (very surprising major events, such as a busted housing market, affect virtually everybody), you’re hardly disgraced. You might even get rehired at another investment bank, or maybe a hedge fund, within months or even weeks.
  • Moreover, smart shareholders will acquiesce to or even encourage these gambles. They gain on the upside, while the downside, past the point of bankruptcy, is borne by the firm’s creditors. And will the bondholders object? Well, they might have a difficult time monitoring the internal trading operations of financial institutions. Of course, the firm’s trading book cannot be open to competitors, and that means it cannot be open to bondholders (or even most shareholders) either. So what, exactly, will they have in hand to object to?
  • Perhaps more important, government bailouts minimize the damage to creditors on the downside. Neither the Treasury nor the Fed allowed creditors to take any losses from the collapse of the major banks during the financial crisis. The U.S. government guaranteed these loans, either explicitly or implicitly. Guaranteeing the debt also encourages equity holders to take more risk. While current bailouts have not in general maintained equity values, and while share prices have often fallen to near zero following the bust of a major bank, the bailouts still give the bank a lifeline. Instead of the bank being destroyed, sometimes those equity prices do climb back out of the hole. This is true of the major surviving banks in the United States, and even AIG is paying back its bailout. For better or worse, we’re handing out free options on recovery, and that encourages banks to take more risk in the first place.
  • there is an unholy dynamic of short-term trading and investing, backed up by bailouts and risk reduction from the government and the Federal Reserve. This is not good. “Going short on volatility” is a dangerous strategy from a social point of view. For one thing, in so-called normal times, the finance sector attracts a big chunk of the smartest, most hard-working and most talented individuals. That represents a huge human capital opportunity cost to society and the economy at large. But more immediate and more important, it means that banks take far too many risks and go way out on a limb, often in correlated fashion. When their bets turn sour, as they did in 2007–09, everyone else pays the price.
  • And it’s not just the taxpayer cost of the bailout that stings. The financial disruption ends up throwing a lot of people out of work down the economic food chain, often for long periods. Furthermore, the Federal Reserve System has recapitalized major U.S. banks by paying interest on bank reserves and by keeping an unusually high interest rate spread, which allows banks to borrow short from Treasury at near-zero rates and invest in other higher-yielding assets and earn back lots of money rather quickly. In essence, we’re allowing banks to earn their way back by arbitraging interest rate spreads against the U.S. government. This is rarely called a bailout and it doesn’t count as a normal budget item, but it is a bailout nonetheless. This type of implicit bailout brings high social costs by slowing down economic recovery (the interest rate spreads require tight monetary policy) and by redistributing income from the Treasury to the major banks.
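The scale of that implicit bailout is easy to see with toy numbers; the figures below are hypothetical, not any actual bank's balance sheet.

```python
# Hypothetical spread arithmetic: borrow short at near-zero, hold higher-yielding assets
short_rate = 0.0025           # assumed near-zero short-term borrowing cost
asset_yield = 0.035           # assumed yield on longer-dated holdings
balance = 1_000_000_000_000   # an assumed $1 trillion of borrowed funds

annual_carry = balance * (asset_yield - short_rate)
print(f"annual carry on the spread: ${annual_carry / 1e9:.1f} billion")  # $32.5 billion a year
```

At that spread, a trillion-dollar book earns back tens of billions a year with little effort, which is the sense in which the recapitalization is a bailout by another name.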
  • the “going short on volatility” strategy increases income inequality. In normal years the financial sector is flush with cash and high earnings. In implosion years a lot of the losses are borne by other sectors of society. In other words, financial crisis begets income inequality. Despite being conceptually distinct phenomena, the political economy of income inequality is, in part, the political economy of finance. Simon Johnson tabulates the numbers nicely: From 1973 to 1985, the financial sector never earned more than 16 percent of domestic corporate profits. In 1986, that figure reached 19 percent. In the 1990s, it oscillated between 21 percent and 30 percent, higher than it had ever been in the postwar period. This decade, it reached 41 percent. Pay rose just as dramatically. From 1948 to 1982, average compensation in the financial sector ranged between 99 percent and 108 percent of the average for all domestic private industries. From 1983, it shot upward, reaching 181 percent in 2007.7
  • There’s a second reason why the financial sector abets income inequality: the “moving first” issue. Let’s say that some news hits the market and that traders interpret this news at different speeds. One trader figures out what the news means in a second, while the other traders require five seconds. Still other traders require an entire day or maybe even a month to figure things out. The early traders earn the extra money. They buy the proper assets early, at the lower prices, and reap most of the gains when the other, later traders pile on. Similarly, if you buy into a successful tech company in the early stages, you are “moving first” in a very effective manner, and you will capture most of the gains if that company hits it big.
  • The moving-first phenomenon sums to a “winner-take-all” market. Only some relatively small number of traders, sometimes just one trader, can be first. Those who are first will make far more than those who are fourth or fifth. This difference will persist, even if those who are fourth come pretty close to competing with those who are first. In this context, first is first and it doesn’t matter much whether those who come in fourth pile on a month, a minute or a fraction of a second later. Those who bought (or sold, as the case may be) first have captured and locked in most of the available gains. Since gains are concentrated among the early winners, and the closeness of the runner-ups doesn’t so much matter for income distribution, asset-market trading thus encourages the ongoing concentration of wealth. Many investors make lots of mistakes and lose their money, but each year brings a new bunch of projects that can turn the early investors and traders into very wealthy individuals.
  • These two features of the problem—“going short on volatility” and “getting there first”—are related. Let’s say that Goldman Sachs regularly secures a lot of the best and quickest trades, whether because of its quality analysis, inside connections or high-frequency trading apparatus (it has all three). It builds up a treasure chest of profits and continues to hire very sharp traders and to receive valuable information. Those profits allow it to make “short on volatility” bets faster than anyone else, because if it messes up, it still has a large enough buffer to pad losses. This increases the odds that Goldman will repeatedly pull in spectacular profits.
  • Still, every now and then Goldman will go bust, or would go bust if not for government bailouts. But the odds are in any given year that it won’t because of the advantages it and other big banks have. It’s as if the major banks have tapped a hole in the social till and they are drinking from it with a straw. In any given year, this practice may seem tolerable—didn’t the bank earn the money fair and square by a series of fairly normal looking trades? Yet over time this situation will corrode productivity, because what the banks do bears almost no resemblance to a process of getting capital into the hands of those who can make most efficient use of it. And it leads to periodic financial explosions. That, in short, is the real problem of income inequality we face today. It’s what causes the inequality at the very top of the earning pyramid that has dangerous implications for the economy as a whole.
  • What about controlling bank risk-taking directly with tight government oversight? That is not practical. There are more ways for banks to take risks than even knowledgeable regulators can possibly control; it just isn’t that easy to oversee a balance sheet with hundreds of billions of dollars on it, especially when short-term positions are wound down before quarterly inspections. It’s also not clear how well regulators can identify risky assets. Some of the worst excesses of the financial crisis were grounded in mortgage-backed assets—a very traditional function of banks—not exotic derivatives trading strategies. Virtually any asset position can be used to bet long odds, one way or another. It is naive to think that underpaid, undertrained regulators can keep up with financial traders, especially when the latter stand to earn billions by circumventing the intent of regulations while remaining within the letter of the law.
  • For the time being, we need to accept the possibility that the financial sector has learned how to game the American (and UK-based) system of state capitalism. It’s no longer obvious that the system is stable at a macro level, and extreme income inequality at the top has been one result of that imbalance. Income inequality is a symptom, however, rather than a cause of the real problem. The root cause of income inequality, viewed in the most general terms, is extreme human ingenuity, albeit of a perverse kind. That is why it is so hard to control.
  • Another root cause of growing inequality is that the modern world, by so limiting our downside risk, makes extreme risk-taking all too comfortable and easy. More risk-taking will mean more inequality, sooner or later, because winners always emerge from risk-taking. Yet bankers who take bad risks (provided those risks are legal) simply do not end up with bad outcomes in any absolute sense. They still have millions in the bank, lots of human capital and plenty of social status. We’re not going to bring back torture, trial by ordeal or debtors’ prisons, nor should we. Yet the threat of impoverishment and disgrace no longer looms the way it once did, so we no longer can constrain excess financial risk-taking. It’s too soft and cushy a world.
  • Why don’t we simply eliminate the safety net for clueless or unlucky risk-takers so that losses equal gains overall? That’s a good idea in principle, but it is hard to put into practice. Once a financial crisis arrives, politicians will seek to limit the damage, and that means they will bail out major financial institutions. Had we not passed TARP and related policies, the United States probably would have faced unemployment rates of 25 percent or higher, as in the Great Depression. The political consequences would not have been pretty. Bank bailouts may sound quite interventionist, and indeed they are, but in relative terms they probably were the most libertarian policy we had on tap. It meant big one-time expenses, but, for the most part, it kept government out of the real economy (the General Motors bailout aside).
  • We probably don’t have any solution to the hazards created by our financial sector, not because plutocrats are preventing our political system from adopting appropriate remedies, but because we don’t know what those remedies are. Yet neither is another crisis immediately upon us. The underlying dynamic favors excess risk-taking, but banks at the current moment fear the scrutiny of regulators and the public and so are playing it fairly safe. They are sitting on money rather than lending it out. The biggest risk today is how few parties will take risks, and, in part, the caution of banks is driving our current protracted economic slowdown. According to this view, the long run will bring another financial crisis once moods pick up and external scrutiny weakens, but that day of reckoning is still some ways off.
  • Is the overall picture a shame? Yes. Is it distorting resource distribution and productivity in the meantime? Yes. Will it again bring our economy to its knees? Probably. Maybe that’s simply the price of modern society. Income inequality will likely continue to rise and we will search in vain for the appropriate political remedies for our underlying problems.
Weiye Loh

Measuring the Unmeasurable (Internet) and Why It Matters « Gurstein's Communi... - 0 views

  • it appears that there is a quite significant hole in the National Accounting (and thus the GDP statistics) around Internet related activities since most of this accounting is concerned with measuring the production and distribution of tangible products and the associated services. For the most part the available numbers don’t include many Internet (or “social capital” e.g. in health and education) related activities as they are linked to intangible outputs. The significance of not including social capital components in the GDP has been widely discussed elsewhere. The significance (and potential remediation) of the absence of much of the Internet related activities was the subject of the workshop.
  • there had been a series of critiques of GDP statistics from Civil Society (CS) over the last few years—each associated with a CS “movement”—the Women’s Movement and the absence of measurement of “women’s (and particularly domestic) work”; the Environmental Movement and the absence of the longer term and environmental costs of the production of the goods that the GDP so blithely counts as a measure of national economic well-being; and most recently the Sustainability Movement, and the absence of measures reflective of the longer term negative effects/costs of resource depletion and environmental degradation. What I didn’t see anywhere apart from the background discussions to the OECD workshop itself were critiques reflecting issues related to the Internet or ICTs.
  • the implications of the limitations in the Internet accounting went beyond a simple technical glitch and had potentially quite profound implications from a national policy and particularly a CS and community based development perspective. The possible distortions in economic measurement arising from the absence of Internet associated numbers in the SNA (there may be some $750 BILLION a year in “value’ being generated by Internet based search alone!) lead to the very real possibility that macro-economic analysis and related policy making may be operating on the basis of inadequate and even fallacious assumptions.
  • perhaps of greatest significance from the perspective of Civil Society and of communities is the overall absence of measurement and thus inclusion in the economic accounting of the value of the contributions provided to, through and on the Internet of various voluntary and not-for-profit initiatives and activities. Thus for example, the millions of hours of labour contributed to Wikipedia, or to the development of Free or Open Source software, or to providing support for public Internet access and training is not included as a net contribution or benefit to the economy (as measured through the GDP). Rather, this is measured as a negative effect since, as some would argue, those who are making this contribution could be using their time and talents in more “productive” (and “economically measurable”) activities. Thus for example, a region or country that chooses to go with free or open source software as the basis for its in-school computing is not only “not contributing to ‘economic well being’” it is “statistically” a “cost” to the economy since it is not allowing for expenditures on, for example, suites of Microsoft products.
  • there appears to have been no systematic attention paid to the relationship between the activities and growth of voluntary contributions to the Internet and the volume, range and depth of Internet activity, digital literacy and economic value being derived from the use of the Internet.
Weiye Loh

Adventures in Flay-land: Dealing with Denialists - Delingpole Part III - 0 views

  • This post is about how one should deal with a denialist of Delingpole's ilk.
  • I saw someone I follow on Twitter retweet an update from another Twitter user called @AGW_IS_A_HOAX, which was this: "NZ #Climate Scientists Admit Faking Temperatures http://bit.ly/fHbdPI RT @admrich #AGW #Climategate #Cop16 #ClimateChange #GlobalWarming".
  • So I click on it. And this is how you deal with a denialist claim. You actually look into it. Here is the text of that article reproduced in full: New Zealand Climate Scientists Admit To Faking Temperatures: The Actual Temps Show Little Warming Over Last 50 Years. Read here and here. Climate "scientists" across the world have been blatantly fabricating temperatures in hopes of convincing the public and politicians that modern global warming is unprecedented and accelerating. The scientists doing the fabrication are usually employed by the government agencies or universities, which thrive and exist on taxpayer research dollars dedicated to global warming research. A classic example of this is the New Zealand climate agency, which is now admitting their scientists produced bogus "warming" temperatures for New Zealand. "NIWA makes the huge admission that New Zealand has experienced hardly any warming during the last half-century. For all their talk about warming, for all their rushed invention of the “Eleven-Station Series” to prove warming, this new series shows that no warming has occurred here since about 1960. Almost all the warming took place from 1940-60, when the IPCC says that the effect of CO2 concentrations was trivial. Indeed, global temperatures were falling during that period.....Almost all of the 34 adjustments made by Dr Jim Salinger to the 7SS have been abandoned, along with his version of the comparative station methodology." A collection of temperature-fabrication charts.
  • ...10 more annotations...
  • I check out the first link, the first "here" where the article says "Read here and here". I can see that there's been some sort of dispute between two New Zealand groups associated with climate change. One is New Zealand’s Climate Science Coalition (NZCSC) and the other is New Zealand’s National Institute of Water and Atmospheric Research (NIWA), but it doesn't tell me a whole lot more than I already got from the other article.
  • I check the second source behind that article. The second article, I now realize, is published on the website of a person called Andrew Montford with whom I've been speaking recently and who is the author of a book titled The Hockey Stick Illusion. I would not label Andrew a denialist. He makes some good points and seems to be a decent guy and genuine sceptic (This is not to suggest all denialists are outwardly dishonest; however, they do tend to be hard to reason with). Again, this article doesn't give me anything that I haven't already seen, except a link to another background source. I go there.
  • From this piece written up on Scoop NZNEWSUK I discover that a coalition group consisting of the NZCSC and the Climate Conversation Group (CCG) has pressured the NIWA into abandoning a set of temperature record adjustments whose validity the coalition disputes. This was the culmination of a court proceeding in December 2010, last month. In dispute were 34 adjustments that had been made by Dr Jim Salinger to the 7SS temperature series, though I don't know what that is exactly. I also discover that there is a guy called Richard Treadgold, Convenor of the CCG, who is quoted several times. Some of the statements he makes are quoted in the articles I've already seen. They are of a somewhat snide tenor. The CSC object to the methodology used by the NIWA to adjust temperature measurements (one developed as part of a PhD thesis), which they critiqued in a November 2009 paper titled "Are we feeling warmer yet?", and are concerned about how this public agency is spending its money. I'm going to have to dig a bit deeper if I want to find out more. There is a section with links under the heading "Related Stories on Scoop". I click on a few of those.
  • One of these leads me to more. Of particular interest is a fairly neutral article outlining the progress of the court action. I get some more background: For the last ten years, visitors to NIWA’s official website have been greeted by a graph of the “seven-station series” (7SS), under the bold heading “New Zealand Temperature Record”. The graph covers the period from 1853 to the present, and is adorned by a prominent trend-line sloping sharply upwards. Accompanying text informs the world that “New Zealand has experienced a warming trend of approximately 0.9°C over the past 100 years.” The 7SS has been updated and used in every monthly issue of NIWA’s “Climate Digest” since January 1993. Its 0.9°C (sometimes 1.0°C) of warming has appeared in the Australia/NZ Chapter of the IPCC’s 2001 and 2007 Assessment Reports. It has been offered as sworn evidence in countless tribunals and judicial enquiries, and provides the historical base for all of NIWA’s reports to both Central and Local Governments on climate science issues and future projections.
  • now I can see why this is so important. The temperature record informs the conclusions of the IPCC assessment reports and provides crucial evidence for global warming.
  • Further down we get: NIWA announces that it has now completed a full internal examination of the Salinger adjustments in the 7SS, and has forwarded its “review papers” to its Australian counterpart, the Bureau of Meteorology (BOM) for peer review. And: So the old 7SS has already been repudiated. A replacement NZTR [New Zealand Temperature Record] is being prepared by NIWA – presumably the best effort they are capable of producing. NZCSC is about to receive what it asked for. On the face of it, there’s nothing much left for the Court to adjudicate.
  • NIWA has been forced to withdraw its earlier temperature record and replace it with a new one. Treadgold quite clearly states that "NIWA makes the huge admission that New Zealand has experienced hardly any warming during the last half-century" and that "the new temperature record shows no evidence of a connection with global warming." Earlier in the article he also stresses the role of the CSC in achieving these revisions, saying "after 12 months of futile attempts to persuade the public, misleading answers to questions in the Parliament from ACT and reluctant but gradual capitulation from NIWA, their relentless defence of the old temperature series has simply evaporated. They’ve finally given in, but without our efforts the faulty graph would still be there."
  • All this leads me to believe that if I look at the website of NIWA I will see a retraction of the earlier position and a new position that New Zealand has experienced no unusual warming. This is easy enough to check. I go there. Actually, I search for it to find the exact page. Here is the 7SS page on the NIWA site. Am I surprised that NIWA have retracted nothing and that in fact their revised graph shows similar results? Not really. However, I am somewhat surprised by this page on the Climate Conversation Group website which claims that the 7SS temperature record is as dead as the parrot in the Monty Python sketch. It says "On the eve of Christmas, when nobody was looking, NIWA declared that New Zealand had a new official temperature record (the NZT7) and whipped the 7SS off its website." However, I've already seen that this is not true. Perhaps there was once a 7SS graph and information about the temperature record on the site's homepage that can no longer be seen. I don't know. I can only speculate. I know that there is a section on the NIWA site about the 7SS temperature record that contains a number of graphs and figures and discusses recent revisions. It has been updated as recently as December 2010, last month. The NIWA page talks all about the 7SS series and has a heading that reads "Our new analysis confirms the warming trend".
  • The CCG page claims that the new NZT7 is not in fact a revision but rather a replacement. Although it results in a similar curve, the adjustments that were made are very different. Frankly, I can't see how that matters at the end of the day. Now, I don't really know whether I can believe that the NIWA analysis is true, but what I am in no doubt about whatsoever is that the statements made by Richard Treadgold that were quoted in so many places are at best misleading. The NIWA has not changed its position in the slightest. The assertion that the NIWA have admitted that New Zealand has not warmed much since 1960 is a politician's careful argument. Both analyses showed the same result (the toy trend sketch after this list illustrates how differently adjusted series can do exactly that). This is a fact that NIWA have not disputed; however, they still maintain a connection to global warming. A document explaining the revisions talks about why the warming has slowed after 1960: The unusually steep warming in the 1940-1960 period is paralleled by an unusually large increase in northerly flow* during this same period. On a longer timeframe, there has been a trend towards less northerly flow (more southerly) since about 1960. However, New Zealand temperatures have continued to increase over this time, albeit at a reduced rate compared with earlier in the 20th century. This is consistent with a warming of the whole region of the southwest Pacific within which New Zealand is situated.
  • Denialists have taken Treadgold's misleading mantra and spread it far and wide including on Twitter and fringe websites, but it is faulty as I've just demonstrated. Why do people do this? Perhaps they are hoping that others won't check the sources. Most people don't. I hope this serves as a lesson for why you always should.
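To see how two differently adjusted versions of the same record can still show essentially the same warming trend, here is a minimal sketch in Python using synthetic data. It does not reproduce NIWA's data or its actual adjustment methodology; the station record, break point and correction sizes are all invented for illustration:

```python
# Toy sketch with synthetic data: two different adjustment schemes applied to
# the same raw station record can still produce very similar century-scale
# trends. Nothing here reproduces NIWA's actual data or methods.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1910, 2011)
# Invented raw record: a gentle warming trend plus measurement noise.
raw = 12.0 + 0.009 * (years - years[0]) + rng.normal(0, 0.25, years.size)

# Scheme A: a step adjustment for a hypothetical 1950 site relocation.
adj_a = raw + np.where(years < 1950, 0.3, 0.0)
# Scheme B: a different, gradual correction over the same early period.
adj_b = raw + np.clip((1950 - years) * 0.0075, 0, None)

def trend_per_century(t, series):
    """Least-squares linear trend, expressed in degrees C per 100 years."""
    slope = np.polyfit(t, series, 1)[0]
    return slope * 100

for name, s in [("scheme A", adj_a), ("scheme B", adj_b)]:
    print(f"{name}: {trend_per_century(years, s):+.2f} C / century")
```

Both invented schemes lift the early decades by a few tenths of a degree, so the fitted trends come out very close to each other despite the adjustments being different in kind, which is the sense in which "both analyses showed the same result".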
Weiye Loh

Churnalism or news? How PRs have taken over the media | Media | The Guardian - 0 views

  • The website, churnalism.com, created by charity the Media Standards Trust, allows readers to paste press releases into a "churn engine". It then compares the text with a constantly updated database of more than 3m articles. The results, which give articles a "churn rating", show the percentage of any given article that has been reproduced from publicity material (a naive sketch of such a comparison appears after this list). The Guardian was given exclusive access to churnalism.com prior to launch. It revealed how all media organisations are at times simply republishing, verbatim, material sent to them by marketing companies and campaign groups.
  • Meanwhile, an independent film-maker, Chris Atkins, has revealed how he duped the BBC into running an entirely fictitious story about Downing Street's new cat to coincide with the site's launch.

    The director created a Facebook page in the name of a fictitious character, "Tim Sutcliffe", who claimed the cat – which came from Battersea Cats Home – had belonged to his aunt Margaret. The story appeared in the Daily Mail and Metro, before receiving a prominent slot on BBC Radio 5 Live.

    [Audio: BBC Radio 5 Live's Gaby Logan talks about the fictitious cat story]

    Atkins, who was not involved in creating churnalism.com, uses spoof stories to highlight the failure of journalists to corroborate stories. He was behind an infamous prank last year that led to the BBC running a news package on a hoax YouTube video purporting to show urban fox hunters.

  • The creation of churnalism.com is likely to unnerve overworked journalists and the press officers who feed them. "People don't realise how much churn they're being fed every day," said Martin Moore, director of the trust, which seeks to improve standards in news. "Hopefully this will be an eye-opener."
  • ...2 more annotations...
  • Interestingly, all media outlets appear particularly susceptible to PR material disseminated by supermarkets: the Mail appears to have a particular appetite for publicity from Asda and Tesco, while the Guardian favours Waitrose releases.
  • Moore said one unexpected discovery has been that the BBC news website appears particularly prone to churning publicity material. "Part of the reason is presumably because they feel a duty to put out so many government pronouncements," Moore said. "But the BBC also has a lot to produce in regions that the newspapers don't cover."
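The Media Standards Trust has not published the matching algorithm behind churnalism.com, but the general idea of a "churn rating" can be sketched as n-gram overlap between an article and a press release. Everything below — the 5-word shingle size and the sample texts — is a hypothetical illustration, not the site's actual method:

```python
# Naive sketch of a "churn rating": the share of an article's word n-grams
# that also appear verbatim in a press release. The real churnalism.com
# engine is not public; this only illustrates the text-overlap idea.
import re

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Split text into lowercase word tokens and return its set of n-grams."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def churn_rating(article: str, press_release: str, n: int = 5) -> float:
    """Percentage of the article's n-grams found verbatim in the release."""
    art, rel = ngrams(article, n), ngrams(press_release, n)
    return 100 * len(art & rel) / len(art) if art else 0.0

# Hypothetical release and article for illustration.
release = ("The new SuperWidget from Acme cuts household energy bills by "
           "up to 40 percent, the company announced today.")
article = ("Acme announced today that the new SuperWidget from Acme cuts "
           "household energy bills by up to 40 percent.")
print(f"churn rating: {churn_rating(article, release):.0f}%")
```

A production system would need fuzzier matching and an index over millions of articles, but even this naive overlap measure flags near-verbatim republication.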
Weiye Loh

Net-Neutrality: The First Amendment of the Internet | LSE Media Policy Project - 0 views

  • debates about the nature, the architecture and the governing principles of the internet are not merely technical or economic discussions.  Above all, these debates have deep political, social, and cultural implications and become a matter of public, national and global interest.
  • In many ways, net neutrality could be considered the first amendment of the internet; no pun intended here. However, just as with freedom of speech, the principle of net neutrality cannot be approached as absolute or treated as a fetish. Even in a democracy we cannot say everything, at any time, in every context. Limiting the core principle of freedom of speech in a democracy is only possible in very specific circumstances, such as harm or racism, or in view of the public interest. Along the same lines, compromising on the principle of net neutrality should only be for very specific and clearly defined reasons that are transparent, that serve public rather than private commercial interests, or that are implemented to guarantee an excellent quality of service for all.
  • One of the only really convincing arguments of those challenging net neutrality is that, due to the dramatic increases in streaming activity and data exchange through peer-to-peer networks, the overall quality of service risks being compromised if we stick to data being treated on a first-come, first-served basis. We are told that popular content will need to be stored closer to the consumer, which evidently comes at an extra cost.
  • ...5 more annotations...
  • Implicitly, two separate debates are being collapsed here, and I would argue that we need to separate them. The first relates to the stability of the internet as an information and communication infrastructure, given the way we collectively use that infrastructure. The second is whether ISPs and telecommunication companies should be allowed to differentiate in their pricing between different levels of quality of access, both towards consumers and content providers.
  • Just as with freedom of speech, circumstances can be found in which the principle, while still cherished and upheld, can be adapted and constrained to some extent. To paraphrase Tim Wu (2008), the aspiration should still be ‘to treat all content, sites, and platforms equally’, but maybe some forms of content should be treated more equally than others in order to guarantee an excellent quality of service for all. However, the societal and political implications of this need to be thought through in detail, and, as with freedom of speech itself, it will, I believe, require strict regulation and conditions.
  • In regard to the first debate, on internet stability, a case can be made for allowing internet operators to differentiate between different types of data with different needs – if for any reason the quality of service of the internet as a whole can no longer be guaranteed (the scheduling sketch after this list shows the simplest form such differentiation could take).
  • Concerning the second debate, on differential pricing, it is fair to say that from a public interest and civil liberty perspective the consolidation and institutionalisation of a commercially driven two-tiered internet is unacceptable and impossible to legitimate, as is allowing operators to privilege the quality of provision of certain kinds of content over others. A core principle such as net neutrality should never be relinquished for the sake of private interests and profit-making strategies – whether on behalf of industry or of others. If we must compromise on net neutrality, the compromise should always be partial, circumscribed and aimed only at improving the quality of service for all, not just for the few who can afford it.
  • Separating these two debates exposes the crux of the current net neutrality debate. In essence, we are being urged to give up the principle of net neutrality in order to guarantee a good quality of service. However, this argument is actually a pretext for the telecom industry to make content providers pay for the facilitation of access to their audiences – the internet subscribers. And this can in turn be linked to another debate being waged amongst content providers: how do we make internet users pay for the content they access online? I won’t open that can of worms here, but I will make my point clear. Telecommunication industry efforts to make content providers pay for access to their audiences do not offer legitimate reasons to suspend the first amendment of the internet.
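The "different types of data with different needs" argument is, at bottom, a scheduling question. The sketch below shows strict-priority queueing, the simplest way an operator could treat some traffic "more equally than others"; the traffic classes and packets are hypothetical and no real network stack is involved:

```python
# Minimal sketch of strict-priority packet scheduling -- the simplest form of
# treating some content "more equally than others". Traffic classes and
# packets are hypothetical; no real network stack is involved.
import heapq
import itertools

PRIORITY = {"voip": 0, "video": 1, "web": 2, "bulk": 3}  # lower = served first
_counter = itertools.count()  # FIFO tie-breaker within a traffic class

queue: list[tuple[int, int, str]] = []

def enqueue(packet: str, traffic_class: str) -> None:
    """Queue a packet under its class priority."""
    heapq.heappush(queue, (PRIORITY[traffic_class], next(_counter), packet))

def transmit_next() -> str:
    """Transmit the highest-priority (lowest-numbered) waiting packet."""
    _, _, packet = heapq.heappop(queue)
    return packet

# Packets arrive in one order...
for pkt, cls in [("p2p chunk", "bulk"), ("video frame", "video"),
                 ("voice sample", "voip"), ("web page", "web")]:
    enqueue(pkt, cls)

# ...but are transmitted by priority: voice, video, web, then bulk.
while queue:
    print(transmit_next())
```

A strictly neutral, first-come-first-served network would instead transmit in arrival order. Note that strict priority can starve the lowest class entirely, which is one reason the author argues that any departure from neutrality needs strict, transparent regulation.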