
New Media Ethics 2009 course: Group items tagged "Joke"


Weiye Loh

The Free Speech Blog: Official blog of Index on Censorship » A tale of two tw... - 0 views

  • Hopefully you will have heard of the ridiculous case of the unfortunate Paul Chambers, the man who now has a criminal record because of a jokey tweet made whilst frustrated with snow-related delays at Doncaster Robin Hood Airport. “Crap! Robin Hood airport is closed. You’ve got a week and a bit to get your shit together otherwise I’m blowing the airport sky high!” This was the offending tweet, a clearly flippant comment whose intent, or lack thereof, would have been pretty easy to establish.
  • At some stage yesterday Gareth Compton, a Tory councillor for Erdington in Birmingham, tweeted this: “Can someone please stone Yasmin Alibhai-Brown to death? I won’t tell Amnesty if you don’t. It would be a blessing, really.”
  • At any level, this is a thoroughly unpleasant tweet. First of all, nobody in any political position should be tweeting, or indeed telling, “jokes” in such flagrant bad taste. Secondly, I am always uncomfortable about a certain type of rightwing (and sometimes leftwing) commentator who gets disproportionately angry when the opponent they disagree with happens to be from a “minority” group. The tweet leaves a nasty taste, and Gareth Compton should think long and hard about his responsibilities as a councillor. But… It was clearly NOT an incitement to murder, in the same way that Paul Chambers was clearly NOT going to blow up Robin Hood Airport. It was a hideous misjudgement, yes, but there is an obvious jokiness to the context. A remarkably unpleasant jokiness, yes, but it is there nevertheless.
  • ...5 more annotations...
  • Mr Compton has been arrested and bailed for his words, supported by Yasmin Alibhai-Brown, who has described his tweet as an incitement to murder.
  • Yasmin Alibhai-Brown is a journalist I have admired over the years for her ability to get under the skin of both Islamic extremists and those who will never accept any form of multiculturalism.
  • Because of what she represents, every time she appears in the media she is the target of vituperative verbal attacks on her character, and she has been the recipient of numerous death threats. I can’t even begin to imagine what that’s like — I get upset by one bad review. But I would have thought this would have given her more insight into the difference between an actual death threat and a boorish rightwing councillor.
  • The context with Gareth Compton is that he is a Tory councillor trying his hand at Twitter. Having read his tweets thoroughly, it is clear that I don’t agree with most of his views. But nevertheless I think it is nonsense to claim that he is inciting murder. The irony is that all over the worldwide web, anonymous internet warriors are only too happy to incite hatred and murder, and surely this is where the appropriate resources should be directed.
  • A joke, however misjudged and offensive, is still a joke. The sledgehammer/walnut analogy can surely never have been more apt than when describing the use of police resources to act on a poor-taste tweet. I sincerely hope that this madness does not continue, as the precedent it sets is worrying indeed.
Weiye Loh

BBC News - Stephen Fry prison 'pledge' over 'Twitter joke' trial - 0 views

  • Chambers' case has become a cause célèbre on Twitter, with hundreds of people reposting his original comments in protest at the conviction.
  • Speaking generally about the internet and freedom of speech, Linehan told the audience: "We've got this incredible tool and we should fight any attempt to take it out of our hands."
  • The aim of the organisers is to ensure he is not forced to drop his case by the possibility of having to pay the prosecution's legal costs were he to lose.
  • ...2 more annotations...
  • everyone seemed united by a desire to protect freedom of speech, or at least the ability to recognise the difference between jokes and menacing terrorist threats.
  • "We should be able to have banter," he concluded. "We should be able to speak freely without the threat of legal coercion." Chambers - who now lives in Northern Ireland but lived in Balby, Doncaster, at the time - sent the message to his 600 followers in the early hours of 6 January 2010. He claimed it was in a moment of frustration after Robin Hood Airport in South Yorkshire was closed by snow. He was found guilty in May 2010 and fined £385 and told to pay £600 costs. His appeal is likely to go before the High Court later this year.
Weiye Loh

Google's in-house philosopher: Technologists need a "moral operating system" | VentureBeat - 0 views

  • technology-makers aren’t supposed to think about the morality of their products — they just build stuff and let other people worry about the ethics. But Horowitz pointed to the Manhattan Project, where physicists developed the nuclear bomb, as an obvious example where technologists should have thought carefully about the moral dimensions of their work. To put it another way, he argued that technology makers should be thinking as much about their “moral operating system” as their mobile operating system.
  • most of the evil in the world comes not from bad intentions, but rather from “not thinking.”
  • “Ethics is hard,” Horowitz said. “Ethics requires thinking.”
  • ...1 more annotation...
  • try to articulate how they decided what was right and wrong. “That’s the first step towards taking responsibility for what we should do with all of our power,” Horowitz said, later adding, “We have so much power today. It is up to us to figure out what to do.”
  • To illustrate how ethics gets short shrift in the tech world, Horowitz asked attendees whether they preferred the iPhone or Android. (When the majority voted for the iPhone, he joked that they were "suckers" who just chose the prettier device.) Then he asked whether it was a good idea to take data from an audience member's phone in order to provide various (and mostly beneficial) services, or whether he should be left alone; the majority of the audience voted to leave him alone. Finally, Horowitz wanted to know whether audience members would use the ideas proposed by John Stuart Mill or by Immanuel Kant to make that decision. Not surprisingly, barely anyone knew what he was talking about. "That's a terrifying result," Horowitz said. "We have stronger opinions about our handheld devices than about the moral framework we should use to guide our decisions."
Weiye Loh

Justice At Last For Paul Chambers! #twitterjoketrial « Quiet Riot Girl - 0 views

  • This morning it was announced that Paul Chambers, who had been convicted of sending a 'menacing' tweet under the 2003 Communications Act, has had his conviction quashed. He was cleared of all charges by three Appeal Court judges. To most people reading this, the news is not only brilliant for Paul, his partner Sarah (@crazycolours) and their families; it is also a victory for freedom of speech and expression, especially online. So it is with extra joy that the news was first reported, and is now being celebrated, on our favourite social media platform.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • ...30 more annotations...
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!” [The first sketch after these annotations simulates this selection-plus-regression mechanism.]
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. [The second sketch after these annotations reproduces this asymmetry in miniature.]
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.” [The third sketch below shows how testing many subgroups inflates the false-positive rate.]
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel. [The final sketch below mimics this with a hidden lab-level random effect.]
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
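
The regression-to-the-mean and publication-bias annotations above describe a single statistical mechanism, and a small simulation makes it concrete. This is a sketch of my own, not anything from the article: it assumes a simple two-group design, and the effect size, sample size, and study count are all invented.

```python
# Toy decline-effect simulation (all numbers invented): run many low-powered
# studies of a small true effect, "publish" only the significant ones, then
# replicate each published study once without any filter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_EFFECT, N, STUDIES = 0.2, 30, 2000

def run_study():
    """One two-group study: returns (estimated effect, p-value)."""
    treated = rng.normal(TRUE_EFFECT, 1.0, N)
    control = rng.normal(0.0, 1.0, N)
    _, p = stats.ttest_ind(treated, control)
    return treated.mean() - control.mean(), p

published, replications = [], []
for _ in range(STUDIES):
    effect, p = run_study()
    if p < 0.05:                              # journals prefer significant results
        published.append(effect)
        replications.append(run_study()[0])   # the replication is not filtered

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean published effect:   {np.mean(published):.3f}")
print(f"mean replication effect: {np.mean(replications):.3f}")
```

Only the lucky, inflated studies clear the significance bar, so the published record starts high and the unfiltered replications fall back toward the true effect: a decline, with nothing mysterious behind it.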
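
Palmer's funnel-graph diagnostic can be sketched the same way. Again every number is invented; the assumption is a null effect studied at several sample sizes, with small studies suppressed unless they come out positive and significant.

```python
# Toy funnel-asymmetry demo (assumptions mine, not the article's): the true
# effect is zero, large studies are published regardless, but small studies
# are published only when positive and significant. The published
# small-sample means then skew heavily positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def study(n):
    """One two-group study of a null effect: returns (effect, p-value)."""
    treated = rng.normal(0.0, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(treated, control)
    return treated.mean() - control.mean(), p

for n in (10, 40, 160, 640):
    results = [study(n) for _ in range(4000)]
    honest = np.mean([e for e, _ in results])
    kept = [e for e, p in results if n >= 160 or (e > 0 and p < 0.05)]
    print(f"n={n:4d}  honest mean: {honest:+.3f}   published mean: {np.mean(kept):+.3f}")
```

In a real funnel graph the same thing shows up visually: the large-sample studies cluster around the true value while the surviving small-sample studies pile up on the positive side.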
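
Ioannidis's "significance chasing" is also easy to demonstrate on invented data: slice pure noise into enough post-hoc subgroups and something will clear the five-per-cent bar far more often than five per cent of the time. A minimal sketch, with made-up trial counts:

```python
# Toy significance-chasing demo: the data contain no real effect, but the
# analyst tests up to 20 subgroups and reports the first "significant" one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
TRIALS, SUBGROUPS, N = 2000, 20, 25

false_positives = 0
for _ in range(TRIALS):
    for _ in range(SUBGROUPS):
        a = rng.normal(0, 1, N)   # "treatment" subgroup: pure noise
        b = rng.normal(0, 1, N)   # "control" subgroup: pure noise
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1
            break                 # stop at the first "significant" finding

print(f"analyses finding a 'significant' effect in noise: {false_positives / TRIALS:.0%}")
# expected: about 1 - 0.95**20, i.e. roughly 64 per cent, not five
```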
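
Finally, the Crabbe experiment reads like a hidden lab-level random effect. The toy sketch below assumes exactly that; every number is invented, and which lab looks extreme (or even reversed) depends entirely on the random seed.

```python
# Toy multi-lab demo: identical protocol everywhere, but an unmeasured
# lab-level variable shifts each lab's baseline far more than the drug does.
import numpy as np

rng = np.random.default_rng(3)
TRUE_DRUG_EFFECT = 650    # extra centimetres moved, on average; invented

for lab in ("Albany", "Edmonton", "Portland"):
    hidden_lab_effect = rng.normal(0, 1500)   # unmeasured lab-level variable
    mice = rng.normal(TRUE_DRUG_EFFECT + hidden_lab_effect, 300, size=12)
    print(f"{lab:9s} mean extra distance: {mice.mean():7.0f} cm")
```

If the unmeasured lab effect can dwarf the drug effect, one lab's dramatic result is indistinguishable from noise, which is the disturbing implication drawn above.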
Weiye Loh

Climate of Hate - NYTimes.com - 0 views

  • When you heard the terrible news from Arizona, were you completely surprised? Or were you, at some level, expecting something like this atrocity to happen?
  • The Department of Homeland Security reached the same conclusion: in April 2009 an internal report warned that right-wing extremism was on the rise, with a growing potential for violence.
  • Conservatives denounced that report. But there has, in fact, been a rising tide of threats and vandalism aimed at elected officials, including both Judge John Roll, who was killed Saturday, and Representative Gabrielle Giffords. One of these days, someone was bound to take it to the next level. And now someone has.
  • ...11 more annotations...
  • It’s true that the shooter in Arizona appears to have been mentally troubled. But that doesn’t mean that his act can or should be treated as an isolated event, having nothing to do with the national climate.
  • Last spring Politico.com reported on a surge in threats against members of Congress, which were already up by 300 percent. A number of the people making those threats had a history of mental illness — but something about the current state of America has been causing far more disturbed people than before to act out their illness by threatening, or actually engaging in, political violence.
  • As Clarence Dupnik, the sheriff responsible for dealing with the Arizona shootings, put it, it’s “the vitriolic rhetoric that we hear day in and day out from people in the radio business and some people in the TV business.” The vast majority of those who listen to that toxic rhetoric stop short of actual violence, but some, inevitably, cross that line.
  • It’s not a general lack of “civility,” the favorite term of pundits who want to wish away fundamental policy disagreements. Politeness may be a virtue, but there’s a big difference between bad manners and calls, explicit or implicit, for violence; insults aren’t the same as incitement.
  • there’s room in a democracy for people who ridicule and denounce those who disagree with them; there isn’t any place for eliminationist rhetoric, for suggestions that those on the other side of a debate must be removed from that debate by whatever means necessary.
  • And it’s the saturation of our political discourse — and especially our airwaves — with eliminationist rhetoric that lies behind the rising tide of violence.
  • Where’s that toxic rhetoric coming from? Let’s not make a false pretense of balance: it’s coming, overwhelmingly, from the right. It’s hard to imagine a Democratic member of Congress urging constituents to be “armed and dangerous” without being ostracized; but Representative Michele Bachmann, who did just that, is a rising star in the G.O.P.
  • And there’s a huge contrast in the media. Listen to Rachel Maddow or Keith Olbermann, and you’ll hear a lot of caustic remarks and mockery aimed at Republicans. But you won’t hear jokes about shooting government officials or beheading a journalist at The Washington Post. Listen to Glenn Beck or Bill O’Reilly, and you will.
  • Of course, the likes of Mr. Beck and Mr. O’Reilly are responding to popular demand.
  • But even if hate is what many want to hear, that doesn’t excuse those who pander to that desire. They should be shunned by all decent people.
  • Unfortunately, that hasn’t been happening: the purveyors of hate have been treated with respect, even deference, by the G.O.P. establishment. As David Frum, the former Bush speechwriter, has put it, “Republicans originally thought that Fox worked for us and now we’re discovering we work for Fox.”
Weiye Loh

nanopolitan: Plagiarism Derails German (Ex) Minister - 0 views

  • The outcry has taken several forms, including Guttenberg being dubbed zu Googleberg and, even worse, Germany's Sarah Palin! The most substantive protest is through this letter to Chancellor Merkel, signed by over 20,000 academics, post-docs, and students. Here's an excerpt: ... When it is no longer an important value to protect ideas in our society, then we have gambled away our future. We don't expect thankfulness for our scientific work, but we expect respect; we expect that our work be taken seriously. By handling the case of zu Guttenberg as a trifle, Germany's position in world science, its credibility as the "Land of Ideas", suffers.
  • A second line of attack -- which probably clinched the issue -- targeted his leadership of defence academies, especially since it came from political partners rather than adversaries: "Should he continue to allow the circumstances of his dissertation to remain so unclear, I think that he, as minister and as the top official of two Bundeswehr universities, is no longer acceptable," Martin Neumann, parliamentary spokesman for academic issues for the business-friendly Free Democratic Party (FDP), Merkel's junior coalition partner, told the Financial Times Deutschland newspaper.
Weiye Loh

Hashtags, a New Way for Tweets - Cultural Studies - NYTimes.com - 0 views

  • hashtags have transcended the 140-characters-or-less microblogging platform, and have become a new cultural shorthand, finding their way into chat windows, e-mail and face-to-face conversations.
  • people began using hashtags to add humor, context and interior monologues to their messages — and everyday conversation. As Susan Orlean wrote in a New Yorker blog post titled “Hash,” the symbol can be “a more sophisticated, verbal version of the dread winking emoticon that tweens use to signify that they’re joking.”
  • “Because you have a hashtag embedded in a short message with real language, it starts exhibiting other characteristics of natural language, which means basically that people start playing with it and manipulating it,” said Jacob Eisenstein, a postdoctoral fellow in computational linguistics at Carnegie Mellon University. “You’ll see them used as humor, as sort of meta-commentary, where you’ll write a message and maybe you don’t really believe it, and what you really think is in the hashtag.” [A toy extraction sketch follows these annotations.]
  • ...2 more annotations...
  • Hashtags then began popping up outside of Twitter, in e-mails, chat windows and text messages.
  • Using a hashtag is also a way for someone to convey that they’re part of a certain scene.
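
Part of why the hashtag migrates so easily is that the convention is mechanically trivial to parse. A toy sketch of my own (the pattern and helper are illustrative, not from the article):

```python
# Minimal hashtag parsing: split a message into its prose and the hashtag
# "asides" that carry the meta-commentary Eisenstein describes.
import re

HASHTAG = re.compile(r"#(\w+)")

def split_message(text):
    """Return (message without hashtags, list of hashtag asides)."""
    tags = HASHTAG.findall(text)
    prose = HASHTAG.sub("", text).strip()
    return prose, tags

prose, tags = split_message("Totally convinced by this argument #not #sarcasm")
print(prose)   # -> Totally convinced by this argument
print(tags)    # -> ['not', 'sarcasm']
```

The "what you really think" channel travels inside the message itself, which is why it survives copy-and-paste into e-mail, chat windows and text messages.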
Weiye Loh

Meet Science: What is "peer review"? - Boing Boing - 0 views

  • Scientists do complain about peer review. But let me set one thing straight: the biggest complaints scientists have about peer review are not that it stifles unpopular ideas. You've heard this truthy factoid from countless climate-change deniers and purveyors of quack medicine, and peer review is a convenient scapegoat for their conspiracy theories. There's just enough truth to make the claims sound plausible.
  • Peer review is flawed. Peer review can be biased. In fact, really new, unpopular ideas might well have a hard time getting published in the biggest journals right at first. You saw an example of that in my interview with sociologist Harry Collins. But those sorts of findings will often be published by smaller, more obscure journals. And, if a scientist keeps finding more evidence to support her claims, and keeps submitting her work to peer review, more often than not she's going to eventually convince people that she's right. Plenty of scientists, including Harry Collins, have seen their once-shunned ideas published widely.
  • So what do scientists complain about? This shouldn't be too much of a surprise. It's the lack of training, the lack of feedback, the time constraints, and the fact that, the more specific your research gets, the fewer people there are with the expertise to accurately and thoroughly review your work.
  • ...5 more annotations...
  • Scientists are frustrated that most journals don't like to publish research that is solid, but not ground-breaking. They're frustrated that most journals don't like to publish studies where the scientist's hypothesis turned out to be wrong.
  • Some scientists would prefer that peer review not be anonymous—though plenty of others like that feature. Journals like the British Medical Journal have started requiring reviewers to sign their comments, and have produced evidence that this practice doesn't diminish the quality of the reviews.
  • There are also scientists who want to see more crowd-sourced, post-publication review of research papers. Because peer review is flawed, they say, it would be helpful to have centralized places where scientists can go to find critiques of papers, written by scientists other than the official peer-reviewers. Maybe the crowd can catch things the reviewers miss. We certainly saw that happen earlier this year, when microbiologist Rosie Redfield took a high-profile peer-reviewed paper about arsenic-based life to task on her blog. The website Faculty of 1000 is attempting to do something like this. You can go to that site, look up a previously published peer-reviewed paper, and see what other scientists are saying about it. And the Astrophysics Archive has been doing this same basic thing for years.
  • you shouldn't canonize everything a peer-reviewed journal article says just because it is a peer-reviewed journal article.
  • at the same time, being peer reviewed is a sign that the paper's author has done some level of due diligence in their work. Peer review is flawed, but it has value. There are improvements that could be made. But, like the old joke about democracy, peer review is the worst possible system except for every other system we've ever come up with.
  • Being peer reviewed doesn't mean your results are accurate. Not being peer reviewed doesn't mean you're a crank. But the fact that peer review exists does weed out a lot of cranks, simply by saying, "There is a standard." Journals that don't have peer review do tend to be ones with an obvious agenda. White papers, which are not peer reviewed, do tend to contain more bias and self-promotion than peer-reviewed journal articles.