New Media Ethics 2009 course: Group items matching "Benefits" in title, tags, annotations or URL

Pornography as a living? - 8 views

started by Karin Tan on 02 Sep 09 no follow-up yet

nanopolitan: Medicine, Trials, Conflict of Interest, Disclosures - 0 views

  • Some 1500 documents revealed in litigation provide unprecedented insights into how pharmaceutical companies promote drugs, including the use of vendors to produce ghostwritten manuscripts and place them into medical journals.
  • Dozens of ghostwritten reviews and commentaries published in medical journals and supplements were used to promote unproven benefits and downplay harms of menopausal hormone therapy (HT), and to cast raloxifene and other competing therapies in a negative light.
  • the pharmaceutical company Wyeth used ghostwritten articles to mitigate the perceived risks of breast cancer associated with HT, to defend the unsupported cardiovascular “benefits” of HT, and to promote off-label, unproven uses of HT such as the prevention of dementia, Parkinson's disease, vision problems, and wrinkles.
  • ...7 more annotations...
  • Given the growing evidence that ghostwriting has been used to promote HT and other highly promoted drugs, the medical profession must take steps to ensure that prescribers renounce participation in ghostwriting, and to ensure that unscrupulous relationships between industry and academia are avoided rather than courted.
  • Twenty-five out of 32 highly paid consultants to medical device companies in 2007, or their publishers, failed to reveal the financial connections in journal articles the following year, according to a [recent] study.
  • The study compared major payments to consultants by orthopedic device companies with financial disclosures the consultants later made in medical journal articles, and found them lacking in public transparency. “We found a massive, dramatic system failure,” said David J. Rothman, a professor and president of the Institute on Medicine as a Profession at Columbia University, who wrote the study with two other Columbia researchers, Susan Chimonas and Zachary Frosch.
  • Carl Elliott in The Chronicle of Higher Education: The Secret Lives of Big Pharma's 'Thought Leaders':
  • See also a related NYTimes report -- Menopause, as Brought to You by Big Pharma by Natasha Singer and Duff Wilson -- from December 2009. Duff Wilson reports in the NYTimes: Medical Industry Ties Often Undisclosed in Journals:
  • Pharmaceutical companies hire KOL's [Key Opinion Leaders] to consult for them, to give lectures, to conduct clinical trials, and occasionally to make presentations on their behalf at regulatory meetings or hearings.
  • KOL's do not exactly endorse drugs, at least not in ways that are too obvious, but their opinions can be used to market them—sometimes by word of mouth, but more often by quasi-academic activities, such as grand-rounds lectures, sponsored symposia, or articles in medical journals (which may be ghostwritten by hired medical writers). While pharmaceutical companies seek out high-status KOL's with impressive academic appointments, status is only one determinant of a KOL's influence. Just as important is the fact that a KOL is, at least in theory, independent. [...]
  •  
    Medicine, Trials, Conflict of Interest, Disclosures. Just a bunch of links -- mostly from the US -- that paint a troubling picture of the state of ethics in biomedical fields:

Essay - The End of Tenure? - NYTimes.com - 0 views

  • The cost of a college education has risen, in real dollars, by 250 to 300 percent over the past three decades, far above the rate of inflation. Elite private colleges can cost more than $200,000 over four years. Total student-loan debt, at nearly $830 billion, recently surpassed total national credit card debt. Meanwhile, university presidents, who can make upward of $1 million annually, gravely intone that the $50,000 price tag doesn’t even cover the full cost of a year’s education.
  • Then your daughter reports that her history prof is a part-time adjunct, who might be making $1,500 for a semester’s work. There’s something wrong with this picture.
  • The higher-ed jeremiads of the last generation came mainly from the right. But this time, it’s the tenured radicals — or at least the tenured liberals — who are leading the charge. Hacker is a longtime contributor to The New York Review of Books and the author of the acclaimed study “Two Nations: Black and White, Separate, Hostile, Unequal,”
  • ...6 more annotations...
  • And these two books arrive at a time, unlike the early 1990s, when universities are, like many students, backed into a fiscal corner. Taylor writes of walking into a meeting one day and learning that Columbia’s endowment had dropped by “at least” 30 percent. Simply brushing off calls for reform, however strident and scattershot, may no longer be an option.
  • The labor system, for one thing, is clearly unjust. Tenured and tenure-track professors earn most of the money and benefits, but they’re a minority at the top of a pyramid. Nearly two-thirds of all college teachers are non-tenure-track adjuncts like Matt Williams, who told Hacker and Dreifus he had taught a dozen courses at two colleges in the Akron area the previous year, earning the equivalent of about $8.50 an hour by his reckoning. It is foolish that graduate programs are pumping new Ph.D.’s into a world without decent jobs for them. If some programs were phased out, teaching loads might be raised for some on the tenure track, to the benefit of undergraduate education.
  • it might well be time to think about vetoing Olympic-quality athletic facilities and trimming the ranks of administrators. At Williams, a small liberal arts college renowned for teaching, 70 percent of employees do something other than teach.
  • But Hacker and Dreifus go much further, all but calling for an end to the role of universities in the production of knowledge. Spin off the med schools and research institutes, they say. University presidents “should be musing about education, not angling for another center on antiterrorist technologies.” As for the humanities, let professors do research after-hours, on top of much heavier teaching schedules. “In other occupations, when people feel there is something they want to write, they do it on their own time and at their own expense,” the authors declare. But it seems doubtful that, say, “Battle Cry of Freedom,” the acclaimed Civil War history by Princeton’s James McPherson, could have been written on the weekends, or without the advance spadework of countless obscure monographs. If it is false that research invariably leads to better teaching, it is equally false to say that it never does.
  • Hacker’s home institution is the public Queens College, which has a spartan budget, commuter students and a three-or-four-course teaching load per semester. Taylor, by contrast, has spent his career on the elite end of higher education, but he is no less disillusioned. He shares Hacker and Dreifus’s concerns about overspecialized research and the unintended effects of tenure, which he believes blocks the way to fresh ideas. Taylor has backed away from some of the most incendiary proposals he made last year in a New York Times Op-Ed article, cheekily headlined “End the University as We Know It” — an article, he reports, that drew near-universal condemnation from academics and near-universal praise from everyone else. Back then, he called for the flat-out abolition of traditional departments, to be replaced by temporary, “problem-centered” programs focusing on issues like Mind, Space, Time, Life and Water. Now, he more realistically suggests the creation of cross-disciplinary “Emerging Zones.” He thinks professors need to get over their fear of corporate partnerships and embrace efficiency-enhancing technologies.
  • It is not news that America is a land of haves and have-nots. It is news that colleges are themselves dividing into haves and have-nots; they are becoming engines of inequality. And that — not whether some professors can afford to wear Marc Jacobs — is the real scandal.
  •  
    The End of Tenure? By CHRISTOPHER SHEA Published: September 3, 2010

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • ...30 more annotations...
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
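The statistical mechanisms quoted in the highlights above (regression to the mean, Fisher's five-per-cent threshold, publication bias, and the funnel-plot skew from selective reporting) are easy to demonstrate by simulation. The sketch below is illustrative only, with invented parameters rather than anything from the article:

```python
# A minimal Monte Carlo sketch of significance filtering: a small true effect,
# many studies of varying size, and "publication" only of significant results.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.1                       # in standard-deviation units
N_STUDIES = 5_000

sample_sizes = rng.integers(10, 200, size=N_STUDIES)   # per-group n
std_errs = np.sqrt(2.0 / sample_sizes)                  # SE of a two-group mean difference
observed = rng.normal(TRUE_EFFECT, std_errs)            # each study's estimated effect
published = observed / std_errs > 1.96                  # clears the significance test

# Exact replications of the published studies, at the same sample sizes.
replications = rng.normal(TRUE_EFFECT, std_errs[published])

print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean published effect:     {observed[published].mean():.2f}")  # inflated
print(f"mean replication effect:   {replications.mean():.2f}")         # falls back

# Funnel-plot asymmetry: among published results, small studies overshoot most.
small = published & (sample_sizes < 50)
large = published & (sample_sizes >= 50)
print(f"published effect, n < 50:  {observed[small].mean():.2f}")
print(f"published effect, n >= 50: {observed[large].mean():.2f}")
```

Because only estimates that cleared the threshold are "published", the published record overshoots the true effect, and unfiltered replications regress back toward it: one candidate mechanism for the decline effect the article describes.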

FT.com / Business education / Soapbox - Popular fads replace relevant teaching - 0 views

  • There is a great divide in business schools, one that few outsiders are aware of. It is the divide between research and teaching. There is little relation between them. What is being taught in management books and classrooms is usually not based on rigorous research and vice-versa; the research published in prestigious academic journals seldom finds its way into the MBA classroom.
  • none of this research is really intended to be used in the classroom, or to be communicated to managers in some other form; it is not suited to serve that purpose. The goal is publication in a prestigious academic journal, but that does not make it useful or even offer a guarantee that the research findings provide much insight into the workings of business reality.
  • is not a new problem. In 1994, Don Hambrick, then the president of the Academy of Management, said: “We read each other’s papers in our journals and write our own papers so that we may, in turn, have an audience . . . an incestuous, closed loop”. Management research is not required to be relevant. Consequently much of it is not.
  • ...6 more annotations...
  • But business education clearly also suffers. What is being taught in management courses is usually not based on solid scientific evidence. Instead, it concerns the generalisation of individual business cases or the lessons from popular management books. Such books often are based on the appealing formula that they look at several successful companies, see what they have in common and conclude that other companies should strive to do the same thing.
  • how do you know that the advice provided is reasonable, or if it comes from tomorrow’s Enrons, RBSs, Lehmans and WorldComs? How do you know that today’s advice and cases will not later be heralded as the epitome of mismanagement?
  • In the 1990s, ISO9000 (a quality management systems standard) spread through many industries. But research by professors Mary Benner and Mike Tushman showed that its adoption could, in time, lead to a fall in innovation (because ISO9000 does not allow for deviations from a set standard, which innovation requires), making the adopter worse off. This research was overlooked by practitioners; many business schools continued to applaud the benefits of ISO9000 in their courses, while firms continued – and still do – to implement the practice, ignorant of its potential pitfalls. Yet this research offers a clear example of the possible benefits of scientific research methods: rigorous research that reveals unintended consequences to expose the true nature of a business practice.
  • such research with important practical implications unfortunately is the exception rather than the rule. Moreover, even relevant research is largely ignored in business education – as happened to the findings by Benner and Tushman.
  • Of course one should not make the mistake that business cases and business books based on personal observation and opinion are without value. They potentially offer a great source of practical experience. Similarly, it would be naive to assume that scientific research can provide custom-made answers. Rigorous management research could and should provide the basis for skilled managers to make better decisions. However, they cannot do that without the in-depth knowledge of their specific organisation and circumstances.
  • at present, business schools largely fail in providing rigorous, evidence-based teaching.

Information technology and economic change: The impact of the printing press | vox - Re... - 0 views

  • Despite the revolutionary technological advance of the printing press in the 15th century, there is precious little economic evidence of its benefits. Using data on 200 European cities between 1450 and 1600, this column finds that economic growth was higher by as much as 60 percentage points in cities that adopted the technology.
  • Historians argue that the printing press was among the most revolutionary inventions in human history, responsible for a diffusion of knowledge and ideas, “dwarfing in scale anything which had occurred since the invention of writing” (Roberts 1996, p. 220). Yet economists have struggled to find any evidence of this information technology revolution in measures of aggregate productivity or per capita income (Clark 2001, Mokyr 2005). The historical data thus present us with a puzzle analogous to the famous Solow productivity paradox – that, until the mid-1990s, the data on macroeconomic productivity showed no effect of innovations in computer-based information technology.
  • In recent work (Dittmar 2010a), I examine the revolution in Renaissance information technology from a new perspective by assembling city-level data on the diffusion of the printing press in 15th-century Europe. The data record each city in which a printing press was established 1450-1500 – some 200 out of over 1,000 historic cities (see also an interview on this site, Dittmar 2010b). The research emphasises cities for three principal reasons. First, the printing press was an urban technology, producing for urban consumers. Second, cities were seedbeds for economic ideas and social groups that drove the emergence of modern growth. Third, city sizes were historically important indicators of economic prosperity, and broad-based city growth was associated with macroeconomic growth (Bairoch 1988, Acemoglu et al. 2005).
  • ...8 more annotations...
  • Figure 1 summarises the data and shows how printing diffused from Mainz 1450-1500. [Figure 1: The diffusion of the printing press]
  • City-level data on the adoption of the printing press can be exploited to examine two key questions: Was the new technology associated with city growth? And, if so, how large was the association? I find that cities in which printing presses were established 1450-1500 had no prior growth advantage, but subsequently grew far faster than similar cities without printing presses. My work uses a difference-in-differences estimation strategy to document the association between printing and city growth. The estimates suggest early adoption of the printing press was associated with a population growth advantage of 21 percentage points 1500-1600, when mean city growth was 30 percentage points. The difference-in-differences model shows that cities that adopted the printing press in the late 1400s had no prior growth advantage, but grew at least 35 percentage points more than similar non-adopting cities from 1500 to 1600.
  • The restrictions on diffusion meant that cities relatively close to Mainz were more likely to receive the technology, other things equal. Printing presses were established in 205 cities 1450-1500, but not in 40 of Europe’s 100 largest cities. Remarkably, regulatory barriers did not limit diffusion. Printing fell outside existing guild regulations and was not resisted by scribes, princes, or the Church (Neddermeyer 1997, Barbier 2006, Brady 2009).
  • Historians observe that printing diffused from Mainz in “concentric circles” (Barbier 2006). Distance from Mainz was significantly associated with early adoption of the printing press, but neither with city growth before the diffusion of printing nor with other observable determinants of subsequent growth. The geographic pattern of diffusion thus arguably allows us to identify exogenous variation in adoption. Exploiting distance from Mainz as an instrument for adoption, I find large and significant estimates of the relationship between the adoption of the printing press and city growth. I find a 60 percentage point growth advantage between 1500-1600.
  • The importance of distance from Mainz is supported by an exercise using “placebo” distances. When I employ distance from Venice, Amsterdam, London, or Wittenberg instead of distance from Mainz as the instrument, the estimated print effect is statistically insignificant.
  • Cities that adopted print media benefitted from positive spillovers in human capital accumulation and technological change broadly defined. These spillovers exerted an upward pressure on the returns to labour, made cities culturally dynamic, and attracted migrants. In the pre-industrial era, commerce was a more important source of urban wealth and income than tradable industrial production. Print media played a key role in the development of skills that were valuable to merchants. Following the invention of printing, European presses produced a stream of math textbooks used by students preparing for careers in business.
  • These and hundreds of similar texts worked students through problem sets concerned with calculating exchange rates, profit shares, and interest rates. Broadly, print media was also associated with the diffusion of cutting-edge business practice (such as book-keeping), literacy, and the social ascent of new professionals – merchants, lawyers, officials, doctors, and teachers.
  • The printing press was one of the greatest revolutions in information technology. The impact of the printing press is hard to identify in aggregate data. However, the diffusion of the technology was associated with extraordinary subsequent economic dynamism at the city level. European cities were seedbeds of ideas and business practices that drove the transition to modern growth. These facts suggest that the printing press had very far-reaching consequences through its impact on the development of cities.
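Dittmar's estimates come from a full econometric model, but the logic of the two strategies described in the highlights above, difference-in-differences and distance-from-Mainz as an instrument, can be sketched on fabricated city data. Everything below, including the built-in 0.21 "effect", is invented for illustration:

```python
# A hedged sketch of difference-in-differences and a Wald-style instrumental
# variable estimate, on fabricated data (not Dittmar's actual dataset).
import numpy as np

rng = np.random.default_rng(1)
n = 200
dist = rng.uniform(0, 1500, n)                        # km from Mainz, invented
adopt = rng.uniform(0, 1, n) < np.exp(-dist / 400)    # nearer cities adopt more often

growth_pre  = rng.normal(0.20, 0.10, n)                      # 1450-1500: no adoption effect
growth_post = 0.30 + 0.21 * adopt + rng.normal(0, 0.15, n)   # 1500-1600: +21 points for adopters

# Difference-in-differences: adopters' growth change minus non-adopters' change.
did = ((growth_post[adopt].mean() - growth_pre[adopt].mean())
       - (growth_post[~adopt].mean() - growth_pre[~adopt].mean()))
print(f"DiD estimate: {did:.3f} (constructed effect: 0.21)")

# Wald/IV estimate using a near/far split on distance from Mainz as the instrument.
near = dist < 400
iv = ((growth_post[near].mean() - growth_post[~near].mean())
      / (adopt[near].mean() - adopt[~near].mean()))
print(f"IV (Wald) estimate: {iv:.3f}")
```

The instrument identifies the effect only if distance from Mainz influences growth solely through adoption, which is exactly what the placebo exercise with distances from Venice, Amsterdam, London, and Wittenberg is probing.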

Would Society Benefit from Good Digital Hoaxes? | The Utopianist - Think Bigger - 0 views

  •  
    can such hoaxes be beneficial? If a Western audience was in fact impelled to learn more about the social woes in Syria, is this a net gain for society in general? Should such well-intentioned projects be condoned, even perhaps emulated in certain ways if deemed an effective educational tool? Could we use this format - a narrative-driven account of important far-flung events that allows the audience a portal into such events - one that may be more engaging than typical AP newswire reportage? People tend to connect better to emotion-filled story arcs than recitation of facts, after all. Perhaps instead of merely piling on MacMaster, we can learn something from his communication strategy …

Solar Maps Reveal Exactly How Much Sun Hits Every Inch of a City | The Utopianist - Thi... - 0 views

  • The New York solar map just debuted at the fifth annual Solar Summit. SolveClimate News reports: “The map is an important part of this effort,” said Tria Case, who heads the New York City solar map project as director of sustainability for the university. “It’s a tool that building and homeowners, installers, city officials and Con Ed can use.” The map is exact. During night flights over New York in May 2010, a twin-engine plane equipped with lasers captured the architecture of the city. From these images, CUNY’s Center for Advanced Research of Spatial Information created a 3-D model of the city. “It’s as if we shrink-wrapped the entire city in paper lined with a one-meter grid and got the exact elevation and horizontal location of each square meter,” Sean Ahearn, the geographer who directs the center, told SolveClimate News. Ahearn said the site incorporates so many bytes of information that it took a supercomputer with 10 processors some 50 hours to generate the map interface. The website can calculate how much solar radiation hits every square meter of the city — every hour, every day for an entire year. For building owners it means they can size up the solar energy potential of their rooftops within minutes.
  •  
    cities are turning to advanced, but easy-to-use solar maps that determine exactly how much sunlight falls on every inch of the city. That way, property owners can see upfront and center the clear benefits of installing solar. The latest - and by far the biggest - such initiative is coming to New York City, and well-received efforts have already spurred solar growth in San Francisco and Germany.
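As a rough illustration of the kind of calculation such a map enables (this is not CUNY's actual pipeline), a rooftop estimate multiplies each one-metre grid cell's modeled annual insolation by panel efficiency and system losses. Every figure below is an assumption:

```python
# Toy rooftop estimate from per-cell insolation values a solar map might
# report; all numbers are assumptions, not New York's actual data.
CELL_AREA_M2 = 1.0                # the map's one-metre grid
PANEL_EFFICIENCY = 0.18           # typical silicon panel, assumed
PERFORMANCE_RATIO = 0.75          # wiring/inverter/soiling losses, assumed

# Modeled annual insolation (kWh per m^2 per year) for the usable cells of a
# hypothetical roof; the low value stands in for a partly shaded cell.
cell_insolation = [1250, 1240, 1190, 980, 1210, 1230, 600, 1225]

annual_kwh = sum(ins * CELL_AREA_M2 * PANEL_EFFICIENCY * PERFORMANCE_RATIO
                 for ins in cell_insolation)
print(f"Estimated output: {annual_kwh:.0f} kWh/year from {len(cell_insolation)} m^2 of roof")
```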

Freakonomics » The Revolution Will Not Be Televised. But It Will Be Tweeted - 0 views

  • information alone does not destabilize an oppressive regime. In fact, more information (and the control of that information) is a major source of political strength for any ruling party. The state controlled media of North Korea is a current example of the power of propaganda, much as it was in the Soviet Union and Nazi Germany, where the state heavily subsidized the diffusion of radios during the 1930s to help spread Nazi propaganda.
  • changes in technology do not by themselves weaken the state. While Twitter played a role in the Iranian protests in 2009, the medium was used effectively by the Iranian regime to spread rumors and disinformation. But, if information becomes not just more widespread but more reliable, the regime’s chances of survival are significantly diminished. In this sense, though social media like Twitter and Facebook appear to be a scattered mess, they are more reliable than state controlled messages.
  • The model predicts that a given percentage increase in information reliability has exactly twice as large an effect on the regime’s chances as the same percentage increase in information quantity, so, overall, an information revolution that leads to roughly equal-sized percentage increases in both these characteristics will reduce a regime’s chances of surviving.
  •  
    If the quantity of information available to citizens is sufficiently high, then the regime has a better chance of surviving. However, an increase in the reliability of information can reduce the regime's chances. These two effects are always in tension: a regime benefits from an increase in information quantity if and only if an increase in information reliability reduces its chances. The model allows for two kinds of information revolutions. In the first, associated with radio and mass newspapers under the totalitarian regimes of the early twentieth century, an increase in information quantity coincides with a shift towards media institutions more accommodative of the regime and, in this sense, a decrease in information reliability. In this case, both effects help the regime. In the second kind, associated with diffuse technologies like modern social media, an increase in information quantity coincides with a shift towards sources of information less accommodative of the regime and an increase in information reliability. This makes the quantity and reliability effects work against each other.
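The post does not give the model's functional form, so the quoted "twice as large" prediction can only be mimicked with a deliberately simple stand-in: suppose the regime's survival chance depends on quantity q and reliability r only through q / r², making the reliability elasticity twice the quantity elasticity and opposite in sign. A toy numerical check:

```python
# Illustrative stand-in only: the actual model behind the Freakonomics post is
# not reproduced here. Survival is assumed to depend on q and r through q / r**2.
import math

def survival(q: float, r: float) -> float:
    s = q / r**2              # composite favourable to the regime when large
    return s / (1.0 + s)      # squash into (0, 1)

q, r, bump = 10.0, 1.0, 0.01  # invented baseline; compare 1% increases
base = math.log(survival(q, r))
dq = math.log(survival(q * (1 + bump), r)) - base
dr = math.log(survival(q, r * (1 + bump))) - base
print(f"effect of +1% quantity:    {dq:+.5f}")   # helps the regime
print(f"effect of +1% reliability: {dr:+.5f}")   # hurts it, about twice as much
```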

Roger Pielke Jr.'s Blog: Global Temperature Trends - 0 views

  • My concern about the potential effects of human influences on the climate system is not a function of global average warming over a long period of time or of predictions of continued warming into the future.
  • what matters are the effects of human influences on the climate system on human and ecological scales, not at the global scale. No one experiences global average temperature and it is very poorly correlated with things that we do care about in specific places at specific times.
  • Consider the following thought experiment. Divide the world up into 1,000 grid boxes of equal area. Now imagine that the temperature in each of 500 of those boxes goes up by 20 degrees while the temperature in the other 500 goes down by 20 degrees. The net global change is exactly zero (because I made it so). However, the impacts would be enormous. Let's further say that the changes prescribed in my thought experiment are the direct consequence of human activity. Would we want to address those changes? Or would we say, ho hum, it all averages out globally, so no problem? The answer is obvious and is not a function of what happens at some global average scale, but what happens at human and ecological scales.
  • ...2 more annotations...
  • In the real world, the effects of increasing carbon dioxide on human and ecological scales are well established, and they include a biogeochemical effect on land ecosystems with subsequent effects on water and climate, as well as changes to the chemistry of the oceans. Is it possible that these effects are benign? Sure. Is it also possible that these effects have some negatives? Sure. These two factors alone would be sufficient for one to begin to ask questions about the worth of decarbonizing the global energy system. But greenhouse gas emissions also have a radiative effect that, in the real world, is thought to be a net warming, all else equal and over a global scale. However, if this effect were to be a net cooling, or even no net effect at the global scale, it would not change my views about a need to consider decarbonizing the energy system one bit. There is an effect -- or effects to be more accurate -- and these effects could be negative.
  • The debate over climate change has many people on both sides of the issue wrapped up in discussing global average temperature trends. I understand this as it is an icon with great political symbolism. It has proved a convenient political battleground, but the reality is that it should matter little to the policy case for decarbonization. What matters is that there is a human effect on the climate system and it could be negative with respect to things people care about. That is enough to begin asking whether we want to think about accelerating decarbonization of the global economy.
  •  
    one needs to know only two things about the science of climate change to begin asking whether accelerating decarbonization of the economy might be worth doing: Carbon dioxide has an influence on the climate system. This influence might well be negative for things many people care about. That is it. An actual decision to accelerate decarbonization and at what rate will depend on many other things, like costs and benefits of particular actions unrelated to climate and technological alternatives. In this post I am going to further explain my views, based on an interesting question posed in that earlier thread. What would my position be if it were to be shown, hypothetically, that the global average surface temperature was not warming at all, or in fact even cooling (over any relevant time period)? Would I then change my views on the importance of decarbonizing the global energy system?

Effect of alcohol on risk of coronary heart diseas... [Vasc Health Risk Manag. 2006] - ... - 0 views

  • Studies of the effects of alcohol consumption on health outcomes should recognise the methodological biases they are likely to face, and design, analyse and interpret their studies accordingly. While regular moderate alcohol consumption during middle-age probably does reduce vascular risk, care should be taken when making general recommendations about safe levels of alcohol intake. In particular, it is likely that any promotion of alcohol for health reasons would do substantially more harm than good.
  • The consistency in the vascular benefit associated with moderate drinking (compared with non-drinking) observed across different studies, together with the existence of credible biological pathways, strongly suggests that at least some of this benefit is real.
  • However, because of biases introduced by: choice of reference categories; reverse causality bias; variations in alcohol intake over time; and confounding, some of it is likely to be an artefact. For heavy drinking, different study biases have the potential to act in opposing directions, and as such, the true effects of heavy drinking on vascular risk are uncertain. However, because of the known harmful effects of heavy drinking on non-vascular mortality, the problem is an academic one.

Arab Spring: Join Slate, the New America Foundation, and Arizona State for a "Future Te... - 0 views

  •  
    Can social media really spur a revolution? Who benefits more from advances in technology: activists or authoritarian governments? What can the rest of the world do when Big Brother turns off the Internet? How did the successful Arab Spring turn into a complicated, bloody summer in Syria, Bahrain, and elsewhere? Can blogging make a difference in Cuba and North Korea?

The Origins of "Basic Research" - 0 views

  • For many scientists, "basic research" means "fundamental" or "pure" research conducted without consideration of practical applications. At the same time, policy makers see "basic research" as that which leads to societal benefits including economic growth and jobs.
  • The mechanism that has allowed such divergent views to coexist is of course the so-called "linear model" of innovation, which holds that investments in "basic research" are but the first step in a sequence that progresses through applied research, development, and application. As recently explained in a major report of the US National Academy of Sciences: "[B]asic research ... has the potential to be transformational to maintain the flow of new ideas that fuel the economy, provide security, and enhance the quality of life" (Rising Above the Gathering Storm).
  • A closer look at the actual history of Google reveals how history becomes mythology. The 1994 NSF project that funded the scientific work underpinning the search engine that became Google (as we know it today) was conducted from the start with commercialization in mind: "The technology developed in this project will provide the 'glue' that will make this worldwide collection usable as a unified entity, in a scalable and economically viable fashion." In this case, the scientist following his curiosity had at least one eye simultaneously on commercialization.
  • ...1 more annotation...
  • In their appeal for more funding for scientific research, Leshner and Cooper argued that: "Across society, we don't have to look far for examples of basic research that paid off." They cite the creation of Google as a prime example of such payoffs: "Larry Page and Sergey Brin, then a National Science Foundation [NSF] fellow, did not intend to invent the Google search engine. Originally, they were intrigued by a mathematical challenge ..." The appealing imagery of a scientist who simply follows his curiosity and then makes a discovery with a large societal payoff is part of the core mythology of post-World War II science policies. The mythology shapes how governments around the world organize, account for, and fund research. A large body of scholarship has critiqued postwar science policies and found that, despite many notable successes, the science policies that may have made sense in the middle of the last century may need updating in the 21st century. In short, investments in "basic research" are not enough. Benoit Godin has asserted (PDF) that: "The problem is that the academic lobby has successfully claimed a monopoly on the creation of new knowledge, and that policy makers have been persuaded to confuse the necessary with the sufficient condition that investment in basic research would by itself necessarily lead to successful applications." Or as Leshner and Cooper declare in The Washington Post: "Federal investments in R&D have fueled half of the nation's economic growth since World War II."

Rod Beckstrom proposes ways to reclaim control over our online selves. - Project Syndicate - 0 views

  • As the virtual world expands, so, too, do breaches of trust and misuse of personal data. Surveillance has increased public unease – and even paranoia – about state agencies. Private companies that trade in personal data have incited the launch of a “reclaim privacy” movement. As one delegate at a recent World Economic Forum debate noted: “The more connected we have become, the more privacy we have given up.”
  • Now that our personal data have become such a valuable asset, companies are coming under increasing pressure to develop online business models that protect rather than exploit users’ private information. In particular, Internet users want to stop companies befuddling their customers with convoluted and legalistic service agreements in order to extract and sell their data.
  • Hyper-connectivity not only creates new commercial opportunities; it also changes the way ordinary people think about their lives. The so-called FoMo (fear of missing out) syndrome reflects the anxieties of a younger generation whose members feel compelled to capture instantly everything they do and see. Ironically, this hyper-connectivity has increased our insularity, as we increasingly live through our electronic devices. Neuroscientists believe that this may even have altered how we now relate to one another in the real world.
  • ...1 more annotation...
  • At the heart of this debate is the need to ensure that in a world where many, if not all, of the important details of our lives – including our relationships – exist in cyber-perpetuity, people retain, or reclaim, some level of control over their online selves. While the world of forgetting may have vanished, we can reshape the new one in a way that benefits rather than overwhelms us. Our overriding task is to construct a digital way of life that reinforces our existing sense of ethics and values, with security, trust, and fairness at its heart.
  •  
    "We must answer profound questions about the way we live. Should everyone be permanently connected to everything? Who owns which data, and how should information be made public? Can and should data use be regulated, and, if so, how? And what role should government, business, and ordinary Internet users play in addressing these issues?"

Religion's regressive hold on animal rights issues | Peter Singer | Comment is free | g... - 0 views

  • chief minister of Malacca, Mohamad Ali Rustam, was quoted in the Guardian as saying that God created monkeys and rats for experiments to benefit humans.
  • Here is the head of a Malaysian state justifying the establishment of a scientific enterprise with a comment that flies in the face of everything science tells us.
  • Though the chief minister is, presumably, a Muslim, there is nothing specifically Islamic about the claim that God created animals for our sake. Similar remarks have been made repeatedly by Christian religious figures through the millennia, although today some Christian theologians offer a kinder, more compassionate interpretation of the idea of our God-given dominion over the animals. They regard the grant of dominion as a kind of stewardship, with God wanting us to take care of his creatures and treat them well.
  • ...2 more annotations...
  • What are we to say of the Indian company, Vivo Biosciences Inc, which takes advantage of such religious naivety – in which presumably its scientists do not for one moment believe – in order to gain approval for its £97m joint venture with a state-owned Malaysian biotech company?
    • Weiye Loh
       
      Isn't it ironic that scientists rely on religious rhetoric to justify their sciences? 
  • The chief minister's comment is yet another illustration of the generally regressive influence that religion has on ethical issues – whether they are concerned with the status of women, with sexuality, with end-of-life decisions in medicine, with the environment, or with animals.
  •  
    Religion's regressive hold on animal rights issues. How are we to promote the need for improved animal welfare when battling religious views formed centuries ago? Peter Singer, guardian.co.uk, Tuesday 8 June 2010 14.03 BST

Digital Domain - Computers at Home - Educational Hope vs. Teenage Reality - NYTimes.com - 0 views

  • MIDDLE SCHOOL students are champion time-wasters. And the personal computer may be the ultimate time-wasting appliance.
  • there is an automatic inclination to think of the machine in its most idealized form, as the Great Equalizer. In developing countries, computers are outfitted with grand educational hopes, like those that animate the One Laptop Per Child initiative, which was examined in this space in April.
  • Economists are trying to measure a home computer’s educational impact on schoolchildren in low-income households. Taking widely varying routes, they are arriving at similar conclusions: little or no educational benefit is found. Worse, computers seem to have further separated children in low-income households, whose test scores often decline after the machine arrives, from their more privileged counterparts.
  • ...5 more annotations...
  • Professor Malamud and his collaborator, Cristian Pop-Eleches, an assistant professor of economics at Columbia University, did their field work in Romania in 2009, where the government invited low-income families to apply for vouchers worth 200 euros (then about $300) that could be used for buying a home computer. The program provided a control group: the families who applied but did not receive a voucher.
  • the professors report finding “strong evidence that children in households who won a voucher received significantly lower school grades in math, English and Romanian.” The principal positive effect on the students was improved computer skills.
  • few children whose families obtained computers said they used the machines for homework. What they were used for — daily — was playing games.
  • negative effect on test scores was not universal, but was largely confined to lower-income households, in which, the authors hypothesized, parental supervision might be spottier, giving students greater opportunity to use the computer for entertainment unrelated to homework and reducing the amount of time spent studying.
  • The North Carolina study suggests the disconcerting possibility that home computers and Internet access have such a negative effect only on some groups and end up widening achievement gaps between socioeconomic groups. The expansion of broadband service was associated with a pronounced drop in test scores for black students in both reading and math, but no effect on the math scores and little on the reading scores of other students.
  •  
    Computers at Home: Educational Hope vs. Teenage Reality By RANDALL STROSS Published: July 9, 2010
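The voucher lottery described in the highlights above is what makes the comparison credible: winners and losers both applied, so a simple difference in mean grades estimates the computer's effect. A sketch on fabricated data (the constructed 0.3-point gap is illustrative, not the paper's estimate):

```python
# Hedged sketch of the winners-vs-applicants comparison, on fabricated grades.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(7.2, 1.0, 500)   # applied for a voucher, did not receive one
winners = rng.normal(6.9, 1.0, 500)   # received a voucher (and a home computer)

t, p = stats.ttest_ind(winners, control)
print(f"winners {winners.mean():.2f} vs control {control.mean():.2f}; t = {t:.2f}, p = {p:.4g}")
```

Because the vouchers were allocated among applicants rather than self-selected, the control group differs from the winners only by chance, which is why a plain two-sample comparison is informative here.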

BBC NEWS | Science & Environment | Honesty test 'should be reviewed' - 0 views

  • The study found that 88.5% of women believed buying a dress for a special occasion and then returning it to the store and getting a refund was dishonest. But just 46.7% took the same view of a care home nurse persuading an elderly patient to change a will in her favour. A large majority of men had a similar attitude, with 82.6% thinking it was morally wrong to "borrow" the dress but only 37.6% disapproving of taking advantage of someone who was elderly and infirm. The findings suggested that if a jury of 12 men and women was asked to pass a verdict on the care home nurse, only four would want to convict. A higher proportion of women, 82.2%, thought it was dishonest to lie about age on an internet dating site than believed it was wrong to benefit from the will alteration. In a league table of dishonest acts, conning the elderly care home patient came 13th out of 16 - just one place above snapping off broccoli stalks in a supermarket and weighing the heads. The two actions considered the most dishonest were buying goods online using a colleague's shopping account and setting fire to a garage to make an insurance claim.
    • Weiye Loh
       
      Interesting how our ethical standpoint can be so inconsistent. =)

Internet campaigning gets a vote of confidence - 3 views

started by Olivia Chang on 16 Sep 09 no follow-up yet