New Media Ethics 2009 course
Group items matching "Benefits" in title, tags, annotations or url

China's Green Dam Internet Filter - 6 views

started by Low Yunying on 02 Sep 09 no follow-up yet

Online pirates could lose Net access - 3 views

started by Magdaleine on 19 Aug 09 no follow-up yet

Google Loses German Copyright Cases Over Image-Search Previews - 4 views

started by Li-Ling Gan on 25 Aug 09 no follow-up yet

Police raid 13 shops in Lucky Plaza - 13 views

started by Paul Melissa on 24 Aug 09 no follow-up yet

Go slow with Net law - 4 views

started by juliet huang on 26 Aug 09 no follow-up yet

Is 'More Efficient' Always Better? - NYTimes.com - 1 views

  • Efficiency is the seemingly value-free standard economists use when they make the case for particular policies — say, free trade, more liberal immigration policies, cap-and-trade policies on environmental pollution, the all-volunteer army or congestion tolls. The concept of efficiency is used to justify a reliance on free-market principles, rather than the government, to organize the health care sector, or to make recommendations on taxation, government spending and monetary policy. All of these public policies have one thing in common: They create winners and losers among members of society.
  • can it be said that a more efficient resource allocation is better than a less efficient one, given the changes in the distribution of welfare among members of society that these allocations imply?
  • Suppose a restructuring of the economy has the effect of increasing the growth of average gross domestic product per capita, but that the benefits of that growth accrue disproportionately to a minority of citizens, while others are worse off as a result, as appears to have been the case in the United States in the last several decades. Can economists judge this to be a good thing?
  • Indeed, how useful is efficiency as a normative guide to public policy? Can economists legitimately base their advocacy of particular policies on that criterion? That advocacy, especially when supported by mathematical notation and complex graphs, may look like economic science. But when greater efficiency is accompanied by a redistribution of economic privilege in society, subjective ethical dimensions inevitably get baked into the economist’s recommendations.

What is the role of the state? | Martin Wolf's Exchange | FT.com - 0 views

  • This question has concerned western thinkers at least since Plato (5th-4th century BCE). It has also concerned thinkers in other cultural traditions: Confucius (6th-5th century BCE); China’s legalist tradition; and India’s Kautilya (4th-3rd century BCE). The perspective here is that of the contemporary democratic west.
  • The core purpose of the state is protection. This view would be shared by everybody, except anarchists, who believe that the protective role of the state is unnecessary or, more precisely, that people can rely on purely voluntary arrangements.
  • Contemporary Somalia shows the horrors that can befall a stateless society. Yet horrors can also befall a society with an over-mighty state. It is evident, because it is the story of post-tribal humanity, that the powers of the state can be abused for the benefit of those who control it.
  • In his final book, Power and Prosperity, the late Mancur Olson argued that the state was a “stationary bandit”. A stationary bandit is better than a “roving bandit”, because the latter has no interest in developing the economy, while the former does. But it may not be much better, because those who control the state will seek to extract the surplus over subsistence generated by those under their control.
  • In the contemporary west, there are three protections against undue exploitation by the stationary bandit: exit, voice (on the first two of these, see this on Albert Hirschman) and restraint. By “exit”, I mean the possibility of escaping from the control of a given jurisdiction, by emigration, capital flight or some form of market exchange. By “voice”, I mean a degree of control over the state, most obviously by voting. By “restraint”, I mean independent courts, division of powers, federalism and entrenched rights.
  • defining what a democratic state, viewed precisely as such a constrained protective arrangement, is entitled to do.
  • There exists a strand in classical liberal or, in contemporary US parlance, libertarian thought which believes the answer is to define the role of the state so narrowly and the rights of individuals so broadly that many political choices (the income tax or universal health care, for example) would be ruled out a priori. In other words, it seeks to abolish much of politics through constitutional restraints. I view this as a hopeless strategy, both intellectually and politically. It is hopeless intellectually, because the values people hold are many and divergent and some of these values do not merely allow, but demand, government protection of weak, vulnerable or unfortunate people. Moreover, such values are not “wrong”. The reality is that people hold many, often incompatible, core values. Libertarians argue that the only relevant wrong is coercion by the state. Others disagree and are entitled to do so. It is hopeless politically, because democracy necessitates debate among widely divergent opinions. Trying to rule out a vast range of values from the political sphere by constitutional means will fail. Under enough pressure, the constitution itself will be changed, via amendment or reinterpretation.
  • So what ought the protective role of the state to include? Again, in such a discussion, classical liberals would argue for the “night-watchman” role. The government’s responsibilities are limited to protecting individuals from coercion, fraud and theft and to defending the country from foreign aggression. Yet once one has accepted the legitimacy of using coercion (taxation) to provide the goods listed above, there is no reason in principle why one should not accept it for the provision of other goods that cannot be provided as well, or at all, by non-political means.
  • Those other measures would include addressing a range of externalities (e.g. pollution), providing information and supplying insurance against otherwise uninsurable risks, such as unemployment, spousal abandonment and so forth. The subsidisation or public provision of childcare and education is a way to promote equality of opportunity. The subsidisation or public provision of health insurance is a way to preserve life, unquestionably one of the purposes of the state. Safety standards are a way to protect people against the carelessness or malevolence of others or (more controversially) themselves. All these, then, are legitimate protective measures. The more complex the society and economy, the greater the range of the protections that will be sought.
  • What, then, are the objections to such actions? The answers might be: the proposed measures are ineffective, compared with what would happen in the absence of state intervention; the measures are unaffordable and might lead to state bankruptcy; the measures encourage irresponsible behaviour; and, at the limit, the measures restrict individual autonomy to an unacceptable degree. These are all, we should note, questions of consequences.
  • The vote is more evenly distributed than wealth and income. Thus, one would expect the tenor of democratic policymaking to be redistributive and so, indeed, it is. Those with wealth and income to protect will then make political power expensive to acquire and encourage potential supporters to focus on common enemies (inside and outside the country) and on cultural values. The more unequal are incomes and wealth and the more determined are the “haves” to avoid being compelled to support the “have-nots”, the more politics will take on such characteristics.
  • In the 1970s, the view that democracy would collapse under the weight of its excessive promises seemed to me disturbingly true. I am no longer convinced of this: as Adam Smith said, “There is a great deal of ruin in a nation”. Moreover, the capacity for learning by democracies is greater than I had realised. The conservative movements of the 1980s were part of that learning. But they went too far in their confidence in market arrangements and their indifference to the social and political consequences of inequality. I would support state pensions, state-funded health insurance and state regulation of environmental and other externalities. I am happy to debate details. The ancient Athenians called someone who had a purely private life “idiotes”. This is, of course, the origin of our word “idiot”. Individual liberty does indeed matter. But it is not the only thing that matters. The market is a remarkable social institution. But it is far from perfect. Democratic politics can be destructive. But it is much better than the alternatives. Each of us has an obligation, as a citizen, to make politics work as well as he (or she) can and to embrace the debate over a wide range of difficult choices that this entails.

The overblown crisis in American education : The New Yorker - 0 views

  • it’s odd that a narrative of crisis, of a systemic failure, in American education is currently so persuasive. This back-to-school season, we have Davis Guggenheim’s documentary about the charter-school movement, “Waiting for ‘Superman’ ”; two short, dyspeptic books about colleges and universities, “Higher Education?,” by Andrew Hacker and Claudia Dreifus, and “Crisis on Campus,” by Mark C. Taylor; and a lot of positive attention to the school-reform movement in the national press. From any of these sources, it would be difficult to reach the conclusion that, over all, the American education system works quite well.
  • In higher education, the reform story isn’t so fully baked yet, but its main elements are emerging. The system is vast: hundreds of small liberal-arts colleges; a new and highly leveraged for-profit sector that offers degrees online; community colleges; state universities whose budgets are being cut because of the recession; and the big-name private universities, which get the most attention. You wouldn’t design a system this way—it’s filled with overlaps and competitive excess. Much of it strives toward an ideal that took shape in nineteenth-century Germany: the university as a small, élite center of pure scholarly research. Research is the rationale for low teaching loads, publication requirements, tenure, tight-knit academic disciplines, and other practices that take it on the chin from Taylor, Hacker, and Dreifus for being of little benefit to students or society.
  • Yet for a system that—according to Taylor, especially—is deeply in crisis, American higher education is not doing badly. The lines of people wanting to get into institutions that the authors say are just waiting to cheat them by overcharging and underteaching grow ever longer and more international, and the people waiting in those lines don’t seem deterred by price increases, even in a terrible recession.
  • There have been attempts in the past to make the system more rational and less redundant, and to shrink the portion of it that undertakes scholarly research, but they have not met with much success, and not just because of bureaucratic resistance by the interested parties. Large-scale, decentralized democratic societies are not very adept at generating neat, rational solutions to messy situations. The story line on education, at this ill-tempered moment in American life, expresses what might be called the Noah’s Ark view of life: a vast territory looks so impossibly corrupted that it must be washed away, so that we can begin its activities anew, on finer, higher, firmer principles. One should treat any perception that something so large is so completely awry with suspicion, and consider that it might not be true—especially before acting on it.
  •  
    mass higher education is one of the great achievements of American democracy. It embodies a faith in the capabilities of ordinary people that the Founders simply didn't have.

Open Letter to Richard Dawkins: Why Are You Still In Denial About Group Selection? : Ev... - 0 views

  • Dear Richard, I do not agree with the cynical adage "science progresses--funeral by funeral", but I fear that it might be true in your case for the subject of group selection.
  • Edward Wilson was misunderstanding kin selection as far back as Sociobiology, where he treated it as a subset of group selection ... Kin selection is not a subset of group selection, it is a logical consequence of gene selection. And gene selection is (everything that Nowak et al ought to mean by) 'standard natural selection' theory: has been ever since the neo-Darwinian synthesis of the 1930s.
  • I do not agree with the Nowak et al. article in every respect and will articulate some of my disagreements in subsequent posts. For the moment, I want to stress how alone you are in your statement about group selection. Your view is essentially pre-1975, a date that is notable not only for the publication of Sociobiology but also a paper by W.D. Hamilton, one of your heroes, who correctly saw the relationship between kin selection and group selection thanks to the work of George Price. Ever since, knowledgeable theoretical biologists have known that inclusive fitness theory includes the logic of multilevel selection, which means that altruism is selectively disadvantageous within kin groups and evolves only by virtue of groups with more altruists contributing more to the gene pool than groups with fewer altruists. The significance of relatedness is that it clusters the genes coding for altruistic and selfish behaviors into different groups.
  • Even the contemporary theoretical biologists most critical of multilevel selection, such as Stuart West and Andy Gardner, acknowledge what you still deny. In an earlier feature on group selection published in Nature, Andy Gardner is quoted as saying "Everyone agrees that group selection occurs"--everyone except you, that is.
  • You correctly say that gene selection is standard natural selection theory. Essentially, it is a popularization of the concept of average effects in population genetics theory, which averages the fitness of alternative genes across all contexts to calculate what evolves in the total population. For that reason, it is an elementary mistake to regard gene selection as an alternative to group selection. Whenever a gene evolves in the total population on the strength of group selection, despite being selectively disadvantageous within groups, it has the highest average effect compared to the genes that it replaced. Please consult the installment of my "Truth and Reconciliation for Group Selection" series titled "Naïve Gene Selectionism" for a refresher course. While you're at it, check out the installment titled "Dawkins Protests--Too Much".
  • The Nowak et al. article includes several critiques of inclusive fitness theory that need to be distinguished from each other. One issue is whether inclusive fitness theory is truly equivalent to explicit models of evolution in multi-group populations, or whether it makes so many simplifying assumptions that it restricts itself to a small region of the parameter space. A second issue is whether benefiting collateral kin is required for the evolution of eusociality and other forms of prosociality. A third issue is whether inclusive fitness theory, as understood by the average evolutionary biologist and the general public, bears any resemblance to inclusive fitness theory as understood by the cognoscenti.

Android software piracy rampant despite Google's efforts to curb - Computerworld - 0 views

  • Some have argued that piracy is rampant in those countries where the online Android Market is not yet available. But a recent KeyesLabs research project suggests that may not be true. KeyesLabs created a rough methodology to track total downloads of its apps, determine which ones were pirated, and the location of the end users. The results were posted in August, along with a “heat map” showing pirate activity. 
  • In July 2010, Google announced the Google Licensing Service, available via Android Market. Applications can include the new License Verification Library (LVL). “At run time, with the inclusion of a set of libraries provided by us, your application can query the Android Market licensing server to determine the license status of your users,” according to a blog post by Android engineer Eric Chu. “It returns information on whether your users are authorized to use the app based on stored sales records.”
  • Justin Case, at the Android Police Web site, dissected the LVL. “A minor patch to an application employing this official, Google-recommended protection system will render it completely worthless,” he concluded.
  • In response, Google has promised continued improvements and outlined a multipronged strategy around the new licensing service to make piracy much harder. “A determined attacker who’s willing to disassemble and reassemble code can eventually hack around the service,” acknowledged Android engineer Trevor Johns in a recent blog post.  But developers can make their work much harder by combining a cluster of techniques, he counsels: obfuscating code, modifying the licensing library to protect against common cracking techniques, designing the app to be tamper-resistant, and offloading license validation to a trusted server.
  • Gareau isn’t quite as convinced of the benefits of code obfuscation, though he does make use of it. He’s taken several other steps to protect his software work. One is providing a free trial version, which allows only a limited amount of data but is otherwise fully-featured. The idea: Let customers prove that the app will do everything they want, and they may be more willing to pay for it. He also provides a way to detect whether the app has been tampered with, for example, by removing the licensing checks. If yes, the app can be structured to stop working or behave erratically.

Skepticblog » Investing in Basic Science - 0 views

  • A recent editorial in the New York Times by Nicholas Wade raises some interesting points about the nature of basic science research – primarily that it’s risky.
  • As I have pointed out about the medical literature, researcher John Ioannidis has explained why most published studies turn out in retrospect to be wrong. The same is true of most basic science research – and the underlying reason is the same. The world is complex, and most of our guesses about how it might work turn out to be either flat-out wrong, incomplete, or superficial. And so most of our probing and prodding of the natural world, looking for the path to the actual answer, turns out to miss the target.
  • research costs considerable resources of time, space, money, opportunity, and people-hours. There may also be some risk involved (such as to subjects in the clinical trial). Further, negative studies are actually valuable (more so than terrible pictures). They still teach us something about the world – they teach us what is not true. At the very least this narrows the field of possibilities. But the analogy holds in so far as the goal of scientific research is to improve our understanding of the world and to provide practical applications that make our lives better. Wade writes mostly about how we fund research, and this relates to our objectives. Most of the corporate research money is interested in the latter – practical (and profitable) applications. If this is your goal, then basic science research is a bad bet. Most investments will be losers, and for most companies this will not be offset by the big payoffs of the rare winners. So many companies will allow others to do the basic science (government, universities, start-up companies), then raid the winners by using their resources to buy them out, and then bring them the final steps to a marketable application. There is nothing wrong or unethical about this. It’s a good business model.
  • What, then, is the role of public (government) funding of research? Primarily, Wade argues (and I agree), to provide infrastructure for expensive research programs, such as building large colliders.
  • the more the government invests in basic science and infrastructure, the more winners will emerge that private industry can then capitalize on. This is a good way to build a competitive dynamic economy.
  • But there is a pitfall – prematurely picking winners and losers. Wade gives the example of California investing specifically into developing stem cell treatments. He argues that stem cells, while promising, do not hold a guarantee of eventual success, and perhaps there are other technologies that will work and are being neglected. The history of science and technology has clearly demonstrated that it is wickedly difficult to predict the future (and all those who try are destined to be mocked by future generations with the benefit of perfect hindsight). Prematurely committing to one technology therefore contains a high risk of wasting a great deal of limited resources, and missing other perhaps more fruitful opportunities.
  • The underlying concept is that science research is a long-term game. Many avenues of research will not pan out, and those that do will take time to inspire specific applications. The media, however, likes catchy headlines. That means when they are reporting on basic science research journalists ask themselves – why should people care? What is the application of this that the average person can relate to? This seems reasonable from a journalistic point of view, but with basic science reporting it leads to wild speculation about a distant possible future application. The public is then left with the impression that we are on the verge of curing the common cold or cancer, or developing invisibility cloaks or flying cars, or replacing organs and having household robot servants. When a few years go by and we don’t have our personal android butlers, the public then thinks that the basic science was a bust, when in fact there was never a reasonable expectation that it would lead to a specific application anytime soon. But it still may be on track for interesting applications in a decade or two.
  • this also means that the government, generally, should not be in the game of picking winners and losers – putting their thumb on the scale, as it were. Rather, they will get the most bang for the research buck if they simply invest in science infrastructure, and also fund scientists in broad areas.
  • The same is true of technology – don’t pick winners and losers. The much-hyped “hydrogen economy” comes to mind. Let industry and the free market sort out what will work. If you have to invest in infrastructure before a technology is mature, then at least hedge your bets and keep funding flexible. Fund “alternative fuel” as a general category, and reassess on a regular basis how funds should be allocated. But don’t get too specific.
  • Funding research but leaving the details to scientists may be optimal
  • The scientific community can do their part by getting better at communicating with the media and the public. Try to avoid the temptation to overhype your own research, just because it is the most interesting thing in the world to you personally and you feel hype will help your funding. Don’t make it easy for the media to sensationalize your research – you should be the ones trying to hold back the reins. Perhaps this is too much to hope for – market forces conspire too much to promote sensationalism.

Genome Biology | Full text | A Faustian bargain - 0 views

  • on October 1st, you announced that the departments of French, Italian, Classics, Russian and Theater Arts were being eliminated. You gave several reasons for your decision, including that 'there are comparatively fewer students enrolled in these degree programs.' Of course, your decision was also, perhaps chiefly, a cost-cutting measure - in fact, you stated that this decision might not have been necessary had the state legislature passed a bill that would have allowed your university to set its own tuition rates. Finally, you asserted that the humanities were a drain on the institution financially, as opposed to the sciences, which bring in money in the form of grants and contracts.
  • I'm sure that relatively few students take classes in these subjects nowadays, just as you say. There wouldn't have been many in my day, either, if universities hadn't required students to take a distribution of courses in many different parts of the academy: humanities, social sciences, the fine arts, the physical and natural sciences, and to attain minimal proficiency in at least one foreign language. You see, the reason that humanities classes have low enrollment is not because students these days are clamoring for more relevant courses; it's because administrators like you, and spineless faculty, have stopped setting distribution requirements and started allowing students to choose their own academic programs - something I feel is a complete abrogation of the duty of university faculty as teachers and mentors. You could fix the enrollment problem tomorrow by instituting a mandatory core curriculum that included a wide range of courses.
  • the vast majority of humanity cannot handle freedom. In giving humans the freedom to choose, Christ has doomed humanity to a life of suffering.
  • in Dostoyevsky's parable of the Grand Inquisitor, which is told in Chapter Five of his great novel, The Brothers Karamazov. In the parable, Christ comes back to earth in Seville at the time of the Spanish Inquisition. He performs several miracles but is arrested by Inquisition leaders and sentenced to be burned at the stake. The Grand Inquisitor visits Him in his cell to tell Him that the Church no longer needs Him. The main portion of the text is the Inquisitor explaining why. The Inquisitor says that Jesus rejected the three temptations of Satan in the desert in favor of freedom, but he believes that Jesus has misjudged human nature.
  • I'm sure the budgetary problems you have to deal with are serious. They certainly are at Brandeis University, where I work. And we, too, faced critical strategic decisions because our income was no longer enough to meet our expenses. But we eschewed your draconian - and authoritarian - solution, and a team of faculty, with input from all parts of the university, came up with a plan to do more with fewer resources. I'm not saying that all the specifics of our solution would fit your institution, but the process sure would have. You did call a town meeting, but it was to discuss your plan, not let the university craft its own. And you called that meeting for Friday afternoon on October 1st, when few of your students or faculty would be around to attend. In your defense, you called the timing 'unfortunate', but pleaded that there was a 'limited availability of appropriate large venue options.' I find that rather surprising. If the President of Brandeis needed a lecture hall on short notice, he would get one. I guess you don't have much clout at your university.
  • As for the argument that the humanities don't pay their own way, well, I guess that's true, but it seems to me that there's a fallacy in assuming that a university should be run like a business. I'm not saying it shouldn't be managed prudently, but the notion that every part of it needs to be self-supporting is simply at variance with what a university is all about.
  • You seem to value entrepreneurial programs and practical subjects that might generate intellectual property more than you do 'old-fashioned' courses of study. But universities aren't just about discovering and capitalizing on new knowledge; they are also about preserving knowledge from being lost over time, and that requires a financial investment.
  • what seems to be archaic today can become vital in the future. I'll give you two examples of that. The first is the science of virology, which in the 1970s was dying out because people felt that infectious diseases were no longer a serious health problem in the developed world and other subjects, such as molecular biology, were much sexier. Then, in the early 1990s, a little problem called AIDS became the world's number 1 health concern. The virus that causes AIDS was first isolated and characterized at the National Institutes of Health in the USA and the Institut Pasteur in France, because these were among the few institutions that still had thriving virology programs. My second example you will probably be more familiar with. Middle Eastern Studies, including the study of foreign languages such as Arabic and Persian, was hardly a hot subject on most campuses in the 1990s. Then came September 11, 2001. Suddenly we realized that we needed a lot more people who understood something about that part of the world, especially its Muslim culture. Those universities that had preserved their Middle Eastern Studies departments, even in the face of declining enrollment, suddenly became very important places. Those that hadn't - well, I'm sure you get the picture.
  • one of your arguments is that not every place should try to do everything. Let other institutions have great programs in classics or theater arts, you say; we will focus on preparing students for jobs in the real world. Well, I hope I've just shown you that the real world is pretty fickle about what it wants. The best way for people to be prepared for the inevitable shock of change is to be as broadly educated as possible, because today's backwater is often tomorrow's hot field. And interdisciplinary research, which is all the rage these days, is only possible if people aren't too narrowly trained. If none of that convinces you, then I'm willing to let you turn your institution into a place that focuses on the practical, but only if you stop calling it a university and yourself the President of one. You see, the word 'university' derives from the Latin 'universitas', meaning 'the whole'. You can't be a university without having a thriving humanities program. You will need to call SUNY Albany a trade school, or perhaps a vocational college, but not a university. Not anymore.
  • I started out as a classics major. I'm now Professor of Biochemistry and Chemistry. Of all the courses I took in college and graduate school, the ones that have benefited me the most in my career as a scientist are the courses in classics, art history, sociology, and English literature. These courses didn't just give me a much better appreciation for my own culture; they taught me how to think, to analyze, and to write clearly. None of my science courses did any of that.

The Matthew Effect § SEEDMAGAZINE.COM - 0 views

  • For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away. —Matthew 25:29
  • Sociologist Robert K. Merton was the first to publish a paper on the similarity between this phrase in the Gospel of Matthew and the realities of how scientific research is rewarded
  • Even if two researchers do similar work, the most eminent of the pair will get more acclaim, Merton observed—more praise within the community, more or better job offers, better opportunities. And it goes without saying that even if a graduate student publishes stellar work in a prestigious journal, their well-known advisor is likely to get more of the credit. 
  • Merton published his theory, called the “Matthew Effect,” in 1968. At that time, the average age of a biomedical researcher in the US receiving his or her first significant funding was 35 or younger. That meant that researchers who had little in terms of fame (at 35, they would have completed a PhD and a post-doc and would be just starting out on their own) could still get funded if they wrote interesting proposals. So Merton’s observation about getting credit for one’s work, however true in terms of prestige, wasn’t adversely affecting the funding of new ideas.
  • Over the last 40 years, the importance of fame in science has increased. The effect has compounded because famous researchers have gathered the smartest and most ambitious graduate students and post-docs around them, so that each notable paper from a high-wattage group bootstraps their collective power. The famous grow more famous, and the younger researchers in their coterie are able to use that fame to their benefit. The effect of this concentration of power has finally trickled down to the level of funding: The average age on first receipt of the most common “starter” grants at the NIH is now almost 42. This means younger researchers without the strength of a fame-based community are cut out of the funding process, and their ideas, separate from an older researcher’s sphere of influence, don’t get pursued. This causes a founder effect in modern science, where the prestigious few dictate the direction of research. It’s not only unfair—it’s also actively dangerous to science’s progress.
  • How can we fund science in a way that is fair? By judging researchers independently of their fame—in other words, not by how many times their papers have been cited. By judging them instead via new measures, measures that until recently have been too ephemeral to use.
  • Right now, the gold standard worldwide for measuring a scientist’s worth is the number of times his or her papers are cited, along with the importance of the journal where the papers were published. Decisions of funding, faculty positions, and eminence in the field all derive from a scientist’s citation history. But relying on these measures entrenches the Matthew Effect: Even when the lead author is a graduate student, the majority of the credit accrues to the much older principal investigator. And an influential lab can inflate its citations by referring to its own work in papers that themselves go on to be heavy-hitters.
  • what is most profoundly unbalanced about relying on citations is that the paper-based metric distorts the reality of the scientific enterprise. Scientists make data points, narratives, research tools, inventions, pictures, sounds, videos, and more. Journal articles are a compressed and heavily edited version of what happens in the lab.
  • We have the capacity to measure the quality of a scientist across multiple dimensions, not just in terms of papers and citations. Was the scientist’s data online? Was it comprehensible? Can I replicate the results? Run the code? Access the research tools? Use them to write a new paper? What ideas were examined and discarded along the way, so that I might know the reality of the research? What is the impact of the scientist as an individual, rather than the impact of the paper he or she wrote? When we can see the scientist as a whole, we’re less prone to relying on reputation alone to assess merit.
  • Multidimensionality is one of the only counters to the Matthew Effect we have available. In forums where this kind of meritocracy prevails over seniority, like Linux or Wikipedia, the Matthew Effect is much less pronounced. And we have the capacity to measure each of these individual factors of a scientist’s work, using the basic discourse of the Web: the blog, the wiki, the comment, the trackback. We can find out who is talented in a lab, not just who was smart enough to hire that talent. As we develop the ability to measure multiple dimensions of scientific knowledge creation, dissemination, and re-use, we open up a new way to recognize excellence. What we can measure, we can value.
  •  
    When it comes to scientific publishing and fame, the rich get richer and the poor get poorer. How can we break this feedback loop?

BBC News - Cleaners 'worth more to society' than bankers - study - 0 views

  • The research, carried out by think tank the New Economics Foundation, says hospital cleaners create £10 of value for every £1 they are paid. It claims bankers are a drain on the country because of the damage they caused to the global economy. They reportedly destroy £7 of value for every £1 they earn. Meanwhile, senior advertising executives are said to "create stress". The study says they are responsible for campaigns which create dissatisfaction and misery, and encourage over-consumption.
  • And tax accountants damage the country by devising schemes to cut the amount of money available to the government, the research suggests. By contrast, child minders and waste recyclers are doing jobs that create net wealth for the country.
  • a new form of job evaluation to calculate the total contribution various jobs make to society, including for the first time the impact on communities and environment.
  • "Pay levels often don't reflect the true value that is being created. As a society, we need a pay structure which rewards those jobs that create most societal benefit rather than those that generate profits at the expense of society and the environment".
  • "The point we are making is more fundamental - that there should be a relationship between what we are paid and the value our work generates for society. We've found a way to calculate that,"
  • The research also makes a variety of policy recommendations to align pay more closely with the value of work. These include establishing a high pay commission, building social and environmental value into prices, and introducing more progressive taxation.

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients who had the syndrome with a group of 650 (matched for sex and age) who didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be deemed “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
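Fisher's fertilizer logic lends itself to a worked illustration. Below is a minimal Python sketch (the yield numbers are invented purely for illustration) that estimates a P value by permutation: under the null hypothesis, the "fertilized" and "unfertilized" labels are interchangeable, so the P value is simply how often a random relabeling produces a gap in mean yield at least as large as the observed one.

```python
import random

# Invented plot yields, purely for illustration
fertilized   = [21.1, 19.8, 22.4, 20.9, 21.7]
unfertilized = [19.2, 20.1, 18.8, 19.9, 19.5]
observed = sum(fertilized) / 5 - sum(unfertilized) / 5

pooled, extreme, trials = fertilized + unfertilized, 0, 100_000
for _ in range(trials):
    random.shuffle(pooled)  # relabel plots at random: the "no effect" world
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if abs(diff) >= abs(observed):
        extreme += 1

# The P value: chance of a gap this large if fertilizer did nothing.
# By Fisher's convention, a value below .05 is declared "statistically significant".
print(extreme / trials)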
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretation?
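A short numeric sketch, with deliberately invented rates, shows why the transposed conditional matters. Suppose only 10 percent of hypotheses tested are actually true, tests have 80 percent power, and the usual .05 threshold is used; Bayes' rule then gives the probability that a "significant" result reflects a real effect:

```python
# All three numbers are assumptions for illustration, not data from the article
prior_real = 0.10   # fraction of tested hypotheses that are actually true
power      = 0.80   # chance a real effect yields p < .05
alpha      = 0.05   # chance a null effect yields p < .05 anyway

p_significant    = prior_real * power + (1 - prior_real) * alpha
p_real_if_signif = prior_real * power / p_significant
print(round(p_real_if_signif, 2))  # 0.64
```

Under these assumed rates, 36 percent of "significant" findings are flukes: nothing like the "95 percent certain" reading. This is one way to answer the question raised in the comment above — the statistic is well defined, but the inverted interpretation is where the fallacy enters.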
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
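The sample-size point can be made concrete with a two-proportion z-test. The cure rates below are invented and differ by a clinically meaningless 0.2 percentage points (about two extra cures per thousand patients), yet a large enough trial stamps the difference "significant":

```python
import math

def two_sided_p(p1, p2, n):
    """Two-proportion z-test with n patients per arm; normal approximation."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Invented cure rates: 30.2% for the new drug vs 30.0% for the old one
print(two_sided_p(0.302, 0.300, 10_000))     # ~0.76: not significant
print(two_sided_p(0.302, 0.300, 1_000_000))  # ~0.002: "significant", same tiny effect
```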
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
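The multiplicity arithmetic is simple enough to sketch. With each test run at α = .05, the chance of at least one false positive among m independent tests is 1 − 0.95^m; the snippet also shows the standard Bonferroni remedy of dividing α by the number of tests:

```python
alpha = 0.05
for m in (1, 10, 85, 100):  # 85 matches the gene-variant study described above
    fwer = 1 - (1 - alpha) ** m  # chance of >= 1 fluke among m independent tests
    print(f"{m:>3} tests: P(at least one false positive) = {fwer:.3f}")
# Prints 0.050, 0.401, 0.987, 0.994

# Bonferroni correction: test each hypothesis at alpha/m to cap the family-wise rate
print(f"per-test threshold for 85 tests: {alpha / 85:.5f}")  # 0.00059
```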
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
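For perspective, the raw Avandia counts quoted above are easy to check. The arithmetic below simply restates them as a risk ratio and an absolute difference, which underscores the article's point that the "increased risk" conclusion emerged only after the later statistical adjustments, not from the raw counts themselves:

```python
# Raw counts quoted above: heart attacks per 10,000 patients in the combined trials
avandia, comparison, denom = 55, 59, 10_000

risk_ratio = (avandia / denom) / (comparison / denom)
abs_diff = (comparison - avandia) / denom
print(f"raw risk ratio: {risk_ratio:.2f}")            # 0.93: slightly *lower* raw risk
print(f"absolute difference: {abs_diff * 100:.2f}%")  # 0.04 percentage points
```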
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
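A standard worked example of that diagnostic point, with invented but typical numbers: even a highly sensitive test yields mostly false positives when the disease is rare, because the prior probability (the prevalence) dominates the calculation.

```python
# Invented screening-test numbers, chosen only to illustrate Bayes' theorem
prevalence  = 0.01   # prior probability: 1% of the population has the disease
sensitivity = 0.99   # P(test positive | disease)
specificity = 0.95   # P(test negative | no disease)

p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
p_disease_given_positive = prevalence * sensitivity / p_positive
print(round(p_disease_given_positive, 3))  # 0.167: five of six positives are false alarms
```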
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics

The Guardian - 0 views

  • We can't expect people to be either as self-denying as conservatives or as altruistic as liberals seem to want. The question: What can Darwin teach us about morality?
  • to some extent, we are a species with an evolved psychology. Like other animals, we have inherited behavioural tendencies from our ancestors, since these were adaptive for them in the sense that they tended to lead to reproductive success in past environments.
  • It does not follow that we should now do whatever maximises our ability to reproduce and pass down our genes. For example, evolution may have honed us to desire and enjoy sex, through a process in which creatures that did so reproduced more often than their evolutionary competitors. But evolution has not equipped us with an abstract desire to pass down our genes.
  • All other things being equal, we should act in accordance with the desires that we actually have, in this case the desire for sex. We may also desire to have children, but perhaps only one or two: in that case, we should act in such a way as to have as much sex as possible while also producing children in this small number.
  • Generally speaking, it is rational for us to act in ways that accord with our reflectively-endorsed desires or values, rather than in ways that maximise our reproductive chances or in whatever ways we tend to respond without thinking. If we value the benefits of social living, this may require that we support and conform to socially-developed norms of conduct that constrain individuals from acting in ruthless pursuit of self-interest.
  • Admittedly, our evolved nature may affect this, in the sense that any workable system of moral norms must be practical for the needs of beings like us, who are, it seems, naturally inclined to be neither angelically selfless nor utterly uncaring about others. Thus, our evolved psychology may impose limits on what real-world moral systems can realistically demand of human beings, perhaps defeating some of the more extreme ambitions of both conservatives and liberals. It may not be realistic to expect each other to be either as self-denying as moral conservatives seem to want or as altruistic as some liberals seem to want.
  • realistic moral systems will allow considerable scope for individuals to act in accordance with whatever they actually value. However, they will also impose constraints, since truly ruthless competition among individuals would lead to widespread insecurity, suffering, and disorder. Allowing it would be inconsistent with many values that most of us adhere to, on reflection, such as the values of loving and trusting relationships, social survival, and the amelioration of suffering in the world. If, however, we are social animals that already have an evolved sympathetic responsiveness to each other, the yoke of a realistic moral system may be relatively light for most of us most of the time.
  •  
    Morality, with limits | Russell Blackford | guardian.co.uk Comment | Thu 18 Mar 2010 09:00 GMT

Roger Pielke Jr.'s Blog: Core Questions in the Governance of Innovation - 0 views

  • Today's NYT has a couple interesting articles about technological innovations that we may not want, and that we may wish to regulate in some manner, formally or informally.  These technologies suggest some core questions that lie at the heart of the management of innovation.
  • The first article discusses Google's Goggles, an application that allows people to search the internet based on an image taken by a smartphone. Google has decided not to allow this technology to include face recognition in its software, even though people have requested it.
  • Google could have put face recognition into the Goggles application; indeed, many users have asked for it. But Google decided against it because smartphones can be used to take pictures of individuals without their knowledge, and a face match could retrieve all kinds of personal information — name, occupation, address, workplace.
  • “It was just too sensitive, and we didn’t want to go there,” said Eric E. Schmidt, the chief executive of Google. “You want to avoid enabling stalker behavior.”
  • The second article focuses on innovations in high frequency trading in financial markets, which bears some responsibility for the so-called "flash crash" of May 6th last year, in which the DJIA plunged more than 700 points in just minutes.
  • One debate has focused on whether some traders are firing off fake orders thousands of times a second to slow down exchanges and mislead others. Michael Durbin, who helped build high-frequency trading systems for companies like Citadel and is the author of the book “All About High-Frequency Trading,” says that most of the industry is legitimate and benefits investors. But, he says, the rules need to be strengthened to curb some disturbing practices.
  • This situation raises what I see to be core questions in the governance of innovation: to what degree can innovation be shaped for achieving intended purposes? And to what degree can the consequences of innovation be anticipated?

Evidence: A Seductive but Slippery Concept - The Scientist - Magazine of the Life Sciences - 0 views

  • Much of what we know is wrong—or at least not definitively established to be right.
  • there were different schools of evidence-based medicine, reminding me of the feuding schools of psychoanalysis. For some it meant systematic reviews of well-conducted trials. For others it meant systematically searching for all evidence and then combining the evidence that passed a predefined quality hurdle. Quantification was essential for some but unimportant for others, and the importance of “clinical experience” was disputed.
  • There was also a backlash. Many doctors resented bitterly the implication that medicine had not always been based on evidence, while others saw unworthy people like statisticians and epidemiologists replacing the magnificence of clinicians. Many doctors thought evidence-based medicine a plot driven by insurance companies, politicians, and administrators in order to cut costs.
  • The discomfort of many clinicians comes from the fact that the data are derived mainly from clinical trials, which exclude the elderly and people with multiple problems. Yet in the “real world” of medicine, particularly general practice, most patients are elderly and most have multiple problems. So can the “evidence” be applied to these patients? Unthinking application of multiple evidence-based guidelines may cause serious problems, says Mike Rawlins, chairman of NICE.
  • There has always been anxiety that the zealots would insist evidence was all that was needed to make a decision, and in its early days NICE seemed to take this line. Critics quickly pointed out, however, that patients had things called values, as did clinicians, and that clinicians and patients needed to blend their values with the evidence in a way that was often a compromise.
  • Social scientists have tended to be wary of the reductionist approach of evidence-based medicine and have wanted a much broader range of information to be admissible.
  • Evidence-based medicine has been at its most confident when evaluating drug treatments, but many interventions in health care are far more complex than simply prescribing a drug. Insisting on randomized trials to evaluate these interventions may not only be inappropriate, but also misleading. Interventions may be stamped “ineffective” by the hardliners when they actually might offer substantial benefits. Then there is the constant confusion between “evidence of absence of effectiveness” and “absence of evidence of effectiveness” — two very different things.
  • even some of the strongest proponents of evidence-based medicine have become uneasy, as we have increasing evidence that drug companies have managed to manipulate data. In the heartland of evidence-based medicine—drug trials—the “evidence” may be unreliable and misleading.
  • All this doesn’t mean that evidence-based medicine should be abandoned. It means, rather, that we must never forget the complex relationship between evidence and truth.