
New Media Ethics 2009 course / Group items tagged Ranking


Weiye Loh

New voting methods and fair elections : The New Yorker

  • history of voting math comes mainly in two chunks: the period of the French Revolution, when some members of France’s Academy of Sciences tried to deduce a rational way of conducting elections, and the nineteen-fifties onward, when economists and game theorists set out to show that this was impossible
  • The first mathematical account of vote-splitting was given by Jean-Charles de Borda, a French mathematician and a naval hero of the American Revolutionary War. Borda concocted examples in which one knows the order in which each voter would rank the candidates in an election, and then showed how easily the will of the majority could be frustrated in an ordinary vote. Borda’s main suggestion was to require voters to rank candidates, rather than just choose one favorite, so that a winner could be calculated by counting points awarded according to the rankings. The key idea was to find a way of taking lower preferences, as well as first preferences, into account. Unfortunately, this method may fail to elect the majority’s favorite—it could, in theory, elect someone who was nobody’s favorite. It is also easy to manipulate by strategic voting.
  • If the candidate who is your second preference is a strong challenger to your first preference, you may be able to help your favorite by putting the challenger last. Borda’s response was to say that his system was intended only for honest men.
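The point count Borda proposed can be sketched in a few lines. This is a hypothetical illustration (the candidate names and vote counts are invented), using the common convention that with n candidates a ballot awards n-1 points to its first choice down to 0 for its last. The example also reproduces the failure mode described above: the majority's favorite loses on points.

```python
from collections import defaultdict

def borda_count(ballots):
    """Each ballot ranks candidates best-first; with n candidates it
    awards n-1 points to the first choice, n-2 to the second, ..., 0 to the last."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, candidate in enumerate(ballot):
            scores[candidate] += n - 1 - rank
    return dict(scores)

# A is the first choice of 3 voters out of 5 (a majority), yet B wins:
ballots = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
print(borda_count(ballots))  # {'A': 6, 'B': 7, 'C': 2}
```

The same tally also makes the strategic-voting weakness easy to see: if A's supporters insincerely rank B last, B's total drops and A wins.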
  • ...15 more annotations...
  • After the Academy dropped Borda’s method, it plumped for a simple suggestion by the astronomer and mathematician Pierre-Simon Laplace, who was an important contributor to the theory of probability. Laplace’s rule insisted on an over-all majority: at least half the votes plus one. If no candidate achieved this, nobody was elected to the Academy.
  • Another early advocate of proportional representation was John Stuart Mill, who, in 1861, wrote about the critical distinction between “government of the whole people by the whole people, equally represented,” which was the ideal, and “government of the whole people by a mere majority of the people exclusively represented,” which is what winner-takes-all elections produce. (The minority that Mill was most concerned to protect was the “superior intellects and characters,” who he feared would be swamped as more citizens got the vote.)
  • The key to proportional representation is to enlarge constituencies so that more than one winner is elected in each, and then try to align the share of seats won by a party with the share of votes it receives. These days, a few small countries, including Israel and the Netherlands, treat their entire populations as single constituencies, and thereby get almost perfectly proportional representation. Some places require a party to cross a certain threshold of votes before it gets any seats, in order to filter out extremists.
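One common way to align seat shares with vote shares (not named in the piece) is the D'Hondt highest-averages rule. A minimal sketch, with invented parties, vote totals, and a 5% threshold of the kind described above:

```python
def dhondt(votes, seats, threshold=0.0):
    """Allocate `seats` by the D'Hondt rule: repeatedly award the next
    seat to the party with the highest quotient votes/(seats_won + 1).
    Parties below `threshold` (as a fraction of all votes) get nothing."""
    total = sum(votes.values())
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    won = {p: 0 for p in eligible}
    for _ in range(seats):
        best = max(eligible, key=lambda p: eligible[p] / (won[p] + 1))
        won[best] += 1
    return won

votes = {"Red": 49_000, "Blue": 33_000, "Green": 14_500, "Fringe": 3_500}
print(dhondt(votes, seats=10, threshold=0.05))
# {'Red': 5, 'Blue': 4, 'Green': 1} -- Fringe, at 3.5%, is filtered out
```

With 10 seats the allocation tracks the vote shares closely; as the number of seats per constituency shrinks, the same rule drifts further from proportionality, which is one reason district size matters so much in these systems.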
  • The main criticisms of proportional representation are that it can lead to unstable coalition governments, because more parties are successful in elections, and that it can weaken the local ties between electors and their representatives. Conveniently for its critics, and for its defenders, there are so many flavors of proportional representation around the globe that you can usually find an example of whatever point you want to make. Still, more than three-quarters of the world’s rich countries seem to manage with such schemes.
  • The alternative voting method that will be put to a referendum in Britain is not proportional representation: it would elect a single winner in each constituency, and thus steer clear of what foreigners put up with. Known in the United States as instant-runoff voting, the method was developed around 1870 by William Ware
  • In instant-runoff elections, voters rank all or some of the candidates in order of preference, and votes may be transferred between candidates. The idea is that your vote may count even if your favorite loses. If any candidate gets more than half of all the first-preference votes, he or she wins, and the game is over. But, if there is no majority winner, the candidate with the fewest first-preference votes is eliminated. Then the second-preference votes of his or her supporters are distributed to the other candidates. If there is still nobody with more than half the votes, another candidate is eliminated, and the process is repeated until either someone has a majority or there are only two candidates left, in which case the one with the most votes wins. Third, fourth, and lower preferences will be redistributed if a voter’s higher preferences have already been transferred to candidates who were eliminated earlier.
  • At first glance, this is an appealing approach: it is guaranteed to produce a clear winner, and more voters will have a say in the election’s outcome. Look more closely, though, and you start to see how peculiar the logic behind it is. Although more people’s votes contribute to the result, they do so in strange ways. Some people’s second, third, or even lower preferences count for as much as other people’s first preferences. If you back the loser of the first tally, then in the subsequent tallies your second (and maybe lower) preferences will be added to that candidate’s first preferences. The winner’s pile of votes may well be a jumble of first, second, and third preferences.
  • Such transferrable-vote elections can behave in topsy-turvy ways: they are what mathematicians call “non-monotonic,” which means that something can go up when it should go down, or vice versa. Whether a candidate who gets through the first round of counting will ultimately be elected may depend on which of his rivals he has to face in subsequent rounds, and some votes for a weaker challenger may do a candidate more good than a vote for that candidate himself. In short, a candidate may lose if certain voters back him, and would have won if they hadn’t. Supporters of instant-runoff voting say that the problem is much too rare to worry about in real elections, but recent work by Robert Norman, a mathematician at Dartmouth, suggests otherwise. By Norman’s calculations, it would happen in one in five close contests among three candidates who each have between twenty-five and forty per cent of first-preference votes. With larger numbers of candidates, it would happen even more often. It’s rarely possible to tell whether past instant-runoff elections have gone topsy-turvy in this way, because full ballot data aren’t usually published. But, in Burlington’s 2006 and 2009 mayoral elections, the data were published, and the 2009 election did go topsy-turvy.
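The instant-runoff count described above, and the non-monotonic flip it allows, can be sketched as follows. The two election profiles are invented for illustration: in the second profile, two voters have raised A from last place to first, and that is precisely what costs A the win.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant runoff: count first preferences among still-active
    candidates; if no one has a majority, eliminate the last-placed
    candidate and recount, transferring ballots to their next choice."""
    active = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter({c: 0 for c in active})
        for ballot in ballots:
            for choice in ballot:
                if choice in active:  # highest surviving preference
                    tally[choice] += 1
                    break
        leader, votes = tally.most_common(1)[0]
        if 2 * votes > sum(tally.values()) or len(active) == 2:
            return leader
        active.remove(min(tally, key=tally.get))

before = [["A", "B", "C"]] * 39 + [["B", "C", "A"]] * 31 + [["C", "A", "B"]] * 30
after  = [["A", "B", "C"]] * 41 + [["B", "C", "A"]] * 29 + [["C", "A", "B"]] * 30
print(irv_winner(before), irv_winner(after))  # A C
```

In the first profile C is eliminated and A beats B 69 to 31. In the second, A's two extra first preferences push B below C, so B is eliminated instead, B's ballots transfer to C, and C wins 59 to 41: A loses because more voters ranked A first.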
  • Kenneth Arrow, an economist at Stanford, examined a set of requirements that you’d think any reasonable voting system could satisfy, and proved that nothing can meet them all when there are more than two candidates. So designing elections is always a matter of choosing a lesser evil. When the Royal Swedish Academy of Sciences awarded Arrow a Nobel Prize, in 1972, it called his result “a rather discouraging one, as regards the dream of a perfect democracy.” Szpiro goes so far as to write that “the democratic world would never be the same again.”
  • There is something of a loophole in Arrow’s demonstration. His proof applies only when voters rank candidates; it would not apply if, instead, they rated candidates by giving them grades. First-past-the-post voting is, in effect, a crude ranking method in which voters put one candidate in first place and everyone else last. Similarly, in the standard forms of proportional representation voters rank one party or group of candidates first, and all other parties and candidates last. With rating methods, on the other hand, voters would give all or some candidates a score, to say how much they like them. They would not have to say which is their favorite—though they could in effect do so, by giving only him or her their highest score—and they would not have to decide on an order of preference for the other candidates.
  • One such method is widely used on the Internet—to rate restaurants, movies, books, or other people’s comments or reviews, for example. You give numbers of stars or points to mark how much you like something. To convert this into an election method, count each candidate’s stars or points, and the winner is the one with the highest average score (or the highest total score, if voters are allowed to leave some candidates unrated). This is known as range voting, and it goes back to an idea considered by Laplace at the start of the nineteenth century. It also resembles ancient forms of acclamation in Sparta. The more you like something, the louder you bash your shield with your spear, and the biggest noise wins. A recent variant, developed by two mathematicians in Paris, Michel Balinski and Rida Laraki, uses familiar language rather than numbers for its rating scale. Voters are asked to grade each candidate as, for example, “Excellent,” “Very Good,” “Good,” “Insufficient,” or “Bad.” Judging politicians thus becomes like judging wines, except that you can drive afterward.
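A range-voting tally can be sketched as below (candidate names and scores invented). It supports both conventions mentioned above: highest total score, or highest average among the voters who rated a candidate.

```python
def range_winner(ballots, use_average=False):
    """Each ballot maps the candidates a voter chose to rate to a score.
    Winner: highest total score, or highest average among raters."""
    totals, raters = {}, {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
            raters[candidate] = raters.get(candidate, 0) + 1
    if use_average:
        return max(totals, key=lambda c: totals[c] / raters[c])
    return max(totals, key=totals.get)

ballots = [
    {"Nader": 5, "Gore": 4, "Bush": 0},
    {"Gore": 5, "Bush": 2},  # this voter left Nader unrated
    {"Bush": 5, "Gore": 1},
]
print(range_winner(ballots))                    # Gore (total 10)
print(range_winner(ballots, use_average=True))  # Nader (average 5.0)
```

Approval voting is the special case where scores are restricted to 0 or 1. The split result in the example shows why the two conventions can disagree once voters are allowed to leave candidates unrated: a candidate rated highly by only one voter can top the averages.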
  • Range and approval voting deal neatly with the problem of vote-splitting: if a voter likes Nader best, and would rather have Gore than Bush, he or she can approve Nader and Gore but not Bush. Above all, their advocates say, both schemes give voters more options, and would elect the candidate with the most over-all support, rather than the one preferred by the largest minority. Both can be modified to deliver forms of proportional representation.
  • Whether such ideas can work depends on how people use them. If enough people are carelessly generous with their approval votes, for example, there could be some nasty surprises. In an unlikely set of circumstances, the candidate who is the favorite of more than half the voters could lose. Parties in an approval election might spend less time attacking their opponents, in order to pick up positive ratings from rivals’ supporters, and critics worry that it would favor bland politicians who don’t stand for anything much. Defenders insist that such a strategy would backfire in subsequent elections, if not before, and the case of Ronald Reagan suggests that broad appeal and strong views aren’t mutually exclusive.
  • Why are the effects of an unfamiliar electoral system so hard to puzzle out in advance? One reason is that political parties will change their campaign strategies, and voters the way they vote, to adapt to the new rules, and such variables put us in the realm of behavior and culture. Meanwhile, the technical debate about electoral systems generally takes place in a vacuum from which voters’ capriciousness and local circumstances have been pumped out. Although almost any alternative voting scheme now on offer is likely to be better than first past the post, it’s unrealistic to think that one voting method would work equally well for, say, the legislature of a young African republic, the Presidency of an island in Oceania, the school board of a New England town, and the assembly of a country still scarred by civil war. If winner takes all is a poor electoral system, one size fits all is a poor way to pick its replacements.
  • Mathematics can suggest what approaches are worth trying, but it can’t reveal what will suit a particular place, and best deliver what we want from a democratic voting system: to create a government that feels legitimate to people—to reconcile people to being governed, and give them reason to feel that, win or lose (especially lose), the game is fair.
  • WIN OR LOSE: No voting system is flawless. But some are less democratic than others. By Anthony Gottlieb
Weiye Loh

Google's Fight Against 'Low-Quality' Sites Continues - Slashdot

  • "A couple weeks ago, JC Penney made the news for plummeting in Google rankings for everything from 'area rugs' to 'grommet top curtains.' Turns out the retail site had a number of suspicious links pointing at it that could be traced back to a link network intended to manipulate Google's ranking algorithms. Now, Overstock.com has lost rankings for another type of link that Google finds to be manipulation of their algorithms. This situation has led Google to implement a significant change to their search algorithms, affecting almost 12% of queries in an effort to cull content farms and other webspam. And in the midst of all of this, a company with substantial publicity lately for running a paid link network announces they are getting out of the link business entirely."
Weiye Loh

On newspapers' online comments « Yawning Bread Sampler 2

  • Assistant Professor Mark Cenite of Nanyang Technological University’s Wee Kim Wee School of Communication and Information said: ‘This approach allows users to moderate themselves, and the news site is seen as being sensitive to readers’ values.’
  • But Mr Alex Au, who runs socio-political blog Yawning Bread, cautioned that this could lead to astroturfing. The term, derived from a brand of fake grass, refers to a fake grassroots movement in which a group wishing to push its agenda sends out manipulated and replicated online messages in support of a certain policy or issue. His suggestion: user tiers, in which comments by users with verified identities are displayed visibly and anonymous comments less conspicuously. He said: ‘This approach does not bar people from speaking up, but weighs in by signalling the path towards responsible participation.’
  • what is astroturfing? It is when a few people do one or both of two things: create multiple identities for each of themselves and flood a forum or topic with similar opinions, or get their friends to post boilerplate letters (expressing similar opinions of course) even if they do not totally share them to the same degree. The intent is to create an impression that a certain opinion is more widely held than is actually the case.
  • ...4 more annotations...
  • user-rating will have the tendency of giving prominence to widely-shared opinion. Comments expressing unpopular opinions will get fewer “stars” from other readers and sink in display priority. In theory, it doesn’t have to be so. People may very well give “stars” to well-thought-out comments that argue cogently for a view they don’t agree with, lauding the quality of expression rather than the conclusion, but let’s get real. Most people like to hear what they already believe. That being the case, the effect of such a scheme would be to crowd out unpopular opinions even if they have merit; it produces a majoritarian effect in newspapers’ comments sections.
  • it is open to abuse in that a small group of people wanting to push a particular opinion could repeatedly vote for a certain comment, thereby giving it increased ranking and more prominent display. Such action would be akin to astroturfing.
  • The value of discussion lies not in hearing what we already know or what we already believe in. It lies in hearing alternative arguments and learning new facts. Structuring a discussion forum by giving prominence to merely popular opinion just makes it an echo chamber. The greater public purpose is better served when contrary opinion is aired. That is why I disagree with a scheme whereby users apply ratings and prominence is given to highly-rated comments.
    • Weiye Loh
       
      But the majority of users who participate in online activism/slacktivism are very much the young, Western-educated folks. This in itself already makes the online social sphere an echo chamber, doesn't it?
  • Anonymous comments have their uses. Most obviously, there will be times when whistle-blowing serves the public purpose, and so, even if displayed less prominently, they should still be allowed.
  • A popular suggestion among media watchers interviewed is to let users rate the comments and display the highly ranked ones prominently.
Weiye Loh

Asia Times Online :: Southeast Asia news and business from Indonesia, Philippines, Thai...

  • rather than being forced to wait for parliament to meet to air their dissent, now opposition parties are able to post pre-emptively their criticisms online, shifting the time and space of Singapore's political debate
  • Singapore's People's Action Party (PAP)-dominated politics are increasingly being contested online and over social media like blogs, Facebook and Twitter. Pushed by the perceived pro-PAP bias of the mainstream media, Singapore's opposition parties are using various new media to communicate with voters and express dissenting views. Alternative news websites, including The Online Citizen and Temasek Review, have won strong followings by presenting more opposition views in their news mix.
  • Despite its democratic veneer, Singapore rates poorly in global press freedom rankings due to a deeply entrenched culture of self-censorship and a pro-state bias in the mainstream media. Reporters Without Borders, a France-based press freedom advocacy group, recently ranked Singapore 136th in its global press freedom rankings, scoring below repressive countries like Iraq and Zimbabwe. The country's main media publishing house, Singapore Press Holdings, is owned by the state and its board of directors is made up largely of PAP members or other government-linked executives. Senior newspaper editors, including at the Straits Times, must be vetted and approved by the PAP-led government.
  • ...3 more annotations...
  • The local papers have a long record of publicly endorsing the PAP-led government's position, according to Tan Tarn How, a research fellow at the Institute of Policy Studies (IPS) and himself a former journalist. In his research paper "Singapore's print media policy - a national success?" published last year he quoted Leslie Fong, a former editor of the Straits Times, saying that the press "should resist the temptation to arrogate itself the role of a watchdog, or permanent critic, of the government of the day".
  • With regularly briefed and supportive editors, there is no need for pre-publication censorship, according to Tan. When the editors are perceived to get things "wrong", the government frequently takes to task, either publicly or privately, the newspaper's editors or individual journalists, he said.
  • The country's main newspaper, the Straits Times, has consistently stood by its editorial decision-making. Editor Han Fook Kwang said last year: "Our circulation is 380,000 and we have a readership of 1.4 million - these are people who buy the paper every day. We're aware people say we're a government mouthpiece or that we are biased but the test is if our readers believe in the paper and continue to buy it."
Weiye Loh

Why do we care where we publish?

  • being both a working scientist and a science writer gives me a unique perspective on science, scientific publications, and the significance of scientific work. The final disclosure should be that I have never published in any of the top rank physics journals or in Science, Nature, or PNAS. I don't believe I have an axe to grind about that, but I am also sure that you can ascribe some of my opinions to PNAS envy.
  • If you asked most scientists what their goals were, the answer would boil down to the generation of new knowledge. But, at some point, science and scientists have to interact with money and administrators, which has significant consequences for science. For instance, when trying to employ someone to do a job, you try to objectively decide if the skills set of the prospective employee matches that required to do the job. In science, the same question has to be asked—instead of being asked once per job interview, however, this question gets asked all the time.
  • Because science requires funding, and no one gets a lifetime dollop-o-cash to explore their favorite corner of the universe. So, the question gets broken down to "how competent is the scientist?" "Is the question they want to answer interesting?" "Do they have the resources to do what they say they will?" We will ignore the last question and focus on the first two.
  • ...17 more annotations...
  • How can we assess the competence of a scientist? Past performance is, realistically, the only way to judge future performance. Past performance can only be assessed by looking at their publications. Were they in a similar area? Are they considered significant? Are they numerous? Curiously, though, the second question is also answered by looking at publications—if a topic is considered significant, then there will be lots of publications in that area, and those publications will be of more general interest, and so end up in higher ranking journals.
  • So we end up in the situation that the editors of major journals are in the position to influence the direction of scientific funding, meaning that there is a huge incentive for everyone to make damn sure that their work ends up in Science or Nature. But why are Science, Nature, and PNAS considered the place to put significant work? Why isn't a new optical phenomenon, published in Optics Express, as important as a new optical phenomenon published in Science?
  • The big three try to be general; they will, in principle, publish reports from any discipline, and they anticipate readership from a range of disciplines. This explicit generality means that the scientific results must not only be of general interest, but also highly significant. The remaining journals become more specialized, covering perhaps only physics, or optics, or even just optical networking. However, they all claim to only publish work that is highly original in nature.
  • Are standards really so different? Naturally, the more specialized a journal is, the fewer people it appeals to. However, the major difference in determining originality is one of degree and referee. A more specialized journal has more detailed articles, so the differences between experiments stand out more obviously, while appealing to general interest changes the emphasis of the article away from details toward broad conclusions.
  • as the audience becomes broader, more technical details get left by the wayside. Note that none of the gene sequences published in Science have the actual experimental and analysis details. What ends up published is really a broad-brush description of the work, with the important details either languishing as supplemental information, or even published elsewhere, in a more suitable journal. Yet, the high profile paper will get all the citations, while the more detailed—the unkind would say accurate—description of the work gets no attention.
  • And that is how journals are ranked. Count the number of citations for each journal per volume, run it through a magic number generator, and the impact factor jumps out (make your checks out to ISI Thomson please). That leaves us with the following formula: grants require high impact publications, high impact publications need citations, and that means putting research in a journal that gets lots of citations. Grants follow the concepts that appear to be currently significant, and that's decided by work that is published in high impact journals.
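The "magic number generator" is, roughly, the two-year impact factor. A simplified sketch with invented counts (the real calculation has additional rules about what qualifies as a citable item):

```python
def impact_factor(citations, citable_items, year):
    """Approximate two-year impact factor for `year`: citations received
    in `year` to articles the journal published in the two preceding
    years, divided by the number of citable items from those years."""
    cites = sum(citations[year].get(y, 0) for y in (year - 1, year - 2))
    items = sum(citable_items.get(y, 0) for y in (year - 1, year - 2))
    return cites / items

# Hypothetical journal: in 2011 it drew 200 citations to its
# 2009-2010 papers (citations to older work don't count).
citations = {2011: {2010: 120, 2009: 80, 2008: 40}}
citable_items = {2010: 60, 2009: 40}
print(impact_factor(citations, citable_items, 2011))  # 2.0
```

The two-year window is worth noticing: it rewards fields and papers that are cited quickly, which is part of why a slow-burning specialist paper can out-cite a flashy one years later without ever helping its journal's number.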
  • This system would be fine if it did not ignore the fact that performing science and reporting scientific results are two very different skills, and not everyone has both in equal quantity. The difference between a Nature-worthy finding and a not-Nature-worthy finding is often in the quality of the writing. How skillfully can I relate this bit of research back to general or topical interests? It really is this simple. Over the years, I have seen quite a few physics papers with exaggerated claims of significance (or even results) make it into top flight journals, and the only differences I can see between those works and similar works published elsewhere is that the presentation and level of detail are different.
  • articles from the big three are much easier to cover on Nobel Intent than articles from, say Physical Review D. Nevertheless, when we do cover them, sometimes the researchers suddenly realize that they could have gotten a lot more mileage out of their work. It changes their approach to reporting their results, which I see as evidence that writing skill counts for as much as scientific quality.
  • If that observation is generally true, then it raises questions about the whole process of evaluating a researcher's competence and a field's significance, because good writers corrupt the process by publishing less significant work in journals that only publish significant findings. In fact, I think it goes further than that, because Science, Nature, and PNAS actively promote themselves as scientific compasses. Want to find the most interesting and significant research? Read PNAS.
  • The publishers do this by extensively publicizing science that appears in their own journals. Their news sections primarily summarize work published in the same issue of the same magazine. This lets them create a double-whammy of scientific significance—not only was the work published in Nature, they also summarized it in their News and Views section.
  • Furthermore, the top three work very hard at getting other journalists to cover their articles. This is easy to see by simply looking at Nobel Intent's coverage. Most of the work we discuss comes from Science and Nature. Is this because we only read those two publications? No, but they tell us ahead of time what is interesting in their upcoming issue. They even provide short summaries of many papers that practically guide people through writing the story, meaning reporter Jim at the local daily doesn't need a science degree to cover the science beat.
  • Very few of the other journals do this. I don't get early access to the Physical Review series, even though I love reporting from them. In fact, until this year, they didn't even highlight interesting papers for their own readers. This makes it incredibly hard for a science reporter to cover science outside of the major journals. The knock-on effect is that Applied Physics Letters never appears in the news, which means you can't evaluate recent news coverage to figure out what's of general interest, leaving you with... well, the big three journals again, which mostly report on themselves. On the other hand, if a particular scientific topic does start to receive some press attention, it is much more likely that similar work will suddenly be acceptable in the big three journals.
  • That said, I should point out that judging the significance of scientific work is a process fraught with difficulty. Why do you think it takes around 10 years from the publication of first results through to obtaining a Nobel Prize? Because it can take that long for the implications of the results to sink in—or, more commonly, sink without trace.
  • I don't think that we can reasonably expect journal editors and peer reviewers to accurately assess the significance (general or otherwise) of a new piece of research. There are, of course, exceptions: the first genome sequences, the first observation that the rate of the expansion of the universe is changing. But the point is that these are exceptions, and most work's significance is far more ambiguous, and even goes unrecognized (or over-celebrated) by scientists in the field.
  • The conclusion is that the top three journals are significantly gamed by scientists who are trying to get ahead in their careers—citations always lag a few years behind, so a PNAS paper with less than ten citations can look good for quite a few years, even compared to an Optics Letters with 50 citations. The top three journals overtly encourage this, because it is to their advantage if everyone agrees that they are the source of the most interesting science. Consequently, scientists who are more honest in self-assessing their work, or who simply aren't word-smiths, end up losing out.
  • scientific competence should not be judged by how many citations the author's work has received or where it was published. Instead, we should consider using a mathematical graph analysis to look at the networks of publications and citations, which should help us judge how central to a field a particular researcher is. This would have the positive influence of a publication mattering less than who thought it was important.
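One way to realize that idea is PageRank-style centrality on the citation graph, so that a citation from a heavily cited work counts for more than a citation from an obscure one. This is a sketch over an invented toy network; real analyses would use much richer citation and co-authorship data.

```python
def pagerank(cites, damping=0.85, iters=100):
    """Power-iteration PageRank. `cites` maps each paper to the papers
    it cites; rank flows along citation links, so works cited by other
    highly ranked works end up most central."""
    nodes = set(cites) | {c for cited in cites.values() for c in cited}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for citer, cited in cites.items():
            for target in cited:
                new[target] += damping * rank[citer] / len(cited)
        # papers that cite nothing spread their rank uniformly
        dangling = sum(rank[node] for node in nodes if not cites.get(node))
        for node in nodes:
            new[node] += damping * dangling / n
        rank = new
    return rank

# P3 is cited by both other papers and should come out on top:
cites = {"P1": ["P3"], "P2": ["P3", "P1"], "P3": []}
ranks = pagerank(cites)
print(max(ranks, key=ranks.get))  # P3
```

Under a scheme like this, where a paper appears matters less than who ends up building on it, which is exactly the property being argued for.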
  • Science and Nature should either eliminate their News and Views section, or implement a policy of not reporting on their own articles. This would open up one of the major sources of "science news for scientists" to stories originating in other journals.
Weiye Loh

Google's War on Nonsense - NYTimes.com

  • As a verbal artifact, farmed content exhibits neither style nor substance.
  • The insultingly vacuous and frankly bizarre prose of the content farms — it seems ripped from Wikipedia and translated from the Romanian — cheapens all online information.
  • These prose-widgets are not hammered out by robots, surprisingly. But they are written by writers who work like robots. As recent accounts of life in these words-are-money mills make clear, some content-farm writers have deadlines as frequently as every 25 minutes. Others are expected to turn around reported pieces, containing interviews with several experts, in an hour. Some compose, edit, format and publish 10 articles in a single shift. Many with decades of experience in journalism work 70-hour weeks for salaries of $40,000 with no vacation time. The content farms have taken journalism hackwork to a whole new level.
  • ...6 more annotations...
  • So who produces all this bulk jive? Business Insider, the business-news site, has provided a forum to a half dozen low-paid content farmers, especially several who work at AOL’s enormous Seed and Patch ventures. They describe exhausting and sometimes exploitative writing conditions. Oliver Miller, a journalist with an MFA in fiction from Sarah Lawrence who once believed he’d write the Great American Novel, told me AOL paid him about $28,000 for writing 300,000 words about television, all based on fragments of shows he’d never seen, filed in half-hour intervals, on a graveyard shift that ran from 11 p.m. to 7 or 8 in the morning.
  • Mr. Miller’s job, as he made clear in an article last week in The Faster Times, an online newspaper, was to cram together words that someone’s research had suggested might be in demand on Google, position these strings as titles and headlines, embellish them with other inoffensive words and make the whole confection vaguely resemble an article. AOL would put “Rick Fox mustache” in a headline, betting that some number of people would put “Rick Fox mustache” into Google, and retrieve Mr. Miller’s article. Readers coming to AOL, expecting information, might discover a subliterate wasteland. But before bouncing out, they might watch a video clip with ads on it. Their visits would also register as page views, which AOL could then sell to advertisers.
  • commodify writing: you pay little or nothing to writers, and make readers pay a lot — in the form of their “eyeballs.” But readers get zero back, no useful content.
  • You can’t mess with Google forever. In February, the corporation concocted what it concocts best: an algorithm. The algorithm, called Panda, affects some 12 percent of searches, and it has — slowly and imperfectly — been improving things. Just a short time ago, the Web seemed ungovernable; bad content was driving out good. But Google asserted itself, and credit is due: Panda represents good cyber-governance. It has allowed Google to send untrustworthy, repetitive and unsatisfying content to the back of the class. No more A’s for cheaters.
  • the goal, according to Amit Singhal and Matt Cutts, who worked on Panda, is to “provide better rankings for high-quality sites — sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.”
  • Google officially rolled out Panda 2.2. Put “Whitey Bulger” into Google, and where you might once have found dozens of content farms, today you get links to useful articles from sites ranging from The Boston Globe, The Los Angeles Times, the F.B.I. and even Mashable, doing original analysis of how federal agents used social media to find Bulger. Last month, Demand Media, once the most notorious of the content farms, announced plans to improve quality by publishing more feature articles by hired writers, and fewer by “users” — code for unpaid freelancers. Amazing. Demand Media is stepping up its game.
  •  
    Content farms, which have flourished on the Web in the past 18 months, are massive news sites that use headlines, keywords and other tricks to lure Web-users into looking at ads. These sites confound and embarrass Google by gaming its ranking system. As a business proposition, they once seemed exciting. Last year, The Economist admiringly described Associated Content and Demand Media as cleverly cynical operations that "aim to produce content at a price so low that even meager advertising revenue can support it."
Weiye Loh

The Way We Live Now - Metric Mania - NYTimes.com - 0 views

  • In the realm of public policy, we live in an age of numbers.
  • do we hold an outsize belief in our ability to gauge complex phenomena, measure outcomes and come up with compelling numerical evidence? A well-known quotation usually attributed to Einstein is “Not everything that can be counted counts, and not everything that counts can be counted.” I’d amend it to a less eloquent, more prosaic statement: Unless we know how things are counted, we don’t know if it’s wise to count on the numbers.
  • The problem isn’t with statistical tests themselves but with what we do before and after we run them.
  • ...9 more annotations...
  • First, we count if we can, but counting depends a great deal on previous assumptions about categorization. Consider, for example, the number of homeless people in Philadelphia, or the number of battered women in Atlanta, or the number of suicides in Denver. Is someone homeless if he’s unemployed and living with his brother’s family temporarily? Do we require that a women self-identify as battered to count her as such? If a person starts drinking day in and day out after a cancer diagnosis and dies from acute cirrhosis, did he kill himself? The answers to such questions significantly affect the count.
  • Second, after we’ve gathered some numbers relating to a phenomenon, we must reasonably aggregate them into some sort of recommendation or ranking. This is not easy. By appropriate choices of criteria, measurement protocols and weights, almost any desired outcome can be reached.
  • Are there good reasons the authors picked the criteria they did? Why did they weigh the criteria in the way they did?
  • Since the answer to the last question is usually yes, the problem of reasonable aggregation is no idle matter.
  • These two basic procedures — counting and aggregating — have important implications for public policy. Consider the plan to evaluate the progress of New York City public schools inaugurated by the city a few years ago. While several criteria were used, much of a school’s grade was determined by whether students’ performance on standardized state tests showed annual improvement. This approach risked putting too much weight on essentially random fluctuations and induced schools to focus primarily on the topics on the tests. It also meant that the better schools could receive mediocre grades because they were already performing well and had little room for improvement. Conversely, poor schools could receive high grades by improving just a bit.
  • Medical researchers face similar problems when it comes to measuring effectiveness.
  • Suppose that whenever people contract the disease, they always get it in their mid-60s and live to the age of 75. In the first region, an early screening program detects such people in their 60s. Because these people live to age 75, the five-year survival rate is 100 percent. People in the second region are not screened and thus do not receive their diagnoses until symptoms develop in their early 70s, but they, too, die at 75, so their five-year survival rate is 0 percent. The laissez-faire approach thus yields the same results as the universal screening program, yet if five-year survival were the criterion for effectiveness, universal screening would be deemed the best practice.
  • Because so many criteria can be used to assess effectiveness — median or mean survival times, side effects, quality of life and the like — there is a case to be made against mandating that doctors follow what seems at any given time to be the best practice. Perhaps, as some have suggested, we should merely nudge them with gentle incentives. A comparable tentativeness may be appropriate when devising criteria for effective schools.
  • Arrow’s Theorem, a famous result in mathematical economics, essentially states that no voting system satisfying certain minimal conditions can be guaranteed to always yield a fair or reasonable aggregation of the voters’ rankings of several candidates. A squishier analogue for the field of social measurement would say something like this: No method of measuring a societal phenomenon satisfying certain minimal conditions exists that can’t be second-guessed, deconstructed, cheated, rejected or replaced. This doesn’t mean we shouldn’t be counting — but it does mean we should do so with as much care and wisdom as we can muster.
  •  
    THE WAY WE LIVE NOW Metric Mania
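Paulos's point about "reasonable aggregation" can be made concrete with a toy calculation: the same raw scores yield opposite rankings depending only on how the criteria are weighted. The schools, criteria and numbers below are invented for illustration.

```python
# Hypothetical raw scores (criterion -> school -> score).
scores = {
    "test_gains": {"A": 9, "B": 5, "C": 2},
    "attendance": {"A": 3, "B": 6, "C": 9},
}

def rank(weights):
    """Rank schools by the weighted sum of their criterion scores."""
    total = {s: sum(w * scores[c][s] for c, w in weights.items())
             for s in ("A", "B", "C")}
    return sorted(total, key=total.get, reverse=True)

# Emphasize test gains and school A wins; emphasize attendance and C wins.
print(rank({"test_gains": 0.8, "attendance": 0.2}))  # ['A', 'B', 'C']
print(rank({"test_gains": 0.2, "attendance": 0.8}))  # ['C', 'B', 'A']
```

The inputs never change; only the weights do — which is exactly why "almost any desired outcome can be reached."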
Weiye Loh

Search Optimization and Its Dirty Little Secrets - NYTimes.com - 0 views

  • in the last several months, one name turned up, with uncanny regularity, in the No. 1 spot for each and every term: J. C. Penney. The company bested millions of sites — and not just in searches for dresses, bedding and area rugs. For months, it was consistently at or near the top in searches for “skinny jeans,” “home decor,” “comforter sets,” “furniture” and dozens of other words and phrases, from the blandly generic (“tablecloths”) to the strangely specific (“grommet top curtains”).
  • J. C. Penney even beat out the sites of manufacturers in searches for the products of those manufacturers. Type in “Samsonite carry on luggage,” for instance, and Penney for months was first on the list, ahead of Samsonite.com.
  • the digital age’s most mundane act, the Google search, often represents layer upon layer of intrigue. And the intrigue starts in the sprawling, subterranean world of “black hat” optimization, the dark art of raising the profile of a Web site with methods that Google considers tantamount to cheating.
  • ...8 more annotations...
  • Despite the cowboy outlaw connotations, black-hat services are not illegal, but trafficking in them risks the wrath of Google. The company draws a pretty thick line between techniques it considers deceptive and “white hat” approaches, which are offered by hundreds of consulting firms and are legitimate ways to increase a site’s visibility. Penney’s results were derived from methods on the wrong side of that line, says Mr. Pierce. He described the optimization as the most ambitious attempt to game Google’s search results that he has ever seen.
  • TO understand the strategy that kept J. C. Penney in the pole position for so many searches, you need to know how Web sites rise to the top of Google’s results. We’re talking, to be clear, about the “organic” results — in other words, the ones that are not paid advertisements. In deriving organic results, Google’s algorithm takes into account dozens of criteria, many of which the company will not discuss.
  • But it has described one crucial factor in detail: links from one site to another. If you own a Web site, for instance, about Chinese cooking, your site’s Google ranking will improve as other sites link to it. The more links to your site, especially those from other Chinese cooking-related sites, the higher your ranking. In a way, what Google is measuring is your site’s popularity by polling the best-informed online fans of Chinese cooking and counting their links to your site as votes of approval.
  • But even links that have nothing to do with Chinese cooking can bolster your profile if your site is barnacled with enough of them. And here’s where the strategy that aided Penney comes in. Someone paid to have thousands of links placed on hundreds of sites scattered around the Web, all of which lead directly to JCPenney.com.
  • Who is that someone? A spokeswoman for J. C. Penney, Darcie Brossart, says it was not Penney.
  • “J. C. Penney did not authorize, and we were not involved with or aware of, the posting of the links that you sent to us, as it is against our natural search policies,” Ms. Brossart wrote in an e-mail. She added, “We are working to have the links taken down.”
  • Using an online tool called Open Site Explorer, Mr. Pierce found 2,015 pages with phrases like “casual dresses,” “evening dresses,” “little black dress” or “cocktail dress.” Click on any of these phrases on any of these 2,015 pages, and you are bounced directly to the main page for dresses on JCPenney.com.
  • Some of the 2,015 pages are on sites related, at least nominally, to clothing. But most are not. The phrase “black dresses” and a Penney link were tacked to the bottom of a site called nuclear.engineeringaddict.com. “Evening dresses” appeared on a site called casino-focus.com. “Cocktail dresses” showed up on bulgariapropertyportal.com. “Casual dresses” was on a site called elistofbanks.com. “Semi-formal dresses” was pasted, rather incongruously, on usclettermen.org.
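The "links as votes" idea the article describes can be sketched as a tiny PageRank-style iteration: each page splits its score among the pages it links to, and the scores are recomputed until they settle. The link graph below is invented to mirror the paid-link scheme; Google's real algorithm uses dozens of additional signals.

```python
# Invented link graph: two blogs link to "penney", one also links
# to "samsonite". More inbound link weight -> higher score.
links = {
    "penney": [],
    "samsonite": [],
    "blog1": ["penney"],
    "blog2": ["penney", "samsonite"],
}
damping = 0.85
n = len(links)
rank_score = {p: 1 / n for p in links}

for _ in range(50):
    new = {p: (1 - damping) / n for p in links}
    for page, outs in links.items():
        if not outs:
            # Dangling page: spread its score evenly over all pages.
            for q in links:
                new[q] += damping * rank_score[page] / n
        else:
            for q in outs:
                new[q] += damping * rank_score[page] / len(outs)
    rank_score = new

print(max(rank_score, key=rank_score.get))  # penney
```

Because "penney" collects one and a half link-votes while "samsonite" collects half a vote, it ends up on top — the effect the paid links were buying at scale.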
Weiye Loh

Search Optimization and Its Dirty Little Secrets - NYTimes.com - 0 views

  • Here’s another hypothesis, this one for the conspiracy-minded. Last year, Advertising Age obtained a Google document that listed some of its largest advertisers, including AT&T, eBay and yes, J. C. Penney. The company, this document said, spent $2.46 million a month on paid Google search ads — the kind you see next to organic results.
  • Is it possible that Google was willing to countenance an extensive black-hat campaign because it helped one of its larger advertisers? It’s the sort of question that European Union officials are now studying in an investigation of possible antitrust abuses by Google.
  • Investigators have been asking advertisers in Europe questions like this: “Please explain whether and, if yes, to what extent your advertising spending with Google has ever had an influence on your ranking in Google’s natural search.” And: “Has Google ever mentioned to you that increasing your advertising spending could improve your ranking in Google’s natural search?”
  • ...5 more annotations...
  • Asked if Penney received any breaks because of the money it has spent on ads, Mr. Cutts said, “I’ll give a categorical denial.” He then made an impassioned case for Google’s commitment to separating the money side of the business from the search side. The former has zero influence on the latter, he said.
  • “There is a very long history at Google of saying ‘We are not going to worry about short-term revenue.’ ” He added: “We rely on the trust of our users. We realize the responsibility that we have to our users.”
  • He noted, too, that before The Times presented evidence of the paid links to JCPenney.com, Google had just begun to roll out an algorithm change that had a negative effect on Penney’s search results.
  • True, JCPenney.com’s showing in Google searches had declined slightly by Feb. 8, as the algorithm change began to take effect. In “comforter sets,” Penney went from No. 1 to No. 7. In “sweater dresses,” from No. 1 to No. 10. But the real damage to Penney’s results began when Google started that “manual action.” The decline can be charted: On Feb. 1, the average Penney position for 59 search terms was 1.3.
  • MR. CUTTS said he did not plan to write about Penney’s situation, as he did with BMW in 2006. Rarely, he explained, does he single out a company publicly, because Google’s goal is to preserve the integrity of results, not to embarrass people. “But just because we don’t talk about it,” he said, “doesn’t mean we won’t take strong action.”
Weiye Loh

Likert scale - Wikipedia, the free encyclopedia - 0 views

  • Whether individual Likert items can be considered as interval-level data, or whether they should be considered merely ordered-categorical data is the subject of disagreement. Many regard such items only as ordinal data, because, especially when using only five levels, one cannot assume that respondents perceive all pairs of adjacent levels as equidistant. On the other hand, often (as in the example above) the wording of response levels clearly implies a symmetry of response levels about a middle category; at the very least, such an item would fall between ordinal- and interval-level measurement; to treat it as merely ordinal would lose information. Further, if the item is accompanied by a visual analog scale, where equal spacing of response levels is clearly indicated, the argument for treating it as interval-level data is even stronger.
  • When treated as ordinal data, Likert responses can be collated into bar charts, central tendency summarised by the median or the mode (but some would say not the mean), dispersion summarised by the range across quartiles (but some would say not the standard deviation), or analyzed using non-parametric tests, e.g. chi-square test, Mann–Whitney test, Wilcoxon signed-rank test, or Kruskal–Wallis test.[4] Parametric analysis of ordinary averages of Likert scale data is also justifiable by the Central Limit Theorem, although some would disagree that ordinary averages should be used for Likert scale data.
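A minimal sketch of the ordinal-safe summaries mentioned above, using only Python's standard library; the responses are hypothetical answers to one 5-point item (1 = strongly disagree … 5 = strongly agree).

```python
import statistics

# Hypothetical responses to a single 5-point Likert item.
responses = [2, 4, 4, 5, 3, 4, 2, 5, 4, 3]

# Ordinal-safe central tendency: median and mode (not the mean).
median = statistics.median(responses)
mode = statistics.mode(responses)

# Ordinal-safe dispersion: range across quartiles (not the SD).
q1, q2, q3 = statistics.quantiles(responses, n=4)

print(f"median={median}, mode={mode}, IQR=({q1}, {q3})")
```

For comparing two groups of such responses, the non-parametric tests the article lists (Mann–Whitney, Wilcoxon, Kruskal–Wallis) are available in SciPy's `scipy.stats` module.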
Weiye Loh

journalism.sg » Straits Times Forum editor accuses Chee of unfounded attack o... - 0 views

  • The Straits Times Forum editor Yap Koon Hong has told opposition politician Chee Soon Juan that he would be denied space in the Forum pages until he withdrew his “serious and unfounded aspersions” on the integrity of the newspaper. He was responding to previous reports on the SDP website which implied that The Straits Times had unfairly edited certain portions of Chee’s letters before publishing them.
  • Chee took issue with the fact that the title of his second letter had been changed to "PAP just as confrontational, replies Chee", which implied that he was indeed confrontational. In fact, newspapers rarely use the headlines suggested by writers and most readers know that headlines, together with picture selection and captions, are the work of sub-editors and not writers.
  • The SDP website shows that almost half of Chee's letter was deleted before publication. The original text was 635 words long. Forum page letters are generally less than 400 words long. The published version of Chee's letter was 348 words long.
  • ...2 more annotations...
  • Chee also questioned Yap's claims that The Straits Times' integrity had been damaged by the SDP's articles, citing how the Singapore media was already ranked lowly in international press freedom rankings.
  • Yap wanted Chee to retract his statements about the integrity of The Straits Times, and that the newspaper would withhold publishing his letters in the forum pages until he did so.
Weiye Loh

English: Who speaks English? | The Economist - 0 views

  • This was not a statistically controlled study: the subjects took a free test online and of their own accord.  They were by definition connected to the internet and interested in testing their English; they will also be younger and more urban than the population at large.
  • But Philip Hult, the boss of EF, says that his sample shows results similar to a more scientifically controlled but smaller study by the British Council.
  • Wealthy countries do better overall. But smaller wealthy countries do better still: the larger the number of speakers of a country’s main language, the worse that country tends to be at English. This is one reason Scandinavians do so well: what use is Swedish outside Sweden?  It may also explain why Spain was the worst performer in western Europe, and why Latin America was the worst-performing region: Spanish’s role as an international language in a big region dampens incentives to learn English.
  • ...4 more annotations...
  • Export dependency is another correlate with English. Countries that export more are better at English (though it’s not clear which factor causes which).  Malaysia, the best English-performer in Asia, is also the sixth-most export-dependent country in the world.  (Singapore was too small to make the list, or it probably would have ranked similarly.) This is perhaps surprising, given a recent trend towards anti-colonial and anti-Western sentiment in Malaysia’s politics. The study’s authors surmise that English has become seen as a mere tool, divorced in many minds from its associations with Britain and America.
  • Teaching plays a role, too. Starting young, while it seems a good idea, may not pay off: children between eight and 12 learn foreign languages faster than younger ones, so each class hour on English is better spent on a 10-year-old than on a six-year-old.
  • Between 1984 and 2000, the study's authors say, the Netherlands and Denmark began English-teaching between 10 and 12, while Spain and Italy began between eight and 11, with considerably worse results. Mr Hult reckons that poor methods, particularly the rote learning he sees in Japan, can be responsible for poor results despite strenuous efforts.
  • one surprising result is that China and India are next to each other (29th and 30th of 44) in the rankings, despite India’s reputation as more Anglophone. Mr Hult says that the Chinese have made a broad push for English (they're "practically obsessed with it”). But efforts like this take time to marinade through entire economies, and so may have avoided notice by outsiders. India, by contrast, has long had well-known Anglophone elites, but this is a narrow slice of the population in a country considerably poorer and less educated than China. English has helped India out-compete China in services, while China has excelled in manufacturing. But if China keeps up the push for English, the subcontinental neighbour's advantage may not last.
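The export-dependency correlation the article reports can be illustrated with a hand-rolled Pearson coefficient. The country figures below are invented stand-ins, not EF's or The Economist's data.

```python
# Pearson correlation between export share of GDP and an
# English-proficiency score. All numbers are invented.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

exports = [30, 45, 60, 75, 90]   # exports as % of GDP (invented)
english = [48, 55, 58, 64, 69]   # proficiency score (invented)

print(round(pearson_r(exports, english), 2))  # 0.99
```

As the article notes, a strong correlation like this says nothing about which factor causes which.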
Syntacticsinc SEO

The Results of Persistent SEO - 1 views

I have hired Philippine outsourcing firm Syntactics Inc to work on my website and take care of my online marketing needs too. In just one month, they were able to put a business-oriented website th...

search engine optimization

started by Syntacticsinc SEO on 06 Jul 11 no follow-up yet
Weiye Loh

Ian Burrell: 'Hackgate' is a story that refuses to go away - Commentators, Opinion - Th... - 0 views

  • Mr Murdoch's close henchman Les Hinton assured MPs that the affair had been dealt with and when, two years later, Mr Coulson – by now director of communications for David Cameron – appeared before a renewed parliamentary inquiry he seemed confident of being fireproof. "We did not use subterfuge of any kind unless there was a clear public interest in doing so," he told MPs. When Scotland Yard concluded that, despite more allegations of hacking, there was nothing new to investigate, Wapping and Mr Coulson must again have concluded the affair was over.
  • But after an election campaign in which the Conservatives were roundly supported by Mr Murdoch's papers, a succession of further claimants against the News of the World has come forward. Sienna Miller, among others, seems determined to take her case to court, compelling Mulcaire to reveal his handlers and naming in court documents Ian Edmondson, once one of Coulson's executives. Mr Edmondson is now suspended. But the story is unlikely to end there
  • When Rupert Murdoch came to England last October to deliver a lecture, there were some in the audience who raised eyebrows when the media mogul broke off from a paean to Baroness Thatcher to say of his journalists: "We will vigorously pursue the truth – and we will not tolerate wrongdoing." The latter comment seemed to refer to the long-running phone-hacking scandal involving the News of the World, the tabloid he has owned for 41 years. Mr Murdoch's executives at his British headquarters in Wapping, east London, tried to draw a veil over the paper's own dirty secrets in 2007 and had no doubt assured him that the matter was history. Yet here was the boss, four years later, having to vouch for his organisation's honesty. Related articles
  •  
    The news agenda changes fast in tabloid journalism but Hackgate has been a story that refuses to go away. When the private investigator Glenn Mulcaire and the News of the World journalist Clive Goodman were jailed for conspiring to intercept the voicemails of members of the royal household, Wapping quickly closed ranks. The editor Andy Coulson was obliged to fall on his sword - while denying knowledge of illegality - and Goodman was condemned as a rogue operator.
Arthur Cane

Outstanding Team of SEO Specialists - 1 views

We have already tried a number of link builders and SEO services over the years and we were generally disappointed. Until we found our way to Syntactics Inc. I find their service great that is why,...

seo specialist specialists

started by Arthur Cane on 26 Jan 12 no follow-up yet
Weiye Loh

Essay - The End of Tenure? - NYTimes.com - 0 views

  • The cost of a college education has risen, in real dollars, by 250 to 300 percent over the past three decades, far above the rate of inflation. Elite private colleges can cost more than $200,000 over four years. Total student-loan debt, at nearly $830 billion, recently surpassed total national credit card debt. Meanwhile, university presidents, who can make upward of $1 million annually, gravely intone that the $50,000 price tag doesn’t even cover the full cost of a year’s education.
  • Then your daughter reports that her history prof is a part-time adjunct, who might be making $1,500 for a semester’s work. There’s something wrong with this picture.
  • The higher-ed jeremiads of the last generation came mainly from the right. But this time, it’s the tenured radicals — or at least the tenured liberals — who are leading the charge. Hacker is a longtime contributor to The New York Review of Books and the author of the acclaimed study “Two Nations: Black and White, Separate, Hostile, Unequal,”
  • ...6 more annotations...
  • And these two books arrive at a time, unlike the early 1990s, when universities are, like many students, backed into a fiscal corner. Taylor writes of walking into a meeting one day and learning that Columbia’s endowment had dropped by “at least” 30 percent. Simply brushing off calls for reform, however strident and scattershot, may no longer be an option.
  • The labor system, for one thing, is clearly unjust. Tenured and tenure-track professors earn most of the money and benefits, but they’re a minority at the top of a pyramid. Nearly two-thirds of all college teachers are non-tenure-track adjuncts like Matt Williams, who told Hacker and Dreifus he had taught a dozen courses at two colleges in the Akron area the previous year, earning the equivalent of about $8.50 an hour by his reckoning. It is foolish that graduate programs are pumping new Ph.D.’s into a world without decent jobs for them. If some programs were phased out, teaching loads might be raised for some on the tenure track, to the benefit of undergraduate education.
  • it might well be time to think about vetoing Olympic-quality athletic ­facilities and trimming the ranks of administrators. At Williams, a small liberal arts college renowned for teaching, 70 percent of employees do something other than teach.
  • But Hacker and Dreifus go much further, all but calling for an end to the role of universities in the production of knowledge. Spin off the med schools and research institutes, they say. University presidents “should be musing about education, not angling for another center on antiterrorist technologies.” As for the humanities, let professors do research after-hours, on top of much heavier teaching schedules. “In other occupations, when people feel there is something they want to write, they do it on their own time and at their own expense,” the authors declare. But it seems doubtful that, say, “Battle Cry of Freedom,” the acclaimed Civil War history by Princeton’s James McPherson, could have been written on the weekends, or without the advance spadework of countless obscure monographs. If it is false that research invariably leads to better teaching, it is equally false to say that it never does.
  • Hacker’s home institution, the public Queens College, which has a spartan budget, commuter students and a three-or-four-course teaching load per semester. Taylor, by contrast, has spent his career on the elite end of higher education, but he is no less disillusioned. He shares Hacker and Dreifus’s concerns about overspecialized research and the unintended effects of tenure, which he believes blocks the way to fresh ideas. Taylor has backed away from some of the most incendiary proposals he made last year in a New York Times Op-Ed article, cheekily headlined “End the University as We Know It” — an article, he reports, that drew near-universal condemnation from academics and near-universal praise from everyone else. Back then, he called for the flat-out abolition of traditional departments, to be replaced by temporary, “problem-centered” programs focusing on issues like Mind, Space, Time, Life and Water. Now, he more realistically suggests the creation of cross-­disciplinary “Emerging Zones.” He thinks professors need to get over their fear of corporate partnerships and embrace efficiency-enhancing technologies.
  • It is not news that America is a land of haves and have-nots. It is news that colleges are themselves dividing into haves and have-nots; they are becoming engines of inequality. And that — not whether some professors can afford to wear Marc Jacobs — is the real scandal.
  •  
    The End of Tenure? By CHRISTOPHER SHEA Published: September 3, 2010
Weiye Loh

nanopolitan: From the latest issue of Current Science: Scientometric Analysis of Indian... - 0 views

  • We have carried out a three-part study comparing the research performance of Indian institutions with that of other international institutions. In the first part, the publication profiles of various Indian institutions were examined and ranked based on the h-index and p-index. We found that the institutions of national importance contributed the highest in terms of publications and citations per institution. In the second part of the study, we looked at the publication profiles of various Indian institutions in the high-impact journals and compared these profiles against that of the top Asian and US universities. We found that the number of papers in these journals from India was miniscule compared to the US universities. Recognizing that the publication profiles of various institutions depend on the field/departments, we studied [in Part III] the publication profiles of many science and engineering departments at the Indian Institute of Science (IISc), Bangalore, the Indian Institutes of Technology, as well as top Indian universities. Because the number of faculty in each department varies widely, we have computed the publications and citations per faculty per year for each department. We have also compared this with other departments in various Asian and US universities. We found that the top Indian institution based on various parameters in various disciplines was IISc, but overall even the top Indian institutions do not compare favourably with the top US or Asian universities.
  • The comparison groups of institutions include MIT, UMinn, Purdue, PSU, MSU, OSU, Caltech, UCB, UTexas (all from the US), National University of Singapore, Tsing Hua Univerrsity (China), Seoul National University (South Korea), National Taiwan University (Taiwan), Kyushu University (Japan) and Chinese Academy of Sciences.
  • ... [T]he number of papers in these [high impact] journals from India was miniscule compared to [that from] the US universities. ... [O]verall even the top Indian institutions do not compare favourably with the top US or Asian universities.
  •  
    Scientometric analysis of some disciplines: Comparison of Indian institutions with other international institutions
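The h-index used in the study's first part is simple to compute: it is the largest h such that the author or institution has h papers with at least h citations each. A minimal sketch with invented citation counts:

```python
# h-index: largest h such that h papers each have >= h citations.
def h_index(citations):
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for illustration.
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations
```

The p-index the study also uses is a related composite of total citations and papers; both reduce a publication profile to a single number, with all the aggregation caveats that implies.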
Weiye Loh

Short Sharp Science: Computer beats human at Japanese chess for first time - 0 views

  • A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time.
  • computers have been beating humans at western chess for years, and when IBM's Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity. That hasn't happened yet, but after all, western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, though, offering about 10^224 possible games.
  • Japan's national broadcaster, NHK, reported that Akara "aggressively pursued Shimizu from the beginning". It's the first time a computer has beaten a professional human player.
  • ...2 more annotations...
  • The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu's defeat was the first since a simpler computer system was beaten by a (male) champion, Akira Watanabe, in 2007.
  • Perhaps the association doesn't mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player.
  •  
    Computer beats human at Japanese chess for first time
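The game counts quoted above are back-of-envelope estimates of the form b^d: branching factor raised to typical game length. A rough sketch, using commonly cited approximate values for b and d (not exact measurements):

```python
import math

# Number of possible games ~ b**d; work in log10 to keep the
# numbers manageable. b and d are rough, commonly cited figures.
def log10_games(branching, depth):
    return depth * math.log10(branching)

chess = log10_games(35, 80)    # on the order of 10^123
shogi = log10_games(80, 115)   # on the order of 10^219, the same
                               # ballpark as the cited 10^224

print(f"chess ~ 10^{chess:.0f}, shogi ~ 10^{shogi:.0f}")
```

Shogi's larger board and its drop rule (captured pieces re-enter play) inflate both the branching factor and game length, which is why the exponent roughly doubles.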
Weiye Loh

Roger Pielke Jr.'s Blog: Political Affiliations of Scientists - 0 views

  • Dan Sarewitz tossed some red meat out on the table in the form of an essay in Slate on the apparent paucity of Republicans among the US scientific establishment.  Sarewitz suggests that it is in the interests of the scientific community both to understand this situation and to seek greater diversity in its ranks, explaining that "the issue here is legitimacy, not literacy."
  • The issue that Sarewitz raises is one of legitimacy.  All of us evaluate knowledge claims outside our own expertise (and actually very few people are in fact experts) based not on a careful consideration of facts and evidence, but by other factors, such as who we trust and how their values jibe with our own.  Thus if expert institutions are going to sustain and function in a democratic society they must attend to their legitimacy.  Scientific institutions that come to be associated with one political party risk their legitimacy among those who are not sympathetic to that party's views.
  • Of course, we don't just evaluate knowledge claims simply based on individuals, but usually through institutions, like scientific journals, national academies, professional associations, universities and so on. Sarewitz's Slate article did not get into a discussion of these institutions, but I think that it is essential to fully understand his argument.
  • ...4 more annotations...
  • Consider that the opinion poll that Sarewitz cited which found that only 6% of scientists self-identify as Republicans has some very important fine print -- specifically that the scientists that it surveyed were all members of the AAAS.  I do not have detailed demographics information, but based on my experience I would guess that AAAS membership is dominated by university and government scientists.  The opinion poll thus does not tell us much about US scientists as a whole, but rather something about one scientific institution -- AAAS.  And the poll indicates that AAAS is largely an association that does not include Republicans.
  • One factor might be seen in a recent action of the American Geophysical Union -- another big US science association: AGU recently appointed Chris Mooney to its Board.  I am sure that Chris is a fine fellow, but appointing an English major who has written divisively about the "Republican War on Science" to help AGU oversee "science communication" is more than a little ironic, and unlikely to attract many Republican scientists to the institution, perhaps even having the opposite effect.  To the extent that AAAS and AGU endorse the Democratic policy agenda, or just appear to do so, it reflects their role not as arbiters of knowledge claims, but rather as political actors.
  • I would wager that the partisan affiliation of scientists in the US military, in the energy , pharmaceutical and finance industries would look starkly different than that of AAAS.  If there is a crisis of legitimacy in the scientific community, it is among those institutions which have become to be so dominated by those espousing a shared political view, whatever that happens to be. This crisis is shared by AAAS and AGU, viewed with suspicion by those on the Right, and, for instance, by ExxonMobil, which is viewed by a similar suspicion by those on the Left.  Sarewitz is warning that for many on the Right, institutions like AAAS are viewed with every bit as skeptical an eye as those on the Left view ExxonMobil.
  • Many observers are so wrapped up in their own partisan battles that they either don't care that science is being associated with one political party or they somehow think that through such politicization they will once and for all win the partisan battles.  They won't. Political parties are far more robust than institutions of science. Institutions of science need help to survive intact partisan political battles.  The blogosphere and activist scientists and journalists offer little help.