New Media Ethics 2009 course / Group items matching "Social" in title, tags, annotations or url

Weiye Loh

Democracy's Laboratory: Are Science and Politics Interrelated?: Scientific American - 0 views

  • That science and politics are nonoverlapping magisteria (vide Stephen Jay Gould’s model separating science and religion) was long my position until I read Timothy Ferris’s new book The Science of Liberty (HarperCollins, 2010). Ferris, the best-selling author of such science classics as Coming of Age in the Milky Way and The Whole Shebang, has bravely ventured across the magisterial divide to argue that the scientific values of reason, empiricism and antiauthoritarianism are not the product of liberal democracy but the producers of it.
  • “The new government, like a scientific laboratory, was designed to accommodate an ongoing series of experiments, extending indefinitely into the future,” Ferris explains. “Nobody could anticipate what the results might be, so the government was structured, not to guide society toward a specified goal, but to sustain the experimental process itself.”
  • “Liberalism and science are methods, not ideologies. Both incorporate feedback loops through which actions (e.g., laws) can be evaluated to see whether they continue to meet with general approval. Neither science nor liberalism makes any doctrinaire claims beyond the efficacy of its respective methods—that is, that science obtains knowledge and that liberalism produces social orders generally acceptable to free peoples.”
  •  
    Democracy's Laboratory: Are Science and Politics Interrelated? Mixing science and politics is tricky but necessary for a functioning polity By Michael Shermer   
Weiye Loh

A Culture of Poverty - Ta-Nehisi Coates - Personal - The Atlantic - 0 views

  • When we talk "culture," as it relates to African-Americans, we assume a kind of exclusivity and suspension of logic. Stats are whipped out (70 percent of black babies born out of wedlock) and then claims are tossed around cavalierly, (black culture doesn't value marriage.) The problem isn't that "culture" doesn't exist, nor is it that elements of that "culture" might impair upward mobility.
  • It defies logic to think that any group, in a generationally entrenched position, would not develop codes and mores for how to survive in that position. African-Americans, themselves, from poor to bourgeois, are the harshest critics of the street mentality. Of course, most white people only pay attention when Bill Cosby or Barack Obama are making that criticism. The problem is that rarely do such critiques ask why anyone would embrace such values. Moreover, they tend to assume that there's something uniquely "black" about those values, and their embrace.
  • If you are a young person living in an environment where violence is frequent and random, the willingness to meet any hint of violence with yet more violence is a shield.
  • ...6 more annotations...
  • once I was acculturated to the notion that often the quickest way to forestall more fighting is to fight, I was a believer. And maybe it's wrong to say this, but it made the rest of my time in Baltimore a lot easier, because the willingness to fight isn't just about yourself, it's a signal to your peer group.
  • To the young people in my neighborhood, friendship was defined by having each other's back. And in that way, the personal shields, the personal willingness to meet violence with violence, combined and became a collective, neighborhood shield--a neighborhood rep.
  • I think one can safely call that an element of a kind of street culture. It's also an element which--once one leaves the streets--is a great impediment.
  • I suspect that a large part of the problem, when we talk about culture, is an inability to code-switch, to understand that the language of Rohan is not the language of Mordor
  • how difficult it is to get people to discard practices which were essential to them in one world, but hinder their advancement into another. And then there's the fear of that other world, that sense that if you discard those practices, you have discarded some of yourself, and done it in pursuit of a world, that you may not master. 
  • The streets are like any other world--we all assume an armor, a garment to suit that world. And indeed, in every world, some people wear the armor better than others, and thus reap considerable social reward.
  •  
    A Culture of Poverty
Weiye Loh

Epiphenom: Religion and suicide - a patchy global picture - 0 views

  • The main objective of this study is to understand the factors that contribute to suicide in different countries, and what can be done to reduce them. In each country, people who have attempted suicide are brought into the study and given a questionnaire to fill out. Another group of people, randomly chosen, are given the same questionnaire. That allows the team to compare religious affiliation, involvement in organised religion, and individual religiosity in suicide attempters and the general population. When they looked at the data, and adjusted them for a host of factors known to affect suicide risk (age, gender, marital status, employment, and education), a complex picture emerged.
  • In Iran, religion was highly protective, whether religion was measured as the rate of mosque attendance or as whether the individual thought of themselves as a religious person. In Brazil, going to religious services and personal religiosity were both highly protective. Bizarrely, however, religious affiliation was not. That might be because being Protestant was linked to greater risk, and Catholicism to lower risk. Put the two together, and it may balance out. In Estonia, suicides were lower in those who were affiliated to a religion, and those who said they were religious. They were also a bit lower in those who attended religious services. In India, there wasn't much effect of religion at all - a bit lower in those who go to religious services at least occasionally. Vietnam was similar. Those who went to religious services yearly were less likely to have attempted suicide, but no other measure of religion had any effect. In Sri Lanka, going to religious services had no protective effect, but subjective religiosity did. In South Africa, those who go to Church were no less likely to attempt suicide. In fact, those who said they were religious were actually nearly three times more likely to attempt suicide, and those who were affiliated to a religion were an incredible six times more likely!
  • In Brazil, religious people are six times less likely to commit suicide than the non-religious. In South Africa, they are three times more likely. How to explain these national differences?
  • ...5 more annotations...
  • Part of it might be differences in the predominant religion. The protective effect of religion seems to be higher in monotheistic countries, and it's particularly high in the most fervently monotheistic country, Iran. In India, Sri Lanka, and Vietnam, the protective effect is smaller or non-existent.
  • But that doesn't explain South Africa. South Africa is unusual in that it is a highly diverse country, fractured by ethnic, social and religious boundaries. The researchers think that this might be a factor: South Africa has been described as ‘‘The Rainbow Nation’’ because of its cultural diversity. There are a variety of ethnic groups and a greater variety of cultures within each of these groups. While cultural diversity is seen as a national asset, the interaction of cultures results in the blurring of cultural norms and boundaries at the individual, family and cultural group levels. Subsequently, there is a large diversity of religious denominations and this does not seem favorable in terms of providing protection against attempted suicide.
  • earlier studies have shown that religious homogeneity is linked to lower suicide rates, and they suggest that the reverse might well be happening in South Africa.
  • this also could explain why, in Brazil, Protestants have a higher suicide rate than the unaffiliated. That too could be linked to their status as a religious minority.
  • we've got a study showing the double-edged nature of religion. For those inside the group, it provides support and comfort. But once fractures appear, religion just seems to turn up the heat!
  •  
     Religion and suicide
Weiye Loh

Do Intelligent People Drink More Alcohol? : Discovery News - 0 views

  • More intelligent children in both studies grew up to drink alcohol more frequently and in greater quantities than less intelligent children. In the Brits' case, "very bright" children grew up to consume nearly eight-tenths of a standard deviation more alcohol than their "very dull" cohorts.
  • Researchers controlled for demographic variables -- such as marital status, parents' education, earnings, childhood social class and more -- that may have also affected adult drinking. Still, the findings held true: Smarter kids were drinking more as adults.
  • Psychology Today takes an evolutionary approach. They argue that drinkable alcohol is a relatively novel invention of 10,000 years ago. Our ancestors had previously gotten their alcohol kick through eating rotten fruits, so more intelligent humans may be more likely to choose modern alcoholic beverages. Although increased alcohol consumption could be a reflection of exceptional brainpower, drinking more will certainly not make you any more intelligent than you already are.
  •  
    DO INTELLIGENT PEOPLE DRINK MORE ALCOHOL?
Weiye Loh

How should we use data to improve our lives? - By Michael Agger - Slate Magazine - 0 views

  • The Swiss economists Bruno Frey and Alois Stutzer argue that people do not appreciate the real cost of a long commute. And especially when that commute is unpredictable, it takes a toll on our daily well-being.
  • imagine if we shared our commuting information so that we could calculate the average commute from various locations around a city. When the growing family of four pulls up to a house for sale in New Jersey, the listing would indicate not only the price and the number of bathrooms but also the rush-hour commute time to Midtown Manhattan. That would be valuable information to have, since buyers could realistically weigh the tradeoff of remaining in a smaller space closer to work against moving to a larger space and taking on a longer commute.
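    The commute-averaging idea above is simple enough to prototype; a minimal Python sketch, with invented place names and minutes, might look like this:

        # Hypothetical sketch: pool reported rush-hour commute times by origin
        # and surface the average next to a property listing. All data invented.
        from collections import defaultdict

        reports = [
            ("Maplewood, NJ", 52), ("Maplewood, NJ", 61), ("Maplewood, NJ", 47),
            ("Hoboken, NJ", 28), ("Hoboken, NJ", 35),
        ]

        by_origin = defaultdict(list)
        for origin, minutes in reports:
            by_origin[origin].append(minutes)

        for origin, minutes in sorted(by_origin.items()):
            avg = sum(minutes) / len(minutes)
            print(f"{origin}: average rush-hour commute ~ {avg:.0f} min")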
  • In a cover story for the New York Times Magazine, the writer Gary Wolf documented the followers of “The Data-Driven Life,” programmers, students, and self-described geeks who track various aspects of their lives. Seth Roberts does a daily math exercise to measure small changes in his mental acuity. Kiel Gilleade is a "Body Blogger" who shares his heart rate via Twitter. On the more extreme end, Mark Carranza has a searchable database of every idea he's had since 1984. They're not alone. This community continues to thrive, and its efforts are chronicled at a blog called the Quantified Self, co-founded by Wolf and Kevin Kelly.
  • ...3 more annotations...
  • If you've ever asked Nike+ to log your runs or given Google permission to keep your search history, you've participated in a bit of self-tracking. Now that more people have location-aware smartphones and the Web has made data easy to share, personal data is poised to become an important tool to understand how we live, and how we all might live better. One great example of this phenomenon in action is the site Cure Together, which allows you to enter your symptoms—for, say, "anxiety" or "insomnia"—and the various remedies you've tried to feel better. One thing the site does is aggregate this information and present the results in chart form. Here is the chart for depression:
  • Instead of being isolated in your own condition, you can now see what has worked for others. The same principle is at work at the site Fuelly, where you can "track, share, and compare" your miles per gallon and see how efficient certain makes and models really are.
  • Businesses are also using data tracking to spur their employees to accomplishing companywide goals: Wal-Mart partnered with Zazengo to help employees track their "personal sustainability" actions such as making a home-cooked meal or buying local produce. The app Rescue Time, which records all of the activity on your computer, gives workers an easy way to account for their time. And that comes in handy when you want to show the boss how efficient telecommuting can be.
  •  
    Data for a better planet
Weiye Loh

Cooks Source, condescension and copyright law | Econsultancy - 0 views

  • It’s hard to make a living writing online. In general, those who write for the web are looked down on by their ‘in-print’ counterparts. Despite the fact that we often speak to larger and more relevant audiences, there’s still an attitude that web copy is somehow illegitimate, less professional.
  • A couple of years ago, LiveJournal user Monica Gaudio posted a short article on the history of the apple pie. Conclusion: It isn’t quite as all-American as you might think. Fairly innocuous stuff, until it recently resurfaced in Cooks Source magazine. According to Monica, she only became aware of this when a friend asked her how she had managed to be published. Monica acted correctly, contacting the magazine under the assumption that a mix-up had occurred. The response showed an astonishing lack of knowledge about digital copyright, content value, and of course, the ever-looming spectre of social media fail and internet wrath. Apparently, the magazine had simply lifted the article directly from Monica’s site, publishing it in their print magazine, on their website and on the Cooks Source Facebook page.
  • A few emails in and the editor finally asked what Monica wanted. Her list of demands was hardly excessive: a printed apology and a donation of $130 to the Columbia School of Journalism, and she’d ignore the entire incident. Instead, the editor of Cooks Source responded with a remarkable display of ignorance and condescension: Honestly Monica, the web is considered "public domain" and you should be happy we just didn't "lift" your whole article and put someone else's name on it! It happens a lot, clearly more than you are aware of, especially on college campuses, and the workplace. If you took offence and are unhappy, I am sorry, but you as a professional should know that the article we used written by you was in very bad need of editing, and is much better now than was originally. Now it will work well for your portfolio. For that reason, I have a bit of a difficult time with your requests for monetary gain, albeit for such a fine (and very wealthy!) institution. We put some time into rewrites, you should compensate me! I never charge young writers for advice or rewriting poorly written pieces, and have many who write for me... ALWAYS for free! I’m unable to fathom where the notion that all web content is public domain came from, for starters. If this is true, then it should be perfectly fine for me to reprint the entire contents of The Times on my blog each day.
  •  
    Cooks Source, condescension and copyright law
Weiye Loh

Balderdash: The problem with Liberal Utilitarianism - 0 views

  • Sam Harris's reinvention of Utilitarianism/Consequentialism has charmed many, and in my efforts to show people how pure Utilitarianism/Consequentialism fails (in the process encountering people who seem never to have read anything Harris has written or read on the subject, since I have been challenged to show where Harris has proposed Science as the foundation of our moral system, or that one can derive moral facts from facts about the world), "liberal utilitarianism" has been thrown at me as a way to resolve the problems with pure Utilitarianism/Consequentialism.
  • Liberal utilitarianism is not a position that one often encounters. I suspect this is because most philosophers recognise that unless one bites some big bullets, it is incoherent, being beholden to two separate moral theories, which brings many problems when they clash. It is much easier to stick to one foundation of morality.
  • utilitarians typically must claim that ‘the value of liberty ... is wholly dependent on its contribution to utility. But if that is the case’, he asks, ‘how can the “right” to liberty be absolute and indefeasible when the consequences of exercising the right will surely vary with changing social circumstances?’ (1991, p. 213). His answer is that it cannot be, unless external moral considerations are imported into pure maximizing utilitarianism to guarantee the desired Millian result. In his view, the absolute barrier that Mill erects against all forms of coercion really seems to require a non-utilitarian justification, even if ‘utilitarianism’ might somehow be defined or enlarged to subsume the requisite form of reasoning. Thus, ‘Mill is a consistent liberal’, he says, ‘whose view is inconsistent with hedonistic or preference utilitarianism’ (ibid., p. 236)...
  • ...4 more annotations...
  • From Riley's Mill on liberty:
  • Mill’s defence of liberty is ‘not utilitarian’ because it ignores the dislike, disgust and so-called ‘moral’ disapproval which others feel as a result of self-regarding conduct.
  • Why doesn’t liberal utilitarianism consider the possibility that aggregate dislike of the individual’s self-regarding conduct might outweigh the value of his liberty, and justify suppression of his conduct? As we have seen, Mill devotes considerable effort to answering this question (III.1, 10–19, IV.8–12, pp. 260–1, 267–75, 280–4). Among other things, liberty in self-regarding matters is essential to the cultivation of individual character, he says, and is not incompatible with similar cultivation by others, because they remain free to think and do as they please, having directly suffered no perceptible damage against their wishes. When all is said and done, his implicit answer is that a person’s liberty in self-regarding matters is infinitely more valuable than any satisfaction the rest of us might take at suppression of his conduct. The utility of self-regarding liberty is of a higher kind than the utility of suppression based on mere dislike (no perceptible damages to others against their wishes is implicated), in that any amount (however small) of the higher kind outweighs any quantity (however large) of the lower.
  • The problem is that if you are using (implicitly or otherwise) mathematics to sum up the expected utility of different choices, you cannot plug infinity into any expression, or you will get incoherent results as the expression in question will no longer be well-behaved.
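    A minimal numerical sketch of that last point, with entirely made-up utilities: once one outcome is assigned infinite utility, probability-weighted comparisons stop ranking alternatives at all.

        # Made-up utilities: treating self-regarding liberty as infinitely valuable
        # makes every lottery containing it evaluate to the same infinite total,
        # so expected-utility comparisons can no longer rank policies.
        liberty_utility = float("inf")   # "any amount of the higher kind..."
        dislike_utility = -1_000_000     # aggregate dislike, however large

        def expected_utility(outcomes):
            # outcomes: list of (probability, utility) pairs
            return sum(p * u for p, u in outcomes)

        policy_a = expected_utility([(0.01, liberty_utility), (0.99, dislike_utility)])
        policy_b = expected_utility([(0.99, liberty_utility), (0.01, dislike_utility)])

        print(policy_a, policy_b)    # inf inf
        print(policy_a < policy_b)   # False -- the ranking collapses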
Weiye Loh

Why Are the Rich So Good at the Internet? | Fast Company - 0 views

  • It even suggests the existence of a tipping point, where Internet use takes off at a certain income level.
  • even among groups that own the necessary technology, less wealth equates to less (and less varied) Internet usage.
  • The report, an umbrella analysis of three Pew surveys conducted in 2009 and 2010, compares Internet use among American households in four different income brackets: less than $30,000 a year; $30,000-50,000; $50,000-75,000; and greater than $75,000. Respondents--more than 3,000 people participated--were asked a variety of questions about how often they used the Internet, and what sorts of services they took advantage of (such as email, online news, booking travel online, or health research).
  • ...7 more annotations...
  • As might be expected, the wealthier used the Internet more.
  • Almost 90% of the wealthiest respondents reported broadband access at home. Of those in the under-$30,000 households, that figure was only 40%. "I would expect some type of correlation," says Jansen. "But we controlled for community type--urban, rural, suburban--educational attainment, race, ethnicity, gender, and age." None was nearly so strongly correlated as income.
  • Age did have some effect, and rural regions were a good deal less wired
  • Once a modestly middle-class family buys a computer and Internet access, why is it that they spend less time researching products online than their wealthier counterparts, given that they have a tighter budget than the ultra-wealthy?
  • Jansen notes that for many questions Pew asked about Internet use, there appeared to be a tipping point somewhere in the $30,000-$50,000 range. Consider, for instance, the data on those who researched products online. Only 67% of lowest-income Internet users research products online. Make it over the hump into the $30,000-$50,000 bracket, though, and all of a sudden 81% of internet users do so--a jump of 14 points. But then as you climb the income ladder, the change in behavior begins to level out, just climbing a few percentage points with each bracket
  • "It would be interesting to look at what is going on at that particular income level," says Jansen, suggesting a potential tack for further research, "that seems to indicate a fairly robust use of technology and interest."
  • Jansen, like any careful researcher, cautions against confusing correlation with causation. It may be that people are using the web to make their fortunes, and not using their fortunes to surf the web.
  •  
    Pew Internet has released a report finding that income is the strongest predictor of whether, how often, and in what ways Americans use the web.
Weiye Loh

Genome Biology | Full text | A Faustian bargain - 0 views

  • on October 1st, you announced that the departments of French, Italian, Classics, Russian and Theater Arts were being eliminated. You gave several reasons for your decision, including that 'there are comparatively fewer students enrolled in these degree programs.' Of course, your decision was also, perhaps chiefly, a cost-cutting measure - in fact, you stated that this decision might not have been necessary had the state legislature passed a bill that would have allowed your university to set its own tuition rates. Finally, you asserted that the humanities were a drain on the institution financially, as opposed to the sciences, which bring in money in the form of grants and contracts.
  • I'm sure that relatively few students take classes in these subjects nowadays, just as you say. There wouldn't have been many in my day, either, if universities hadn't required students to take a distribution of courses in many different parts of the academy: humanities, social sciences, the fine arts, the physical and natural sciences, and to attain minimal proficiency in at least one foreign language. You see, the reason that humanities classes have low enrollment is not because students these days are clamoring for more relevant courses; it's because administrators like you, and spineless faculty, have stopped setting distribution requirements and started allowing students to choose their own academic programs - something I feel is a complete abrogation of the duty of university faculty as teachers and mentors. You could fix the enrollment problem tomorrow by instituting a mandatory core curriculum that included a wide range of courses.
  • the vast majority of humanity cannot handle freedom. In giving humans the freedom to choose, Christ has doomed humanity to a life of suffering.
  • ...7 more annotations...
  • in Dostoyevsky's parable of the Grand Inquisitor, which is told in Chapter Five of his great novel, The Brothers Karamazov. In the parable, Christ comes back to earth in Seville at the time of the Spanish Inquisition. He performs several miracles but is arrested by Inquisition leaders and sentenced to be burned at the stake. The Grand Inquisitor visits Him in his cell to tell Him that the Church no longer needs Him. The main portion of the text is the Inquisitor explaining why. The Inquisitor says that Jesus rejected the three temptations of Satan in the desert in favor of freedom, but he believes that Jesus has misjudged human nature.
  • I'm sure the budgetary problems you have to deal with are serious. They certainly are at Brandeis University, where I work. And we, too, faced critical strategic decisions because our income was no longer enough to meet our expenses. But we eschewed your draconian - and authoritarian - solution, and a team of faculty, with input from all parts of the university, came up with a plan to do more with fewer resources. I'm not saying that all the specifics of our solution would fit your institution, but the process sure would have. You did call a town meeting, but it was to discuss your plan, not let the university craft its own. And you called that meeting for Friday afternoon on October 1st, when few of your students or faculty would be around to attend. In your defense, you called the timing 'unfortunate', but pleaded that there was a 'limited availability of appropriate large venue options.' I find that rather surprising. If the President of Brandeis needed a lecture hall on short notice, he would get one. I guess you don't have much clout at your university.
  • As for the argument that the humanities don't pay their own way, well, I guess that's true, but it seems to me that there's a fallacy in assuming that a university should be run like a business. I'm not saying it shouldn't be managed prudently, but the notion that every part of it needs to be self-supporting is simply at variance with what a university is all about.
  • You seem to value entrepreneurial programs and practical subjects that might generate intellectual property more than you do 'old-fashioned' courses of study. But universities aren't just about discovering and capitalizing on new knowledge; they are also about preserving knowledge from being lost over time, and that requires a financial investment.
  • what seems to be archaic today can become vital in the future. I'll give you two examples of that. The first is the science of virology, which in the 1970s was dying out because people felt that infectious diseases were no longer a serious health problem in the developed world and other subjects, such as molecular biology, were much sexier. Then, in the early 1990s, a little problem called AIDS became the world's number 1 health concern. The virus that causes AIDS was first isolated and characterized at the National Institutes of Health in the USA and the Institute Pasteur in France, because these were among the few institutions that still had thriving virology programs. My second example you will probably be more familiar with. Middle Eastern Studies, including the study of foreign languages such as Arabic and Persian, was hardly a hot subject on most campuses in the 1990s. Then came September 11, 2001. Suddenly we realized that we needed a lot more people who understood something about that part of the world, especially its Muslim culture. Those universities that had preserved their Middle Eastern Studies departments, even in the face of declining enrollment, suddenly became very important places. Those that hadn't - well, I'm sure you get the picture.
  • one of your arguments is that not every place should try to do everything. Let other institutions have great programs in classics or theater arts, you say; we will focus on preparing students for jobs in the real world. Well, I hope I've just shown you that the real world is pretty fickle about what it wants. The best way for people to be prepared for the inevitable shock of change is to be as broadly educated as possible, because today's backwater is often tomorrow's hot field. And interdisciplinary research, which is all the rage these days, is only possible if people aren't too narrowly trained. If none of that convinces you, then I'm willing to let you turn your institution into a place that focuses on the practical, but only if you stop calling it a university and yourself the President of one. You see, the word 'university' derives from the Latin 'universitas', meaning 'the whole'. You can't be a university without having a thriving humanities program. You will need to call SUNY Albany a trade school, or perhaps a vocational college, but not a university. Not anymore.
  • I started out as a classics major. I'm now Professor of Biochemistry and Chemistry. Of all the courses I took in college and graduate school, the ones that have benefited me the most in my career as a scientist are the courses in classics, art history, sociology, and English literature. These courses didn't just give me a much better appreciation for my own culture; they taught me how to think, to analyze, and to write clearly. None of my sciences courses did any of that.
Weiye Loh

Did Mark Zuckerberg Deserve to Be Named Person of the Year? No - 0 views

  • First, Time carried out a reader poll, in which individuals got the chance to vote and rate their favorite nominees. Zuckerberg ended up in 10th place with 18,353 votes and an average rating of 52, behind renowned individuals such as Lady Gaga, Julian Assange, Jon Stewart and Stephen Colbert, Barack Obama, Steve Jobs, et cetera. On the other end of the spectrum, Julian Assange managed to grab the first place with a whopping 382,026 votes and an average rating of 92. It turns out that the poll had no point or purpose at all. Time clearly did not take into account its readers’ opinion on the matter.
  • Julian Assange should have been named Person of the Year. His contribution to the world and history — whether you see it as positive or negative — has been more controversial and life-changing than Zuckerberg’s. Assange and his non-profit organization have changed the way we look at various governments around the world. Especially the U.S. government. There’s a reason why hundreds of thousands of individuals voted for Assange and not Zuckerberg.
  • even other nominees deserve the title more than Zuckerberg. For instance, Lady Gaga has become a huge influence in the music scene. She’s also done a lot of charitable work for LGBT [lesbian, gay, bisexual, and transgender] individuals and supports equality rights. Even though I’m not a fan, Apple CEO Steve Jobs has also done more than Zuckerberg. His opinion and mandate at Apple have completely revolutionized the tech industry.
  • ...1 more annotation...
  • Facebook as a company and social network deserves the title more than its CEO
Weiye Loh

Citizen Ethics Network - 0 views

  • There is a widespread concern that the winner takes all mentality of the banker, and the corrupted values of the politician, have replaced a common sense ethics of fairness and integrity. Many worry that an emphasis on a shallow individualism has damaged personal relationships and weakened important social bonds.
  • The Citizen Ethics Network exists to promote this debate and to renew the ethical underpinnings of economic, political and daily life.
  •  
    How do we decide our values? How can we do economics as if ethics matters? What kind of politics do we want? What sort of common life can we share?
Weiye Loh

BBC News - Cleaners 'worth more to society' than bankers - study - 0 views

  • The research, carried out by think tank the New Economics Foundation, says hospital cleaners create £10 of value for every £1 they are paid. It claims bankers are a drain on the country because of the damage they caused to the global economy. They reportedly destroy £7 of value for every £1 they earn. Meanwhile, senior advertising executives are said to "create stress". The study says they are responsible for campaigns which create dissatisfaction and misery, and encourage over-consumption.
  • And tax accountants damage the country by devising schemes to cut the amount of money available to the government, the research suggests. By contrast, child minders and waste recyclers are also doing jobs that create net wealth for the country.
  • a new form of job evaluation to calculate the total contribution various jobs make to society, including for the first time the impact on communities and environment.
  • ...3 more annotations...
  • "Pay levels often don't reflect the true value that is being created. As a society, we need a pay structure which rewards those jobs that create most societal benefit rather than those that generate profits at the expense of society and the environment".
  • "The point we are making is more fundamental - that there should be a relationship between what we are paid and the value our work generates for society. We've found a way to calculate that,"
  • The research also makes a variety of policy recommendations to align pay more closely with the value of work. These include establishing a high pay commission, building social and environmental value into prices, and introducing more progressive taxation.
  •  
    Cleaners 'worth more to society' than bankers - study
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • ...24 more annotations...
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
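    As a concrete illustration of the Fisher-style procedure described above, here is a minimal sketch on invented crop-yield numbers (the data and the use of SciPy's two-sample t-test are assumptions, not part of the article):

        # Invented yields for unfertilized and fertilized plots.
        from scipy import stats

        control    = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 4.9]
        fertilized = [5.4, 5.6, 5.1, 5.8, 5.5, 5.2, 5.7, 5.3]

        t_stat, p_value = stats.ttest_ind(fertilized, control)
        print(f"P value = {p_value:.4f}")

        if p_value < 0.05:
            # Fisher's convention: reject the "no effect" (null) hypothesis.
            # Note: this is P(data at least this extreme | no effect),
            # not the probability that the effect is real.
            print("statistically significant at the .05 level")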
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What  eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem then, lie not in statistics, but the interpretation of statistics? Is the fallacy of appeal to probability is at work in such interpretation? 
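    One way to make the “transposed conditional” point concrete, and to address the question above about interpretation, is a short base-rate calculation; the proportions below are assumptions chosen only for illustration:

        # Suppose only 10% of tested hypotheses describe real effects, the test
        # detects a real effect 80% of the time (power), and the false-positive
        # rate is the conventional 5%. What share of "significant" results are flukes?
        real_fraction  = 0.10   # assumed prior: how often a tested effect is real
        power          = 0.80   # P(significant | real effect)
        false_positive = 0.05   # P(significant | no effect) -- the .05 threshold

        true_hits    = real_fraction * power
        false_alarms = (1 - real_fraction) * false_positive
        share_flukes = false_alarms / (true_hits + false_alarms)

        print(f"{share_flukes:.0%} of significant results are flukes")  # ~36%, not 5%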
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
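    The multiplicity problem described above is easy to quantify; a short sketch with arbitrary numbers:

        # Testing many true-null hypotheses at the .05 level all but guarantees
        # spurious "discoveries". The number of tests here is arbitrary.
        tests = 100
        alpha = 0.05

        p_at_least_one_fluke = 1 - (1 - alpha) ** tests
        print(f"Chance of at least one false positive in {tests} tests: {p_at_least_one_fluke:.1%}")  # ~99.4%
        print(f"Expected false positives: {tests * alpha:.0f}")  # 5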
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated.“Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
     Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
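    As a companion to the Bayesian annotations above, here is a minimal worked example of how a prior probability (here, disease prevalence) changes what a positive diagnostic test means; the prevalence, sensitivity and specificity figures are assumptions for illustration only:

        # Bayes' theorem: P(disease | positive) depends on the prior (prevalence),
        # not just on how accurate the test is. All figures are assumed.
        prevalence  = 0.01    # prior probability of disease
        sensitivity = 0.95    # P(positive | disease)
        specificity = 0.95    # P(negative | no disease)

        p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
        p_disease_given_positive = sensitivity * prevalence / p_positive

        print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")  # ~16%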
Weiye Loh

On newspapers' online comments « Yawning Bread Sampler 2 - 0 views

  • Assistant Professor Mark Cenite of Nanyang Technological University’s Wee Kim Wee School of Communication and Information said: ‘This approach allows users to moderate themselves, and the news site is seen as being sensitive to readers’ values.’
  • But Mr Alex Au, who runs socio-political blog Yawning Bread, cautioned that this could lead to astroturfing. The term, derived from a brand of fake grass, refers to a fake grassroots movement in which a group wishing to push its agenda sends out manipulated and replicated online messages in support of a certain policy or issue. His suggestion: user tiers, in which comments by users with verified identities are displayed visibly and anonymous comments less conspicuously. He said: ‘This approach does not bar people from speaking up, but weighs in by signalling the path towards responsible participation.’
  • what is astroturfing? It is when a few people do one or both of two things: create multiple identities for each of themselves and flood a forum or topic with similar opinions, or get their friends to post boilerplate letters (expressing similar opinions of course) even if they do not totally share them to the same degree. The intent is to create an impression that a certain opinion is more widely held than is actually the case.
  • ...4 more annotations...
  • user-rating will have the tendency of giving prominence to widely-shared opinion. Comments expressing unpopular opinions will get fewer “stars” from other readers and sink in display priority. In theory, it doesn’t have to be so. People may very well give “stars” to well-thought-out comments that argue cogently for a view they don’t agree with, lauding the quality of expression rather than the conclusion, but let’s get real. Most people like to hear what they already believe. That being the case, the effect of such a scheme would be to crowd out unpopular opinion even if they have merit; it produces a majoritarian effect in newspapers’ comments sections.
  • it is open to abuse in that a small group of people wanting to push a particular opinion could repeatedly vote for a certain comment, thereby giving it increased ranking and more prominent display. Such action would be akin to astroturfing.
  • The value of discussion lies not in hearing what we already know or what we already believe in. It lies in hearing alternative arguments and learning new facts. Structuring a discussion forum by giving prominence to merely popular opinion just makes it an echo chamber. The greater public purpose is better served when contrary opinion is aired. That is why I disagree with a scheme whereby users apply ratings and prominence is given to highly-rated comments.
    • Weiye Loh
       
      But the majority of users who participate in online activism/slacktivism are very much the young, Western-educated folks. This in itself already makes the online social sphere an echo chamber, doesn't it?
  • Anonymous comments have their uses. Most obviously, there will be times when whistle-blowing serves the public purpose, and so, even if displayed less prominently, they should still be allowed.
  •  
    A popular suggestion among media watchers interviewed is to let users rate the comments and display the highly ranked ones prominently.
Weiye Loh

Is it a boy or a girl? You decide - Prospect Magazine « Prospect Magazine - 0 views

  • The only way to guarantee either a daughter or son is to undergo pre-implantation genetic diagnosis: a genetic analysis of an embryo before it is placed in the womb. This is illegal in Britain except for couples at risk of having a child with a life-threatening gender-linked disorder.
  • It’s also illegal for clinics to offer sex selection methods such as MicroSort, that sift the slightly larger X chromosome-bearing (female) sperm from their weedier Y chromosome-bearing (male) counterparts, and then use the preferred sperm in an IVF cycle. With a success rate hovering around 80-90 per cent, it’s better than Mother Nature’s odds of conception, but not immaculate.
  • Years ago I agreed with this ban on socially motivated sex selection. But I can’t defend that stance today. My opposition was based on two worries: the gender balance being skewed—look at China—and the perils of letting society think it’s acceptable to prize one sex more than the other. Unlike many politicians, however, I think it is only right and proper to perform an ideological U-turn when presented with convincing opposing evidence.
  • ...4 more annotations...
  • A 2003 survey published in the journal Human Reproduction showed that few British adults would be concerned enough about their baby’s gender to use the technology, and most adults wanted the same number of sons as daughters
  • Bioethics specialist Edgar Dahl of the University of Giessen found that 68 per cent of Britons craved an equal number of boys and girls; 6 per cent wanted more boys; 4 per cent more girls; 3 per cent only boys; and 2 per cent only girls. Fascinatingly, even if a baby’s sex could be decided by simply taking a blue pill or a pink pill, 90 per cent of British respondents said they wouldn’t take it.
  • What about the danger of stigmatising the unwanted sex if gender selection was allowed? According to experts on so-called “gender disappointment,” the unwanted sex would actually be male.
  • I may think it is old-fashioned to want a son so that he can inherit the family business, or a daughter to have someone to go shopping with. But how different is that from the other preferences and expectations we have for our children, such as hoping they will be gifted at mathematics, music or sport? We all nurture secret expectations for our children: I hope that mine will be clever, beautiful, witty and wise. Perhaps it is not the end of the world if we allow some parents to add “female” or “male” to the list.
  •  
    Is it a boy or a girl? You decide. Anjana Ahuja, 28th April 2010, Issue 170. Choosing the sex of an unborn child is illegal, but would it harm society if it wasn't?
Weiye Loh

Science Warriors' Ego Trips - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • By Carlin Romano Standing up for science excites some intellectuals the way beautiful actresses arouse Warren Beatty, or career liberals boil the blood of Glenn Beck and Rush Limbaugh. It's visceral.
  • A brave champion of beleaguered science in the modern age of pseudoscience, this Ayn Rand protagonist sarcastically derides the benighted irrationalists and glows with a self-anointed superiority. Who wouldn't want to feel that sense of power and rightness?
  • You hear the voice regularly—along with far more sensible stuff—in the latest of a now common genre of science patriotism, Nonsense on Stilts: How to Tell Science From Bunk (University of Chicago Press), by Massimo Pigliucci, a philosophy professor at the City University of New York.
  • ...24 more annotations...
  • it mixes eminent common sense and frequent good reporting with a cocksure hubris utterly inappropriate to the practice it apotheosizes.
  • According to Pigliucci, both Freudian psychoanalysis and Marxist theory of history "are too broad, too flexible with regard to observations, to actually tell us anything interesting." (That's right—not one "interesting" thing.) The idea of intelligent design in biology "has made no progress since its last serious articulation by natural theologian William Paley in 1802," and the empirical evidence for evolution is like that for "an open-and-shut murder case."
  • Pigliucci offers more hero sandwiches spiced with derision and certainty. Media coverage of science is "characterized by allegedly serious journalists who behave like comedians." Commenting on the highly publicized Dover, Pa., court case in which U.S. District Judge John E. Jones III ruled that intelligent-design theory is not science, Pigliucci labels the need for that judgment a "bizarre" consequence of the local school board's "inane" resolution. Noting the complaint of intelligent-design advocate William Buckingham that an approved science textbook didn't give creationism a fair shake, Pigliucci writes, "This is like complaining that a textbook in astronomy is too focused on the Copernican theory of the structure of the solar system and unfairly neglects the possibility that the Flying Spaghetti Monster is really pulling each planet's strings, unseen by the deluded scientists."
  • Or is it possible that the alternate view unfairly neglected could be more like that of Harvard scientist Owen Gingerich, who contends in God's Universe (Harvard University Press, 2006) that it is partly statistical arguments—the extraordinary unlikelihood eons ago of the physical conditions necessary for self-conscious life—that support his belief in a universe "congenially designed for the existence of intelligent, self-reflective life"?
  • Even if we agree that capital "I" and "D" intelligent-design of the scriptural sort—what Gingerich himself calls "primitive scriptural literalism"—is not scientifically credible, does that make Gingerich's assertion, "I believe in intelligent design, lowercase i and lowercase d," equivalent to Flying-Spaghetti-Monsterism? Tone matters. And sarcasm is not science.
  • The problem with polemicists like Pigliucci is that a chasm has opened up between two groups that might loosely be distinguished as "philosophers of science" and "science warriors."
  • Philosophers of science, often operating under the aegis of Thomas Kuhn, recognize that science is a diverse, social enterprise that has changed over time, developed different methodologies in different subsciences, and often advanced by taking putative pseudoscience seriously, as in debunking cold fusion
  • The science warriors, by contrast, often write as if our science of the moment is isomorphic with knowledge of an objective world-in-itself—Kant be damned!—and any form of inquiry that doesn't fit the writer's criteria of proper science must be banished as "bunk." Pigliucci, typically, hasn't much sympathy for radical philosophies of science. He calls the work of Paul Feyerabend "lunacy," deems Bruno Latour "a fool," and observes that "the great pronouncements of feminist science have fallen as flat as the similarly empty utterances of supporters of intelligent design."
  • It doesn't have to be this way. The noble enterprise of submitting nonscientific knowledge claims to critical scrutiny—an activity continuous with both philosophy and science—took off in an admirable way in the late 20th century when Paul Kurtz, of the University at Buffalo, established the Committee for the Scientific Investigation of Claims of the Paranormal (Csicop) in May 1976. Csicop soon after launched the marvelous journal Skeptical Inquirer
  • Although Pigliucci himself publishes in Skeptical Inquirer, his contributions there exhibit his signature smugness. For an antidote to Pigliucci's overweening scientism 'tude, it's refreshing to consult Kurtz's curtain-raising essay, "Science and the Public," in Science Under Siege (Prometheus Books, 2009, edited by Frazier)
  • Kurtz's commandment might be stated, "Don't mock or ridicule—investigate and explain." He writes: "We attempted to make it clear that we were interested in fair and impartial inquiry, that we were not dogmatic or closed-minded, and that skepticism did not imply a priori rejection of any reasonable claim. Indeed, I insisted that our skepticism was not totalistic or nihilistic about paranormal claims."
  • Kurtz combines the ethos of both critical investigator and philosopher of science. Describing modern science as a practice in which "hypotheses and theories are based upon rigorous methods of empirical investigation, experimental confirmation, and replication," he notes: "One must be prepared to overthrow an entire theoretical framework—and this has happened often in the history of science ... skeptical doubt is an integral part of the method of science, and scientists should be prepared to question received scientific doctrines and reject them in the light of new evidence."
  • Pigliucci, alas, allows his animus against the nonscientific to pull him away from sensitive distinctions among various sciences to sloppy arguments one didn't see in such earlier works of science patriotism as Carl Sagan's The Demon-Haunted World: Science as a Candle in the Dark (Random House, 1995). Indeed, he probably sets a world record for misuse of the word "fallacy."
  • To his credit, Pigliucci at times acknowledges the nondogmatic spine of science. He concedes that "science is characterized by a fuzzy borderline with other types of inquiry that may or may not one day become sciences." Science, he admits, "actually refers to a rather heterogeneous family of activities, not to a single and universal method." He rightly warns that some pseudoscience—for example, denial of HIV-AIDS causation—is dangerous and terrible.
  • But at other points, Pigliucci ferociously attacks opponents like the most unreflective science fanatic
  • He dismisses Feyerabend's view that "science is a religion" as simply "preposterous," even though he elsewhere admits that "methodological naturalism"—the commitment of all scientists to reject "supernatural" explanations—is itself not an empirically verifiable principle or fact, but rather an almost Kantian precondition of scientific knowledge. An article of faith, some cold-eyed Feyerabend fans might say.
  • He writes, "ID is not a scientific theory at all because there is no empirical observation that can possibly contradict it. Anything we observe in nature could, in principle, be attributed to an unspecified intelligent designer who works in mysterious ways." But earlier in the book, he correctly argues against Karl Popper that susceptibility to falsification cannot be the sole criterion of science, because science also confirms. It is, in principle, possible that an empirical observation could confirm intelligent design—i.e., that magic moment when the ultimate UFO lands with representatives of the intergalactic society that planted early life here, and we accept their evidence that they did it.
  • "As long as we do not venture to make hypotheses about who the designer is and why and how she operates," he writes, "there are no empirical constraints on the 'theory' at all. Anything goes, and therefore nothing holds, because a theory that 'explains' everything really explains nothing."
  • Here, Pigliucci again mixes up what's likely or provable with what's logically possible or rational. The creation stories of traditional religions and scriptures do, in effect, offer hypotheses, or claims, about who the designer is—e.g., see the Bible.
  • Far from explaining nothing because it explains everything, such an explanation explains a lot by explaining everything. It just doesn't explain it convincingly to a scientist with other evidentiary standards.
  • A sensible person can side with scientists on what's true, but not with Pigliucci on what's rational and possible. Pigliucci occasionally recognizes that. Late in his book, he concedes that "nonscientific claims may be true and still not qualify as science." But if that's so, and we care about truth, why exalt science to the degree he does? If there's really a heaven, and science can't (yet?) detect it, so much the worse for science.
  • Pigliucci quotes a line from Aristotle: "It is the mark of an educated mind to be able to entertain a thought without accepting it." Science warriors such as Pigliucci, or Michael Ruse in his recent clash with other philosophers in these pages, should reflect on a related modern sense of "entertain." One does not entertain a guest by mocking, deriding, and abusing the guest. Similarly, one does not entertain a thought or approach to knowledge by ridiculing it.
  • Long live Skeptical Inquirer! But can we deep-six the egomania and unearned arrogance of the science patriots? As Descartes, that immortal hero of scientists and skeptics everywhere, pointed out, true skepticism, like true charity, begins at home.
  • Carlin Romano, critic at large for The Chronicle Review, teaches philosophy and media theory at the University of Pennsylvania.
  •  
    April 25, 2010: Science Warriors' Ego Trips
Weiye Loh

The Way We Live Now - Metric Mania - NYTimes.com - 0 views

  • In the realm of public policy, we live in an age of numbers.
  • Do we hold an outsize belief in our ability to gauge complex phenomena, measure outcomes and come up with compelling numerical evidence? A well-known quotation usually attributed to Einstein is “Not everything that can be counted counts, and not everything that counts can be counted.” I’d amend it to a less eloquent, more prosaic statement: Unless we know how things are counted, we don’t know if it’s wise to count on the numbers.
  • The problem isn’t with statistical tests themselves but with what we do before and after we run them.
  • ...9 more annotations...
  • First, we count if we can, but counting depends a great deal on previous assumptions about categorization. Consider, for example, the number of homeless people in Philadelphia, or the number of battered women in Atlanta, or the number of suicides in Denver. Is someone homeless if he’s unemployed and living with his brother’s family temporarily? Do we require that a woman self-identify as battered to count her as such? If a person starts drinking day in and day out after a cancer diagnosis and dies from acute cirrhosis, did he kill himself? The answers to such questions significantly affect the count.
  • Second, after we’ve gathered some numbers relating to a phenomenon, we must reasonably aggregate them into some sort of recommendation or ranking. This is not easy. By appropriate choices of criteria, measurement protocols and weights, almost any desired outcome can be reached (the first sketch after this list makes this concrete with hypothetical school scores).
  • Are there good reasons the authors picked the criteria they did? Why did they weigh the criteria in the way they did? And would a different, equally defensible weighting of the same criteria have produced a significantly different ranking?
  • Since the answer to the last question is usually yes, the problem of reasonable aggregation is no idle matter.
  • These two basic procedures — counting and aggregating — have important implications for public policy. Consider the plan to evaluate the progress of New York City public schools inaugurated by the city a few years ago. While several criteria were used, much of a school’s grade was determined by whether students’ performance on standardized state tests showed annual improvement. This approach risked putting too much weight on essentially random fluctuations and induced schools to focus primarily on the topics on the tests. It also meant that the better schools could receive mediocre grades because they were already performing well and had little room for improvement. Conversely, poor schools could receive high grades by improving just a bit.
  • Medical researchers face similar problems when it comes to measuring effectiveness.
  • Suppose two regions differ only in whether they screen for a certain disease, and that whenever people contract it, they always get it in their mid-60s and live to the age of 75. In the first region, an early screening program detects such people in their 60s. Because these people live to age 75, the five-year survival rate is 100 percent. People in the second region are not screened and thus do not receive their diagnoses until symptoms develop in their early 70s, but they, too, die at 75, so their five-year survival rate is 0 percent. The laissez-faire approach thus yields the same results as the universal screening program, yet if five-year survival were the criterion for effectiveness, universal screening would be deemed the best practice (the second sketch below works through this arithmetic).
  • Because so many criteria can be used to assess effectiveness — median or mean survival times, side effects, quality of life and the like — there is a case to be made against mandating that doctors follow what seems at any given time to be the best practice. Perhaps, as some have suggested, we should merely nudge them with gentle incentives. A comparable tentativeness may be appropriate when devising criteria for effective schools.
  • Arrow’s Theorem, a famous result in mathematical economics, essentially states that no voting system satisfying certain minimal conditions can be guaranteed to always yield a fair or reasonable aggregation of the voters’ rankings of several candidates. A squishier analogue for the field of social measurement would say something like this: No method of measuring a societal phenomenon satisfying certain minimal conditions exists that can’t be second-guessed, deconstructed, cheated, rejected or replaced. This doesn’t mean we shouldn’t be counting — but it does mean we should do so with as much care and wisdom as we can muster.
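As a minimal illustration of the aggregation point above, here is a short Python sketch. The school names, metric values and weightings are all hypothetical, invented for this example rather than taken from the article; the only claim is that two defensible weightings of the same raw numbers can crown different "best" schools.

# Minimal sketch (hypothetical data): the same raw metrics, aggregated with
# two different but individually defensible weightings, yield different rankings.

# Raw metrics for three hypothetical schools, each scored 0-100.
schools = {
    "School A": {"test_gains": 90, "attendance": 70, "graduation": 60},
    "School B": {"test_gains": 60, "attendance": 85, "graduation": 90},
    "School C": {"test_gains": 75, "attendance": 80, "graduation": 78},
}

# Two weighting schemes; both sum to 1, and neither is obviously "wrong".
weightings = {
    "emphasize test gains": {"test_gains": 0.7, "attendance": 0.2, "graduation": 0.1},
    "emphasize graduation": {"test_gains": 0.2, "attendance": 0.2, "graduation": 0.6},
}

def composite(metrics, weights):
    """Weighted sum of the metrics -- the 'aggregation' step."""
    return sum(weights[k] * metrics[k] for k in weights)

for name, weights in weightings.items():
    ranking = sorted(schools, key=lambda s: composite(schools[s], weights), reverse=True)
    scores = {s: round(composite(schools[s], weights), 1) for s in ranking}
    print(f"{name}: {ranking}  {scores}")

# Output: the first weighting puts School A on top, the second puts School B on top --
# the same underlying numbers, but a different "best school".

Neither weighting is dishonest; the point is simply that the choice of weights does real work that the published ranking conceals.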
  •  
    The Way We Live Now: Metric Mania
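The screening comparison in the annotations above is easy to make concrete. This second sketch uses a hypothetical cohort built on the article's stylized assumptions (onset in the mid-60s, death at 75, diagnosis at 65 with screening or around 72 without); the specific ages and cohort size are assumptions for illustration only.

# Minimal sketch of lead-time bias (hypothetical cohort, stylized as in the text):
# everyone gets the disease in their mid-60s and dies at 75; only the age at
# diagnosis differs between the two regions.

def five_year_survival(ages_at_diagnosis, ages_at_death):
    """Fraction of patients still alive five years after diagnosis."""
    survivors = sum(1 for dx, death in zip(ages_at_diagnosis, ages_at_death)
                    if death - dx >= 5)
    return survivors / len(ages_at_death)

ages_at_death = [75] * 1000       # every patient dies at 75

# Region 1: universal screening catches the disease at 65.
screened_dx = [65] * 1000
# Region 2: no screening; diagnosis only when symptoms appear, around 72.
unscreened_dx = [72] * 1000

print("Screened region:  ", five_year_survival(screened_dx, ages_at_death))    # 1.0
print("Unscreened region:", five_year_survival(unscreened_dx, ages_at_death))  # 0.0

# Both regions have identical outcomes (death at 75), yet "five-year survival"
# is 100 percent in one and 0 percent in the other: the metric rewards earlier
# diagnosis even when it changes nothing.

Both regions fare identically, yet the five-year survival metric declares screening a triumph; that is one reason a single headline metric is a shaky basis for mandating best practice.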
Weiye Loh

The hidden philosophy of David Foster Wallace - Salon.com Mobile - 0 views

  • Taylor's argument, which he himself found distasteful, was that certain logical and seemingly unarguable premises lead to the conclusion that even in matters of human choice, the future is as set in stone as the past. We may think we can affect it, but we can't.
  • The question of free will and human responsibility is one that, with advances in neuroscience, is of increasing urgency in jurisprudence, social codes and personal conduct. The book also shows a brilliant young man struggling against fatalism, performing exquisite exercises to convince others, and maybe himself, that what we choose to do is what determines the future, rather than the future more or less determining what we choose to do. This intellectual struggle on Wallace's part seems now a kind of emotional foreshadowing of his suicide. He was a victim of depression from an early age — even during his undergraduate years — and the future never looks more intractable than it does to someone who is depressed.
  • "Fate, Time, and Language" reminded me of how fond philosophers are of extreme situations in creating their thought experiments. In this book alone we find a naval battle, the gallows, a shotgun, poison, an accident that leads to paraplegia, somebody stabbed and killed, and so on. Why not say "I have a pretzel in my hand today. Tomorrow I will have eaten it or not eaten it" instead of "I have a gun in my hand and I will either shoot you through the heart and feast on your flesh or I won't"? Well, OK — the answer is easy: The extreme and violent scenarios catch our attention more forcefully than pretzels do. Also, philosophers, sequestered and meditative as they must be, may long for real action — beyond beekeeping.
  • ...1 more annotation...
  • Wallace, in his essay, at the very center of trying to show that we can indeed make meaningful choices, places a terrorist in the middle of Amherst's campus with his finger on the trigger mechanism of a nuclear weapon. It is by far the most narratively arresting moment in all of this material, and it says far more about the author's approaching antiestablishment explosions of prose and his extreme emotional makeup than it does about tweedy profs fantasizing about ordering their ships into battle. For, after all, who, besides everyone around him, would the terrorist have killed?
  •  
    In 1962, a philosopher (and world-famous beekeeper) named Richard Taylor published a soon-to-be-notorious essay called "Fatalism" in the Philosophical Review.
Weiye Loh

Review: What Rawls Hath Wrought | The National Interest - 0 views

  • The primacy of this ideal is very recent. It came about quite abruptly in the late 1970s, a full thirty years after World War II. And the ascendancy of rights as we now understand them came as a response, in part, to developments in the academy.
  • There were versions of utilitarianism, some scornful of rights (with Jeremy Bentham describing them as “nonsense upon stilts”), others that accepted that rights have important social functions (as in John Stuart Mill), but none of them asserted that rights were fundamental in ethical and political thinking.
  • There were various kinds of historicism—the English thinker Michael Oakeshott’s conservative traditionalism and the American scholar Richard Rorty’s postmodern liberalism, for example—that viewed human values as cultural creations, whose contents varied significantly from society to society. There was British theorist Isaiah Berlin’s value pluralism, which held that while some values are universally human, they conflict with one another in ways that do not always have a single rational solution. There were also varieties of Marxism which understood rights in explicitly historical terms.
  • ...2 more annotations...
  • human rights were discussed—when they were mentioned at all—as demands made in particular times and places. Some of these demands might be universal in scope—the demand that torture be prohibited everywhere, for example, was frequently (though not always) formulated in terms of an all-encompassing necessity—but no one imagined that human rights comprised the only possible universal morality.
  • the notion that rights are the foundation of society came only with the rise of the Harvard philosopher John Rawls’s vastly influential A Theory of Justice (1971). In the years following, it slowly came to be accepted that human rights were the bottom line in political morality.
Weiye Loh

Most scientists in this country are Democrats. That's a problem. - By Daniel Sarewitz - Slate Magazine - 0 views

  • A Pew Research Center Poll from July 2009 showed that only around 6 percent of U.S. scientists are Republicans; 55 percent are Democrats, 32 percent are independent, and the rest "don't know" their affiliation.
  • When President Obama appears Wednesday on Discovery Channel's Mythbusters (9 p.m. ET), he will be there not just to encourage youngsters to do their science homework but also to reinforce the idea that Democrats are the party of science and rationality. And why not? Most scientists are already on his side.
  • Yet, partisan politics aside, why should it matter that there are so few Republican scientists? After all, it's the scientific facts that matter, and facts aren't blue or red.
  • ...7 more annotations...
  • For 20 years, evidence about global warming has been directly and explicitly linked to a set of policy responses demanding international governance regimes, large-scale social engineering, and the redistribution of wealth. These are the sort of things that most Democrats welcome, and most Republicans hate. No wonder the Republicans are suspicious of the science.
  • Think about it: The results of climate science, delivered by scientists who are overwhelmingly Democratic, are used over a period of decades to advance a political agenda that happens to align precisely with the ideological preferences of Democrats. Coincidence—or causation?
  • How would a more politically diverse scientific community improve this situation? First, it could foster greater confidence among Republican politicians about the legitimacy of mainstream science. Second, it would cultivate more informed, creative, and challenging debates about the policy implications of scientific knowledge. This could help keep difficult problems like climate change from getting prematurely straitjacketed by ideology. A more politically diverse scientific community would, overall, support a healthier relationship between science and politics.
  • American society has long tended toward pragmatism, with a great deal of respect for the value and legitimacy not just of scientific facts, but of scientists themselves.
  • Yet this exceptional status could well be forfeit in the escalating fervor of national politics, given that most scientists are on one side of the partisan divide. If that public confidence is lost, it would be a huge and perhaps unrecoverable loss for a democratic society.
  • A democratic society needs Republican scientists.
  • I have to imagine that 50 years ago there were a lot more Republican scientists, when the Democrats were still the party of Southern Baptists. As a rational person, I find it impossible to support any candidate who panders to the religious right, and in current politics that's every national Republican.