
New Media Ethics 2009 course: Group items tagged Fraud


Weiye Loh

Facial Recognition Software Singles Out Innocent Man | The Utopianist - Think Bigger - 0 views

  • Gass was at home when he got a letter from the Massachusetts Registry of Motor Vehicles saying his license had been revoked. Why? The Boston Globe explains: An antiterrorism computerized facial recognition system that scans a database of millions of state driver’s license images had picked his image as that of a possible fraud. It turned out Gass was flagged because he looks like another driver, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is.
  •  
    While the technology is a boon to police departments looking to save time and money fighting identity fraud, it's frightening to think that people are having their lives seriously disrupted by computer errors. If you are, say, a truck driver, something like this could cost you weeks of lost pay, a loss many Americans simply can't absorb. And what if this technology expands beyond rooting out identity fraud? What if you were slammed against a car hood because police falsely identified you as a criminal? The fact that Gass didn't even have a chance to contest the computer's findings before his license was suspended is especially disturbing. What would you do if this happened to you?
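    The failure mode is easy to picture in code. Below is a minimal, hypothetical sketch (Python) of the matching step such a system might use: license photos are compared as embedding vectors, and anything above a similarity threshold is flagged as a possible fraud. The names, vector size, and threshold here are assumptions for illustration, not the actual RMV system.

        import numpy as np

        def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
            """Cosine similarity between two face-embedding vectors."""
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def flag_possible_fraud(probe: np.ndarray, database: dict, threshold: float = 0.92) -> list:
            """Return every enrolled license at least `threshold`-similar to the probe.
            A lookalike clears the bar as easily as a forged duplicate, which is
            how an innocent driver gets flagged."""
            return [name for name, emb in database.items()
                    if cosine_similarity(probe, emb) >= threshold]

        # Hypothetical data: two different drivers who happen to look alike.
        rng = np.random.default_rng(0)
        gass = rng.normal(size=128)                          # one driver's photo embedding
        lookalike = gass + rng.normal(scale=0.05, size=128)  # a different, similar-looking driver
        database = {"gass_license": gass, "lookalike_license": lookalike}

        print(flag_possible_fraud(gass, database))           # both licenses come back flagged

    Against a database of millions of images, even a tiny false-match rate yields a steady stream of innocent hits, which is why the lack of any pre-suspension appeal matters so much.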
Weiye Loh

It's appalling - 0 views

  •  
    Credit Card fraud. CASE's reply.
Weiye Loh

More credit card fraud if consumers less liable? - 0 views

  •  
    Same case on credit card fraud
Weiye Loh

Hacktivists as Gadflies - NYTimes.com - 0 views

  •  
    "Consider the case of Andrew Auernheimer, better known as "Weev." When Weev discovered in 2010 that AT&T had left private information about its customers vulnerable on the Internet, he and a colleague wrote a script to access it. Technically, he did not "hack" anything; he merely executed a simple version of what Google Web crawlers do every second of every day - sequentially walk through public URLs and extract the content. When he got the information (the e-mail addresses of 114,000 iPad users, including Mayor Michael Bloomberg and Rahm Emanuel, then the White House chief of staff), Weev did not try to profit from it; he notified the blog Gawker of the security hole. For this service Weev might have asked for free dinners for life, but instead he was recently sentenced to 41 months in prison and ordered to pay a fine of more than $73,000 in damages to AT&T to cover the cost of notifying its customers of its own security failure. When the federal judge Susan Wigenton sentenced Weev on March 18, she described him with prose that could have been lifted from the prosecutor Meletus in Plato's "Apology." "You consider yourself a hero of sorts," she said, and noted that Weev's "special skills" in computer coding called for a more draconian sentence. I was reminded of a line from an essay written in 1986 by a hacker called the Mentor: "My crime is that of outsmarting you, something that you will never forgive me for." When offered the chance to speak, Weev, like Socrates, did not back down: "I don't come here today to ask for forgiveness. I'm here to tell this court, if it has any foresight at all, that it should be thinking about what it can do to make amends to me for the harm and the violence that has been inflicted upon my life." He then went on to heap scorn upon the law being used to put him away - the Computer Fraud and Abuse Act, the same law that prosecutors used to go after the 26-year-old Internet activist Aaron Swart
yongernn teo

Eli Lilly Accused of Unethical Marketing of Zyprexa - 0 views

  •  
    Summary of the unethical marketing of Zyprexa by Eli Lilly:

    Eli Lilly is a global pharmaceutical company. In 2006, it was charged with the unethical marketing of Zyprexa, its top-selling drug, which is approved only for the treatment of schizophrenia and bipolar disorder.
    Firstly, Eli Lilly downplayed in a report the risks of obesity and increased blood sugar associated with Zyprexa. Although Eli Lilly had been aware of these risks for at least a decade, it went ahead without emphasizing their significance, for fear of jeopardizing sales.
    Secondly, Eli Lilly ran a promotional campaign called Viva Zyprexa, encouraging off-label use of the drug in patients who had neither schizophrenia nor bipolar disorder. The campaign targeted the elderly with dementia, yet the drug was not approved to treat dementia; in fact, it could increase the risk of death in older patients with dementia-related psychosis.
    All this was done to boost sales of Zyprexa, which alone could bring in $4 billion annually, and so more revenue for Eli Lilly.

    Ethical question:
    To what extent should pharmaceutical companies go to inform potential consumers of the side effects of their drugs?

    Ethical problem:
    The information disseminated through marketing campaigns has to be true and transparent. There should not be any hidden agenda behind the amount of information being released. In this case, to prevent sales from plummeting, Eli Lilly downplayed the side effects of Zyprexa. It also encouraged off-label use.
    It is very important that pharmaceutical companies practice good ethics, as this concerns the health of their consumers. While a drug may remedy one health problem, its side effects could cause others. All of this has to be conveyed to the consumer who exchanges his money for the product. Not being transparent and honest with the information of the product…
Jiamin Lin

Firms allowed to share private data - 0 views

  •  
    Companies that request customers' private information may in turn distribute these confidential particulars to others. As a result, cases of fraud and identity theft have surfaced, with fraudsters using the distributed identities to apply for loans or credit cards. Unlike other countries, Singapore has no privacy law in place to safeguard an individual's data against unauthorized commercial use, and fraudsters are able to ride on this loophole. Ethical question: Is it right for companies to request customers' private information for certain purposes? Is it fair that they distribute this information to third parties, perhaps as a way to make money? Problem: I think the main problem is that there isn't a law in Singapore that safeguards an individual's data against unauthorized commercial use. The Model Data Protection Code tries to do the above, but it is, after all, still a voluntary scheme: companies can opt to adopt it, but whether they apply it consistently is another issue. As long as a privacy law is not in place, this issue will continue to recur in Singapore.
Li-Ling Gan

Facebook awarded $873 million in spam case | Security - CNET News - 0 views

  •  
    Description of case: The issue in this case is spamming. In summary, a Canadian man was accused of sending spam messages to Facebook members, using this to earn money for his company. Facebook sued him under the Can-Spam (Controlling the Assault of Non-Solicited Pornography and Marketing) Act and was awarded $873 million in damages for winning the case. Ethical question: I think the most important question here is to what extent it is unethical to send messages to people who might not want such information. In the case of Facebook, should a line be drawn between sending such 'spam' messages to people you do not know and sending them to people already on your 'Friends' list or in the same online community? Ethical problem: I feel the problem of wastage surfaces with spamming. Resources are used up to keep the Internet working, and these are wasted when people receive unwanted mail or messages that they end up deleting. Furthermore, a large amount of the spam received consists of scams, which touches on the problem of fraud and cheating other users for the sender's benefit.
Weiye Loh

Credit card stolen? Mind the pitfalls - 0 views

  •  
    More on credit card fraud
Weiye Loh

Card fraud: Banks not doing enough - 0 views

  • Customers cannot be faulted for merchants' negligence in verifying signatures on credit cards, or for the banks' failure to implement an effective, foolproof secondary security mechanism to protect cardholders.
  •  
    Contrast this case in Singapore with countries like the United States or Malaysia, which limit consumers' liability in such cases to a specific amount - which policy is better? On another note, I have always been intrigued by the fact that organizations, while being infinitely more powerful, are legally regarded as individuals with individual rights. What does this say about the identity of organizations?
  •  
    The issue of responsibility was heavily debated, and the parties identified are (1) the credit card owners, (2) the banks, (3) the retailers, and (4) government bodies, e.g. MAS and CASE, with their regulations and policies. Which party do you all think should shoulder the moral obligations of owning the technology of cashless payment? How then should this translate into laws and enforcement?
  •  
    The case came to light when a certain Mdm Tan Shock Ling's credit cards were stolen. Within an hour, the fraudsters had used her credit cards to chalk up bills amounting to $17k. She was only notified of the purchases when a bank called to confirm whether she had just bought a Rolex watch using one of her credit cards. The banks asked her to pay the bills, because they cover only payments made after the loss of the cards has been reported. There were a few articles on the issue, with The New Paper sending its reporters (Chinese women) out shopping with an Indian man's credit card. Their investigative journalism showed that retailers are generally lax in verifying a purchaser's identity against the name and signature on the card.
Weiye Loh

Skepticblog » ClimateGate Follow Up - 0 views

  • Recently the third of three independent reviews of the Climatic Research Unit (CRU) e-mail scandal has been completed. All three reviews concluded that the CRU was not hiding, destroying, or manipulating data.
  • At the time there were those who believed the e-mails to be the innocent chatter of scientists and others who thought it was the smoking gun of scientific fraud. At the time I wrote: I don’t know what the lessons of climategate are yet – we need to see what actually happened first. But how people deal with climategate says a lot about their process. Those who are making bold claims based upon ambiguous, circumstantial, and out-of-context evidence, are not doing themselves or their side any favors.
  • after a thorough review there is no evidence of any actual scientific fraud, but the scientists were not adequately complying with FOI requests. It seems the climate scientists at the CRU had developed a bit of a bunker mentality and felt justified in frustrating what they felt were frivolous and harassing FOI requests.
  • This, in turn, seems to be a symptom of an obscure scientific discipline (climate science) being thrust in recent years into the middle of a raging world-wide political controversy. There was not a culture among these scientists of dealing with the politically controversial aspects of their science.
  • This episode reminds us that scientists are human, and therefore science itself is a human endeavor and subject to all the foibles that plague any human activity.
  • there were charges that the CRU did not have backups of data they relied upon for their conclusions. But the CRU was never the primary source of this data – they simply aggregated and analyzed it. The primary data has always been available from the sources. As the BBC reports: “We find that CRU was not in a position to withhold access to such data or tamper with it,” it says. “We demonstrated that any independent researcher can download station data directly from primary sources and undertake their own temperature trend analysis”.
  •  
    CLIMATEGATE FOLLOW UP by STEVEN NOVELLA, Jul 12 2010
Weiye Loh

The Mysterious Decline Effect | Wired Science | Wired.com - 0 views

  • Question #1: Does this mean I don’t have to believe in climate change? Me: I’m afraid not. One of the sad ironies of scientific denialism is that we tend to be skeptical of precisely the wrong kind of scientific claims. In poll after poll, Americans have dismissed two of the most robust and widely tested theories of modern science: evolution by natural selection and climate change. These are theories that have been verified in thousands of different ways by thousands of different scientists working in many different fields. (This doesn’t mean, of course, that such theories won’t change or get modified – the strength of science is that nothing is settled.) Instead of wasting public debate on creationism or the rhetoric of Senator Inhofe, I wish we’d spend more time considering the value of spinal fusion surgery, or second generation antipsychotics, or the verity of the latest gene association study. The larger point is that we need to do a better job of considering the context behind every claim. In 1951, the Harvard philosopher Willard Van Orman Quine published “Two Dogmas of Empiricism.” In the essay, Quine compared the truths of science to a spider’s web, in which the strength of the lattice depends upon its interconnectedness. (Quine: “The unit of empirical significance is the whole of science.”) One of the implications of Quine’s paper is that, when evaluating the power of a given study, we need to also consider the other studies and untested assumptions that it depends upon. Don’t just fixate on the effect size – look at the web. Unfortunately for the denialists, climate change and natural selection have very sturdy webs.
  • biases are not fraud. We sometimes forget that science is a human pursuit, mingled with all of our flaws and failings. (Perhaps that explains why an episode like Climategate gets so much attention.) If there’s a single theme that runs through the article it’s that finding the truth is really hard. It’s hard because reality is complicated, shaped by a surreal excess of variables. But it’s also hard because scientists aren’t robots: the act of observation is simultaneously an act of interpretation.
  • (As Paul Simon sang, “A man sees what he wants to see and disregards the rest.”) Most of the time, these distortions are unconscious – we don’t even know we are misperceiving the data. However, even when the distortion is intentional, it still rarely rises to the level of outright fraud. Consider the story of Mike Rossner. He’s executive director of the Rockefeller University Press, and helps oversee several scientific publications, including The Journal of Cell Biology.  In 2002, while trying to format a scientific image in Photoshop that was going to appear in one of the journals, Rossner noticed that the background of the image contained distinct intensities of pixels. “That’s a hallmark of image manipulation,” Rossner told me. “It means the scientist has gone in and deliberately changed what the data looks like. What’s disturbing is just how easy this is to do.” This led Rossner and his colleagues to begin analyzing every image in every accepted paper. They soon discovered that approximately 25 percent of all papers contained at least one “inappropriately manipulated” picture. Interestingly, the vast, vast majority of these manipulations (~99 percent) didn’t affect the interpretation of the results. Instead, the scientists seemed to be photoshopping the pictures for aesthetic reasons: perhaps a line on a gel was erased, or a background blur was deleted, or the contrast was exaggerated. In other words, they wanted to publish pretty images. That’s a perfectly understandable desire, but it gets problematic when that same basic instinct – we want our data to be neat, our pictures to be clean, our charts to be clear – is transposed across the entire scientific process.
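    Rossner's tell, distinct pixel intensities where a scanned background should vary smoothly, can be screened for mechanically. The sketch below (Python) is a crude, hypothetical version of such a check for a grayscale gel image; the background threshold and the two-value heuristic are assumptions, not the journal's actual procedure.

        import numpy as np

        def background_flatness(image: np.ndarray, bg_threshold: int = 30) -> float:
            """Fraction of 'background' pixels (darker than bg_threshold) accounted
            for by the two most common intensity values. A raw scan's background is
            noisy, so the fraction stays modest; a region erased or pasted in an
            editor tends to be one flat value, which pushes the fraction up."""
            background = image[image < bg_threshold]
            if background.size == 0:
                return 0.0
            _, counts = np.unique(background, return_counts=True)
            return float(np.sort(counts)[-2:].sum() / background.size)

        rng = np.random.default_rng(1)
        natural = rng.integers(0, 25, size=(200, 200))  # noisy, scanner-like background
        edited = natural.copy()
        edited[50:120, 60:140] = 5                      # a rectangle "cleaned" to one value

        print(round(background_flatness(natural), 2))   # baseline, roughly 2/25
        print(round(background_flatness(edited), 2))    # noticeably higher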
  • One of the philosophy papers that I kept on thinking about while writing the article was Nancy Cartwright’s essay “Do the Laws of Physics State the Facts?” Cartwright used numerous examples from modern physics to argue that there is often a basic trade-off between scientific “truth” and experimental validity, so that the laws that are the most true are also the most useless. “Despite their great explanatory power, these laws [such as gravity] do not describe reality,” Cartwright writes. “Instead, fundamental laws describe highly idealized objects in models.”  The problem, of course, is that experiments don’t test models. They test reality.
  • Cartwright’s larger point is that many essential scientific theories – those laws that explain things – are not actually provable, at least in the conventional sense. This doesn’t mean that gravity isn’t true or real. There is, perhaps, no truer idea in all of science. (Feynman famously referred to gravity as the “greatest generalization achieved by the human mind.”) Instead, what the anomalies of physics demonstrate is that there is no single test that can define the truth. Although we often pretend that experiments and peer-review and clinical trials settle the truth for us – that we are mere passive observers, dutifully recording the results – the actuality of science is a lot messier than that. Richard Rorty said it best: “To say that we should drop the idea of truth as out there waiting to be discovered is not to say that we have discovered that, out there, there is no truth.” Of course, the very fact that the facts aren’t obvious, that the truth isn’t “waiting to be discovered,” means that science is intensely human. It requires us to look, to search, to plead with nature for an answer.
Weiye Loh

DenialDepot: A word of caution to the BEST project team - 0 views

  • 1) Any errors, however inconsequential, will be taken Very Seriously and accusations of fraud will be made.
  • 2) If you adjust the raw data we will accuse you of fraudulently fiddling the figures whilst cooking the books.
  • 3) If you don't adjust the raw data we will accuse you of fraudulently failing to account for station biases and UHI.
  • 7) By all means publish all your source code, but we will still accuse you of hiding the methodology for your adjustments.
  • 8) If you publish results to your website and errors are found, we will accuse you of a Very Serious Error irregardless of severity (see point #1) and bemoan the press release you made about your results even though you won't remember making any press release about your results.
  • 9) With regard to point #8 above, at extra cost and time to yourself you must employ someone to thoroughly check each monthly update before it is published online, even if this delays publication of the results till the end of the month. You might be surprised at this because no-one actually relies on such freshly published data anyway, and aren't the many eyes of blog audit better than a single pair of eyes? Well, that's irrelevant. See points #1 and #8.
  • 10) If you don't publish results promptly at the start of the month on the public website, but instead, say, publish the results to a private site for checks to be performed before release, we will accuse you of engaging in unscientific-like secrecy and massaging the data behind closed doors.
  • 14) If any region/station shows a warming trend that doesn't match the raw data, and we can't understand why, we will accuse you of fraud and dismiss the entire record. Don't expect us to have to read anything to understand results.
  • 15) You must provide all input datasets on your website. It's no good referencing NOAA's site and saying they "own" the GHCN data for example. I don't want their GHCN raw temperatures file, I want the one on your hard drive which you used for the analysis, even if you claim they are the same. If you don't do this we will accuse you of hiding the data and preventing us from checking your results.
  • 24. In the event that you comply with all of the above, we will point out that a mere hundred-odd years of data is irrelevant next to the 4.5 billion year history of Earth. So why do you even bother?
  • 23) In the unlikely event that I haven't wasted enough of your time forcing you to comply with the above rules, I also demand to see all emails you have sent or will send during the period 1950 to 2050 that contain any of these keywords
  • 22) We don't need any scrutiny because our role isn't important.
  • 17) We will treat your record as if no alternative exists. As if your record is the make or break of Something Really Important (see point #1) and we just can't check the results in any other way.
  • 16) You are to blame for any station data your team uses. If we find out that a station you use is next to an AC Unit, we will conclude you personally planted the thermometer there to deliberately get warming.
  • an article today by Roger Pielke Nr. (no relation) that posited the fascinating concept that thermometers are just as capricious and unreliable proxies for temperature as tree rings. In fact probably more so, and re-computing global temperature by gristlecone pines would reveal the true trend of global cooling, which will be in all our best interests and definitely NOT just those of well paying corporate entities.
  •  
    Dear Professor Muller and Team, If you want your Berkeley Earth Surface Temperature project to succeed and become the center of attention you need to learn from the vast number of mistakes Hansen and Jones have made with their temperature records. To aid this task I created a point-by-point list for you.
Weiye Loh

Red-Wine Researcher Charged With 'Photoshop' Fraud - 0 views

  •  
    A University of Connecticut researcher known for touting the health benefits of red wine is guilty of 145 counts of fabricating and falsifying data with image-editing software, according to a 3-year university investigation made public Wednesday. The researcher, Dipak K. Das, PhD, is a director of the university's Cardiovascular Research Center (CRC) and a professor in the Department of Surgery. The university stated in a press release that it has frozen all externally funded research in Dr. Das's lab and turned down $890,000 in federal research grants awarded to him. The process to dismiss Dr. Das from the university is already underway, the university added.
Weiye Loh

Wk 4 Online censorship & digital access: Mormon Church Attacks Wikileaks - 6 views

WIKILEAKS RELEASES SECRET CHURCH DOCUMENTS! The first link is an article regarding Wikileaks releasing a 'copyrighted' and confidential Church document of the Mormons (also known as the Church of J...

Mormons Scientology Wikileaks Copyright Censorship

Olivia Chang

The Phishing Problem - 7 views

URL: http://www.ft.com/cms/s/0/7c03fd14-b011-11dd-a795-0000779fd18c.html Case Summary: The world of the Internet is slowly becoming dangerous ground to tread on. The onset of viruses, hackers and ...

phishing scams

started by Olivia Chang on 19 Aug 09 no follow-up yet
Weiye Loh

What is the role of the state? | Martin Wolf's Exchange | FT.com - 0 views

  • This question has concerned western thinkers at least since Plato (5th-4th century BCE). It has also concerned thinkers in other cultural traditions: Confucius (6th-5th century BCE); China’s legalist tradition; and India’s Kautilya (4th-3rd century BCE). The perspective here is that of the contemporary democratic west.
  • The core purpose of the state is protection. This view would be shared by everybody, except anarchists, who believe that the protective role of the state is unnecessary or, more precisely, that people can rely on purely voluntary arrangements.
  • Contemporary Somalia shows the horrors that can befall a stateless society. Yet horrors can also befall a society with an over-mighty state. It is evident, because it is the story of post-tribal humanity, that the powers of the state can be abused for the benefit of those who control it.
  • In his final book, Power and Prosperity, the late Mancur Olson argued that the state was a “stationary bandit”. A stationary bandit is better than a “roving bandit”, because the latter has no interest in developing the economy, while the former does. But it may not be much better, because those who control the state will seek to extract the surplus over subsistence generated by those under their control.
  • In the contemporary west, there are three protections against undue exploitation by the stationary bandit: exit, voice (on the first two of these, see this on Albert Hirschman) and restraint. By “exit”, I mean the possibility of escaping from the control of a given jurisdiction, by emigration, capital flight or some form of market exchange. By “voice”, I mean a degree of control over the state, most obviously by voting. By “restraint”, I mean independent courts, division of powers, federalism and entrenched rights.
  • defining what a democratic state, viewed precisely as such a constrained protective arrangement, is entitled to do.
  • There exists a strand in classical liberal or, in contemporary US parlance, libertarian thought which believes the answer is to define the role of the state so narrowly and the rights of individuals so broadly that many political choices (the income tax or universal health care, for example) would be ruled out a priori. In other words, it seeks to abolish much of politics through constitutional restraints. I view this as a hopeless strategy, both intellectually and politically. It is hopeless intellectually, because the values people hold are many and divergent and some of these values do not merely allow, but demand, government protection of weak, vulnerable or unfortunate people. Moreover, such values are not “wrong”. The reality is that people hold many, often incompatible, core values. Libertarians argue that the only relevant wrong is coercion by the state. Others disagree and are entitled to do so. It is hopeless politically, because democracy necessitates debate among widely divergent opinions. Trying to rule out a vast range of values from the political sphere by constitutional means will fail. Under enough pressure, the constitution itself will be changed, via amendment or reinterpretation.
  • So what ought the protective role of the state to include? Again, in such a discussion, classical liberals would argue for the “night-watchman” role. The government’s responsibilities are limited to protecting individuals from coercion, fraud and theft and to defending the country from foreign aggression. Yet once one has accepted the legitimacy of using coercion (taxation) to provide the goods listed above, there is no reason in principle why one should not accept it for the provision of other goods that cannot be provided as well, or at all, by non-political means.
  • Those other measures would include addressing a range of externalities (e.g. pollution), providing information and supplying insurance against otherwise uninsurable risks, such as unemployment, spousal abandonment and so forth. The subsidisation or public provision of childcare and education is a way to promote equality of opportunity. The subsidisation or public provision of health insurance is a way to preserve life, unquestionably one of the purposes of the state. Safety standards are a way to protect people against the carelessness or malevolence of others or (more controversially) themselves. All these, then, are legitimate protective measures. The more complex the society and economy, the greater the range of the protections that will be sought.
  • What, then, are the objections to such actions? The answers might be: the proposed measures are ineffective, compared with what would happen in the absence of state intervention; the measures are unaffordable and might lead to state bankruptcy; the measures encourage irresponsible behaviour; and, at the limit, the measures restrict individual autonomy to an unacceptable degree. These are all, we should note, questions of consequences.
  • The vote is more evenly distributed than wealth and income. Thus, one would expect the tenor of democratic policymaking to be redistributive and so, indeed, it is. Those with wealth and income to protect will then make political power expensive to acquire and encourage potential supporters to focus on common enemies (inside and outside the country) and on cultural values. The more unequal are incomes and wealth and the more determined are the “haves” to avoid being compelled to support the “have-nots”, the more politics will take on such characteristics.
  • In the 1970s, the view that democracy would collapse under the weight of its excessive promises seemed to me disturbingly true. I am no longer convinced of this: as Adam Smith said, “There is a great deal of ruin in a nation”. Moreover, the capacity for learning by democracies is greater than I had realised. The conservative movements of the 1980s were part of that learning. But they went too far in their confidence in market arrangements and their indifference to the social and political consequences of inequality. I would support state pensions, state-funded health insurance and state regulation of environmental and other externalities. I am happy to debate details. The ancient Athenians called someone who had a purely private life “idiotes”. This is, of course, the origin of our word “idiot”. Individual liberty does indeed matter. But it is not the only thing that matters. The market is a remarkable social institution. But it is far from perfect. Democratic politics can be destructive. But it is much better than the alternatives. Each of us has an obligation, as a citizen, to make politics work as well as he (or she) can and to embrace the debate over a wide range of difficult choices that this entails.
  •  
    What is the role of the state?
Weiye Loh

nanopolitan: The Marc Hauser Saga: Some Commentary - 0 views

  • Janet Stemwedel at Adventures in Science and Ethics: Punishment, redemption, and celebrity status: still more on the Hauser case: Should there be different standards -- when it comes to our perceptions about misconducting researchers -- for elites and the newbies?
  • David Dobbs in Slate: A Rush to Moral Judgment - What went wrong with Marc Hauser's search for moral foundations. Dobbs blames it on Hauser's impatience -- aka the "Man In A Hurry" syndrome:
  • Chris Kelty at Savage Minds: Marc Hauser's Trolley Problem:
  •  
    The Marc Hauser Saga: Some Commentary
Weiye Loh

Skepticblog » Further Thoughts on Atheism - 0 views

  • Even before I started writing Evolution: How We and All Living Things Came to Be I knew that it would very briefly mention religion, make a mild assertion that religious questions are out of scope for science, and move on. I knew this was likely to provoke blow-back from some in the atheist community, and I knew mentioning that blow-back in my recent post “The Standard Pablum — Science and Atheism” would generate more.
  • Still, I was surprised by the quantity of the responses to the blog post (208 comments as of this moment, many of them substantial letters), and also by the fierceness of some of those responses. For example, according to one poster, “you not only pandered, you lied. And even if you weren’t lying, you lied.” (Several took up this “lying” theme.) Another, disappointed that my children’s book does not tell a general youth audience to look to “secular humanism for guidance,” declared  that “I’d have to tear out that page if I bought the book.”
  • I don’t mean to suggest that there are not points of legitimate disagreement in the mix — there are, many of them stated powerfully. There are also statements of support, vigorous debate, and (for me at least) a good deal of food for thought. I invite anyone to browse the thread, although I’d urge you to skim some of it. (The internet is after all a hyperbole-generating machine.)
  • I lack any belief in any deity. More than that, I am persuaded (by philosophical argument, not scientific evidence) to a high degree of confidence that gods and an afterlife do not exist.
  • do try to distinguish between my work as a science writer and skeptical activist on the one hand, and my personal opinions about religion and humanism on the other.
  • Atheism is a practical handicap for science outreach. I’m not naive about this, but I’m not cynical either. I’m a writer. I’m in the business of communicating ideas about science, not throwing up roadblocks and distractions. It’s good communication to keep things as clear, focused, and on-topic as possible.
  • Atheism is divisive for the skeptical community, and it distracts us from our core mandate. I was blunt about this in my 2007 essay “Where Do We Go From Here?”, writing, I’m both an atheist and a secular humanist, but it is clear to me that atheism is an albatross for the skeptical movement. It divides us, it distracts us, and it marginalizes us. Frankly, we can’t afford that. We need all the help we can get.
  • In What Do I Do Next? I urged skeptics to remember that there are many other skeptics who do hold or identify with some religion. Indeed, the modern skeptical movement is built partly on the work of people of faith (including giants like Harry Houdini and Martin Gardner). You don’t, after all, have to be against god to be against fraud.
  • In my Skeptical Inquirer article “The Paradoxical Future of Skepticism” I argued that skeptics must set aside the conceit that our goal is a cultural revolution or the dawning of a new Enlightenment. … When we focus on that distant, receding, and perhaps illusory goal, we fail to see the practical good we can do, the harm-reduction opportunities right in front of us. The long view subverts our understanding of the scale and hazard of paranormal beliefs, leading to sentiments that the paranormal is “trivial” or “played out.” By contrast, the immediate, local, human view — the view that asks “Will this help someone?” — sees obvious opportunities for every local group and grassroots skeptic to make a meaningful difference.
  • This practical argument, that skepticism can get more done if we keep our mandate tight and avoid alienating our best friends, seems to me an important one. Even so, it is not my main reason for arguing that atheism and skepticism are different projects.
  • In my opinion, metaphysics and ethics are out of scope for science — and therefore out of scope for skepticism. This is by far the most important reason I set aside my own atheism when I put on my “skeptic” hat. It’s not that I don’t think atheism is rational — I do. That’s why I’m an atheist. But I know that I cannot claim scientific authority for a conclusion that science cannot test, confirm, or disprove. And so, I restrict myself as much as possible, in my role as a skeptic and science writer, to investigable claims. I’ve become a cheerleader for this “testable claims” criterion (and I’ll discuss it further in future posts) but it’s not a new or radical constriction of the scope of skepticism. It’s the traditional position occupied by skeptical organizations for decades.
  • In much of the commentary, I see an assumption that I must not really believe that testable paranormal and pseudoscientific claims (“I can read minds”) are different in kind from the untestable claims we often find at the core of religion (“god exists”). I acknowledge that many smart people disagree on this point, but I assure you that this is indeed what I think.
  • I’d like to call out one blogger’s response to my “Standard Pablum” post. The author certainly disagrees with me (we’ve discussed the topic often on Twitter), but I thank him for describing my position fairly: From what I’ve read of Daniel’s writings before, this seems to be a very consistent position that he has always maintained, not a new one he adopted for the book release. It appears to me that when Daniel says that science has nothing to say about religion, he really means it. I have nothing to say to that. It also appears to me that when he says skepticism is a “different project than atheism” he also means it.
  •  
    FURTHER THOUGHTS ON ATHEISM by DANIEL LOXTON, Mar 05 2010
Weiye Loh

After Wakefield: Undoing a decade of damaging debate « Skepticism « Critical ... - 0 views

  • Mass vaccination completely eradicated smallpox, which had been killing one in seven children.  Public health campaigns have also eliminated diphtheria, and reduced the incidence of pertussis, tetanus, measles, rubella and mumps to near zero.
  • when vaccination rates drop, diseases can reemerge in the population. Measles is currently endemic in the United Kingdom, after vaccination rates dropped below 80%. When diphtheria immunization dropped in Russia and Ukraine in the early 1990s, there were over 100,000 cases with 1,200 deaths.  In Nigeria in 2001, unfounded fears of the polio vaccine led to a drop in vaccinations, a re-emergence of infection, and the spread of polio to ten other countries.
  • one reason, and one that has experienced a dramatic upsurge over the past decade or so, has been the fear that vaccines cause autism. The connection between autism and vaccines, in particular the measles, mumps, rubella (MMR) vaccine, has its roots in a paper published by Andrew Wakefield in 1998 in the medical journal The Lancet.  This link has already been completely and thoroughly debunked – there is no evidence to substantiate this connection. But over the past two weeks, the full extent of the deception propagated by Wakefield was revealed. The British Medical Journal has a series of articles from journalist Brian Deer (part 1, part 2), who spent years digging into the facts behind Wakefield, his research, and the Lancet paper.
  • Wakefield’s original paper (now retracted) attempted to link gastrointestinal symptoms and regressive autism in 12 children to the administration of the MMR vaccine. Last year Wakefield was stripped of his medical license for unethical behaviour, including undeclared conflicts of interest.  The most recent revelations demonstrate that it wasn’t just sloppy research – it was fraud.
  • Unbelievably, some groups still hold Wakefield up as some sort of martyr, but now we have the facts: Three of the 9 children said to have autism didn’t have autism at all. The paper claimed all 12 children were normal, before administration of the vaccine. In fact, 5 had developmental delays that were detected prior to the administration of the vaccine. Behavioural symptoms in some children were claimed in the paper as being closely related to the vaccine administration, but documentation showed otherwise. What were initially determined to be “unremarkable” colon pathology reports were changed to “non-specific colitis” after a secondary review. Parents were recruited for the “study” by anti-vaccinationists. The study was designed and funded to support future litigation.
  • As Dr. Paul Offit has been quoted as saying, you can’t unring a bell. So what’s going to stop this bell from ringing? Perhaps an awareness of its fraudulent basis will do more to change perceptions than a decade of scientific investigation has been able to achieve. For the sake of population health, we hope so.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
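    Schooler's puzzle can be simulated directly: run many noisy studies of a small true effect, notice only the striking first results, then replicate them. A minimal sketch (Python) with assumed numbers, a true effect of 0.1 and study noise of 0.5:

        import numpy as np

        rng = np.random.default_rng(42)
        true_effect, noise, n_labs = 0.1, 0.5, 10_000

        first = true_effect + rng.normal(scale=noise, size=n_labs)        # initial studies
        replication = true_effect + rng.normal(scale=noise, size=n_labs)  # independent reruns

        noticed = first > 1.0  # only striking first results attract attention
        print(f"mean of noticed first studies: {first[noticed].mean():.2f}")
        print(f"mean of their replications:    {replication[noticed].mean():.2f}")
        # The replications average out near 0.1, the true effect. The "decline"
        # is selection plus regression to the mean, no cosmic habituation needed.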
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
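    Sterling's 97-per-cent figure falls out of a toy model: run many modestly powered studies of a real but small effect and print only those that clear Fisher's five-per-cent line. A sketch (Python) under assumed parameters:

        import numpy as np

        rng = np.random.default_rng(7)
        n_studies, n, true_effect = 2000, 30, 0.2

        published = []
        for _ in range(n_studies):
            sample = rng.normal(loc=true_effect, scale=1.0, size=n)
            z = sample.mean() * np.sqrt(n)  # z-test against zero (unit variance assumed)
            if z > 1.96:                    # roughly Fisher's five-per-cent line
                published.append(sample.mean())

        print(f"{len(published)} of {n_studies} studies cleared the bar")
        print(f"true effect: {true_effect}; mean published effect: {np.mean(published):.2f}")
        # The file drawer hides the nulls: the printed record looks almost
        # uniformly successful and overstates the effect, with no fraud at all.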
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
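    A funnel graph of this kind takes a dozen lines to produce. The sketch below (Python, matplotlib assumed) simulates studies of a small true effect, drops small null studies to mimic selective reporting, and plots effect estimates against sample size:

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(3)
        true_effect = 0.1
        sizes = rng.integers(10, 500, size=400)  # per-study sample sizes
        effects = rng.normal(loc=true_effect, scale=1 / np.sqrt(sizes))  # noise shrinks with n

        # Selective reporting: small studies surface only if roughly "significant"
        # (about two standard errors above zero); large studies always get written up.
        reported = (sizes > 150) | (effects > 2 / np.sqrt(sizes))

        plt.scatter(effects[reported], sizes[reported], s=8)
        plt.axvline(true_effect, linestyle="--", label="true effect")
        plt.xlabel("estimated effect size")
        plt.ylabel("sample size")
        plt.legend()
        plt.title("Funnel plot with the small null studies missing")
        plt.show()

    Large studies cluster tightly around the true value at the top of the funnel; the base is visibly lopsided toward positive results, the same skew Palmer observed.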
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
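    One way to read the Crabbe result is as an unmodeled lab effect: a hidden site-specific factor multiplies the measured response, so identical protocols still scatter widely, and the most extreme site looks like a discovery. A toy sketch (Python) with invented numbers, not Crabbe's data:

        import numpy as np

        rng = np.random.default_rng(11)

        def run_site(base_cm: float = 650.0, n_mice: int = 8) -> float:
            """Mean extra distance moved at one lab running the 'same' protocol.
            A hidden lab factor (handling, acoustics, who knows) multiplies the
            true effect; mouse-to-mouse noise is comparatively small."""
            hidden_lab_factor = np.exp(rng.normal(scale=0.8))  # unknown and unmeasured
            mice = base_cm * hidden_lab_factor + rng.normal(scale=50.0, size=n_mice)
            return float(mice.mean())

        for lab in ["Portland", "Albany", "Edmonton"]:
            print(f"{lab}: {run_site():7.0f} cm")
        # Same strain, same chow, same equipment in the model, yet one site can
        # come out several-fold higher by chance; that outlier is the one most
        # likely to be published.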
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.