TOK Friends / Group items tagged: fraud

Emilio Ergueta

Voter Fraud Protection or Voter Suppression? | Talking Philosophy - 0 views

  • One essential aspect of a democracy is the right of each citizen to vote. This also includes the right to have her vote count. One aspect of protecting this right is to ensure that voter fraud does not occur.
  • This is because voter suppression can unjustly rob people of their votes.
  • However, the sincerity of a belief has no relevance to its truth. What matters are the reasons and evidence that support the belief. As such, I will look at the available evidence and endeavor to sort out the matter.
  • ...8 more annotations...
  • One Republican talking point is that voter fraud is widespread. For example, on April 7, 2014 Dick Morris claimed that over 1 million people voted twice in 2012. If this was true, then it would obviously be a serious matter: widespread voter fraud could change the results of elections and rob the legitimate voters of their right to decide
  • Settling this matter requires looking at the available facts. In regards to Dick Morris’ claim (which made the rounds as a conservative talking point), the facts show that it is false.
  • Republicans have argued for voter ID laws by contending that they will prevent fraud. However, investigation of voter fraud has shown only 31 credible cases out of one billion ballots. As such, this sort of fraud does occur—but only at an incredibly low rate.
  • One rather important matter is the moral issue of whether it is more important to prevent fraud or to prevent disenfranchisement.
  • In the United States, there is a presumption of innocence on the moral grounds that it is better that a guilty person goes free than an innocent person is unjustly punished.
  • Keith Bentele and Erin E. O’Brien published a study entitled “Jim Crow 2.0? Why States Consider and Adopt Restrictive Voter Access Policies.” Based on their analysis of the data, they concluded “the Republican Party has engaged in strategic demobilization efforts in response to changing demographics, shifting electoral fortunes, and an internal rightward ideological drift among the party faithful.”
  • One of the best-known methods proposed to counter voter fraud is the voter ID law. While, as shown above, the sort of fraud that would be prevented by these laws seems to occur 31 times per 1 billion ballots, it serves to disenfranchise voters. In Texas, 600,000-800,000 registered voters lack such IDs, with Hispanics being 40-120% more likely to lack an ID than whites.
  • It would seem that the laws and policies allegedly aimed at voter fraud would not reduce the existing fraud (which is already minuscule) and would have the effect of suppressing voters. As such, these laws and proposals fail to protect the rights of voters and instead are a violation of that basic right. In short, they are either a misguided and failed effort to prevent fraud or a wicked and potentially successful effort to suppress minority voters. Either way, these laws and policies are a violation of a fundamental right of the American democracy.
Javier E

Opinion | Cloning Scientist Hwang Woo-suk Gets a Second Chance. Should He? - The New Yo... - 0 views

  • The Hwang Woo-suk saga is illustrative of the serious deficiencies in the self-regulation of science. His fraud was uncovered because of brave Korean television reporters. Even those efforts might not have been enough, had Dr. Hwang’s team not been so sloppy in its fraud. The team’s papers included fabricated data and pairs of images that on close comparison clearly indicated duplicity.
  • Yet as a cautionary tale about the price of fraud, it is, unfortunately, a mixed bag. He lost his academic standing, and he was convicted of bioethical violations and embezzlement, but he never ended up serving jail time
  • Although his efforts at cloning human embryos ended in failure and fraud, they provided him the opportunities and resources he needed to take on projects, such as dog cloning, that were beyond the reach of other labs. The fame he earned in academia proved an asset in a business world where there’s no such thing as bad press.
  • ...3 more annotations...
  • it is comforting to think that scientific truth inevitably emerges and scientific frauds will be caught and punished.
  • Dr. Hwang’s scandal suggests something different. Researchers don’t always have the resources or motivation to replicate others’ experiments
  • Even if they try to replicate and fail, it is the institution where the scientist works that has the right and responsibility to investigate possible fraud. Research institutes and universities, facing the prospect of an embarrassing scandal, might not do so.
cvanderloo

Nearly 500 Charged With Coronavirus-Related Fraud In Past Year : NPR - 1 views

  • Call it a nasty side effect of the COVID-19 pandemic — the flare-up in fraud, scams and hoaxes as some people have tried to use the crisis to line their pockets illegally.
  • The grand total that fraudsters tried to scam from the government and the public in those cases is more than $569 million.
  • The department's efforts to target COVID-19-related fraud date back to last March, when then-Attorney General William Barr instructed federal prosecutors across the country to aggressively investigate and prosecute scams, price gouging and other coronavirus-related crimes.
  • ...5 more annotations...
  • One measure created was the Paycheck Protection Program, or PPP, which gives loans to businesses to keep employees on the payroll.
  • Economic Injury Disaster Loans, a program designed to provide loans to small businesses and agricultural entities, was also a target for fraud. The department said it has seized $580 million in proceeds so far from fraudulent loan applications.
  • Unemployment insurance — supplemental federal benefits worth $600 a week — also came online because of the CARES Act.
  • Most notable among these scams are the fake cures and treatments for COVID-19. These have ranged from industrial bleach to colloidal silver, sold as miracle cures or treatments for the virus.
  • According to the memo, $626 million in funds had been seized or forfeited due to civil and criminal investigation by the Justice Department involving the Economic Injury Disaster Loans and PPP measures. The subcommittee memo said that amounts to "less than 1% of the nearly $84 billion in potential fraud identified in these programs."
sissij

The Voter Fraud Fantasy - The New York Times - 0 views

  • Perhaps the most damaging was his insistence that millions of Americans voted illegally in the election he narrowly won.
  • What once seemed like another harebrained claim by a president with little regard for the truth must now be recognized as a real threat to American democracy.
  • That would allow state and national lawmakers to impose even tighter voting requirements, harming minorities, the young and the elderly, who tend to vote Democratic.
  •  
    Trump's voter fraud fantasy has been discussed frequently. In TOK class, I remember we once talked about an example of how government policy may affect the result of an election. For example, the rule that people need to have an ID to vote may target African-American voters who don't have an ID. Although voter fraud is alleged by election officials to be exceedingly rare, there are still a lot of factors that make a seemingly fair election biased. --Sissi (1/29/2017)
Javier E

The Data Vigilante - Christopher Shea - The Atlantic - 0 views

  • He is, on the contrary, seized by the conviction that science is beset by sloppy statistical maneuvering and, in some cases, outright fraud. He has therefore been moonlighting as a fraud-buster, developing techniques to help detect doctored data in other people’s research. Already, in the space of less than a year, he has blown up two colleagues’ careers.
  • In a paper called “False-Positive Psychology,” published in the prestigious journal Psychological Science, he and two colleagues—Leif Nelson, a professor at the University of California at Berkeley, and Wharton’s Joseph Simmons—showed that psychologists could all but guarantee an interesting research finding if they were creative enough with their statistics and procedures.
  • By going on what amounted to a fishing expedition (that is, by recording many, many variables but reporting only the results that came out to their liking); by failing to establish in advance the number of human subjects in an experiment; and by analyzing the data as they went, so they could end the experiment when the results suited them, they produced a howler of a result, a truly absurd finding. They then ran a series of computer simulations using other experimental data to show that these methods could increase the odds of a false-positive result—a statistical fluke, basically—to nearly two-thirds. (A toy simulation of these maneuvers follows this list.)
  • ...2 more annotations...
  • “I couldn’t tolerate knowing something was fake and not doing something about it,” he told me. “Everything loses meaning. What’s the point of writing a paper, fighting very hard to get it published, going to conferences?”
  • Simonsohn stressed that there’s a world of difference between data techniques that generate false positives, and fraud, but he said some academic psychologists have, until recently, been dangerously indifferent to both. Outright fraud is probably rare. Data manipulation is undoubtedly more common—and surely extends to other subjects dependent on statistical study, including biomedicine. Worse, sloppy statistics are “like steroids in baseball”: Throughout the affected fields, researchers who are too intellectually honest to use these tricks will publish less, and may perish. Meanwhile, the less fastidious flourish.
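
The statistical maneuvers described above are easy to reproduce. Below is a minimal Python sketch, not the authors' actual code, with invented parameters. It runs two of the moves on pure noise: recording several outcome variables but reporting whichever comes out significant, and peeking at the data so the experiment can stop as soon as a test clears p < .05. The nominal 5% false-positive rate inflates several-fold; the paper, with more researcher degrees of freedom, pushed it toward two-thirds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n_max=100, n_min=20, step=10, n_outcomes=3, alpha=0.05):
    """One two-group study on pure noise (no true effect), run with two
    questionable practices: multiple outcomes and optional stopping."""
    a = rng.normal(size=(n_max, n_outcomes))  # control group
    b = rng.normal(size=(n_max, n_outcomes))  # "treatment" group
    for n in range(n_min, n_max + 1, step):   # peek after every `step` subjects
        for k in range(n_outcomes):           # try every outcome variable
            p = stats.ttest_ind(a[:n, k], b[:n, k]).pvalue
            if p < alpha:
                return True                   # "significant" -> stop and report
    return False

n_sims = 2000
hits = sum(one_study() for _ in range(n_sims))
print(f"false-positive rate: {hits / n_sims:.1%}  (nominal: 5.0%)")
```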
Javier E

Doubts about Johns Hopkins research have gone unanswered, scientist says - The Washingt... - 0 views

  • Over and over, Daniel Yuan, a medical doctor and statistician, couldn’t understand the results coming out of the lab, a prestigious facility at Johns Hopkins Medical School funded by millions from the National Institutes of Health. He raised questions with the lab’s director. He reran the calculations on his own. He looked askance at the articles arising from the research, which were published in distinguished journals. He told his colleagues: This doesn’t make sense. “At first, it was like, ‘Okay — but I don’t really see it,’ ” Yuan recalled. “Then it started to smell bad.”
  • The passions of scientific debate are probably not much different from those that drive achievement in other fields, so a tragic, even deadly dispute might not be surprising. But science, creeping ahead experiment by experiment, paper by paper, depends also on institutions investigating errors and correcting them if need be, especially if they are made in its most respected journals. If the apparent suicide and Yuan’s detailed complaints provoked second thoughts about the Nature paper, though, there were scant signs of it. The journal initially showed interest in publishing Yuan’s criticism and told him that a correction was “probably” going to be written, according to e-mail records. That was almost six months ago. The paper has not been corrected. The university had already fired Yuan in December 2011, after 10 years at the lab. He had been raising questions about the research for years. He was escorted from his desk by two security guards.
  • Last year, research published in the Proceedings of the National Academy of Sciences found that the percentage of scientific articles retracted because of fraud had increased tenfold since 1975. The same analysis reviewed more than 2,000 retracted biomedical papers and found that 67 percent of the retractions were attributable to misconduct, mainly fraud or suspected fraud.
  • ...3 more annotations...
  • Fang said retractions may be rising because it is simply easier to cheat in an era of digital images, which can be easily manipulated. But he said the increase is caused at least in part by the growing competition for publication and for NIH grant money. He noted that in the 1960s, about two out of three NIH grant requests were funded; today, the success rate for applicants for research funding is about one in five. At the same time, getting work published in the most esteemed journals, such as Nature, has become a “fetish” for some scientists, Fang said.
  • many observers note that universities and journals, while sometimes agreeable to admitting small mistakes, are at times loath to reveal that the essence of published work was simply wrong.“The reader of scientific information is at the mercy of the scientific institution to investigate or not,” said Adam Marcus, who with Ivan Oransky founded the blog Retraction Watch in 2010. In this case, Marcus said, “if Hopkins doesn’t want to move, we may not find out what is happening for two or three years.”
  • The trouble is that a delayed response — or none at all — leaves other scientists to build upon shaky work. Fang said he has talked to researchers who have lost months by relying on results that proved impossible to reproduce. Moreover, as Marcus and Oransky have noted, much of the research is funded by taxpayers. Yet when retractions are done, they are done quietly and “live in obscurity,” meaning taxpayers are unlikely to find out that their money may have been wasted.
Javier E

Noted Dutch Psychologist, Stapel, Accused of Research Fraud - NYTimes.com - 0 views

  • A well-known psychologist in the Netherlands whose work has been published widely in professional journals falsified data and made up entire experiments, an investigating committee has found
  • Experts say the case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability.
  • In recent years, psychologists have reported a raft of findings on race biases, brain imaging and even extrasensory perception that have not stood up to scrutiny. Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged.
  • ...8 more annotations...
  • “The big problem is that the culture is such that researchers spin their work in a way that tells a prettier story than what they really found,” said Jonathan Schooler, a psychologist at the University of California, Santa Barbara. “It’s almost like everyone is on steroids, and to compete you have to take steroids as well.”
  • Dr. Stapel published papers on the effect of power on hypocrisy, on racial stereotyping and on how advertisements affect how people view themselves. Many of his findings appeared in newspapers around the world, including The New York Times, which reported in December on his study about advertising and identity.
  • Dr. Stapel was able to operate for so long, the committee said, in large measure because he was “lord of the data,” the only person who saw the experimental evidence that had been gathered (or fabricated). This is a widespread problem in psychology, said Jelte M. Wicherts, a psychologist at the University of Amsterdam. In a recent survey, two-thirds of Dutch research psychologists said they did not make their raw data available for other researchers to see. “This is in violation of ethical rules established in the field,” Dr. Wicherts said.
  • In a survey of more than 2,000 American psychologists scheduled to be published this year, Leslie John of Harvard Business School and two colleagues found that 70 percent had acknowledged, anonymously, to cutting some corners in reporting data. About a third said they had reported an unexpected finding as predicted from the start, and about 1 percent admitted to falsifying data.
  • Also common is a self-serving statistical sloppiness. In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.
  • an analysis of 49 studies appearing Wednesday in the journal PLoS One, by Dr. Wicherts, Dr. Bakker and Dylan Molenaar, found that the more reluctant that scientists were to share their data, the more likely that evidence contradicted their reported findings.
  • “We know the general tendency of humans to draw the conclusions they want to draw — there’s a different threshold,” said Joseph P. Simmons, a psychologist at the University of Pennsylvania’s Wharton School. “With findings we want to see, we ask, ‘Can I believe this?’ With those we don’t, we ask, ‘Must I believe this?’
Javier E

How Scientific Fraud Is Like Ponzi Finance - Edward Tenner - Business - The Atlantic - 0 views

  • scientific fraud sounds a lot like Madoff-style financial deception: both include social networking, stonewalling disclosure, indignation when questioned. The Ponzi schemer and data fabricator share with other forms of confidence artists a gift for recognizing the stories that people would like to hear,
charlottedonoho

Who's to blame when fake science gets published? - 1 views

  • The now-discredited study got headlines because it offered hope. It seemed to prove that our sense of empathy, our basic humanity, could overcome prejudice and bridge seemingly irreconcilable differences. It was heartwarming, and it was utter bunkum. The good news is that this particular case of scientific fraud isn't going to do much damage to anyone but the people who concocted and published the study. The bad news is that the alleged deception is a symptom of a weakness at the heart of the scientific establishment.
  • When it was published in Science magazine last December, the research attracted academic as well as media attention; it seemed to provide solid evidence that increasing contact between minority and majority groups could reduce prejudice.
  • But in May, other researchers tried to reproduce the study using the same methods, and failed. Upon closer examination, they uncovered a number of devastating "irregularities" - statistical quirks and troubling patterns - that strongly implied that the whole LaCour/Green study was based upon made-up data.
  • ...6 more annotations...
  • The data hit the fan, at which point Green distanced himself from the survey and called for the Science article to be retracted. The professor even told Retraction Watch, the website that broke the story, that all he'd really done was help LaCour write up the findings.
  • Science magazine didn't shoulder any blame, either. In a statement, editor in chief Marcia McNutt said the magazine was essentially helpless against the depredations of a clever hoaxer: "No peer review process is perfect, and in fact it is very difficult for peer reviewers to detect artful fraud."
  • This is, unfortunately, accurate. In a scientific collaboration, a smart grad student can pull the wool over his adviser's eyes - or vice versa. And if close collaborators aren't going to catch the problem, it's no surprise that outside reviewers dragooned into critiquing the research for a journal won't catch it either. A modern science article rests on a foundation of trust.
  • If the process can't catch such obvious fraud - a hoax the perpetrators probably thought wouldn't work - it's no wonder that so many scientists feel emboldened to sneak a plagiarised passage or two past the gatekeepers.
  • Major peer-review journals tend to accept big, surprising, headline-grabbing results when those are precisely the ones that are most likely to be wrong.
  • Despite the artful passing of the buck by LaCour's senior colleague and the editors of Science magazine, affairs like this are seldom truly the product of a single dishonest grad student. Scientific publishers and veteran scientists - even when they don't take an active part in deception - must recognise that they are ultimately responsible for the culture producing the steady drip-drip-drip of falsification, exaggeration and outright fabrication eroding the discipline they serve.
aprossi

474 charged with crimes related to theft of Covid relief funds - CNNPolitics - 0 views

  • Federal investigators identified more than half a billion dollars in fraud and charged 474 people with crimes related to theft of money from US Covid relief programs, the Justice Department announced Friday.
  • The announcement came one year after the passage of the $2 trillion economic aid legislation known as the CARES Act, which aimed to help people and businesses suffering financial losses in the coronavirus pandemic.
  • ...2 more annotations...
  • the Justice Department says it has charged 120 people with crimes related to PPP fraud.
  • In one Texas case, a man pleaded guilty to seeking $24.8 million in PPP loans using the names of 11 different companies to make loan applications to 11 lenders. He managed to obtain $17.3 million in forgivable loans and used the money to buy homes, jewelry and luxury cars.
Javier E

Did Francesca Gino and Dan Ariely Fabricate Data for the Same Study? - The Atlantic - 0 views

  • Had the doctoring been done by someone from the insurer, as Ariely implied? There didn’t seem to be a way to dispute that contention, and the company itself wasn’t saying much. Then, last week, NPR’s Planet Money delivered a scoop: The company, called The Hartford, informed the show that it had finally tracked down the raw numbers that were provided to Ariely—and that the data had been “manipulated inappropriately” in the published study.
  • The analysis of insurance data from The Hartford appeared as “Experiment 3” in the paper. On the preceding page, an analysis of a different dataset—the one linked to Gino—was written up as “Experiment 1.” The scientists who say they discovered issues with both experiments—Leif Nelson, Uri Simonsohn, and Joe Simmons—dubbed the apparent double fraud a “clusterfake.” When I spoke with the scientific-misconduct investigator and Atlantic contributor James Heathers, he had his own way of describing it: “This is some kind of mad, fraudulent unicorn.”
  • When they set about reviewing Ariely’s work on the 2012 paper, a few quirks in the car-insurance data tipped them off that something might be amiss. Some entries were in one font, some in another. Some were rounded to the nearest 500 or 1,000; some were not. But the detail that really caught their attention was the distribution of recorded values. With such a dataset, you’d expect to see the numbers fall in a bell curve—most entries bunched up near the mean, and the rest dispersed along the tapering extremes. But the data that Ariely said he’d gotten from the insurance company did not form a bell curve; the distribution was completely flat. Clients were just as likely to have claimed that they’d driven 1,000 miles as 10,000 or 50,000 miles. It’s “hard to know what the distribution of miles driven should look like in those data,” the scientists wrote. “It is not hard, however, to know what it should not look like.”
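
The distribution check that tipped off the investigators is simple enough to sketch. The toy Python below uses made-up numbers, not The Hartford's data: genuine-looking mileages bunch around a mean, while fabricated entries drawn uniformly across the allowed range are flat, and a Kolmogorov-Smirnov statistic measures each column's distance from a perfectly flat distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy stand-ins -- NOT the real insurance data. Plausible annual mileage
# bunches around a mean; fabricated entries drawn uniformly from the
# allowed range are equally likely anywhere in it.
genuine = rng.normal(loc=12_000, scale=4_000, size=5_000).clip(0, 50_000)
fabricated = rng.uniform(0, 50_000, size=5_000)

for name, miles in [("genuine-like", genuine), ("fabricated-like", fabricated)]:
    # KS distance from a uniform distribution on [0, 50k]: a value near
    # zero means "indistinguishable from flat" -- the red flag.
    ks = stats.kstest(miles, stats.uniform(loc=0, scale=50_000).cdf)
    print(f"{name:16s} KS distance from flat = {ks.statistic:.3f}")
```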
Javier E

Study Finds Misconduct Widespread in Retracted Scientific Papers - NYTimes.com - 0 views

  • Last year the journal Nature reported an alarming increase in the number of retractions of scientific papers — a tenfold rise in the previous decade, to more than 300 a year across the scientific literature.
  • two scientists and a medical communications consultant analyzed 2,047 retracted papers in the biomedical and life sciences. They found that misconduct was the reason for three-quarters of the retractions for which they could determine the cause. “We found that the problem was a lot worse than we thought,”
  • the rising rate of retractions reflects perverse incentives that drive scientists to make sloppy mistakes or even knowingly publish false data.
  • ...1 more annotation...
  • “It convinces me more that we have a problem in science,” he said. While the fraudulent papers may be relatively few, he went on, their rapid increase is a sign of a winner-take-all culture in which getting a paper published in a major journal can be the difference between heading a lab and facing unemployment. “Some fraction of people are starting to cheat,” he said.
Javier E

The post-truth world of the Trump administration is scarier than you think - The Washin... - 0 views

  • it’s time to cross another bridge — into a world without facts. Or, more precisely, where facts do not matter a whit.
  • “There’s no such thing, unfortunately, anymore, of facts,” she declared on “The Diane Rehm Show”
  • Hughes, a frequent surrogate for President-elect Donald Trump and a paid commentator for CNN during the campaign, kept on defending that assertion at length
  • ...8 more annotations...
  • What matters now, Hughes argued, is not whether his fraud claim is true. No, what matters is who believes it.
  • “You guys took everything that Donald Trump said so literally,” said Lewandowski, who was another ill-advised CNN hire. “The American people didn’t. They understood it. They understood that sometimes — when you have a conversation with people, whether it’s around the dinner table or at a bar — you’re going to say things, and sometimes you don’t have all the facts to back it up.”
  • two other Trump surrogates echoed this sentiment.
  • Ousted Trump campaign manager Corey Lewandowski, speaking during an election post-mortem at Harvard University’s Shorenstein Center on Media, Politics and Public Policy, blamed journalists for — yes — believing what his candidate said.
  • “Mr. Trump’s tweet, amongst a certain crowd, a large — a large part of the population, are truth. When he says that millions of people illegally voted, he has some — in his — amongst him and his supporters, and people believe they have facts to back that up. Those that do not like Mr. Trump, they say that those are lies, and there’s no facts to back it up.”
  • but Trump is not a guy at a bar; he was the Republican nominee for president of the United States and will pretty soon be the leader of the free world
  • When CNN’s Jake Tapper asked Trump senior adviser Kellyanne Conway about the same election-fraud claim discussed above — specifically, whether disseminating misinformation was “presidential”
  • “He’s the president-elect, so that’s presidential behavior,” Conway said, using mind-bending pseudo-logic
sissij

Fake Academe, Looking Much Like the Real Thing - The New York Times - 0 views

  • Academics need to publish in order to advance professionally, get better jobs or secure tenure.
  •  
    Academe is losing its meaning now because society only sees how many journal articles you have published, not what you actually write in them. I think the growing business of academic publication fraud reflects that our society values certificates more than skills. The numerous articles on those "good" colleges also put pressure on teenagers and parents, suggesting that a title means everything. However, that shouldn't be the core of education. There is never a shortcut to success. --Sissi (12/31/2016)
grayton downing

Accused "Fraudster" Heads Two Journals | The Scientist Magazine® - 0 views

  • Dmitry Kuznetsov, a Russian biochemist whose published work has been repeatedly alleged to be fraudulent, is now the chief editor of two science journals. The appointments are raising questions about the scientific integrity of the publications.
  • one of the worst fraud records in the history of science,” said Dan Larhammar, a professor at Uppsala University in Sweden who has written about problems in Kuznetsov's work. “That should be a major concern to” the publisher that recruited Kuznetsov as editor-in-chief, he said.
  • “As a result of these claims [by Kuznetsov and colleagues] a couple of students have spent several years of their life on a wild goose chase,” Coey told The Scientist.
  • ...3 more annotations...
  • researchers in various fields of study have also voiced their concerns about the quality of Kuznetsov’s research
  • the questions concerning Kuznetsov’s work, researchers who have published in one of the journals he edits, the British Journal of Medicine and Medical Research, report having no abnormal interactions with the editors during the review process. “I found my experience with that journal to be no different than with any other,”
  • Swanson said he has no way to judge the validity of the accusations against Kuznetsov, and that it would be unfair to jump to conclusions. If the allegations are true, however, “it certainly hurts the reputation of their journal, and I suspect they would rectify that problem (again, if there is truth here) fairly quickly. Most reputable researchers would not want to submit to such a journal.”
Javier E

For Scientists, an Exploding World of Pseudo-Academia - NYTimes.com - 0 views

  • a parallel world of pseudo-academia, complete with prestigiously titled conferences and journals that sponsor them. Many of the journals and meetings have names that are nearly identical to those of established, well-known publications and events.
  • the dark side of open access,” the movement to make scholarly publications freely available.
  • The number of these journals and conferences has exploded in recent years as scientific publishing has shifted from a traditional business model for professional societies and organizations built almost entirely on subscription revenues to open access, which relies on authors or their backers to pay for the publication of papers online, where anyone can read them.
  • ...2 more annotations...
  • Open access got its start about a decade ago and quickly won widespread acclaim with the advent of well-regarded, peer-reviewed journals like those published by the Public Library of Science, known as PLoS. Such articles were listed in databases like PubMed, which is maintained by the National Library of Medicine, and selected for their quality.
  • Jeffrey Beall, a research librarian at the University of Colorado in Denver, has developed his own blacklist of what he calls “predatory open-access journals.” There were 20 publishers on his list in 2010, and now there are more than 300. He estimates that there are as many as 4,000 predatory journals today, at least 25 percent of the total number of open-access journals.
Javier E

Write My Essay, Please! - Richard Gunderman - The Atlantic - 1 views

  • Why aren't the students who use these services crafting their own essays to begin with?
  • Here is where the real problem lies. The idea of paying someone else to do your work for you has become increasingly commonplace in our broader culture, even in the realm of writing. It is well known that many actors, athletes, politicians, and businesspeople have contracted with uncredited ghostwriters to produce their memoirs for them. There is no law against it.
  • At the same time, higher education has been transformed into an industry, another sphere of economic activity where goods and services are bought and sold. By this logic, a student who pays a fair market price for it has earned whatever grade it brings. In fact, many institutions of higher education market not the challenges provided by their course of study, but the ease with which busy students can complete it in the midst of other daily responsibilities.
  • ...2 more annotations...
  • ultimately, students who use essay-writing services are cheating no one more than themselves. They are depriving themselves of the opportunity to ask, "What new insights and perspectives might I gain in the process of writing this paper?" instead of "How can I check this box and get my credential?"
  • why stop with exams? Why not follow this path to its logical conclusion? If the entire course is online, why shouldn't students hire someone to enroll and complete all its requirements on their behalf? In fact, "Take-my-course.com" sites have already begun to appear. One site called My Math Genius promises to get customers a "guaranteed grade," with experts who will complete all assignments and "ace your final and midterm."
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • ...39 more annotations...
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. (A toy simulation of this publication filter follows this list.)
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
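
The decline effect's statistical backbone, regression to the mean filtered through publication bias, can be reproduced in a few lines. The Python sketch below uses invented parameters (a small true effect, modest sample sizes, and a rule that only significant positive results get published) and models no particular study: the first wave of published effect sizes comes out inflated, while the unfiltered pool of later studies averages back toward the true value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

TRUE_EFFECT = 0.2   # small real effect (Cohen's d), invented for the demo
N = 30              # subjects per group in every study

def run_study():
    """One two-group study; returns (estimated effect, p-value)."""
    a = rng.normal(0.0, 1.0, N)
    b = rng.normal(TRUE_EFFECT, 1.0, N)
    d = b.mean() - a.mean()          # effect estimate (sd = 1, so this ~ d)
    p = stats.ttest_ind(b, a).pvalue
    return d, p

studies = [run_study() for _ in range(5_000)]

# The filter: only significant, positive results reach the journals first.
published = [d for d, p in studies if p < 0.05 and d > 0]
everything = [d for d, p in studies]  # what later replications reveal

print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean published effect:     {np.mean(published):.2f}")   # inflated
print(f"mean effect, all studies:  {np.mean(everything):.2f}")  # ~ truth
print(f"share that got published:  {len(published) / len(studies):.0%}")
```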
kirkpatrickry

Making virtual real: How to hack human perception | The Drum - 0 views

  • Everything we do and see is real. It’s all experiential. Nobody has a dream and wakes up feeling like a victim of fraud. It really happened in the sense that we had a genuine dream. So when we talk about virtual reality, what is real and what is virtually real?
  • Suspension of disbelief, which we experienced as essential in all prior mediums, is not necessary in VR anymore. It’s what all prior mediums require and what VR, when done right, completely circumvents
  • Each and every facet of VR production relies on craft to properly convey an experience to the user and make it enjoyable and believable.