TOK Friends: Group items tagged "replication"

Javier E

Psychology's Replication Crisis Is Real, Many Labs 2 Says - The Atlantic - 1 views

  • In recent years, it has become painfully clear that psychology is facing a “reproducibility crisis,” in which even famous, long-established phenomena—the stuff of textbooks and TED Talks—might not be real
  • Ironically enough, it seems that one of the most reliable findings in psychology is that only half of psychological studies can be successfully repeated
  • That failure rate is especially galling, says Simine Vazire from the University of California at Davis, because the Many Labs 2 teams tried to replicate studies that had made a big splash and been highly cited
  • With 15,305 participants in total, the new experiments had, on average, 60 times as many volunteers as the studies they were attempting to replicate. The researchers involved worked with the scientists behind the original studies to vet and check every detail of the experiments beforehand. And they repeated those experiments many times over, with volunteers from 36 different countries, to see if the studies would replicate in some cultures and contexts but not others.
  • Despite the large sample sizes and the blessings of the original teams, the team failed to replicate half of the studies it focused on. It couldn’t, for example, show that people subconsciously exposed to the concept of heat were more likely to believe in global warming, or that moral transgressions create a need for physical cleanliness in the style of Lady Macbeth, or that people who grow up with more siblings are more altruistic.
  • Many Labs 2 “was explicitly designed to examine how much effects varied from place to place, from culture to culture,” says Katie Corker, the chair of the Society for the Improvement of Psychological Science. “And here’s the surprising result: The results do not show much variability at all.” If one of the participating teams successfully replicated a study, others did, too. If a study failed to replicate, it tended to fail everywhere. (A small simulation of this cross-site logic follows these excerpts.)
  • it’s a serious blow to one of the most frequently cited criticisms of the “reproducibility crisis” rhetoric. Surely, skeptics argue, it’s a fantasy to expect studies to replicate everywhere. “There’s a massive deference to the sample,” Nosek says. “Your replication attempt failed? It must be because you did it in Ohio and I did it in Virginia, and people are different. But these results suggest that we can’t just wave those failures away very easily.”
  • the lack of variation in Many Labs 2 is actually a positive thing. Sure, it suggests that the large number of failed replications really might be due to sloppy science. But it also hints that the fundamental business of psychology—creating careful lab experiments to study the tricky, slippery, complicated world of the human mind—works pretty well. “Outside the lab, real-world phenomena can and probably do vary by context,” he says. “But within our carefully designed studies and experiments, the results are not chaotic or unpredictable. That means we can do valid social-science research.”
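
The cross-site consistency described above is easy to illustrate with a simulation. This is a minimal sketch under invented assumptions (36 sites and 200 participants per group per site are made-up figures, not the project's): with samples this large, the site-to-site spread of effect estimates stays small whether the true effect is real or null, which is why a study that fails tends to fail everywhere.

    import numpy as np

    rng = np.random.default_rng(0)

    n_sites, n_per_group = 36, 200      # invented per-site sample sizes
    for true_d in (0.0, 0.4):           # a null effect vs. a real one
        estimates = []
        for _ in range(n_sites):
            control = rng.normal(0.0, 1.0, n_per_group)
            treated = rng.normal(true_d, 1.0, n_per_group)
            estimates.append(treated.mean() - control.mean())
        estimates = np.array(estimates)
        print(f"true d = {true_d:.1f}: mean site estimate = {estimates.mean():+.2f}, "
              f"between-site SD = {estimates.std():.2f}")

In both cases the between-site standard deviation comes out around 0.1, so a genuine effect replicates everywhere and a null one fails everywhere, just as Many Labs 2 found.
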
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts. (A simulation sketch of how publication bias alone can manufacture a decline effect follows these excerpts.)
  • One of John Ioannidis’s most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • Palmer suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel. (A short simulation of this funnel logic follows these excerpts.)
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study (in which genetically identical mice, tested with identical protocols in three different labs, behaved differently) is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
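
One way to see how publication bias alone can manufacture a decline effect is to simulate it. This is a minimal sketch, not the article's analysis; the true effect size, per-group sample size, and study count are illustrative assumptions. Journals "publish" only significant positive results, and exact replications of the published studies are then run with no selection.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    true_d, n = 0.2, 30                      # small true effect, small samples
    published, replications = [], []

    for _ in range(5000):
        a = rng.normal(0.0, 1.0, n)          # control group
        b = rng.normal(true_d, 1.0, n)       # treatment group
        _, p = stats.ttest_ind(b, a)
        d_hat = b.mean() - a.mean()
        if p < 0.05 and d_hat > 0:           # only significant positives get published
            published.append(d_hat)
            a2 = rng.normal(0.0, 1.0, n)     # exact replication, no selection
            b2 = rng.normal(true_d, 1.0, n)
            replications.append(b2.mean() - a2.mean())

    print(f"true effect:             {true_d:.2f}")
    print(f"mean published effect:   {np.mean(published):.2f}")    # inflated
    print(f"mean replication effect: {np.mean(replications):.2f}") # regresses to truth

The published average lands far above the true effect, and the replications "decline" back toward it, with no fraud anywhere in the pipeline.
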
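Palmer's funnel-graph argument can be sketched the same way. In this toy version (all numbers invented), the true effect is zero; a file-drawer rule suppresses small studies that fail to reach significance, and the surviving small studies skew positive, producing the lopsided funnel that flags selective reporting.

    import numpy as np

    rng = np.random.default_rng(2)

    true_effect = 0.0
    sizes = rng.integers(10, 500, 4000)            # studies of varying sample size
    estimates = rng.normal(true_effect, 1.0 / np.sqrt(sizes))

    # Selective reporting: small studies are kept only if "significant".
    reported = (sizes > 200) | (estimates > 1.96 / np.sqrt(sizes))

    for label, mask in [("all studies  ", np.ones_like(reported)),
                        ("reported only", reported)]:
        small = mask & (sizes < 100)
        print(f"{label}: mean effect in small studies = {estimates[small].mean():+.3f}")

Without selection the small studies average essentially zero; after selection they average well above it, even though nothing real is there.
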
kushnerha

New Critique Sees Flaws in Landmark Analysis of Psychology Studies - The New York Times - 0 views

  • A landmark 2015 report that cast doubt on the results of dozens of published psychology studies has exposed deep divisions in the field, serving as a reality check for many working researchers but as an affront to others who continue to insist the original research was sound.
  • On Thursday, a group of four researchers publicly challenged the report, arguing that it was statistically flawed and, as a result, wrong. The 2015 report, called the Reproducibility Project, found that fewer than 40 studies in a sample of 100 psychology papers in leading journals held up when retested by an independent team. The new critique by the four researchers countered that when that team’s statistical methodology was adjusted, the rate was closer to 100 percent. Neither the original analysis nor the critique found evidence of fraud or manipulation of data.
  • “That study got so much press, and the wrong conclusions were drawn from it,” said Timothy D. Wilson, a professor of psychology at the University of Virginia and an author of the new critique. “It’s a mistake to make generalizations from something that was done poorly, and this we think was done poorly.”
  • Brian Nosek, who led the 2015 project, countered that the critique was highly biased: “They are making assumptions based on selectively interpreting data and ignoring data that’s antagonistic to their point of view.”
  • The challenge comes as the field of psychology is facing a generational change, with young researchers beginning to share their data and study designs before publication, to improve transparency. Still, the new critique is likely to feed an already lively debate about how best to conduct and evaluate so-called replication projects of studies. Such projects are underway in several fields, scientists on both sides of the debate said.
  • “On some level, I suppose it is appealing to think everything is fine and there is no reason to change the status quo,” said Sanjay Srivastava, a psychologist at the University of Oregon, who was not a member of either team. “But we know too much, from many other sources, to put too much credence in an analysis that supports that remarkable conclusion.”
  • One issue the critique raised was how faithfully the replication team had adhered to the original design of the 100 studies it retested. Small alterations in design can make the difference between whether a study replicates or not, scientists say.
  • Another issue that the critique raised had to do with statistical methods. When Dr. Nosek began his study, there was no agreed-upon protocol for crunching the numbers. He and his team settled on five measures
  • Dr. Uri Simonsohn said that the original replication paper and the critique use statistical approaches that are “predictably imperfect” for this kind of analysis. One way to think about the dispute, Dr. Simonsohn said, is that the original paper found that the glass was about 40 percent full, and the critique argues that it could be 100 percent full. In fact, he said in an email, “State-of-the-art techniques designed to evaluate replications say it is 40 percent full, 30 percent empty, and the remaining 30 percent could be full or empty, we can’t tell till we get more data.”
Javier E

New Truths That Only One Can See - NYTimes.com - 1 views

  • Replication, the ability of another lab to reproduce a finding, is the gold standard of science, reassurance that you have discovered something true. But that is getting harder all the time.
  • With the most accessible truths already discovered, what remains are often subtle effects, some so delicate that they can be conjured up only under ideal circumstances, using highly specialized techniques.
  • Taking into account the human tendency to see what we want to see, unconscious bias is inevitable.
  • C. Glenn Begley and his colleagues could not replicate 47 of 53 landmark papers about cancer. Some of the results could not be reproduced even with the help of the original scientists working in their own labs.
  • Paradoxically the hottest fields, with the most people pursuing the same questions, are most prone to error, Dr. Ioannidis argued. If one of five competing labs is alone in finding an effect, that result is the one likely to be published. But there is a four in five chance that it is wrong. Papers reporting negative conclusions are more easily ignored. (A short sketch of this positive-predictive-value arithmetic follows these excerpts.)
  • The effect is amplified by competition for a shrinking pool of grant money and also by the design of so many experiments — with small sample sizes (cells in a lab dish or people in an epidemiological pool) and weak standards for what passes as statistically significant. That makes it all the easier to fool oneself.
  • The fear that much published research is tainted has led to proposals to make replication easier by providing more detailed documentation, including videos of difficult procedures.
  • A call for the establishment of independent agencies to replicate experiments has led to a backlash, a fear that perfectly good results will be thrown out.
  • Scientists talk about “tacit knowledge,” the years of mastery it can take to perform a technique. The image they convey is of an experiment as unique as a Rembrandt.
  • Embedded in the tacit knowledge may be barely perceptible tweaks and jostles — ways of unknowingly smuggling one’s expectations into the results, like a message coaxed from a Ouija board.
  • Exciting new results will continue to appear. But as the quarry becomes more elusive, the trophies are bound to be fewer and fewer.
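
The four-in-five arithmetic above is the heart of Dr. Ioannidis's positive-predictive-value argument: the probability that a "significant" finding is true depends on the prior odds that the hypothesis was right and on the study's power. A minimal sketch of that formula, with illustrative parameter values that are assumptions rather than numbers from the article:

    # PPV = (power * R) / (power * R + alpha), where R is the prior odds
    # that a tested hypothesis is true (Ioannidis, 2005).

    def ppv(prior_odds: float, power: float, alpha: float = 0.05) -> float:
        """Probability that a statistically significant claim is true."""
        true_positives = power * prior_odds
        false_positives = alpha
        return true_positives / (true_positives + false_positives)

    # A "hot" field: long-shot hypotheses (1 true per 10 tested), underpowered designs.
    print(f"hot field, low power:    {ppv(prior_odds=0.1, power=0.2):.0%}")
    # A cautious field: plausible hypotheses, well-powered studies.
    print(f"plausible, well-powered: {ppv(prior_odds=1.0, power=0.8):.0%}")

With long-shot hypotheses and weak designs, most significant findings are false; with plausible hypotheses and adequate power, most are true.
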
Javier E

Opinion | How Behavioral Economics Took Over America - The New York Times - 0 views

  • Some behavioral interventions do seem to lead to positive changes, such as automatically enrolling children in school free lunch programs or simplifying mortgage information for aspiring homeowners. (Whether one might call such interventions “nudges,” however, is debatable.)
  • it’s not clear we need to appeal to psychology studies to make some common-sense changes, especially since the scientific rigor of these studies is shaky at best.
  • Nudges are related to a larger area of research on “priming,” which tests how behavior changes in response to what we think about or even see without noticing
  • Behavioral economics is at the center of the so-called replication crisis, a euphemism for the uncomfortable fact that the results of a significant percentage of social science experiments can’t be reproduced in subsequent trials
  • this key result (that people subtly primed with words about old age walked more slowly afterward) was not replicated in similar experiments, undermining confidence in a whole area of study. It’s obvious that we do associate old age and slower walking, and we probably do slow down sometimes when thinking about older people. It’s just not clear that that’s a law of the mind.
  • And these attempts to “correct” human behavior are based on tenuous science. The replication crisis doesn’t have a simple solution
  • Journals have instituted reforms like having scientists preregister their hypotheses to avoid the possibility of results being manipulated during the research. But that doesn’t change how many uncertain results are already out there, with a knock-on effect that ripples through huge segments of quantitative social science.
  • The Johns Hopkins science historian Ruth Leys, author of a forthcoming book on priming research, points out that cognitive science is especially prone to building future studies off disputed results. Despite the replication crisis, these fields are a “train on wheels, the track is laid and almost nothing stops them,” Dr. Leys said.
  • These cases result from lax standards around data collection, which will hopefully be corrected. But they also result from strong financial incentives: the possibility of salaries, book deals and speaking and consulting fees that range into the millions. Researchers can get those prizes only if they can show “significant” findings.
  • It is no coincidence that behavioral economics, from Dr. Kahneman to today, tends to be pro-business. Science should be not just reproducible, but also free of obvious ideology.
  • Technology and modern data science have only further entrenched behavioral economics. Its findings have greatly influenced algorithm design.
  • The collection of personal data about our movements, purchases and preferences informs interventions in our behavior from the grocery store to who is arrested by the police.
  • Setting people up for safety and success and providing good default options isn’t bad in itself, but there are more sinister uses as well. After all, not everyone who wants to exploit your cognitive biases has your best interests at heart.
  • Despite all its flaws, behavioral economics continues to drive public policy, market research and the design of digital interfaces.
  • One might think that a kind of moratorium on applying such dubious science would be in order — except that enacting one would be practically impossible. These ideas are so embedded in our institutions and everyday life that a full-scale audit of the behavioral sciences would require bringing much of our society to a standstill.
  • There is no peer review for algorithms that determine entry to a stadium or access to credit. To perform even the most banal, everyday actions, you have to put implicit trust in unverified scientific results.
  • We can’t afford to defer questions about human nature, and the social and political policies that come from them, to commercialized “research” that is scientifically questionable and driven by ideology. Behavioral economics claims that humans aren’t rational.
  • That’s a philosophical claim, not a scientific one, and it should be fought out in a rigorous marketplace of ideas. Instead of unearthing real, valuable knowledge of human nature, behavioral economics gives us “one weird trick” to lose weight or quit smoking.
  • Humans may not be perfectly rational, but we can do better than the predictably irrational consequences that behavioral economics has left us with today.
Emily Freilich

Psychology Research Control - NYTimes.com - 0 views

  • goal-priming experiments are coming under scrutiny — and in the process, revealing a problem at the heart of psychological research itself.
  • people are fascinated by counterintuitive findings regarding human nature — who would think that reading seemingly incidental words would influence behavior? Also intriguing is the notion that we don’t have as much control of ourselves as we think.
  • Furthermore, goal priming carries an exculpatory whiff of “don’t blame me, blame my brain” — or better yet, “blame the world around me.”
  • researchers who have examined the method have found it wanting.
  • To be sure, a failure to replicate is not confined to psychology, as the Stanford biostatistician John P. A. Ioannidis documented in his much-discussed 2005 article “Why Most Published Research Findings Are False.”
  • in a variety of fields, subtle differences in protocols between the original study and the replication attempt may cause discrepant findings; even little tweaks in research design could matter a lot.
  • The larger issue, though, is that because relatively few replication studies appear in the academic literature, it is difficult to know why several seemingly comparable experiments yield conflicting results
  • publish-or-perish world offers little reward for researchers who spend precious time reproducing their own work or that of others. This is a problem for many fields, but particularly worrisome for psychology. The field is suffering a “crisis of confidence,” as Mr. Pashler put it, thanks to a glut of neat results that are long on mass appeal but short on scientific confirmation.
  • A group of psychologists established the Reproducibility Project, which aims to replicate the first 30 studies published in three high-profile psychology journals and to illuminate the extent to which studies fail when they are reproduced by a different set of researchers, the factors that predict a study’s reproducibility and, perhaps, the conditions under which the goal-priming effect, assuming it exists, is most robust.
Sophia C

When Studies Are Wrong: A Coda - NYTimes.com - 0 views

  • All scientific results are, of course, subject to revision and refutation by later experiments. The problem comes when these replications don’t occur and the information keeps spreading unchecked.
  • Based on the number of papers in major journals, Dr. Ioannidis estimates that the field accounts for some 50 percent of published research.
  • Together that constitutes most of scientific research. The remaining slice is physical science — everything from geology and climatology to cosmology and particle physics. These fields have not received the same kind of scrutiny as the others. Is that because they are less prone to the problems Dr. Ioannidis describes?
  • “This certainly increases the transparency, reliability and cross-checking of proposed research findings,” he wrote.
  • “There seems to be a higher community standard for ‘shaming’ reputations if people step out and make claims that are subsequently refuted.” Cold fusion was a notorious example. He also saw less of an aversion to publishing negative experimental results — that is, failed replications.
  • Almost anything might be suspected of causing cancer, but physicists are unlikely to propose conjectures that violate quantum mechanics or general relativity. But I’m not sure the difference is always that stark. Here is how I put it in my blog post:
  • “I have no doubt that false positives occur in all of these fields,” he concluded, “and occasionally they may be a major problem.” I’ll be looking further into this matter for a future column and would welcome comments from scientists about the situation in their own domain.
Javier E

Psychology Is Not in Crisis - The New York Times - 0 views

  • Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate
  • Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test.
  • Much of science still assumes that phenomena can be explained with universal laws and therefore context should not matter. But this is not how the world works. Even a simple statement like “the sky is blue” is true only at particular times of day, depending on the mix of molecules in the air as they reflect and scatter light, and on the viewer’s experience of color.
  • Science is not a body of facts that emerge, like an orderly string of light bulbs, to illuminate a linear path to universal truth. Rather, science (to paraphrase Henry Gee, an editor at Nature) is a method to quantify doubt about a hypothesis, and to find the contexts in which a phenomenon is likely. Failure to replicate is not a bug; it is a feature. It is what leads us along the path — the wonderfully twisty path — of scientific discovery.
Javier E

Opinion | Cloning Scientist Hwang Woo-suk Gets a Second Chance. Should He? - The New Yo... - 0 views

  • The Hwang Woo-suk saga is illustrative of the serious deficiencies in the self-regulation of science. His fraud was uncovered because of brave Korean television reporters. Even those efforts might not have been enough, had Dr. Hwang’s team not been so sloppy in its fraud. The team’s papers included fabricated data and pairs of images that on close comparison clearly indicated duplicity.
  • Yet as a cautionary tale about the price of fraud, it is, unfortunately, a mixed bag. He lost his academic standing, and he was convicted of bioethical violations and embezzlement, but he never ended up serving jail time
  • Although his efforts at cloning human embryos ended in failure and fraud, they provided him the opportunities and resources he needed to take on projects, such as dog cloning, that were beyond the reach of other labs. The fame he earned in academia proved an asset in a business world where there’s no such thing as bad press.
  • it is comforting to think that scientific truth inevitably emerges and scientific frauds will be caught and punished.
  • Dr. Hwang’s scandal suggests something different. Researchers don’t always have the resources or motivation to replicate others’ experiments
  • Even if they try to replicate and fail, it is the institution where the scientist works that has the right and responsibility to investigate possible fraud. Research institutes and universities, facing the prospect of an embarrassing scandal, might not do so.
Javier E

The Class Politics of Instagram Face - Tablet Magazine - 0 views

  • by approaching universality, Instagram Face actually secured its role as an instrument of class distinction—a mark of a certain kind of woman. The women who don’t mind looking like others, or the conspicuousness of the work they’ve had done
  • Instagram Face goes with implants, middle-aged dates and nails too long to pick up the check. Batting false eyelashes, there in the restaurant it orders for dinner all the food groups of nouveau riche Dubai: caviar, truffle, fillers, foie gras, Botox, bottle service, bodycon silhouettes. The look, in that restaurant and everywhere, has reached a definite status. It’s the girlfriend, not the wife.
  • Does cosmetic work have a particular class? It has a price tag, which can amount to the same thing, unless that price drops low enough.
  • Before the introduction of Botox and hyaluronic acid dermal fillers in 2002 and 2003, respectively, aesthetic work was serious, expensive. Nose jobs and face lifts required general anesthesia, not insignificant recovery time, and cost thousands of dollars (in 2000, a facelift was $5,416 on average, and a rhinoplasty $4,109, around $9,400 and $7,000 adjusted).
  • In contrast, the average price of a syringe of hyaluronic acid filler today is $684, while treating, for example, the forehead and eyes with Botox will put you out anywhere from $300 to $600
  • We copied the beautiful and the rich, not in facsimile, but in homage.
  • In 2018, use of Botox and fillers was up 18% and 20% from five years prior. Philosophies of prejuvenation have made Botox use jump 22% among 22- to 37-year-olds in half a decade as well. By 2030, global noninvasive aesthetic treatments are predicted to triple.
  • The trouble is that a status symbol, without status, is common.
  • Beauty has always been exclusive. When someone strikes you as pretty, it means they are something that everyone else is not.
  • It’s a zero-sum game, as relative as our morals. Naturally, we hoard of beauty what we can. It’s why we call grooming tips “secrets.”
  • Largely the secrets started with the wealthy, who possess the requisite money and leisure to spare on their appearances
  • Botox and filler only accelerated a trend that began in the ’70s and ’80s and is just now reaching its saturation point.
  • we didn’t have the tools for anything more than emulation. Fake breasts and overdrawn lips only approximated real ones; a birthmark drawn with pencil would always be just that.
  • Instagram Face, on the other hand, distinguishes itself by its sheer reproducibility. Not only because of those new cosmetic technologies, which can truly reshape features, at reasonable cost and with little risk.
  • built in to the whole premise of reversible, low-stakes modification is an indefinite flux, and thus a lack of discretion.
  • Instagram Face has replicated outward, with trendsetters giving up competing with one another in favor of looking eerily alike. And obviously it has replicated down.
  • Eva looks like Eva. If she has procedures in common with Kim K, you couldn’t tell. “I look at my features and I think long and hard of how I can, without looking different and while keeping as natural as possible, make them look better and more proportional. I’m against everything that is too invasive. My problem with Instagram Face is that if you want to look like someone else, you should be in therapy.”
  • natural looks have always been, and still are, more valuable than artificial ones. Partly because of our urge to legitimize in any way we can the advantages we have over other people. Hotness is a class struggle.
  • As more and more women post videos of themselves eating, sleeping, dressing, dancing, and Only-Fanning online, in a logical bid for economic ascendance, the women who haven’t needed to do that gain a new status symbol.
  • Privacy. A life which is not a ticketed show. An intimacy that does not admit advertisers. A face that does not broadcast its insecurity, or the work undergone to correct it.
  • Upper-class, private women get discreet work done. The differences aren’t in the procedures themselves—they’re the same—but in disposition
  • Eva, who lives between central London, Geneva, and the south of France, says: “I do stuff, but none of the stuff I do is at all in my head associated with Instagram Face. Essentially you do similar procedures, but the end goal is completely different. Because they are trying to get the result of looking like another human being, and I’m just beautifying myself.”
  • But the more rapidly it replicates, and the clearer our manuals for quick imitation become, the closer we get to singularity—that moment Kim Kardashian fears unlike any other: the moment when it becomes unclear whether we’re copying her, or whether she is copying us.
  • what he restores is complicated and yet not complicated at all. It’s herself, the fingerprint of her features. Her aura, her presence and genealogy, her authenticity in space and time.
  • Dr. Taktouk’s approach is “not so formulaic.” He aims to give his patients the “better versions of themselves.” “It’s not about trying to be anyone else,” he says, “or creating a conveyor belt of patients. It’s about working with your best features, enhancing them, but still looking like you.”
  • “Vulgar” says that in pursuing indistinguishability, women have been duped into another punishing divide. “Vulgar” says that the subtlety of his work is what signals its special class—and that the women who’ve obtained Instagram Face for mobility’s sake have unwittingly shut themselves out of it.
  • While younger women are dissolving their gratuitous work, the 64-year-old Madonna appeared at the Grammy Awards in early February, looking so tragically unlike herself that the internet launched an immediate postmortem.
  • The folly of Instagram Face is that in pursuing a bionic ideal, it turns cosmetic technology away from not just the reality of class and power, but also the great, poignant, painful human project of trying to reverse time. It misses the point of what we find beautiful: that which is ephemeral, and can’t be reproduced
  • Age is just one of the hierarchies Instagram Face can’t topple, in the history of women striving versus the women already arrived. What exactly have they arrived at?
  • Youth, temporarily. Wealth. Emotional security. Privacy. Personal choices, like cosmetic decisions, which are not so public, and do not have to be defended as empowered, in the defeatist humiliation of our times
  • Maybe they’ve arrived at love, which for women has never been separate from the things I’ve already mentioned.
  • I can’t help but recall the time I was chatting with a plastic surgeon. I began to point to my features, my flaws. I asked her, “What would you do to me, if I were your patient?” I had many ideas. She gazed at me, and then noticed my ring. “Nothing,” she said. “You’re already married.”
Javier E

The Excel Depression - NYTimes.com - 0 views

  • the paper instantly became famous; it was, and is, surely the most influential economic analysis of recent years.
  • In fact, Reinhart-Rogoff quickly achieved almost sacred status among self-proclaimed guardians of fiscal responsibility; their tipping-point claim was treated not as a disputed hypothesis but as unquestioned fact.
  • another problem emerged: Other researchers, using seemingly comparable data on debt and growth, couldn’t replicate the Reinhart-Rogoff results.
  • the truth is that Reinhart-Rogoff faced substantial criticism from the start, and the controversy grew over time. As soon as the paper was released, many economists pointed out that a negative correlation between debt and economic performance need not mean that high debt causes low growth. It could just as easily be the other way around
  • Finally, Ms. Reinhart and Mr. Rogoff allowed researchers at the University of Massachusetts to look at their original spreadsheet — and the mystery of the irreproducible results was solved. First, they omitted some data; second, they used unusual and highly questionable statistical procedures; and finally, yes, they made an Excel coding error. (A toy illustration of how the first two problems move a headline number follows these excerpts.)
  • Correct these oddities and errors, and you get what other researchers have found: some correlation between high debt and slow growth, with no indication of which is causing which, but no sign at all of that 90 percent “threshold.”
  • the Reinhart-Rogoff fiasco needs to be seen in the broader context of austerity mania: the obviously intense desire of policy makers, politicians and pundits across the Western world to turn their backs on the unemployed and instead use the economic crisis as an excuse to slash social programs.
  • What the Reinhart-Rogoff affair shows is the extent to which austerity has been sold on false pretenses. For three years, the turn to austerity has been presented not as a choice but as a necessity. Economic research, austerity advocates insisted, showed that terrible things happen once debt exceeds 90 percent of G.D.P. But “economic research” showed no such thing; a couple of economists made that assertion, while many others disagreed. Policy makers abandoned the unemployed and turned to austerity because they wanted to, not because they had to.
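
The mechanics of that correction are easy to illustrate. All growth figures below are made up (this is not the Reinhart-Rogoff dataset); the point is only how dropped rows and unusual weighting can change the conclusion drawn from the same observations.

    import pandas as pd

    # Made-up country-year GDP growth in a high-debt bucket: country A has
    # nineteen ordinary years, country B one catastrophic year.
    data = pd.DataFrame({
        "country": ["A"] * 19 + ["B"],
        "growth":  [2.5] * 19 + [-7.6],
    })

    # Pooling all country-years: one bad year barely matters.
    print(f"pooled mean growth:    {data['growth'].mean():+.2f}%")

    # Weighting countries equally (average within country, then across):
    # B's lone observation now counts as much as A's nineteen.
    per_country = data.groupby("country")["growth"].mean()
    print(f"country-weighted mean: {per_country.mean():+.2f}%")

    # A spreadsheet range error that drops country A leaves only the outlier.
    print(f"after dropping A:      {data.loc[data['country'] == 'B', 'growth'].mean():+.2f}%")

The same twenty observations yield +2.00 percent, -2.55 percent, or -7.60 percent depending on bookkeeping choices, which is why other researchers could not reproduce the headline result.
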
Javier E

I.Q. Points for Sale, Cheap - NYTimes.com - 1 views

  • Until recently, the overwhelming consensus in psychology was that intelligence was essentially a fixed trait. But in 2008, an article by a group of researchers led by Susanne Jaeggi and Martin Buschkuehl challenged this view and renewed many psychologists’ enthusiasm about the possibility that intelligence was trainable — with precisely the kind of tasks that are now popular as games.
  • it’s important to explain why we’re not sold on the idea.
  • There have been many attempts to demonstrate large, lasting gains in intelligence through educational interventions, with few successes. When gains in intelligence have been achieved, they have been modest and the result of many years of effort.
  • Web site PsychFileDrawer.org, which was founded as an archive for failed replication attempts in psychological research, maintains a Top 20 list of studies that its users would like to see replicated. The Jaeggi study is currently No. 1.
  • Another reason for skepticism is a weakness in the Jaeggi study’s design: it included only a single test of reasoning to measure gains in intelligence.
  • Demonstrating that subjects are better on one reasoning test after cognitive training doesn’t establish that they’re smarter. It merely establishes that they’re better on one reasoning test.
Sophia C

New Truths That Only One Can See - NYTimes.com - 0 views

  • A reproducible result may actually be the rarest of birds. Replication, the ability of another lab to reproduce a finding, is the gold standard of science, reassurance that you have discovered something true
  • With the most accessible truths already discovered, what remains are often subtle effects, some so delicate that they can be conjured up only under ideal circumstances, using highly specialized techniques.
  • Many hypotheses already start with a high chance of being wrong
  • Taking into account the human tendency to see what we want to see, unconscious bias is inevitable. Without any ill intent, a scientist may be nudged toward interpreting the data so it supports the hypothesis, even if just barely.
  • He found that a large proportion of the conclusions were undermined or contradicted by later studies.
  • C. Glenn Begley and his colleagues could not replicate 47 of 53 landmark papers about cancer
  • Researchers deeply believed that their findings were true. But that is the problem. The more passionate scientists are about their work, the more susceptible they are to bias
  • “The slightest shift in their microenvironment can alter the results — something a newcomer might not spot. It is common for even a seasoned scientist to struggle with cell lines and culture conditions, and unknowingly introduce changes that will make it seem that a study cannot be reproduced.”
Javier E

What Would Plato Tweet? - NYTimes.com - 1 views

  • In a mere couple of centuries, Greek speakers went from anomie and illiteracy, lacking even an alphabet, to Aeschylus and Aristotle. They invented not only the discipline of philosophy, but also science, mathematics, the study of history (as opposed to mere chronicles) and that special form of government they called democracy — literally rule of the people (though it goes without saying that “the people” didn’t include women and slaves). They also produced timeless art, architecture, poetry and drama.
  • The more outstanding you were, the more mental replication of you there would be, and the more replication, the more you mattered.
  • Kleos lay very near the core of the Greek value system. Their value system was at least partly motivated, as perhaps all value systems are partly motivated, by the human need to feel as if our lives matter
  • what they wanted was the attention of other mortals. All that we can do to enlarge our lives, they concluded, is to strive to make of them things worth the telling
  • Greek philosophy also represented a departure from its own culture. Mattering wasn’t acquired by gathering attention of any kind, mortal or immortal. Acquiring mattering was something people had to do for themselves, cultivating such virtuous qualities of character as justice and wisdom. They had to put their own souls in order.
  • what the Greeks had called kleos. The word comes from the old Homeric word for “I hear,” and it meant a kind of auditory renown. Vulgarly speaking, it was fame. But it also could mean the glorious deed that merited the fame, as well as the poem that sang of the deed and so produced the fame.
  • the one and only God, the Master of the Universe, providing the foundation for both the physical world without and the moral world within. From his position of remotest transcendence, this god nevertheless maintains a rapt interest in human concerns, harboring many intentions directed at us, his creations, who embody nothing less than his reasons for going to the trouble of creating the world ex nihilo
  • the Ivrim, the Hebrews, apparently from their word for “over,” since they were over on the other side of the Jordan
  • Over the centuries, philosophy, perhaps aided by religion, learned to abandon entirely the flawed Greek presumption that only extraordinary lives matter. This was progress of the philosophical variety, subtler than the dazzling triumphs of science, but nevertheless real. Philosophy has laboriously put forth arguments that have ever widened the sphere of mattering.
  • We’ve come a long way from the kleos of Greeks, with its unexamined presumption that mattering is inequitably distributed among us, with the multireplicated among us mattering more.
  • our culture has, with the dwindling of theism, returned to the answer to the problem of mattering that Socrates and Plato judged woefully inadequate.
Javier E

Is Science Kind of a Scam? - The New Yorker - 1 views

  • No well-tested scientific concept is more astonishing than the one that gives its name to a new book by the Scientific American contributing editor George Musser, “Spooky Action at a Distance
  • The ostensible subject is the mechanics of quantum entanglement; the actual subject is the entanglement of its observers.
  • his question isn’t so much how this weird thing can be true as why, given that this weird thing had been known about for so long, so many scientists were so reluctant to confront it. What keeps a scientific truth from spreading?
  • it is as if two magic coins, flipped at different corners of the cosmos, always came up heads or tails together. (The spooky action takes place only in the context of simultaneous measurement. The particles share states, but they don’t send signals.)
  • fashion, temperament, zeitgeist, and sheer tenacity affected the debate, along with evidence and argument.
  • The certainty that spooky action at a distance takes place, Musser says, challenges the very notion of “locality,” our intuitive sense that some stuff happens only here, and some stuff over there. What’s happening isn’t really spooky action at a distance; it’s spooky distance, revealed through an action.
  • Why, then, did Einstein’s question get excluded for so long from reputable theoretical physics? The reasons, unfolding through generations of physicists, have several notable social aspects,
  • What started out as a reductio ad absurdum became proof that the cosmos is in certain ways absurd. What began as a bug became a feature and is now a fact.
  • “If poetry is emotion recollected in tranquility, then science is tranquility recollected in emotion.” The seemingly neutral order of the natural world becomes the sounding board for every passionate feeling the physicist possesses.
  • Musser explains that the big issue was settled mainly by being pushed aside. Generational imperatives trumped evidentiary ones. The things that made Einstein the lovable genius of popular imagination were also the things that made him an easy object of condescension. The hot younger theorists patronized him,
  • There was never a decisive debate, never a hallowed crucial experiment, never even a winning argument to settle the case, with one physicist admitting, “Most physicists (including me) accept that Bohr won the debate, although like most physicists I am hard pressed to put into words just how it was done.”
  • Arguing about non-locality went out of fashion, in this account, almost the way “Rock Around the Clock” displaced Sinatra from the top of the charts.
  • The same pattern of avoidance and talking-past and taking on the temper of the times turns up in the contemporary science that has returned to the possibility of non-locality.
  • the revival of “non-locality” as a topic in physics may be due to our finding the metaphor of non-locality ever more palatable: “Modern communications technology may not technically be non-local but it sure feels that it is.”
  • Living among distant connections, where what happens in Bangalore happens in Boston, we are more receptive to the idea of such a strange order in the universe.
  • The “indeterminacy” of the atom was, for younger European physicists, “a lesson of modernity, an antidote to a misplaced Enlightenment trust in reason, which German intellectuals in the 1920’s widely held responsible for their country’s defeat in the First World War.” The tonal and temperamental difference between the scientists was as great as the evidence they called on.
  • Science isn’t a slot machine, where you drop in facts and get out truths. But it is a special kind of social activity, one where lots of different human traits—obstinacy, curiosity, resentment of authority, sheer cussedness, and a grudging readiness to submit pet notions to popular scrutiny—end by producing reliable knowledge
  • What was magic became mathematical and then mundane. “Magical” explanations, like spooky action, are constantly being revived and rebuffed, until, at last, they are reinterpreted and accepted. Instead of a neat line between science and magic, then, we see a jumpy, shifting boundary that keeps getting redrawn
  • Real-world demarcations between science and magic, Musser’s story suggests, are like Bugs’s: made on the move and as much a trap as a teaching aid.
  • In the past several decades, certainly, the old lines between the history of astrology and astronomy, and between alchemy and chemistry, have been blurred; historians of the scientific revolution no longer insist on a clean break between science and earlier forms of magic.
  • Where once logical criteria between science and non-science (or pseudo-science) were sought and taken seriously—Karl Popper’s criterion of “falsifiability” was perhaps the most famous, insisting that a sound theory could, in principle, be proved wrong by one test or another—many historians and philosophers of science have come to think that this is a naïve view of how the scientific enterprise actually works.
  • They see a muddle of coercion, old magical ideas, occasional experiment, hushed-up failures—all coming together in a social practice that gets results but rarely follows a definable logic.
  • Yet the old notion of a scientific revolution that was really a revolution is regaining some credibility.
  • David Wootton, in his new, encyclopedic history, “The Invention of Science” (Harper), recognizes the blurred lines between magic and science but insists that the revolution lay in the public nature of the new approach.
  • What killed alchemy was the insistence that experiments must be openly reported in publications which presented a clear account of what had happened, and they must then be replicated, preferably before independent witnesses.
  • Wootton, while making little of Popper’s criterion of falsifiability, makes it up to him by borrowing a criterion from his political philosophy. Scientific societies are open societies. One day the lunar tides are occult, the next day they are science, and what changes is the way in which we choose to talk about them.
  • Wootton also insists, against the grain of contemporary academia, that single observed facts, what he calls “killer facts,” really did polish off antique authorities
  • once we agree that the facts are facts, they can do amazing work. Traditional Ptolemaic astronomy, in place for more than a millennium, was destroyed by what Galileo discovered about the phases of Venus. That killer fact “serves as a single, solid, and strong argument to establish its revolution around the Sun, such that no room whatsoever remains for doubt,” Galileo wrote, and Wootton adds, “No one was so foolish as to dispute these claims.”
  • Several things flow from Wootton’s view. One is that “group think” in the sciences is often true think. Science has always been made in a cloud of social networks.
  • There has been much talk in the pop-sci world of “memes”—ideas that somehow manage to replicate themselves in our heads. But perhaps the real memes are not ideas or tunes or artifacts but ways of making them—habits of mind rather than products of mind
  • Is science, then, a club like any other, with fetishes and fashions, with schemers, dreamers, and blackballed applicants? Is there a real demarcation to be made between science and every other kind of social activity?
  • The claim that basic research is valuable because it leads to applied technology may be true but perhaps is not at the heart of the social use of the enterprise. The way scientists do think makes us aware of how we can think
Javier E

Five months on, what scientists now know about the coronavirus | World news | The Guardian - 0 views

  • The Sars-CoV-2 virus almost certainly originated in bats, which have evolved fierce immune responses to viruses, researchers have discovered. These defences drive viruses to replicate faster so that they can get past bats’ immune defences. In turn, that transforms the bat into a reservoir of rapidly reproducing and highly transmissible viruses
  • “This virus probably jumped from a bat into another animal, and that other animal was probably near a human, maybe in a market,
  • Virus-ridden particles are inhaled by others and come into contact with cells lining the throat and larynx. These cells have large numbers of receptors – known as Ace-2 receptors – on their surfaces. (Cell receptors play a key role in passing chemicals into cells and in triggering signals between cells.)
  • ...19 more annotations...
  • “This virus has a surface protein that is primed to lock on that receptor and slip its RNA into the cell,”
  • Once inside, that RNA inserts itself into the cell’s own replication machinery and makes multiple copies of the virus. These burst out of the cell, and the infection spreads. Antibodies generated by the body’s immune system eventually target the virus and in most cases halt its progress.
  • “A Covid-19 infection is generally mild, and that really is the secret of the virus’s success,” adds Ball. “Many people don’t even notice they have got an infection and so go around their work, homes and supermarkets infecting others.”
  • the virus can cause severe problems. This happens when it moves down the respiratory tract and infects the lungs, which are even richer in cells with Ace-2 receptors. Many of these cells are destroyed, and lungs become congested with bits of broken cell. In these cases, patients will require treatment in intensive care.
  • Even worse, in some cases, a person’s immune system goes into overdrive, attracting cells to the lungs in order to attack the virus, resulting in inflammation
  • This process can run out of control, more immune cells pour in, and the inflammation gets worse. This is known as a cytokine storm.
  • Just why cytokine storms occur in some patients but not in the vast majority is unclear
  • Doctors examining patients recovering from a Covid-19 infection are finding fairly high levels of neutralising antibodies in their blood. These antibodies are made by the immune system, and they coat an invading virus at specific points, blocking its ability to break into cells.
  • Most virologists believe that immunity against Covid-19 will last only a year or two. “That is in line with other coronaviruses that infect humans.”
  • “It is clear that immune responses are being mounted against Covid-19 in infected people,” says virologist Mike Skinner of Imperial College London. “And the antibodies created by that response will provide protection against future infections – but we should note that it is unlikely this protection will be for life.”
  • “That means that even if most people do eventually become exposed to the virus, it is still likely to become endemic – which means we would see seasonal peaks of infection of this disease. We will have reached a steady state with regard to Covid-19.”
  • Skinner is doubtful. “We have got to consider this pandemic from the virus’s position,” he says. “It is spreading round the world very nicely. It is doing OK. Change brings it no benefit.”
  • In the end, it will be the development and roll-out of an effective vaccine that will free us from the threat of Covid-19.
  • the journal Nature reported that 78 vaccine projects had been launched round the globe – with a further 37 in development.
  • Vaccines require large-scale safety and efficacy studies: thousands of people receive either the vaccine or a placebo, to determine whether the vaccine prevents infections that participants would have encountered naturally. That, inevitably, is a lengthy process. (A sketch of the underlying arithmetic follows this list.)
  • some scientists have proposed a way to speed up the process – by deliberately exposing volunteers to the virus to determine a vaccine’s efficacy.
  • Volunteers would have to be young and healthy, he stresses: “Their health would also be closely monitored, and they would have access to intensive care and any available medicines.”
  • The result could be a vaccine that would save millions of lives by being ready for use in a much shorter time than one that went through standard phase three trials.
  • “Phase-three trials are still some way off, so we have time to consider the idea carefully.”
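To see why such trials need thousands of participants, here is a minimal sketch of the underlying arithmetic in Python (the trial numbers and the helper name vaccine_efficacy are hypothetical, chosen for illustration): efficacy is estimated as one minus the relative risk of infection in the vaccine arm versus the placebo arm, with a standard Katz log-based 95% confidence interval.

    # A minimal sketch with made-up numbers: vaccine efficacy estimated as
    # VE = 1 - relative risk, with a Katz log-based 95% confidence interval.
    import math

    def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
        """Return (point estimate, CI lower, CI upper) as fractions."""
        rr = (cases_vax / n_vax) / (cases_placebo / n_placebo)  # relative risk
        # Standard error of log(RR) for two independent binomial samples.
        se = math.sqrt(1/cases_vax - 1/n_vax + 1/cases_placebo - 1/n_placebo)
        lo = math.exp(math.log(rr) - 1.96 * se)
        hi = math.exp(math.log(rr) + 1.96 * se)
        return 1 - rr, 1 - hi, 1 - lo  # higher RR bound -> lower VE bound

    # Hypothetical trial: 15,000 volunteers per arm, 30 vs 100 infections.
    ve, lo, hi = vaccine_efficacy(30, 15_000, 100, 15_000)
    print(f"efficacy {ve:.0%} (95% CI {lo:.0%} to {hi:.0%})")  # ~70% (55%-80%)

Because only a small fraction of volunteers ever encounter the virus naturally, large arms are needed before enough infections accumulate to narrow that interval; this is the delay that challenge trials are meant to shortcut.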
oliviaodon

How scientists fool themselves - and how they can stop : Nature News & Comment - 1 views

  • In 2013, five years after he co-authored a paper showing that Democratic candidates in the United States could get more votes by moving slightly to the right on economic policy1, Andrew Gelman, a statistician at Columbia University in New York City, was chagrined to learn of an error in the data analysis. In trying to replicate the work, an undergraduate student named Yang Yang Hu had discovered that Gelman had got the sign wrong on one of the variables.
  • Gelman immediately published a three-sentence correction, declaring that everything in the paper's crucial section should be considered wrong until proved otherwise.
  • Reflecting today on how it happened, Gelman traces his error back to the natural fallibility of the human brain: “The results seemed perfectly reasonable,” he says. “Lots of times with these kinds of coding errors you get results that are just ridiculous. So you know something's got to be wrong and you go back and search until you find the problem. If nothing seems wrong, it's easier to miss it.”
  • ...6 more annotations...
  • This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.
  • Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results
  • Although it is impossible to document how often researchers fool themselves in data analysis, says Ioannidis, findings of irreproducibility beg for an explanation. The study of 100 psychology papers is a case in point: if one assumes that the vast majority of the original researchers were honest and diligent, then a large proportion of the problems can be explained only by unconscious biases. “This is a great time for research on research,” he says. “The massive growth of science allows for a massive number of results, and a massive number of errors and biases to study. So there's good reason to hope we can find better ways to deal with these problems.”
  • Although the human brain and its cognitive biases have been the same for as long as we have been doing science, some important things have changed, says psychologist Brian Nosek, executive director of the non-profit Center for Open Science in Charlottesville, Virginia, which works to increase the transparency and reproducibility of scientific research. Today's academic environment is more competitive than ever. There is an emphasis on piling up publications with statistically significant results — that is, with data relationships in which a commonly used measure of statistical certainty, the p-value, is 0.05 or less. “As a researcher, I'm not trying to produce misleading results,” says Nosek. “But I do have a stake in the outcome.” And that gives the mind excellent motivation to find what it is primed to find.
  • Another reason for concern about cognitive bias is the advent of staggeringly large multivariate data sets, often harbouring only a faint signal in a sea of random noise. Statistical methods have barely caught up with such data, and our brain's methods are even worse, says Keith Baggerly, a statistician at the University of Texas MD Anderson Cancer Center in Houston. As he told a conference on challenges in bioinformatics last September in Research Triangle Park, North Carolina, “Our intuition when we start looking at 50, or hundreds of, variables sucks.” (The short simulation after this list makes the point concrete.)
  • One trap that awaits during the early stages of research is what might be called hypothesis myopia: investigators fixate on collecting evidence to support just one hypothesis; neglect to look for evidence against it; and fail to consider other explanations.
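To make Baggerly's and Nosek's points concrete, here is a minimal simulation sketch in Python (illustrative only, not from the article): it correlates a purely random outcome with 100 purely random predictor variables and counts how many clear the conventional p < 0.05 bar. By construction, about five will.

    # A minimal sketch: pure noise reliably produces "significant" findings
    # when many variables are tested against the p < 0.05 threshold.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_variables = 50, 100

    outcome = rng.normal(size=n_subjects)                    # random outcome
    predictors = rng.normal(size=(n_subjects, n_variables))  # random predictors

    hits = []
    for i in range(n_variables):
        r, p = stats.pearsonr(predictors[:, i], outcome)
        if p < 0.05:
            hits.append((i, r, p))

    for i, r, p in hits:
        print(f"variable {i:3d}: r = {r:+.2f}, p = {p:.3f}  <- looks 'significant'")
    print(f"{len(hits)} of {n_variables} pure-noise variables passed p < 0.05")
    # About 5 false positives are expected by construction; a researcher
    # primed to find an effect, reporting only the hits, is fooled by noise.

A researcher who pre-registers which variable will be tested, or who corrects for the number of comparisons made, removes exactly this temptation.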
Javier E

I Sent All My Text Messages in Calligraphy for a Week - Cristina Vanko - The Atlantic - 2 views

  • I decided to blend a newfound interest in calligraphy with my lifelong passion for written correspondence to create a new kind of text messaging. The idea: I wanted to message friends using calligraphic texts for one week. The average 18-to-24-year-old sends and receives something like 4,000 messages a month, including more than 500 sent texts a week, according to Experian. The week of my experiment, I sent only 100.
  • Before I started, I established rules for myself: I could create only handwritten text messages for seven days, absolutely no using my phone’s keyboard. I had to write out my messages on paper, photograph them, then hit “send.” I didn’t reveal my plan to my friends unless asked
  • That week, the sense of urgency I normally felt about my phone virtually vanished. It was like the days when texts were rationed, and I felt no anxiety about “read” receipts. I didn’t feel naked without my phone on me every moment.
  • ...10 more annotations...
  • So while the experiment began as an exercise to learn calligraphy, it doubled as a useful sort of digital detox that revealed my relationship with technology. Here's what I learned:
  • Receiving handwritten messages made people feel special. The awesome feeling of receiving personalized mail really can be replicated with a handwritten text.
  • Handwriting allows for more self-expression. I found I could give words a certain flourish to mimic the intonation of spoken language. Expressing myself via handwriting could also give the illusion of real-time presence. One friend told me, “it’s like you’re here with us!”
  • We are a youth culture that relies heavily on emojis. I didn’t realize how much I depend on emojis and emoticons to express myself until I didn’t have them. Hand-drawn emoticons, though original, just aren’t the same: I couldn’t render them with the cleanliness of a typeface, and sketching emojis is too time-consuming. To bridge the gap between the time this took and the need for graphic imagery, I sent selfies on special occasions when my facial expression spoke louder than words.
  • Sometimes you don't need to respond. Most conversations aren’t life or death situations, so it was refreshing to feel 100 percent present in all interactions. I didn’t interrupt conversations by checking social media or shooting text messages to friends. I was more in tune with my surroundings. On transit, I took part in people watching—which, yes, meant mostly watching people staring at their phones. I smiled more at passersby while walking since I didn’t feel the need to avoid human interaction by staring at my phone.
  • A phone isn't only a texting device. As I texted less, I used my phone less frequently—mostly because I didn’t feel the need to look at it to keep me busy, nor did I want to feel guilty for utilizing the keyboard through other applications. I still took photos, streamed music, and logged workouts since I felt okay with pressing buttons for selection purposes
  • People don’t expect to receive phone calls anymore. Texting is a less intimidating, more convenient experience. But it wasn’t that long ago that real-time voice calls were the norm. It’s clear to me that, these days, people prefer to be warned about an upcoming phone call before it comes in.
  • Having a pen and paper is handy at all times. Writing out responses is a great reminder to slow down and use your hands. While all keys on a keyboard feel the same, it’s difficult to replicate the tactile activity of tracing a letter’s shape
  • My sent messages were more thoughtful.
  • I was more careful with grammar and spelling. People often ignore the rules of grammar and spelling just to maintain the pace of texting conversation. But because a typical calligraphic text took minutes to craft, I had time to make sure I got things right. The usual texting acronyms and misspellings look absurd when texted with type, but they'd be especially ridiculous written by hand.