
Home/ TOK Friends/ Group items tagged researchers


grayton downing

Send in the Bots | The Scientist Magazine® - 0 views

  • Like any hypothesis, his idea needed to be tested. But measuring brain activity in a moving ant—the most direct way to determine cognitive processing during animal decision making—was not possible. So Garnier didn’t study ants; he studied robots.
  • The robots then navigated the environment by sensing light intensity through two sensors on their “heads.”
  • Several groups have used autonomous robots that sense and react to their environments to “debunk the idea that you need higher cognitive processing to do what look like cognitive things,”
  • ...10 more annotations...
  • a growing number of scientists are using autonomous robots to interrogate animal behavior and cognition. Researchers have designed robots to behave like ants, cockroaches, rodents, chickens, and more, then deployed their bots in the lab or in the environment to see how similarly they behave to their flesh-and-blood counterparts.
  • robots give behavioral biologists the freedom to explore the mind of an animal in ways that would not be possible with living subjects, says University of Sheffield researcher James Marshall, who in March helped launch a 3-year collaborative project to build a flying robot controlled by a computer-run simulation of the entire honeybee brain.
  • “I really think there is a lot to be discovered by doing the engineering side along with the science.”
  • Not only did the bots move around the space like the rat pups did, they aggregated in remarkably similar ways to the real animals. Then Schank realized that there was a bug in his program. The robots weren’t following his predetermined rules; they were moving randomly.
  • “Animal experiments are still needed to advance neuroscience.” But, he adds, robots may prove to be an indispensable new ethological tool for focusing the scope of research. “If you can have good physical models,” Prescott says, “then you can reduce the number of experiments and only do the ones that answer really important questions.”
  • Building animal-mimicking robots is not easy, however, particularly when knowledge of the system’s biology is lacking.
  • However, when the researchers also gave the robots a sense of flow, and programmed them to assume that odors come from upstream, the bots much more closely mimicked real lobster behavior. “That was a demonstration that the animals’ brains were multimodal—that they were using chemical information and flow information,” says Grasso, who has since worked on robotic models of octopus arms and crayfish.
  • In some sense, the use of robotics in animal-behavior research is not that new. Since the inception of the field of ethology, researchers have been using simple physical models of animals—“dummies”—to examine the social behavior of real animals, and biologists began animating their dummies as soon as technology would allow. “The fundamental problem when you’re studying an interaction between two individuals is that it’s a two-way interaction—you’ve got two players whose behaviors are both variable,”
  • building a robot that animals will accept as one of their own is complicated, to say the least.
  • A handful of other researchers have also successfully integrated robots with live animals—including fish, ducks, and chickens. There are several notable benefits to intermixing robots and animals; first and foremost, control. “One of the problems when studying behavior is that, of course, it’s very difficult to have control of animals, and so it’s hard for us to interpret fully how they interact with each other
Javier E

Noted Dutch Psychologist, Stapel, Accused of Research Fraud - NYTimes.com - 0 views

  • A well-known psychologist in the Netherlands whose work has been published widely in professional journals falsified data and made up entire experiments, an investigating committee has found.
  • Experts say the case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability.
  • In recent years, psychologists have reported a raft of findings on race biases, brain imaging and even extrasensory perception that have not stood up to scrutiny. Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged.
  • ...8 more annotations...
  • “The big problem is that the culture is such that researchers spin their work in a way that tells a prettier story than what they really found,” said Jonathan Schooler, a psychologist at the University of California, Santa Barbara. “It’s almost like everyone is on steroids, and to compete you have to take steroids as well.”
  • Dr. Stapel published papers on the effect of power on hypocrisy, on racial stereotyping and on how advertisements affect how people view themselves. Many of his findings appeared in newspapers around the world, including The New York Times, which reported in December on his study about advertising and identity.
  • In a survey of more than 2,000 American psychologists scheduled to be published this year, Leslie John of Harvard Business School and two colleagues found that 70 percent had acknowledged, anonymously, to cutting some corners in reporting data. About a third said they had reported an unexpected finding as predicted from the start, and about 1 percent admitted to falsifying data.
  • Dr. Stapel was able to operate for so long, the committee said, in large measure because he was “lord of the data,” the only person who saw the experimental evidence that had been gathered (or fabricated). This is a widespread problem in psychology, said Jelte M. Wicherts, a psychologist at the University of Amsterdam. In a recent survey, two-thirds of Dutch research psychologists said they did not make their raw data available for other researchers to see. “This is in violation of ethical rules established in the field,” Dr. Wicherts said.
  • Also common is a self-serving statistical sloppiness. In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.
  • An analysis of 49 studies appearing Wednesday in the journal PLoS One, by Dr. Wicherts, Dr. Bakker and Dylan Molenaar, found that the more reluctant scientists were to share their data, the more likely it was that the evidence contradicted their reported findings.
  • “We know the general tendency of humans to draw the conclusions they want to draw — there’s a different threshold,” said Joseph P. Simmons, a psychologist at the University of Pennsylvania’s Wharton School. “With findings we want to see, we ask, ‘Can I believe this?’ With those we don’t, we ask, ‘Must I believe this?’”
Javier E

It's True: False News Spreads Faster and Wider. And Humans Are to Blame. - The New York... - 0 views

  • What if the scourge of false news on the internet is not the result of Russian operatives or partisan zealots or computer-controlled bots? What if the main problem is us?
  • People are the principal culprits
  • People, the study’s authors also say, prefer false news.
  • ...18 more annotations...
  • As a result, false news travels faster, farther and deeper through the social network than true news.
  • those patterns applied to every subject they studied, not only politics and urban legends, but also business, science and technology.
  • The stories were classified as true or false, using information from six independent fact-checking organizations including Snopes, PolitiFact and FactCheck.org
  • with or without the bots, the results were essentially the same.
  • “It’s not really the robots that are to blame.”
  • “News” and “stories” were defined broadly — as claims of fact — regardless of the source. And the study explicitly avoided the term “fake news,” which, the authors write, has become “irredeemably polarized in our current political and media climate.”
  • False claims were 70 percent more likely than the truth to be shared on Twitter. True stories were rarely retweeted by more than 1,000 people, but the top 1 percent of false stories were routinely shared by 1,000 to 100,000 people. And it took true stories about six times as long as false ones to reach 1,500 people.
  • the researchers enlisted students to annotate as true or false more than 13,000 other stories that circulated on Twitter.
  • “The comprehensiveness is important here, spanning the entire history of Twitter,” said Jon Kleinberg, a computer scientist at Cornell University. “And this study shines a spotlight on the open question of the success of false information online.”
  • The M.I.T. researchers pointed to factors that contribute to the appeal of false news. Applying standard text-analysis tools, they found that false claims were significantly more novel than true ones — maybe not a surprise, since falsehoods are made up.
  • The goal, said Soroush Vosoughi, a postdoctoral researcher at the M.I.T. Media Lab and the lead author, was to find clues about what is “in the nature of humans that makes them like to share false news.”
  • The study analyzed the sentiment expressed by users in replies to claims posted on Twitter. As a measurement tool, the researchers used a system created by Canada’s National Research Council that associates English words with eight emotions
  • False claims elicited replies expressing greater surprise and disgust. True news inspired more anticipation, sadness and joy, depending on the nature of the stories.
  • The M.I.T. researchers said that understanding how false news spreads is a first step toward curbing it. They concluded that human behavior plays a large role in explaining the phenomenon, and mention possible interventions, like better labeling, to alter behavior.
  • For all the concern about false news, there is little certainty about its influence on people’s beliefs and actions. A recent study of the browsing histories of thousands of American adults in the months before the 2016 election found that false news accounted for only a small portion of the total news people consumed.
  • In fall 2016, Mr. Roy, an associate professor at the M.I.T. Media Lab, became a founder and the chairman of Cortico, a nonprofit that is developing tools to measure public conversations online to gauge attributes like shared attention, variety of opinion and receptivity. The idea is that improving the ability to measure such attributes would lead to better decision-making that would counteract misinformation.
  • Mr. Roy acknowledged the challenge in trying to not only alter individual behavior but also in enlisting the support of big internet platforms like Facebook, Google, YouTube and Twitter, and media companies
  • “Polarization,” he said, “has turned out to be a great business model.”
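The lexicon-based sentiment measurement described in this item can be sketched in a few lines. The word-to-emotion table below is a tiny invented stand-in, not the actual NRC Word-Emotion Association Lexicon the study used, and the function names are illustrative only:

```python
from collections import Counter

# Toy stand-in for the NRC Emotion Lexicon: the real lexicon maps
# thousands of English words to eight basic emotions; these five
# entries are made up purely for illustration.
EMOTION_LEXICON = {
    "shocking": {"surprise"},
    "disgusting": {"disgust"},
    "soon": {"anticipation"},
    "wonderful": {"joy"},
    "tragic": {"sadness"},
}

def emotion_profile(reply):
    """Tally the emotions associated with each word in a reply."""
    counts = Counter()
    for word in reply.lower().split():
        for emotion in EMOTION_LEXICON.get(word.strip(".,!?"), ()):
            counts[emotion] += 1
    return counts

profile = emotion_profile("Shocking, frankly disgusting news!")
# profile -> Counter({'surprise': 1, 'disgust': 1})
```

Aggregating such profiles over millions of replies is what let the researchers compare the emotional signatures of true and false claims.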
Javier E

Older Americans Are 'Hooked' on Vitamins - The New York Times - 1 views

  • When she was a young physician, Dr. Martha Gulati noticed that many of her mentors were prescribing vitamin E and folic acid to patients. Preliminary studies in the early 1990s had linked both supplements to a lower risk of heart disease. She urged her father to pop the pills as well: “Dad, you should be on these vitamins, because every cardiologist is taking them or putting their patients on [them],” recalled Dr. Gulati, now chief of cardiology for the University of Arizona College of Medicine-Phoenix.
  • But just a few years later, she found herself reversing course, after rigorous clinical trials found neither vitamin E nor folic acid supplements did anything to protect the heart. Even worse, studies linked high-dose vitamin E to a higher risk of heart failure, prostate cancer and death from any cause.
  • More than half of Americans take vitamin supplements, including 68 percent of those age 65 and older, according to a 2013 Gallup poll. Among older adults, 29 percent take four or more supplements of any kind
  • ...20 more annotations...
  • Often, preliminary studies fuel irrational exuberance about a promising dietary supplement, leading millions of people to buy in to the trend. Many never stop. They continue even though more rigorous studies — which can take many years to complete — almost never find that vitamins prevent disease, and in some cases cause harm
  • There’s no conclusive evidence that dietary supplements prevent chronic disease in the average American, Dr. Manson said. And while a handful of vitamin and mineral studies have had positive results, those findings haven’t been strong enough to recommend supplements to the general American public, she said.
  • The National Institutes of Health has spent more than $2.4 billion since 1999 studying vitamins and minerals. Yet for “all the research we’ve done, we don’t have much to show for it,” said Dr. Barnett Kramer, director of cancer prevention at the National Cancer Institute.
  • A big part of the problem, Dr. Kramer said, could be that much nutrition research has been based on faulty assumptions, including the notion that people need more vitamins and minerals than a typical diet provides; that megadoses are always safe; and that scientists can boil down the benefits of vegetables like broccoli into a daily pill.
  • when researchers tried to deliver the key ingredients of a healthy diet in a capsule, Dr. Kramer said, those efforts nearly always failed.
  • It’s possible that the chemicals in the fruits and vegetables on your plate work together in ways that scientists don’t fully understand — and which can’t be replicated in a tablet
  • More important, perhaps, is that most Americans get plenty of the essentials, anyway. Although the Western diet has a lot of problems — too much sodium, sugar, saturated fat and calories, in general — it’s not short on vitamins
  • Without even realizing it, someone who eats a typical lunch or breakfast “is essentially eating a multivitamin,”
  • The body naturally regulates the levels of many nutrients, such as vitamin C and many B vitamins, Dr. Kramer said, by excreting what it doesn’t need in urine. He added: “It’s hard to avoid getting the full range of vitamins.”
  • Not all experts agree. Dr. Walter Willett, a professor at the Harvard T.H. Chan School of Public Health, says it’s reasonable to take a daily multivitamin “for insurance.” Dr. Willett said that clinical trials underestimate supplements’ true benefits because they aren’t long enough, often lasting five to 10 years. It could take decades to notice a lower rate of cancer or heart disease in vitamin takers.
  • For Charlsa Bentley, 67, keeping up with the latest nutrition research can be frustrating. She stopped taking calcium, for example, after studies found it doesn’t protect against bone fractures. Additional studies suggest that calcium supplements increase the risk of kidney stones and heart disease.
  • People who take vitamins tend to be healthier, wealthier and better educated than those who don’t, Dr. Kramer said. They are probably less likely to succumb to heart disease or cancer, whether they take supplements or not. That can skew research results, making vitamin pills seem more effective than they really are
  • Because folic acid can lower homocysteine levels, researchers once hoped that folic acid supplements would prevent heart attacks and strokes. In a series of clinical trials, folic acid pills lowered homocysteine levels but had no overall benefit for heart disease, Dr. Lichtenstein said.
  • When studies of large populations showed that people who eat lots of seafood had fewer heart attacks, many assumed that the benefits came from the omega-3 fatty acids in fish oil, Dr. Lichtenstein said. Rigorous studies have failed to show that fish oil supplements prevent heart attacks.
  • But it’s possible the benefits of sardines and salmon have nothing to do with fish oil, Dr. Lichtenstein said. People who have fish for dinner may be healthier as a result of what they don’t eat, such as meatloaf and cheeseburgers.
  • “Eating fish is probably a good thing, but we haven’t been able to show that taking fish oil [supplements] does anything for you,”
  • In the tiny amounts provided by fruits and vegetables, beta carotene and similar substances appear to protect the body from a process called oxidation, which damages healthy cells, said Dr. Edgar Miller, a professor of medicine at Johns Hopkins School of Medicine. Experts were shocked when two large, well-designed studies in the 1990s found that beta carotene pills actually increased lung cancer rates.
  • Likewise, a clinical trial published in 2011 found that vitamin E, also an antioxidant, increased the risk of prostate cancer in men by 17 percent
  • “Vitamins are not inert,” said Dr. Eric Klein, a prostate cancer expert at the Cleveland Clinic who led the vitamin E study. “They are biologically active agents. We have to think of them in the same way as drugs. If you take too high a dose of them, they cause side effects.”
  • “We should be responsible physicians,” she said, “and wait for the data.”
oliviaodon

How scientists fool themselves - and how they can stop : Nature News & Comment - 1 views

  • In 2013, five years after he co-authored a paper showing that Democratic candidates in the United States could get more votes by moving slightly to the right on economic policy, Andrew Gelman, a statistician at Columbia University in New York City, was chagrined to learn of an error in the data analysis. In trying to replicate the work, an undergraduate student named Yang Yang Hu had discovered that Gelman had got the sign wrong on one of the variables.
  • Gelman immediately published a three-sentence correction, declaring that everything in the paper's crucial section should be considered wrong until proved otherwise.
  • Reflecting today on how it happened, Gelman traces his error back to the natural fallibility of the human brain: “The results seemed perfectly reasonable,” he says. “Lots of times with these kinds of coding errors you get results that are just ridiculous. So you know something's got to be wrong and you go back and search until you find the problem. If nothing seems wrong, it's easier to miss it.”
  • ...6 more annotations...
  • This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.
  • Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results
  • Although it is impossible to document how often researchers fool themselves in data analysis, says Ioannidis, findings of irreproducibility beg for an explanation. The study of 100 psychology papers is a case in point: if one assumes that the vast majority of the original researchers were honest and diligent, then a large proportion of the problems can be explained only by unconscious biases. “This is a great time for research on research,” he says. “The massive growth of science allows for a massive number of results, and a massive number of errors and biases to study. So there's good reason to hope we can find better ways to deal with these problems.”
  • Although the human brain and its cognitive biases have been the same for as long as we have been doing science, some important things have changed, says psychologist Brian Nosek, executive director of the non-profit Center for Open Science in Charlottesville, Virginia, which works to increase the transparency and reproducibility of scientific research. Today's academic environment is more competitive than ever. There is an emphasis on piling up publications with statistically significant results — that is, with data relationships in which a commonly used measure of statistical certainty, the p-value, is 0.05 or less. “As a researcher, I'm not trying to produce misleading results,” says Nosek. “But I do have a stake in the outcome.” And that gives the mind excellent motivation to find what it is primed to find.
  • Another reason for concern about cognitive bias is the advent of staggeringly large multivariate data sets, often harbouring only a faint signal in a sea of random noise. Statistical methods have barely caught up with such data, and our brain's methods are even worse, says Keith Baggerly, a statistician at the University of Texas MD Anderson Cancer Center in Houston. As he told a conference on challenges in bioinformatics last September in Research Triangle Park, North Carolina, “Our intuition when we start looking at 50, or hundreds of, variables sucks.”
  • One trap that awaits during the early stages of research is what might be called hypothesis myopia: investigators fixate on collecting evidence to support just one hypothesis; neglect to look for evidence against it; and fail to consider other explanations.
Javier E

There's No Such Thing As 'Sound Science' | FiveThirtyEight - 1 views

  • Science is being turned against itself. For decades, its twin ideals of transparency and rigor have been weaponized by those who disagree with results produced by the scientific method. Under the Trump administration, that fight has ramped up again.
  • The same entreaties crop up again and again: We need to root out conflicts. We need more precise evidence. What makes these arguments so powerful is that they sound quite similar to the points raised by proponents of a very different call for change that’s coming from within science.
  • Despite having dissimilar goals, the two forces espouse principles that look surprisingly alike: Science needs to be transparent. Results and methods should be openly shared so that outside researchers can independently reproduce and validate them. The methods used to collect and analyze data should be rigorous and clear, and conclusions must be supported by evidence.
  • ...26 more annotations...
  • they’re also used as talking points by politicians who are working to make it more difficult for the EPA and other federal agencies to use science in their regulatory decision-making, under the guise of basing policy on “sound science.” Science’s virtues are being wielded against it.
  • What distinguishes the two calls for transparency is intent: Whereas the “open science” movement aims to make science more reliable, reproducible and robust, proponents of “sound science” have historically worked to amplify uncertainty, create doubt and undermine scientific discoveries that threaten their interests.
  • “Our criticisms are founded in a confidence in science,” said Steven Goodman, co-director of the Meta-Research Innovation Center at Stanford and a proponent of open science. “That’s a fundamental difference — we’re critiquing science to make it better. Others are critiquing it to devalue the approach itself.”
  • Calls to base public policy on “sound science” seem unassailable if you don’t know the term’s history. The phrase was adopted by the tobacco industry in the 1990s to counteract mounting evidence linking secondhand smoke to cancer.
  • The sound science tactic exploits a fundamental feature of the scientific process: Science does not produce absolute certainty. Contrary to how it’s sometimes represented to the public, science is not a magic wand that turns everything it touches to truth. Instead, it’s a process of uncertainty reduction, much like a game of 20 Questions.
  • Any given study can rarely answer more than one question at a time, and each study usually raises a bunch of new questions in the process of answering old ones. “Science is a process rather than an answer,” said psychologist Alison Ledgerwood of the University of California, Davis. Every answer is provisional and subject to change in the face of new evidence. It’s not entirely correct to say that “this study proves this fact,” Ledgerwood said. “We should be talking instead about how science increases or decreases our confidence in something.”
  • While insisting that they merely wanted to ensure that public policy was based on sound science, tobacco companies defined the term in a way that ensured that no science could ever be sound enough. The only sound science was certain science, which is an impossible standard to achieve.
  • “Doubt is our product,” wrote one employee of the Brown & Williamson tobacco company in a 1969 internal memo. The note went on to say that doubt “is the best means of competing with the ‘body of fact’” and “establishing a controversy.” These strategies for undermining inconvenient science were so effective that they’ve served as a sort of playbook for industry interests ever since
  • Doubt merchants aren’t pushing for knowledge, they’re practicing what Proctor has dubbed “agnogenesis” — the intentional manufacture of ignorance. This ignorance isn’t simply the absence of knowing something; it’s a lack of comprehension deliberately created by agents who don’t want you to know,
  • In the hands of doubt-makers, transparency becomes a rhetorical move. “It’s really difficult as a scientist or policy maker to make a stand against transparency and openness, because well, who would be against it?”
  • But at the same time, “you can couch everything in the language of transparency and it becomes a powerful weapon.” For instance, when the EPA was preparing to set new limits on particulate pollution in the 1990s, industry groups pushed back against the research and demanded access to primary data (including records that researchers had promised participants would remain confidential) and a reanalysis of the evidence. Their calls succeeded and a new analysis was performed. The reanalysis essentially confirmed the original conclusions, but the process of conducting it delayed the implementation of regulations and cost researchers time and money.
  • Delay is a time-tested strategy. “Gridlock is the greatest friend a global warming skeptic has,” said Marc Morano, a prominent critic of global warming research
  • which has received funding from the oil and gas industry. “We’re the negative force. We’re just trying to stop stuff.”
  • these ploys are getting a fresh boost from Congress. The Data Quality Act (also known as the Information Quality Act) was reportedly written by an industry lobbyist and quietly passed as part of an appropriations bill in 2000. The rule mandates that federal agencies ensure the “quality, objectivity, utility, and integrity of information” that they disseminate, though it does little to define what these terms mean. The law also provides a mechanism for citizens and groups to challenge information that they deem inaccurate, including science that they disagree with. “It was passed in this very quiet way with no explicit debate about it — that should tell you a lot about the real goals,” Levy said.
  • in the 20 months following its implementation, the act was repeatedly used by industry groups to push back against proposed regulations and bog down the decision-making process. Instead of deploying transparency as a fundamental principle that applies to all science, these interests have used transparency as a weapon to attack very particular findings that they would like to eradicate.
  • Now Congress is considering another way to legislate how science is used. The Honest Act, a bill sponsored by Rep. Lamar Smith of Texas (the bill has been passed by the House but still awaits a vote in the Senate), is another example of what Levy calls a “Trojan horse” law that uses the language of transparency as a cover to achieve other political goals. Smith’s legislation would severely limit the kind of evidence the EPA could use for decision-making. Only studies whose raw data and computer codes were publicly available would be allowed for consideration.
  • It might seem like an easy task to sort good science from bad, but in reality it’s not so simple. “There’s a misplaced idea that we can definitively distinguish the good from the not-good science, but it’s all a matter of degree,” said Brian Nosek, executive director of the Center for Open Science. “There is no perfect study.” Requiring regulators to wait until they have (nonexistent) perfect evidence is essentially “a way of saying, ‘We don’t want to use evidence for our decision-making,’
  • Most scientific controversies aren’t about science at all, and once the sides are drawn, more data is unlikely to bring opponents into agreement.
  • objective knowledge is not enough to resolve environmental controversies. “While these controversies may appear on the surface to rest on disputed questions of fact, beneath often reside differing positions of value; values that can give shape to differing understandings of what ‘the facts’ are.” What’s needed in these cases isn’t more or better science, but mechanisms to bring those hidden values to the forefront of the discussion so that they can be debated transparently. “As long as we continue down this unabashedly naive road about what science is, and what it is capable of doing, we will continue to fail to reach any sort of meaningful consensus on these matters,”
  • The dispute over tobacco was never about the science of cigarettes’ link to cancer. It was about whether companies have the right to sell dangerous products and, if so, what obligations they have to the consumers who purchased them.
  • Similarly, the debate over climate change isn’t about whether our planet is heating, but about how much responsibility each country and person bears for stopping it
  • While researching her book “Merchants of Doubt,” science historian Naomi Oreskes found that some of the same people who were defending the tobacco industry as scientific experts were also receiving industry money to deny the role of human activity in global warming. What these issues had in common, she realized, was that they all involved the need for government action. “None of this is about the science. All of this is a political debate about the role of government,”
  • These controversies are really about values, not scientific facts, and acknowledging that would allow us to have more truthful and productive debates. What would that look like in practice? Instead of cherry-picking evidence to support a particular view (and insisting that the science points to a desired action), the various sides could lay out the values they are using to assess the evidence.
  • For instance, in Europe, many decisions are guided by the precautionary principle — a system that values caution in the face of uncertainty and says that when the risks are unclear, it should be up to industries to show that their products and processes are not harmful, rather than requiring the government to prove that they are harmful before they can be regulated. By contrast, U.S. agencies tend to wait for strong evidence of harm before issuing regulations
  • the difference between them comes down to priorities: Is it better to exercise caution at the risk of burdening companies and perhaps the economy, or is it more important to avoid potential economic downsides even if it means that sometimes a harmful product or industrial process goes unregulated?
  • But science can’t tell us how risky is too risky to allow products like cigarettes or potentially harmful pesticides to be sold — those are value judgments that only humans can make.
Javier E

The Disease Detective - The New York Times - 1 views

  • What’s startling is how many mystery infections still exist today.
  • More than a third of acute respiratory illnesses are idiopathic; the same is true for up to 40 percent of gastrointestinal disorders and more than half the cases of encephalitis (swelling of the brain).
  • Up to 20 percent of cancers and a substantial portion of autoimmune diseases, including multiple sclerosis and rheumatoid arthritis, are thought to have viral triggers, but a vast majority of those have yet to be identified.
  • ...34 more annotations...
  • Globally, the numbers can be even worse, and the stakes often higher. “Say a person comes into the hospital in Sierra Leone with a fever and flulike symptoms,” DeRisi says. “After a few days, or a week, they die. What caused that illness? Most of the time, we never find out. Because if the cause isn’t something that we can culture and test for” — like hepatitis, or strep throat — “it basically just stays a mystery.”
  • It would be better, DeRisi says, to watch for rare cases of mystery illnesses in people, which often exist well before a pathogen gains traction and is able to spread.
  • Based on a retrospective analysis of blood samples, scientists now know that H.I.V. emerged nearly a dozen times over a century, starting in the 1920s, before it went global.
  • Zika was a relatively harmless illness before a single mutation, in 2013, gave the virus the ability to enter and damage brain cells.
  • “The beauty of this approach” — running blood samples from people hospitalized all over the world through his system, known as IDseq — “is that it works even for things that we’ve never seen before, or things that we might think we’ve seen but which are actually something new.”
  • In this scenario, an undiscovered or completely new virus won’t trigger a match but will instead be flagged. (Even in those cases, the mystery pathogen will usually belong to a known virus family: coronaviruses, for instance, or filoviruses that cause hemorrhagic fevers like Ebola and Marburg.)
  • And because different types of bacteria require specific conditions in order to grow, you also need some idea of what you’re looking for in order to find it.
  • The same is true of genomic sequencing, which relies on “primers” designed to match different combinations of nucleotides (the building blocks of DNA and RNA).
  • Even looking at a slide under a microscope requires staining, which makes organisms easier to see — but the stains used to identify bacteria and parasites, for instance, aren’t the same.
  • The practice that DeRisi helped pioneer to skirt this problem is known as metagenomic sequencing
  • Unlike ordinary genomic sequencing, which tries to spell out the purified DNA of a single, known organism, metagenomic sequencing can be applied to a messy sample of just about anything — blood, mud, seawater, snot — which will often contain dozens or hundreds of different organisms, all unknown, and each with its own DNA. In order to read all the fragmented genetic material, metagenomic sequencing uses sophisticated software to stitch the pieces together by matching overlapping segments.
  • The assembled genomes are then compared against a vast database of all known genomic sequences — maintained by the government-run National Center for Biotechnology Information — making it possible for researchers to identify everything in the mix
  • Traditionally, the way that scientists have identified organisms in a sample is to culture them: Isolate a particular bacterium (or virus or parasite or fungus); grow it in a petri dish; and then examine the result under a microscope, or use genomic sequencing, to understand just what it is. But because less than 2 percent of bacteria — and even fewer viruses — can be grown in a lab, the process often reveals only a tiny fraction of what’s actually there. It’s a bit like planting 100 different kinds of seeds that you found in an old jar. One or two of those will germinate and produce a plant, but there’s no way to know what the rest might have grown into.
  • Such studies have revealed just how vast the microbial world is, and how little we know about it
  • “The selling point for researchers is: ‘Look, this technology lets you investigate what’s happening in your clinic, whether it’s kids with meningitis or something else,’” DeRisi said. “We’re not telling you what to do with it. But it’s also true that if we have enough people using this, spread out all around the world, then it does become a global network for detecting emerging pandemics
  • One study found more than 1,000 different kinds of viruses in a tiny amount of human stool; another found a million in a couple of pounds of marine sediment. And most were organisms that nobody had seen before.
  • After the Biohub opened in 2016, one of DeRisi’s goals was to turn metagenomics from a rarefied technology used by a handful of elite universities into something that researchers around the world could benefit from
  • metagenomics requires enormous amounts of computing power, putting it out of reach of all but the most well-funded research labs. The tool DeRisi created, IDseq, made it possible for researchers anywhere in the world to process samples through the use of a small, off-the-shelf sequencer, much like the one DeRisi had shown me in his lab, and then upload the results to the cloud for analysis.
  • he’s the first to make the process so accessible, even in countries where lab supplies and training are scarce. DeRisi and his team tested the chemicals used to prepare DNA for sequencing and determined that using as little as half the recommended amount often worked fine. They also 3-D print some of the labs’ tools and replacement parts, and offer ongoing training and tech support
  • The metagenomic analysis itself — normally the most expensive part of the process — is provided free.
  • But DeRisi’s main innovation has been in streamlining and simplifying the extraordinarily complex computational side of metagenomics
  • IDseq is also fast, capable of doing analyses in hours that would take other systems weeks.
  • “What IDseq really did was to marry wet-lab work — accumulating samples, processing them, running them through a sequencer — with the bioinformatic analysis,”
  • “Without that, what happens in a lot of places is that the researcher will be like, ‘OK, I collected the samples!’ But because they can’t analyze them, the samples end up in the freezer. The information just gets stuck there.”
  • Meningitis itself isn’t a disease, just a description meaning that the tissues around the brain and spinal cord have become inflamed. In the United States, bacterial infections can cause meningitis, as can enteroviruses, mumps and herpes simplex. But a high proportion of cases have, as doctors say, no known etiology: No one knows why the patient’s brain and spinal tissues are swelling.
  • When Saha and her team ran the mystery meningitis samples through IDseq, though, the result was surprising. Rather than revealing a bacterial cause, as expected, a third of the samples showed signs of the chikungunya virus — specifically, a neuroinvasive strain that was thought to be extremely rare. “At first we thought, It cannot be true!” Saha recalls. “But the moment Joe and I realized it was chikungunya, I went back and looked at the other 200 samples that we had collected around the same time. And we found the virus in some of those samples as well.”
  • Until recently, chikungunya was a comparatively rare disease, present mostly in parts of Central and East Africa. “Then it just exploded through the Caribbean and Africa and across Southeast Asia into India and Bangladesh,” DeRisi told me. In 2011, there were zero cases of chikungunya reported in Latin America. By 2014, there were a million.
  • Chikungunya is a mosquito-borne virus, but when DeRisi and Saha looked at the results from IDseq, they also saw something else: a primate tetraparvovirus. Primate tetraparvoviruses are almost unknown in humans, and have been found only in certain regions. Even now, DeRisi is careful to note, it’s not clear what effect the virus has on people. “Maybe it’s dangerous, maybe it isn’t,” DeRisi says. “But I’ll tell you what: It’s now on my radar.
  • it reveals a landscape of potentially dangerous viruses that we would otherwise never find out about. “What we’ve been missing is that there’s an entire universe of pathogens out there that are causing disease in humans,” Imam notes, “ones that we often don’t even know exist.”
  • “The plan was, Let’s let researchers around the world propose studies, and we’ll choose 10 of them to start,” DeRisi recalls. “We thought we’d get, like, a couple dozen proposals, and instead we got 350.”
  • Metagenomic sequencing is especially good at what scientists call “environmental sampling”: identifying, say, every type of bacteria present in the gut microbiome, or in a teaspoon of seawater.
  • “When you draw blood from someone who has a fever in Ghana, you really don’t know very much about what would normally be in their blood without fever — let alone about other kinds of contaminants in the environment. So how do you interpret the relevance of all the things you’re seeing?”
  • Such criticisms have led some to say that metagenomics simply isn’t suited to the infrastructure of developing countries. Along with the problem of contamination, many labs struggle to get the chemical reagents needed for sequencing, either because of the cost or because of shipping and customs holdups
  • we’re less likely to be caught off-guard. “With Ebola, there’s always an issue: Where’s the virus hiding before it breaks out?” DeRisi explains. “But also, once we start sampling people who are hospitalized more widely — meaning not just people in Northern California or Boston, but in Uganda, and Sierra Leone, and Indonesia — the chance of disastrous surprises will go down. We’ll start seeing what’s hidden.”
kaylynfreeman

How Reliable Are the Social Sciences? - The New York Times - 1 views

  • How much authority should we give to such work in our policy decisions?  The question is important because media reports often seem to assume that any result presented as “scientific” has a claim to our serious attention.
  • A rational assessment of a scientific result must first take account of the broader context of the particular science involved.  Where does the result lie on the continuum from preliminary studies, designed to suggest further directions of research, to maximally supported conclusions of the science? 
  • Second, and even more important, there is our overall assessment of work in a given science in comparison with other sciences.  The core natural sciences (e.g., physics, chemistry, biology) are so well established that we readily accept their best-supported conclusions as definitive. 
  • ...10 more annotations...
  • While the physical sciences produce many detailed and precise predictions, the social sciences do not.  The reason is that such predictions almost always require randomized controlled experiments, which are seldom possible when people are involved.  For one thing, we are too complex: our behavior depends on an enormous number of tightly interconnected variables that are extraordinarily difficult to  distinguish and study separately
  • Without a strong track record of experiments leading to successful predictions, there is seldom a basis for taking social scientific results as definitive
  • This is not to say that our policy discussions should simply ignore social scientific research.  We should, as Manzi himself proposes, find ways of injecting more experimental data into government decisions.  But above all, we need to develop a much better sense of the severely limited reliability of social scientific results.   Media reports of research should pay far more attention to these limitations, and scientists reporting the results need to emphasize what they don’t show as much as what they do.
  • Given the limited predictive success and the lack of consensus in social sciences, their conclusions can seldom be primary guides to setting policy.  At best, they can supplement the general knowledge, practical experience, good sense and critical intelligence that we can only hope our political leaders will have.
  • Social sciences may be surrounded by the “paraphernalia” of the natural sciences, such as technical terminology, mathematical equations, empirical data and even carefully designed experiments. 
Javier E

How the leading coronavirus vaccines made it to the finish line - The Washington Post - 0 views

  • If, as expected in the next few weeks, regulators give those vaccines the green light, the technology and the precision approach to vaccine design could turn out to be the pandemic’s silver linings: scientific breakthroughs that could begin to change the trajectory of the virus this winter and also pave the way for highly effective vaccines and treatments for other diseases.
  • Vaccine development typically takes years, even decades. The progress of the last 11 months shifts the paradigm for what’s possible, creating a new model for vaccine development and a toolset for a world that will have to fight more never-before-seen viruses in years to come.
  • Long before the pandemic, Graham worked with colleagues there and in academia to create a particularly accurate 3-D version of the spiky proteins that protrude from the surface of coronaviruses — an innovation that was rejected for publication by scientific journals five times because reviewers questioned its relevance.
  • ...26 more annotations...
  • Messenger RNA is a powerful, if fickle, component of life’s building blocks — a workhorse of the cell that is also truly just a messenger, unstable and prone to degrade.
  • In 1990, a team at the University of Wisconsin startled the scientific world with a paper that showed it was possible to inject a snippet of messenger RNA into mice and turn their muscle cells into factories, creating proteins on demand.
  • If custom-designed RNA snippets could be used to turn cells into bespoke protein factories, messenger RNA could become a powerful medical tool. It could encode fragments of virus to teach the immune system to defend against pathogens. It could also create whole proteins that are missing or damaged in people with devastating genetic diseases, such as cystic fibrosis.
  • In 2005, the pair discovered a way to modify RNA, chemically tweaking one of the letters of its code, so it didn’t trigger an inflammatory response. Deborah Fuller, a scientist who works on RNA and DNA vaccines at the University of Washington, said that work deserves a Nobel Prize.
  • messenger RNA posed a bigger challenge than other targets. “It’s tougher — it’s a much bigger molecule, it’s much more unstable,”
  • Unlike fields that were sparked by a single powerful insight, Sahin said that the recent success of messenger RNA vaccines is a story of countless improvements that turned an alluring biological idea into a beneficial technology.
  • “This is a field which benefited from hundreds of inventions,” said Sahin, who noted that when he started BioNTech in 2008, he cautioned investors that the technology would not yield a product for at least a decade. He kept his word: Until the coronavirus sped things along, BioNTech projected the launch of its first commercial project in 2023.
  • “It’s new to you,” Fuller said. “But for basic researchers, it’s been long enough. . . . Even before covid, everyone was talking: RNA, RNA, RNA.”
  • All vaccines are based on the same underlying idea: training the immune system to block a virus. Old-fashioned vaccines do this work by injecting dead or weakened viruses
  • Newer vaccines use distinctive bits of the virus, such as proteins on their surface, to teach the lesson. The latest genetic techniques, like messenger RNA, don’t take as long to develop because those virus bits don’t have to be generated in a lab. Instead, the vaccine delivers a genetic code that instructs cells to build those characteristic proteins themselves.
  • They wanted the immune system to learn to recognize the thumb tack spike, so McLellan tasked a scientist in his laboratory with identifying genetic mutations that could anchor the protein into the right configuration. It was a painstaking process for Nianshuang Wang, who now works at a biotechnology company, Regeneron Pharmaceuticals. After trying hundreds of genetic mutations, he found two that worked. Five journals rejected the finding, questioning its significance, before it was published in 2017.
  • That infection opened Graham’s eyes to an opportunity. HKU1 was merely a nuisance, as opposed to a deadly pneumonia; that meant it would be easier to work with in the lab, since researchers wouldn’t have to don layers of protective gear and work in a pressurized laboratory.
  • Severe acute respiratory syndrome had emerged in 2003. Middle East respiratory syndrome (MERS) broke out in 2012. It seemed clear to Graham and Jason McLellan, a structural biologist now at the University of Texas at Austin, that new coronaviruses were jumping into people on a 10-year-clock and it might be time to brace for the next one.
  • Last winter, when Graham heard rumblings of a new coronavirus in China, he brought the team back together. Once its genome was shared online by Chinese scientists, the laboratories in Texas and Maryland designed a vaccine, utilizing the stabilizing mutations and the knowledge they had gained from years of basic research — a weekend project thanks to the dividends of all that past work.
  • Graham needed a technology that could deliver it into the body — and had already been working with Moderna, using its messenger RNA technology to create a vaccine against a different bat virus, Nipah, as a dress rehearsal for a real pandemic. Moderna and NIH set the Nipah project aside and decided to go forward with a coronavirus vaccine.
  • On Jan. 13, Moderna’s Moore came into work and found her team already busy translating the stabilized spike protein into their platform. The company could start making the vaccine almost right away because of its experience manufacturing experimental cancer vaccines, which involves taking tumor samples and developing personalized vaccines in 45 days.
  • At BioNTech, Sahin said that even in the early design phases of its vaccine candidates, he incorporated the slight genetic changes designed in Graham’s lab that would make the spike look more like the real thing. At least two other companies would incorporate that same spike.
  • If all goes well with regulators, the coronavirus vaccines have the makings of a pharmaceutical industry fairy tale. The world faced an unparalleled threat, and companies leaped into the fight. Pfizer plowed $2 billion into the effort. Massive infusions of government cash helped remove the financial risks for Moderna.
  • But the world will also owe their existence to many scientists outside those companies, in government and academia who pursued ideas they thought were important even when the world doubted them
  • Some of those scientists will receive remuneration, since their inventions are licensed and integrated into the products that could save the world.
  • As executives become billionaires, many scientists think it is fair to earn money from their inventions that can help them do more important work. But McLellan’s laboratory at the University of Texas is proud to have licensed an even more potent version of their spike protein, royalty-free, to be incorporated into a vaccine for low and middle income countries.
  • “They’re using the technology that [Kariko] and I developed,” he said. “We feel like it’s our vaccine, and we are incredibly excited — at how well it’s going, and how it’s going to be used to get rid of this pandemic.”
  • “People hear about [vaccine progress] and think someone just thought about it that night. The amount of work — it’s really a beautiful story of fundamental basic research,” Fauci said. “It was chancy, in the sense that [the vaccine technology] was new. We were aware there would be pushback. The proof in the pudding is a spectacular success.”
  • The Vaccine Research Center, where Graham is deputy director, was the brainchild of Anthony S. Fauci, director of the National Institute of Allergy and Infectious Diseases. It was created in 1997 to bring together scientists and physicians from different disciplines to defeat diseases, with a heavy focus on HIV.
  • the pandemic wasn’t a sudden eureka moment — it was a catalyst that helped ignite lines of research that had been moving forward for years, far outside the spotlight of a global crisis.
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • ...48 more annotations...
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • , “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (iarpa), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, iarpa initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
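The law-of-large-numbers answer behind the baseball-phenom survey is easy to check numerically. The sketch below is illustrative only: the league of 300 batters and the shared .270 true average are invented assumptions, not figures from the article.

```python
# Simulate why .450 averages appear early in a season but never survive it.
# All parameters (300 batters, true average .270) are illustrative assumptions.
import random

random.seed(1)
N_BATTERS = 300
TRUE_AVG = 0.270

def best_average(at_bats):
    """Highest observed batting average across all batters after `at_bats` tries."""
    best = 0.0
    for _ in range(N_BATTERS):
        hits = sum(random.random() < TRUE_AVG for _ in range(at_bats))
        best = max(best, hits / at_bats)
    return best

early = best_average(20)    # small sample: outlier averages are common
season = best_average(500)  # large sample: the leader regresses toward .270
print(f"best after  20 at-bats: {early:.3f}")
print(f"best after 500 at-bats: {season:.3f}")
```

The early-season leader's average comes out far above the late-season leader's, even though every simulated batter has identical true ability — exactly the pattern the Michigan students were asked to explain.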
Javier E

The new science of death: 'There's something happening in the brain that makes no sense... - 0 views

  • Jimo Borjigin, a professor of neurology at the University of Michigan, had been troubled by the question of what happens to us when we die. She had read about the near-death experiences of certain cardiac-arrest survivors who had undergone extraordinary psychic journeys before being resuscitated. Sometimes, these people reported travelling outside of their bodies towards overwhelming sources of light where they were greeted by dead relatives. Others spoke of coming to a new understanding of their lives, or encountering beings of profound goodness
  • Borjigin didn’t believe the content of those stories was true – she didn’t think the souls of dying people actually travelled to an afterworld – but she suspected something very real was happening in those patients’ brains. In her own laboratory, she had discovered that rats undergo a dramatic storm of many neurotransmitters, including serotonin and dopamine, after their hearts stop and their brains lose oxygen. She wondered if humans’ near-death experiences might spring from a similar phenomenon, and if it was occurring even in people who couldn’t be revived
  • when she looked at the scientific literature, she found little enlightenment. “To die is such an essential part of life,” she told me recently. “But we knew almost nothing about the dying brain.” So she decided to go back and figure out what had happened inside the brains of people who died at the University of Michigan neurointensive care unit.
  • Since the 1960s, advances in resuscitation had helped to revive thousands of people who might otherwise have died. About 10% or 20% of those people brought with them stories of near-death experiences in which they felt their souls or selves departing from their bodies
  • According to several international surveys and studies, one in 10 people claims to have had a near-death experience involving cardiac arrest, or a similar experience in circumstances where they may have come close to death. That’s roughly 800 million souls worldwide who may have dipped a toe in the afterlife.
  • In the 1970s, a small network of cardiologists, psychiatrists, medical sociologists and social psychologists in North America and Europe began investigating whether near-death experiences proved that dying is not the end of being, and that consciousness can exist independently of the brain. The field of near-death studies was born.
  • in 1975, an American medical student named Raymond Moody published a book called Life After Life.
  • Meanwhile, new technologies and techniques were helping doctors revive more and more people who, in earlier periods of history, would have almost certainly been permanently deceased.
  • “We are now at the point where we have both the tools and the means to scientifically answer the age-old question: What happens when we die?” wrote Sam Parnia, an accomplished resuscitation specialist and one of the world’s leading experts on near-death experiences, in 2006. Parnia himself was devising an international study to test whether patients could have conscious awareness even after they were found clinically dead.
  • Borjigin, together with several colleagues, took the first close look at the record of electrical activity in the brain of Patient One after she was taken off life support. What they discovered – in results reported for the first time last year – was almost entirely unexpected, and has the potential to rewrite our understanding of death.
  • “I believe what we found is only the tip of a vast iceberg,” Borjigin told me. “What’s still beneath the surface is a full account of how dying actually takes place. Because there’s something happening in there, in the brain, that makes no sense.”
  • Over the next 30 years, researchers collected thousands of case reports of people who had had near-death experiences
  • Moody was their most important spokesman; he eventually claimed to have had multiple past lives and built a “psychomanteum” in rural Alabama where people could attempt to summon the spirits of the dead by gazing into a dimly lit mirror.
  • near-death studies was already splitting into several schools of belief, whose tensions continue to this day. One influential camp was made up of spiritualists, some of them evangelical Christians, who were convinced that near-death experiences were genuine sojourns in the land of the dead and divine
  • It is no longer unheard of for people to be revived even six hours after being declared clinically dead. In 2011, Japanese doctors reported the case of a young woman who was found in a forest one morning after an overdose stopped her heart the previous night; using advanced technology to circulate blood and oxygen through her body, the doctors were able to revive her more than six hours later, and she was able to walk out of the hospital after three weeks of care
  • The second, and largest, faction of near-death researchers were the parapsychologists, those interested in phenomena that seemed to undermine the scientific orthodoxy that the mind could not exist independently of the brain. These researchers, who were by and large trained scientists following well established research methods, tended to believe that near-death experiences offered evidence that consciousness could persist after the death of the individual.
  • Their aim was to find ways to test their theories of consciousness empirically, and to turn near-death studies into a legitimate scientific endeavour.
  • Finally, there emerged the smallest contingent of near-death researchers, who could be labelled the physicalists. These were scientists, many of whom studied the brain, who were committed to a strictly biological account of near-death experiences. Like dreams, the physicalists argued, near-death experiences might reveal psychological truths, but they did so through hallucinatory fictions that emerged from the workings of the body and the brain.
  • Between 1975, when Moody published Life After Life, and 1984, only 17 articles in the PubMed database of scientific publications mentioned near-death experiences. In the following decade, there were 62. In the most recent 10-year span, there were 221.
  • Today, there is a widespread sense throughout the community of near-death researchers that we are on the verge of great discoveries
  • “We really are in a crucial moment where we have to disentangle consciousness from responsiveness, and maybe question every state that we consider unconscious,”
  • “I think in 50 or 100 years time we will have discovered the entity that is consciousness,” he told me. “It will be taken for granted that it wasn’t produced by the brain, and it doesn’t die when you die.”
  • it is in large part because of a revolution in our ability to resuscitate people who have suffered cardiac arrest
  • In his book, Moody distilled the reports of 150 people who had had intense, life-altering experiences in the moments surrounding a cardiac arrest. Although the reports varied, he found that they often shared one or more common features or themes. The narrative arc of the most detailed of those reports – departing the body and travelling through a long tunnel, having an out-of-body experience, encountering spirits and a being of light, one’s whole life flashing before one’s eyes, and returning to the body from some outer limit – became so canonical that the art critic Robert Hughes could refer to it years later as “the familiar kitsch of near-death experience”.
  • Loss of oxygen to the brain and other organs generally follows within seconds or minutes, although the complete cessation of activity in the heart and brain – which is often called “flatlining” or, in the case of the latter, “brain death” – may not occur for many minutes or even hours.
  • That began to change in 1960, when the combination of mouth-to-mouth ventilation, chest compressions and external defibrillation known as cardiopulmonary resuscitation, or CPR, was formalised. Shortly thereafter, a massive campaign was launched to educate clinicians and the public on CPR’s basic techniques, and soon people were being revived in previously unthinkable, if still modest, numbers.
  • scientists learned that, even in its acute final stages, death is not a point, but a process. After cardiac arrest, blood and oxygen stop circulating through the body, cells begin to break down, and normal electrical activity in the brain gets disrupted. But the organs don’t fail irreversibly right away, and the brain doesn’t necessarily cease functioning altogether. There is often still the possibility of a return to life. In some cases, cell death can be stopped or significantly slowed, the heart can be restarted, and brain function can be restored. In other words, the process of death can be reversed.
  • In a medical setting, “clinical death” is said to occur at the moment the heart stops pumping blood, and the pulse stops. This is widely known as cardiac arrest
  • In 2019, a British woman named Audrey Schoeman who was caught in a snowstorm spent six hours in cardiac arrest before doctors brought her back to life with no evident brain damage.
  • That is a key tenet of the parapsychologists’ arguments: if there is consciousness without brain activity, then consciousness must dwell somewhere beyond the brain
  • Some of the parapsychologists speculate that it is a “non-local” force that pervades the universe, like electromagnetism. This force is received by the brain, but is not generated by it, the way a television receives a broadcast.
  • In order for this argument to hold, something else has to be true: near-death experiences have to happen during death, after the brain shuts down
  • To prove this, parapsychologists point to a number of rare but astounding cases known as “veridical” near-death experiences, in which patients seem to report details from the operating room that they might have known only if they had conscious awareness during the time that they were clinically dead.
  • At the very least, Parnia and his colleagues have written, such phenomena are “inexplicable through current neuroscientific models”. Unfortunately for the parapsychologists, however, none of the reports of post-death awareness holds up to strict scientific scrutiny. “There are many claims of this kind, but in my long decades of research into out-of-body and near-death experiences I never met any convincing evidence that this is true,”
  • In other cases, there’s not enough evidence to prove that the experiences reported by cardiac arrest survivors happened when their brains were shut down, as opposed to in the period before or after they supposedly “flatlined”. “So far, there is no sufficiently rigorous, convincing empirical evidence that people can observe their surroundings during a near-death experience,”
  • The parapsychologists tend to push back by arguing that even if each of the cases of veridical near-death experiences leaves room for scientific doubt, surely the accumulation of dozens of these reports must count for something. But that argument can be turned on its head: if there are so many genuine instances of consciousness surviving death, then why should it have so far proven impossible to catch one empirically?
  • The spiritualists and parapsychologists are right to insist that something deeply weird is happening to people when they die, but they are wrong to assume it is happening in the next life rather than this one. At least, that is the implication of what Jimo Borjigin found when she investigated the case of Patient One.
  • Given the levels of activity and connectivity in particular regions of her dying brain, Borjigin believes it’s likely that Patient One had a profound near-death experience with many of its major features: out-of-body sensations, visions of light, feelings of joy or serenity, and moral re-evaluations of one’s life. Of course,
  • “As she died, Patient One’s brain was functioning in a kind of hyperdrive,” Borjigin told me. For about two minutes after her oxygen was cut off, there was an intense synchronisation of her brain waves, a state associated with many cognitive functions, including heightened attention and memory. The synchronisation dampened for about 18 seconds, then intensified again for more than four minutes. It faded for a minute, then came back for a third time.
  • In those same periods of dying, different parts of Patient One’s brain were suddenly in close communication with each other. The most intense connections started immediately after her oxygen stopped, and lasted for nearly four minutes. There was another burst of connectivity more than five minutes and 20 seconds after she was taken off life support. In particular, areas of her brain associated with processing conscious experience – areas that are active when we move through the waking world, and when we have vivid dreams – were communicating with those involved in memory formation. So were parts of the brain associated with empathy. Even as she slipped irre…
  • something that looked astonishingly like life was taking place over several minutes in Patient One’s brain.
  • Although a few earlier instances of brain waves had been reported in dying human brains, nothing as detailed and complex as what occurred in Patient One had ever been detected.
  • In the moments after Patient One was taken off oxygen, there was a surge of activity in her dying brain. Areas that had been nearly silent while she was on life support suddenly thrummed with high-frequency electrical signals called gamma waves. In particular, the parts of the brain that scientists consider a “hot zone” for consciousness became dramatically alive. In one section, the signals remained detectable for more than six minutes. In another, they were 11 to 12 times higher than they had been before Patient One’s ventilator was removed.
  • “The brain, contrary to everybody’s belief, is actually super active during cardiac arrest,” Borjigin said. Death may be far more alive than we ever thought possible.
  • “The brain is so resilient, the heart is so resilient, that it takes years of abuse to kill them,” she pointed out. “Why then, without oxygen, can a perfectly healthy person die within 30 minutes, irreversibly?”
  • Evidence is already emerging that even total brain death may someday be reversible. In 2019, scientists at Yale University harvested the brains of pigs that had been decapitated in a commercial slaughterhouse four hours earlier. Then they perfused the brains for six hours with a special cocktail of drugs and synthetic blood. Astoundingly, some of the cells in the brains began to show metabolic activity again, and some of the synapses even began firing.
Javier E

Science and gun violence: why is the research so weak? [Part 2] - Boing Boing - 1 views

  • Scientists are missing some important bits of data that would help them better understand the effects of gun policy and the causes of gun-related violence. But that’s not the only reason why we don’t have solid answers. Once you have the data, you still have to figure out what it means. This is where the research gets complicated, because the problem isn’t simply about what we do and don’t know right now. The problem, say some scientists, is that we —from the public, to politicians, to even scientists themselves—may be trying to force research to give a type of answer that we can’t reasonably expect it to offer. To understand what science can do for the gun debates, we might have to rethink what “evidence-based policy” means to us.
  • For the most part, there aren’t a lot of differences in the data that these studies are using. So how can they reach such drastically different conclusions? The issue is in the kind of data that exists, and what you have to do to understand it, says Charles Manski, professor of economics at Northwestern University. Manski studies the ways that other scientists do research and how that research translates into public policy.
  • Even if we did have those gaps filled in, Manski said, what we’d have would still just be observational data, not experimental data. “We don’t have randomized, controlled experiments, here,” he said. “The only way you could do that, you’d have to assign a gun to some people randomly at birth and follow them throughout their lives. Obviously, that’s not something that’s going to work.”
  • This means that, even under the best circumstances, scientists can’t directly test what the results of a given gun policy are. The best you can do is to compare what was happening in a state before and after a policy was enacted, or to compare two different states, one that has the policy and one that doesn’t. And that’s a pretty inexact way of working.
  • Add in enough assumptions, and you can eventually come up with an estimate. But is the estimate correct? Is it even close to reality? That’s a hard question to answer, because the assumptions you made—the correlations you drew between cause and effect, what you know and what you assume to be true because of that—might be totally wrong.
  • It’s hard to tease apart the effect of one specific change, compared to the effects of other things that could be happening at the same time.
  • This process of taking the observational data we do have and then running it through a filter of assumptions plays out in the real world in the form of statistical modeling. When the NAS report says that nobody yet knows whether more guns lead to more crime, or less crime, what they mean is that the models and the assumptions built into those models are all still proving to be pretty weak.
  • From either side of the debate, he said, scientists continue to produce wildly different conclusions using the same data. On either side, small shifts in the assumptions lead the models to produce different results. Both factions continue to choose sets of assumptions that aren’t terribly logical. It’s as if you decided that anybody with blue shoes probably had a belly-button piercing. There’s not really a good reason for making that correlation. And if you change the assumption—actually, belly-button piercings are more common in people who wear green shoes—you end up with completely different results.
  • The Intergovernmental Panel on Climate Change (IPCC) produces these big reports periodically, which analyze lots of individual papers. In essence, they’re looking at lots of trees and trying to paint you a picture of the forest. IPCC reports are available for free online, you can go and read them yourself. When you do, you’ll notice something interesting about the way that the reports present results. The IPCC never says, “Because we burned fossil fuels and emitted carbon dioxide into the atmosphere then the Earth will warm by x degrees.” Instead, those reports present a range of possible outcomes … for everything. Depending on the different models used, different scenarios presented, and the different assumptions made, the temperature of the Earth might increase by anywhere between 1.5 and 4.5 degrees Celsius.
  • What you’re left with is an environment where it’s really easy to prove that your colleague’s results are probably wrong, and it’s easy for him to prove that yours are probably wrong. But it’s not easy for either of you to make a compelling case for why you’re right.
  • Statistical modeling isn’t unique to gun research. It just happens to be particularly messy in this field. Scientists who study other topics have done a better job of using stronger assumptions and of building models that can’t be upended by changing one small, seemingly randomly chosen detail. It’s not that, in these other fields, there’s only one model being used, or even that all the different models produce the exact same results. But the models are stronger and, more importantly, the scientists do a better job of presenting the differences between models and drawing meaning from them.
  • “Climate change is one of the rare scientific literatures that has actually faced up to this,” Charles Manski said. What he means is that, when scientists model climate change, they don’t expect to produce exact, to-the-decimal-point answers.
  • “It’s been a complete waste of time, because we can’t validate one model versus another,” Pepper said. Most likely, he thinks that all of them are wrong. For instance, all the models he’s seen assume that a law will affect every state in the same way, and every person within that state in the same way. “But if you think about it, that’s just nonsensical,” he said.
  • On the one hand, that leaves politicians in a bit of a lurch. The response you might mount to counteract a 1.5 degree increase in global average temperature is pretty different from the response you’d have to 4.5 degrees. On the other hand, the range does tell us something valuable: the temperature is increasing.
  • The problem with this is that it flies in the face of what most of us expect science to do for public policy. Politics is inherently biased, right? The solutions that people come up with are driven by their ideologies. Science is supposed to cut that Gordian Knot. It’s supposed to lay the evidence down on the table and impartially determine who is right and who is wrong.
  • Manski and Pepper say that this is where we need to rethink what we expect science to do. Science, they say, isn’t here to stop all political debate in its tracks. In a situation like this, it simply can’t provide a detailed enough answer to do that—not unless you’re comfortable with detailed answers that are easily called into question and disproven by somebody else with a detailed answer.
  • Instead, science can reliably produce a range of possible outcomes, but it’s still up to the politicians (and, by extension, up to us) to hash out compromises between wildly differing values on controversial subjects. When it comes to complex social issues like gun ownership and gun violence, science doesn’t mean you get to blow off your political opponents and stake a claim on truth. Chances are, the closest we can get to the truth is a range that encompasses the beliefs of many different groups.
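Manski's point about fragile assumptions can be made concrete with a toy simulation. Everything below is invented — no real gun data, policies, or effect sizes — it only shows how two plausible assumption sets applied to the same observational data yield opposite conclusions.

```python
# Toy illustration of assumption-driven conclusions: the simulated policy has
# NO true effect, yet a naive comparison and a confounder-aware comparison of
# the same data disagree. All numbers here are invented.
import random

random.seed(0)

states = []
for _ in range(200):
    urban = random.random()                  # hidden confounder
    policy = 1 if urban > 0.5 else 0         # policy adoption tracks the confounder
    crime = 10 * urban + random.gauss(0, 1)  # outcome driven by confounder only
    states.append((policy, urban, crime))

def mean(xs):
    return sum(xs) / len(xs)

# Assumption set A: policy and non-policy states are directly comparable.
naive = (mean([c for p, u, c in states if p])
         - mean([c for p, u, c in states if not p]))

# Assumption set B: compare only states with similar urbanization levels.
adjusted = (mean([c for p, u, c in states if p and u < 0.6])
            - mean([c for p, u, c in states if not p and u > 0.4]))

print(f"naive estimate of policy effect: {naive:+.2f}")
print(f"confounder-matched estimate:     {adjusted:+.2f}")
```

Neither estimate is "the" right model; the point, per Manski and Pepper, is that the same data supports both, and which one you believe turns on an assumption the data cannot test.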
sissij

Believe It Or Not, Most Published Research Findings Are Probably False | Big Think - 0 views

  • but this has come with the side effect of a toxic combination of confirmation bias and Google, enabling us to easily find a study to support whatever it is that we already believe, without bothering to so much as look at research that might challenge our position
  • Indeed, this is a statement oft-used by fans of pseudoscience who take the claim at face value, without applying the principles behind it to their own evidence.
  • at present, most published findings are likely to be incorrect.
  • If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30 percent of the time.
  • The problem is being tackled head on in the field of psychology which was shaken by the Stapel affair in which one Dutch researcher fabricated data in over 50 fraudulent papers before being detected.
  • a problem known as publication bias or the file drawer problem.
  • The smaller the effect size, the less likely the findings are to be true.
  • The greater the number and the lesser the selection of tested relationships, the less likely the findings are to be true.
  • For scientists, the discussion over how to resolve the problem is rapidly heating up with calls for big changes to how researchers register, conduct, and publish research and a growing chorus from hundreds of global scientific organizations demanding that all clinical trials are published.
  •  
    As we learned in TOK, science is full of uncertainties. In this article, the author suggests that even the publication of scientific papers is full of flaws. The general public often cites scientific sources that support their existing views, yet published findings are frequently wrong, and the chance that a study makes a false claim is high. Sometimes it is not experimental error but the fabrication of data that leads to false papers, and there are recognizable patterns behind how such false papers get published.
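The "wrong at least 30 percent of the time" claim follows from simple arithmetic on conditional probabilities. A minimal sketch, assuming hypothetically that 10% of tested hypotheses are true and that studies have 80% power — neither number comes from the article:

```python
# False discovery rate among p < 0.05 results, given an assumed prior and
# statistical power. Both inputs are illustrative assumptions.

def false_discovery_rate(prior_true, alpha=0.05, power=0.8):
    """Fraction of 'significant' findings that are false positives."""
    true_hits = power * prior_true          # real effects correctly detected
    false_hits = alpha * (1 - prior_true)   # null effects crossing p < 0.05
    return false_hits / (true_hits + false_hits)

print(f"{false_discovery_rate(0.10):.0%}")  # → 36%
```

Under these assumptions, 36% of "discoveries" at p = 0.05 are false — consistent with the article's "at least 30 percent" — and the rate climbs further as power drops or as the tested hypotheses become less plausible.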
Javier E

When A MOOC Exploits Its Learners: A Coursera Case Study | NeoAcademic - 0 views

  • To facilitate a 50,000:1 teacher-student ratio, they rely on an instructional model requiring minimal instructor involvement, potentially to the detriment of learners.
  • The only real change in the year following “the year of the MOOC” is that these companies have now begun to strike deals with private organizations to funnel in high-performing students. To me, this seems like a terrifically clever way to circumvent labor laws. Instead of paying new employees during an onboarding and training period, businesses can now require employees to take a “free course” before paying them a dime.
  • why not reach out to an audience ready and eager to learn just because they are intrinsically motivated to develop their skills? This is what has motivated me to look into producing an I/O Psychology MOOC
  • in Week 4, the assignment was to complete this research study, which was not linked with any learning objectives in that week (at least in any way indicated to students).  If you didn’t complete the research study, you earned a zero for the assignment.  There was no apparent way around it.
  • I can tell you emphatically that this would not be considered an ethical course design choice in a real college classroom. Research participation must be voluntary and non-mandatory. If an instructor does require research participation (common in Psychology to build a subject pool), there must always be an alternative non-data-collection-oriented assignment in order to obtain the same credit. Anyone that doesn’t want to be the subject of research must always have a way to do exactly that – skip research and still get course credit.
  • I will not be completing this MOOC, and I can only wonder how many others dropped because they, too, felt exploited by their instructors.
Javier E

Naomi Oreskes, a Lightning Rod in a Changing Climate - The New York Times - 0 views

  • Dr. Oreskes is fast becoming one of the biggest names in climate science — not as a climatologist, but as a defender who uses the tools of historical scholarship to counter what she sees as ideologically motivated attacks on the field.
  • Formally, she is a historian of science
  • Dr. Oreskes’s approach has been to dig deeply into the history of climate change denial, documenting its links to other episodes in which critics challenged a developing scientific consensus.
  • Her core discovery, made with a co-author, Erik M. Conway, was twofold. They reported that dubious tactics had been used over decades to cast doubt on scientific findings relating to subjects like acid rain, the ozone shield, tobacco smoke and climate change. And most surprisingly, in each case, the tactics were employed by the same group of people.
  • The central players were serious scientists who had major career triumphs during the Cold War, but in subsequent years apparently came to equate environmentalism with socialism, and government regulation with tyranny.
  • In a 2010 book, Dr. Oreskes and Dr. Conway called these men “Merchants of Doubt,” and this spring the book became a documentary film, by Robert Kenner. At the heart of both works is a description of methods that were honed by the tobacco industry in the 1960s and have since been employed to cast doubt on just about any science being cited to support new government regulations.
  • Dr. Oreskes, the more visible and vocal of the “Merchants” authors, has been threatened with lawsuits and vilified on conservative websites, and routinely gets hate mail calling her a communist or worse.
  • She established her career as a historian with a book-length study examining the role of dissent in the scientific method. As she put it a few months ago to an audience at Indiana University, she wanted to wrestle with this question: “How do you distinguish a maverick from a crank?”
  • Dr. Oreskes found that Wegener had been treated badly, particularly by American geologists. But he did not abandon his faith in the scientific method. He kept publishing until his death in 1930, trying to convince fellow scientists of his position, and was finally vindicated three decades later by oceanographic research conducted during the Cold War.
  • As she completed that study, Dr. Oreskes sought to understand how science was affected not only by the Cold War but by its end. In particular, she started wondering about climate science. Global warming had seemed to rise as an important issue around the time the Iron Curtain came down. Was this just a way for scientists to scare up research money that would no longer be coming their way through military channels?
  • the widespread public impression was that scientists were still divided over whether humans were primarily responsible for the warming of the planet. But how sharp was the split, she wondered?
  • She decided to do something no climate scientist had thought to do: count the published scientific papers. Pulling 928 of them, she was startled to find that not one dissented from the basic findings that warming was underway and human activity was the main reason.
  • She published that finding in a short paper in the journal Science in 2004, and the reaction was electric. Advocates of climate action seized on it as proof of a level of scientific consensus that most of them had not fully perceived. Just as suddenly, Dr. Oreskes found herself under political attack.
  • Some of the voices criticizing her — scientists like Dr. Singer and groups like the George C. Marshall Institute in Washington — were barely known to her at the time, Dr. Oreskes said in an interview. Just who were they?
  • It did not take them long to document that this group, which included prominent Cold War scientists, had been attacking environmental research for decades, challenging the science of the ozone layer and acid rain, even the finding that breathing secondhand tobacco smoke was harmful. Trying to undermine climate science was simply the latest project.
  • Dr. Oreskes and Dr. Conway came to believe that the attacks were patterned on the strategy employed by the tobacco industry when evidence of health risks first emerged. Documents pried loose by lawyers showed that the industry had paid certain scientists to contrive dubious research, had intimidated reputable scientists, and had cherry-picked evidence to present a misleading picture.
  • The tobacco industry had used these tactics in defense of profits. But Dr. Oreskes and Dr. Conway wrote that the so-called merchants of doubt had adopted them for a deep ideological reason: contempt for government regulation. The insight gave climate scientists a new way of understanding the politics that had engulfed their field.
  • Following Dr. Oreskes’s cue, researchers have in recent years developed a cottage industry of counting scientific papers and polling scientists. The results typically show that about 97 percent of working climate scientists accept that global warming is happening, that humans are largely responsible, and that the situation poses long-term risks, though the severity of those risks is not entirely clear. That wave of evidence has prompted many national news organizations to stop portraying the field as split evenly between scientists who are convinced and unconvinced.
  • Dr. Oreskes’s critics have taken delight in searching out errors in her books and other writings, prompting her to post several corrections. They have generally been minor, though, like describing a pH of six as neutral, when the correct number is seven. Dr. Oreskes described that as a typographical error.
  • In the leaked emails, Dr. Singer told a group of his fellow climate change denialists that he felt that Dr. Oreskes and Dr. Conway had libeled him. But in an interview, when pressed for specific errors in the book that might constitute libel, he listed none. Nor did he provide such a list in response to a follow-up email request.
  • However much she might be hated by climate change denialists, Dr. Oreskes is often welcomed on college campuses these days. She usually outlines the decades of research supporting the idea that human emissions pose serious risks.
  • “One of the things that should always be asked about scientific evidence is, how old is it?” Dr. Oreskes said. “It’s like wine. If the science about climate change were only a few years old, I’d be a skeptic, too.”
  • Dr. Oreskes and Dr. Conway keep looking for ways to reach new audiences. Last year, they published a short work of science fiction, written as a historical essay from the distant future. “The Collapse of Western Civilization: A View From the Future” argues that conservatives, by fighting sensible action to cope with the climate crisis, are essentially guaranteeing the long-term outcome they fear, a huge expansion of government.
catbclark

Why Do Many Reasonable People Doubt Science? - National Geographic Magazine - 0 views

  • Actually fluoride is a natural mineral that, in the weak concentrations used in public drinking water systems, hardens tooth enamel and prevents tooth decay—a cheap and safe way to improve dental health for everyone, rich or poor, conscientious brusher or not. That’s the scientific and medical consensus.
  • when Galileo claimed that the Earth spins on its axis and orbits the sun, he wasn’t just rejecting church doctrine. He was asking people to believe something that defied common sense
  • all manner of scientific knowledge—from the safety of fluoride and vaccines to the reality of climate change—faces organized and often furious opposition.
  • ...61 more annotations...
  • Empowered by their own sources of information and their own interpretations of research, doubters have declared war on the consensus of experts.
  • Our lives are permeated by science and technology as never before. For many of us this new world is wondrous, comfortable, and rich in rewards—but also more complicated and sometimes unnerving. We now face risks we can’t easily analyze.
  • The world crackles with real and imaginary hazards, and distinguishing the former from the latter isn’t easy.
  • In this bewildering world we have to decide what to believe and how to act on that. In principle that’s what science is for.
  • “Science is not a body of facts,” says geophysicist Marcia McNutt,
  • “Science is a method for deciding whether what we choose to believe has a basis in the laws of nature or not.”
  • The scientific method leads us to truths that are less than self-evident, often mind-blowing, and sometimes hard to swallow.
  • We don’t believe you.
  • Galileo was put on trial and forced to recant. Two centuries later Charles Darwin escaped that fate. But his idea that all life on Earth evolved from a primordial ancestor and that we humans are distant cousins of apes, whales, and even deep-sea mollusks is still a big ask for a lot of people. So is another 19th-century notion: that carbon dioxide, an invisible gas that we all exhale all the time and that makes up less than a tenth of one percent of the atmosphere, could be affecting Earth’s climate.
  • we intellectually accept these precepts of science, we subconsciously cling to our intuitions
  • Shtulman’s research indicates that as we become scientifically literate, we repress our naive beliefs but never eliminate them entirely. They lurk in our brains, chirping at us as we try to make sense of the world.
  • Most of us do that by relying on personal experience and anecdotes, on stories rather than statistics.
  • We have trouble digesting randomness; our brains crave pattern and meaning.
  • we can deceive ourselves.
  • Even for scientists, the scientific method is a hard discipline. Like the rest of us, they’re vulnerable to what they call confirmation bias—the tendency to look for and see only evidence that confirms what they already believe. But unlike the rest of us, they submit their ideas to formal peer review before publishing them
  • other scientists will try to reproduce them
  • Scientific results are always provisional, susceptible to being overturned by some future experiment or observation. Scientists rarely proclaim an absolute truth or absolute certainty. Uncertainty is inevitable at the frontiers of knowledge.
  • Many people in the United States—a far greater percentage than in other countries—retain doubts about that consensus or believe that climate activists are using the threat of global warming to attack the free market and industrial society generally.
  • news media give abundant attention to such mavericks, naysayers, professional controversialists, and table thumpers. The media would also have you believe that science is full of shocking discoveries made by lone geniuses
  • science tells us the truth rather than what we’d like the truth to be. Scientists can be as dogmatic as anyone else—but their dogma is always wilting in the hot glare of new research.
  • But industry PR, however misleading, isn’t enough to explain why only 40 percent of Americans, according to the most recent poll from the Pew Research Center, accept that human activity is the dominant cause of global warming.
  • “science communication problem,”
  • yielded abundant new research into how people decide what to believe—and why they so often don’t accept the scientific consensus.
  • higher literacy was associated with stronger views—at both ends of the spectrum. Science literacy promoted polarization on climate, not consensus. According to Kahan, that’s because people tend to use scientific knowledge to reinforce beliefs that have already been shaped by their worldview.
  • “egalitarian” and “communitarian” mind-set are generally suspicious of industry and apt to think it’s up to something dangerous that calls for government regulation; they’re likely to see the risks of climate change.
  • “hierarchical” and “individualistic” mind-set respect leaders of industry and don’t like government interfering in their affairs; they’re apt to reject warnings about climate change, because they know what accepting them could lead to—some kind of tax or regulation to limit emissions.
  • For a hierarchical individualist, Kahan says, it’s not irrational to reject established climate science: Accepting it wouldn’t change the world, but it might get him thrown out of his tribe.
  • Science appeals to our rational brain, but our beliefs are motivated largely by emotion, and the biggest motivation is remaining tight with our peers.
  • organizations funded in part by the fossil fuel industry have deliberately tried to undermine the public’s understanding of the scientific consensus by promoting a few skeptics.
  • Internet makes it easier than ever for climate skeptics and doubters of all kinds to find their own information and experts
  • Internet has democratized information, which is a good thing. But along with cable TV, it has made it possible to live in a “filter bubble” that lets in only the information with which you already agree.
  • How to convert climate skeptics? Throwing more facts at them doesn’t help.
  • people need to hear from believers they can trust, who share their fundamental values.
  • We believe in scientific ideas not because we have truly evaluated all the evidence but because we feel an affinity for the scientific community.
  • “Believing in evolution is just a description about you. It’s not an account of how you reason.”
  • evolution actually happened. Biology is incomprehensible without it. There aren’t really two sides to all these issues. Climate change is happening. Vaccines really do save lives. Being right does matter—and the science tribe has a long track record of getting things right in the end. Modern society is built on things it got right.
  • Doubting science also has consequences.
  • In the climate debate the consequences of doubt are likely global and enduring. In the U.S., climate change skeptics have achieved their fundamental goal of halting legislative action to combat global warming.
  • “That line between science communication and advocacy is very hard to step back from,”
  • It’s their very detachment, what you might call the cold-bloodedness of science, that makes science the killer app.
  • that need to fit in is so strong that local values and local opinions are always trumping science.
  • not a sin to change your mind when the evidence demands it.
  • for the best scientists, the truth is more important than the tribe.
  • Students come away thinking of science as a collection of facts, not a method.
  • Shtulman’s research has shown that even many college students don’t really understand what evidence is.
  • “Everybody should be questioning,” says McNutt. “That’s a hallmark of a scientist. But then they should use the scientific method, or trust people using the scientific method, to decide which way they fall on those questions.”
  • science has made us the dominant organisms,
  • incredibly rapid change, and it’s scary sometimes. It’s not all progress.
  • But the notion of a vaccine-autism connection has been endorsed by celebrities and reinforced through the usual Internet filters. (Anti-vaccine activist and actress Jenny McCarthy famously said on the Oprah Winfrey Show, “The University of Google is where I got my degree from.”)
    • catbclark
       
      Power of celebrities, internet as a source
  • The scientific method doesn’t come naturally—but if you think about it, neither does democracy. For most of human history neither existed. We went around killing each other to get on a throne, praying to a rain god, and for better and much worse, doing things pretty much as our ancestors did.
  • We need to get a lot better at finding answers, because it’s certain the questions won’t be getting any simpler.
  • That the Earth is round has been known since antiquity—Columbus knew he wouldn’t sail off the edge of the world—but alternative geographies persisted even after circumnavigations had become common
  • We live in an age when all manner of scientific knowledge—from climate change to vaccinations—faces furious opposition. Some even have doubts about the moon landing.
  • Why Do Many Reasonable People Doubt Science?
  • science doubt itself has become a pop-culture meme.
  • Flat-Earthers held that the planet was centered on the North Pole and bounded by a wall of ice, with the sun, moon, and planets a few hundred miles above the surface. Science often demands that we discount our direct sensory experiences—such as seeing the sun cross the sky as if circling the Earth—in favor of theories that challenge our beliefs about our place in the universe.
  • Yet just because two things happened together doesn’t mean one caused the other, and just because events are clustered doesn’t mean they’re not still random.
  • Sometimes scientists fall short of the ideals of the scientific method. Especially in biomedical research, there’s a disturbing trend toward results that can’t be reproduced outside the lab that found them, a trend that has prompted a push for greater transparency about how experiments are conducted
  • “Science will find the truth,” Collins says. “It may get it wrong the first time and maybe the second time, but ultimately it will find the truth.” That provisional quality of science is another thing a lot of people have trouble with.
  • scientists love to debunk one another
  • they will continue to trump science, especially when there is no clear downside to ignoring science.”
Javier E

In Defense of Naïve Reading - NYTimes.com - 1 views

  • Clearly, poems and novels and paintings were not produced as objects for future academic study; there is no a priori reason to think that they could be suitable objects of  “research.” By and large they were produced for the pleasure and enlightenment of those who enjoyed them.
  • But just as clearly, the teaching of literature in universities ─ especially after the 19th-century research model of Humboldt University of Berlin was widely copied ─ needed a justification consistent with the aims of that academic setting
  • The main aim was research: the creating and accumulation and transmission of knowledge. And the main model was the natural science model of collaborative research: define problems, break them down into manageable parts, create sub-disciplines and sub-sub-disciplines for the study of these, train students for such research specialties and share everything. With that model, what literature and all the arts needed was something like a general “science of meaning” that could eventually fit that sort of aspiration. Texts or art works could be analyzed as exemplifying and so helping establish such a science. Results could be published in scholarly journals, disputed by others, consensus would eventually emerge and so on.
  • ...3 more annotations...
  • literature study in a university education requires some method of evaluation of whether the student has done well or poorly. Students’ papers must be graded and no faculty member wants to face the inevitable “that’s just your opinion” unarmed, as it were. Learning how to use a research methodology, providing evidence that one has understood and can apply such a method, is understandably an appealing pedagogy
  • Literature and the arts have a dimension unique in the academy, not shared by the objects studied, or “researched” by our scientific brethren. They invite or invoke, at a kind of “first level,” an aesthetic experience that is by its nature resistant to restatement in more formalized, theoretical or generalizing language. This response can certainly be enriched by knowledge of context and history, but the objects express a first-person or subjective view of human concerns that is falsified if wholly transposed to a more “sideways on” or third person view.
  • such works also can directly deliver a kind of practical knowledge and self-understanding not available from a third person or more general formulation of such knowledge. There is no reason to think that such knowledge — exemplified in what Aristotle said about the practically wise man (the phronimos) or in what Pascal meant by the difference between l’esprit géométrique and l’esprit de finesse — is any less knowledge because it cannot be so formalized or even taught as such.
Javier E

ROUGH TYPE | Nicholas Carr's blog - 0 views

  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • ...39 more annotations...
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Social skills and relationships seem to suffer as well.
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • ...72 more annotations...
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way.
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
Javier E

How a dose of MDMA transformed a white supremacist - BBC Future - 0 views

  • February 2020, Harriet de Wit, a professor of psychiatry and behavioural science at the University of Chicago, was running an experiment on whether the drug MDMA increased the pleasantness of social touch in healthy volunteers
  • The latest participant in the double-blind trial, a man named Brendan, had filled out a standard questionnaire at the end. Strangely, at the very bottom of the form, Brendan had written in bold letters: "This experience has helped me sort out a debilitating personal issue. Google my name. I now know what I need to do."
  • They googled Brendan's name, and up popped a disturbing revelation: until just a couple of months before, Brendan had been the leader of the US Midwest faction of Identity Evropa, a notorious white nationalist group rebranded in 2019 as the American Identity Movement. Two months earlier, activists at Chicago Antifascist Action had exposed Brendan's identity, and he had lost his job.
  • ...40 more annotations...
  • "Go ask him what he means by 'I now know what I need to do,'" she instructed Bremmer. "If it's a matter of him picking up an automatic rifle or something, we have to intervene."
  • As he clarified to Bremmer, love is what he had just realised he had to do. "Love is the most important thing," he told the baffled research assistant. "Nothing matters without it."
  • When de Wit recounted this story to me nearly two years after the fact, she still could hardly believe it. "Isn't that amazing?" she said. "It's what everyone says about this damn drug, that it makes people feel love. To think that a drug could change somebody's beliefs and thoughts without any expectations – it's mind-boggling."
  • Over the past few years, I've been investigating the scientific research and medical potential of MDMA for a book called "I Feel Love: MDMA and the Quest for Connection in a Fractured World". I learnt how this once-vilified drug is now re-emerging as a therapeutic agent – a role it previously played in the 1970s and 1980s, prior to its criminalisation
  • He attended the notorious "Unite the Right" rally in Charlottesville and quickly rose up the ranks of his organisation, first becoming the coordinator for Illinois and then the entire Midwest. He travelled to Europe and around the US to meet other white nationalist groups, with the ultimate goal of taking the movement mainstream
  • some researchers have begun to wonder if it could be an effective tool for pushing people who are already somehow primed to reconsider their ideology toward a new way of seeing things
  • While MDMA cannot fix societal-level drivers of prejudice and disconnection, on an individual basis it can make a difference. In certain cases, the drug may even be able to help people see through the fog of discrimination and fear that divides so many of us.
  • In December 2021 I paid Brendan a visit.
  • What I didn't expect was how ordinary the 31-year-old who answered the door would appear to be: blue plaid button-up shirt, neatly cropped hair, and a friendly smile.
  • Brendan grew up in an affluent Chicago suburb in an Irish Catholic family. He leaned liberal in high school but got sucked into white nationalism at the University of Illinois Urbana-Champaign, where he joined a fraternity mostly composed of conservative Republican men, began reading antisemitic conspiracy books, and fell down a rabbit hole of racist, sexist content online. Brendan was further emboldened by the populist rhetoric of Donald Trump during his presidential campaign. "His speech talking about Mexicans being rapists, the fixation on the border wall and deporting everyone, the Muslim ban – I didn't really get white nationalism until Trump started running for president," Brendan said.
  • If this comes to pass, MDMA – and other psychedelics-assisted therapy – could transform the field of mental health through widespread clinical use in the US and beyond, for addressing trauma and possibly other conditions as well, including substance use disorders, depression and eating disorders.
  • A group of anti-fascist activists published identifying information about him and more than 100 other people in Identity Evropa. He was immediately fired from his job and ostracised by his siblings and friends outside white nationalism.
  • When Brendan saw a Facebook ad in early 2020 for some sort of drug trial at the University of Chicago, he decided to apply just to have something to do and to earn a little money
  • At the time, Brendan was "still in the denial stage" following his identity becoming public, he said. He was racked with regret – not over his bigoted views, which he still held, but over the missteps that had landed him in this predicament.
  • About 30 minutes after taking the pill, he started to feel peculiar. "Wait a second – why am I doing this? Why am I thinking this way?" he began to wonder. "Why did I ever think it was okay to jeopardise relationships with just about everyone in my life?"
  • Just then, Bremmer came to collect Brendan to start the experiment. Brendan slid into an MRI, and Bremmer started tickling his forearm with a brush and asked him to rate how pleasant it felt. "I noticed it was making me happier – the experience of the touch," Brendan recalled. "I started progressively rating it higher and higher." As he relished in the pleasurable feeling, a single, powerful word popped into his mind: connection.
  • It suddenly seemed so obvious: connections with other people were all that mattered. "This is stuff you can't really put into words, but it was so profound," Brendan said. "I conceived of my relationships with other people not as distinct boundaries with distinct entities, but more as we-are-all-one."
  • I realised I'd been fixated on stuff that doesn't really matter, and is just so messed up, and that I'd been totally missing the point. I hadn't been soaking up the joy that life has to offer."
  • Brendan hired a diversity, equity, and inclusion consultant to advise him, enrolled in therapy, began meditating, and started working his way through a list of educational books. S still regularly communicates with Brendan and, for his part, thinks that Brendan is serious in his efforts to change
  • "I think he is trying to better himself and work on himself, and I do think that experience with MDMA had an impact on him. It's been a touchstone for growth, and over time, I think, the reflection on that experience has had a greater impact on him than necessarily the experience itself."
  • Brendan is still struggling, though, to make the connections with others that he craves. When I visited him, he'd just spent Thanksgiving alone
  • He also has not completely abandoned his bigoted ideology, and is not sure that will ever be possible. "There are moments when I have racist or antisemitic thoughts, definitely," he said. "But now I can recognise that those kinds of thought patterns are harming me more than anyone else."
  • it's not without precedent. In the 1980s, for example, an acquaintance of early MDMA-assisted therapy practitioner Requa Greer administered the drug to a pilot who had grown up in a racist home and had inherited those views. The pilot had always accepted his bigoted way of thinking as being a normal, accurate reflection of the way things were. MDMA, however, "gave him a clear vision that unexamined racism was both wrong and mean," Greer says
  • Encouraging stories of seemingly spontaneous change appear to be exceptions to the norm, however, and from a neurological point of view, this makes sense
  • Research shows that oxytocin – one of the key hormones that MDMA triggers neurons to release – drives a "tend and defend" response across the animal kingdom. The same oxytocin that causes a mother bear to nurture her newborn, for example, also fuels her rage when she perceives a threat to her cub. In people, oxytocin likewise strengthens caregiving tendencies toward liked members of a person's in-group and strangers perceived to belong to the same group, but it increases hostility toward individuals from disliked groups
  • In a 2010 study published in Science, for example, men who inhaled oxytocin were three times more likely to donate money to members of their team in an economic game, as well as more likely to harshly punish competing players for not donating enough. (Read more: "The surprising downsides of empathy.")
  • According to research published this week in Nature by Johns Hopkins University neuroscientist Gül Dölen, MDMA and other psychedelics – including psilocybin, LSD, ketamine and ibogaine – work therapeutically by reopening a critical period in the brain. Critical periods are finite windows of impressionability that typically occur in childhood, when our brains are more malleable and primed to learn new things
  • Dölen and her colleagues' findings likewise indicate that, without the proper set and setting, MDMA and other psychedelics probably do not reopen critical periods, which means they will not have a spontaneous, revelatory effect for ridding someone of bigoted beliefs.
  • In the West, plenty of members of right-wing authoritarian political movements, including neo-Nazi groups, also have track records of taking MDMA and other psychedelics
  • This suggests, researchers write, that psychedelics are nonspecific, "politically pluripotent" amplifiers of whatever is going on in somebody's head, with no particular directional leaning "on the axes of conservatism-liberalism or authoritarianism-egalitarianism."
  • That said, a growing body of scientific evidence indicates that the human capacity for compassion, kindness, empathy, gratitude, altruism, fairness, trust, and cooperation are core features of our natures
  • As Emory University primatologist Frans de Waal wrote, "Empathy is the one weapon in the human repertoire that can rid us of the curse of xenophobia."
  • Ginsberg also envisions using the drug in workshops aimed at eliminating racism, or as a means of bringing people together from opposite sides of shared cultural histories to help heal intergenerational trauma. "I think all psychedelics have a role to play, but I think MDMA has a particularly key role because you're both expanded and present, heart-open and really able to listen in a new way," Ginsberg says. "That's something really powerful."
  • "If you give MDMA to hard-core haters on each side of an issue, I don't think it'll do a lot of good,"
  • if you start with open-minded people on both sides, then I think it can work. You can improve communications and build empathy between groups, and help people be more capable of analysing the world from a more balanced perspective rather than from fear-based, anxiety-based distrust."
  • In 2021, Ginsberg and Doblin were coauthors on a study investigating the possibility of using ayahuasca – a plant-based psychedelic – in group contexts to bridge divides between Palestinians and Israelis, with positive findings
  • "I kind of have a fantasy that maybe as we get more reacquainted with psychedelics, there could be group-based experiences that build community resiliency and are intentionally oriented toward breaking down barriers between people, having people see things from other perspectives and detribalising our society,
  • "But that's not going to happen on its own. It would have to be intentional, and – if it happens – it would probably take multiple generations."
  • Based on his experience with extremism, Brendan agreed with expert takes that no drug, on its own, will spontaneously change the minds of white supremacists or end political conflict in the US
  • he does think that, with the right framing and mindset, MDMA could be useful for people who are already at least somewhat open to reconsidering their ideologies, just as it was for him. "It helped me see things in a different way that no amount of therapy or antiracist literature ever would have done," he said. "I really think it was a breakthrough experience."