
TOK Friends / Group items tagged "analysis"

Javier E

Wine-tasting: it's junk science | Life and style | The Observer - 0 views

  • Hodgson approached the organisers of the California State Fair wine competition, the oldest contest of its kind in North America, and proposed an experiment for their annual June tasting sessions. Each panel of four judges would be presented with their usual "flight" of samples to sniff, sip and slurp. But some wines would be presented to the panel three times, poured from the same bottle each time. The results would be compiled and analysed to see whether wine tasting really is scientific.
  • Results from the first four years of the experiment, published in the Journal of Wine Economics, showed that a typical judge's scores varied by plus or minus four points over the three blind tastings. A wine deemed a good 90 would be rated an acceptable 86 by the same judge minutes later, and then an excellent 94 (a toy simulation of this scoring noise appears after these notes).
  • ...9 more annotations...
  • Hodgson's findings have stunned the wine industry. Over the years he has shown again and again that even trained, professional palates are terrible at judging wine. "The results are disturbing," says Hodgson from the Fieldbrook Winery in Humboldt County, described by its owner as a rural paradise. "Only about 10% of judges are consistent, and those judges who were consistent one year were ordinary the next year. Chance has a great deal to do with the awards that wines win."
  • Why are ordinary drinkers and the experts so poor at tasting blind? Part of the answer lies in the sheer complexity of wine. For a drink made by fermenting fruit juice, wine is a remarkably sophisticated chemical cocktail. Dr Bryce Rankine, an Australian wine scientist, identified 27 distinct organic acids in wine, 23 varieties of alcohol in addition to the common ethanol, more than 80 esters and aldehydes, 16 sugars, plus a long list of assorted vitamins and minerals that wouldn't look out of place on the ingredients list of a cereal packet. There are even harmless traces of lead and arsenic that come from the soil.
  • In 2011 Professor Richard Wiseman, a psychologist (and former professional magician) at Hertfordshire University, invited 578 people to comment on a range of red and white wines, varying from £3.49 for a claret to £30 for champagne, all tasted blind. People could tell the difference between wines under £5 and those above £10 only 53% of the time for whites and only 47% of the time for reds. Overall, they would have been just as successful flipping a coin.
  • French academic Frédéric Brochet tested the effect of labels in 2001. He presented the same Bordeaux supérieur wine to 57 volunteers a week apart, in two different bottles – one for a table wine, the other for a grand cru. The tasters were fooled. When tasting the supposedly superior wine, their language was more positive – describing it as complex, balanced, long and woody. When the same wine was presented as plonk, the critics were more likely to use negatives such as weak, light and flat.
  • "People underestimate how clever the olfactory system is at detecting aromas and our brain is at interpreting them," says Hutchinson."The olfactory system has the complexity in terms of its protein receptors to detect all the different aromas, but the brain response isn't always up to it. But I'm a believer that everyone has the same equipment and it comes down to learning how to interpret it." Within eight tastings, most people can learn to detect and name a reasonable range of aromas in wine
  • People struggle with assessing wine because the brain's interpretation of aroma and bouquet is based on far more than the chemicals found in the drink. Temperature plays a big part. Volatiles in wine are more active when wine is warmer. Serve a New World chardonnay too cold and you'll only taste the overpowering oak. Serve a red too warm and the heady boozy qualities will be overpowering.
  • Colour affects our perceptions too. In 2001 Frédéric Brochet of the University of Bordeaux asked 54 wine experts to test two glasses of wine – one red, one white. Using the typical language of tasters, the panel described the red as "jammy" and commented on its crushed red fruit. The critics failed to spot that both wines were from the same bottle. The only difference was that one had been coloured red with a flavourless dye.
  • Other environmental factors play a role. A judge's palate is affected by what she or he had earlier, the time of day, their tiredness, their health – even the weather.
  • Robert Hodgson is determined to improve the quality of judging. He has developed a test that will determine whether a judge's assessment of a blind-tasted glass in a medal competition is better than chance. The research will be presented at a conference in Cape Town this year. But the early findings are not promising. "So far I've yet to find someone who passes," he says.
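
A rough way to see how much chance alone matters here: the minimal simulation below treats a judge's score for one wine as its "true" quality plus Gaussian noise with the four-point spread reported above. The 90-point true score and the 94/86 cut-offs are illustrative assumptions, not Hodgson's actual protocol.

```python
import random

random.seed(1)

TRUE_SCORE = 90   # assumed "real" quality of the wine
JUDGE_SD = 4      # spread reported for a typical judge (+/- 4 points)
TRIALS = 10_000

excellent, mediocre = 0, 0
for _ in range(TRIALS):
    # one judge scores the same wine three times
    scores = [random.gauss(TRUE_SCORE, JUDGE_SD) for _ in range(3)]
    if max(scores) >= 94:   # at least one "excellent" rating
        excellent += 1
    if min(scores) <= 86:   # at least one "merely acceptable" rating
        mediocre += 1

print(f"P(at least one rating >= 94): {excellent / TRIALS:.2f}")
print(f"P(at least one rating <= 86): {mediocre / TRIALS:.2f}")
```

With this much noise, the same bottle has a sizeable chance of looking excellent in one tasting and mediocre in another, which is Hodgson's point about medals.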
Javier E

McSweeney's Internet Tendency: It's Not You, It's Quantitative Cost-Benefit Analysis. - 0 views

  • Susan, we need to talk. I’ve been doing a lot of thinking lately. About us. I really like you, but ever since we met in that econ class in college I knew there was something missing from how I felt: quantitative reasoning. We can say we love each other all we want, but I just can’t trust it without the data. And after performing an in-depth cost-benefit analysis of our relationship, I just don’t think this is working out.
charlottedonoho

How can we best assess the neuropsychological effects of violent video game play? | Pet... - 0 views

  • Every time a research paper about violent video games makes it into the news, it feels like we’re in a time loop. Any claims that the study makes about the potential positive or (usually) negative effects of playing games tend to get over-egged to the point of ridiculousness.
  • At best, the measures of aggression that are used in such work are unstandardised; at worst, the field has been shown to be riddled with basic methodological and analytical flaws. These problems are further compounded by entrenched ideologies and a reluctance from some researchers to even talk to their 'adversaries', let alone discuss the potential for adversarial collaborations.
  • All of this means that we’re stuck at an impasse with violent video games research; it feels like we’re no more clued up on what the actual behavioural effects are now than, say, five or ten years ago.
  • ...4 more annotations...
  • In stage 1, they submit the introduction, methods, proposed analysis, and if necessary, pilot data. This manuscript then goes through the usual peer review process, and is assessed on criteria such as the soundness of the methods and analysis, and overall plausibility of the stated hypotheses.
  • Once researchers have passed through stage 1, they can then move on to data collection. In stage 2, they then submit the full manuscript – the introduction and agreed methods from stage 1, plus results and discussion sections. The results must include the outcome of the analyses agreed in stage 1, but the researchers are allowed to include additional analyses in a separate, ‘exploratory’ section (as long as they are justified).
  • Pre-registering scientific articles in this way helps to protect against a number of undesirable practices (such as p-hacking and HARKing) that can exaggerate statistical findings and make non-existent effects seem real. While this is a problem across psychology generally, it is a particularly extreme problem for violent video game research.
  • By outlining the intended methods and analysis protocols beforehand, Registered Reports protect against these problems, as the review process concentrates on the robustness of the proposed methods. And Registered Reports offer an additional advantage: because manuscripts are never accepted based on the outcome of the data analysis, the process is immune to researcher party lines. It doesn't matter which research 'camp' you are in; your data – and, just as importantly, your methods – will speak for themselves.
kushnerha

New Critique Sees Flaws in Landmark Analysis of Psychology Studies - The New York Times - 0 views

  • A landmark 2015 report that cast doubt on the results of dozens of published psychology studies has exposed deep divisions in the field, serving as a reality check for many working researchers but as an affront to others who continue to insist the original research was sound.
  • On Thursday, a group of four researchers publicly challenged the report, arguing that it was statistically flawed and, as a result, wrong. The 2015 report, called the Reproducibility Project, found that fewer than 40 studies in a sample of 100 psychology papers in leading journals held up when retested by an independent team. The new critique by the four researchers countered that when that team's statistical methodology was adjusted, the rate was closer to 100 percent. Neither the original analysis nor the critique found evidence of fraud or manipulation of data.
  • “That study got so much press, and the wrong conclusions were drawn from it,” said Timothy D. Wilson, a professor of psychology at the University of Virginia and an author of the new critique. “It’s a mistake to make generalizations from something that was done poorly, and this we think was done poorly.”
  • ...6 more annotations...
  • countered that the critique was highly biased: “They are making assumptions based on selectively interpreting data and ignoring data that’s antagonistic to their point of view.”
  • The challenge comes as the field of psychology is facing a generational change, with young researchers beginning to share their data and study designs before publication, to improve transparency. Still, the new critique is likely to feed an already lively debate about how best to conduct and evaluate so-called replication projects of studies. Such projects are underway in several fields, scientists on both sides of the debate said.
  • “On some level, I suppose it is appealing to think everything is fine and there is no reason to change the status quo,” said Sanjay Srivastava, a psychologist at the University of Oregon, who was not a member of either team. “But we know too much, from many other sources, to put too much credence in an analysis that supports that remarkable conclusion.”
  • One issue the critique raised was how faithfully the replication team had adhered to the original design of the 100 studies it retested. Small alterations in design can make the difference between whether a study replicates or not, scientists say.
  • Another issue that the critique raised had to do with statistical methods. When Dr. Nosek began his study, there was no agreed-upon protocol for crunching the numbers. He and his team settled on five measures
  • He said that the original replication paper and the critique use statistical approaches that are "predictably imperfect" for this kind of analysis. One way to think about the dispute, Dr. Simonsohn said, is that the original paper found that the glass was about 40 percent full, and the critique argues that it could be 100 percent full. In fact, he said in an email, "State-of-the-art techniques designed to evaluate replications say it is 40 percent full, 30 percent empty, and the remaining 30 percent could be full or empty, we can't tell till we get more data."
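
To see what the statistical tug-of-war is about, the sketch below computes a plain normal-approximation confidence interval for a replication rate. The counts are illustrative assumptions (the report's own figure varies with which of its five measures is used), not numbers taken from either paper.

```python
import math

# Illustrative counts only: the 2015 report found that fewer than 40
# of roughly 100 retested studies held up; exact tallies depend on
# which of the project's five measures you use.
successes, n = 36, 97
p_hat = successes / n

# Normal-approximation 95% confidence interval for a proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"replication rate ~ {p_hat:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
```

The interval is wide with only about 100 studies, which is one reason the two camps can read the same data so differently.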
sissij

Is Crime Forensics Flawed? | Big Think - 0 views

  • This is concerning because in recent years, time-honored methods such as fingerprinting, hair and fiber analysis, firearm analysis, and others have come under intense scrutiny.
  • Sessions plans to replace the commission with an internal body called the department crime task force, headed by a senior forensic adviser who will report to him directly. No one has been named for the position as of yet.
  • Since 1989, DNA evidence has exonerated 329 individuals. Bite-mark and hair analysis—part of what is known as pattern forensics—helped convict 25% of them.
  • I have long been interested in forensics. There was a late-night show called Forensic Files that I really liked to watch. Having learned in biology that human fingerprints are unique, I never imagined that there were serious flaws in forensics. As long as there is human involvement in this activity, it couldn't be one hundred percent reliable. --Sissi (4/21/2017)
Javier E

The Disease Detective - The New York Times - 1 views

  • What’s startling is how many mystery infections still exist today.
  • More than a third of acute respiratory illnesses are idiopathic; the same is true for up to 40 percent of gastrointestinal disorders and more than half the cases of encephalitis (swelling of the brain).
  • Up to 20 percent of cancers and a substantial portion of autoimmune diseases, including multiple sclerosis and rheumatoid arthritis, are thought to have viral triggers, but a vast majority of those have yet to be identified.
  • ...34 more annotations...
  • Globally, the numbers can be even worse, and the stakes often higher. “Say a person comes into the hospital in Sierra Leone with a fever and flulike symptoms,” DeRisi says. “After a few days, or a week, they die. What caused that illness? Most of the time, we never find out. Because if the cause isn’t something that we can culture and test for” — like hepatitis, or strep throat — “it basically just stays a mystery.”
  • It would be better, DeRisi says, to watch for rare cases of mystery illnesses in people, which often exist well before a pathogen gains traction and is able to spread.
  • Based on a retrospective analysis of blood samples, scientists now know that H.I.V. emerged nearly a dozen times over a century, starting in the 1920s, before it went global.
  • Zika was a relatively harmless illness before a single mutation, in 2013, gave the virus the ability to enter and damage brain cells.
  • “The beauty of this approach” — running blood samples from people hospitalized all over the world through his system, known as IDseq — “is that it works even for things that we’ve never seen before, or things that we might think we’ve seen but which are actually something new.”
  • In this scenario, an undiscovered or completely new virus won’t trigger a match but will instead be flagged. (Even in those cases, the mystery pathogen will usually belong to a known virus family: coronaviruses, for instance, or filoviruses that cause hemorrhagic fevers like Ebola and Marburg.)
  • And because different types of bacteria require specific conditions in order to grow, you also need some idea of what you’re looking for in order to find it.
  • The same is true of genomic sequencing, which relies on “primers” designed to match different combinations of nucleotides (the building blocks of DNA and RNA).
  • Even looking at a slide under a microscope requires staining, which makes organisms easier to see — but the stains used to identify bacteria and parasites, for instance, aren’t the same.
  • The practice that DeRisi helped pioneer to skirt this problem is known as metagenomic sequencing.
  • Unlike ordinary genomic sequencing, which tries to spell out the purified DNA of a single, known organism, metagenomic sequencing can be applied to a messy sample of just about anything — blood, mud, seawater, snot — which will often contain dozens or hundreds of different organisms, all unknown, and each with its own DNA. In order to read all the fragmented genetic material, metagenomic sequencing uses sophisticated software to stitch the pieces together by matching overlapping segments.
  • The assembled genomes are then compared against a vast database of all known genomic sequences — maintained by the government-run National Center for Biotechnology Information — making it possible for researchers to identify everything in the mix (a toy sketch of both steps appears after these notes).
  • Traditionally, the way that scientists have identified organisms in a sample is to culture them: Isolate a particular bacterium (or virus or parasite or fungus); grow it in a petri dish; and then examine the result under a microscope, or use genomic sequencing, to understand just what it is. But because less than 2 percent of bacteria — and even fewer viruses — can be grown in a lab, the process often reveals only a tiny fraction of what’s actually there. It’s a bit like planting 100 different kinds of seeds that you found in an old jar. One or two of those will germinate and produce a plant, but there’s no way to know what the rest might have grown into.
  • Such studies have revealed just how vast the microbial world is, and how little we know about it
  • “The selling point for researchers is: ‘Look, this technology lets you investigate what’s happening in your clinic, whether it’s kids with meningitis or something else,’” DeRisi said. “We’re not telling you what to do with it. But it’s also true that if we have enough people using this, spread out all around the world, then it does become a global network for detecting emerging pandemics.”
  • Metagenomic sequencing is especially good at what scientists call “environmental sampling”: identifying, say, every type of bacteria present in the gut microbiome, or in a teaspoon of seawater.
  • After the Biohub opened in 2016, one of DeRisi’s goals was to turn metagenomics from a rarefied technology used by a handful of elite universities into something that researchers around the world could benefit from
  • metagenomics requires enormous amounts of computing power, putting it out of reach of all but the most well-funded research labs. The tool DeRisi created, IDseq, made it possible for researchers anywhere in the world to process samples through the use of a small, off-the-shelf sequencer, much like the one DeRisi had shown me in his lab, and then upload the results to the cloud for analysis.
  • he’s the first to make the process so accessible, even in countries where lab supplies and training are scarce. DeRisi and his team tested the chemicals used to prepare DNA for sequencing and determined that using as little as half the recommended amount often worked fine. They also 3-D print some of the labs’ tools and replacement parts, and offer ongoing training and tech support
  • The metagenomic analysis itself — normally the most expensive part of the process — is provided free.
  • But DeRisi’s main innovation has been in streamlining and simplifying the extraordinarily complex computational side of metagenomics
  • IDseq is also fast, capable of doing analyses in hours that would take other systems weeks.
  • “What IDseq really did was to marry wet-lab work — accumulating samples, processing them, running them through a sequencer — with the bioinformatic analysis,”
  • “Without that, what happens in a lot of places is that the researcher will be like, ‘OK, I collected the samples!’ But because they can’t analyze them, the samples end up in the freezer. The information just gets stuck there.”
  • Meningitis itself isn’t a disease, just a description meaning that the tissues around the brain and spinal cord have become inflamed. In the United States, bacterial infections can cause meningitis, as can enteroviruses, mumps and herpes simplex. But a high proportion of cases have, as doctors say, no known etiology: No one knows why the patient’s brain and spinal tissues are swelling.
  • When Saha and her team ran the mystery meningitis samples through IDseq, though, the result was surprising. Rather than revealing a bacterial cause, as expected, a third of the samples showed signs of the chikungunya virus — specifically, a neuroinvasive strain that was thought to be extremely rare. “At first we thought, It cannot be true!” Saha recalls. “But the moment Joe and I realized it was chikungunya, I went back and looked at the other 200 samples that we had collected around the same time. And we found the virus in some of those samples as well.”
  • Until recently, chikungunya was a comparatively rare disease, present mostly in parts of Central and East Africa. “Then it just exploded through the Caribbean and Africa and across Southeast Asia into India and Bangladesh,” DeRisi told me. In 2011, there were zero cases of chikungunya reported in Latin America. By 2014, there were a million.
  • Chikungunya is a mosquito-borne virus, but when DeRisi and Saha looked at the results from IDseq, they also saw something else: a primate tetraparvovirus. Primate tetraparvoviruses are almost unknown in humans, and have been found only in certain regions. Even now, DeRisi is careful to note, it’s not clear what effect the virus has on people. “Maybe it’s dangerous, maybe it isn’t,” DeRisi says. “But I’ll tell you what: It’s now on my radar.”
  • it reveals a landscape of potentially dangerous viruses that we would otherwise never find out about. “What we’ve been missing is that there’s an entire universe of pathogens out there that are causing disease in humans,” Imam notes, “ones that we often don’t even know exist.”
  • “The plan was, Let’s let researchers around the world propose studies, and we’ll choose 10 of them to start,” DeRisi recalls. “We thought we’d get, like, a couple dozen proposals, and instead we got 350.”
  • One study found more than 1,000 different kinds of viruses in a tiny amount of human stool; another found a million in a couple of pounds of marine sediment. And most were organisms that nobody had seen before.
  • “When you draw blood from someone who has a fever in Ghana, you really don’t know very much about what would normally be in their blood without fever — let alone about other kinds of contaminants in the environment. So how do you interpret the relevance of all the things you’re seeing?”
  • Such criticisms have led some to say that metagenomics simply isn’t suited to the infrastructure of developing countries. Along with the problem of contamination, many labs struggle to get the chemical reagents needed for sequencing, either because of the cost or because of shipping and customs holdups
  • we’re less likely to be caught off-guard. “With Ebola, there’s always an issue: Where’s the virus hiding before it breaks out?” DeRisi explains. “But also, once we start sampling people who are hospitalized more widely — meaning not just people in Northern California or Boston, but in Uganda, and Sierra Leone, and Indonesia — the chance of disastrous surprises will go down. We’ll start seeing what’s hidden.”
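
The two moves the article describes — stitching overlapping reads into a longer sequence, then comparing the result against a database of known genomes — can be sketched in miniature. This is a toy illustration only: real pipelines such as IDseq use dedicated assemblers and the NCBI database, and every read and "reference genome" below is invented.

```python
# Toy version of metagenomic assembly and lookup. Real tools use far
# more sophisticated algorithms and vastly larger databases.

def merge(a: str, b: str, min_overlap: int = 4):
    """Append b to a if a suffix of a matches a prefix of b."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

def assemble(reads):
    """Greedy assembly: repeatedly merge the first overlapping pair."""
    reads = list(reads)
    merged = True
    while merged and len(reads) > 1:
        merged = False
        for i in range(len(reads)):
            for j in range(len(reads)):
                if i != j and (m := merge(reads[i], reads[j])):
                    reads = [r for k, r in enumerate(reads) if k not in (i, j)]
                    reads.append(m)
                    merged = True
                    break
            if merged:
                break
    return max(reads, key=len)

# Invented reference "database" and reads, for illustration only.
REFERENCES = {
    "toy-virus-A": "ATGGCCATTGTAATGGGCCGCTG",
    "toy-virus-B": "TTGACCGTAGCCAGGCATTAACG",
}
reads = ["ATGGCCATTGTA", "ATTGTAATGGGC", "TGGGCCGCTG"]

contig = assemble(reads)
for name, genome in REFERENCES.items():
    if contig in genome:
        print(f"assembled {len(contig)}-base contig matches {name}")
```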
peterconnelly

Your Bosses Could Have a File on You, and They May Misinterpret It - The New York Times - 0 views

  • The company you work for may want to know. Some corporate employers fear that employees could leak information, allow access to confidential files, contact clients inappropriately or, in the extreme, bring a gun to the office.
  • at times using behavioral science tools like psychology.
  • But in spite of worries that workers might be, reasonably, put off by a feeling that technology and surveillance are invading yet another sphere of their lives, employers want to know which clock-punchers may harm their organizations.
  • ...13 more annotations...
  • “There is so much technology out there that employers are experimenting with or investing in,” said Edgar Ndjatou
  • Software can watch for suspicious computer behavior or it can dig into an employee's credit reports, arrest records and marital-status updates. It can check to see if Cheryl is downloading bulk cloud data or run a sentiment analysis on Tom's emails to see if he's getting testier over time. Analysis of this data, say the companies that monitor insider risk, can point to potential problems in the workplace (a crude stand-in for this kind of sentiment-trend analysis is sketched after these notes).
  • Organizations that produce monitoring software and behavioral analysis for the feds also may offer conceptually similar tools to private companies, either independently or packaged with broader cybersecurity tools.
  • But corporations are moving forward with their own software-enhanced surveillance. While private-sector workers may not be subjected to the rigors of a 136-page clearance form, private companies help build these “continuous vetting” technologies for the federal government, said Lindy Kyzer of ClearanceJobs. Then, she adds, “Any solution would have private-sector applications.”
  • “Can we build a system that checks on somebody and keeps checking on them and is aware of that person’s disposition as they exist in the legal systems and the public record systems on a continuous basis?” said Chris Grijalva
  • But the interest in anticipating insider threats in the private sector raises ethical questions about what level of monitoring nongovernmental employees should be subject to.
  • “People are starting to understand that the insider threat is a business problem and should be handled accordingly,” said Mr. Grijalva.
  • The linguistic software package they developed, called SCOUT, uses psycholinguistic analysis to seek flags that, among other things, indicate feelings of disgruntlement, like victimization, anger and blame.
  • “The language changes in subtle ways that you’re not aware of,” Mr. Stroz said.
  • There’s not enough information, in other words, to construct algorithms about trustworthiness from the ground up. And that would hold in either the private or the public sector.
  • Even if all that dystopian data did exist, it would still be tricky to draw individual — rather than simply aggregate — conclusions about which behavioral indicators potentially presaged ill actions.
  • “Depending too heavily on personal factors identified using software solutions is a mistake, as we are unable to determine how much they influence future likelihood of engaging in malicious behaviors,” Dr. Cunningham said.
  • “I have focused very heavily on identifying indicators that you can actually measure, versus those that require a lot of interpretation,” Dr. Cunningham said. “Especially those indicators that require interpretation by expert psychologists or expert so-and-sos. Because I find that it’s a little bit too dangerous, and I don’t know that it’s always ethical.”
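
To make the email-trend idea concrete, the toy sketch below scores each message against a tiny hand-picked word list and fits a slope over time. It is a crude stand-in, not SCOUT (which is proprietary psycholinguistic software); the word list and messages are invented.

```python
# Naive "disgruntlement" trend: count flagged words per email, then
# fit a least-squares slope over time. Purely illustrative.
NEGATIVE = {"unfair", "blame", "ignored", "angry", "fault", "victim"}

emails = [  # (day, text) pairs, oldest first; all invented
    (1, "Thanks for the update, looks good to me"),
    (30, "I was ignored again in the planning meeting"),
    (60, "This is unfair and it is not my fault"),
    (90, "No one takes the blame except me, I am angry"),
]

def score(text):
    """Number of flagged words in one message."""
    return sum(w.strip(",.").lower() in NEGATIVE for w in text.split())

days = [d for d, _ in emails]
scores = [score(t) for _, t in emails]

# Least-squares slope: flagged words per day
n = len(days)
mx, my = sum(days) / n, sum(scores) / n
slope = sum((x - mx) * (y - my) for x, y in zip(days, scores)) / sum(
    (x - mx) ** 2 for x in days)
print(f"trend: {slope * 30:+.2f} flagged words per month")
```

A real system would need far better linguistics than a word list, which is exactly why the experts quoted above worry about over-interpreting such signals.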
karenmcgregor

Unraveling the Mysteries of Wireshark: A Beginner's Guide - 2 views

In the vast realm of computer networking, understanding the flow of data packets is crucial. Whether you're a seasoned network administrator or a curious enthusiast, the tool known as Wireshark hol...

education student university assignment help packet tracer

started by karenmcgregor on 14 Mar 24 no follow-up yet
Javier E

McSweeney's Internet Tendency: Nate Silver Offers Up a Statistical Analysis of Your Fai... - 1 views

  • Nate Silver Offers Up a Statistical Analysis of Your Failing Relationship.
  • Ultimately, please don’t give me too much credit for this accumulated data. Although 0.0 percent of your mutual friends were willing to say anything, 93.9 percent of them saw this coming from the start.
Javier E

Nate Silver, Artist of Uncertainty - 0 views

  • In 2008, Nate Silver correctly predicted the results of all 35 Senate races and the presidential results in 49 out of 50 states. Since then, his website, fivethirtyeight.com (now central to The New York Times’s political coverage), has become an essential source of rigorous, objective analysis of voter surveys to predict the Electoral College outcome of presidential campaigns. 
  • Political junkies, activists, strategists, and journalists will gain a deeper and more sobering sense of Silver’s methods in The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t (Penguin Press). A brilliant analysis of forecasting in finance, geology, politics, sports, weather, and other domains, Silver’s book is also an original fusion of cognitive psychology and modern statistical theory.
  • Its most important message is that the first step toward improving our predictions is learning how to live with uncertainty.
  • ...7 more annotations...
  • he blends the best of modern statistical analysis with research on cognitive biases pioneered by Princeton psychologist and Nobel laureate in economics Daniel Kahneman and the late Stanford psychologist Amos Tversky.
  • Silver’s background in sports and poker turns out to be invaluable. Successful analysts in gambling and sports are different from fans and partisans—far more aware that “sure things” are likely to be illusions,
  • The second step is starting to understand why it is that big data, super computers, and mathematical sophistication haven’t made us better at separating signals (information with true predictive value) from noise (misleading information). 
  • One of the biggest problems we have in separating signal from noise is that when we look too hard for certainty that isn’t there, we often end up attracted to noise, either because it is more prominent or because it confirms what we would like to believe.
  • In discipline after discipline, Silver shows in his book that the average of all independent forecasts is 15 to 20 percent more accurate than even the best single forecast.
  • Silver has taken the next major step: constantly incorporating both state polls and national polls into Bayesian models that also incorporate economic data.
  • Silver explains why we will be misled if we only consider significance tests—i.e., statements that the margin of error for the results is, for example, plus or minus four points, meaning there is one chance in 20 that the percentages reported are off by more than four. Calculations like these assume the only source of error is sampling error—the irreducible error—while ignoring errors attributable to house effects, such as assumptions about the proportion of cell-phone users, part of the complex set of judgments every pollster must make about who will actually vote. In other words, such an approach ignores context in order to avoid having to justify and defend judgments (the sampling-error arithmetic is checked in the sketch after these notes).
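
The "plus or minus four points" figure is easy to check against the standard sampling-error formula; the sketch below covers only that irreducible piece, assuming the worst case p = 0.5 at 95% confidence. House effects add error on top of this, which is Silver's point.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% sampling margin of error for a proportion from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A +/- 4 point margin implies roughly 600 respondents:
for n in (400, 600, 1000):
    print(f"n={n}: +/- {100 * margin_of_error(n):.1f} points")
```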
Javier E

Who Needs Math? - The Monkey Cage - 1 views

  • by Larry Bartels on April 9, 2013
  • “When something new is encountered, the follow-up steps usually require mathematical and statistical methods to move the analysis forward.” At that point, he suggests finding a collaborator
  • But technical expertise in itself is of little avail: “The annals of theoretical biology are clogged with mathematical models that either can be safely ignored or, when tested, fail. Possibly no more than 10% have any lasting value. Only those linked solidly to knowledge of real living systems have much chance of being used.”
  • ...5 more annotations...
  • If you’re going to talk about economics at all, you need some sense of how magnitudes play off against each other, which is the only way to have a chance of seeing how the pieces fit together.
  • [M]aybe the thing to say is that higher math isn’t usually essential; arithmetic is.
  • My own work has become rather less mathematical over the course of my career. When people ask why, I usually say that as I have come to learn more about politics, the “sophisticated” wrinkles have seemed to distract more than they added.
  • “Seeing how the pieces fit together” requires “some sense of how magnitudes play off against each other.” But, paradoxically, “higher math” can get in the way of “mathematical intuition” about magnitudes. Formal theory is often couched in purely qualitative terms: under such and such conditions, more X should produce more Y. And quantitative analysis—which ought to focus squarely on magnitudes—is less likely to do so the more it is justified and valued on technical rather than substantive grounds.
  • I recently spent some time doing an informal meta-analysis of studies of the impact of campaign advertising. At the heart of that literature is a pretty simple question: how much does one more ad contribute to the sponsoring candidate’s vote share? Alas, most of the studies I reviewed provided no intelligible answer to that question; and the correlation between methodological “sophistication” (logarithmic transformations, multinomial logits, fixed effects, distributed lag models) and intelligibility was decidedly negative. The authors of these studies rarely seemed to know or care what their results implied about the magnitude of the effect, as long as those results could be billed as “statistically significant.” (The sketch after these notes shows the kind of magnitude calculation such studies omit.)
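
As one illustration of the magnitude question being raised, the sketch below converts a made-up but "statistically significant" logit coefficient into the percentage-point quantity the studies fail to report. The coefficient and baseline are invented; no reviewed study is being reproduced.

```python
import math

beta = 0.02    # invented logit coefficient: log-odds change per extra ad
base_p = 0.50  # assumed baseline probability of voting for the sponsor

# Approximate marginal effect of one more ad: beta * p * (1 - p)
print(f"approx: {100 * beta * base_p * (1 - base_p):.2f} points per ad")

# Exact version via the logistic function
logit = math.log(base_p / (1 - base_p)) + beta
p_new = 1 / (1 + math.exp(-logit))
print(f"exact:  {100 * (p_new - base_p):.2f} points per ad")
```

Either way the answer is about half a percentage point per ad under these assumptions, the kind of plain statement of magnitude the passage finds missing.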
Javier E

Eric A. Posner Reviews Jim Manzi's "Uncontrolled" | The New Republic - 0 views

  • Most urgent questions of public policy turn on empirical imponderables, and so policymakers fall back on ideological predispositions or muddle through. Is there a better way?
  • The gold standard for empirical research is the randomized field trial (RFT).
  • The RFT works better than most other types of empirical investigation. Most of us use anecdotes or common sense empiricism to make inferences about the future, but psychological biases interfere with the reliability of these methods
  • ...15 more annotations...
  • Serious empiricists frequently use regression analysis.
  • Regression analysis is inferior to RFT because of the difficulty of ruling out confounding factors (for example, that a gene jointly causes baldness and a preference for tight hats) and of establishing causation (the sketch at the end of these notes simulates exactly this confounding problem).
  • RFT has its limitations as well. It is enormously expensive because you must (usually) pay a large number of people to participate in an experiment, though one can obtain a discount if one uses prisoners, especially those in a developing country. In addition, one cannot always generalize from RFTs.
  • academic research proceeds in fits and starts, using RFT when it can, but otherwise relying on regression analysis and similar tools, including qualitative case studies,
  • businesses also use RFT whenever they can. A business such as Wal-Mart, with thousands of stores, might try out some innovation like a new display in a random selection of stores, using the remaining stores as a control group
  • Manzi argues that the RFT—or more precisely, the overall approach to empirical investigation that the RFT exemplifies—provides a way of thinking about public policy.
  • the universe is shaky even where, as in the case of physics, “hard science” plays the dominant role. The scientific method cannot establish truths; it can only falsify hypotheses. The hypotheses come from our daily experience, so even when science prunes away intuitions that fail the experimental method, we can never be sure that the theories that remain standing reflect the truth or just haven’t been subject to the right experiment. And even within its domain, the experimental method is not foolproof. When an experiment contradicts received wisdom, it is an open question whether the wisdom is wrong or the experiment was improperly performed.
  • The book is less interested in the RFT than in the limits of empirical knowledge. Given these limits, what attitude should we take toward government?
  • Much of scientific knowledge turns out to depend on norms of scientific behavior, good faith, convention, and other phenomena that in other contexts tend to provide an unreliable basis for knowledge.
  • Under this view of the world, one might be attracted to the cautious conservatism associated with Edmund Burke, the view that we should seek knowledge in traditional norms and customs, which have stood the test of time and presumably some sort of Darwinian competition—a human being is foolish, the species is wise. There are hints of this worldview in Manzi’s book, though he does not explicitly endorse it. He argues, for example, that we should approach social problems with a bias for the status quo; those who seek to change it carry the burden of persuasion. Once a problem is identified, we should try out our ideas on a small scale before implementing them across society
  • Pursuing the theme of federalism, Manzi argues that the federal government should institutionalize policy waivers, so states can opt out from national programs and pursue their own initiatives. A state should be allowed to opt out of federal penalties for drug crimes, for example.
  • It is one thing to say, as he does, that federalism is useful because we can learn as states experiment with different policies. But Manzi takes away much of the force of this observation when he observes, as he must, that the scale of many of our most urgent problems—security, the economy—is at the national level, so policymaking in response to these problems cannot be left to the states. He also worries about social cohesion, which must be maintained at a national level even while states busily experiment. Presumably, this implies national policy of some sort
  • Manzi’s commitment to federalism and his technocratic approach to policy, which relies so heavily on RFT, sit uneasily together. The RFT is a form of planning: the experimenter must design the RFT and then execute it by recruiting subjects, paying them, and measuring and controlling their behavior. By contrast, experimentation by states is not controlled: the critical element of the RFT—randomization—is absent.
  • The right way to go would be for the national government to conduct experiments by implementing policies in different states (or counties or other local units) by randomizing—that is, by ordering some states to be “treatment” states and other states to be “control” states,
  • Manzi’s reasoning reflects the top-down approach to social policy that he is otherwise skeptical of—although, to be sure, he is willing to subject his proposals to RFTs.
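
The tight-hats example can be simulated directly: a hidden gene drives both baldness and hat preference, so simply comparing hat-wearers with non-wearers finds an "effect" that random assignment erases. All rates below are invented for illustration.

```python
import random

random.seed(0)
N = 10_000

BALD_RATE = {True: 0.6, False: 0.2}  # baldness probability, by gene
HAT_RATE = {True: 0.7, False: 0.3}   # tight-hat preference, by gene

genes = [random.random() < 0.5 for _ in range(N)]

def observational(gene):
    """Hats chosen by preference: the gene confounds the comparison."""
    return random.random() < HAT_RATE[gene], random.random() < BALD_RATE[gene]

def randomized(gene):
    """RFT world: hats assigned by coin flip, breaking the link."""
    return random.random() < 0.5, random.random() < BALD_RATE[gene]

for name, world in (("observational", observational), ("randomized", randomized)):
    rows = [world(g) for g in genes]
    bald_hat = [b for h, b in rows if h]
    bald_nohat = [b for h, b in rows if not h]
    print(f"{name}: P(bald | tight hat) = {sum(bald_hat) / len(bald_hat):.2f}, "
          f"P(bald | no hat) = {sum(bald_nohat) / len(bald_nohat):.2f}")
```

In the observational world the gap is large; under randomization it vanishes, which is the whole case for the RFT.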
Javier E

New Thinking and Old Books Revisited - NYTimes.com - 0 views

  • Mark Thoma’s classic crack — “I’ve learned that new economic thinking means reading old books” — has a serious point to it. We’ve had a couple of centuries of economic thought at this point, and quite a few smart people doing the thinking. It’s possible to come up with truly new concepts and approaches, but it takes a lot more than good intentions and casual observation to get there.
  • There is definitely a faction within economics that considers it taboo to introduce anything into its analysis that isn’t grounded in rational behavior and market equilibrium
  • what I do, and what everyone I’ve just named plus many others does, is a more modest, more eclectic form of analysis. You use maximization and equilibrium where it seems reasonably consistent with reality, because of its clarifying power, but you introduce ad hoc deviations where experience seems to demand them — downward rigidity of wages, balance-sheet constraints, bubbles (which are hard to predict, but you can say a lot about their consequences).
  • ...4 more annotations...
  • You may say that what we need is reconstruction from the ground up — an economics with no vestige of equilibrium analysis. Well, show me some results. As it happens, the hybrid, eclectic approach I’ve just described has done pretty well in this crisis, so you had better show me some really superior results before it gets thrown out the window.
  • if you think you’ve found a fundamental logical flaw in one of our workhorse economic models, the odds are very strong that you’ve just made a mistake.
  • it’s quite clear that the teaching of macroeconomics has gone seriously astray. As Saraceno says, the simple models that have proved so useful since 2008 are by and large taught only at the undergrad level — they’re treated as too simple, too ad hoc, whatever, to make it into the grad courses even at places that aren’t very ideological.
  • to temper your modeling with a sense of realism you need to know something about reality — and not just the statistical properties of U.S. time series since 1947. Economic history — global economic history — should be a core part of the curriculum. Nobody should be making pronouncements on macro without knowing a fair bit about the collapse of the gold standard in the 1930s, what actually happened in the stagflation of the 1970s, the Asian financial crisis of the 90s, and, looking forward, the euro crisis.
Javier E

Look At Me by Patricia Snow | Articles | First Things - 0 views

  • Maurice stumbles upon what is still the gold standard for the treatment of infantile autism: an intensive course of behavioral therapy called applied behavioral analysis that was developed by psychologist O. Ivar Lovaas at UCLA in the 1970s
  • in a little over a year’s time she recovers her daughter to the point that she is indistinguishable from her peers.
  • Let Me Hear Your Voice is not a particularly religious or pious work. It is not the story of a miracle or a faith healing
  • ...54 more annotations...
  • Maurice discloses her Catholicism, and the reader is aware that prayer undergirds the therapy, but the book is about the therapy, not the prayer. Specifically, it is about the importance of choosing methods of treatment that are supported by scientific data. Applied behavioral analysis is all about data: its daily collection and interpretation. The method is empirical, hard-headed, and results-oriented.
  • on a deeper level, the book is profoundly religious, more religious perhaps than its author intended. In this reading of the book, autism is not only a developmental disorder afflicting particular individuals, but a metaphor for the spiritual condition of fallen man.
  • Maurice’s autistic daughter is indifferent to her mother
  • In this reading of the book, the mother is God, watching a child of his wander away from him into darkness: a heartbroken but also a determined God, determined at any cost to bring the child back
  • the mother doesn’t turn back, concedes nothing to the condition that has overtaken her daughter. There is no political correctness in Maurice’s attitude to autism; no nod to “neurodiversity.” Like the God in Donne’s sonnet, “Batter my heart, three-personed God,” she storms the walls of her daughter’s condition
  • Like God, she sets her sights high, commits both herself and her child to a demanding, sometimes painful therapy (life!), and receives back in the end a fully alive, loving, talking, and laughing child
  • the reader realizes that for God, the harrowing drama of recovery is never a singular, or even a twice-told tale, but a perennial one. Every child of his, every child of Adam and Eve, wanders away from him into darkness
  • we have an epidemic of autism, or “autism spectrum disorder,” which includes classic autism (Maurice’s children’s diagnosis); atypical autism, which exhibits some but not all of the defects of autism; and Asperger’s syndrome, which is much more common in boys than in girls and is characterized by average or above average language skills but impaired social skills.
  • At the same time, all around us, we have an epidemic of something else. On the street and in the office, at the dinner table and on a remote hiking trail, in line at the deli and pushing a stroller through the park, people go about their business bent over a small glowing screen, as if praying.
  • This latter epidemic, or experiment, has been going on long enough that people are beginning to worry about its effects.
  • for a comprehensive survey of the emerging situation on the ground, the interested reader might look at Sherry Turkle’s recent book, Reclaiming Conversation: The Power of Talk in a Digital Age.
  • she also describes in exhaustive, chilling detail the mostly horrifying effects recent technology has had on families and workplaces, educational institutions, friendships and romance.
  • many of the promises of technology have not only not been realized, they have backfired. If technology promised greater connection, it has delivered greater alienation. If it promised greater cohesion, it has led to greater fragmentation, both on a communal and individual level.
  • If thinking that the grass is always greener somewhere else used to be a marker of human foolishness and a temptation to be resisted, today it is simply a possibility to be checked out. The new phones, especially, turn out to be portable Pied Pipers, irresistibly pulling people away from the people in front of them and the tasks at hand.
  • all it takes is a single phone on a table, even if that phone is turned off, for the conversations in the room to fade in number, duration, and emotional depth.
  • an infinitely malleable screen isn’t an invitation to stability, but to restlessness
  • Current media, and the fear of missing out that they foster (a motivator now so common it has its own acronym, FOMO), drive lives of continual interruption and distraction, of virtual rather than real relationships, and of “little” rather than “big” talk
  • if you may be interrupted at any time, it makes sense, as a student explains to Turkle, to “keep things light.”
  • we are reaping deficits in emotional intelligence and empathy; loneliness, but also fears of unrehearsed conversations and intimacy; difficulties forming attachments but also difficulties tolerating solitude and boredom
  • consider the testimony of the faculty at a reputable middle school where Turkle is called in as a consultant
  • The teachers tell Turkle that their students don’t make eye contact or read body language, have trouble listening, and don’t seem interested in each other, all markers of autism spectrum disorder
  • Like much younger children, they engage in parallel play, usually on their phones. Like autistic savants, they can call up endless information on their phones, but have no larger context or overarching narrative in which to situate it
  • Students are so caught up in their phones, one teacher says, “they don’t know how to pay attention to class or to themselves or to another person or to look in each other’s eyes and see what is going on.”
  • “It is as though they all have some signs of being on an Asperger’s spectrum. But that’s impossible. We are talking about a schoolwide problem.”
  • Can technology cause Asperger’s?
  • “It is not necessary to settle this debate to state the obvious. If we don’t look at our children and engage them in conversation, it is not surprising if they grow up awkward and withdrawn.”
  • In the protocols developed by Ivar Lovaas for treating autism spectrum disorder, every discrete trial in the therapy, every drill, every interaction with the child, however seemingly innocuous, is prefaced by this clear command: “Look at me!”
  • If absence of relationship is a defining feature of autism, connecting with the child is both the means and the whole goal of the therapy. Applied behavioral analysis does not concern itself with when exactly, how, or why a child becomes autistic, but tries instead to correct, do over, and even perhaps actually rewire what went wrong, by going back to the beginning
  • Eye contact—which we know is essential for brain development, emotional stability, and social fluency—is the indispensable prerequisite of the therapy, the sine qua non of everything that happens.
  • There are no shortcuts to this method; no medications or apps to speed things up; no machines that can do the work for us. This is work that only human beings can do
  • it must not only be started early and be sufficiently intensive, but it must also be carried out in large part by parents themselves. Parents must be trained and involved, so that the treatment carries over into the home and continues for most of the child’s waking hours.
  • there are foundational relationships that are templates for all other relationships, and for learning itself.
  • Maurice’s book, in other words, is not fundamentally the story of a child acquiring skills, though she acquires them perforce. It is the story of the restoration of a child’s relationship with her parents
  • it is also impossible to overstate the time and commitment that were required to bring it about, especially today, when we have so little time, and such a faltering, diminished capacity for sustained engagement with small children
  • The very qualities that such engagement requires, whether our children are sick or well, are the same qualities being bred out of us by technologies that condition us to crave stimulation and distraction, and by a culture that, through a perverse alchemy, has changed what was supposed to be the freedom to work anywhere into an obligation to work everywhere.
  • In this world of total work (the phrase is Josef Pieper’s), the work of helping another person become fully human may be work that is passing beyond our reach, as our priorities, and the technologies that enable and reinforce them, steadily unfit us for the work of raising our own young.
  • in Turkle’s book, as often as not, it is young people who are distressed because their parents are unreachable. Some of the most painful testimony in Reclaiming Conversation is the testimony of teenagers who hope to do things differently when they have children, who hope someday to learn to have a real conversation, and so on.
  • it was an older generation that first fell under technology’s spell. At the middle school Turkle visits, as at many other schools across the country, it is the grown-ups who decide to give every child a computer and deliver all course content electronically, meaning that they require their students to work from the very medium that distracts them, a decision the grown-ups are unwilling to reverse, even as they lament its consequences.
  • we have approached what Turkle calls the robotic moment, when we will have made ourselves into the kind of people who are ready for what robots have to offer. When people give each other less, machines seem less inhuman.
  • robot babysitters may not seem so bad. The robots, at least, will be reliable!
  • If human conversations are endangered, what of prayer, a conversation like no other? All of the qualities that human conversation requires—patience and commitment, an ability to listen and a tolerance for aridity—prayer requires in greater measure.
  • this conversation—the Church exists to restore. Everything in the traditional Church is there to facilitate and nourish this relationship. Everything breathes, “Look at me!”
  • there is a second path to God, equally enjoined by the Church, and that is the way of charity to the neighbor, but not the neighbor in the abstract.
  • “Who is my neighbor?” a lawyer asks Jesus in the Gospel of Luke. Jesus’s answer is, the one you encounter on the way.
  • Virtue is either concrete or it is nothing. Man’s path to God, like Jesus’s path on the earth, always passes through what the Jesuit Jean Pierre de Caussade called “the sacrament of the present moment,” which we could equally call “the sacrament of the present person,” the way of the Incarnation, the way of humility, or the Way of the Cross.
  • The tradition of Zen Buddhism expresses the same idea in positive terms: Be here now.
  • Both of these privileged paths to God, equally dependent on a quality of undivided attention and real presence, are vulnerable to the distracting eye-candy of our technologies
  • Turkle is at pains to show that multitasking is a myth, that anyone trying to do more than one thing at a time is doing nothing well. We could also call what she was doing multi-relating, another temptation or illusion widespread in the digital age. Turkle’s book is full of people who are online at the same time that they are with friends, who are texting other potential partners while they are on dates, and so on.
  • This is the situation in which many people find themselves today: thinking that they are special to someone because of something that transpired, only to discover that the other person is spread so thin, the interaction was meaningless. There is a new kind of promiscuity in the world, in other words, that turns out to be as hurtful as the old kind.
  • Who can actually multitask and multi-relate? Who can love everyone without diluting or cheapening the quality of love given to each individual? Who can love everyone without fomenting insecurity and jealousy? Only God can do this.
  • When an individual needs to be healed of the effects of screens and machines, it is real presence that he needs: real people in a real world, ideally a world of God’s own making
  • Nature is restorative, but it is conversation itself, unfolding in real time, that strikes these boys with the force of revelation. More even than the physical vistas surrounding them on a wilderness hike, unrehearsed conversation opens up for them new territory, open-ended adventures. “It was like a stream,” one boy says, “very ongoing. It wouldn’t break apart.”
  • in the waters of baptism, the new man is born, restored to his true parent, and a conversation begins that over the course of his whole life reminds man of who he is, that he is loved, and that someone watches over him always.
  • Even if the Church could keep screens out of her sanctuaries, people strongly attached to them would still be people poorly positioned to take advantage of what the Church has to offer. Anxious people, unable to sit alone with their thoughts. Compulsive people, accustomed to checking their phones, on average, every five and a half minutes. As these behaviors increase in the Church, what is at stake is man’s relationship with truth itself.
oliviaodon

How scientists fool themselves - and how they can stop : Nature News & Comment - 1 views

  • In 2013, five years after he co-authored a paper showing that Democratic candidates in the United States could get more votes by moving slightly to the right on economic policy1, Andrew Gelman, a statistician at Columbia University in New York City, was chagrined to learn of an error in the data analysis. In trying to replicate the work, an undergraduate student named Yang Yang Hu had discovered that Gelman had got the sign wrong on one of the variables.
  • Gelman immediately published a three-sentence correction, declaring that everything in the paper's crucial section should be considered wrong until proved otherwise.
  • Reflecting today on how it happened, Gelman traces his error back to the natural fallibility of the human brain: “The results seemed perfectly reasonable,” he says. “Lots of times with these kinds of coding errors you get results that are just ridiculous. So you know something's got to be wrong and you go back and search until you find the problem. If nothing seems wrong, it's easier to miss it.”
  • ...6 more annotations...
  • This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.
  • Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results
  • Although it is impossible to document how often researchers fool themselves in data analysis, says Ioannidis, findings of irreproducibility beg for an explanation. The study of 100 psychology papers is a case in point: if one assumes that the vast majority of the original researchers were honest and diligent, then a large proportion of the problems can be explained only by unconscious biases. “This is a great time for research on research,” he says. “The massive growth of science allows for a massive number of results, and a massive number of errors and biases to study. So there's good reason to hope we can find better ways to deal with these problems.”
  • Although the human brain and its cognitive biases have been the same for as long as we have been doing science, some important things have changed, says psychologist Brian Nosek, executive director of the non-profit Center for Open Science in Charlottesville, Virginia, which works to increase the transparency and reproducibility of scientific research. Today's academic environment is more competitive than ever. There is an emphasis on piling up publications with statistically significant results — that is, with data relationships in which a commonly used measure of statistical certainty, the p-value, is 0.05 or less. “As a researcher, I'm not trying to produce misleading results,” says Nosek. “But I do have a stake in the outcome.” And that gives the mind excellent motivation to find what it is primed to find.
  • Another reason for concern about cognitive bias is the advent of staggeringly large multivariate data sets, often harbouring only a faint signal in a sea of random noise. Statistical methods have barely caught up with such data, and our brain's methods are even worse, says Keith Baggerly, a statistician at the University of Texas MD Anderson Cancer Center in Houston. As he told a conference on challenges in bioinformatics last September in Research Triangle Park, North Carolina, “Our intuition when we start looking at 50, or hundreds of, variables sucks.” (A minimal simulation after this list illustrates the point.)
  • One trap that awaits during the early stages of research is what might be called hypothesis myopia: investigators fixate on collecting evidence to support just one hypothesis; neglect to look for evidence against it; and fail to consider other explanations.
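To make Baggerly's point concrete, here is a minimal simulation (my own sketch, not anything from the Nature piece) in which an outcome and 100 candidate predictors are all pure noise, yet conventional p < 0.05 screening still flags several of them as "significant." Every number in it is an invented assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_variables = 50, 100

# Pure noise: no predictor has any real relationship to the outcome.
outcome = rng.normal(size=n_subjects)
predictors = rng.normal(size=(n_subjects, n_variables))

false_hits = []
for i in range(n_variables):
    r, p = stats.pearsonr(predictors[:, i], outcome)
    if p < 0.05:  # the conventional significance threshold
        false_hits.append((i, r, p))

print(f"{len(false_hits)} of {n_variables} noise variables look 'significant'")
# Roughly 5 hits are expected by chance alone: exactly the false
# patterns in randomness that the article warns about.
```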
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.) A toy sketch of this sort of color-coded scoring appears after this list.
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much wider-ranging than what we had before.
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
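Xerox's actual evaluation is proprietary, so the sketch below is only a guessed-at analogue of the general shape such a system might take: weighted assessment features combined into a single score, then mapped to the red/yellow/green bands described above. Every feature name, weight, and cutoff is an invented assumption.

```python
# Hypothetical sketch of a color-coded applicant score in the general
# spirit of the system described above. Features, weights, and cutoffs
# are invented; a real system would learn them from outcome data.
WEIGHTS = {
    "scenario_judgment": 0.4,   # multiple-choice scenario questions
    "cognitive_skill": 0.3,
    "personality_fit": 0.2,
    "social_networks": 0.1,     # the article: 1-4 networks scored best
}

def score_applicant(features: dict) -> tuple:
    """Return a 0-1 score and a red/yellow/green rating."""
    score = sum(WEIGHTS[name] * value for name, value in features.items())
    if score >= 0.7:
        rating = "green"    # hire away
    elif score >= 0.4:
        rating = "yellow"   # middling
    else:
        rating = "red"      # poor candidate
    return score, rating

print(score_applicant({
    "scenario_judgment": 0.8,
    "cognitive_skill": 0.7,
    "personality_fit": 0.9,
    "social_networks": 1.0,    # e.g., active on two networks: in range
}))  # -> (0.81, 'green')
```

The interesting engineering is not in this arithmetic but in fitting the weights to retention and productivity data, which is what the vendors claim to do.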
Javier E

Reasons for COVID-19 Optimism on T-Cells and Herd Immunity - 0 views

  • It may well be the case that some amount of community protection kicks in below 60 percent exposure, and possibly quite a bit below that threshold, and that those who exhibit a cross-reactive T-cell immune response, while still susceptible to infection, may also have some meaningful amount of protection against severe disease.
  • early returns suggest that while the maximalist interpretation of each hypothesis is not very credible — herd immunity has probably not been reached in many places, and cross-reactive T-cell response almost certainly does not functionally immunize those who have it — more modest interpretations appear quite plausible.
  • Friston suggested that the truly susceptible portion of the population was certainly not 100 percent, as most modelers and conventional wisdom had it, but a much smaller share — surely below 50 percent, he said, and likely closer to about 20 percent. The analysis was ongoing, he said, but, “I suspect, once this has been done, it will look like the effective non-susceptible portion of the population will be about 80 percent. I think that’s what’s going to happen.”
  • ...31 more annotations...
  • one of the leading modelers, Gabriela Gomes, suggested the entire area of research was being effectively blackballed out of fear it might encourage a relaxation of pandemic vigilance. “This is the very sad reason for the absence of more optimistic projections on the development of this pandemic in the scientific literature,” she wrote on Twitter. “Our analysis suggests that herd-immunity thresholds are being achieved despite strict social-distancing measures.”
  • Gomes suggested, herd immunity could happen with as little as one quarter of the population of a community exposed — or perhaps just 20 percent. “We just keep running the models, and it keeps coming back at less than 20 percent,” she told Hamblin. “It’s very striking.” Such findings, if they held up, would be very instructive, as Hamblin writes: “It would mean, for instance, that at 25 percent antibody prevalence, New York City could continue its careful reopening without fear of another major surge in cases.”
  • But for those hoping that 25 percent represents a true ceiling for pandemic spread in a given community, well, it almost certainly does not, considering that recent serological surveys have shown that perhaps 93 percent of the population of Iquitos, Peru, has contracted the disease; as have more than half of those living in Indian slums; and as many as 68 percent in particular neighborhoods of New York City
  • overshoot of that scale would seem unlikely if the “true” threshold were as low as 20 or 25 percent.
  • But, of course, that threshold may not be the same in all places, across all populations, and is surely affected, to some degree, by the social behavior taken to protect against the spread of the disease.
  • we probably err when we conceive of group immunity in simplistically binary terms. While herd immunity is a technical term referring to a particular threshold at which point the disease can no longer spread, some amount of community protection against that spread begins almost as soon as the first people are exposed, with each case reducing the number of unexposed and vulnerable potential cases in the community by one
  • you would not expect a disease to spread in a purely exponential way until the point of herd immunity, at which time the spread would suddenly stop. Instead, you would expect that growth to slow as more people in the community were exposed to the disease, with most of them emerging relatively quickly with some immune response. Add to that the effects of even modest, commonplace protections — intuitive social distancing, some amount of mask-wearing — and you could expect to get an infection curve that tapers off well shy of 60 percent exposure. (A minimal SIR-style sketch after this list illustrates this tapering.)
  • Looking at the data, we see that transmissions in many severely impacted states began to slow down in July, despite limited interventions. This is especially notable in states like Arizona, Florida, and Texas. While we believe that changes in human behavior and changes in policy (such as mask mandates and closing of bars/nightclubs) certainly contributed to the decrease in transmission, it seems unlikely that these were the primary drivers behind the decrease. We believe that many regions obtained a certain degree of temporary herd immunity after reaching 10-35 percent prevalence under the current conditions. We call this 10-35 percent threshold the effective herd immunity threshold.
  • Indeed, that is more or less what was recently found by Youyang Gu, to date the best modeler of pandemic spread in the U.S
  • he cautioned again that he did not mean to imply that the natural herd-immunity level was as low as 10 percent, or even 35 percent. Instead, he suggested it was a plateau determined in part by better collective understanding of the disease and what precautions to take
  • Gu estimates national prevalence as just below 20 percent (i.e., right in the middle of his range of effective herd immunity), which still counts, I think, as encouraging — even if people in hard-hit communities won’t truly breathe a sigh of relief until vaccines arrive.
  • If you can get real protection starting at 35 percent, it means that even a mediocre vaccine, administered much more haphazardly to a population with some meaningful share of vaccination skeptics, could still achieve community protection pretty quickly. And that is really significant — making both the total lack of national coordination on rollout and the likely “vaccine wars” much less consequential.
  • At least 20 percent of the public, and perhaps 50 percent, had some preexisting, cross-protective T-cell response to SARS-CoV-2, according to one much-discussed recent paper. An earlier paper had put the figure at between 40 and 60 percent. And a third had found an even higher prevalence: 81 percent.
  • The T-cell story is similarly encouraging in its big-picture implications without being necessarily paradigm-changing
  • These numbers suggest their own heterogeneity — that different populations, with different demographics, would likely exhibit different levels of cross-reactive T-cell immune response
  • The most optimistic interpretation of the data was given to me by Francois Balloux, a somewhat contrarian disease geneticist and the director of the University College of London’s Genetics Institute
  • According to him, a cross-reactive T-cell response wouldn’t prevent infection, but would probably mean a faster immune response, a shorter period of infection, and a “massively” reduced risk of severe illness — meaning, he guessed, that somewhere between a third and three-quarters of the population carried into the epidemic significant protection against its scariest outcomes
  • the distribution of this T-cell response could explain at least some, and perhaps quite a lot, of COVID-19’s age skew when it comes to disease severity and mortality, since the young are the most exposed to other coronaviruses, and the protection tapers as you get older and spend less time in environments, like schools, where these viruses spread so promiscuously.
  • Balloux told me he believed it was also possible that the heterogeneous distribution of T-cell protection also explains some amount of the apparent decline in disease severity over time within countries on different pandemic timelines — a phenomenon that is more conventionally attributed to infection spreading more among the young, better treatment, and more effective protection of the most vulnerable (especially the old).
  • Going back to Youyang Gu’s analysis, what he calls the “implied infection fatality rate” — essentially an estimated ratio based on his modeling of untested cases — has fallen for the country as a whole from about one percent in March to about 0.8 percent in mid-April, 0.6 percent in May, and down to about 0.25 percent today.
  • even as we have seemed to reach a second peak of coronavirus deaths, the rate of death from COVID-19 infection has continued to decline — total deaths have gone up, but much less than the number of cases
  • In other words, at the population level, the lethality of the disease in America has fallen by about three-quarters since its peak. This is, despite everything that is genuinely horrible about the pandemic and the American response to it, rather fantastic.
  • there may be some possible “mortality displacement,” whereby the most severe cases show up first, in the most susceptible people, leaving behind a relatively protected population whose experience overall would be more mild, and that T-cell response may play a significant role in determining that susceptibility.
  • That, again, is Balloux’s interpretation — the most expansive assessment of the T-cell data offered to me
  • The most conservative assessment came from Sarah Fortune, the chair of Harvard’s Department of Immunology
  • Fortune cautioned not to assume that cross-protection was playing a significant role in determining severity of illness in a given patient. Those with such a T-cell response, she told me, would likely see a faster onset of robust response, yes, but that may or may not yield a shorter period of infection and viral shedding
  • Most of the scientists, doctors, epidemiologists, and immunologists I spoke to fell between those two poles, suggesting the T-cell cross-immunity findings were significant without necessarily being determinative — that they may help explain some of the shape of pandemic spread through particular populations, but only some of the dynamics of that spread.
  • he told me he believed, in the absence of that data, that T-cell cross-immunity from exposure to previous coronaviruses “might explain different disease severity in different people,” and “could certainly be part of the explanation for the age skew, especially for why the very young fare so well.”
  • the headline finding was quite clear and explicitly stated: that preexisting T-cell response came primarily via the variety of T-cells called CD4 T-cells, and that this dynamic was consistent with the hypothesis that the mechanism was inherited from previous exposure to a few different “common cold” coronaviruses
  • “This potential preexisting cross-reactive T-cell immunity to SARS-CoV-2 has broad implications,” the authors wrote, “as it could explain aspects of differential COVID-19 clinical outcomes, influence epidemiological models of herd immunity, or affect the performance of COVID-19 candidate vaccines.”
  • “This is at present highly speculative,” they cautioned.
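To see why an infection curve can taper well shy of the classical 60 percent threshold, here is a minimal SIR-style simulation (my own illustration, not any of the modelers' actual code). The parameters are assumptions: an effective reproduction number held near 1.2 by commonplace precautions, and a one-week infectious period.

```python
# Minimal SIR simulation; a sketch for intuition, not a forecast.
R_EFF = 1.2            # effective reproduction number under precautions (assumed)
INFECTIOUS_DAYS = 7.0  # mean infectious period (assumed)

beta = R_EFF / INFECTIOUS_DAYS   # transmission rate
gamma = 1.0 / INFECTIOUS_DAYS    # recovery rate

s, i, r = 0.999, 0.001, 0.0      # susceptible / infected / recovered shares
dt = 0.1
for _ in range(int(1000 / dt)):  # simulate roughly three years
    new_infections = beta * s * i * dt
    recoveries = gamma * i * dt
    s -= new_infections
    i += new_infections - recoveries
    r += recoveries

print(f"classical herd-immunity threshold: {1 - 1 / R_EFF:.0%}")  # ~17%
print(f"total ever exposed: {1 - s:.0%}")  # plateaus near 30%, well shy of 60%
```

Raise R_EFF toward 2.5 and the same code overshoots far past 60 percent, which is consistent with the serological findings from places like Iquitos quoted above.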
Javier E

Opinion | A Nobel Prize for the Economics of Panic - The New York Times - 0 views

  • Obviously, Bernanke, Diamond and Dybvig weren’t the first economists to notice that bank runs happen
  • Diamond and Dybvig provided the first really clear analysis of why they happen — and why, destructive as they are, they can represent rational behavior on the part of bank depositors. Their analysis was also full of implications for financial policy.
  • Bernanke provided evidence on why bank runs matter and, although he avoided saying so directly, why Milton Friedman was wrong about the causes of the Great Depression.
  • ...20 more annotations...
  • Diamond and Dybvig offered a stylized but insightful model of what banks do. They argued that there is always a tension between individuals’ desire for liquidity — ready access to funds — and the economy’s need to make long-term investments that can’t easily be converted into cash.
  • Banks square that circle by taking money from depositors who can withdraw their funds at will — making those deposits highly liquid — and investing most of that money in illiquid assets, such as business loans.
  • So banking is a productive activity that makes the economy richer by reconciling otherwise incompatible desires for liquidity and productive investment. And it normally works because only a fraction of a bank’s depositors want to withdraw their funds at any given time.
  • This does, however, make banks vulnerable to runs. Suppose that for some reason many depositors come to believe that many other depositors are about to cash out, and try to beat the pack by withdrawing their own funds. To meet these demands for liquidity, a bank will have to sell off its illiquid assets at fire sale prices, and doing so can drive an institution that should be solvent into bankruptcy
  • If that happens, people who didn’t withdraw their funds will be left with nothing. So during a panic, the rational thing to do is to panic along with everyone else. (A toy run simulation after this list makes this concrete.)
  • There was, of course, a huge wave of banking panics in 1930-31. Many banks failed, and those that survived made far fewer business loans than before, holding cash instead, while many families shunned banks altogether, putting their cash in safes or under their mattresses. The result was a diversion of wealth into unproductive uses. In his 1983 paper, Bernanke offered evidence that this diversion played a large role in driving the economy into a depression and held back the subsequent recovery.
  • In the story told by Friedman and Anna Schwartz, the banking crisis of the early 1930s was damaging because it led to a fall in the money supply — currency plus bank deposits. Bernanke asserted that this was at most only part of the story.
  • a government backstop — either deposit insurance, the willingness of the central bank to lend money to troubled banks or both — can short-circuit potential crises.
  • But providing such a backstop raises the possibility of abuse; banks may take on undue risks because they know they’ll be bailed out if things go wrong.
  • So banks need to be regulated as well as backstopped. As I said, the Diamond-Dybvig analysis had remarkably large implications for policy.
  • From an economic point of view, banking is any form of financial intermediation that offers people seemingly liquid assets while using their wealth to make illiquid investments.
  • This insight was dramatically validated in the 2008 financial crisis.
  • By the eve of the crisis, however, the financial system relied heavily on “shadow banking” — banklike activities that didn’t involve standard bank deposits
  • Such arrangements offered a higher yield than conventional deposits. But they had no safety net, which opened the door to an old-style bank run and financial panic.
  • And the panic came. The conventionally measured money supply didn’t plunge in 2008 the way it did in the 1930s — but repo and other money-like liabilities of financial intermediaries did.
  • Fortunately, by then Bernanke was chair of the Federal Reserve. He understood what was going on, and the Fed stepped in on an immense scale to prop up the financial system.
  • a sort of meta point about the Diamond-Dybvig work: Once you’ve understood and acknowledged the possibility of self-fulfilling banking crises, you become aware that similar things can happen elsewhere.
  • Perhaps the most notable case in relatively recent times was the euro crisis of 2010-12. Market confidence in the economies of southern Europe collapsed, leading to huge spreads between the interest rates on, for example, Portuguese bonds and those on German bonds. The conventional wisdom at the time — especially in Germany — was that countries were being justifiably punished for taking on excessive debt
  • the Belgian economist Paul De Grauwe argued that what was actually happening was a self-fulfilling panic — basically a run on the bonds of countries that couldn’t provide a backstop because they no longer had their own currencies.
  • Sure enough, when Mario Draghi, the president of the European Central Bank at the time, finally did provide a backstop in 2012 — he said the magic words “whatever it takes,” implying that the bank would lend money to the troubled governments if necessary — the spreads collapsed and the crisis came to an end.
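The run logic can be made concrete with toy arithmetic, loosely in the spirit of the Diamond-Dybvig setup rather than their actual model. The balance sheet and fire-sale discount below are invented numbers: the bank is solvent at face value, but once withdrawals exceed its cash, every extra dollar paid out destroys value for the depositors who wait.

```python
# Toy bank-run arithmetic; a sketch for intuition, not Diamond-Dybvig's model.
DEPOSITS = 100.0   # total deposits, all withdrawable on demand
CASH = 10.0        # liquid reserves
LOANS = 95.0       # illiquid assets at face value (solvent: 105 > 100)
FIRE_SALE = 0.6    # illiquid assets fetch 60 cents on the dollar if dumped

def patient_payoff(withdraw_share: float) -> float:
    """Value per dollar left for depositors who wait out the run."""
    demanded = withdraw_share * DEPOSITS
    if demanded <= CASH:
        remaining_assets = CASH - demanded + LOANS
    else:
        loans_sold = (demanded - CASH) / FIRE_SALE  # dump loans at a loss
        if loans_sold > LOANS:
            return 0.0                              # bank fails mid-run
        remaining_assets = LOANS - loans_sold
    return min(1.0, remaining_assets / (DEPOSITS - demanded))

for share in (0.05, 0.3, 0.6, 0.9):
    print(f"{share:.0%} withdraw -> patient depositors recover "
          f"{patient_payoff(share):.2f} per dollar")
```

At 5 percent withdrawals, waiting costs nothing; at 30 percent, waiting already pays less than the full dollar early withdrawers get, so joining the run is individually rational; at 90 percent, nothing is left. A credible backstop removes the incentive to run in the first place, which is the policy implication drawn above.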
Javier E

'It Was Like A Sucker Punch' - Ta-Nehisi Coates - The Atlantic - 1 views

  • Much of the conservative media is simply far more cozy with the Republican Party than its Democratic counterparts (as exemplified by the numerous Fox hosts and contributors who moonlight as Republican fundraisers), which makes necessary detachment difficult. Having an opinion isn't an obstacle to good journalism or analysis, but no one wants to derail their own gravy train. Departing from the party line, particularly if one does so in a manner that seems favorable to Obama, would be to reveal one as an apostate, a tool of liberalism.
  • my original tweet, blaming the conservative media for misleading the readers who depend on them, doesn't capture the fullness of the problem. Conservative media lies to its audience because much of its audience wants to be lied to. 
  • The best way to understand the difference between liberal and conservative media and expertise is to think about the response, within  Obama's campaign and within liberal media, to his first debate performance. There certainly were liberals who thought he actually hadn't done that bad, and that the press had given him a raw deal. But there were others who thought he'd performed poorly. And the Obama campaign, itself, thought he'd performed poorly. My point here is there was debate, a fight, within liberal circles which didn't devolve into indictments of DINOs. There was no attempt to "unskew" reality. 
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 0 views

  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • neuroscience for the last couple hundred years has been on the wrong track. There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel, written with Adam King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
  • ...19 more annotations...
  • in general what he argues is that if you take a look at animal cognition, human too, it's computational systems. Therefore, you want to look the units of computation. Think about a Turing machine, say, which is the simplest form of computation, you have to find units that have properties like "read", "write" and "address." That's the minimal computational unit, so you got to look in the brain for those. You're never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You've got to start by looking for what's there and what's working and you see that from Marr's highest level.
  • it's basically in the spirit of Marr's analysis. So when you're studying vision, he argues, you first ask what kind of computational tasks is the visual system carrying out. And then you look for an algorithm that might carry out those computations and finally you search for mechanisms of the kind that would make the algorithm work. Otherwise, you may never find anything.
  • AI and robotics got to the point where you could actually do things that were useful, so it turned to the practical applications and somewhat, maybe not abandoned, but put to the side, the more fundamental scientific questions, just caught up in the success of the technology and achieving specific goals.
  • "Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a history of science perspective that even very recently, these approaches have been almost wiped out from the mainstream and have been largely replaced -- in the field that calls itself AI now -- by probabilistic and statistical models. My question is, what do you think explains that shift and is it a step in the right direction?
  • The approximating unanalyzed data kind is sort of a new approach, not totally, there's things like it in the past. It's basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn't have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability... Interviewer: ...in engineering? Chomsky: ...But away from understanding.
  • I was very skeptical about the original work. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can't get to that understanding by throwing a complicated machine at it.
  • if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it's way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won't get the kind of understanding that the sciences have always been aimed at -- what you'll get at is an approximation to what's happening.
  • Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is. (A toy contrast between these two notions appears after this list.)
  • take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don't stop to ask if the wiring diagram is the right level of abstraction -- maybe it's not.
  • the right approach, is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there's going to be a thousand other variables intervening -- kind of like what's happening outside the window, and you'll sort of tack those on later on if you want better approximations, that's a different approach.
  • if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
  • if you went to MIT in the 1960s, or now, it's completely different. No matter what engineering field you're in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that's a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it's going to be different 10 years from now. So you have to learn the fundamental science that's going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine.
  • that's the kind of transition from something like an art, that you learn how to practice -- an analog would be trying to match some data that you don't understand, in some fashion, maybe building something that will work -- to science, what happened in the modern period, roughly Galilean science.
  • it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, its got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language.
  • people like Shimon Ullman discovered some pretty remarkable things like the rigidity principle. You're not going to find that by statistical analysis of data. But he did find it by carefully designed experiments. Then you look for the neurophysiology, and see if you can find something there that carries out these computations. I think it's the same in language, the same in studying our arithmetical capacity, planning, almost anything you look at. Just trying to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just like as it wouldn't have gotten Galileo anywhere.
  • with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject
  • You can invent a world -- I don't think it's our world -- but you can invent a world in which nothing happens except random changes in objects and selection on the basis of external forces. I don't think that's the way our world works, I don't think it's the way any biologist thinks it is. There are all kinds of ways in which natural law imposes channels within which selection can take place, and some things can happen and other things don't happen. Plenty of things that go on in the biology of organisms aren't like this. So take the first step, meiosis. Why do cells split into spheres and not cubes? It's not random mutation and natural selection; it's a law of physics. There's no reason to think that laws of physics stop there, they work all the way through. Interviewer: Well, they constrain the biology, sure. Chomsky: Okay, well then it's not just random mutation and selection. It's random mutation, selection, and everything that matters, like laws of physics.
  • What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don't know what we're looking for any more than Galileo did, and there's a lot to learn from that.
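The two concepts of success Chomsky describes can be contrasted in a few lines (a hypothetical toy, not something from the interview). A high-degree polynomial fit approximates noisy free-fall data very well inside the observed range, while only the physical law keeps working outside it; the data, polynomial degree, and noise scale are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
g = 9.8  # gravitational acceleration

# "The world": positions of a falling object, measured with a little noise.
t_obs = np.linspace(0, 5, 20)
y_obs = 0.5 * g * t_obs**2 + rng.normal(scale=2.0, size=t_obs.size)

# Success as approximation: a degree-9 polynomial fit to the data.
coeffs = np.polyfit(t_obs, y_obs, deg=9)

# Success as understanding: the law itself, y = g * t**2 / 2.
for t in (2.5, 10.0):  # inside, then far outside, the observed range
    fitted = np.polyval(coeffs, t)
    law = 0.5 * g * t**2
    print(f"t = {t:4}: fit predicts {fitted:12.1f}, law gives {law:7.1f}")
# In-sample the fit looks excellent; at t = 10 it typically diverges
# wildly. The statistics approximate the data without containing the law.
```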