
Home/ TOK Friends/ Group items tagged Bayesian


Javier E

Why Is It So Hard to Be Rational? | The New Yorker

  • an unusually large number of books about rationality were being published this year, among them Steven Pinker’s “Rationality: What It Is, Why It Seems Scarce, Why It Matters” (Viking) and Julia Galef’s “The Scout Mindset: Why Some People See Things Clearly and Others Don’t” (Portfolio).
  • When the world changes quickly, we need strategies for understanding it. We hope, reasonably, that rational people will be more careful, honest, truthful, fair-minded, curious, and right than irrational ones.
  • And yet rationality has sharp edges that make it hard to put at the center of one’s life
  • You might be well-intentioned, rational, and mistaken, simply because so much in our thinking can go wrong. (“RATIONAL, adj.: Devoid of all delusions save those of observation, experience and reflection,”
  • You might be rational and self-deceptive, because telling yourself that you are rational can itself become a source of bias. It’s possible that you are trying to appear rational only because you want to impress people; or that you are more rational about some things (your job) than others (your kids); or that your rationality gives way to rancor as soon as your ideas are challenged. Perhaps you irrationally insist on answering difficult questions yourself when you’d be better off trusting the expert consensus.
  • Not just individuals but societies can fall prey to false or compromised rationality. In a 2014 book, “The Revolt of the Public and the Crisis of Authority in the New Millennium,” Martin Gurri, a C.I.A. analyst turned libertarian social thinker, argued that the unmasking of allegedly pseudo-rational institutions had become the central drama of our age: people around the world, having concluded that the bigwigs in our colleges, newsrooms, and legislatures were better at appearing rational than at being so, had embraced a nihilist populism that sees all forms of public rationality as suspect.
  • modern life would be impossible without those rational systems; we must improve them, not reject them. We have no choice but to wrestle with rationality—an ideal that, the sociologist Max Weber wrote, “contains within itself a world of contradictions.”
  • Where others might be completely convinced that G.M.O.s are bad, or that Jack is trustworthy, or that the enemy is Eurasia, a Bayesian assigns probabilities to these propositions. She doesn’t build an immovable world view; instead, by continually updating her probabilities, she inches closer to a more useful account of reality. The cooking is never done.
  • Rationality is one of humanity’s superpowers. How do we keep from misusing it?
  • Start with the big picture, fixing it firmly in your mind. Be cautious as you integrate new information, and don’t jump to conclusions. Notice when new data points do and do not alter your baseline assumptions (most of the time, they won’t alter them), but keep track of how often those assumptions seem contradicted by what’s new. Beware the power of alarming news, and proceed by putting it in a broader, real-world context.
  • Bayesian reasoning implies a few “best practices.”
  • Keep the cooked information over here and the raw information over there; remember that raw ingredients often reduce over heat
  • We want to live in a more rational society, but not in a falsely rationalized one. We want to be more rational as individuals, but not to overdo it. We need to know when to think and when to stop thinking, when to doubt and when to trust.
  • But the real power of the Bayesian approach isn’t procedural; it’s that it replaces the facts in our minds with probabilities.
  • Applied to specific problems—Should you invest in Tesla? How bad is the Delta variant?—the techniques promoted by rationality writers are clarifying and powerful.
  • the rationality movement is also a social movement; rationalists today form what is sometimes called the “rationality community,” and, as evangelists, they hope to increase its size.
  • In “Rationality,” “The Scout Mindset,” and other similar books, irrationality is often presented as a form of misbehavior, which might be rectified through education or socialization.
  • Greg tells me that, in his business, it’s not enough to have rational thoughts. Someone who’s used to pondering questions at leisure might struggle to learn and reason when the clock is ticking; someone who is good at reaching rational conclusions might not be willing to sign on the dotted line when the time comes. Greg’s hedge-fund colleagues describe as “commercial”—a compliment—someone who is not only rational but timely and decisive.
  • You can know what’s right but still struggle to do it.
  • Following through on your own conclusions is one challenge. But a rationalist must also be “metarational,” willing to hand over the thinking keys when someone else is better informed or better trained. This, too, is harder than it sounds.
  • For all this to happen, rationality is necessary, but not sufficient. Thinking straight is just part of the work. 
  • I found it possible to be metarational with my dad not just because I respected his mind but because I knew that he was a good and cautious person who had my and my mother’s best interests at heart.
  • between the two of us, we had the right ingredients—mutual trust, mutual concern, and a shared commitment to reason and to act.
  • Intellectually, we understand that our complex society requires the division of both practical and cognitive labor. We accept that our knowledge maps are limited not just by our smarts but by our time and interests. Still, like Gurri’s populists, rationalists may stage their own contrarian revolts, repeatedly finding that no one’s opinions but their own are defensible. In letting go, as in following through, one’s whole personality gets involved.
  • in truth, it maps out a series of escalating challenges. In search of facts, we must make do with probabilities. Unable to know it all for ourselves, we must rely on others who care enough to know. We must act while we are still uncertain, and we must act in time—sometimes individually, but often together.
  • The realities of rationality are humbling. Know things; want things; use what you know to get what you want. It sounds like a simple formula.
  • The real challenge isn’t being right but knowing how wrong you might be. By Joshua Rothman, August 16, 2021
  • Writing about rationality in the early twentieth century, Weber saw himself as coming to grips with a titanic force—an ascendant outlook that was rewriting our values. He talked about rationality in many different ways. We can practice the instrumental rationality of means and ends (how do I get what I want?) and the value rationality of purposes and goals (do I have good reasons for wanting what I want?). We can pursue the rationality of affect (am I cool, calm, and collected?) or develop the rationality of habit (do I live an ordered, or “rationalized,” life?).
  • Weber worried that it was turning each individual into a “cog in the machine,” and life into an “iron cage.” Today, rationality and the words around it are still shadowed with Weberian pessimism and cursed with double meanings. You’re rationalizing the org chart: are you bringing order to chaos, or justifying the illogical?
  • For Aristotle, rationality was what separated human beings from animals. For the authors of “The Rationality Quotient,” it’s a mental faculty, parallel to but distinct from intelligence, which involves a person’s ability to juggle many scenarios in her head at once, without letting any one monopolize her attention or bias her against the rest.
  • In “The Rationality Quotient: Toward a Test of Rational Thinking” (M.I.T.), from 2016, the psychologists Keith E. Stanovich, Richard F. West, and Maggie E. Toplak call rationality “a torturous and tortured term,” in part because philosophers, sociologists, psychologists, and economists have all defined it differently
  • Galef, who hosts a podcast called “Rationally Speaking” and co-founded the nonprofit Center for Applied Rationality, in Berkeley, barely uses the word “rationality” in her book on the subject. Instead, she describes a “scout mindset,” which can help you “to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course.” (The “soldier mindset,” by contrast, encourages you to defend your positions at any cost.)
  • Galef tends to see rationality as a method for acquiring more accurate views.
  • Pinker, a cognitive and evolutionary psychologist, sees it instrumentally, as “the ability to use knowledge to attain goals.” By this definition, to be a rational person you have to know things, you have to want things, and you have to use what you know to get what you want.
  • Introspection is key to rationality. A rational person must practice what the neuroscientist Stephen Fleming, in “Know Thyself: The Science of Self-Awareness” (Basic Books), calls “metacognition,” or “the ability to think about our own thinking”—“a fragile, beautiful, and frankly bizarre feature of the human mind.”
  • A successful student uses metacognition to know when he needs to study more and when he’s studied enough: essentially, parts of his brain are monitoring other parts.
  • In everyday life, the biggest obstacle to metacognition is what psychologists call the “illusion of fluency.” As we perform increasingly familiar tasks, we monitor our performance less rigorously; this happens when we drive, or fold laundry, and also when we think thoughts we’ve thought many times before
  • The trick is to break the illusion of fluency, and to encourage an “awareness of ignorance.”
  • metacognition is a skill. Some people are better at it than others. Galef believes that, by “calibrating” our metacognitive minds, we can improve our performance and so become more rational
  • There are many calibration methods
  • Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge.
  • Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps.
  • The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes
  • So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.
  • the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preëxisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it.
  • Bayesian reasoning is an approach to statistics, but you can use it to interpret all sorts of new information.
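The updating procedure these excerpts describe can be made concrete. A minimal sketch in Python (the prior and the two likelihoods are hypothetical numbers chosen for illustration, not figures from the article):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: revise the probability of a hypothesis after new evidence."""
    # Total probability of seeing this evidence at all, under both hypotheses.
    p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    return prior * p_evidence_if_true / p_evidence

# Start with a 2% prior that a claim is true; the new report is far more
# likely if the claim is true (90%) than if it is false (9%).
posterior = bayes_update(prior=0.02, p_evidence_if_true=0.90, p_evidence_if_false=0.09)
print(round(posterior, 2))  # roughly 0.17 -- the belief moves, but nowhere near certainty
```

As the excerpt puts it, the cooking is never done: the posterior from one update becomes the prior for the next.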
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic

  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • neuroscience for the last couple hundred years has been on the wrong track. There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel, written with Adam King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
  • in general what he argues is that if you take a look at animal cognition, human too, it's computational systems. Therefore, you want to look at the units of computation. Think about a Turing machine, say, which is the simplest form of computation, you have to find units that have properties like "read", "write" and "address." That's the minimal computational unit, so you got to look in the brain for those. You're never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You've got to start by looking for what's there and what's working and you see that from Marr's highest level.
  • it's basically in the spirit of Marr's analysis. So when you're studying vision, he argues, you first ask what kind of computational tasks is the visual system carrying out. And then you look for an algorithm that might carry out those computations and finally you search for mechanisms of the kind that would make the algorithm work. Otherwise, you may never find anything.
  • "Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a history of science perspective that even very recently, these approaches have been almost wiped out from the mainstream and have been largely replaced -- in the field that calls itself AI now -- by probabilistic and statistical models. My question is, what do you think explains that shift and is it a step in the right direction?
  • AI and robotics got to the point where you could actually do things that were useful, so it turned to the practical applications and somewhat, maybe not abandoned, but put to the side, the more fundamental scientific questions, just caught up in the success of the technology and achieving specific goals.
  • The approximating unanalyzed data kind is sort of a new approach, not totally, there's things like it in the past. It's basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn't have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability... [Interviewer]: ...in engineering? Chomsky: ...But away from understanding.
  • I was very skeptical about the original work. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can't get to that understanding by throwing a complicated machine at it.
  • if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it's way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won't get the kind of understanding that the sciences have always been aimed at -- what you'll get at is an approximation to what's happening.
  • Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is.
  • if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
  • the right approach, is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there's going to be a thousand other variables intervening -- kind of like what's happening outside the window, and you'll sort of tack those on later on if you want better approximations, that's a different approach.
  • take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don't stop to ask if the wiring diagram is the right level of abstraction -- maybe it's not.
  • if you went to MIT in the 1960s, or now, it's completely different. No matter what engineering field you're in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that's a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it's going to be different 10 years from now. So you have to learn the fundamental science that's going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine.
  • that's the kind of transition from something like an art, that you learn how to practice -- an analog would be trying to match some data that you don't understand, in some fashion, maybe building something that will work -- to science, what happened in the modern period, roughly Galilean science.
  • it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, it's got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language.
  • people like Shimon Ullman discovered some pretty remarkable things like the rigidity principle. You're not going to find that by statistical analysis of data. But he did find it by carefully designed experiments. Then you look for the neurophysiology, and see if you can find something there that carries out these computations. I think it's the same in language, the same in studying our arithmetical capacity, planning, almost anything you look at. Just trying to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just like as it wouldn't have gotten Galileo anywhere.
  • with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject
  • You can invent a world -- I don't think it's our world -- but you can invent a world in which nothing happens except random changes in objects and selection on the basis of external forces. I don't think that's the way our world works, I don't think it's the way any biologist thinks it is. There are all kind of ways in which natural law imposes channels within which selection can take place, and some things can happen and other things don't happen. Plenty of things that go on in the biology in organisms aren't like this. So take the first step, meiosis. Why do cells split into spheres and not cubes? It's not random mutation and natural selection; it's a law of physics. There's no reason to think that laws of physics stop there, they work all the way through. [Interviewer]: Well, they constrain the biology, sure. Chomsky: Okay, well then it's not just random mutation and selection. It's random mutation, selection, and everything that matters, like laws of physics.
  • What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don't know what we're looking for anymore than Galileo did, and there's a lot to learn from that.
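The "read", "write", and "address" primitives Gallistel says neuroscience should look for can be sketched as a toy memory. This hypothetical Python fragment (not from the interview) only illustrates how little machinery the minimal computational unit requires -- here it is enough to implement a binary counter:

```python
class Tape:
    """A minimal addressable memory: the read/write/address primitives."""
    def __init__(self):
        self.cells = {}  # address -> symbol, default 0

    def read(self, address):
        return self.cells.get(address, 0)

    def write(self, address, symbol):
        self.cells[address] = symbol

def increment(tape):
    """Binary increment; least-significant bit lives at address 0."""
    addr = 0
    while tape.read(addr) == 1:  # carry propagates over the 1-bits
        tape.write(addr, 0)
        addr += 1
    tape.write(addr, 1)

tape = Tape()
for _ in range(5):  # count to five: binary 101
    increment(tape)
print(tape.read(2), tape.read(1), tape.read(0))  # -> 1 0 1
```

The point of the sketch matches Gallistel's: the interesting question is whether the brain contains anything with these read/write/address properties, not whether synaptic strengthening can approximate them.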
Javier E

The Signal and the Noise: Why So Many Predictions Fail-but Some Don't: Nate Silver: 978...

  • Nate Silver built an innovative system for predicting baseball performance, predicted the 2008 election within a hair’s breadth, and became a national sensation as a blogger—all by the time he was thirty. The New York Times now publishes FiveThirtyEight.com, where Silver is one of the nation’s most influential political forecasters.
  • Silver examines the world of prediction, investigating how we can distinguish a true signal from a universe of noisy data. Most predictions fail, often at great cost to society, because most of us have a poor understanding of probability and uncertainty. Both experts and laypeople mistake more confident predictions for more accurate ones. But overconfidence is often the reason for failure. If our appreciation of uncertainty improves, our predictions can get better too. This is the “prediction paradox”: The more humility we have about our ability to make predictions, the more successful we can be in planning for the future.
  • the most accurate forecasters tend to have a superior command of probability, and they tend to be both humble and hardworking. They distinguish the predictable from the unpredictable, and they notice a thousand little details that lead them closer to the truth. Because of their appreciation of probability, they can distinguish the signal from the noise.
  • Baseball, weather forecasting, earthquake prediction, economics, and polling: In all of these areas, Silver finds predictions gone bad thanks to biases, vested interests, and overconfidence. But he also shows where sophisticated forecasters have gotten it right (and occasionally been ignored to boot)
  • This is the best general-readership book on applied statistics that I've read. Short review: if you're interested in science, economics, or prediction: read it. It's full of interesting cases, builds intuition, and is a readable example of Bayesian thinking.
  • The core concept is this: prediction is a vital part of science, of business, of politics, of pretty much everything we do. But we're not very good at it, and fall prey to cognitive biases and other systemic problems such as information overload that make things worse. However, we are simultaneously learning more about how such things occur and that knowledge can be used to make predictions better -- and to improve our models in science, politics, business, medicine, and so many other areas.
Javier E

Nate Silver, Artist of Uncertainty

  • In 2008, Nate Silver correctly predicted the results of all 35 Senate races and the presidential results in 49 out of 50 states. Since then, his website, fivethirtyeight.com (now central to The New York Times’s political coverage), has become an essential source of rigorous, objective analysis of voter surveys to predict the Electoral College outcome of presidential campaigns. 
  • Political junkies, activists, strategists, and journalists will gain a deeper and more sobering sense of Silver’s methods in The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t (Penguin Press). A brilliant analysis of forecasting in finance, geology, politics, sports, weather, and other domains, Silver’s book is also an original fusion of cognitive psychology and modern statistical theory.
  • Its most important message is that the first step toward improving our predictions is learning how to live with uncertainty.
  • he blends the best of modern statistical analysis with research on cognition biases pioneered by Princeton psychologist and Nobel laureate in economics  Daniel Kahneman and the late Stanford psychologist Amos Tversky. 
  • Silver’s background in sports and poker turns out to be invaluable. Successful analysts in gambling and sports are different from fans and partisans—far more aware that “sure things” are likely to be illusions,
  • The second step is starting to understand why it is that big data, super computers, and mathematical sophistication haven’t made us better at separating signals (information with true predictive value) from noise (misleading information). 
  • One of the biggest problems we have in separating signal from noise is that when we look too hard for certainty that isn’t there, we often end up attracted to noise, either because it is more prominent or because it confirms what we would like to believe.
  • In discipline after discipline, Silver shows in his book that when you look at even the best single forecast, the average of all independent forecasts is 15 to 20 percent more accurate. 
  • Silver has taken the next major step: constantly incorporating both state polls and national polls into Bayesian models that also incorporate economic data.
  • Silver explains why we will be misled if we only consider significance tests—i.e., statements that the margin of error for the results is, for example, plus or minus four points, meaning there is one chance in 20 that the percentages reported are off by more than four. Calculations like these assume the only source of error is sampling error—the irreducible error—while ignoring errors attributable to house effects, like the proportion of cell-phone users, one of the complex set of assumptions every pollster must make about who will actually vote. In other words, such an approach ignores context in order to avoid having to justify and defend judgments. 
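The claim above that an average of independent forecasts beats a single forecast can be illustrated with a toy simulation. This sketch assumes idealized, unbiased, independent forecasters -- a stronger assumption than Silver's real-world comparison against the best single forecast, so the effect here is exaggerated:

```python
import random

random.seed(0)
truth = 50.0
num_forecasters, num_trials = 10, 2000

single_error = average_error = 0.0
for _ in range(num_trials):
    # Each forecaster sees the truth plus independent noise.
    forecasts = [truth + random.gauss(0, 5) for _ in range(num_forecasters)]
    consensus = sum(forecasts) / num_forecasters
    single_error += abs(forecasts[0] - truth)   # one arbitrary individual
    average_error += abs(consensus - truth)

# The consensus error comes out at a fraction of the individual's error.
print(round(average_error / single_error, 2))
```

With ten truly independent forecasters the averaging gain is roughly a factor of three; real forecasts are correlated, which is why Silver's observed improvement is the more modest 15 to 20 percent.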
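The "plus or minus four points" margin in the excerpt above is the sampling-error-only calculation, and it can be reproduced directly. The sample size of 600 below is an illustrative choice, not a number from the article:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sampled proportion p with sample size n.
    This counts sampling error only -- none of the 'house effect'
    assumptions (cell-phone coverage, likely-voter screens) pollsters make."""
    return z * math.sqrt(p * (1 - p) / n)

# About 600 respondents at p = 0.5 gives the familiar +/- 4 points.
print(round(100 * margin_of_error(0.5, 600), 1))  # -> 4.0
```

Silver's point is that this number understates the true uncertainty, because the non-sampling errors it ignores do not shrink as n grows.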
Javier E

The varieties of denialism | Scientia Salon

  • a stimulating conference at Clark University about “Manufacturing Denial,” which brought together scholars from wildly divergent disciplines — from genocide studies to political science to philosophy — to explore the idea that “denialism” may be a sufficiently coherent phenomenon underlying the willful disregard of factual evidence by ideologically motivated groups or individuals.
  • the Oxford defines a denialist as “a person who refuses to admit the truth of a concept or proposition that is supported by the majority of scientific or historical evidence,” which represents a whole different level of cognitive bias or rationalization. Think of it as bias on steroids.
  • First, as a scientist: it’s just not about the facts, indeed — as Brendan showed data in hand during his presentation — insisting on facts may have counterproductive effects, leading the denialist to double down on his belief.
  • if I think that simply explaining the facts to the other side is going to change their mind, then I’m in for a rude awakening.
  • As a philosopher, I found to be somewhat more disturbing the idea that denialism isn’t even about critical thinking.
  • what the large variety of denialisms have in common is a very strong, overwhelming, ideological commitment that helps define the denialist identity in a core manner. This commitment can be religious, ethnical or political in nature, but in all cases it fundamentally shapes the personal identity of the people involved, thus generating a strong emotional attachment, as well as an equally strong emotional backlash against critics.
  • To begin with, of course, they think of themselves as “skeptics,” thus attempting to appropriate a word with a venerable philosophical pedigree and which is supposed to indicate a cautiously rational approach to a given problem. As David Hume put it, a wise person (i.e., a proper skeptic) will proportion her beliefs to the evidence. But there is nothing of the Humean attitude in people who are “skeptical” of evolution, climate change, vaccines, and so forth.
  • Denialists have even begun to appropriate the technical language of informal logic: when told that a majority of climate scientists agree that the planet is warming up, they are all too happy to yell “argument from authority!” When they are told that they should distrust statements coming from the oil industry and from “think tanks” in their pockets they retort “genetic fallacy!” And so on. Never mind that informal fallacies are such only against certain background information, and that it is eminently sensible and rational to trust certain authorities (at the least provisionally), as well as to be suspicious of large organizations with deep pockets and an obvious degree of self-interest.
  • What commonalities can we uncover across instances of denialism that may allow us to tackle the problem beyond facts and elementary logic?
  • the evidence from the literature is overwhelming that denialists have learned to use the vocabulary of critical thinking against their opponents.
  • Another important issue to understand is that denialists exploit the inherently tentative nature of scientific or historical findings to seek refuge for their doctrines.
  • Scientists have been wrong before, and doubtlessly will be again in the future, many times. But the issue is rather one of where it is most rational to place your bets as a Bayesian updater: with the scientific community or with Faux News?
  • Science should be portrayed as a human story of failure and discovery, not as a body of barely comprehensible facts arrived at by epistemic priests.
  • Is there anything that can be done in this respect? I personally like the idea of teaching “science appreciation” classes in high school and college [2], as opposed to more traditional (usually rather boring, both as a student and as a teacher) science instruction
  • Denialists also exploit the media’s self imposed “balanced” approach to presenting facts, which leads to the false impression that there really are two approximately equal sides to every debate.
  • This is a rather recent phenomenon, and it is likely the result of a number of factors affecting the media industry. One, of course, is the onset of the 24-hr media cycle, with its pernicious reliance on punditry. Another is the increasing blurring of the once rather sharp line between reporting and editorializing.
  • The problem with the media is of course made far worse by the ongoing crisis in contemporary journalism, with newspapers, magazines and even television channels constantly facing an uncertain future of revenues,
  • The push back against denialism, in all its varied incarnations, is likely to be more successful if we shift the focus from persuading individual members of the public to making political and media elites accountable.
  • This is a major result coming out of Brendan’s research. He showed data set after data set demonstrating two fundamental things: first, large sections of the general public do not respond to the presentation of even highly compelling facts, indeed — as mentioned above — are actually more likely to entrench further into their positions.
  • Second, whenever one can put pressure on either politicians or the media, they do change their tune, becoming more reasonable and presenting things in a truly (as opposed to artificially) balanced way.
  • Third, and most crucially, there is plenty of evidence from political science studies that the public quickly rallies behind a unified political leadership. Hard as it is to fathom now, this has happened a number of times, even in relatively recent memory
  • when leaders really do lead, the people follow. It’s just that of late the extreme partisan bickering in Washington has made the two major parties entirely incapable of working together on the common ground that they have demonstrably had in the past.
  • Another thing we can do about denialism: we should learn from the detailed study of successful cases and see what worked and how it can be applied to other instances
  • Yet another thing we can do: seek allies. In the case of evolution denial — for which I have the most first-hand experience — it has been increasingly obvious to me that it is utterly counterproductive for a strident atheist like Dawkins (or even a relatively good humored one like yours truly) to engage creationists directly. It is far more effective when we have clergy (Barry Lynn of Americans United for the Separation of Church and State [6] comes to mind) and religious scientists
  • Make no mistake about it: denialism in its various forms is a pernicious social phenomenon, with potentially catastrophic consequences for our society. It requires a rallying call for all serious public intellectuals, academic or not, who have the expertise and the stamina to join the fray to make this an even marginally better world for us all. It’s most definitely worth the fight.
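The “Bayesian updater” framing in the annotations above can be made concrete. The sketch below is not from the article; the likelihood ratio and the number of evidence lines are illustrative made-up values, chosen only to show how independent pieces of evidence compound in odds form:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds (p : 1) to a probability."""
    return odds / (1 + odds)

# Start agnostic: 1:1 odds that the scientific consensus is right.
odds = 1.0

# Suppose each of three independent lines of evidence is 5x more likely
# under the consensus view than under the denialist one (illustrative).
for _ in range(3):
    odds = update_odds(odds, 5.0)

print(round(odds_to_prob(odds), 3))  # 0.992 (125:1 odds)
```

The point of the odds form is that each new piece of evidence simply multiplies the running odds, so even a modestly informative evidence stream quickly makes one side the overwhelmingly rational bet.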
Javier E

Journal's Article on ESP Is Expected to Prompt Outrage - NYTimes.com - 0 views

  • Dr. Bem is far from typical. He is widely respected for his clear, original thinking in social psychology, and some people familiar with the case say his reputation may have played a role in the paper’s acceptance.
  • Peer review is usually an anonymous process, with authors and reviewers unknown to one another. But all four reviewers of this paper were social psychologists, and all would have known whose work they were checking and would have been responsive to the way it was reasoned.
  • Perhaps more important, none were topflight statisticians. “The problem was that this paper was treated like any other,” said an editor at the journal, Laura King, a psychologist at the University of Missouri. “And it wasn’t.”
    Many statisticians say that conventional social-science techniques for analyzing data make an assumption that is disingenuous and ultimately self-deceiving: that researchers know nothing about the probability of the so-called null hypothesis. In this case, the null hypothesis would be that ESP does not exist. Refusing to give that hypothesis weight makes no sense, these experts say; if ESP exists, why aren’t people getting rich by reliably predicting the movement of the stock market or the outcome of football games? Instead, these statisticians prefer a technique called Bayesian analysis, which seeks to determine whether the outcome of a particular experiment “changes the odds that a hypothesis is true,”
  • So far, at least three efforts to replicate the experiments have failed.
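The Bayesian argument quoted above can be put in numbers. This is not from the article; it is a minimal sketch with made-up figures, showing why even an apparently strong experimental result barely moves the odds when the prior probability of ESP is very low:

```python
def posterior_prob(prior_prob, bayes_factor):
    """Posterior probability of a hypothesis, given a prior probability and a
    Bayes factor (how much more likely the data are under H1 than under H0)."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

# A skeptic's prior that ESP exists: 1 in a million (illustrative).
prior = 1e-6

# Even a Bayes factor of 20 in favor of ESP -- stronger evidence than a
# bare p < .05 -- barely moves the needle:
print(posterior_prob(prior, 20))  # still only about 2e-5
```

This is the statisticians’ complaint in miniature: a “significant” result is only as persuasive as the prior it has to overcome.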
Javier E

Opinion | Nate Silver on Kamala Harris's Chances and the Mistakes of the 'Indigo Blob' ... - 0 views

  • You’ve also called it the indigo blob in different ways. You began to see it as a set of aligned cognitive tendencies that you disagreed with. What were they?
  • one of them is the failure to do what I call decoupling. It’s not my term. Decoupling is the act of separating an issue from the context. The example I gave in the book is that if you’re able to say, “I abhor the Chick-fil-A’s C.E.O.’s position on gay marriage” — I don’t know if it’s changed or not, but he was anti-gay marriage at least for some period of time — “but they make a really delicious chicken sandwich.” That’s decoupling
  • Or, you can say, you know, Michael Jackson, Woody Allen, separate the art from the artist kind of thing. That tendency goes against the tendency on the progressive left to care a lot about the identity of the speaker, in terms of racial or gender identity and in terms of their credentials.
  • In this other world that I call “the river,” the kind of gambling, risk-taking world, all that matters is that you’re right. It doesn’t matter who you are; it matters that you’re right and you’re able to prove it or bet on it in some way.
  • And that’s very against the kind of credentialism that you have within the progressive Democratic left, which I also call the “indigo blob” because it’s a fusion of purple and blue. There’s not a clear separation between the nonpartisan centrist media and the left-leaning progressive media rooting for Democrats. Different parts of The New York Times have both those functions.
  • I think people are exploiting the trust that institutions have earned for political gain.
  • You have the C.E.O. of OpenAI saying, yeah, this might destroy the universe, but it’s a good gamble to take.
  • On the one hand, there are lots of signs that risk tolerance is going down, among young people in particular.
  • They’re smoking less, drinking less, doing fewer drugs, having less sex — a different type of risk tolerance.
  • They are less willing to defend free speech norms if it potentially would cause injury to someone. Free speech is kind of a pro-risk take in some ways because speech can cause effects, of course.
  • On the other hand, you have various booms and busts in crypto. You have Las Vegas bringing in record revenue. You have record revenue in sports betting
  • What you seem to be doing in the book is making an interesting cut in society between people with different forms of risk tolerance and thinking about risk.
  • it just seems to me we are in a world now where institutions are less trusted.
  • some people respond to that by saying, OK, I make my own rules now, and this is great and I have lots of agency.
  • some respond by withdrawing into an online world or clinging on to beliefs and experts that have lost their credibility or just by becoming more risk averse.
  • you spend time with people whose approach to risk you find sophisticated and interesting. One of them is Peter Thiel. What did you learn spending time with him?
  • There’s a good book by Max Chafkin about Peter Thiel called “The Contrarian,” which makes a convincing case that Thiel is actually more conservative than libertarian, and probably quite religious.
  • I mean, the amounts of wealth and success and power that Silicon Valley has — I do think some people pinch themselves and wonder if they have been one of the chosen ones in some ways, or been blessed in some ways, or maybe the nerdy version of it, think they’re living in a simulation of some kind.
  • Peter Thiel is a sort of template of the V.C. mind, which is oriented toward being right in important and counterintuitive ways three out of 20 times and doesn’t care about being wrong 17 out of 20 times. You want big payouts, not a high betting average.
  • The two things that you hear from every V.C. — one is the importance of the longer time horizons. You’re making investments that might not pay off for 10 or 15 years.
  • No. 2, even more important, is the asymmetric ability to bet on the upside. They are all terrified because they all had an experience early in their career where Mark Zuckerberg or Larry Page or Sergey Brin walked through their door, and they didn’t give them funding and then they wound up missing on an investment that paid out at 100x or 1,000x or 10,000x. And so if you can only lose 1x your money, but you can make 1,000x if you have a successful company, then that changes your mind-set about everything. You want to avoid false negatives. You want to avoid missed opportunities.
  • There’s a bias in the world you’re describing, an aesthetic around talking in probabilities. I see this a lot in Silicon Valley. I would call it faux Bayesian reasoning, where they give some probability but have no basis for it. And it makes you sound much more precise. It makes you sound like you know what you’re talking about
  • SBF was known for always talking in terms of expected value, which is very appealing to the kinds of people you’re describing. But it can become a costume of sloppy thinking. I’m curious how you think about it.
  • There’s two things here. One is there is a jargon, where there’re just a lot of shared cultural norms and unspoken discursive tendencies. It’s just the way we communicate, I think, in the river. But also, it’s really easy to build bad models.
  • Look, in some ways, these V.C.s are obviously incredibly deeply flawed people. So why do they succeed despite that?
  • I think because the idea of having a longer time horizon, No. 1, and being willing to make these positive expected value, high-risk, but very high upside bets and gathering a portfolio of them repeatedly and making enough of these bets that you effectively do hedge your risk, right? Those two ideas are so good that it makes up for the fact that these guys often have terrible judgment and are kind of vainglorious assholes, half of them, right?
  • I want to end on a part of your book I found really interesting, which is about the physical experience of risk in gambling but in other things, too. You talk about pain tolerance, you talk about how the body feels when you’re behind on a hand and you’re losing your chips
  • You’ve talked about being on tilt. But I see it in politics, too. There is a physical question that comes into the decisions you make. Tell me a bit about how you think about this relationship between the body and the ability to act under pressure to make intuitive decisions in moments of very high stress.
  • Human beings have tens of thousands of years of evolutionary pressure, which is inclined to respond in a heightened way to moments that are high stakes, that are high stress moments. Your body will know when you’re playing a $100, $200 game where it really matters. You will just know
  • You’ll experience that stress. Even if you suppress it consciously, it will still affect the way that you’re literally ingesting your five senses. So if your heart rate goes up, that has discernible effects.
  • But actually your body’s providing you with more information. You’re taking in more in these short bursts of time. People who can master that zone — and I use the term zone intentionally because it’s very related to being in the zone, like Michael Jordan used to talk about — learning to master that and relish that is a very powerful skill because you are experiencing physical stress whether you want to or not
  • How much is that learnable, and how much of it is a kind of natural physical intelligence that some people have and some people don’t?
  • I think it’s actually quite learnable
  • you can tone it up or tone it down. I mean, it’s terrifying the first time it happens, but when you start to recognize it and you make a conscious effort to slow down a little bit and take your time and try to execute the basics
  • It’s not as much about trying to be a hero. It’s about trying to execute the basics. Because if everyone is losing their [expletive], if you can do your basic ABC blocking and tackling, then you’re ahead of 95 percent of the people.
  • is gut instinct overrated or underrated? It depends on how much experience you have. The best poker players can have uncannily good instincts based on reading physical tells, just the kind of vibe someone gives off.
  • I played a lot of poker in writing this book, and you develop a sixth sense for whether someone has a strong hand. And you can test it because you can say, I know that I’m supposed to fold this hand here, it’s a little bit too weak to call against a bluff, but I just have a sense that he’s bluffing. And lo and behold, you’re right, more often than you think.
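The asymmetric-payout arithmetic in the venture-capital annotations above (right 3 out of 20 times, 100x payouts) can be sketched as follows. The numbers are the illustrative ones from the interview, not a model of any real fund:

```python
import random

def vc_portfolio_return(n_bets=20, hit_rate=3 / 20, payout=100.0, seed=0):
    """Simulate the asymmetric-bet strategy: each bet risks 1 unit;
    misses lose everything, hits return `payout` units."""
    rng = random.Random(seed)
    won = sum(payout for _ in range(n_bets) if rng.random() < hit_rate)
    return won - n_bets  # net profit over n_bets bets of 1 unit each

# Analytically: EV per bet = 0.15 * 100 - 1 = +14 units,
# even though 17 out of 20 bets lose everything.
ev_per_bet = (3 / 20) * 100.0 - 1.0
print(ev_per_bet)  # 14.0
```

This is why missed opportunities (false negatives), not losing bets, dominate the V.C. calculus: a single miss forgoes up to 100 units, while any losing bet costs only 1.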