Home/ TOK Friends/ Group items tagged calibration


Javier E

Why Is It So Hard to Be Rational? | The New Yorker - 0 views

  • an unusually large number of books about rationality were being published this year, among them Steven Pinker’s “Rationality: What It Is, Why It Seems Scarce, Why It Matters” (Viking) and Julia Galef’s “The Scout Mindset: Why Some People See Things Clearly and Others Don’t” (Portfolio).
  • When the world changes quickly, we need strategies for understanding it. We hope, reasonably, that rational people will be more careful, honest, truthful, fair-minded, curious, and right than irrational ones.
  • And yet rationality has sharp edges that make it hard to put at the center of one’s life
  • You might be well-intentioned, rational, and mistaken, simply because so much in our thinking can go wrong. (“RATIONAL, adj.: Devoid of all delusions save those of observation, experience and reflection,”
  • You might be rational and self-deceptive, because telling yourself that you are rational can itself become a source of bias. It’s possible that you are trying to appear rational only because you want to impress people; or that you are more rational about some things (your job) than others (your kids); or that your rationality gives way to rancor as soon as your ideas are challenged. Perhaps you irrationally insist on answering difficult questions yourself when you’d be better off trusting the expert consensus.
  • Not just individuals but societies can fall prey to false or compromised rationality. In a 2014 book, “The Revolt of the Public and the Crisis of Authority in the New Millennium,” Martin Gurri, a C.I.A. analyst turned libertarian social thinker, argued that the unmasking of allegedly pseudo-rational institutions had become the central drama of our age: people around the world, having concluded that the bigwigs in our colleges, newsrooms, and legislatures were better at appearing rational than at being so, had embraced a nihilist populism that sees all forms of public rationality as suspect.
  • modern life would be impossible without those rational systems; we must improve them, not reject them. We have no choice but to wrestle with rationality—an ideal that, the sociologist Max Weber wrote, “contains within itself a world of contradictions.”
  • Where others might be completely convinced that G.M.O.s are bad, or that Jack is trustworthy, or that the enemy is Eurasia, a Bayesian assigns probabilities to these propositions. She doesn’t build an immovable world view; instead, by continually updating her probabilities, she inches closer to a more useful account of reality. The cooking is never done.
  • Rationality is one of humanity’s superpowers. How do we keep from misusing it?
  • Start with the big picture, fixing it firmly in your mind. Be cautious as you integrate new information, and don’t jump to conclusions. Notice when new data points do and do not alter your baseline assumptions (most of the time, they won’t alter them), but keep track of how often those assumptions seem contradicted by what’s new. Beware the power of alarming news, and proceed by putting it in a broader, real-world context.
  • Bayesian reasoning implies a few “best practices.”
  • Keep the cooked information over here and the raw information over there; remember that raw ingredients often reduce over heat
  • We want to live in a more rational society, but not in a falsely rationalized one. We want to be more rational as individuals, but not to overdo it. We need to know when to think and when to stop thinking, when to doubt and when to trust.
  • But the real power of the Bayesian approach isn’t procedural; it’s that it replaces the facts in our minds with probabilities.
  • Applied to specific problems—Should you invest in Tesla? How bad is the Delta variant?—the techniques promoted by rationality writers are clarifying and powerful.
  • the rationality movement is also a social movement; rationalists today form what is sometimes called the “rationality community,” and, as evangelists, they hope to increase its size.
  • In “Rationality,” “The Scout Mindset,” and other similar books, irrationality is often presented as a form of misbehavior, which might be rectified through education or socialization.
  • Greg tells me that, in his business, it’s not enough to have rational thoughts. Someone who’s used to pondering questions at leisure might struggle to learn and reason when the clock is ticking; someone who is good at reaching rational conclusions might not be willing to sign on the dotted line when the time comes. Greg’s hedge-fund colleagues describe as “commercial”—a compliment—someone who is not only rational but timely and decisive.
  • You can know what’s right but still struggle to do it.
  • Following through on your own conclusions is one challenge. But a rationalist must also be “metarational,” willing to hand over the thinking keys when someone else is better informed or better trained. This, too, is harder than it sounds.
  • For all this to happen, rationality is necessary, but not sufficient. Thinking straight is just part of the work. 
  • I found it possible to be metarational with my dad not just because I respected his mind but because I knew that he was a good and cautious person who had my and my mother’s best interests at heart.
  • between the two of us, we had the right ingredients—mutual trust, mutual concern, and a shared commitment to reason and to act.
  • Intellectually, we understand that our complex society requires the division of both practical and cognitive labor. We accept that our knowledge maps are limited not just by our smarts but by our time and interests. Still, like Gurri’s populists, rationalists may stage their own contrarian revolts, repeatedly finding that no one’s opinions but their own are defensible. In letting go, as in following through, one’s whole personality gets involved.
  • in truth, it maps out a series of escalating challenges. In search of facts, we must make do with probabilities. Unable to know it all for ourselves, we must rely on others who care enough to know. We must act while we are still uncertain, and we must act in time—sometimes individually, but often together.
  • The realities of rationality are humbling. Know things; want things; use what you know to get what you want. It sounds like a simple formula.
  • The real challenge isn’t being right but knowing how wrong you might be. (Joshua Rothman, August 16, 2021)
  • Writing about rationality in the early twentieth century, Weber saw himself as coming to grips with a titanic force—an ascendant outlook that was rewriting our values. He talked about rationality in many different ways. We can practice the instrumental rationality of means and ends (how do I get what I want?) and the value rationality of purposes and goals (do I have good reasons for wanting what I want?). We can pursue the rationality of affect (am I cool, calm, and collected?) or develop the rationality of habit (do I live an ordered, or “rationalized,” life?).
  • Weber worried that it was turning each individual into a “cog in the machine,” and life into an “iron cage.” Today, rationality and the words around it are still shadowed with Weberian pessimism and cursed with double meanings. You’re rationalizing the org chart: are you bringing order to chaos, or justifying the illogical?
  • For Aristotle, rationality was what separated human beings from animals. For the authors of “The Rationality Quotient,” it’s a mental faculty, parallel to but distinct from intelligence, which involves a person’s ability to juggle many scenarios in her head at once, without letting any one monopolize her attention or bias her against the rest.
  • In “The Rationality Quotient: Toward a Test of Rational Thinking” (M.I.T.), from 2016, the psychologists Keith E. Stanovich, Richard F. West, and Maggie E. Toplak call rationality “a torturous and tortured term,” in part because philosophers, sociologists, psychologists, and economists have all defined it differently
  • Galef, who hosts a podcast called “Rationally Speaking” and co-founded the nonprofit Center for Applied Rationality, in Berkeley, barely uses the word “rationality” in her book on the subject. Instead, she describes a “scout mindset,” which can help you “to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course.” (The “soldier mindset,” by contrast, encourages you to defend your positions at any cost.)
  • Galef tends to see rationality as a method for acquiring more accurate views.
  • Pinker, a cognitive and evolutionary psychologist, sees it instrumentally, as “the ability to use knowledge to attain goals.” By this definition, to be a rational person you have to know things, you have to want things, and you have to use what you know to get what you want.
  • Introspection is key to rationality. A rational person must practice what the neuroscientist Stephen Fleming, in “Know Thyself: The Science of Self-Awareness” (Basic Books), calls “metacognition,” or “the ability to think about our own thinking”—“a fragile, beautiful, and frankly bizarre feature of the human mind.”
  • A successful student uses metacognition to know when he needs to study more and when he’s studied enough: essentially, parts of his brain are monitoring other parts.
  • In everyday life, the biggest obstacle to metacognition is what psychologists call the “illusion of fluency.” As we perform increasingly familiar tasks, we monitor our performance less rigorously; this happens when we drive, or fold laundry, and also when we think thoughts we’ve thought many times before
  • The trick is to break the illusion of fluency, and to encourage an “awareness of ignorance.”
  • metacognition is a skill. Some people are better at it than others. Galef believes that, by “calibrating” our metacognitive minds, we can improve our performance and so become more rational
  • There are many calibration methods
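One generic calibration method (a sketch of the idea only; the article does not name a specific method here) is to record confidence-tagged predictions and then check how often each confidence level actually pans out:

```python
# Hypothetical predictions: (stated confidence that a claim is true, outcome).
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, False),
]

def calibration_report(preds):
    """For each confidence level, report the fraction of claims that were true."""
    buckets = {}
    for confidence, outcome in preds:
        buckets.setdefault(confidence, []).append(outcome)
    return {c: sum(o) / len(o) for c, o in sorted(buckets.items())}

print(calibration_report(predictions))
# A well-calibrated thinker's 0.9 claims come true about 90% of the time;
# here the 0.9 bucket hits only 2 of 3, a sign of overconfidence.
```

Tracking the gap between stated confidence and observed hit rate, over many predictions, is what "calibrating" a metacognitive mind amounts to in practice.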
  • Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge.
  • Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps.
  • The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes
  • So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.
  • the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preëxisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it.
  • Bayesian reasoning is an approach to statistics, but you can use it to interpret all sorts of new information.
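The update rule these excerpts describe can be made concrete in a few lines (a hypothetical illustration with made-up numbers, not an example from the article):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Blend one piece of new evidence into a prior probability via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Start with a 1% prior that some claim is true.
p = 0.01
# Each corroborating report is four times likelier if the claim is true.
for _ in range(3):
    p = bayes_update(p, 0.8, 0.2)

print(round(p, 3))  # three reports move the prior from 0.01 to 0.393
```

Note how the prior is modified rather than replaced: three pieces of moderately strong evidence raise the probability substantially, yet still leave real room for doubt. The cooking is never done.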
Emily Freilich

The Man Who Would Teach Machines to Think - James Somers - The Atlantic - 1 views

  • Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
  • “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done.”
  • Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
  • “It depends on what you mean by artificial intelligence.”
  • Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
  • Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter.
  • How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
  • Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.”
  • In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself.
  • But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.
  • By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications.
  • Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force.
  • Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”
  • AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?
  • It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something
  • “Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes”
  • How do you make a search engine that understands if you don’t know how you understand?
  • That’s what it means to understand. But how does understanding work?
  • analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
  • there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
  • in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,”
  • For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington
  • The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited
  • A project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits.
  • Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG.
  • Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why.
  • When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
  • But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.
  • “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
  • “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.
  • So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward.
  • The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence
  • What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.)
  • By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result.
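The flavor of this data-driven calibration can be caricatured with a toy co-occurrence model (a deliberately crude sketch with invented sentence pairs; Candide’s actual statistical machinery was far more sophisticated):

```python
from collections import Counter, defaultdict

# A tiny stand-in for a parallel corpus of (English, French) sentence pairs.
pairs = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the blue sky", "le ciel bleu"),
    ("my house", "ma maison"),
]

# Count how often each English word appears alongside each French word.
cooccurrence = defaultdict(Counter)
for english, french in pairs:
    for e in english.split():
        for f in french.split():
            cooccurrence[e][f] += 1

def best_guess(word):
    """Guess a translation: the French word most often seen with this one."""
    return cooccurrence[word].most_common(1)[0][0]

print(best_guess("house"))  # "maison" co-occurs with "house" most often
```

Feeding in more pairs sharpens the counts, which is the sense in which such a machine is “calibrated” by data rather than programmed with rules about language.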
  • Google Translate team can be made up of people who don’t speak most of the languages their application translates. “It’s a bang-for-your-buck argument,” Estelle says. “You probably want to hire more engineers instead” of native speakers.
  • But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself.
  • “Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”
  • For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy.
  • “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something.
  • Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
  • Of course, the folly of being above the fray is that you’re also not a part of it
  • As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
  • Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything.”
  • Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas.
  • Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”
Javier E

Are the New 'Golden Age' TV Shows the New Novels? - NYTimes.com - 0 views

  • it’s become common to hear variations on the idea that quality cable TV shows are the new novels.
  • Thomas Doherty, writing in The Chronicle of Higher Education, called the new genre “Arc TV” — because its stories follow long, complex arcs of development — and insisted that “at its best, the world of Arc TV is as exquisitely calibrated as the social matrix of a Henry James novel.”
  • Mixed feelings about literature — the desire to annex its virtues while simultaneously belittling them — are typical of our culture today, which doesn’t know quite how to deal with an art form, like the novel, that is both democratic and demanding.
  • comparing even the best TV shows with Dickens, or Henry James, also suggests how much the novel can achieve that TV doesn’t even attempt.
  • Television gives us something that looks like a small world, made by a group of people who are themselves a small world. The novel gives us sounds pinned down by hieroglyphs, refracted flickerings inside an individual.
  • Spectacle and melodrama remain at the heart of TV, as they do with all arts that must reach a large audience in order to be economically viable. But it is voice, tone, the sense of the author’s mind at work, that are the essence of literature, and they exist in language, not in images.
  • At this point in our technological evolution, to read a novel is to engage in probably the second-largest single act of pleasure-based data transfer that can take place between two human beings, exceeded only by sex. Novels are characterized by their intimacy, which is extreme, by their scale, which is vast, and by their form, which is linguistic and synesthetic. The novel is a kinky beast.
  • Televised evil, for instance, almost always takes melodramatic form: Our anti-heroes are mobsters, meth dealers or terrorists. But this has nothing to do with the way we encounter evil in real life, which is why a character like Gilbert Osmond, in “The Portrait of a Lady,” is more chilling in his bullying egotism than Tony Soprano
  • television and the novel travel in opposite directions.
Javier E

To Justify Every 'A,' Some Professors Hand Over Grading Power to Outsiders - Technology... - 0 views

  • The best way to eliminate grade inflation is to take professors out of the grading process: Replace them with professional evaluators who never meet the students, and who don't worry that students will punish harsh grades with poor reviews. That's the argument made by leaders of Western Governors University, which has hired 300 adjunct professors who do nothing but grade student work.
  • These efforts raise the question: What if professors aren't that good at grading? What if the model of giving instructors full control over grades is fundamentally flawed? As more observers call for evidence of college value in an era of ever-rising tuition costs, game-changing models like these are getting serious consideration.
  • Professors do score poorly when it comes to fair grading, according to a study published in July in the journal Teachers College Record. After crunching the numbers on decades' worth of grade reports from about 135 colleges, the researchers found that average grades have risen for 30 years, and that A is now the most common grade given at most colleges. The authors, Stuart Rojstaczer and Christopher Healy, argue that a "consumer-based approach" to higher education has created subtle incentives for professors to give higher marks than deserved. "The standard practice of allowing professors free rein in grading has resulted in grades that bear little relation to actual performance," the two professors concluded.
  • Western Governors is entirely online, for one thing. Technically it doesn't offer courses; instead it provides mentors who help students prepare for a series of high-stakes homework assignments. Those assignments are designed by a team of professional test-makers to prove competence in various subject areas. The idea is that as long as students can leap all of those hurdles, they deserve degrees, whether or not they've ever entered a classroom, watched a lecture video, or participated in any other traditional teaching experience. The model is called "competency-based education."
  • Ms. Johnson explains that Western Governors essentially splits the role of the traditional professor into two jobs. Instructional duties fall to a group the university calls "course mentors," who help students master material. The graders, or evaluators, step in once the homework is filed, with the mind-set of, "OK, the teaching's done, now our job is to find out how much you know," says Ms. Johnson. They log on to a Web site called TaskStream and pluck the first assignment they see. The institution promises that every assignment will be graded within two days of submission.
  • Western Governors requires all evaluators to hold at least a master's degree in the subject they're grading.
  • Evaluators are required to write extensive comments on each task, explaining why the student passed or failed to prove competence in the requisite skill. No letter grades are given—students either pass or fail each task.
  • Another selling point is the software's fast response rate. It can grade a batch of 1,000 essay tests in minutes. Professors can set the software to return the grade immediately and can give students the option of making revisions and resubmitting their work on the spot.
  • All evaluators initially receive a month of training, conducted online, about how to follow each task's grading guidelines, which lay out characteristics of a passing score.
  • Other evaluators want to push talented students to do more than the university's requirements for a task, or to allow a struggling student to pass if he or she is just under the bar. "Some people just can't acclimate to a competency-based environment," says Ms. Johnson. "I tell them, If they don't buy this, they need to not be here."
  • She and some teaching assistants scored the tests by hand and compared their performance with the computer's.
  • The graduate students became fatigued and made mistakes after grading several tests in a row, she told me, "but the machine was right-on every time."
  • He argues that students like the idea that their tests are being evaluated in a consistent way.
  • The graders must regularly participate in "calibration exercises," in which they grade a simulated assignment to make sure they are all scoring consistently. As the phrase suggests, the process is designed to run like a well-oiled machine.
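A calibration exercise of this kind can be approximated by scoring each grader's simulated assignment against a reference key (a hypothetical sketch; the article does not describe the university's actual tooling):

```python
# Rubric decisions (1 = competent, 0 = not) on one simulated assignment.
reference_key = [1, 1, 0, 1, 0]
graders = {
    "grader_a": [1, 1, 0, 1, 0],
    "grader_b": [1, 0, 0, 1, 1],
}

def agreement(scores, key):
    """Fraction of rubric items scored the same way as the reference key."""
    return sum(s == k for s, k in zip(scores, key)) / len(key)

for name, scores in graders.items():
    print(name, agreement(scores, reference_key))
# grader_a matches the key exactly (1.0); grader_b agrees on 3 of 5 (0.6).
```

Graders whose agreement drifts below a threshold would be flagged for retraining, which is what keeps the scoring consistent across hundreds of evaluators.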
  • He said once students get essays back instantly, they start to view essay tests differently. "It's almost like a big math problem. You don't expect to get everything right the first time, but you work through it."
  • robot grading is the hottest trend in testing circles, says Jacqueline Leighton, a professor of educational psychology at the University of Alberta who edits the journal Educational Measurement: Issues and Practice. Companies building essay-grading robots include the Educational Testing Service, which sells e-rater, and Pearson Education, which makes Intelligent Essay Assessor. "The research is promising, but they're still very much in their infancy," Ms. Leighton says.
Javier E

A Vote for Reason - NYTimes.com - 1 views

  • In Haidt’s view, the philosophers’ dream of reason isn’t just naïve, it is radically unfounded, the product of what he calls “the rationalist delusion.” As he puts it, “Anyone who values truth should stop worshiping reason.”
  • According to Haidt, not only are value judgments less often a product of rational deliberation than we’d like to think, that is how we are supposed to function. That it is how we are hardwired by evolution. In the neuroscientist Drew Westen’s words, the political brain is the emotional brain.
  • Indeed, reason sometimes seems simply beside the point. Consider some of Haidt’s own well-known research on “moral dumbfounding.”
  • Haidt suggests that this means that whatever reasons they could come up with seem to be just along for the ride: it was their feelings doing the work of judgment.
  • The inability of people — in particular young college students like those in Haidt’s study — to be immediately articulate about why they’ve made an intuitive judgment doesn’t necessarily show that their judgment is the outcome of non-rational process, or even that they lack reasons for their view. Intuitions, moral or otherwise, can be the result of sources that can be rationally evaluated and calibrated.
  • Moreover, rational deliberation is not a switch to be thrown on or off. It is a process, and therefore many of its effects would have to be measured over time.
  • as other studies have suggested when people are given more time to reflect, they can change their beliefs to fit the evidence, even when those beliefs might be initially emotionally uncomfortable to them.
  • it seems downright likely that rational deliberation is going to be involved in the creation of new moral concepts — such as human rights. In short, to show that reasons have no role in value judgments, we would need to show that they have no role in changes in moral views over time.
  • Haidt takes from this a general lesson about the value of defending our views with reasons. Just as those who do the “right” thing are not really motivated by a desire for justice, those who defend their views with reasons are not “really” after the truth.
  • even if appeals to evidence are sometimes effective in changing our political values over time, that’s only because reasons themselves are aimed at manipulating others into agreeing with us, not uncovering the fact
  • Even if we could start seeing ourselves as giving reasons only to manipulate, it is unclear that we should.  To see ourselves as Glauconians is to treat the exchange of reasons as a slow-moving, less effective version of the political correctness drug I mentioned at the outset. And we are right to recoil from that. It is a profoundly undemocratic idea.
  • To engage in democratic politics means seeing your fellow citizens as equal autonomous agents capable of making up their own minds. And that means that in a functioning democracy, we owe one another reasons for our political actions. And obviously these reasons can’t be “reasons” of force and manipulation,
  • Glauconians are marketers; persuasion is the game and truth is beside the point. But once we begin to see ourselves — and everyone else — in this way, we cease seeing one another as equal participants in the democratic enterprise. We are only pieces to be manipulated on the board.
  • to see one another as reason-givers doesn’t mean we must perceive one another as emotionless, unintuitive robots. It is consistent with the idea, rightly emphasized by Haidt, that much rapid-fire decision making comes from the gut. But it is also consistent with the idea that we can get better at spotting when the gut is leading us astray, even if the process is slower and more ponderous than we’d like
Javier E

The teaching of economics gets an overdue overhaul - 0 views

  • Change, however, has been slow to reach the university economics curriculum. Many institutions still pump students through introductory courses untainted by recent economic history or the market shortcomings it illuminates.
  • A few plucky reformers are working to correct that: a grand and overdue idea. Overhauling the way economics is taught ought to produce students more able to understand the modern world. Even better, it should improve economics itself.
  • Yet the standard curriculum is hardly calibrated to impart these lessons. Most introductory texts begin with the simplest of models. Workers are paid according to their productivity; trade never makes anyone worse off; and government interventions in the market always generate a “deadweight loss”. Practising economists know that these statements are more true at some times than others
  • ...17 more annotations...
  • Economics teaches that incentives matter and trade-offs are unavoidable. It shows how naive attempts to fix social problems, from poverty to climate change, can have unintended consequences. Introductory economics, at its best, enables people to see the unstated assumptions and hidden costs behind the rosy promises of politicians and businessmen.
  • “The Economy”, as the book is economically titled, covers the usual subjects, but in a very different way. It begins with the biggest of big pictures, explaining how capitalism and industrialisation transformed the world, inviting students to contemplate how it arrived at where it is today.
  • Students pay $300 or more for textbooks explaining that in competitive markets the price of a good should fall to the cost of producing an additional unit, and unsurprisingly regurgitate the expected answers. A study of 170 economics modules taught at seven universities found that marks in exams favoured the ability to “operate a model” over proofs of independent judgment.
  • A Chilean professor, Oscar Landerretche, worked with other economists to design a new curriculum. He, Sam Bowles, of the Santa Fe Institute, Wendy Carlin, of University College London (UCL), and Margaret Stevens, of Oxford University, painstakingly knitted contributions from economists around the world into a text that is free, online and offers interactive charts and videos of star economists. That text is the basis of economics modules taught by a small but growing number of instructors.
  • That could mean, eventually, a broader array of perspectives within economics departments, bigger and bolder research questions—and fewer profession-shaking traumas in future.
  • Messy complications, from environmental damage to inequality, are placed firmly in the foreground.
  • It explains cost curves, as other introductory texts do, but in the context of the Industrial Revolution, thus exposing students to debates about why industrialisation kicked off when and where it did.
  • But the all-important exceptions are taught quite late in the curriculum—or, often, only in more advanced courses taken by those pursuing an economics degree.
  • “The Economy” does not dumb down economics; it uses maths readily, keeping students engaged through the topicality of the material. Quite early on, students have lessons in the weirdness in economics—from game theory to power dynamics within firms—that makes the subject fascinating and useful but are skimmed over in most introductory courses.
  • Homa Zarghamee, also at Barnard, appreciates having to spend less time “unteaching”, ie, explaining to students why the perfect-competition result they learned does not actually hold in most cases. A student who does not finish the course will not be left with a misleading idea of economics, she notes.
  • Thomas Malthus’s ideas are used to teach students the uses and limitations of economic models, combining technical instruction with a valuable lesson from the history of economic thought.
  • Far from an unintended result of ill-conceived policies, she argues, the roughly 4m deaths from hunger in 1932 and 1933 were part of a deliberate campaign by Josef Stalin and the Bolshevik leadership to crush Ukrainian national aspirations, literally starving actual or potential bearers of those aspirations into submission to the Soviet order
  • The politics in this case was the Sovietisation of Ukraine; the means was starvation. Food supply was not mismanaged by Utopian dreamers. It was weaponised.
  • “Red Famine” presents a Bolshevik government so hell-bent on extracting wealth and controlling labour that it was willing to confiscate the last remaining grain from hungry peasants (mostly but not exclusively in Ukraine) and then block them from fleeing famine-afflicted areas to search for food.
  • Stalin was not only aware of the ensuing mass death (amounting to roughly 13% of Ukraine’s population). He actively sought to suppress knowledge of it (including banning the publication of census data), so as not to distract from the campaign to collectivise Soviet agriculture and extend the Communist Party’s reach into the countryside—a campaign Ms Applebaum calls a “revolution...more profound and more shocking than the original Bolshevik revolution itself”
  • The book’s most powerful passages describe the moral degradation that resulted from sustained hunger, as family solidarity and village traditions of hospitality withered in the face of the overwhelming desire to eat. Under a state of siege by Soviet authorities, hunger-crazed peasants took to consuming grass, animal hides, manure and occasionally each other. People became indifferent to the sight of corpses lying in streets, and eventually to their own demise
  • While stressing Stalin’s goal of crushing Ukrainian nationalism, moreover, Ms Applebaum passes over a subtler truth. For along with its efforts to root out “bourgeois” nationalisms, the Kremlin relentlessly promoted a Soviet version of Ukrainian identity, as it did with most other ethnic minorities. Eight decades on, that legacy has done even more to shape today’s Ukraine than the Holodomor.
kaylynfreeman

Opinion | How Fear Distorts Our Thinking About the Coronavirus - The New York Times - 0 views

  • When it comes to making decisions that involve risks, we humans can be irrational in quite systematic ways — a fact that the psychologists Amos Tversky and Daniel Kahneman famously demonstrated with the help of a hypothetical situation, eerily apropos of today’s coronavirus epidemic, that has come to be known as the Asian disease problem.
  • This is irrational because the two questions don’t differ mathematically. In both cases, choosing the first option means accepting the certainty that 200 people live, and choosing the second means embracing a one-third chance that all could be saved with an accompanying two-thirds chance that all will die. Yet in our minds, Professors Tversky and Kahneman explained, losses loom larger than gains, and so when the options are framed in terms of deaths rather than cures, we’ll accept more risks to try to avoid deaths.
  • Our decision making is bad enough when the disease is hypothetical. But when the disease is real — when we see actual death tolls climbing daily, as we do with the coronavirus — another factor besides our sensitivity to losses comes into play: fear.
  • ...1 more annotation...
  • The brain states we call emotions exist for one reason: to help us decide what to do next. They reflect our mind’s predictions for what’s likely to happen in the world and therefore serve as an efficient way to prepare us for it. But when the emotions we feel aren’t correctly calibrated for the threat or when we’re making judgments in domains where we have little knowledge or relevant information, our feelings become more likely to lead us astray.
pier-paolo

Opinion | How Fear Distorts Our Thinking About the Coronavirus - The New York Times - 0 views

  • When it comes to making decisions that involve risks, we humans can be irrational in quite systematic ways
  • asked people to imagine that the United States was preparing for an outbreak of an unusual Asian disease that was expected to kill 600 citizens. To combat the disease, people could choose between two options: a treatment that would ensure 200 people would be saved or one that had a 33 percent chance of saving all 600 but a 67 percent chance of saving none. Here, a clear favorite emerged: Seventy-two percent chose the former.
  • when Professors Tversky and Kahneman framed the question differently, such that the first option would ensure that only 400 people would die and the second option offered a 33 percent chance that nobody would perish and a 67 percent chance that all 600 would die, people’s preferences reversed. Seventy-eight percent now favored the second option.
  • ...5 more annotations...
  • But when the disease is real — when we see actual death tolls climbing daily, as we do with the coronavirus — another factor besides our sensitivity to losses comes into play: fear.
  • when the emotions we feel aren’t correctly calibrated for the threat or when we’re making judgments in domains where we have little knowledge or relevant information, our feelings become more likely to lead us astray.
  • Using a nationally representative sample in the months following Sept. 11, 2001, the decision scientist Jennifer Lerner showed that feeling fear led people to believe that certain anxiety-provoking possibilities (for example, a terrorist strike) were more likely to occur.
  • we presented sad, angry or emotionally neutral people with a government proposal to raise taxes. In one version of the proposal, we said the increased revenue would be used to reduce “depressing” problems (like poor conditions in nursing homes). In the other, we focused on “angering” problems (like increasing crime because of a shortage of police officers).
  • when the emotions people felt matched the emotion of the rationales for the tax increase, their attitudes toward the proposal became more positive. But putting more effort into considering the proposal did not reduce this bias; it made it stronger.
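The mathematical equivalence of the two framings described in the excerpts above can be checked with a few lines of exact arithmetic. A minimal sketch (the variable names are ours, not from the column):

```python
from fractions import Fraction

TOTAL = 600  # citizens at risk in the hypothetical outbreak

# Gain frame: 200 saved for certain vs. a 1/3 chance of saving all 600.
sure_saved = 200
risky_expected_saved = Fraction(1, 3) * TOTAL

# Loss frame: 400 die for certain vs. a 2/3 chance that all 600 die.
sure_deaths = 400
risky_expected_deaths = Fraction(2, 3) * TOTAL

# The two frames are mathematically identical:
assert sure_saved == TOTAL - sure_deaths                      # 200 == 200
assert risky_expected_saved == TOTAL - risky_expected_deaths  # 200 == 200
print(sure_saved, risky_expected_saved)  # 200 200
```

Both options offer an expected 200 survivors in either frame; only the wording changes, which is why the preference reversal (72 percent for the sure thing, then 78 percent for the gamble) counts as irrational.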
ilanaprincilus06

How Elastic Is Your Brain? - The New York Times - 0 views

  • you are not merely your brain — your body and the broader circumstances of your life also make you who you are.
    • ilanaprincilus06
       
      The brain carries all of the information that determines who we are and also makes the decisions that determine who we are. Despite this, there are so many other organs, etc., that make us our unique selves.
  • we mythologize brains, creating false boundaries that divorce them from bodies and the outside world, blinding us to the biological nature of the mind.
  • ...6 more annotations...
  • These divisions, Jasanoff contends, are why neuroscience has failed to make a real difference in anyone’s life.
    • ilanaprincilus06
       
      We believe that our brain doesn't have many limitations, so we indulge in the false belief that we can do anything as a result.
  • our bodies and the world around us affect our thoughts, feelings and actions, but not how body and world become biologically embedded to constitute a mind.
  • a discussion of how the workings of your body necessarily and irrevocably shape your brain’s structure and function, and vice versa.
  • the experiences we have from infancy onward impact the brain’s wiring. For example, childhood poverty and adversity fundamentally alter brain development, leaving an indelible mark that increases people’s risk of illness in adulthood.
  • she tries to reduce her anxiety, expand her creativity, improve her math ability, calibrate her inner GPS and take control over her perception of the passing of time, using various brain-hacking techniques. Each chapter is a mini-redemption story, with Williams starting out skeptical and ending victorious.
  • treat their discomfort not as a damper but as a signal to press on. This is what Mlodinow calls “elastic thinking”
knudsenlu

A Tantalizing Signal From the Early Universe - The Atlantic - 0 views

  • Near the beginning, not long after the Big Bang, the universe was a cold and dark place swirling with invisible gas, mostly hydrogen and helium. Over millions of years, gravity pulled some of this primordial gas into pockets. The pockets eventually became so dense they collapsed under their own weight and ignited, flooding the darkness with ultraviolet radiation. These were the very first stars in the universe, flashing into existence like popcorn kernels unfurling in the hot oil of an empty pan.
  • Everything flowed from this cosmic dawn. The first stars illuminated the universe, collapsed into the black holes that keep galaxies together, and produced the heavy elements that would make planets and moons and the human beings that evolved to gaze upon it all.
  • This epoch in our cosmic history has long fascinated scientists. They hoped that someday, using technology that was calibrated just right, they could detect faint signals from that moment. Now, they think they’ve done it.
  • ...4 more annotations...
  • Astronomers said Wednesday they have found, for the first time, evidence of the earliest stars.
  • The nature of this signal suggests a new estimate for when the first stars emerged: about 180 million years after the Big Bang, slightly earlier than many scientists expected, but still within the expectations of theoretical models.
  • The nature of the radio waves Bowman and his colleagues detected mostly matches theoretical predictions, but not everything lines up. When they tuned their instrument to listen to the frequency for hydrogen gas that models predicted, they didn’t hear anything. When they decided to search in a lower range, they got it. But the signal they found was stronger than expected. That meant that the hydrogen gas in the early universe was much, much colder—perhaps nearly twice as cold—than previously estimated.
  • Bowman says other teams around the world have been working to build and design instruments to detect this signal from the early universe, and he expects they should be able to confirm the results in the coming months.
Javier E

Poker and Decision Making - 2 views

  • our tendency to judge decisions based on how they turn out, known in poker as “resulting.”
  • our strategy is often based on beliefs that can be biased or wrong. We are quick to form, and slow to update, our beliefs. We tend towards absolutes, and indulge in “motivated reasoning,” seeking out confirmation while ignoring contradictory evidence
  • solution is to embrace uncertainty by calibrating our confidence
  • ...4 more annotations...
  • Duke offers a road map for creating a group “decision pod” that can provide us with feedback. Focus on accuracy, accountability, and openness to diverse views. Set clear rules: Court dissent and differing perspectives, and take responsibility even when doing so is painful.
  • formed to improve viewpoint diversity in academia: Commit to transparency and sharing information; apply consistent standards to claims made by separating information from who is providing it; cultivate disinterestedness; seek “outcome blindness” to the hypothesis being tested; and encourage skepticism and dissent.
  • Duke explores how we can reduce conflict by shifting perspective among our past, present and futures selves via “mental time travel.” She suggests several techniques, including backcasting, premortems, and Ulysses contracts.
  • Duke also addresses how we overweight the present relative to the future. When we reach for a donut instead of an apple, we’re doing so at the expense of our future self
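Duke’s prescription to “embrace uncertainty by calibrating our confidence” can be made concrete with a proper scoring rule. A minimal sketch using the Brier score — the forecasts here are invented for illustration, not taken from the book:

```python
def brier_score(forecasts, outcomes):
    """Mean squared gap between stated probabilities and what actually happened.

    0.0 is a perfect record; always saying 50% earns 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Probabilities a bettor assigned to five events, and whether each occurred (1/0).
forecasts = [0.9, 0.8, 0.7, 0.6, 0.9]
outcomes  = [1,   1,   0,   1,   0]

print(round(brier_score(forecasts, outcomes), 3))  # 0.302
```

Tracking a score like this over many bets — rather than judging each decision by how it happened to turn out — is one way a “decision pod” can hold its members to the accuracy and accountability standards Duke describes.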
Javier E

Silicon Valley's Safe Space - The New York Times - 0 views

  • The roots of Slate Star Codex trace back more than a decade to a polemicist and self-described A.I. researcher named Eliezer Yudkowsky, who believed that intelligent machines could end up destroying humankind. He was a driving force behind the rise of the Rationalists.
  • Because the Rationalists believed A.I. could end up destroying the world — a not entirely novel fear to anyone who has seen science fiction movies — they wanted to guard against it. Many worked for and donated money to MIRI, an organization created by Mr. Yudkowsky whose stated mission was “A.I. safety.”
  • The community was organized and close-knit. Two Bay Area organizations ran seminars and high-school summer camps on the Rationalist way of thinking.
  • ...27 more annotations...
  • “The curriculum covers topics from causal modeling and probability to game theory and cognitive science,” read a website promising teens a summer of Rationalist learning. “How can we understand our own reasoning, behavior, and emotions? How can we think more clearly and better achieve our goals?”
  • Some lived in group houses. Some practiced polyamory. “They are basically just hippies who talk a lot more about Bayes’ theorem than the original hippies,” said Scott Aaronson, a University of Texas professor who has stayed in one of the group houses.
  • For Kelsey Piper, who embraced these ideas in high school, around 2010, the movement was about learning “how to do good in a world that changes very rapidly.”
  • Yes, the community thought about A.I., she said, but it also thought about reducing the price of health care and slowing the spread of disease.
  • Slate Star Codex, which sprung up in 2013, helped her develop a “calibrated trust” in the medical system. Many people she knew, she said, felt duped by psychiatrists, for example, who they felt weren’t clear about the costs and benefits of certain treatment.
  • That was not the Rationalist way.
  • “There is something really appealing about somebody explaining where a lot of those ideas are coming from and what a lot of the questions are,” she said.
  • Sam Altman, chief executive of OpenAI, an artificial intelligence lab backed by a billion dollars from Microsoft. He was effusive in his praise of the blog. It was, he said, essential reading among “the people inventing the future” in the tech industry.
  • Mr. Altman, who had risen to prominence as the president of the start-up accelerator Y Combinator, moved on to other subjects before hanging up. But he called back. He wanted to talk about an essay that appeared on the blog in 2014.The essay was a critique of what Mr. Siskind, writing as Scott Alexander, described as “the Blue Tribe.” In his telling, these were the people at the liberal end of the political spectrum whose characteristics included “supporting gay rights” and “getting conspicuously upset about sexists and bigots.”
  • But as the man behind Slate Star Codex saw it, there was one group the Blue Tribe could not tolerate: anyone who did not agree with the Blue Tribe. “Doesn’t sound quite so noble now, does it?” he wrote.
  • Mr. Altman thought the essay nailed a big problem: In the face of the “internet mob” that guarded against sexism and racism, entrepreneurs had less room to explore new ideas. Many of their ideas, such as intelligence augmentation and genetic engineering, ran afoul of the Blue Tribe.
  • Mr. Siskind was not a member of the Blue Tribe. He was not a voice from the conservative Red Tribe (“opposing gay marriage,” “getting conspicuously upset about terrorists and commies”). He identified with something called the Grey Tribe — as did many in Silicon Valley.
  • The Grey Tribe was characterized by libertarian beliefs, atheism, “vague annoyance that the question of gay rights even comes up,” and “reading lots of blogs,” he wrote. Most significantly, it believed in absolute free speech.
  • The essay on these tribes, Mr. Altman told me, was an inflection point for Silicon Valley. “It was a moment that people talked about a lot, lot, lot,” he said.
  • And in some ways, two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement.
  • In 2005, Peter Thiel, the co-founder of PayPal and an early investor in Facebook, befriended Mr. Yudkowsky and gave money to MIRI. In 2010, at Mr. Thiel’s San Francisco townhouse, Mr. Yudkowsky introduced him to a pair of young researchers named Shane Legg and Demis Hassabis. That fall, with an investment from Mr. Thiel’s firm, the two created an A.I. lab called DeepMind.
  • Like the Rationalists, they believed that A.I. could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.
  • In 2014, Google bought DeepMind for $650 million. The next year, Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community.
  • Mr. Aaronson, the University of Texas professor, was turned off by the more rigid and contrarian beliefs of the Rationalists, but he is one of the blog’s biggest champions and deeply admired that it didn’t avoid live-wire topics.
  • “It must have taken incredible guts for Scott to express his thoughts, misgivings and questions about some major ideological pillars of the modern world so openly, even if protected by a quasi-pseudonym,” he said
  • In late June of last year, not long after talking to Mr. Altman, the OpenAI chief executive, I approached the writer known as Scott Alexander, hoping to get his views on the Rationalist way and its effect on Silicon Valley. That was when the blog vanished.
  • The issue, it was clear to me, was that I told him I could not guarantee him the anonymity he’d been writing with. In fact, his real name was easy to find because people had shared it online for years and he had used it on a piece he’d written for a scientific journal. I did a Google search for Scott Alexander and one of the first results I saw in the auto-complete list was Scott Alexander Siskind.
  • More than 7,500 people signed a petition urging The Times not to publish his name, including many prominent figures in the tech industry. “Putting his full name in The Times,” the petitioners said, “would meaningfully damage public discourse, by discouraging private citizens from sharing their thoughts in blog form.” On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.
  • I spoke with Manoel Horta Ribeiro, a computer science researcher who explores social networks at the Swiss Federal Institute of Technology in Lausanne. He was worried that Slate Star Codex, like other communities, was allowing extremist views to trickle into the influential tech world. “A community like this gives voice to fringe groups,” he said. “It gives a platform to people who hold more extreme views.”
  • I assured her my goal was to report on the blog, and the Rationalists, with rigor and fairness. But she felt that discussing both critics and supporters could be unfair. What I needed to do, she said, was somehow prove statistically which side was right.
  • When I asked Mr. Altman if the conversation on sites like Slate Star Codex could push people toward toxic beliefs, he said he held “some empathy” for these concerns. But, he added, “people need a forum to debate ideas.”
  • In August, Mr. Siskind restored his old blog posts to the internet. And two weeks ago, he relaunched his blog on Substack, a company with ties to both Andreessen Horowitz and Y Combinator. He gave the blog a new title: Astral Codex Ten. He hinted that Substack paid him $250,000 for a year on the platform. And he indicated the company would give him all the protection he needed.