
Home/ TOK Friends/ Group items tagged critical theory


Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 1 views

  • Skinner's approach stressed the historical associations between a stimulus and the animal's response -- an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past.
  • Chomsky's conception of language, on the other hand, stressed the complexity of internal representations, encoded in the genome, and their maturation in light of the right data into a sophisticated computational system, one that cannot be usefully broken down into a set of associations.
  • Behaviorist principles of associations could not explain the richness of linguistic knowledge, our endlessly creative use of it, or how quickly children acquire it with only minimal and imperfect exposure to language presented by their environment.
  • David Marr, a neuroscientist colleague of Chomsky's at MIT, defined a general framework for studying complex biological systems (like the brain) in his influential book Vision,
  • a complex biological system can be understood at three distinct levels. The first level ("computational level") describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina and the output might be our brain's identification of the objects present in the image we had observed. The second level ("algorithmic level") describes the procedure by which an input is converted to an output, i.e. how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level ("implementation level") describes how our own biological hardware of cells implements the procedure described by the algorithmic level.
  • The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. The goal is to dig into the "black box" that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.
  • As written today, the history of cognitive science is a story of the unequivocal triumph of an essentially Chomskyian approach over Skinner's behaviorist paradigm -- an achievement commonly referred to as the "cognitive revolution,"
  • While this may be a relatively accurate depiction in cognitive science and psychology, behaviorist thinking is far from dead in related disciplines. Behaviorist experimental paradigms and associationist explanations for animal behavior are used routinely by neuroscientists
  • Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field's heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the "new AI" -- focused on using statistical learning techniques to better mine and predict data -- is unlikely to yield general principles about the nature of intelligent beings or about cognition.
  • Chomsky acknowledged that the statistical approach might have practical value, just as in the example of a useful search engine, and is enabled by the advent of fast computers capable of processing massive data. But as far as a science goes, Chomsky would argue it is inadequate, or more harshly, kind of shallow
  • An unlikely pair, systems biology and artificial intelligence both face the same fundamental task of reverse-engineering a highly complex system whose inner workings are largely a mystery
  • Implicit in this endeavor is the assumption that with enough sophisticated statistical tools and a large enough collection of data, signals of interest can be weeded out from the noise in large and poorly understood biological systems.
  • Brenner, a contemporary of Chomsky who also participated in the same symposium on AI, was equally skeptical about new systems approaches to understanding the brain. When describing an up-and-coming systems approach to mapping brain circuits called Connectomics, which seeks to map the wiring of all neurons in the brain (i.e. diagramming which nerve cells are connected to others), Brenner called it a "form of insanity."
  • These debates raise an old and general question in the philosophy of science: What makes a satisfying scientific theory or explanation, and how ought success be defined for science?
  • Ever since Isaiah Berlin's famous essay, it has become a favorite pastime of academics to place various thinkers and scientists on the "Hedgehog-Fox" continuum: the Hedgehog, a meticulous and specialized worker, driven by incremental progress in a clearly defined field versus the Fox, a flashier, ideas-driven thinker who jumps from question to question, ignoring field boundaries and applying his or her skills where they seem applicable.
  • Chomsky's work has had tremendous influence on a variety of fields outside his own, including computer science and philosophy, and he has not shied away from discussing and critiquing the influence of these ideas, making him a particularly interesting person to interview.
  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • it has been argued -- in my view rather plausibly, though neuroscientists don't like it -- that neuroscience for the last couple hundred years has been on the wrong track.
  • neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
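Marr's three levels, described in the annotations above, can be made concrete with a toy example. This sketch is my own illustrative analogy (the task and function names are not from the article): the same distinction between what a system computes, the procedure it uses, and the substrate that runs it, applied to sorting rather than to vision.

```python
# Toy illustration of Marr's three levels of analysis.
# Illustrative analogy only; the task and names are assumptions, not from the article.

# Computational level: WHAT task is performed?
# Input: an unordered list of numbers. Output: the same numbers in ascending order.
def task_spec(xs, ys):
    """Check that ys satisfies the computational-level specification for xs."""
    return sorted(xs) == ys

# Algorithmic level: HOW is input converted to output?
# One of many possible procedures that satisfy the computational-level spec.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)  # place x just before the first larger element
    return out

# Implementation level: what hardware runs the procedure? Here, CPython
# executing bytecode on a CPU; in the visual system, networks of cells.

result = insertion_sort([3, 1, 2])
print(result)                         # [1, 2, 3]
print(task_spec([3, 1, 2], result))   # True
```

The point of the analogy is Marr's: two systems can agree at the computational level (both sort) while differing at the algorithmic level (insertion sort vs. quicksort), and agree algorithmically while differing in implementation.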
Javier E

Writing, Typing, and Economics - The Atlantic - 0 views

  • The first lesson would have to do with the all-important issue of inspiration. All writers know that on some golden mornings they are touched by the wand — are on intimate terms with poetry and cosmic truth. I have experienced those moments myself. Their lesson is simple: It's a total illusion.
  • And the danger in the illusion is that you will wait for those moments. Such is the horror of having to face the typewriter that you will spend all your time waiting. I am persuaded that most writers, like most shoemakers, are about as good one day as the next (a point which Trollope made), hangovers apart. The difference is the result of euphoria, alcohol, or imagination. The meaning is that one had better go to his or her typewriter every morning and stay there regardless of the seeming result. It will be much the same.
  • Writers, in contrast, do nothing because they are waiting for inspiration. In my own case there are days when the result is so bad that no fewer than five revisions are required. However, when I'm greatly inspired, only four revisions are needed before, as I've often said, I put in that note of spontaneity which even my meanest critics concede
  • It helps greatly in the avoidance of work to be in the company of others who are also waiting for the golden moment. The best place to write is by yourself, because writing becomes an escape from the terrible boredom of your own personality. It's the reason that for years I've favored Switzerland, where I look at the telephone and yearn to hear it ring.
  • There may be inspired writers for whom the first draft is just right. But anyone who is not certifiably a Milton had better assume that the first draft is a very primitive thing. The reason is simple: Writing is difficult work. Ralph Paine, who managed Fortune in my time, used to say that anyone who said writing was easy was either a bad writer or an unregenerate liar
  • Thinking, as Voltaire avowed, is also a very tedious thing which men—or women—will do anything to avoid. So all first drafts are deeply flawed by the need to combine composition with thought. Each later draft is less demanding in this regard. Hence the writing can be better
  • There does come a time when revision is for the sake of change—when one has become so bored with the words that anything that is different looks better. But even then it may be better.
  • the lesson of Harry Luce. No one who worked for him ever again escaped the feeling that he was there looking over one's shoulder. In his hand was a pencil; down on each page one could expect, any moment, a long swishing wiggle accompanied by the comment: "This can go." Invariably it could. It was written to please the author and not the reader. Or to fill in the space. The gains from brevity are obvious; in most efforts to achieve brevity, it is the worst and dullest that goes. It is the worst and dullest that spoils the rest.
  • as he grew older, he became less and less interested in theory, more and more interested in information.
  • Reluctantly, but from a long and terrible experience, I would urge my young writers to avoid all attempts at humor
  • Only a very foolish man will use a form of language that is wholly uncertain in its effect. That is the nature of humor
  • Finally, I would come to a matter of much personal interest, intensely self-serving. It concerns the peculiar pitfalls of the writer who is dealing with presumptively difficult or technical matters
  • Economics is an example, and within the field of economics the subject of money, with the history of which I have been much concerned, is an especially good case. Any specialist who ventures to write on money with a view to making himself intelligible works under a grave moral hazard. He will be accused of oversimplification. The charge will be made by his fellow professionals, however obtuse or incompetent
  • In the case of economics there are no important propositions that cannot be stated in plain language
  • Additionally, and especially in the social sciences, much unclear writing is based on unclear or incomplete thought
  • It is possible with safety to be technically obscure about something you haven't thought out. It is impossible to be wholly clear on something you do not understand. Clarity thus exposes flaws in the thought
Javier E

What's behind the confidence of the incompetent? This suddenly popular psychological ph... - 0 views

  • Someone who has very little knowledge in a subject claims to know a lot. That person might even boast about being an expert.
  • This phenomenon has a name: the Dunning-Kruger effect. It’s not a disease, syndrome or mental illness; it is present in everybody to some extent, and it’s been around as long as human cognition, though only recently has it been studied and documented in social psychology.
  • Charles Darwin followed that up in 1871 with “ignorance more frequently begets confidence than does knowledge.”
  • Put simply, incompetent people think they know more than they really do, and they tend to be more boastful about it.
  • To test Darwin’s theory, the researchers quizzed people on several topics, such as grammar, logical reasoning and humor. After each test, they asked the participants how they thought they did. Specifically, participants were asked how many of the other quiz-takers they beat.
  • Time after time, no matter the subject, the people who did poorly on the tests ranked their competence much higher
  • On average, test takers who scored as low as the 10th percentile ranked themselves near the 70th percentile. Those least likely to know what they were talking about believed they knew as much as the experts.
  • Dunning and Kruger’s results have been replicated in at least a dozen different domains: math skills, wine tasting, chess, medical knowledge among surgeons and firearm safety among hunters.
  • Even though President Trump’s statements are rife with errors, falsehoods or inaccuracies, he expresses great confidence in his aptitude. He says he does not read extensively because he solves problems “with very little knowledge other than the knowledge I [already] had.” He has said in interviews he doesn’t read lengthy reports because “I already know exactly what it is.”
  • He has “the best words” and cites his “high levels of intelligence” in rejecting the scientific consensus on climate change. Decades ago, he said he could end the Cold War: “It would take an hour and a half to learn everything there is to learn about missiles,” Trump told The Washington Post’s Lois Romano over dinner in 1984. “I think I know most of it anyway.”
  • Whether people want to understand “the other side” or they’re just looking for an epithet, the Dunning-Kruger effect works as both, Dunning said, which he believes explains the rise of interest.
  • Dunning says the effect is particularly dangerous when someone with influence or the means to do harm doesn’t have anyone who can speak honestly about their mistakes.
  • Not surprisingly (though no less concerning), Dunning’s follow-up research shows the poorest performers are also the least likely to accept criticism or show interest in self improvement.
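The percentile pattern in these annotations (10th-percentile scorers ranking themselves near the 70th) can be sketched with a toy simulation. This is my own sketch, not Dunning and Kruger's model or data: it assumes self-estimates are pulled toward the middle and perturbed by noise, which is enough to make the lowest scorers overestimate the most.

```python
# Toy model of miscalibrated self-assessment.
# Assumptions (mine, not from the article): estimates shrink toward the
# 50th percentile, with Gaussian noise; parameter values are arbitrary.
import random

random.seed(0)

def self_estimate(true_percentile, shrink=0.25, noise_sd=10.0):
    """Estimated own percentile: pulled toward 50, plus noise, clamped to [0, 100]."""
    est = 50 + shrink * (true_percentile - 50) + random.gauss(0, noise_sd)
    return max(0.0, min(100.0, est))

# Average self-estimate for bottom-decile vs. top-decile performers.
bottom = [self_estimate(10) for _ in range(10_000)]
top = [self_estimate(90) for _ in range(10_000)]
print(sum(bottom) / len(bottom))  # roughly 40: a large overestimate of 10
print(sum(top) / len(top))        # roughly 60: a mild underestimate of 90
```

Even this crude model reproduces the asymmetry the excerpt describes: compression toward the middle hurts calibration far more at the bottom of the distribution than at the top.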
Javier E

Praising Andy Warhol - NYTimes.com - 1 views

  • Peter Schjeldahl, for example, calls Warhol a “genius” and a “great artist” and even says that “the gold standard of Warhol exposes every inflated value in other currencies.”
  •   If Warhol is a great artist and these boxes are among his most important works, what am I missing?
  • Appreciations of Warhol’s boxes typically emphasize their effects rather than their appearance.  These appreciations take two quite different forms.
  • Warhol’s boxes are praised for subverting the distinction between mundane objects of everyday life and “art” in a museum.  As a result, we can enjoy and appreciate the things that make up our everyday life just as much as what we see in museums (and with far less effort).  Whereas the joys of traditional art typically require an initiation into an esoteric world of historical information and refined taste, Warhol’s “Pop Art” reveals the joys of what we all readily understand and appreciate.  As Danto put it, “Warhol’s intuition was that nothing an artist could do would give us more of what art sought than reality already gave us.”
  • Warhol’s work is also praised for posing a crucial philosophical question about art.  As Danto puts it: “Given two objects that look exactly alike, how is it possible for one of them to be a work of art and the other just an ordinary object?”  Answering this question requires realizing that there are no perceptual qualities that make something a work of art.  This in turn implies that anything, no matter how it looks, can be a work of art.
  • According to Danto, whether an object is a work of art depends on its relation to an “art world”:  “an atmosphere of artistic theory, a knowledge of the history of art” that exists at a particular time.
  • this explanation of Warhol’s greatness, contrary to the first one, makes art appreciation once again a matter of esoteric knowledge and taste, now focused on subtle philosophical puzzles about the nature of art.
  • it was Danto, not Warhol, who provided the intellectual/aesthetic excitement by formulating and developing a brilliant answer to the question.  To the extent that the philosophical question had artistic value in the context of the contemporary artworld,  Danto was more the artist than Warhol.
  • I agree that Warhol — along with many other artists from the 1950s on — opened up new ways of making art that traditional “high art” had excluded.  But new modes of artistic creation — commercial design techniques, performances, installations, conceptual art — do not guarantee a new kind or a higher quality of aesthetic experience. 
  • anything can be presented as a work of art.   But it does not follow that anything can produce a satisfying aesthetic experience.  The great works of the tradition do not circumscribe the sorts of things that can be art, but they are exemplars of what we expect a work of art to do to us.  (This is the sense in which, according to Kant, originally beautiful works of art are exemplary, yet without providing rules for further such works of art.)
  • Praise of Andy Warhol often emphasizes the new possibilities of artistic creation his work opened up.  That would make his work important in the history of art and for that reason alone of considerable interest.
  • as Jerrold Levinson and others have pointed out, a work can be an important artistic achievement without being an important aesthetic achievement.  This, I suggest, is how we should think about Warhol’s Brillo boxes.
Javier E

Declaration of Disruption - The New York Times - 0 views

  • A presidency characterized by pandemonium invades and infects that space, leaving people unsettled and on edge.
  • this, in turn, leads to greater polarization, to feelings of alienation and anger, to unrest and even to violence.
  • In short, chaotic leadership can inflict real trauma on political and civic culture.
  • Donald Trump, arguably the most disruptive and transgressive president in American history. He thrives on creating turbulence in every conceivable sphere. The blast radius of his tumultuous acts and chaotic temperament is vast
  • here’s the truly worrisome thing: The disruption is only going to increase, both because he’s facing criticism that seems to trigger him psychologically and because his theory of management involves the cultivation of chaos. He has shown throughout his life a defiant refusal to be disciplined. His disordered personality thrives on mayhem and upheaval, on vicious personal attacks and ceaseless conflict
  • We have as president the closest thing to a nihilist in our history — a man who believes in little or nothing, who has the impulse to burn down rather than to build up. When the president eventually faces a genuine crisis, his ignorance and inflammatory instincts will make everything worse.
  • Republican voters and politicians rallied around Mr. Trump in 2016, believing he was anti-establishment when in fact he was anti-order. He turns out to be an institutional arsonist. It is an irony of American history that the Republican Party, which has historically valued order and institutions, has become the conduit of chaos.
Javier E

Sex, Morality, and Modernity: Can Immanuel Kant Unite Us? - The Atlantic - 1 views

  • Before I jump back into the conversation about sexual ethics that has unfolded on the Web in recent days, inspired by Emily Witt's n+1 essay "What Do You Desire?" and featuring a fair number of my favorite writers, it's worth saying a few words about why I so value debate on this subject, and my reasons for running through some sex-life hypotheticals near the end of this article.
  • As we think and live, the investment required to understand one another increases. So do the stakes of disagreeing. 18-year-olds on the cusp of leaving home for the first time may disagree profoundly about how best to live and flourish, but the disagreements are abstract. It is easy, at 18, to express profound disagreement with, say, a friend's notions of child-rearing. To do so when he's 28, married, and raising a son or daughter is delicate, and perhaps best avoided
  • I have been speaking of friends. The gulfs that separate strangers can be wider and more difficult to navigate because there is no history of love and mutual goodwill as a foundation for trust. Less investment has been made, so there is less incentive to persevere through the hard parts.
  • I've grown very close to new people whose perspectives are radically different than mine.
  • It floors me: These individuals are all repositories of wisdom. They've gleaned it from experiences I'll never have, assumptions I don't share, and brains wired different than mine. I want to learn what they know.
  • Does that get us anywhere? A little ways, I think.
  • "Are we stuck with a passé traditionalism on one hand, and total laissez-faire on the other?" Is there common ground shared by the orthodox-Christian sexual ethics of a Rod Dreher and those who treat consent as their lodestar?
  • Gobry suggests that Immanuel Kant provides a framework everyone can and should embrace, wherein consent isn't nearly enough to make a sexual act moral--we must, in addition, treat the people in our sex lives as ends, not means.
  • Here's how Kant put it: "Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end."
  • the disappearance of a default sexual ethic in America and the divergence of our lived experiences means we have more to learn from one another than ever, even as our different choices raise the emotional stakes.
  • Nor does it seem intuitively obvious that a suffering, terminally ill 90-year-old is regarding himself as a means, or an object, if he prefers to end his life with a lethal injection rather than waiting three months in semi-lucid agony for his lungs to slowly shut down and suffocate him. (Kant thought suicide impermissible.) The terminally ill man isn't denigrating his own worth or the preciousness of life or saying it's permissible "any time" it is difficult. He believes ending his life is permissible only because the end is nigh, and the interim affords no opportunity for "living" in anything except a narrow biological sense.
  • It seems to me that, whether we're talking about a three-week college relationship or a 60-year marriage, it is equally possible to treat one's partner as a means or as an end (though I would agree that "treating as means" is more common in hookups than marriage)
  • my simple definition is this: It is wrong to treat human persons in such a way that they are reduced to objects. This says nothing about consent: a person may consent to be used as an object, but it is still wrong to use them that way. It says nothing about utility: society may approve of using some people as objects; whether those people are actual slaves or economically oppressed wage-slaves it is still wrong to treat them like objects. What it says, in fact, is that human beings have intrinsic worth and dignity such that treating them like objects is wrong.
  • what it means to treat someone as a means, or as an object, turns out to be in dispute.
  • Years ago, I interviewed a sister who was acting as a surrogate for a sibling who couldn't carry her own child. The notion that either regarded the other (or themselves) as an object seems preposterous to me. Neither was treating the other as a means, because they both freely chose, desired and worked in concert to achieve the same end.
  • It seems to me that the Kantian insight is exactly the sort of challenge traditionalist Christians should make to college students as they try to persuade them to look more critically at hookup culture. I think a lot of college students casually mislead one another about their intentions and degree of investment, feigning romantic interest when actually they just want to have sex. Some would say they're transgressing against consent. I think Kant has a more powerful challenge. 
  • Ultimately, Kant only gets us a little way in this conversation because, outside the realm of sex, he thinks consent goes a long way toward mitigating the means problem, whereas in the realm of sex, not so much. This is inseparable from notions he has about sex that many of us just don't share.
  • two Biblical passages fit my moral intuition even better than Kant. "Love your neighbor as yourself." And "therefore all things whatsoever would that men should do to you, do ye even so to them.
  • "do unto others..." is extremely demanding, hard to live up to, and a very close fit with my moral intuitions.
  • "Do unto others" is also enough to condemn all sorts of porn, and to share all sorts of common ground with Dreher beyond consent. Interesting that it leaves us with so many disagreements too. "Do unto others" is core to my support for gay marriage.
  • Are our bones always to be trusted? The sexual behavior parents would be mortified by is highly variable across time and cultures. So how can I regard it as a credible guide of inherent wrong? Professional football and championship boxing are every bit as violent and far more physically damaging to their participants than that basement scene, yet their cultural familiarity is such that most people don't feel them to be morally suspect. Lots of parents are proud, not mortified, when a son makes the NFL.
  • "Porn operates in fantasy the way boxing and football operate in fantasy. The injuries are quite real." He is, as you can see, uncomfortable with both. Forced at gunpoint to choose which of two events could proceed on a given night, an exact replica of the San Francisco porn shoot or an Ultimate Fighting Championship tournament--if I had to shut one down and grant the other permission to proceed--what would the correct choice be?
  • insofar as there is something morally objectionable here, it's that the audience is taking pleasure in the spectacle of someone being abused, whether that abuse is fact or convincing illusion. Violent sports and violent porn interact with dark impulses in humanity, as their producers well know.
  • If Princess Donna was failing to "do unto others" at all, the audience was arguably who she failed. Would she want others to entertain her by stoking her dark human impulses? Then again, perhaps she is helping to neuter and dissipate them in a harmless way. That's one theory of sports, isn't it? We go to war on the gridiron as a replacement for going to war? And the rise in violent porn has seemed to coincide with falling, not rising, incidence of sexual violence. 
  • On all sorts of moral questions I can articulate confident judgments. But I am confident in neither my intellect nor my gut when it comes to judging Princess Donna, or whether others are transgressing against themselves or "nature" when doing things that I myself wouldn't want to do. Without understanding their mindset, why they find that thing desirable, or what it costs them, if anything, I am loath to declare that it's grounded in depravity or inherently immoral just because it triggers my disgust instinct, especially if the people involved articulate a plausible moral code that they are following, and it even passes a widely held standard like "do unto others."
  • Here's another way to put it. Asked to render moral judgments about sexual behaviors, there are some I would readily label as immoral. (Rape is an extreme example. Showing the topless photo your girlfriend sent to your best friend is a milder one.) But I often choose to hold back and err on the side of not rendering a definitive judgment, knowing that occasionally means I'll fail to label as unethical some things that actually turn out to be morally suspect.
  • Partly I take that approach because, unlike Dreher, I don't see any great value or urgency in the condemnations, and unlike Douthat, I worry more about wrongful stigma than lack of rightful stigmas
  • In a society where notions of sexual morality aren't coercively enforced by the church or the state, what purpose is condemnation serving?
  • People are great! Erring on the side of failing to condemn permits at least the possibility of people from all of these world views engaging in conversation with one another.
  • Dreher worries about the fact that, despite our discomfort, neither Witt nor I can bring ourselves to say that the sexual acts performed during the S.F. porn shoot were definitely wrong. Does that really matter? My interlocutors perhaps see a cost more clearly than me, as well they might. My bias is that just arguing around the fire is elevating.
Javier E

What Is Wrong with the West's Economies? by Edmund S. Phelps | The New York Review of B... - 1 views

  • What is wrong with the economies of the West—and with economics?
  • With little or no effective policy initiative giving a lift to the less advantaged, the jarring market forces of the past four decades—mainly the slowdowns in productivity that have spread over the West and, of course, globalization, which has moved much low-wage manufacturing to Asia—have proceeded, unopposed, to drag down both employment and wage rates at the low end. The setback has cost the less advantaged not only a loss of income but also a loss of what economists call inclusion—access to jobs offering work and pay that provide self-respect.
  • The classical idea of political economy has been to let wage rates sink to whatever level the market takes them, and then provide everyone with the “safety net” of a “negative income tax,” unemployment insurance, and free food, shelter, clothing, and medical care
  • This failing in the West’s economies is also a failing of economics
  • many people have long felt the desire to do something with their lives besides consuming goods and having leisure. They desire to participate in a community in which they can interact and develop.
  • Our prevailing political economy is blind to the very concept of inclusion; it does not map out any remedy for the deficiency
  • injustice of another sort. Workers in decent jobs view the economy as unjust if they or their children have virtually no chance of climbing to a higher rung in the socioeconomic ladder
  • “Money is like blood. You need it to live but it isn’t the point of life.”4
  • justice is not everything that people need from their economy. They need an economy that is good as well as just. And for some decades, the Western economies have fallen short of any conception of a “good economy”—an economy offering a “good life,” or a life of “richness,” as some humanists call it
  • The good life as it is popularly conceived typically involves acquiring mastery in one’s work, thus gaining for oneself better terms—or means to rewards, whether material, like wealth, or nonmaterial—an experience we may call “prospering.”
  • As humanists and philosophers have conceived it, the good life involves using one’s imagination, exercising one’s creativity, taking fascinating journeys into the unknown, and acting on the world—an experience I call “flourishing.”
  • prospering and flourishing became prevalent in the nineteenth century when, in Europe and America, economies emerged with the dynamism to generate their own innovation.
  • What is the mechanism of the slowdown in productivity
  • prospering
  • In nineteenth-century Britain and America, and later Germany and France, a culture of exploration, experimentation, and ultimately innovation grew out of the individualism of the Renaissance, the vitalism of the Baroque era, and the expressionism of the Romantic period.
  • What made innovating so powerful in these economies was that it was not limited to elites. It permeated society from the less advantaged parts of the population on up.
  • High-enough wages, low-enough unemployment, and wide-enough access to engaging work are necessary for a “good-enough” economy—though far from sufficient. The material possibilities of the economy must be adequate for the nonmaterial possibilities to be widespread—the satisfactions of prospering and of flourishing through adventurous, creative, and even imaginative work.
  • today’s standard economics. This economics, despite its sophistication in some respects, makes no room for economies in which people are imagining new products and using their creativity to build them. What is most fundamentally “wrong with economics” is that it takes such an economy to be the norm—to be “as good as it gets.”
  • Since around 1970, or earlier in some cases, most of the continental Western European economies have come to resemble more completely the mechanical model of standard economics. Most companies are highly efficient. Households, apart from the very low-paid or unemployed, have gone on saving
  • In most of Western Europe, economic dynamism is now at lows not seen, I would judge, since the advent of dynamism in the nineteenth century. Imagining and creating new products has almost disappeared from the continent
  • The bleak levels of both unemployment and job satisfaction in Europe are testimony to its dreary economies.
  • a recent survey of household attitudes found that, in “happiness,” the median scores in Spain (54), France (51), Italy (48), and Greece (37) are all below those in the upper half of the nations labeled “emerging”—Mexico (79), Venezuela (74), Brazil (73), Argentina (66), Vietnam (64), Colombia (64), China (59), Indonesia (58), Chile (58), and Malaysia (56)
  • The US economy is not much better. Two economists, Stanley Fischer and Assar Lindbeck, wrote of a “Great Productivity Slowdown,” which they saw as beginning in the late 1960s.11 The slowdown in the growth of capital and labor combined—what is called “total factor productivity”—is stark.
  • though the injustices in the West’s economies are egregious, they ought not to be seen as a major cause of the productivity slowdowns and globalization. (For one thing, a slowdown of productivity started in the US in the mid-1960s and the sharp loss of manufacturing jobs to poorer countries occurred much later—from the late 1970s to the early 1990s.) Deeper causes must be at work.
  • The plausible explanation of the syndrome in America—the productivity slowdown and the decline of job satisfaction, among other things—is a critical loss of indigenous innovation in the established industries like traditional manufacturing and services that was not nearly offset by the innovation that flowered in a few new industries
  • What then caused this narrowing of innovation? No single explanation is persuasive. Yet two classes of explanations have the ring of truth. One points to suppression of innovation by vested interests
  • some professions, such as those in education and medicine, have instituted regulation and licensing to curb experimentation and change, thus dampening innovation
  • established corporations—their owners and stakeholders—and entire industries, using their lobbyists, have obtained regulations and patents that make it harder for new firms to gain entry into the market and to compete with incumbents.
  • The second explanation points to a new repression of potential innovators by families and schools. As the corporatist values of control, solidarity, and protection are invoked to prohibit innovation, traditional values of conservatism and materialism are often invoked to inhibit a young person from undertaking an innovation.
  • How might Western nations gain—or regain—widespread prospering and flourishing? Taking concrete actions will not help much without fresh thinking: people must first grasp that standard economics is not a guide to flourishing—it is a tool only for efficiency.
  • Widespread flourishing in a nation requires an economy energized by its own homegrown innovation from the grassroots on up. For such innovation a nation must possess the dynamism to imagine and create the new—economic freedoms are not sufficient. And dynamism needs to be nourished with strong human values.
  • a reform of education stands out. The problem here is not a perceived mismatch between skills taught and skills in demand
  • The problem is that young people are not taught to see the economy as a place where participants may imagine new things, where entrepreneurs may want to build them and investors may venture to back some of them. It is essential to educate young people to this image of the economy.
  • It will also be essential that high schools and colleges expose students to the human values expressed in the masterpieces of Western literature, so that young people will want to seek economies offering imaginative and creative careers. Education systems must put students in touch with the humanities in order to fuel the human desire to conceive the new and perchance to achieve innovations
  • This reorientation of general education will have to be supported by a similar reorientation of economic education.
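The highlights above mention a slowdown in “total factor productivity.” In standard growth accounting, TFP growth is the Solow residual: the part of output growth not explained by growth in capital and labor inputs. A minimal sketch of that calculation, using made-up illustrative growth rates (not figures from the article) and an assumed capital share of 0.3:

```python
def tfp_growth(output_growth, capital_growth, labor_growth, capital_share=0.3):
    """Solow residual: output growth minus the share-weighted growth of
    capital and labor inputs.  What is left over is attributed to
    'total factor productivity' -- innovation, broadly speaking."""
    return (output_growth
            - capital_share * capital_growth
            - (1 - capital_share) * labor_growth)

# Illustrative numbers: output up 3%, capital up 4%, labor up 1%
# leaves only about 1.1% of growth attributable to TFP.
g = tfp_growth(0.03, 0.04, 0.01)
```

A productivity slowdown in this framework shows up as the residual shrinking over time even when capital and labor keep accumulating, which is the pattern the highlighted passage attributes to the postwar decades.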
Javier E

You Think With the World, Not Just Your Brain - The Atlantic - 2 views

  • embodied or extended cognition: broadly, the theory that what we think of as brain processes can take place outside of the brain.
  • The octopus, for instance, has a bizarre and miraculous mind, sometimes inside its brain, sometimes extending beyond it in sucker-tipped trails. Neurons are spread throughout its body; the creature has more of them in its arms than in its brain itself. It’s possible that each arm might be, to some extent, an independently thinking creature, all of which are collapsed into an octopean superconsciousness in times of danger
  • Embodied cognition, though, tells us that we’re all more octopus-like than we realize. Our minds are not like the floating conceptual “I” imagined by Descartes. We’re always thinking with, and inseparable from, our bodies.
  • ...8 more annotations...
  • The body codes how the brain works, more than the brain controls the body. When we walk—whether taking a pleasant afternoon stroll, or storming off in tears, or trying to sneak into a stranger’s house late at night, with intentions that seem to have exploded into our minds from some distant elsewhere—the brain might be choosing where each foot lands, but the way in which it does so is always constrained by the shape of our legs
  • The way in which the brain approaches the task of walking is already coded by the physical layout of the body—and as such, wouldn’t it make sense to think of the body as being part of our decision-making apparatus? The mind is not simply the brain, as a generation of biological reductionists, clearing out the old wreckage of what had once been the soul, once insisted. It’s not a kind of software being run on the logical-processing unit of the brain. It’s bigger, and richer, and grosser, in every sense. It has joints and sinews. The rarefied rational mind sweats and shits; this body, this mound of eventually rotting flesh, is really you.
  • That’s embodied cognition.
  • Extended cognition is stranger.
  • The mind, they argue, has no reason to stop at the edges of the body, hemmed in by skin, flapping open and closed with mouths and anuses.
  • When we jot something down—a shopping list, maybe—on a piece of paper, aren’t we in effect remembering it outside our heads? Most of all, isn’t language itself something that’s always external to the individual mind?
  • Language sits hazy in the world, a symbolic and intersubjective ether, but at the same time it forms the substance of our thought and the structure of our understanding. Isn’t language thinking for us?
  • Writing, for Plato, is a pharmakon, a “remedy” for forgetfulness, but if taken in too strong a dose it becomes a poison: A person no longer remembers things for themselves; it’s the text that remembers, with an unholy autonomy. The same criticisms are now commonly made of smartphones. Not much changes.
katedriscoll

Translating Amanda Gorman - It Bears Mentioning - 0 views

  • The logic is supposed to be that only someone of Gorman’s race, and optimally gender, can effectively translate her expression into another language. But is that true? And are we not denying Gorman and black people basic humanity in – if I may jump the gun – pretending that it is?
  • Notice I didn’t mention Shakespeare translated into other languages. According to the Critical Race Theory paradigm that informs this performative take on translating Gorman, Shakespeare being a white man means that white translators of his work are akin to him, while non-white ones, minted in a world where they must always grapple with whiteness “centered,” are perfect bilinguals of a sort.
Javier E

A News Organization That Rejects the View From Nowhere - Conor Friedersdorf - The Atlantic - 1 views

  • For many years, Rosen has been a leading critic of what he calls The View From Nowhere, or the conceit that journalists bring no prior commitments to their work. On his long-running blog, PressThink, he's advocated for "The View From Somewhere"—an effort by journalists to be transparent about their priors, whether ideological or otherwise.  Rosen is just one of several voices who'll shape NewCo. Still, the new venture may well be a practical test of his View from Somewhere theory of journalism. I chatted with Rosen about some questions he'll face. 
  • The View from Nowhere won’t be a requirement for our journalists. Nor will a single ideology prevail. NewCo itself will have a view of the world: Accountability journalism, exposing abuses of power, revealing injustices will no doubt be part of it. Under that banner many “views from somewhere” can fit.
  • The way "objectivity" evolves historically is out of something much more defensible and interesting, which is in that phrase "Of No Party or Clique." That's the founders of The Atlantic saying they want to be independent of party politics. They don't claim to have no politics, do they? They simply say: We're not the voice of an existing faction or coalition. But they're also not the Voice of God.
  • ...10 more annotations...
  • NewCo will emulate the founders of The Atlantic. At some point "independent from" turned into "objective about." That was the wrong turn, made long ago, by professional journalism, American-style.
  • You've written that The View From Nowhere is, in part, a defense mechanism against charges of bias originating in partisan politics. If you won't be invoking it, what will your defense be when those charges happen? There are two answers to that. 1) We told you where we're coming from. 2) High standards of verification. You need both.
  • What about ideological diversity? The View from Somewhere obviously permits it. You've said you'll have it. Is that because it is valuable in itself?
  • The basic insight is correct: Since "news judgment" is judgment, the product is improved when there are multiple perspectives at the table ... But, if the people who are recruited to the newsroom because they add perspectives that might otherwise be overlooked are also taught that they should leave their politics at the door, or think like professional journalists rather than representatives of their community, or privilege something called "news values" over the priorities they had when they decided to become journalists, then these people are being given a fatally mixed message, if you see what I mean. They are valued for the perspective they bring, and then told that they should transcend that perspective.
  • When people talk about objectivity in journalism they have many different things in mind. Some of these I have no quarrel with. You could even say I’m a “fan.” For example, if objectivity means trying to ground truth claims in verifiable facts, I am definitely for that. If it means there’s a “hard” reality out there that exists beyond any of our descriptions of it, sign me up. If objectivity is the requirement to acknowledge what is, regardless of whether we want it to be that way, then I want journalists who can be objective in that sense. Don’t you? If it means trying to see things in that fuller perspective Thomas Nagel talked about–pulling the camera back, revealing our previous position as only one of many–I second the motion. If it means the struggle to get beyond the limited perspective that our experience and upbringing afford us… yeah, we need more of that, not less. I think there is value in acts of description that do not attempt to say whether the thing described is good or bad. Is that objectivity? If so, I’m all for it, and I do that myself sometimes. 
  • By "we can do better than that" I mean: We can insist on the struggle to tell it like it is without also insisting on the View from Nowhere. The two are not connected. It was a mistake to think that they necessarily are. But why was this mistake made? To control people in the newsroom from "above." That's a big part of objectivity. Not truth. Control.
  • If it works out as you hope, if things are implemented well, etc., what's the potential payoff for readers? I think it's three things: First, this is a news site that is born into the digital world, but doesn't have to return profits to investors. That's not totally unique
  • Second: It's going to be a technology company as much as a news organization. That should result in better service.
  • a good formula for innovation is to start with something people want to do and eliminate some of the steps required to do it
  • The third upside is news with a human voice restored to it. This is the great lesson that blogging gives to journalism
anniina03

When Did Ancient Humans Start to Speak? - The Atlantic - 0 views

  • The larynx, also called the voice box, is where the trouble begins: Its location is, or was, supposed to be the key to language.
  • Scientists have agreed for a while that the organ is lower down the throat in humans than it is in any other primate, or was in our ancestors. And for decades, they thought that low-down larynx was a sort of secret ingredient to speech because it enabled its bearers to produce a variety of distinctive vowels, like the ones that make beet, bat, and boot sound like different words. That would mean that speech—and, therefore, language—couldn’t have evolved until the arrival of anatomically modern Homo sapiens about 200,000 years ago
  • Those speech abilities could include distinct vowels and consonants, syllables, or even syntax—all of which, according to LDT, should be impossible for any animal without a human vocal tract.
  • ...4 more annotations...
  • In fact, they propose that the necessary equipment—specifically, the throat shape and motor control that produce distinguishable vowels—has been around as long as 27 million years, when humans and Old World monkeys (baboons, mandrills, and the like) last shared a common ancestor.
  • As John Locke, a linguistics professor at Lehman College, put it, “Motor control rots when you die.” Soft tissues like tongues and nerves and brains generally don’t fossilize; DNA sequencing is impossible past a few hundred thousand years; no one has yet found a diary or rap track recorded by a teenage Australopithecus.
  • One of the quantitative models the new study relies on, he says, doesn’t properly represent the shape of the larynx, tongue, and other parts we use to talk: “It would convert a mailing tube into a human vocal tract.” And according to Lieberman, laryngeal descent theory “never claimed language was not possible” prior to the critical changes in our ancestors’ throat anatomy. “They’re trying to set up a straw man,” he said.
  • Rather than 27 million years, Hickok proposes that the earliest bound on any sort of speech ability would be nearer to human ancestors’ split with the Pan genus, which includes chimpanzees and bonobos, our closest living relatives. That split happened about 5 million to 7 million years ago—certainly longer than 200,000 years, but a far cry from 27 million. Lieberman argues that the precursors of speech might have emerged a little more than 3 million years ago, when artifacts like jewelry appear in the archaeological record. The idea is that both language and jewelry are intimately related to the evolution of symbolic thinking.
johnsonel7

'Outlandish' competition seeks the brain's source of consciousness | Science | AAAS - 0 views

  • Brain scientists can watch neurons fire and communicate. They can map how brain regions light up during sensation, decision-making, and speech. What they can't explain is how all this activity gives rise to consciousness.
  • But understanding consciousness has become increasingly important for researchers seeking to communicate with locked-in patients, determine whether artificial intelligence systems can become conscious, or explore whether animals experience consciousness the way humans do.
  • The GWT says the brain's prefrontal cortex, which controls higher order cognitive processes like decision-making, acts as a central computer that collects and prioritizes information from sensory input. It then broadcasts the information to other parts of the brain that carry out tasks.
  • ...2 more annotations...
  • The project has drawn criticism, mostly because it includes the IIT. Anil Seth, a neuroscientist at the University of Sussex in Brighton, U.K., says the theory is too philosophical—attempting to explain why consciousness exists, rather than how the brain determines whether a stimulus is worthy of conscious attention—to be directly testable.
  • Despite his misgivings about the project's prospect for a decisive answer, Seth says it will spark discussion and collaboration among scientific rivals. "That itself is to be applauded," he says
Javier E

Andrew Sullivan: Nature, Nurture, and Weight Loss - 0 views

  • In his brilliant encyclopedia of “critical studies,” James Lindsay explains the core argument: “Like disability studies, fat studies draws on the work of Michel Foucault and queer theory to argue that negative attitudes to obesity are socially constructed and the result of systemic power that marginalizes and oppresses fat people (and fat perspectives) and of unjust medicalized narratives in order to justify prejudice against obese people.
  • Fatness — like race or gender — is not grounded in physical or biological reality. It is a function of systemic power. The task of fat studies is to “interrogate” this oppressive power and then dismantle it.
  • take the polar opposite position: Fatness is an unhealthy lifestyle that can be stopped by people just eating less and better. We haven’t always been this fat, and we should take responsibility for it, and the physical and psychological damage it brings. Some level of stigma is thereby inevitable, and arguably useful. Humans are not healthy when they are badly overweight; and the explosion in obesity in America has become a serious public-health issue.
  • ...7 more annotations...
  • “When did it become taboo in this country to talk about getting healthy?” my friend Bill Maher asked in a recent monologue. “Fat shaming doesn’t need to end; it needs to make a comeback. Some amount of shame is good. We shamed people out of smoking and into wearing seat belts. We shamed them out of littering and most of them out of racism.”
  • On one side are helpless victims, who react to any debate with cries of oppression, and take no responsibility for their own physical destiny; on the other are brutal realists, with a callous touch, often refusing to see the genetic, social, and psychological complexity of fatness, or that serious health issues are not universal among heavier types
  • This is our reality. We are neither angels nor beasts, but we partake of both. We can rarely make the ugly beautiful, and if we do, it’s a moral achievement. However much we try, we will never correct the core natural inequalities and differences of our mammalian existence. But we can hazard a moral middle, seeing beauty in many ways, acknowledging the humanity of all shapes and sizes, while managing our health and weight in ways that are not totally subject to the gaze of others.
  • is to grapple with complexity in a way that can be rigorously empirical and yet also humane.
  • We are all driven by instinctive attraction, but men are particularly subject to fixed and crude notions of hotness. Beauty will thereby always be the source of extraordinary and extraordinarily unfair advantage, even if it captures only a tiny slice of what being human is about.
  • the two stances reflect our two ideological poles — not so much left and right anymore as nurture and nature. One pole argues nature doesn’t independently exist and everything is social; and one blithely asserts that nature determines everything. Both are ruinous attempts to bludgeon uncomfortable reality into satisfying ideology.
  • What we needed, in some ways, for our collective mental health, was a catalyst for greater physical socialization, more human contact, and more meaningful community. What we’re getting, I fear, is the opposite
Javier E

How to Get Things Done When You Don't Want to Do Anything - The New York Times - 0 views

  • As you look for your motivation, it helps to think of it falling into two categories, said Stefano Di Domenico, a motivation researcher
  • First, there’s controlled motivation, when you feel you’re being ruled by outside forces like end-of-year bonuses and deadlines — or inner carrots and sticks, like guilt or people-pleasing.
  • Often when people say they’ve lost motivation, “what they really mean,” Dr. Di Domenico said, “is ‘I’m doing this because I have to, not because I want to.’”
  • ...15 more annotations...
  • The second kind, autonomous motivation, is what we’re seeking. This is when you feel like you’re self-directed, whether you have a natural affinity for the task at hand, or you’re doing something because you understand why it’s worthwhile.
  • Ms. Winder, who teaches workshops on reconnecting to your sense of purpose, often has students free write about what makes them come alive.
  • Clinical psychologist Richard M. Ryan, one of two scientists who developed a well-known approach to understanding motivation called self-determination theory, encourages those seeking lasting motivation to take a deep dive into their values.
  • when you connect the things that are important to you to the things you need to do — even the drudgeries — you can feel more in control of your actions. What do you love about your work? What core value does it meet?
  • Looking forward to a reward isn’t the best for long-term motivation. But several studies suggest that pairing small, immediate rewards to a task improves both motivation and fun.
  • Social connections like this are critical to rekindling motivation,
  • suggested considering how your motivation is tied to the people around you, whether that’s your family or your basketball team.
  • Reaching out lifts others, too. “Letting someone know that you are thinking of them is enough to kick-start their motivation,” and reminds them that you care,
  • People also motivate each other through competition.
  • Students in competitive groups exercised much more often than those in supportive social networks,
  • New athletic adventures can be motivational gold, too. A 2020 study suggested that trying out novel activities can help you stick with exercise.
  • Treating ourselves with compassion works much more effectively than beating ourselves up,
  • “People think they’re going to shame themselves into action,” yet self-compassion helps people stay focused on their goals, reduces fear of failure and improves self-confidence, which can also improve motivation, she said.
  • Students who were encouraged to be compassionate toward themselves after the test studied longer and performed better on a follow-up test, compared to students given either simple self-esteem-boosting comments or no instruction.
  • “The key thing about self-compassion and motivation is that it allows you to learn from your failures,”
Javier E

What's behind the confidence of the incompetent? This suddenly popular psychological ph... - 1 views

  • To test Darwin’s theory, the researchers quizzed people on several topics, such as grammar, logical reasoning and humor. After each test, they asked the participants how they thought they did. Specifically, participants were asked how many of the other quiz-takers they beat.
  • Dunning was shocked by the results, even though they confirmed his hypothesis. Time after time, no matter the subject, the people who did poorly on the tests ranked their competence much higher. On average, test takers who scored as low as the 10th percentile ranked themselves near the 70th percentile. Those least likely to know what they were talking about believed they knew as much as the experts.
  • Dunning and Kruger’s results have been replicated in at least a dozen different domains: math skills, wine tasting, chess, medical knowledge among surgeons and firearm safety among hunters.
  • ...8 more annotations...
  • Dunning-Kruger “offers an explanation for a kind of hubris,” said Steven Sloman, a cognitive psychologist at Brown University. “The fact is, that’s Trump in a nutshell. He’s a man with zero political skill who has no idea he has zero political skill. And it’s given him extreme confidence.”
  • What happens when the incompetent are unwilling to admit they have shortcomings? Are they so confident in their own perceived knowledge that they will reject the very idea of improvement? Not surprisingly (though no less concerning), Dunning’s follow-up research shows the poorest performers are also the least likely to accept criticism or show interest in self improvement.
  • Someone who has very little knowledge in a subject claims to know a lot.
  • the Dunning-Kruger effect. It’s not a disease, syndrome or mental illness; it is present in everybody to some extent, and it’s been around as long as human cognition, though only recently has it been studied and documented in social psychology
  • “Obviously it has to do with Trump and the various treatments that people have given him,” Dunning said, “So yeah, a lot of it is political. People trying to understand the other side. We have a massive rise in partisanship and it’s become more vicious and extreme, so people are reaching for explanations."
  • Even though President Trump’s statements are rife with errors, falsehoods or inaccuracies, he expresses great confidence in his aptitude. He says he does not read extensively because he solves problems “with very little knowledge other than the knowledge I [already] had.” He has said in interviews he doesn’t read lengthy reports because “I already know exactly what it is.”
  • the Dunning-Kruger effect has become popular outside of the research world because it is a simple phenomenon that could apply to all of us
  • The ramifications of the Dunning-Kruger effect are usually harmless. If you’ve ever felt confident answering questions on an exam, only to have the teacher mark them incorrect, you have firsthand experience with Dunning-Kruger.
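The pattern described above — bottom-decile performers ranking themselves near the 70th percentile — can be illustrated with a toy model. This is my own sketch, not the study's methodology: it simply assumes each person's self-ranked percentile is a weighted blend of a shared optimistic anchor and their true percentile, plus noise (the `anchor` and `weight` parameters are hypothetical tuning knobs, not quantities from the research):

```python
import random

def mean_self_rank_bottom_decile(n=10_000, anchor=70.0, weight=0.9, seed=1):
    """Toy model of the Dunning-Kruger pattern: self-estimates are pulled
    strongly toward a common optimistic anchor, so the worst performers
    overestimate themselves the most."""
    rng = random.Random(seed)
    true_pct = [100.0 * i / (n - 1) for i in range(n)]          # true percentile ranks
    self_pct = [weight * anchor + (1 - weight) * t + rng.gauss(0, 5.0)
                for t in true_pct]                               # blended self-estimate
    bottom = [s for t, s in zip(true_pct, self_pct) if t <= 10.0]
    return sum(bottom) / len(bottom)
```

Under these assumptions, the bottom decile's true mean percentile is about 5, yet its mean self-estimate lands in the 60s — the direction (though not the mechanism) of the inflation Dunning and Kruger measured.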
peterconnelly

They Did Their Own 'Research.' Now What? - The New York Times - 0 views

  • the crash of two linked cryptocurrencies caused tens of billions of dollars in value to evaporate from digital wallets around the world.
  • People who thought they knew what they were getting into had, in the space of 24 hours, lost nearly everything. Messages of desperation flooded a Reddit forum for traders of one of the currencies, a coin called Luna, prompting moderators to share phone numbers for international crisis hotlines.
  • “DYOR” is shorthand for “do your own research,”
  • ...8 more annotations...
  • a reminder to stay informed and vigilant against groupthink.
  • A common refrain in battles about Covid-19 and vaccination, politics and conspiracy theories, parenting, drugs, food, stock trading and media, it signals not just a rejection of authority but often trust in another kind.
  • “Do your own research” is an idea central to Joe Rogan’s interview podcast, the most listened to program on Spotify, where external claims of expertise are synonymous with admissions of malice. In its current usage, DYOR is often an appeal to join in, rendered in the language of opting out.
  • “There’s this idea that the goal of science is consensus,” Professor Carrion said. “The model they brought to it was that we didn’t need consensus.” She noted that the women she surveyed often used singular rather than plural pronouns. “It was ‘she needs to do her own research,’” Professor Carrion said, rather than we need to do ours. Unlike some critical health movements in the past, this was an individualist endeavor.
  • One of the enticing aspects of cryptocurrencies, which pose an alternative to traditional financial institutions, is that expertise is available to anyone who wants to claim it.
  • In crypto, the uses of DYOR are various and contradictory, earnest and ironic sometimes within the same discussion. Breathless investment pitches for new coins are punctuated with “NFA/DYOR” (not financial advice), or admonitions not to invest more than you can afford to lose, which many people are obviously ignoring; stories about getting rich are prefaced with DYOR; requests for advice about which coins to hold are answered with DYOR. It is the siren song of crypto investing.
  • In that way — the momentum of a group — crypto investing isn’t altogether distinct from how people have invested in the stock market for decades. Though here it is tinged with a rebellious, anti-authoritarian streak: We’re outsiders, in this together; we’re doing something sort of ridiculous, but also sort of cool.
  • “Now it seems like DYOR can only do so much,” the user wrote. Eventually, the user said, you end up relying on “trust.”
peterconnelly

Elon Musk Hates Ads. Twitter Needs Them. That May Be a Problem. - The New York Times - 0 views

  • Since he started pursuing his $44 billion purchase of Twitter — and for years before that — the world’s richest man has made clear that advertising is not a priority.
  • Ads account for roughly 90 percent of Twitter’s revenue.
  • They have cited a litany of complaints, including that the company cannot target ads nearly as well as competitors like Facebook, Google and Amazon.
  • ...10 more annotations...
  • Now, numerous advertising executives say they’re willing to move their money elsewhere, especially if Mr. Musk eliminates the safeguards that allowed Twitter to remove racist rants and conspiracy theories.
  • “At the end of the day, it’s not the brands who need to be concerned, because they’ll just spend their budgets elsewhere — it’s Twitter that needs to be concerned,” said David Jones
  • On Wednesday night, at Twitter’s annual NewFronts presentation for advertisers at Pier 17 in New York, company representatives stressed Twitter’s value for marketers: as a top destination for people to gather and discuss major cultural moments like sporting events or the Met Gala, increasingly through video posts.
  • “Twitter’s done a better job than many platforms at building trust with advertisers — they’ve been more progressive, more responsive and more humble about initiating ways to learn,” said Mark Read
  • Twitter earns the vast majority of its ad revenue from brand awareness campaigns, whose effectiveness is much harder to evaluate than ads that target users based on their interests or that push for a direct response, such as clicking through to a website.
  • Twitter’s reach is also narrower than many rivals, with 229 million users who see ads, compared with 830 million users on LinkedIn and 1.96 billion daily users on Facebook.
  • “Even the likes of LinkedIn have eclipsed the ability for us to target consumers beyond what Twitter is providing,” he said. “We’re going to go where the results are, and with a lot of our clients, we haven’t seen the performance on Twitter from an ad perspective that we have with other platforms.”
  • Twitter differs from Facebook, whose millions of small and midsize advertisers generate the bulk of the company’s revenue and depend on its enormous size and targeting abilities to reach customers. Twitter’s clientele is heavily weighted with large, mainstream companies, which tend to be wary of their ads appearing alongside problematic content.
  • On Twitter, he has criticized ads as “manipulating public opinion” and discussed his refusal to “pay famous people to fake endorse.”
  • “There’s a fork in the road, where Path A leads to an unfiltered place with the worst of human behavior and no brands want to go anywhere near it,” said Mr. Jones of Brandtech. “And Path B has one of the world’s genius entrepreneurs, who knows a lot about running companies, unleashing a wave of innovation that has people looking back in a few years and saying, ‘Remember when everyone was worried about Musk coming in?’”
Javier E

Ian Hacking, Eminent Philosopher of Science and Much Else, Dies at 87 - The New York Times - 0 views

  • In an academic career that included more than two decades as a professor in the philosophy department of the University of Toronto, following appointments at Cambridge and Stanford, Professor Hacking’s intellectual scope seemed to know no bounds. Because of his ability to span multiple academic fields, he was often described as a bridge builder.
  • “Ian Hacking was a one-person interdisciplinary department all by himself,” Cheryl Misak, a philosophy professor at the University of Toronto, said in a phone interview. “Anthropologists, sociologists, historians and psychologists, as well as those working on probability theory and physics, took him to have important insights for their disciplines.”
  • Professor Hacking wrote several landmark works on the philosophy and history of probability, including “The Taming of Chance” (1990), which was named one of the best 100 nonfiction books of the 20th century by the Modern Library.
  • ...17 more annotations...
  • “I have long been interested in classifications of people, in how they affect the people classified, and how the effects on the people in turn change the classifications,” he wrote in “Making Up People
  • His work in the philosophy of science was groundbreaking: He departed from the preoccupation with questions that had long concerned philosophers. Arguing that science was just as much about intervention as it was about representation, he helped bring experimentation to center stage.
  • Regarding one such question — whether unseen phenomena like quarks and electrons were real or merely the theoretical constructs of physicists — he argued for reality in the case of phenomena that figured in experiments, citing as an example an experiment at Stanford that involved spraying electrons and positrons into a ball of niobium to detect electric charges. “So far as I am concerned,” he wrote, “if you can spray them, they’re real.”
  • His book “The Emergence of Probability” (1975), which is said to have inspired hundreds of books by other scholars, examined how concepts of statistical probability have evolved over time, shaping the way we understand not just arcane fields like quantum physics but also everyday life.
  • “I was trying to understand what happened a few hundred years ago that made it possible for our world to be dominated by probabilities,” he said in a 2012 interview with the journal Public Culture. “We now live in a universe of chance, and everything we do — health, sports, sex, molecules, the climate — takes place within a discourse of probabilities.”
  • Whatever the subject, whatever the audience, one idea that pervades all his work is that “science is a human enterprise,” Ragnar Fjelland and Roger Strand of the University of Bergen in Norway wrote when Professor Hacking won the Holberg Prize. “It is always created in a historical situation, and to understand why present science is as it is, it is not sufficient to know that it is ‘true,’ or confirmed. We have to know the historical context of its emergence.”
  • Hacking often argued that as the human sciences have evolved, they have created categories of people, and that people have subsequently defined themselves as falling into those categories. Thus does human reality become socially constructed.
  • In 2000, he became the first Anglophone to win a permanent position at the Collège de France, where he held the chair in the philosophy and history of scientific concepts until he retired in 2006.
  • “I call this the ‘looping effect,’” he added. “Sometimes, our sciences create kinds of people that in a certain sense did not exist before.”
  • In “Why Race Still Matters,” a 2005 article in the journal Daedalus, he explored how anthropologists developed racial categories by extrapolating from superficial physical characteristics, with lasting effects — including racial oppression. “Classification and judgment are seldom separable,” he wrote. “Racial classification is evaluation.”
  • Similarly, he once wrote, in the field of mental health the word “normal” “uses a power as old as Aristotle to bridge the fact/value distinction, whispering in your ear that what is normal is also right.”
  • In his influential writings about autism, Professor Hacking charted the evolution of the diagnosis and its profound effects on those diagnosed, which in turn broadened the definition to include a greater number of people.
  • Encouraging children with autism to think of themselves that way “can separate the child from ‘normalcy’ in a way that is not appropriate,” he told Public Culture. “By all means encourage the oddities. By no means criticize the oddities.”
  • His emphasis on historical context also illuminated what he called transient mental illnesses, which appear to be so confined to their time that they can vanish when times change.
  • “hysterical fugue” was a short-lived epidemic of compulsive wandering that emerged in Europe in the 1880s, largely among middle-class men who had become transfixed by stories of exotic locales and the lure of travel.
  • His intellectual tendencies were unmistakable from an early age. “When he was 3 or 4 years old, he would sit and read the dictionary,” Jane Hacking said. “His parents were completely baffled.”
  • He wondered aloud, the interviewer noted, if the whole universe was governed by nonlocality — if “everything in the universe is aware of everything else.” “That’s what you should be writing about,” he said. “Not me. I’m a dilettante. My governing word is ‘curiosity.’”
Javier E

Silicon Valley's Safe Space - The New York Times - 0 views

  • The roots of Slate Star Codex trace back more than a decade to a polemicist and self-described A.I. researcher named Eliezer Yudkowsky, who believed that intelligent machines could end up destroying humankind. He was a driving force behind the rise of the Rationalists.
  • Because the Rationalists believed A.I. could end up destroying the world — a not entirely novel fear to anyone who has seen science fiction movies — they wanted to guard against it. Many worked for and donated money to MIRI, an organization created by Mr. Yudkowsky whose stated mission was “A.I. safety.”
  • The community was organized and close-knit. Two Bay Area organizations ran seminars and high-school summer camps on the Rationalist way of thinking.
  • ...27 more annotations...
  • “The curriculum covers topics from causal modeling and probability to game theory and cognitive science,” read a website promising teens a summer of Rationalist learning. “How can we understand our own reasoning, behavior, and emotions? How can we think more clearly and better achieve our goals?”
  • Some lived in group houses. Some practiced polyamory. “They are basically just hippies who talk a lot more about Bayes’ theorem than the original hippies,” said Scott Aaronson, a University of Texas professor who has stayed in one of the group houses.
  • For Kelsey Piper, who embraced these ideas in high school, around 2010, the movement was about learning “how to do good in a world that changes very rapidly.”
  • Yes, the community thought about A.I., she said, but it also thought about reducing the price of health care and slowing the spread of disease.
  • Slate Star Codex, which sprung up in 2013, helped her develop a “calibrated trust” in the medical system. Many people she knew, she said, felt duped by psychiatrists, for example, who they felt weren’t clear about the costs and benefits of certain treatment.
  • That was not the Rationalist way.
  • “There is something really appealing about somebody explaining where a lot of those ideas are coming from and what a lot of the questions are,” she said.
  • Sam Altman, chief executive of OpenAI, an artificial intelligence lab backed by a billion dollars from Microsoft, was effusive in his praise of the blog. It was, he said, essential reading among “the people inventing the future” in the tech industry.
  • Mr. Altman, who had risen to prominence as the president of the start-up accelerator Y Combinator, moved on to other subjects before hanging up. But he called back. He wanted to talk about an essay that appeared on the blog in 2014. The essay was a critique of what Mr. Siskind, writing as Scott Alexander, described as “the Blue Tribe.” In his telling, these were the people at the liberal end of the political spectrum whose characteristics included “supporting gay rights” and “getting conspicuously upset about sexists and bigots.”
  • But as the man behind Slate Star Codex saw it, there was one group the Blue Tribe could not tolerate: anyone who did not agree with the Blue Tribe. “Doesn’t sound quite so noble now, does it?” he wrote.
  • Mr. Altman thought the essay nailed a big problem: In the face of the “internet mob” that guarded against sexism and racism, entrepreneurs had less room to explore new ideas. Many of their ideas, such as intelligence augmentation and genetic engineering, ran afoul of the Blue Tribe.
  • Mr. Siskind was not a member of the Blue Tribe. He was not a voice from the conservative Red Tribe (“opposing gay marriage,” “getting conspicuously upset about terrorists and commies”). He identified with something called the Grey Tribe — as did many in Silicon Valley.
  • The Grey Tribe was characterized by libertarian beliefs, atheism, “vague annoyance that the question of gay rights even comes up,” and “reading lots of blogs,” he wrote. Most significantly, it believed in absolute free speech.
  • The essay on these tribes, Mr. Altman told me, was an inflection point for Silicon Valley. “It was a moment that people talked about a lot, lot, lot,” he said.
  • And in some ways, two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement.
  • In 2005, Peter Thiel, the co-founder of PayPal and an early investor in Facebook, befriended Mr. Yudkowsky and gave money to MIRI. In 2010, at Mr. Thiel’s San Francisco townhouse, Mr. Yudkowsky introduced him to a pair of young researchers named Shane Legg and Demis Hassabis. That fall, with an investment from Mr. Thiel’s firm, the two created an A.I. lab called DeepMind.
  • Like the Rationalists, they believed that A.I. could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.
  • In 2014, Google bought DeepMind for $650 million. The next year, Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community.
  • Mr. Aaronson, the University of Texas professor, was turned off by the more rigid and contrarian beliefs of the Rationalists, but he is one of the blog’s biggest champions and deeply admired that it didn’t avoid live-wire topics.
  • “It must have taken incredible guts for Scott to express his thoughts, misgivings and questions about some major ideological pillars of the modern world so openly, even if protected by a quasi-pseudonym,” he said
  • In late June of last year, not long after talking to Mr. Altman, the OpenAI chief executive, I approached the writer known as Scott Alexander, hoping to get his views on the Rationalist way and its effect on Silicon Valley. That was when the blog vanished.
  • The issue, it was clear to me, was that I told him I could not guarantee him the anonymity he’d been writing with. In fact, his real name was easy to find because people had shared it online for years and he had used it on a piece he’d written for a scientific journal. I did a Google search for Scott Alexander and one of the first results I saw in the auto-complete list was Scott Alexander Siskind.
  • More than 7,500 people signed a petition urging The Times not to publish his name, including many prominent figures in the tech industry. “Putting his full name in The Times,” the petitioners said, “would meaningfully damage public discourse, by discouraging private citizens from sharing their thoughts in blog form.” On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.
  • I spoke with Manoel Horta Ribeiro, a computer science researcher who explores social networks at the Swiss Federal Institute of Technology in Lausanne. He was worried that Slate Star Codex, like other communities, was allowing extremist views to trickle into the influential tech world. “A community like this gives voice to fringe groups,” he said. “It gives a platform to people who hold more extreme views.”
  • I assured her my goal was to report on the blog, and the Rationalists, with rigor and fairness. But she felt that discussing both critics and supporters could be unfair. What I needed to do, she said, was somehow prove statistically which side was right.
  • When I asked Mr. Altman if the conversation on sites like Slate Star Codex could push people toward toxic beliefs, he said he held “some empathy” for these concerns. But, he added, “people need a forum to debate ideas.”
  • In August, Mr. Siskind restored his old blog posts to the internet. And two weeks ago, he relaunched his blog on Substack, a company with ties to both Andreessen Horowitz and Y Combinator. He gave the blog a new title: Astral Codex Ten. He hinted that Substack paid him $250,000 for a year on the platform. And he indicated the company would give him all the protection he needed.
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
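  • Present bias has a standard formal sketch in behavioral economics, the β-δ (quasi-hyperbolic) discounting model. A minimal illustration, with purely hypothetical parameter values (the article cites no numbers):

```python
def discounted_value(reward, delay, beta=0.6, delta=0.97):
    """Quasi-hyperbolic (beta-delta) value of a reward `delay` periods away.

    beta < 1 applies an extra penalty to *any* delayed reward, which is
    what produces present bias; beta = 1 recovers ordinary exponential
    discounting. These parameter values are illustrative assumptions.
    """
    if delay == 0:
        return reward
    return beta * reward * delta ** delay

# Today, $100 now beats $110 next period...
print(discounted_value(100, 0) > discounted_value(110, 1))   # True

# ...but viewed from ten periods away, the same choice flips:
print(discounted_value(100, 10) > discounted_value(110, 11)) # False
```

  The preference reversal in the last two lines is the textbook mechanism behind undersaving: waiting always looks attractive from a distance, and saving always loses once the moment arrives.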
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • ...48 more annotations...
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • That’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • he most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
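  • Base-rate neglect is easy to make concrete with Bayes’ rule. A standard illustration (a hypothetical screening-test example, not one from Nisbett’s or Kahneman’s studies):

```python
def posterior(base_rate, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: 1% base rate, 90% sensitivity, 9% false positives.
p = posterior(0.01, 0.90, 0.09)
print(f"P(condition | positive) = {p:.1%}")  # well under 10%
```

  People who neglect the base rate anchor on the 90% sensitivity and guess the posterior is high; the vivid individual evidence (a positive test) crowds out the dull statistical fact that the condition is rare.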
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable.
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
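  • The batting-average puzzle can be sketched as a small simulation: even if every batter has the same modest true average, extreme observed averages show up routinely in small samples and essentially never over a full season. The true average and at-bat counts below are illustrative assumptions:

```python
import random

random.seed(0)
TRUE_AVG = 0.270   # assumed true hit probability, same for every batter
BATTERS = 5_000

def frac_above(threshold, at_bats):
    """Fraction of simulated batters whose observed average exceeds threshold."""
    count = 0
    for _ in range(BATTERS):
        hits = sum(random.random() < TRUE_AVG for _ in range(at_bats))
        if hits / at_bats > threshold:
            count += 1
    return count / BATTERS

early = frac_above(0.450, 30)    # a few weeks into the season
season = frac_above(0.450, 500)  # a full season of at-bats
print(f"early-season .450+ rate: {early:.3%}")
print(f"full-season  .450+ rate: {season:.3%}")
```

  With 30 at-bats a nontrivial fraction of identical hitters clears .450 by luck alone; with 500 at-bats the observed averages regress toward the true .270 and virtually nobody does.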
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • , “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (iarpa), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, iarpa initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • Most promising are a handful of video games. Their genesis was in the Iraq War.
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can