
TOK Friends / Group items tagged: prediction


Javier E

The Signal and the Noise: Why So Many Predictions Fail-but Some Don't: Nate Silver: 978... - 0 views

  • Nate Silver built an innovative system for predicting baseball performance, predicted the 2008 election within a hair’s breadth, and became a national sensation as a blogger—all by the time he was thirty. The New York Times now publishes FiveThirtyEight.com, where Silver is one of the nation’s most influential political forecasters.
  • Silver examines the world of prediction, investigating how we can distinguish a true signal from a universe of noisy data. Most predictions fail, often at great cost to society, because most of us have a poor understanding of probability and uncertainty. Both experts and laypeople mistake more confident predictions for more accurate ones. But overconfidence is often the reason for failure. If our appreciation of uncertainty improves, our predictions can get better too. This is the “prediction paradox”: The more humility we have about our ability to make predictions, the more successful we can be in planning for the future.
  • the most accurate forecasters tend to have a superior command of probability, and they tend to be both humble and hardworking. They distinguish the predictable from the unpredictable, and they notice a thousand little details that lead them closer to the truth. Because of their appreciation of probability, they can distinguish the signal from the noise.
  • ...3 more annotations...
  • Baseball, weather forecasting, earthquake prediction, economics, and polling: In all of these areas, Silver finds predictions gone bad thanks to biases, vested interests, and overconfidence. But he also shows where sophisticated forecasters have gotten it right (and occasionally been ignored to boot)
  • This is the best general-readership book on applied statistics that I've read. Short review: if you're interested in science, economics, or prediction: read it. It's full of interesting cases, builds intuition, and is a readable example of Bayesian thinking. (A minimal sketch of Bayesian updating follows this list.)
  • The core concept is this: prediction is a vital part of science, of business, of politics, of pretty much everything we do. But we're not very good at it, and fall prey to cognitive biases and other systemic problems such as information overload that make things worse. However, we are simultaneously learning more about how such things occur and that knowledge can be used to make predictions better -- and to improve our models in science, politics, business, medicine, and so many other areas.
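
  The review's reference to Bayesian thinking can be made concrete with a minimal sketch (illustrative numbers only, not from the book): how much should a short run of correct calls move our belief that a forecaster has genuine skill rather than luck? This is the signal-versus-noise question in miniature.

    # Minimal Bayesian update: belief that a forecaster is skilled, given k
    # correct calls out of n. All numbers are illustrative assumptions.
    def posterior_skill(prior, p_hit_if_skilled, p_hit_if_lucky, k, n):
        # Likelihoods under each hypothesis (binomial coefficients cancel).
        like_skill = p_hit_if_skilled ** k * (1 - p_hit_if_skilled) ** (n - k)
        like_luck = p_hit_if_lucky ** k * (1 - p_hit_if_lucky) ** (n - k)
        evidence = prior * like_skill + (1 - prior) * like_luck
        return prior * like_skill / evidence

    # Seven correct calls out of ten looks impressive, but if chance alone gets
    # half of them right, a 10% prior on "real skill" rises only modestly.
    print(round(posterior_skill(0.10, 0.65, 0.50, 7, 10), 3))
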
Javier E

Reality is your brain's best guess - Big Think - 0 views

  • Andy Clark admits it’s strange that he took up “predictive processing,” an ambitious leading theory of how the brain works. A philosopher of mind at the University of Sussex, he has devoted his career to the idea that thinking doesn’t occur just between the ears—that it flows through our bodies, tools, and environments. “The external world is functioning as part of our cognitive machinery.”
  • But 15 years ago, he realized that the brain had to come back to the center of the system. And he found that predictive processing provided the essential links among the brain, body, and world.
  • There’s a traditional view that goes back at least to Descartes that perception was about the imprinting of the outside world onto the sense organs. In 20th-century artificial intelligence and neuroscience, vision was a feed-forward process in which you took in pixel-level information, refined it into a two and a half–dimensional sketch, and then refined that into a full world model.
  • ...9 more annotations...
  • a new book, The Experience Machine: How Our Minds Predict and Shape Reality, which is remarkable for how it connects the high-level concepts to everyday examples of how our brains make predictions, how that process can lead us astray, and what we can do about it.
  • being driven to stay within your own viability envelope is crucial to the kind of intelligence that we know about—the kind of intelligence that we are
  • If you ask what is a predictive brain for, the answer has to be: staying alive. Predictive brains are a way of staying within your viability envelope as an embodied biological organism: getting food when you need it, getting water when you need it.
  • in predictive processing, perception is structured around prediction. Perception is about the brain having a guess at what’s most likely to be out there and then using sensory information to refine the guess.
  • artificial curiosity. Predictive-processing systems automatically have that. They’re set up so that they predict the conditions of their own survival, and they’re always trying to get rid of prediction errors. But if they’ve solved all their practical problems and they’ve got nothing else to do, then they’ll just explore. Getting rid of any error is going to be a good thing for them. If you’re a creature like that, you’re going to be a really good learning system. You’re going to love to inhabit the environments that you can learn most from, where the problems are not too simple, not too hard, but just right.
  • It’s an effect that you also see in Marieke Jepma et al.’s work on pain. They showed that if you predict intense pain, the signal that you get will be interpreted as more painful than it would otherwise be, and vice versa. Then they asked why you don’t correct your misimpression. If it’s my expectation that is making it feel more painful, why don’t I get prediction errors that correct it?
  • The reason is that there are no errors. You’re expecting a certain level of pain, and your prediction helps bring that level about; there is nothing for you to correct. In fact, you’ve got confirmation of your own prediction. So it can be a vicious circle. (A toy numerical version of this loop appears after this list.)
  • Do you think this self-fulfilling loop in psychosis and pain perception helps to account for misinformation in our society and people’s susceptibility to certain narratives? Absolutely. We all have these vulnerabilities and self-fulfilling cycles. We look at the places that tend to support the models that we already have, because that’s often how we judge whether the information is good or not.
  • Given that we know we’re vulnerable to self-fulfilling information loops, how can we make sure we don’t get locked into a belief? Unfortunately, it’s really difficult. The most potent intervention is to remind ourselves that we sample the world in ways that are guided by the models that we’ve currently got. The structures of science are there to push back against our natural tendency to cherry-pick.
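
  A toy numerical version of the guess-and-refine loop described above (an illustration of the general idea, not Clark's or Jepma et al.'s actual model): the percept is the brain's prediction nudged toward the sensory input by a gain that reflects how much the prediction is trusted. With a low gain, the expectation barely gets corrected, so a high pain expectation is felt as more pain and a low one as less, which is the self-fulfilling loop.

    # Toy predictive-processing step (illustrative only):
    # percept = prediction + gain * (sensory_input - prediction)
    def perceive(prediction, sensory_input, gain):
        error = sensory_input - prediction       # prediction error
        percept = prediction + gain * error      # error-corrected guess
        return percept

    # The same mildly painful stimulus (4 on a 0-10 scale), two expectations.
    for expected_pain in (2.0, 8.0):
        felt = perceive(expected_pain, 4.0, gain=0.3)  # low gain = trusted prior
        print(f"expected {expected_pain}: felt {felt:.1f}")
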
douglasn89

The Simple Economics of Machine Intelligence - 0 views

  • The year 1995 was heralded as the beginning of the “New Economy.” Digital communication was set to upend markets and change everything. But economists by and large didn’t buy into the hype.
  • Today we are seeing similar hype about machine intelligence. But once again, as economists, we believe some simple rules apply. Technological revolutions tend to involve some important activity becoming cheap, like the cost of communication or finding information. Machine intelligence is, in its essence, a prediction technology, so the economic shift will center around a drop in the cost of prediction.
  • The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy, manufacturing, and retail.
    • douglasn89
       
      This emphasis on prediction ties into the previous discussion and reading we had, which included the idea that humans are by nature poor predictors; because of that, they have begun to design machines to predict for them.
  • ...4 more annotations...
  • As machine intelligence lowers the cost of prediction, we will begin to use it as an input for things for which we never previously did. As a historical example, consider semiconductors, an area of technological advance that caused a significant drop in the cost of a different input: arithmetic. With semiconductors we could calculate cheaply, so activities for which arithmetic was a key input, such as data analysis and accounting, became much cheaper.
  • As machine intelligence improves, the value of human prediction skills will decrease because machine prediction will provide a cheaper and better substitute for human prediction, just as machines did for arithmetic.
  • Using the language of economics, judgment is a complement to prediction, and therefore when the cost of prediction falls, demand for judgment rises. We’ll want more human judgment. (A small numerical sketch of this complementarity appears at the end of this item.)
  • But it yields two key implications: 1) an expanded role of prediction as an input to more goods and services, and 2) a change in the value of other inputs, driven by the extent to which they are complements to or substitutes for prediction. These changes are coming.
    • douglasn89
       
      This article agrees with the readings from Unit 5 Lesson 6 in its prediction of changes.
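
  A small numerical sketch of the complementarity argument above (the production function and prices are assumptions for illustration, not from the article): decision quality needs both prediction and judgment, so when machine intelligence makes prediction cheap and plentiful, each extra unit of judgment becomes more valuable.

    # Illustrative only: prediction and judgment as complements in decisions.
    def decision_value(prediction_units, judgment_units):
        # Cobb-Douglas-style complementarity: value needs both inputs.
        return (prediction_units ** 0.5) * (judgment_units ** 0.5)

    budget = 100.0
    for price_of_prediction in (10.0, 1.0):  # machine intelligence cuts this price
        prediction = (budget / 2) / price_of_prediction  # spend half on each input
        judgment = budget / 2                            # judgment priced at 1
        marginal = (decision_value(prediction, judgment + 1)
                    - decision_value(prediction, judgment))
        print(f"prediction price {price_of_prediction}: "
              f"one more unit of judgment adds {marginal:.2f}")
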
Javier E

Do Political Experts Know What They're Talking About? | Wired Science | Wired... - 1 views

  • I often joke that every cable news show should be forced to display a disclaimer, streaming in a loop at the bottom of the screen. The disclaimer would read: “These talking heads have been scientifically proven to not know what they are talking about. Their blather is for entertainment purposes only.” The viewer would then be referred to Tetlock’s most famous research project, which began in 1984.
  • He picked a few hundred political experts – people who made their living “commenting or offering advice on political and economic trends” – and began asking them to make predictions about future events. He had a long list of pertinent questions. Would George Bush be re-elected? Would there be a peaceful end to apartheid in South Africa? Would Quebec secede from Canada? Would the dot-com bubble burst? In each case, the pundits were asked to rate the probability of several possible outcomes. Tetlock then interrogated the pundits about their thought process, so that he could better understand how they made up their minds.
  • Most of Tetlock’s questions had three possible answers; the pundits, on average, selected the right answer less than 33 percent of the time. In other words, a dart-throwing chimp would have beaten the vast majority of professionals. These results are summarized in his excellent Expert Political Judgment.
  • ...5 more annotations...
  • Some experts displayed a top-down style of reasoning: politics as a deductive art. They started with a big-idea premise about human nature, society, or economics and applied it to the specifics of the case. They tended to reach more confident conclusions about the future. And the positions they reached were easier to classify ideologically: that is the Keynesian prediction and that is the free-market fundamentalist prediction and that is the worst-case environmentalist prediction and that is the best-case technology-driven growth prediction etc. Other experts displayed a bottom-up style of reasoning: politics as a much messier inductive art. They reached less confident conclusions and they were more likely to draw on a seemingly contradictory mix of ideas in reaching those conclusions (sometimes from the left, sometimes from the right). We called the big-idea experts “hedgehogs” (they know one big thing) and the more eclectic experts “foxes” (they know many, not so big things).
  • The most consistent predictor of consistently more accurate forecasts was “style of reasoning”: experts with the more eclectic, self-critical, and modest cognitive styles tended to outperform the big-idea people (foxes tended to outperform hedgehogs).
  • Lehrer: Can non-experts do anything to encourage a more effective punditocracy?
  • Tetlock: Yes, non-experts can encourage more accountability in the punditocracy. Pundits are remarkably skillful at appearing to go out on a limb in their claims about the future, without actually going out on one. For instance, they often “predict” continued instability and turmoil in the Middle East (predicting the present) but they virtually never get around to telling you exactly what would have to happen to disconfirm their expectations. They are essentially impossible to pin down. If pundits felt that their public credibility hinged on participating in level playing field forecasting exercises in which they must pit their wits against an extremely difficult-to-predict world, I suspect they would learn, quite quickly, to be more flexible and foxlike in their policy pronouncements. (A minimal example of such a forecast-scoring rule follows.)
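
  The "level playing field forecasting exercises" Tetlock describes need a proper scoring rule, one that penalizes confident misses and gives no credit for unfalsifiable hedging. A standard choice is the Brier score; the sketch below is generic and the probabilities are invented, not Tetlock's data.

    # Brier score: mean squared error between forecast probabilities and what
    # actually happened (1 = event occurred). Lower is better; 0 is perfect.
    def brier(forecasts, outcomes):
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    outcomes = [1, 0, 0, 1]            # invented record of four events
    hedgehog = [0.9, 0.8, 0.9, 0.1]    # confident, often wrong
    fox = [0.7, 0.4, 0.3, 0.6]         # modest, better calibrated
    print("hedgehog:", brier(hedgehog, outcomes))  # 0.5675
    print("fox:", brier(fox, outcomes))            # 0.125
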
Javier E

Predicting the Future Is Easier Than It Looks - By Michael D. Ward and Nils Metternich ... - 0 views

  • The same statistical revolution that changed baseball has now entered American politics, and no one has been more successful in popularizing a statistical approach to political analysis than New York Times blogger Nate Silver, who of course cut his teeth as a young sabermetrician. And on Nov. 6, after having faced a torrent of criticism from old-school political pundits -- Washington's rough equivalent of statistically illiterate tobacco chewing baseball scouts -- the results of the presidential election vindicated Silver's approach, which correctly predicted the electoral outcome in all 50 states.
  • Today, there are several dozen ongoing, public projects that aim to in one way or another forecast the kinds of things foreign policymakers desperately want to be able to predict: various forms of state failure, famines, mass atrocities, coups d'état, interstate and civil war, and ethnic and religious conflict. So while U.S. elections might occupy the front page of the New York Times, the ability to predict instances of extreme violence and upheaval represent the holy grail of statistical forecasting -- and researchers are now getting close to doing just that.
  • In 2010 scholars from the Political Instability Task Force published a report that demonstrated the ability to correctly predict onsets of instability two years in advance in 18 of 21 instances (about 85%)
  • ...5 more annotations...
  • Let's consider a case in which Ulfelder argues there is insufficient data to render a prediction -- North Korea. There is no official data on North Korean GDP, so what can we do? It turns out that the same data science approaches that were used to aggregate polls have other uses as well. One is the imputation of missing data. Yes, even when it is all missing. The basic idea is to use the general correlations among data that you do have to provide an aggregate way of estimating information that we don't have. (A minimal regression-based sketch of this idea follows this list.)
  • In 2012 there were two types of models: one type based on fundamentals such as economic growth and unemployment and another based on public opinion surveys
  • As it turned out, in this month's election public opinion polls were considerably more precise than the fundamentals. The fundamentals were not always providing bad predictions, but better is better.
  • There is a tradition in world politics to go either back until the Congress of Vienna (when there were fewer than two dozen independent countries) or to the early 1950s after the end of the Second World War. But in reality, there is no need to do this for most studies.
  • Ulfelder tells us that "when it comes to predicting major political crises like wars, coups, and popular uprisings, there are many plausible predictors for which we don't have any data at all, and much of what we do have is too sparse or too noisy to incorporate into carefully designed forecasting models." But this is true only for the old style of models based on annual data for countries. If we are willing to face data that are collected in rhythm with the phenomena we are studying, this is not the case
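
  A minimal regression-based sketch of the imputation idea above (the proxy variable and all numbers are invented for illustration; the authors' methods are far more sophisticated): fit the relationship between an indicator observed everywhere and the variable of interest where it is observed, then use that relationship to estimate the missing value.

    # Illustrative only: impute a missing GDP figure from a correlated proxy
    # observed for every country (here, a made-up nighttime-light index).
    observed = [(1.0, 900), (2.0, 2100), (3.0, 2900), (4.0, 4100), (5.0, 5000)]
    xs = [x for x, _ in observed]
    ys = [y for _, y in observed]
    n = len(observed)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in observed)
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x

    lights_without_gdp_data = 1.4   # proxy reading for a country with no GDP figures
    print(f"imputed GDP per capita: {intercept + slope * lights_without_gdp_data:.0f}")
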
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • ...72 more annotations...
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived. (A minimal sketch of this expected-value arithmetic appears at the end of this item.)
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the court
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way.
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
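
  The pay-per-click mechanism described earlier in this item is why click prediction translates directly into revenue: an impression is worth, in expectation, the advertiser's payment times the predicted probability of a click. The sketch below is a generic illustration of that arithmetic with invented numbers, not Google's actual auction.

    # Illustrative only: expected revenue per impression = predicted CTR x bid,
    # so better click prediction changes which ad is shown and what it earns.
    ads = [
        {"name": "ad_a", "bid": 2.00, "predicted_ctr": 0.010},
        {"name": "ad_b", "bid": 0.80, "predicted_ctr": 0.030},
    ]
    for ad in ads:
        ad["expected_revenue"] = ad["bid"] * ad["predicted_ctr"]
    winner = max(ads, key=lambda ad: ad["expected_revenue"])
    print(f"show {winner['name']}: expected revenue per impression "
          f"${winner['expected_revenue']:.3f}")
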
Javier E

Opinion | What Do We Actually Know About the Economy? (Wonkish) - The New York Times - 0 views

  • Among economists more generally, a lot of the criticism seems to amount to the view that macroeconomics is bunk, and that we should stick to microeconomics, which is the real, solid stuff. As I’ll explain in a moment, that’s all wrong
  • in an important sense the past decade has been a huge validation for textbook macroeconomics; meanwhile, the exaltation of micro as the only “real” economics both gives microeconomics too much credit and is largely responsible for the ways macroeconomic theory has gone wrong.
  • Finally, many outsiders and some insiders have concluded from the crisis that economic theory in general is bunk, that we should take guidance from people immersed in the real world – say, business leaders — and/or concentrate on empirical results and skip the models
  • ...28 more annotations...
  • And while empirical evidence is important and we need more of it, the data almost never speak for themselves – a point amply illustrated by recent monetary events.
  • Schwinger, as I remember the story, was never seen to use a Feynman diagram. But he had a locked room in his house, and the rumor was that that room was where he kept the Feynman diagrams he used in secret.
  • What’s the equivalent of Feynman diagrams? Something like IS-LM, which is the simplest model you can write down of how interest rates and output are jointly determined, and is how most practicing macroeconomists actually think about short-run economic fluctuations. It’s also how they talk about macroeconomics to each other. But it’s not what they put in their papers, because the journals demand that your model have “microfoundations.” (A minimal worked version of IS-LM appears at the end of this item.)
  • The Bernanke Fed massively expanded the monetary base, by a factor of almost five. There were dire warnings that this would cause inflation and “debase the dollar.” But prices went nowhere, and not much happened to broader monetary aggregates (a result that, weirdly, some economists seemed to find deeply puzzling even though it was exactly what should have been expected.)
  • What about fiscal policy? Traditional macro said that at the zero lower bound there would be no crowding out – that deficits wouldn’t drive up interest rates, and that fiscal multipliers would be larger than under normal conditions. The first of these predictions was obviously borne out, as rates stayed low even when deficits were very large. The second prediction is a bit harder to test, for reasons I’ll get into when I talk about the limits of empiricism. But the evidence does indeed suggest large positive multipliers.
  • The overall story, then, is one of overwhelming predictive success. Basic, old-fashioned macroeconomics didn’t fail in the crisis – it worked extremely well
  • In fact, it’s hard to think of any other example of economic models working this well – making predictions that most non-economists (and some economists) refused to believe, indeed found implausible, but which came true. Where, for example, can you find any comparable successes in microeconomics?
  • Meanwhile, the demand that macro become ever more rigorous in the narrow, misguided sense that it look like micro led to useful approaches being locked up in Schwinger’s back room, and in all too many cases forgotten. When the crisis struck, it was amazing how many successful academics turned out not to know things every economist would have known in 1970, and indeed resurrected 1930-vintage fallacies in the belief that they were profound insights.
  • mainly I think it reflected the general unwillingness of human beings (a category that includes many though not necessarily all economists) to believe that so many people can be so wrong about something so big.
  • To normal human beings the study of international trade and that of international macroeconomics might sound like pretty much the same thing. In reality, however, the two fields used very different models, had very different intellectual cultures, and tended to look down on each other. Trade people tended to consider international macro people semi-charlatans, doing ad hoc stuff devoid of rigor. International macro people considered trade people boring, obsessed with proving theorems and offering little of real-world use.
  • does microeconomics really deserve its reputation of moral and intellectual superiority? No
  • Even before the rise of behavioral economics, any halfway self-aware economist realized that utility maximization – indeed, the very concept of utility — wasn’t a fact about the world; it was more of a thought experiment, whose conclusions should always have been stated in the subjunctive.
  • But, you say, we didn’t see the Great Recession coming. Well, what do you mean “we,” white man? OK, what’s true is that few economists realized that there was a huge housing bubble
  • True, a model doesn’t have to be perfect to provide hugely important insights. But here’s my question: where are the examples of microeconomic theory providing strong, counterintuitive, successful predictions on the same order as the success of IS-LM macroeconomics after 2008? Maybe there are some, but I can’t come up with any.
  • The point is not that micro theory is useless and we should stop doing it. But it doesn’t deserve to be seen as superior to macro modeling.
  • And the effort to make macro more and more like micro – to ground everything in rational behavior – has to be seen now as destructive. True, that effort did lead to some strong predictions: e.g., only unanticipated money should affect real output, transitory income changes shouldn’t affect consumer spending, government spending should crowd out private demand, etc. But all of those predictions have turned out to be wrong.
  • Kahneman and Tversky and Thaler and so on deserved all the honors they received for helping to document the specific ways in which utility maximization falls short, but even before their work we should never have expected perfect maximization to be a good description of reality.
  • But data never speak for themselves, for a couple of reasons. One, which is familiar, is that economists don’t get to do many experiments, and natural experiments are rare
  • The other problem is that even when we do get something like natural experiments, they often took place under economic regimes that aren’t relevant to current problems.
  • Both of these problems were extremely relevant in the years following the 2008 crisis.
  • you might be tempted to conclude that the empirical evidence is that monetary expansion is inflationary, indeed roughly one-for-one.
  • But the question, as the Fed embarked on quantitative easing, was what effect this would have on an economy at the zero lower bound. And while there were many historical examples of big monetary expansion, examples at the ZLB were much rarer – in fact, basically two: the U.S. in the 1930s and Japan in the early 2000s.
  • These examples told a very different story: that expansion would not, in fact, be inflationary, that it would work out the way it did.
  • The point is that empirical evidence can only do certain things. It can certainly prove that your theory is wrong! And it can also make a theory much more persuasive in those cases where the theory makes surprising predictions, which the data bear out. But the data can never absolve you from the necessity of having theories.
  • Over this past decade, I’ve watched a number of economists try to argue from authority: I am a famous professor, therefore you should believe what I say. This never ends well. I’ve also seen a lot of nihilism: economists don’t know anything, and we should tear the field down and start over.
  • Obviously I differ with both views. Economists haven’t earned the right to be snooty and superior, especially if their reputation comes from the ability to do hard math: hard math has been remarkably little help lately, if ever.
  • On the other hand, economists do turn out to know quite a lot: they do have some extremely useful models, usually pretty simple ones, that have stood up well in the face of evidence and events. And they definitely shouldn’t defer to important and/or rich people on policy.
  • Compare Janet Yellen’s macroeconomic track record with that of the multiple billionaires who warned that Bernanke would debase the dollar. Or take my favorite Business Week headline from 2010: “Krugman or [John] Paulson: Who You Gonna Bet On?” Um. The important thing is to be aware of what we do know, and why.
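
  IS-LM, mentioned above as the simplest model of how interest rates and output are jointly determined, can be written down in a few lines. The linear textbook version below uses invented coefficients (an illustration, not Krugman's calibration); it also shows the zero-lower-bound case the column discusses, where extra government spending raises output by the full multiplier without pushing the interest rate up.

    # Textbook linear IS-LM with invented parameters (illustrative only).
    # IS: Y = c0 + c1*(Y - T) + i0 - i1*r + G     (goods market)
    # LM: M/P = k*Y - h*r, with r floored at 0    (money market, zero lower bound)
    def equilibrium(G, M):
        c0, c1, T = 200.0, 0.6, 200.0   # consumption: C = c0 + c1*(Y - T)
        i0, i1 = 300.0, 20.0            # investment:  I = i0 - i1*r
        k, h, P = 0.5, 40.0, 1.0        # money demand: M/P = k*Y - h*r
        Y, r = 1000.0, 2.0
        for _ in range(2000):           # fixed-point iteration to the joint solution
            r = max((k * Y - M / P) / h, 0.0)         # LM, at or above the floor
            Y = c0 + c1 * (Y - T) + i0 - i1 * r + G   # IS
        return round(Y), round(r, 2)

    # Away from the zero lower bound, more G raises Y somewhat and pushes r up.
    print(equilibrium(G=300, M=500), equilibrium(G=400, M=500))
    # At the zero lower bound (money already abundant), the same rise in G
    # raises Y by the full multiplier and r stays at zero: no crowding out.
    print(equilibrium(G=300, M=1200), equilibrium(G=400, M=1200))
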
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • Lauren Rivera, a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.) A generic sketch of this kind of banded scoring appears at the end of this item.
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
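Pentland's headline finding—that face-to-face exchange counts alone predict about a third of team performance, with an inverted-U where too many exchanges hurt as much as too few—can be mimicked with a toy quadratic regression. The data below are simulated, not badge data, and the noise level is an assumption tuned so the fit lands near "about a third":

```python
# Toy illustration of predicting team performance from badge-style data.
# The inverted-U is modeled with a quadratic term; all values are simulated.
import numpy as np

rng = np.random.default_rng(0)
exchanges = rng.uniform(0, 100, size=200)   # face-to-face exchanges per day (simulated)
optimum = 50.0
# Simulated performance peaks at the optimum (inverted-U), plus noise.
performance = 1.0 - ((exchanges - optimum) / optimum) ** 2 + rng.normal(0, 0.4, 200)

# Fit performance ~ a + b*x + c*x^2 by least squares.
X = np.column_stack([np.ones_like(exchanges), exchanges, exchanges ** 2])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
residuals = performance - X @ coef
r2 = 1 - residuals.var() / performance.var()
print(f"R^2 from exchange counts alone: {r2:.2f}")   # lands around 0.3-0.4 here
```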
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
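A rough sense of how such a composite score might be assembled, purely as a sketch: public signals are normalized, weighted, and summed. The signal names, scales, and weights below are invented; Gild's actual scoring is proprietary and presumably far more elaborate:

```python
# Toy composite score for a programmer from public signals, in the spirit of
# the description above. All names, scales, and weights are invented.

def coder_score(signals: dict) -> float:
    weights = {
        "code_simplicity": 0.25,      # evaluated from open-source code
        "code_adoption": 0.20,        # how often others reuse the code
        "project_completion": 0.15,   # completion times on paid projects
        "qa_reputation": 0.20,        # popularity/range of forum answers
        "language_signals": 0.20,     # phrases correlated with strong coders
    }
    # Each signal is assumed to be pre-normalized to a 0-1 scale.
    return 100 * sum(weights[k] * signals.get(k, 0.0) for k in weights)

print(coder_score({"code_simplicity": 0.8, "code_adoption": 0.6,
                   "project_completion": 0.7, "qa_reputation": 0.9,
                   "language_signals": 0.5}))   # -> 70.5
```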
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
anonymous

Pandemic-Proof Your Habits - The New York Times - 1 views

  • The good news is that much of what we miss about our routines and customs, and what makes them beneficial to us as a species, has more to do with their comforting regularity than the actual behaviors
    • anonymous
       
      Our brains have that much power over our emotions, and can change how we feel about the world when they experience a change in routine.
  • The key to coping during this, or any, time of upheaval is to quickly establish new routines so that, even if the world is uncertain, there are still things you can count on.
    • anonymous
       
      I haven't really thought of this, since I'm so set on getting back to old routines.
  • Human beings are prediction machines.
  • Our brains are statistical organs that are built simply to predict what will happen next
    • anonymous
       
      I don't know if we've talked about this specifically, more that we like and tend to make up patterns to "predict" the future and reassure ourselves. However, it's not real.
  • This makes sense because, in prehistoric times, faulty predictions could lead to some very unpleasant surprises — like a tiger eating you or sinking in quicksand.
  • So-called prediction errors (like finding salmon instead of turkey on your plate on Thanksgiving) send us into a tizzy because our brains interpret them as a potential threat.
    • anonymous
       
      We have talked about this- the survival aspect of this reaction to change.
  • Keep doing what you’ve been doing, because you did it before, and you didn’t die.
    • anonymous
       
      A good way of putting it.
  • all essentially subconscious efforts to make your world more predictable, orderly and safe.
  • Routines and rituals also conserve precious brainpower
  • It turns out our brains are incredibly greedy when it comes to energy consumption, sucking up 20 percent of calories while accounting for only 2 percent of overall body weight.
  • Our brains are literally overburdened with all the uncertainty caused by the pandemic.
  • Not only is there the seeming capriciousness of the virus, but we no longer have the routines that served as the familiar scaffolding of our lives
  • “It’s counterintuitive because we think of meaning in life as coming from these grandiose experiences
    • anonymous
       
      I've definitely felt this way.
  • Of course, you can always take routines and rituals too far, such as the extremely controlled and repetitive behaviors indicative of addiction, obsessive-compulsive disorder and various eating disorders.
  • it’s mundane routines that give us structure to help us pare things down and better navigate the world, which helps us make sense of things and feel that life has meaning.”
  • In the coronavirus era, people may resort to obsessive cleaning, hoarding toilet paper, stockpiling food or neurotically wearing masks when driving alone in their cars. On the other end of the spectrum are those who stubbornly adhere to their old routines because stopping feels more threatening than the virus.
  • You’re much better off establishing a new routine within the limited environment that we find ourselves in
  • Luckily, there is a vast repertoire of habits you can adopt and routines you can establish to structure your days no matter what crises are unfolding around you
  • The point is to find what works for you. It just needs to be regular and help you achieve your goals, whether intellectually, emotionally, socially or professionally. The best habits not only provide structure and order but also give you a sense of pleasure, accomplishment or confidence upon completion.
  • It could be as simple as making your bed as soon as you get up in the morning or committing to working the same hours in the same spot.
  • Pandemic-proof routines might include weekly phone or video calls with friends, Taco Tuesdays with the family, hiking with your spouse on weekends, regularly filling a bird feeder, set times for prayer or meditation, front yard happy hours with the neighbors or listening to an audiobook every night before bed.
  • The truth is that you cannot control what happens in life. But you can create a routine that gives your life a predictable rhythm and secure mooring.
    • anonymous
       
      It's all about changing your thoughts and not tricking exactly but helping your brain.
  • This frees your brain to develop perspective so you’re better able to take life’s surprises in stride.
  • I attended a Thanksgiving dinner several years ago where the hostess, without warning family and friends, broke with tradition and served salmon instead of turkey, roasted potatoes instead of mashed, raspberry coulis instead of cranberry sauce and … you get the idea.
  • Too many people are still longing for their old routines. Get some new ones instead.
  • It wasn’t that the meal itself was bad. In fact, the meal was outstanding. The problem was that it wasn’t the meal everyone was expecting.
  • When there are discrepancies between expectations and reality, all kinds of distress signals go off in the brain.
  • It doesn’t matter if it’s a holiday ritual or more mundane habit like how you tie your shoes; if you can’t do it the way you normally do it, you’re biologically engineered to get upset.
  • This in part explains people’s grief and longing for the routines that were the background melodies of their lives before the pandemic
sanderk

Coronavirus deaths in US: 200,000 could die, researchers predict - Business Insider - 1 views

  • Last week, the country saw its cases spike more than 40% in just 24 hours. This week, the number of daily cases continues to rise — even as Americans practice social distancing by working from home, limiting outdoor excursions, and staying 6 feet away from one another.
  • They estimated only 12% of coronavirus cases (including asymptomatic ones) had been reported in the US as of March 15, which would mean about 29,000 infections had gone undiagnosed by that time. The US has reported more than 69,000 cases and over 1,000 deaths as of Thursday.
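The under-reporting estimate is straightforward arithmetic: divide reported cases by the assumed reporting rate to infer total infections. A sketch, with the March 15 reported-case count set to a round figure purely for illustration:

```python
# Back-of-envelope under-reporting arithmetic. The 12% reporting rate comes
# from the study cited above; the reported-case count is an assumed round
# figure used only to show the calculation.
reporting_rate = 0.12
reported_cases = 3_500          # assumed, for illustration

estimated_total = reported_cases / reporting_rate
undiagnosed = estimated_total - reported_cases
print(f"Inferred total infections: {estimated_total:,.0f}")   # roughly 29,000
print(f"Of which undiagnosed:      {undiagnosed:,.0f}")
```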
  • The most extreme model predicted that up to 1.2 million people could die. By comparison, a typical flu season in the US kills between 11,000 and 95,000 people, according to the Centers for Disease Control and Prevention. 
  • Some estimated that the CDC had reported more than 20% of COVID-19 cases as of March 15, but others predicted that the agency had identified just 5% of cases. Some predicted that the US could see 1 million deaths by the end of 2020, while others predicted that the death toll would be in the thousands.
  • The New York Times recently used CDC data to model how the virus could spread if no actions were taken to stop transmission in the US. The models show that between 160 million and 214 million people could be infected and as many as 200,000 to 1.7 million people could die.
  • Even if all patients were able to receive treatment at hospitals, however, the researchers predicted that about 1.2 million people in the US could die.
  • But since this particular coronavirus hasn't been seen before in humans, scientists aren't certain whether it will behave the same way. Plus, it's spreading in places with high temperatures, like Australia.
  • A second outbreak could also arise after people resume normal activity. The US asked citizens to avoid international travel starting March 19, but opening its borders again could fuel the virus' spread. The same goes for allowing citizens to return to work or use mass transit.
Javier E

Covid-19 expert Karl Friston: 'Germany may have more immunological "dark matter"' | Wor... - 0 views

  • Our approach, which borrows from physics and in particular the work of Richard Feynman, goes under the bonnet. It attempts to capture the mathematical structure of the phenomenon – in this case, the pandemic – and to understand the causes of what is observed. Since we don’t know all the causes, we have to infer them. But that inference, and implicit uncertainty, is built into the models
  • That’s why we call them generative models, because they contain everything you need to know to generate the data. As more data comes in, you adjust your beliefs about the causes, until your model simulates the data as accurately and as simply as possible.
  • A common type of epidemiological model used today is the SEIR model, which considers that people must be in one of four states – susceptible (S), exposed (E), infected (I) or recovered (R). Unfortunately, reality doesn’t break them down so neatly. For example, what does it mean to be recovered?
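For reference, the SEIR structure Friston is contrasting with takes only a few lines to write down. This is a minimal Euler-stepped sketch with illustrative parameter values, not anyone's fitted model:

```python
# Minimal SEIR simulation (the four-compartment model described above),
# integrated with a simple Euler step. Parameter values are illustrative.
N = 1_000_000                          # population
beta, sigma, gamma = 0.3, 1/5, 1/10    # transmission, incubation, recovery rates
S, E, I, R = N - 1, 0, 1, 0
dt = 1.0                               # one day

for day in range(200):
    new_exposed   = beta * S * I / N * dt
    new_infected  = sigma * E * dt
    new_recovered = gamma * I * dt
    S -= new_exposed
    E += new_exposed - new_infected
    I += new_infected - new_recovered
    R += new_recovered

print(f"After 200 days: susceptible={S:,.0f}, recovered={R:,.0f}")
```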
  • SEIR models start to fall apart when you think about the underlying causes of the data. You need models that can allow for all possible states, and assess which ones matter for shaping the pandemic’s trajectory over time.
  • These techniques have enjoyed enormous success ever since they moved out of physics. They’ve been running your iPhone and nuclear power stations for a long time. In my field, neurobiology, we call the approach dynamic causal modelling (DCM). We can’t see brain states directly, but we can infer them given brain imaging data
  • Epidemiologists currently tackle the inference problem by number-crunching on a huge scale, making use of high-performance computers. Imagine you want to simulate an outbreak in Scotland. Using conventional approaches, this would take you a day or longer with today’s computing resources. And that’s just to simulate one model or hypothesis – one set of parameters and one set of starting conditions.
  • Using DCM, you can do the same thing in a minute. That allows you to score different hypotheses quickly and easily, and so to home in sooner on the best one.
  • This is like dark matter in the universe: we can’t see it, but we know it must be there to account for what we can see. Knowing it exists is useful for our preparations for any second wave, because it suggests that targeted testing of those at high risk of exposure to Covid-19 might be a better approach than non-selective testing of the whole population.
  • Our response as individuals – and as a society – becomes part of the epidemiological process, part of one big self-organising, self-monitoring system. That means it is possible to predict not only numbers of cases and deaths in the future, but also societal and institutional responses – and to attach precise dates to those predictions.
  • How well have your predictions been borne out in this first wave of infections? For London, we predicted that hospital admissions would peak on 5 April, deaths would peak five days later, and critical care unit occupancy would not exceed capacity – meaning the Nightingale hospitals would not be required. We also predicted that improvements would be seen in the capital by 8 May that might allow social distancing measures to be relaxed – which they were in the prime minister’s announcement on 10 May. To date our predictions have been accurate to within a day or two, so there is a predictive validity to our models that the conventional ones lack.
  • What do your models say about the risk of a second wave? The models support the idea that what happens in the next few weeks is not going to have a great impact in terms of triggering a rebound – because the population is protected to some extent by immunity acquired during the first wave. The real worry is that a second wave could erupt some months down the line when that immunity wears off.
  • the important message is that we have a window of opportunity now, to get test-and-trace protocols in place ahead of that putative second wave. If these are implemented coherently, we could potentially defer that wave beyond a time horizon where treatments or a vaccine become available, in a way that we weren’t able to before the first one.
  • We’ve been comparing the UK and Germany to try to explain the comparatively low fatality rates in Germany. The answers are sometimes counterintuitive. For example, it looks as if the low German fatality rate is not due to their superior testing capacity, but rather to the fact that the average German is less likely to get infected and die than the average Brit. Why? There are various possible explanations, but one that looks increasingly likely is that Germany has more immunological “dark matter” – people who are impervious to infection, perhaps because they are geographically isolated or have some kind of natural resistance
  • Any other advantages? Yes. With conventional SEIR models, interventions and surveillance are something you add to the model – tweaks or perturbations – so that you can see their effect on morbidity and mortality. But with a generative model these things are built into the model itself, along with everything else that matters.
  • Are generative models the future of disease modelling? That’s a question for the epidemiologists – they’re the experts. But I would be very surprised if at least some part of the epidemiological community didn’t become more committed to this approach in future, given the impact that Feynman’s ideas have had in so many other disciplines.
Javier E

Opinion | You Are the Object of Facebook's Secret Extraction Operation - The New York T... - 0 views

  • Facebook is not just any corporation. It reached trillion-dollar status in a single decade by applying the logic of what I call surveillance capitalism — an economic system built on the secret extraction and manipulation of human data
  • Facebook and other leading surveillance capitalist corporations now control information flows and communication infrastructures across the world.
  • These infrastructures are critical to the possibility of a democratic society, yet our democracies have allowed these companies to own, operate and mediate our information spaces unconstrained by public law.
  • The result has been a hidden revolution in how information is produced, circulated and acted upon
  • The world’s liberal democracies now confront a tragedy of the “un-commons.” Information spaces that people assume to be public are strictly ruled by private commercial interests for maximum profit.
  • The internet as a self-regulating market has been revealed as a failed experiment. Surveillance capitalism leaves a trail of social wreckage in its wake: the wholesale destruction of privacy, the intensification of social inequality, the poisoning of social discourse with defactualized information, the demolition of social norms and the weakening of democratic institutions.
  • These social harms are not random. They are tightly coupled effects of evolving economic operations. Each harm paves the way for the next and is dependent on what went before.
  • There is no way to escape the machine systems that surveil us.
  • All roads to economic and social participation now lead through surveillance capitalism’s profit-maximizing institutional terrain, a condition that has intensified during nearly two years of global plague.
  • Will Facebook’s digital violence finally trigger our commitment to take back the “un-commons”?
  • Will we confront the fundamental but long ignored questions of an information civilization: How should we organize and govern the information and communication spaces of the digital century in ways that sustain and advance democratic values and principles?
  • Mark Zuckerberg’s start-up did not invent surveillance capitalism. Google did that. In 2000, when only 25 percent of the world’s information was stored digitally, Google was a tiny start-up with a great search product but little revenue.
  • By 2001, in the teeth of the dot-com bust, Google’s leaders found their breakthrough in a series of inventions that would transform advertising. Their team learned how to combine massive data flows of personal information with advanced computational analyses to predict where an ad should be placed for maximum “click through.”
  • Google’s scientists learned how to extract predictive metadata from this “data exhaust” and use it to analyze likely patterns of future behavior.
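As a generic illustration of this kind of click-through prediction—not Google's actual system—a logistic regression over a few behavioral features captures the basic mechanics. The features, weights, and data below are all simulated:

```python
# Generic click-through-rate prediction sketch: logistic regression fit by
# gradient descent on simulated behavioral features. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
X = np.column_stack([
    rng.poisson(2, n),            # past clicks on similar ads (hypothetical feature)
    rng.uniform(0, 1, n),         # query-ad relevance score (hypothetical feature)
    rng.integers(0, 24, n) / 24,  # hour of day, scaled to [0, 1)
])
true_w = np.array([0.4, 2.0, -0.5])
p = 1 / (1 + np.exp(-(X @ true_w - 2.0)))      # "true" click probabilities
clicks = rng.binomial(1, p)

# Fit logistic-regression weights by plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2_000):
    pred = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (pred - clicks) / n
    b -= 0.5 * np.mean(pred - clicks)

print("learned weights:", np.round(w, 2), "learned bias:", round(b, 2))
```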
  • Prediction was the first imperative that determined the second imperative: extraction.
  • Lucrative predictions required flows of human data at unimaginable scale. Users did not suspect that their data was secretly hunted and captured from every corner of the internet and, later, from apps, smartphones, devices, cameras and sensors
  • User ignorance was understood as crucial to success. Each new product was a means to more “engagement,” a euphemism used to conceal illicit extraction operations.
  • When asked “What is Google?” the co-founder Larry Page laid it out in 2001,
  • “Storage is cheap. Cameras are cheap. People will generate enormous amounts of data,” Mr. Page said. “Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable.”
  • Instead of selling search to users, Google survived by turning its search engine into a sophisticated surveillance medium for seizing human data
  • Company executives worked to keep these economic operations secret, hidden from users, lawmakers, and competitors. Mr. Page opposed anything that might “stir the privacy pot and endanger our ability to gather data,” Mr. Edwards wrote.
  • As recently as 2017, Eric Schmidt, the executive chairman of Google’s parent company, Alphabet, acknowledged the role of Google’s algorithmic ranking operations in spreading corrupt information. “There is a line that we can’t really get across,” he said. “It is very difficult for us to understand truth.” A company with a mission to organize and make accessible all the world’s information using the most sophisticated machine systems cannot discern corrupt information.
  • This is the economic context in which disinformation wins
  • In March 2008, Mr. Zuckerberg hired Google’s head of global online advertising, Sheryl Sandberg, as his second in command. Ms. Sandberg had joined Google in 2001 and was a key player in the surveillance capitalism revolution. She led the build-out of Google’s advertising engine, AdWords, and its AdSense program, which together accounted for most of the company’s $16.6 billion in revenue in 2007.
  • A Google multimillionaire by the time she met Mr. Zuckerberg, Ms. Sandberg had a canny appreciation of Facebook’s immense opportunities for extraction of rich predictive data. “We have better information than anyone else. We know gender, age, location, and it’s real data as opposed to the stuff other people infer,” Ms. Sandberg explained
  • The company had “better data” and “real data” because it had a front-row seat to what Mr. Page had called “your whole life.”
  • Facebook paved the way for surveillance economics with new privacy policies in late 2009. The Electronic Frontier Foundation warned that new “Everyone” settings eliminated options to restrict the visibility of personal data, instead treating it as publicly available information.
  • Mr. Zuckerberg “just went for it” because there were no laws to stop him from joining Google in the wholesale destruction of privacy. If lawmakers wanted to sanction him as a ruthless profit-maximizer willing to use his social network against society, then 2009 to 2010 would have been a good opportunity.
  • Facebook was the first follower, but not the last. Google, Facebook, Amazon, Microsoft and Apple are private surveillance empires, each with distinct business models.
  • In 2021 these five U.S. tech giants represent five of the six largest publicly traded companies by market capitalization in the world.
  • As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information
  • Today all apps and software, no matter how benign they appear, are designed to maximize data collection.
  • Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic
  • The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage.
  • Fifty years ago the conservative economist Milton Friedman exhorted American executives, “There is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game.” Even this radical doctrine did not reckon with the possibility of no rules.
  • With privacy out of the way, ill-gotten human data are concentrated within private corporations, where they are claimed as corporate assets to be deployed at will.
  • The sheer size of this knowledge gap is conveyed in a leaked 2018 Facebook document, which described its artificial intelligence hub, ingesting trillions of behavioral data points every day and producing six million behavioral predictions each second.
  • Next, these human data are weaponized as targeting algorithms, engineered to maximize extraction and aimed back at their unsuspecting human sources to increase engagement
  • Targeting mechanisms change real life, sometimes with grave consequences. For example, the Facebook Files depict Mr. Zuckerberg using his algorithms to reinforce or disrupt the behavior of billions of people. Anger is rewarded or ignored. News stories become more trustworthy or unhinged. Publishers prosper or wither. Political discourse turns uglier or more moderate. People live or die.
  • Occasionally the fog clears to reveal the ultimate harm: the growing power of tech giants willing to use their control over critical information infrastructure to compete with democratically elected lawmakers for societal dominance.
  • when it comes to the triumph of surveillance capitalism’s revolution, it is the lawmakers of every liberal democracy, especially in the United States, who bear the greatest burden of responsibility. They allowed private capital to rule our information spaces during two decades of spectacular growth, with no laws to stop it.
  • All of it begins with extraction. An economic order founded on the secret massive-scale extraction of human data assumes the destruction of privacy as a nonnegotiable condition of its business operations.
  • We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications
  • The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.
  • Neither Google, nor Facebook, nor any other corporate actor in this new economic order set out to destroy society, any more than the fossil fuel industry set out to destroy the earth.
  • like global warming, the tech giants and their fellow travelers have been willing to treat their destructive effects on people and society as collateral damage — the unfortunate but unavoidable byproduct of perfectly legal economic operations that have produced some of the wealthiest and most powerful corporations in the history of capitalism.
  • Where does that leave us?
  • Democracy is the only countervailing institutional order with the legitimate authority and power to change our course. If the ideal of human self-governance is to survive the digital century, then all solutions point to one solution: a democratic counterrevolution.
  • instead of the usual laundry lists of remedies, lawmakers need to proceed with a clear grasp of the adversary: a single hierarchy of economic causes and their social harms.
  • We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes
  • This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces
  • Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.
  • Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.”
  • No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests
  • the sober truth is that we need lawmakers ready to engage in a once-a-century exploration of far more basic questions:
  • How should we structure and govern information, connection and communication in a democratic digital century?
  • What new charters of rights, legislative frameworks and institutions are required to ensure that data collection and use serve the genuine needs of individuals and society?
  • What measures will protect citizens from unaccountable power over information, whether it is wielded by private companies or governments?
  • The corporation that is Facebook may change its name or its leaders, but it will not voluntarily change its economics.
Javier E

Opinion | Noam Chomsky: The False Promise of ChatGPT - The New York Times - 0 views

  • we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
  • OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought
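A toy way to see what "statistically probable outputs" means: count word-to-word transitions in a tiny corpus and emit the likeliest continuation at each step. Real systems use enormous neural networks rather than bigram counts, but the statistical character Chomsky is pointing at is the same:

```python
# Toy "statistically probable output" generator: a bigram model that counts
# transitions and greedily picks the most probable next word. Illustrative
# only; it is not how ChatGPT works internally, just the same general flavor.
from collections import Counter, defaultdict

corpus = "the apple falls because the apple is heavy and the apple falls to earth".split()

# Count word-to-word transitions.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Greedily emit the most probable next word at each step."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))   # "the apple falls because the apple falls"
```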
  • if machine learning programs like ChatGPT continue to dominate the field of A.I
  • we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
  • It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
  • The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question
  • the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations
  • such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case
  • Those are the ingredients of explanation, the mark of true intelligence.
  • Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.”
  • an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.
  • The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws
  • any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered.
  • ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible.
  • Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
  • For this reason, the predictions of machine learning systems will always be superficial and dubious.
  • some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience.
  • While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”
  • The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.
  • This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism)
  • True intelligence is also capable of moral thinking
  • To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content
  • In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.
  • Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
  • In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
Javier E

Assessing Kurzweil: the results - Less Wrong - 0 views

  • when talking about unprecedented future events such as nanotechnology or AI, the choice of the model is also dependent on expert judgement.
  • In various books, he's made predictions about what would happen in 2009, and we're now in a position to judge their accuracy. I haven't been satisfied by the various accuracy ratings I've found online, so I decided to do my own assessments.
  • Ray Kurzweil has a model of technological intelligence development where, broadly speaking, evolution, pre-computer technological development, post-computer technological development and future AIs all fit into the same exponential increase.
  • relying on a single assessor is unreliable, especially when some of the judgements are subjective. So I started a call for volunteers to get assessors. Meanwhile Malo Bourgon set up a separate assessment on Youtopia, harnessing the awesome power of altruists chasing after points. The results are now in, and they are fascinating. They are...
Javier E

Nate Silver, Artist of Uncertainty - 0 views

  • In 2008, Nate Silver correctly predicted the results of all 35 Senate races and the presidential results in 49 out of 50 states. Since then, his website, fivethirtyeight.com (now central to The New York Times’s political coverage), has become an essential source of rigorous, objective analysis of voter surveys to predict the Electoral College outcome of presidential campaigns. 
  • Political junkies, activists, strategists, and journalists will gain a deeper and more sobering sense of Silver’s methods in The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t (Penguin Press). A brilliant analysis of forecasting in finance, geology, politics, sports, weather, and other domains, Silver’s book is also an original fusion of cognitive psychology and modern statistical theory.
  • Its most important message is that the first step toward improving our predictions is learning how to live with uncertainty.
  • The second step is starting to understand why it is that big data, super computers, and mathematical sophistication haven’t made us better at separating signals (information with true predictive value) from noise (misleading information). 
  • Silver’s background in sports and poker turns out to be invaluable. Successful analysts in gambling and sports are different from fans and partisans—far more aware that “sure things” are likely to be illusions,
  • he blends the best of modern statistical analysis with research on cognition biases pioneered by Princeton psychologist and Nobel laureate in economics  Daniel Kahneman and the late Stanford psychologist Amos Tversky. 
  • One of the biggest problems we have in separating signal from noise is that when we look too hard for certainty that isn’t there, we often end up attracted to noise, either because it is more prominent or because it confirms what we would like to believe.
  • In discipline after discipline, Silver shows in his book that when you look at even the best single forecast, the average of all independent forecasts is 15 to 20 percent more accurate. 
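The benefit of aggregation is easy to demonstrate in a toy setting. The sketch below assumes independent, unbiased forecast errors, which makes the improvement much larger than Silver's empirical 15 to 20 percent; real forecasters share sources and biases, so their errors are correlated and the gain is smaller:

```python
# Toy demonstration that averaging forecasts beats a typical single forecast.
# The error model (independent, unbiased Gaussian noise) is an idealization.
import numpy as np

rng = np.random.default_rng(42)
truth = 50.0
n_forecasters, n_trials = 10, 20_000

forecasts = truth + rng.normal(0, 3.0, size=(n_trials, n_forecasters))
single_error = np.abs(forecasts[:, 0] - truth).mean()
average_error = np.abs(forecasts.mean(axis=1) - truth).mean()

print(f"typical error of one forecaster: {single_error:.2f}")
print(f"typical error of the average:    {average_error:.2f}")
# Under independence the average's error shrinks by ~1/sqrt(10); correlated
# real-world errors make the actual gain closer to Silver's 15-20 percent.
```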
  • Silver has taken the next major step: constantly incorporating both state polls and national polls into Bayesian models that also incorporate economic data.
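A stripped-down version of that kind of update: treat the prior (standing in loosely for national or fundamentals-based information) and a single state poll as Gaussians and combine them by precision weighting. All numbers are illustrative, and Silver's actual models are far richer:

```python
# Toy Bayesian update of a candidate's state-level vote share: conjugate
# Gaussian combination of a prior and one poll. Numbers are illustrative.
prior_mean, prior_sd = 0.50, 0.05          # prior belief about vote share
poll_mean, poll_n = 0.53, 800              # state poll: 53% in a sample of 800
poll_sd = (poll_mean * (1 - poll_mean) / poll_n) ** 0.5

# Precision-weighted average of prior and poll.
prior_prec, poll_prec = 1 / prior_sd**2, 1 / poll_sd**2
post_mean = (prior_prec * prior_mean + poll_prec * poll_mean) / (prior_prec + poll_prec)
post_sd = (1 / (prior_prec + poll_prec)) ** 0.5
print(f"posterior vote share: {post_mean:.3f} +/- {post_sd:.3f}")
```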
  • Silver explains why we will be misled if we only consider significance tests—i.e., statements that the margin of error for the results is, for example, plus or minus four points, meaning there is one chance in 20 that the percentages reported are off by more than four. Calculations like these assume the only source of error is sampling error—the irreducible error—while ignoring errors attributable to house effects, like the proportion of cell-phone users, one of the complex set of assumptions every pollster must make about who will actually vote. In other words, such an approach ignores context in order to avoid having to justify and defend judgments. 
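The "plus or minus four points, one chance in 20" framing corresponds to the standard 95 percent margin of error for sampling error alone. A worked version (a poll of roughly 600 respondents at 50 percent support gives about four points):

```python
# Sampling error only: the 95% margin of error for a proportion p from a
# simple random sample of size n.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(0.5, 600):.3f}")   # ~0.040, i.e. about 4 points
# As the passage notes, this ignores non-sampling error such as house effects,
# so the true uncertainty is larger than the reported margin.
```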
Javier E

How Reliable Are the Social Sciences? - NYTimes.com - 3 views

  • media reports often seem to assume that any result presented as “scientific” has a claim to our serious attention. But this is hardly a reasonable view.  There is considerable distance between, say, the confidence we should place in astronomers’ calculations of eclipses and a small marketing study suggesting that consumers prefer laundry soap in blue boxes
  • A rational assessment of a scientific result must first take account of the broader context of the particular science involved.  Where does the result lie on the continuum from preliminary studies, designed to suggest further directions of research, to maximally supported conclusions of the science?
  • Second, and even more important, there is our overall assessment of work in a given science in comparison with other sciences.
  • The core natural sciences (e.g., physics, chemistry, biology) are so well established that we readily accept their best-supported conclusions as definitive.
  • Even the best-developed social sciences like economics have nothing like this status.
  • when it comes to generating reliable scientific knowledge, there is nothing more important than frequent and detailed predictions of future events.  We may have a theory that explains all the known data, but that may be just the result of our having fitted the theory to that data.  The strongest support for a theory comes from its ability to correctly predict data that it was not designed to explain.
  • The case for a negative answer lies in the predictive power of the core natural sciences compared with even the most highly developed social sciences
  • Is there any work on the effectiveness of teaching that is solidly enough established to support major policy decisions?
  • While the physical sciences produce many detailed and precise predictions, the social sciences do not. 
  • most social science research falls far short of the natural sciences’ standard of controlled experiments.
  • Without a strong track record of experiments leading to successful predictions, there is seldom a basis for taking social scientific results as definitive.
  • Because of the many interrelated causes at work in social systems, many questions are simply “impervious to experimentation.”
  • even when we can get reliable experimental results, the causal complexity restricts us to “extremely conditional, statistical statements,” which severely limit the range of cases to which the results apply.
  • above all, we need to develop a much better sense of the severely limited reliability of social scientific results.   Media reports of research should pay far more attention to these limitations, and scientists reporting the results need to emphasize what they don’t show as much as what they do.
  • Given the limited predictive success and the lack of consensus in social sciences, their conclusions can seldom be primary guides to setting policy.  At best, they can supplement the general knowledge, practical experience, good sense and critical intelligence that we can only hope our political leaders will have.
margogramiak

We hear what we expect to hear -- ScienceDaily - 0 views

  • Despite the senses being the only window to the outside world, people rarely question how faithfully they represent the external physical reality.
    • margogramiak
       
      We've questioned our senses A LOT in TOK!
  • the cerebral cortex constantly generates predictions on what will happen next, and that neurons in charge of sensory processing only encode the difference between our predictions and the actual reality.
    • margogramiak
       
      That's really interesting. We've touched on similar concepts, but nothing exactly like this.
  • that not only the cerebral cortex, but the entire auditory pathway, represents sounds according to prior expectations.
    • margogramiak
       
      So, multiple parts of our brain make predictions about what's going to happen next.
  • Although participants recognised the deviant faster when it was placed in positions where they expected it, the subcortical nuclei encoded the sounds only when they were placed in unexpected positions.
    • margogramiak
       
      That's interesting. How will this research affect medicine etc?
  • Predictive coding assumes that the brain is constantly generating predictions about how the physical world will look, sound, feel, and smell in the next instant, and that neurons in charge of processing our senses save resources by representing only the differences between these predictions and the actual physical world.
    • margogramiak
       
      I remember from class that the brain looks for patterns with its senses. Does that apply here?
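A minimal sketch of the predictive-coding idea described above: the quantity carried forward is the prediction error, and the internal prediction is nudged toward each new input. This is purely illustrative and not a model of the study's auditory-pathway data; the learning rate and input sequence are made up:

```python
# Minimal predictive-coding sketch: units carry only the prediction error
# (actual input minus prediction), and the prediction is updated each step.
learning_rate = 0.3
prediction = 0.0

for actual_input in [1.0, 1.0, 1.0, 1.0, 4.0, 1.0]:   # a surprising "deviant" at step 5
    error = actual_input - prediction       # what gets represented/transmitted
    prediction += learning_rate * error     # update the internal model
    print(f"input={actual_input:.1f}  error={error:+.2f}  new prediction={prediction:.2f}")
```

The error shrinks as the repeated input becomes expected, then spikes when the deviant arrives, echoing the "prediction error" responses described in the excerpts.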
  • "We have now shown that this process also dominates the most primitive and evolutionarily conserved parts of the brain. All that we perceive might be deeply contaminated by our subjective beliefs about the physical world."
    • margogramiak
       
      Perception is crazy...
  • Developmental dyslexia, the most widespread learning disorder, has already been linked to altered responses in the subcortical auditory pathway and to difficulties in exploiting stimulus regularities in auditory perception.
    • margogramiak
       
      That's interesting. I can see why that would affect learning.
kaylynfreeman

How Reliable Are the Social Sciences? - The New York Times - 1 views

  • How much authority should we give to such work in our policy decisions?  The question is important because media reports often seem to assume that any result presented as “scientific” has a claim to our serious attention.
  • A rational assessment of a scientific result must first take account of the broader context of the particular science involved.  Where does the result lie on the continuum from preliminary studies, designed to suggest further directions of research, to maximally supported conclusions of the science? 
  • Second, and even more important, there is our overall assessment of work in a given science in comparison with other sciences.  The core natural sciences (e.g., physics, chemistry, biology) are so well established that we readily accept their best-supported conclusions as definitive. 
  • ...5 more annotations...
  • While the physical sciences produce many detailed and precise predictions, the social sciences do not.  The reason is that such predictions almost always require randomized controlled experiments, which are seldom possible when people are involved.  For one thing, we are too complex: our behavior depends on an enormous number of tightly interconnected variables that are extraordinarily difficult to  distinguish and study separately
  • Without a strong track record of experiments leading to successful predictions, there is seldom a basis for taking social scientific results as definitive
  • [This is not to say that] our policy discussions should simply ignore social scientific research.  We should, as Manzi himself proposes, find ways of injecting more experimental data into government decisions.  But above all, we need to develop a much better sense of the severely limited reliability of social scientific results.  Media reports of research should pay far more attention to these limitations, and scientists reporting the results need to emphasize what they don’t show as much as what they do.
  • Given the limited predictive success and the lack of consensus in social sciences, their conclusions can seldom be primary guides to setting policy.  At best, they can supplement the general knowledge, practical experience, good sense and critical intelligence that we can only hope our political leaders will have.
  • Social sciences may be surrounded by the “paraphernalia” of the natural sciences, such as technical terminology, mathematical equations, empirical data and even carefully designed experiments. 
Javier E

DeepMind uncovers structure of 200m proteins in scientific leap forward | DeepMind | Th... - 0 views

  • Highlighter
  • Proteins are the building blocks of life. Formed of chains of amino acids, folded up into complex shapes, their 3D structure largely determines their function. Once you know how a protein folds up, you can start to understand how it works, and how to change its behaviour
  • Although DNA provides the instructions for making the chain of amino acids, predicting how they interact to form a 3D shape was more tricky and, until recently, scientists had only deciphered a fraction of the 200m or so proteins known to science
  • ...7 more annotations...
  • In November 2020, the AI group DeepMind announced it had developed a program called AlphaFold that could rapidly predict this information using an algorithm. Since then, it has been crunching through the genetic codes of every organism that has had its genome sequenced, and predicting the structures of the hundreds of millions of proteins they collectively contain.
  • Last year, DeepMind published the protein structures for 20 species – including nearly all 20,000 proteins expressed by humans – on an open database. Now it has finished the job, and released predicted structures for more than 200m proteins.
  • “Essentially, you can think of it as covering the entire protein universe. It includes predictive structures for plants, bacteria, animals, and many other organisms, opening up huge new opportunities for AlphaFold to have an impact on important issues, such as sustainability, food insecurity, and neglected diseases,”
  • In May, researchers led by Prof Matthew Higgins at the University of Oxford announced they had used AlphaFold’s models to help determine the structure of a key malaria parasite protein, and work out where antibodies that could block transmission of the parasite were likely to bind.
  • “Previously, we’d been using a technique called protein crystallography to work out what this molecule looks like, but because it’s quite dynamic and moves around, we just couldn’t get to grips with it,” Higgins said. “When we took the AlphaFold models and combined them with this experimental evidence, suddenly it all made sense. This insight will now be used to design improved vaccines which induce the most potent transmission-blocking antibodies.”
  • AlphaFold’s models are also being used by scientists at the University of Portsmouth’s Centre for Enzyme Innovation, to identify enzymes from the natural world that could be tweaked to digest and recycle plastics. “It took us quite a long time to go through this massive database of structures, but opened this whole array of new three-dimensional shapes we’d never seen before that could actually break down plastics,” said Prof John McGeehan, who is leading the work. “There’s a complete paradigm shift. We can really accelerate where we go from here
  • “AlphaFold protein structure predictions are already being used in a myriad of ways. I expect that this latest update will trigger an avalanche of new and exciting discoveries in the months and years ahead, and this is all thanks to the fact that the data are available openly for all to use.”
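
The open availability mentioned above is concrete: predicted structures can be fetched directly from the public AlphaFold database by UniProt accession. The sketch below uses only Python's standard library; the URL pattern and the "model_v4" version suffix are assumptions that should be checked against the current AlphaFold DB documentation, and the accession P69905 (human haemoglobin alpha subunit) is just an example.

```python
import urllib.request

# Assumed URL pattern for the public AlphaFold DB; the "_v4" version
# suffix changes over time, so verify it against the current docs.
ALPHAFOLD_URL = "https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"

def fetch_predicted_structure(accession: str, out_path: str) -> str:
    """Download the AlphaFold-predicted structure for a UniProt accession
    (e.g. 'P69905', human haemoglobin alpha) as a PDB file."""
    url = ALPHAFOLD_URL.format(accession=accession)
    urllib.request.urlretrieve(url, out_path)
    return out_path

if __name__ == "__main__":
    path = fetch_predicted_structure("P69905", "AF-P69905.pdb")
    print(f"saved predicted structure to {path}")
```
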
Javier E

Opinion | Do You Live in a 'Tight' State or a 'Loose' One? Turns Out It Matters Quite a... - 0 views

  • Political biases are omnipresent, but what we don’t fully understand yet is how they come about in the first place.
  • In 2014, Michele J. Gelfand, a professor of psychology at the Stanford Graduate School of Business (formerly at the University of Maryland), and Jesse R. Harrington, then a Ph.D. candidate, conducted a study designed to rank the 50 states on a scale of “tightness” and “looseness.”
  • titled “Tightness-Looseness Across the 50 United States,” the study calculated a catalog of measures for each state, including the incidence of natural disasters, disease prevalence, residents’ levels of openness and conscientiousness, drug and alcohol use, homelessness and incarceration rates.
  • ...64 more annotations...
  • Gelfand and Harrington predicted that “‘tight’ states would exhibit a higher incidence of natural disasters, greater environmental vulnerability, fewer natural resources, greater incidence of disease and higher mortality rates, higher population density, and greater degrees of external threat.”
  • The South dominated the tight states: Mississippi, Alabama, Arkansas, Oklahoma, Tennessee, Texas, Louisiana, Kentucky, South Carolina and North Carolina
  • states in New England and on the West Coast were the loosest: California, Oregon, Washington, Maine, Massachusetts, Connecticut, New Hampshire and Vermont.
  • Cultural differences, Gelfand continued, “have a certain logic — a rationale that makes good sense,” noting that “cultures that have threats need rules to coordinate to survive (think about how incredibly coordinated Japan is in response to natural disasters).
  • “Rule Makers, Rule Breakers: How Tight and Loose Cultures Wire the World” in 2018, in which she described the results of a 2016 pre-election survey she and two colleagues had commissioned
  • The results were telling: People who felt the country was facing greater threats desired greater tightness. This desire, in turn, correctly predicted their support for Trump. In fact, desired tightness predicted support for Trump far better than other measures. For example, a desire for tightness predicted a vote for Trump with 44 times more accuracy than other popular measures of authoritarianism.
  • The 2016 election, Gelfand continued, “turned largely on primal cultural reflexes — ones that had been conditioned not only by cultural forces, but by a candidate who was able to exploit them.”
  • Gelfand said: Some groups have much stronger norms than others; they’re tight. Others have much weaker norms; they’re loose. Of course, all cultures have areas in which they are tight and loose — but cultures vary in the degree to which they emphasize norms and compliance with them.
  • In both 2016 and 2020, Donald Trump carried all 10 of the top “tight” states; Hillary Clinton and Joe Biden carried all 10 of the top “loose” states.
  • The tight-loose concept, Gelfand argued, is an important framework to understand the rise of President Donald Trump and other leaders in Poland, Hungary, Italy, and France,
  • cultures that don’t have a lot of threat can afford to be more permissive and loose.”
  • The gist is this: when people perceive threat — whether real or imagined, they want strong rules and autocratic leaders to help them survive
  • My research has found that within minutes of exposing study participants to false information about terrorist incidents, overpopulation, pathogen outbreaks and natural disasters, their minds tightened. They wanted stronger rules and punishments.
  • Gelfand writes that tightness encourages conscientiousness, social order and self-control on the plus side, along with close-mindedness, conventional thinking and cultural inertia on the minus side.
  • Looseness, Gelfand posits, fosters tolerance, creativity and adaptability, along with such liabilities as social disorder, a lack of coordination and impulsive behavior.
  • If liberalism and conservatism have historically played a complementary role, each checking the other to constrain extremism, why are the left and right so destructively hostile to each other now, and why is the contemporary political system so polarized?
  • Along the same lines, if liberals and conservatives hold differing moral visions, not just about what makes a good government but about what makes a good life, what turned the relationship between left and right from competitive to mutually destructive?
  • As a set, Niemi wrote, conservative binding values encompass the values oriented around group preservation, and are associated with judgments, decisions, and interpersonal orientations that sacrifice the welfare of individuals
  • She cited research that found 47 percent of the most extreme conservatives strongly endorsed the view that “The world is becoming a more and more dangerous place,” compared to 19 percent of the most extreme liberals
  • Conservatives and liberals, Niemi continued, see different things as threats — the nature of the threat and how it happens to stir one’s moral values (and their associated emotions) is a better clue to why liberals and conservatives react differently.
  • Unlike liberals, conservatives strongly endorse the binding moral values aimed at protecting groups and relationships. They judge transgressions involving personal and national betrayal, disobedience to authority, and disgusting or impure acts such as sexually or spiritually unchaste behavior as morally relevant and wrong.
  • Underlying these differences are competing sets of liberal and conservative moral priorities, with liberals placing more stress than conservatives on caring, kindness, fairness and rights — known among scholars as “individualizing values”
  • conservatives focus more on loyalty, hierarchy, deference to authority, sanctity and a higher standard of disgust, known as “binding values.”
  • Niemi contended that sensitivity to various types of threat is a key factor in driving differences between the far left and far right.
  • For example, binding values are associated with Machiavellianism (e.g., status-seeking and lying, getting ahead by any means, 2013); victim derogation, blame, and beliefs that victims were causal contributors for a variety of harmful acts (2016, 2020); and a tendency to excuse transgressions of ingroup members with attributions to the situation rather than the person (2023).
  • Niemi cited a paper she and Liane Young, a professor of psychology at Boston College, published in 2016, “When and Why We See Victims as Responsible: The Impact of Ideology on Attitudes Toward Victims,” which tested responses of men and women to descriptions of crimes including sexual assaults and robberies.
  • We measured moral values associated with unconditionally prohibiting harm (“individualizing values”) versus moral values associated with prohibiting behavior that destabilizes groups and relationships (“binding values”: loyalty, obedience to authority, and purity)
  • Increased endorsement of binding values predicted increased ratings of victims as contaminated, increased blame and responsibility attributed to victims, increased perceptions of victims’ (versus perpetrators’) behaviors as contributing to the outcome, and decreased focus on perpetrators.
  • A central explanation typically offered for the current situation in American politics is that partisanship and political ideology have developed into strong social identities where the mass public is increasingly sorted — along social, partisan, and ideological lines.
  • What happened to people ecologically affected social-political developments, including the content of the rules people made and how they enforced them
  • Just as ecological factors differing from region to region over the globe produced different cultural values, ecological factors differed throughout the U.S. historically and today, producing our regional and state-level dimensions of culture and political patterns.
  • Joshua Hartshorne, who is also a professor of psychology at Boston College, took issue with the binding versus individualizing values theory as an explanation for the tendency of conservatives to blame victims:
  • I would guess that the reason conservatives are more likely to blame the victim has less to do with binding values and more to do with the just-world bias (the belief that good things happen to good people and bad things happen to bad people, therefore if a bad thing happened to you, you must be a bad person).
  • Belief in a just world, Hartshorne argued, is crucial for those seeking to protect the status quo: It seems psychologically necessary for anyone who wants to advocate for keeping things the way they are that the haves should keep on having, and the have-nots have got as much as they deserve. I don’t see how you could advocate for such a position while simultaneously viewing yourself as moral (and almost everyone believes that they themselves are moral) without also believing in the just world
  • Conversely, if you generally believe the world is not just, and you view yourself as a moral person, then you are likely to feel like you have an obligation to change things.
  • I asked Lene Aaroe, a political scientist at Aarhus University in Denmark, why the contemporary American political system is as polarized as it is now, given that the liberal-conservative schism is longstanding. What has happened to produce such intense hostility between left and right?
  • There is variation across countries in hostility between left and right. The United States is a particularly polarized case which calls for a contextual explanation
  • I then asked Aaroe why surveys find that conservatives are happier than liberals. “Some research,” she replied, “suggests that experiences of inequality constitute a larger psychological burden to liberals because it is more difficult for liberals to rationalize inequality as a phenomenon with positive consequences.”
  • Numerous factors potentially influence the evolution of liberalism and conservatism and other social-cultural differences, including geography, topography, catastrophic events, and subsistence styles
  • Steven Pinker, a professor of psychology at Harvard, elaborated in an email on the link between conservatism and happiness:
  • It’s a combination of factors. Conservatives are likelier to be married, patriotic, and religious, all of which make people happier
  • They may be less aggrieved by the status quo, whereas liberals take on society’s problems as part of their own personal burdens. Liberals also place politics closer to their identity and striving for meaning and purpose, which is a recipe for frustration.
  • Some features of the woke faction of liberalism may make people unhappier: as Jon Haidt and Greg Lukianoff have suggested, wokeism is Cognitive Behavioral Therapy in reverse, urging upon people maladaptive mental habits such as catastrophizing, feeling like a victim of forces beyond one’s control, prioritizing emotions of hurt and anger over rational analysis, and dividing the world into allies and villains.
  • Why, I asked Pinker, would liberals and conservatives react differently — often very differently — to messages that highlight threat?
  • It may be liberals (or at least the social-justice wing) who are more sensitive to threats, such as white supremacy, climate change, and patriarchy; who may be likelier to moralize, seeing racism and transphobia in messages that others perceive as neutral; and being likelier to surrender to emotions like “harm” and “hurt.”
  • While liberals and conservatives, guided by different sets of moral values, may make agreement on specific policies difficult, that does not necessarily preclude consensus.
  • there are ways to persuade conservatives to support liberal initiatives and to persuade liberals to back conservative proposals:
  • While liberals tend to be more concerned with protecting vulnerable groups from harm and more concerned with equality and social justice than conservatives, conservatives tend to be more concerned with moral issues like group loyalty, respect for authority, purity and religious sanctity than liberals are. Because of these different moral commitments, we find that liberals and conservatives can be persuaded by quite different moral arguments
  • For example, we find that conservatives are more persuaded by a same-sex marriage appeal articulated in terms of group loyalty and patriotism, rather than equality and social justice.
  • Liberals who read the fairness argument were substantially more supportive of military spending than those who read the loyalty and authority argument.
  • We find support for these claims across six studies involving diverse political issues, including same-sex marriage, universal health care, military spending, and adopting English as the nation’s official language.”
  • In one test of persuadability on the right, Feinberg and Willer assigned some conservatives to read an editorial supporting universal health care as a matter of “fairness (health coverage is a basic human right)” or to read an editorial supporting health care as a matter of “purity (uninsured people means more unclean, infected, and diseased Americans).”
  • Conservatives who read the purity argument were much more supportive of health care than those who read the fairness case.
  • “political arguments reframed to appeal to the moral values of those holding the opposing political position are typically more effective
  • In “Conservative and Liberal Attitudes Drive Polarized Neural Responses to Political Content,” Willer, Yuan Chang Leong of the University of Chicago, Janice Chen of Johns Hopkins and Jamil Zaki of Stanford address the question of how partisan biases are encoded in the brain:
  • How do such biases arise in the brain? We measured the neural activity of participants watching videos related to immigration policy. Despite watching the same videos, conservative and liberal participants exhibited divergent neural responses. This “neural polarization” between groups occurred in a brain area associated with the interpretation of narrative content and intensified in response to language associated with risk, emotion, and morality. Furthermore, polarized neural responses predicted attitude change in response to the videos.
  • The four authors argue that their “findings suggest that biased processing in the brain drives divergent interpretations of political information and subsequent attitude polarization.” These results, they continue, “shed light on the psychological and neural underpinnings of how identical information is interpreted differently by conservatives and liberals.”
  • The authors used neural imaging to follow changes in the dorsomedial prefrontal cortex (known as DMPFC) as conservatives and liberals watched videos presenting strong positions, left and right, on immigration.
  • “For each video,” they write, participants with DMPFC activity time courses more similar to that of conservative-leaning participants became more likely to support the conservative position
  • Conversely, those with DMPFC activity time courses more similar to that of liberal-leaning participants became more likely to support the liberal position. These results suggest that divergent interpretations of the same information are associated with increased attitude polarization
  • Together, our findings describe a neural basis for partisan biases in processing political information and their effects on attitude change.
  • Describing their neuroimaging method, the authors point out that they searched for evidence of “neural polarization” activity in the brain that diverges between people who hold liberal versus conservative political attitudes. Neural polarization was observed in the dorsomedial prefrontal cortex (DMPFC), a brain region associated with the interpretation of narrative content.
  • The question is whether the political polarization that we are witnessing now proves to be a core, encoded aspect of the human mind, difficult to overcome — as Leong, Chen, Zaki and Willer sugges
  • — or whether, with our increased knowledge of the neural basis of partisan and other biases, we will find more effective ways to manage these most dangerous of human predispositions.
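
The neuroimaging result summarized above rests on comparing each participant's DMPFC activity time course with those of conservative- and liberal-leaning viewers. The sketch below is a rough illustration of that similarity logic in Python with NumPy, not the authors' actual analysis pipeline: it correlates one participant's time course with each group's average and uses the difference as a crude "leaning" score. All data here are synthetic and all names are hypothetical.

```python
import numpy as np

def leaning_score(participant_ts, conservative_group_ts, liberal_group_ts):
    """Correlate one participant's DMPFC time course with the average time
    course of conservative- and liberal-leaning reference groups; positive
    values mean 'more conservative-like', negative 'more liberal-like'."""
    conservative_mean = conservative_group_ts.mean(axis=0)
    liberal_mean = liberal_group_ts.mean(axis=0)
    r_con = np.corrcoef(participant_ts, conservative_mean)[0, 1]
    r_lib = np.corrcoef(participant_ts, liberal_mean)[0, 1]
    return r_con - r_lib

# Synthetic example: 20 reference participants per group, 200 time points each.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 200)
con_group = np.sin(t) + rng.normal(0, 0.5, (20, 200))
lib_group = np.cos(t) + rng.normal(0, 0.5, (20, 200))
new_participant = np.sin(t) + rng.normal(0, 0.5, 200)

print(f"leaning score: {leaning_score(new_participant, con_group, lib_group):+.2f}")
```
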