
Home/ TOK Friends/ Group items tagged reliable sources


Javier E

Why it's as hard to escape an echo chamber as it is to flee a cult | Aeon Essays - 0 views

  • there are two very different phenomena at play here, each of which subvert the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs.
  • they work in entirely different ways, and they require very different modes of intervention
  • An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.
  • start with epistemic bubbles
  • That omission might be purposeful
  • But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests
  • An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited. Where an epistemic bubble merely omits contrary views, an echo chamber brings its members to actively distrust outsiders.
  • an echo chamber is something like a cult. A cult isolates its members by actively alienating them from any outside sources. Those outside are actively labelled as malignant and untrustworthy.
  • In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined.
  • The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.
  • Looking to others for corroboration is a basic method for checking whether one has reasoned well or badly
  • They have been in the limelight lately, most famously in Eli Pariser’s The Filter Bubble (2011) and Cass Sunstein’s #Republic: Divided Democracy in the Age of Social Media (2017).
  • The general gist: we get much of our news from Facebook feeds and similar sorts of social media. Our Facebook feed consists mostly of our friends and colleagues, the majority of whom share our own political and cultural views
  • various algorithms behind the scenes, such as those inside Google search, invisibly personalise our searches, making it more likely that we’ll see only what we want to see. These processes all impose filters on information.
  • Such filters aren’t necessarily bad. The world is overstuffed with information, and one can’t sort through it all by oneself: filters need to be outsourced.
  • That’s why we all depend on extended social networks to deliver us knowledge
  • any such informational network needs the right sort of broadness and variety to work
  • Each individual person in my network might be superbly reliable about her particular informational patch but, as an aggregate structure, my network lacks what Sanford Goldberg in his book Relying on Others (2010) calls ‘coverage-reliability’. It doesn’t deliver to me a sufficiently broad and representative coverage of all the relevant information.
  • Epistemic bubbles also threaten us with a second danger: excessive self-confidence.
  • An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission
  • Suppose that I believe that the Paleo diet is the greatest diet of all time. I assemble a Facebook group called ‘Great Health Facts!’ and fill it only with people who already believe that Paleo is the best diet. The fact that everybody in that group agrees with me about Paleo shouldn’t increase my confidence level one bit. They’re not mere copies – they actually might have reached their conclusions independently – but their agreement can be entirely explained by my method of selection.
  • Luckily, though, epistemic bubbles are easily shattered. We can pop an epistemic bubble simply by exposing its members to the information and arguments that they’ve missed.
  • echo chambers are a far more pernicious and robust phenomenon.
  • Jamieson and Cappella’s book is the first empirical study into how echo chambers function
  • echo chambers work by systematically alienating their members from all outside epistemic sources.
  • Their research centres on Rush Limbaugh, a wildly successful conservative firebrand in the United States, along with Fox News and related media
  • His constant attacks on the ‘mainstream media’ are attempts to discredit all other sources of knowledge. He systematically undermines the integrity of anybody who expresses any kind of contrary view.
  • outsiders are not simply mistaken – they are malicious, manipulative and actively working to destroy Limbaugh and his followers. The resulting worldview is one of deeply opposed forces, an all-or-nothing war between good and evil
  • The result is a rather striking parallel to the techniques of emotional isolation typically practised in cult indoctrination
  • cult indoctrination involves new cult members being brought to distrust all non-cult members. This provides a social buffer against any attempts to extract the indoctrinated person from the cult.
  • The echo chamber doesn’t need any bad connectivity to function. Limbaugh’s followers have full access to outside sources of information
  • As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain
  • Their worldview can survive exposure to those outside voices because their belief system has prepared them for such intellectual onslaught.
  • exposure to contrary views could actually reinforce their views. Limbaugh might offer his followers a conspiracy theory: anybody who criticises him is doing it at the behest of a secret cabal of evil elites, which has already seized control of the mainstream media.
  • Perversely, exposure to outsiders with contrary views can thus increase echo-chamber members’ confidence in their insider sources, and hence their attachment to their worldview.
  • ‘evidential pre-emption’. What’s happening is a kind of intellectual judo, in which the power and enthusiasm of contrary voices are turned against those contrary voices through a carefully rigged internal structure of belief.
  • One might be tempted to think that the solution is just more intellectual autonomy. Echo chambers arise because we trust others too much, so the solution is to start thinking for ourselves.
  • that kind of radical intellectual autonomy is a pipe dream. If the philosophical study of knowledge has taught us anything in the past half-century, it is that we are irredeemably dependent on each other in almost every domain of knowledge
  • Limbaugh’s followers regularly read – but do not accept – mainstream and liberal news sources. They are isolated, not by selective exposure, but by changes in who they accept as authorities, experts and trusted sources.
  • we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.
  • I am quite confident that there are plenty of echo chambers on the political Left. More importantly, nothing about echo chambers restricts them to the arena of politics
  • The world of anti-vaccination is clearly an echo chamber, and it is one that crosses political lines. I’ve also encountered echo chambers on topics as broad as diet (Paleo!), exercise technique (CrossFit!), breastfeeding, some academic intellectual traditions, and many, many more
  • Here’s a basic check: does a community’s belief system actively undermine the trustworthiness of any outsiders who don’t subscribe to its central dogmas? Then it’s probably an echo chamber.
  • much of the recent analysis has lumped epistemic bubbles together with echo chambers into a single, unified phenomenon. But it is absolutely crucial to distinguish between the two.
  • Epistemic bubbles are rather ramshackle; they go up easily, and they collapse easily
  • Echo chambers are far more pernicious and far more robust. They can start to seem almost like living things. Their belief systems provide structural integrity, resilience and active responses to outside attacks
  • the two phenomena can also exist independently. And of the events we’re most worried about, it’s the echo-chamber effects that are really causing most of the trouble.
  • new data does, in fact, seem to show that people on Facebook actually do see posts from the other side, and that people often visit websites of the opposite political affiliation.
  • their basis for evaluation – their background beliefs about whom to trust – is radically different. They are not irrational, but systematically misinformed about where to place their trust.
  • Many people have claimed that we have entered an era of ‘post-truth’.
  • Not only do some political figures seem to speak with a blatant disregard for the facts, but their supporters seem utterly unswayed by evidence. It seems, to some, that truth no longer matters.
  • This is an explanation in terms of total irrationality. To accept it, you must believe that a great number of people have lost all interest in evidence or investigation, and have fallen away from the ways of reason.
  • the theory of echo chambers offers a less damning and far more modest explanation. The apparent ‘post-truth’ attitude can be explained as the result of the manipulations of trust wrought by echo chambers.
  • We don’t have to attribute a complete disinterest in facts, evidence or reason to explain the post-truth attitude. We simply have to attribute to certain communities a vastly divergent set of trusted authorities.
  • An echo chamber doesn’t destroy its members’ interest in the truth; it merely manipulates whom they trust and changes whom they accept as trustworthy sources and institutions.
  • in many ways, echo-chamber members are following reasonable and rational procedures of enquiry. They’re engaging in critical reasoning. They’re questioning, they’re evaluating sources for themselves, they’re assessing different pathways to information. They are critically examining those who claim expertise and trustworthiness, using what they already know about the world
  • none of this weighs against the existence of echo chambers. We should not dismiss the threat of echo chambers based only on evidence about connectivity and exposure.
  • Notice how different what’s going on here is from, say, Orwellian doublespeak, a deliberately ambiguous, euphemism-filled language designed to hide the intent of the speaker.
  • echo chambers don’t trade in vague, ambiguous pseudo-speech. We should expect that echo chambers would deliver crisp, clear, unambiguous claims about who is trustworthy and who is not
  • clearly articulated conspiracy theories, and crisply worded accusations of an outside world rife with untrustworthiness and corruption.
  • Once an echo chamber starts to grip a person, its mechanisms will reinforce themselves.
  • In an epistemically healthy life, the variety of our informational sources will put an upper limit to how much we’re willing to trust any single person. Everybody’s fallible; a healthy informational network tends to discover people’s mistakes and point them out. This puts an upper ceiling on how much you can trust even your most beloved leader
  • Inside an echo chamber, that upper ceiling disappears.
  • Being caught in an echo chamber is not always the result of laziness or bad faith. Imagine, for instance, that somebody has been raised and educated entirely inside an echo chamber
  • when the child finally comes into contact with the larger world – say, as a teenager – the echo chamber’s worldview is firmly in place. That teenager will distrust all sources outside her echo chamber, and she will have gotten there by following normal procedures for trust and learning.
  • It certainly seems like our teenager is behaving reasonably. She could be going about her intellectual life in perfectly good faith. She might be intellectually voracious, seeking out new sources, investigating them, and evaluating them using what she already knows.
  • The worry is that she’s intellectually trapped. Her earnest attempts at intellectual investigation are led astray by her upbringing and the social structure in which she is embedded.
  • Echo chambers might function like addiction, under certain accounts. It might be irrational to become addicted, but all it takes is a momentary lapse – once you’re addicted, your internal landscape is sufficiently rearranged such that it’s rational to continue with your addiction
  • Similarly, all it takes to enter an echo chamber is a momentary lapse of intellectual vigilance. Once you’re in, the echo chamber’s belief systems function as a trap, making future acts of intellectual vigilance only reinforce the echo chamber’s worldview.
  • There is at least one possible escape route, however. Notice that the logic of the echo chamber depends on the order in which we encounter the evidence. An echo chamber can bring our teenager to discredit outside beliefs precisely because she encountered the echo chamber’s claims first. Imagine a counterpart to our teenager who was raised outside of the echo chamber and exposed to a wide range of beliefs. Our free-range counterpart would, when she encounters that same echo chamber, likely see its many flaws
  • Those caught in an echo chamber are giving far too much weight to the evidence they encounter first, just because it’s first. Rationally, they should reconsider their beliefs without that arbitrary preference. But how does one enforce such informational a-historicity?
  • The escape route is a modified version of René Descartes’s infamous method.
  • Meditations on First Philosophy (1641). He had come to realise that many of the beliefs he had acquired in his early life were false. But early beliefs lead to all sorts of other beliefs, and any early falsehoods he’d accepted had surely infected the rest of his belief system.
  • The only solution, thought Descartes, was to throw all his beliefs away and start over again from scratch.
  • He could start over, trusting nothing and no one except those things that he could be entirely certain of, and stamping out those sneaky falsehoods once and for all. Let’s call this the Cartesian epistemic reboot.
  • Notice how close Descartes’s problem is to our hapless teenager’s, and how useful the solution might be. Our teenager, like Descartes, has problematic beliefs acquired in early childhood. These beliefs have infected outwards, infesting that teenager’s whole belief system. Our teenager, too, needs to throw everything away, and start over again.
  • Let’s call the modernised version of Descartes’s methodology the social-epistemic reboot.
  • when she starts from scratch, we won’t demand that she trust only what she’s absolutely certain of, nor will we demand that she go it alone
  • For the social reboot, she can proceed, after throwing everything away, in an utterly mundane way – trusting her senses, trusting others. But she must begin afresh socially – she must reconsider all possible sources of information with a presumptively equanimous eye. She must take the posture of a cognitive newborn, open and equally trusting to all outside sources
  • we’re not asking people to change their basic methods for learning about the world. They are permitted to trust, and trust freely. But after the social reboot, that trust will not be narrowly confined and deeply conditioned by the particular people they happened to be raised by.
  • Such a profound deep-cleanse of one’s whole belief system seems to be what’s actually required to escape. Look at the many stories of people leaving cults and echo chambers
  • Take, for example, the story of Derek Black in Florida – raised by a neo-Nazi father, and groomed from childhood to be a neo-Nazi leader. Black left the movement by, basically, performing a social reboot. He completely abandoned everything he’d believed in, and spent years building a new belief system from scratch. He immersed himself broadly and open-mindedly in everything he’d missed – pop culture, Arabic literature, the mainstream media, rap – all with an overall attitude of generosity and trust.
  • It was the project of years and a major act of self-reconstruction, but those extraordinary lengths might just be what’s actually required to undo the effects of an echo-chambered upbringing.
  • we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.
  • Stories of actual escapes from echo chambers often turn on particular encounters – moments when the echo-chambered individual starts to trust somebody on the outside.
  • Black’s is a case in point. By high school, he was already something of a star on neo-Nazi media, with his own radio talk-show. He went on to college, openly neo-Nazi, and was shunned by almost every other student in his community college. But then Matthew Stevenson, a Jewish fellow undergraduate, started inviting Black to Stevenson’s Shabbat dinners. In Black’s telling, Stevenson was unfailingly kind, open and generous, and slowly earned Black’s trust. This was the seed, says Black, that led to a massive intellectual upheaval – a slow-dawning realisation of the depths to which he had been misled
  • Similarly, accounts of people leaving echo-chambered homophobia rarely involve them encountering some institutionally reported fact. Rather, they tend to revolve around personal encounters – a child, a family member, a close friend coming out.
  • These encounters matter because a personal connection comes with a substantial store of trust.
  • We don’t simply trust people as educated experts in a field – we rely on their goodwill. And this is why trust, rather than mere reliability, is the key concept
  • goodwill is a general feature of a person’s character. If I demonstrate goodwill in action, then you have some reason to think that I also have goodwill in matters of thought and knowledge.
  • If one can demonstrate goodwill to an echo-chambered member – as Stevenson did with Black – then perhaps one can start to pierce that echo chamber.
  • the path I’m describing is a winding, narrow and fragile one. There is no guarantee that such trust can be established, and no clear path to its being established systematically.
  • what we’ve found here isn’t an escape route at all. It depends on the intervention of another. This path is not even one an echo-chamber member can trigger on her own; it is only a whisper-thin hope for rescue from the outside.
Javier E

Reasons for Reason - NYTimes.com - 0 views

  • Rick Perry’s recent vocal dismissals of evolution, and his confident assertion that “God is how we got here” reflect an obvious divide in our culture.
  • underneath this divide is a deeper one. Really divisive disagreements are typically not just over the facts. They are also about the best way to support our views of the facts. Call this a disagreement in epistemic principle. Our epistemic principles tell us what is rational to believe, what sources of information to trust.
  • I suspect that for most people, scientific evidence (or its lack) has nothing to do with it. Their belief in creationism is instead a reflection of a deeply held epistemic principle: that, at least on some topics, scripture is a more reliable source of information than science.  For others, including myself, this is never the case.
  • appealing to another method won’t help either — for unless that method can be shown to be reliable, using it to determine the reliability of the first method answers nothing.
  • Every one of our beliefs is produced by some method or source, be it humble (like memory) or complex (like technologically assisted science). But why think our methods, whatever they are, are trustworthy or reliable for getting at the truth? If I challenge one of your methods, you can’t just appeal to the same method to show that it is reliable. That would be circular
  • How do we rationally defend our most fundamental epistemic principles? Like many of the best philosophical mysteries, this is a problem that can seem both unanswerable and yet extremely important to solve.
  • it seems to suggest that in the end, all “rational” explanations end up grounding out on something arbitrary. It all just comes down to what you happen to believe, what you feel in your gut, your faith.  Human beings have historically found this to be a very seductive idea,
  • this is precisely the situation we seem to be headed towards in the United States. We live isolated in our separate bubbles of information culled from sources that only reinforce our prejudices and never challenge our basic assumptions. No wonder that — as the debates over evolution, or over what to include in textbooks, illustrate — we so often fail to reach agreement over the history and physical structure of the world itself. No wonder joint action grinds to a halt. When you can’t agree on your principles of evidence and rationality, you can’t agree on the facts. And if you can’t agree on the facts, you can hardly agree on what to do in the face of the facts.
  • We can’t decide on what counts as a legitimate reason to doubt my epistemic principles unless we’ve already settled on our principles—and that is the very issue in question.
  • The problem that skepticism about reason raises is not about whether I have good evidence by my principles for my principles. Presumably I do.[1] The problem is whether I can give a more objective defense of them. That is, whether I can give reasons for them that can be appreciated from what Hume called a “common point of view” — reasons that can “move some universal principle of the human frame, and touch a string, to which all mankind have an accord and symphony.”[2]
  • Any way you go, it seems you must admit you can give no reason for trusting your methods, and hence can give no reason to defend your most fundamental epistemic principles.
  • So one reason we should take the project of defending our epistemic principles seriously is that the ideal of civility demands it.
  • there is also another, even deeper, reason. We need to justify our epistemic principles from a common point of view because we need shared epistemic principles in order to even have a common point of view. Without a common background of standards against which we measure what counts as a reliable source of information, or a reliable method of inquiry, and what doesn’t, we won’t be able to agree on the facts, let alone values.
  • democracies aren’t simply organizing a struggle for power between competing interests; democratic politics isn’t war by other means. Democracies are, or should be, spaces of reasons.
  • we need an epistemic common currency because we often have to decide, jointly, what to do in the face of disagreement.
  • Sometimes we can accomplish this, in a democratic society, by voting. But we can’t decide every issue that way
  • We need some forms of common currency before we get to the voting booth.
  • Even if, as the skeptic says, we can’t defend the truth of our principles without circularity, we might still be able to show that some are better than others. Observation and experiment, for example, aren’t just good because they are reliable means to the truth. They are valuable because almost everyone can appeal to them. They have roots in our natural instincts, as Hume might have said.
  • that is one reason we need to resist skepticism about reason: we need to be able to give reasons for why some standards of reasons — some epistemic principles — should be part of that currency and some not.
  • Reasons for Reason, by Michael P. Lynch
Javier E

Tips for Keeping the Peace and Making a Difference Around Politics at Thanksgiving - Ad... - 1 views

  • Creating more peace, rather than more polarization
  • Avoid starting out with “you’re wrong.” The result of starting out that way is rarely that the other person ends up saying “oh, yes, you are totally right and I AM wrong!” Even if it’s true, it’s usually not effective.
  • If you really plan to get deep into political discussions, I recommend reading up on news sources from the side opposite your views. Really read them closely and deeply. This will familiarize you with the arguments that convince your opposite-side relative and reduce your shock when you hear them repeat those arguments. It will allow you to react with more peace and less anger, and will allow you to prepare your arguments better. Remember, the arguments that convince you (that you read in your news sources) are not the same ones that convince them. Address the arguments that convince them.
  • Find out exactly what news sources your relatives read and watch and how they access those sources. Do they click on Facebook links, read just the headlines, use apps, visit sites, or watch TV? Inquire with the intent to really understand their habits.
  • If your opposite-side relative flat-out refuses to read any sources outside their own low-reliability, highly biased ones and dismisses sources you find credible out-of-hand, that might be a sign that discussing politics with that relative is not an effective use of your time or energy. In such an instance, I’d just encourage compassion toward them.
  • Share authentically about something that affects you personally. When discussing politics, sharing authentically about something you have experienced is usually much more effective than sharing something abstract about a policy or politician.
peterconnelly

Meet the Wikipedia editor who published the Buffalo shooting entry minutes after it sta... - 0 views

  • After Jason Moore, from Portland, Oregon, saw headlines from national news sources on Google News about the Buffalo shooting at a local supermarket on Saturday afternoon, he did a quick search for the incident on Wikipedia. When no results appeared, he drafted a single sentence: "On May 14, 2022, 10 people were killed in a mass shooting in Buffalo, New York." He hit save and published the entry on Wikipedia in less than a minute.
  • That article, which as of Friday has been viewed more than 900,000 times, has since undergone 1,071 edits by 223 editors who've voluntarily updated the page on the internet's free and largest crowdsourced encyclopedia.
  • He's credited with creating 50,000 entries
  • In the middle of breaking news, when people are searching for information, some platforms can present more questions than answers. Although Wikipedia is not staffed with professional journalists, it is viewed as an authoritative source by much of the public, for better or for worse. Its entries are also used for fact-checking purposes by some of the biggest social platforms, adding to the stakes and reach of the work from Moore and others.
  • "Editing Wikipedia can absolutely take an emotional toll on me, especially when working on difficult topics such as the COVID-19 pandemic, mass shootings, terrorist attacks, and other disasters," he said.
  • "I like the instant gratification of making the internet better," he said.
  • "I want to direct people to something that is going to provide them with much more reliable information at a time when it's very difficult for people to understand what sources they can trust."
  • "It is considered cool if you're the first person who creates an article, especially if you do it well with high-quality contributions," said Rasberry.
  • To help patrol incoming edits and predict misconduct or errors, Wikipedia -- like Twitter -- uses artificial intelligence bots that can escalate suspicious content to human reviewers who monitor content.
  • Rasberry, who also wrote the Wikipedia page on the platform's fact-checking processes, said Wikipedia does not employ paid staff to monitor anything unless it involves "strange and unusual serious crimes like terrorism or real world violence, such as using Wikipedia to make threats, plan to commit suicide, or when Wikipedia itself is part of a crime."
  • Rasberry said flaws range from a geographical bias, which is related to challenges with communicating across languages; access to internet in lower and middle income countries; and barriers to freedom of journalism around the world.
  • "I've got many other editors that I'm working with who will back me, so when we encounter vandalism or trolls or misinformation or disinformation, editors are very quick to revert inappropriate edits or remove inappropriate content or poorly sourced content," Moore said.
  • While "edit wars" can happen on pages, Rasberry said this tends to occur more often over social issues rather than news.
  • Wikipedia also publicly displays who edits each version of an article via its history page, along with a "talk" page for each post that allows editors to openly discuss edits.
  • "If no reliable sources can be found on a topic, Wikipedia should not have an article on it," the page said.
  • "If it was a paid advertising site or if it had a different mission, I wouldn't waste my time."
mshilling1

Dominion Voting Systems Official Is In Hiding After Threats : NPR - 0 views

  • It's just the latest example of how people's lives are being upended and potentially ruined by the unprecedented flurry of disinformation this year.
  • As people experience their own individual Internet bubbles, it can be hard to recognize just how much misinformation exists and how the current information ecosystem compares with previous years.
  • NewsGuard, which vets news sources based on transparency and reliability standards, found recently that among the top 100 sources of news in the U.S., sources it deemed unreliable had four times as many interactions this year compared with 2019.
  • ...4 more annotations...
  • But election integrity advocates worry the disinformation won't truly begin to recede until political leaders such as Trump stop questioning the election's legitimacy.
  • Even in an election where almost all the voting was recorded on paper ballots and rigorous audits were done more than ever before, none of that helps if millions of people are working with an alternative set of facts,
  • Even if an election is run perfectly, it doesn't matter to a sizable portion of the public who believes it was unfair. No amount of transparency at the county and state level can really combat the sort of megaphone that Trump wields
  • "When we're in the realm of coupling disinformation from both foreign and domestic sources, and government and nongovernment sources, and none of it is really grounded in reality ... evidence doesn't help much,
Javier E

Technopoly-Chs. 9,10--Scientism, the great symbol drain - 0 views

  • By Scientism, I mean three interrelated ideas that, taken together, stand as one of the pillars of Technopoly.
  • The first and indispensable idea is, as noted, that the methods of the natural sciences can be applied to the study of human behavior. This idea is the backbone of much of psychology and sociology as practiced at least in America, and largely accounts for the fact that social science, to quote F. A. Hayek, "has contributed scarcely anything to our understanding of social phenomena." 2
  • The second idea is, as also noted, that social science generates specific principles which can be used to organize society on a rational and humane basis. This implies that technical means, mostly "invisible technologies" supervised by experts, can be designed to control human behavior and set it on the proper course.
  • ...63 more annotations...
  • The third idea is that faith in science can serve as a comprehensive belief system that gives meaning to life, as well as a sense of well-being, morality, and even immortality.
  • the spirit behind this scientific ideal inspired several men to believe that the reliable and predictable knowledge that could be obtained about stars and atoms could also be obtained about human behavior.
  • Among the best known of these early "social scientists" were Claude-Henri de Saint-Simon, Prosper Enfantin, and, of course, Auguste Comte.
  • They held in common two beliefs to which Technopoly is deeply indebted: that the natural sciences provide a method to unlock the secrets of both the human heart and the direction of social life; that society can be rationally and humanely reorganized according to principles that social science will uncover. It is with these men that the idea of "social engineering" begins and the seeds of Scientism are planted.
  • Information produced by counting may sometimes be valuable in helping a person get an idea, or, even more so, in providing support for an idea. But the mere activity of counting does not make science.
  • Nor does observing things, though it is sometimes said that if one is empirical, one is scientific. To be empirical means to look at things before drawing conclusions. Everyone, therefore, is an empiricist, with the possible exception of paranoid schizophrenics.
  • What we may call science, then, is the quest to find the immutable and universal laws that govern processes, presuming that there are cause-and-effect relations among these processes. It follows that the quest to understand human behavior and feeling can in no sense except the most trivial be called science.
  • Scientists do strive to be empirical and where possible precise, but it is also basic to their enterprise that they maintain a high degree of objectivity, which means that they study things independently of what people think or do about them.
  • I do not say, incidentally, that the Oedipus complex and God do not exist. Nor do I say that to believe in them is harmful-far from it. I say only that, there being no tests that could, in principle, show them to be false, they fall outside the purview of science, as do almost all theories that make up the content of "social science."
  • in the nineteenth century, novelists provided us with most of the powerful metaphors and images of our culture.
  • This fact relieves the scientist of inquiring into their values and motivations and for this reason alone separates science from what is called social science, consigning the methodology of the latter (to quote Gunnar Myrdal) to the status of the "metaphysical and pseudo-objective." 3
  • The status of social-science methods is further reduced by the fact that there are almost no experiments that will reveal a social-science theory to be false.
  • Let us further suppose that Milgram had found that 100 percent of his subjects did what they were told, with or without Hannah Arendt. And now let us suppose that I tell you a story of a group of people who in some real situation refused to comply with the orders of a legitimate authority-let us say, the Danes who in the face of Nazi occupation helped nine thousand Jews escape to Sweden. Would you say to me that this cannot be so because Milgram's study proves otherwise? Or would you say that this overturns Milgram's work? Perhaps you would say that the Danish response is not relevant, since the Danes did not regard the Nazi occupation as constituting legitimate authority. But then, how would we explain the cooperative response to Nazi authority of the French, the Poles, and the Lithuanians? I think you would say none of these things, because Milgram's experiment does not confirm or falsify any theory that might be said to postulate a law of human nature. His study-which, incidentally, I find both fascinating and terrifying-is not science. It is something else entirely.
  • Freud, could not imagine how the book could be judged exemplary: it was science or it was nothing. Well, of course, Freud was wrong. His work is exemplary-indeed, monumental-but scarcely anyone believes today that Freud was doing science, any more than educated people believe that Marx was doing science, or Max Weber or Lewis Mumford or Bruno Bettelheim or Carl Jung or Margaret Mead or Arnold Toynbee. What these people were doing-and Stanley Milgram was doing-is documenting the behavior and feelings of people as they confront problems posed by their culture.
  • the stories of social researchers are much closer in structure and purpose to what is called imaginative literature; that is to say, both a social researcher and a novelist give unique interpretations to a set of human events and support their interpretations with examples in various forms. Their interpretations cannot be proved or disproved but will draw their appeal from the power of their language, the depth of their explanations, the relevance of their examples, and the credibility of their themes.
  • And all of this has, in both cases, an identifiable moral purpose.
  • The words "true" and "false" do not apply here in the sense that they are used in mathematics or science. For there is nothing universally and irrevocably true or false about these interpretations. There are no critical tests to confirm or falsify them. There are no natural laws from which they are derived. They are bound by time, by situation, and above all by the cultural prejudices of the researcher or writer.
  • Both the novelist and the social researcher construct their stories by the use of archetypes and metaphors.
  • Cervantes, for example, gave us the enduring archetype of the incurable dreamer and idealist in Don Quixote. The social historian Marx gave us the archetype of the ruthless and conspiring, though nameless, capitalist. Flaubert gave us the repressed bourgeois romantic in Emma Bovary. And Margaret Mead gave us the carefree, guiltless Samoan adolescent. Kafka gave us the alienated urbanite driven to self-loathing. And Max Weber gave us hardworking men driven by a mythology he called the Protestant Ethic. Dostoevsky gave us the egomaniac redeemed by love and religious fervor. And B. F. Skinner gave us the automaton redeemed by a benign technology.
  • Why do such social researchers tell their stories? Essentially for didactic and moralistic purposes. These men and women tell their stories for the same reason the Buddha, Confucius, Hillel, and Jesus told their stories (and for the same reason D. H. Lawrence told his).
  • Moreover, in their quest for objectivity, scientists proceed on the assumption that the objects they study are indifferent to the fact that they are being studied.
  • If, indeed, the price of civilization is repressed sexuality, it was not Sigmund Freud who discovered it. If the consciousness of people is formed by their material circumstances, it was not Marx who discovered it. If the medium is the message, it was not McLuhan who discovered it. They have merely retold ancient stories in a modem style.
  • Unlike science, social research never discovers anything. It only rediscovers what people once were told and need to be told again.
  • Only in knowing something of the reasons why they advocated education can we make sense of the means they suggest. But to understand their reasons we must also understand the narratives that governed their view of the world. By narrative, I mean a story of human history that gives meaning to the past, explains the present, and provides guidance for the future.
  • In Technopoly, it is not enough to say, it is immoral and degrading to allow people to be homeless. You cannot get anywhere by asking a judge, a politician, or a bureaucrat to read Les Miserables or Nana or, indeed, the New Testament. You must show that statistics have produced data revealing the homeless to be unhappy and to be a drain on the economy. Neither Dostoevsky nor Freud, Dickens nor Weber, Twain nor Marx, is now a dispenser of legitimate knowledge. They are interesting; they are "worth reading"; they are artifacts of our past. But as for "truth," we must turn to "science."
  • In Technopoly, it is not enough for social research to rediscover ancient truths or to comment on and criticize the moral behavior of people. In Technopoly, it is an insult to call someone a "moralizer." Nor is it sufficient for social research to put forward metaphors, images, and ideas that can help people live with some measure of understanding and dignity.
  • Such a program lacks the aura of certain knowledge that only science can provide. It becomes necessary, then, to transform psychology, sociology, and anthropology into "sciences," in which humanity itself becomes an object, much like plants, planets, or ice cubes.
  • That is why the commonplaces that people fear death and that children who come from stable families valuing scholarship will do well in school must be announced as "discoveries" of scientific enterprise. In this way, social researchers can see themselves, and can be seen, as scientists, researchers without bias or values, unburdened by mere opinion. In this way, social policies can be claimed to rest on objectively determined facts.
  • given the psychological, social, and material benefits that attach to the label "scientist," it is not hard to see why social researchers should find it hard to give it up.
  • Our social "scientists" have from the beginning been less tender of conscience, or less rigorous in their views of science, or perhaps just more confused about the questions their procedures can answer and those they cannot. In any case, they have not been squeamish about imputing to their "discoveries" and the rigor of their procedures the power to direct us in how we ought rightly to behave.
  • It is less easy to see why the rest of us have so willingly, even eagerly, cooperated in perpetuating the same illusion.
  • When the new technologies and techniques and spirit of men like Galileo, Newton, and Bacon laid the foundations of natural science, they also discredited the authority of earlier accounts of the physical world, as found, for example, in the great tale of Genesis. By calling into question the truth of such accounts in one realm, science undermined the whole edifice of belief in sacred stories and ultimately swept away with it the source to which most humans had looked for moral authority. It is not too much to say, I think, that the desacralized world has been searching for an alternative source of moral authority ever since.
  • We welcome them gladly, and the claim explicitly made or implied, because we need so desperately to find some source outside the frail and shaky judgments of mortals like ourselves to authorize our moral decisions and behavior. And outside of the authority of brute force, which can scarcely be called moral, we seem to have little left but the authority of procedures.
  • It is not merely the misapplication of techniques such as quantification to questions where numbers have nothing to say; not merely the confusion of the material and social realms of human experience; not merely the claim of social researchers to be applying the aims and procedures of natural science to the human world.
  • This, then, is what I mean by Scientism.
  • It is the desperate hope, and wish, and ultimately the illusory belief that some standardized set of procedures called "science" can provide us with an unimpeachable source of moral authority, a suprahuman basis for answers to questions like "What is life, and when, and why?" "Why is death, and suffering?" "What is right and wrong to do?" "What are good and evil ends?" "How ought we to think and feel and behave?"
  • Science can tell us when a heart begins to beat, or movement begins, or what are the statistics on the survival of neonates of different gestational ages outside the womb. But science has no more authority than you do or I do to establish such criteria as the "true" definition of "life" or of human state or of personhood.
  • Social research can tell us how some people behave in the presence of what they believe to be legitimate authority. But it cannot tell us when authority is "legitimate" and when not, or how we must decide, or when it may be right or wrong to obey.
  • To ask of science, or expect of science, or accept unchallenged from science the answers to such questions is Scientism. And it is Technopoly's grand illusion.
  • In the institutional form it has taken in the United States, advertising is a symptom of a world-view that sees tradition as an obstacle to its claims. There can, of course, be no functioning sense of tradition without a measure of respect for symbols. Tradition is, in fact, nothing but the acknowledgment of the authority of symbols and the relevance of the narratives that gave birth to them. With the erosion of symbols there follows a loss of narrative, which is one of the most debilitating consequences of Technopoly's power.
  • What the advertiser needs to know is not what is right about the product but what is wrong about the buyer. And so the balance of business expenditures shifts from product research to market research, which means orienting business away from making products of value and toward making consumers feel valuable. The business of business becomes pseudo-therapy; the consumer, a patient reassured by psychodramas.
  • At the moment, it is considered necessary to introduce computers to the classroom, as it once was thought necessary to bring closed-circuit television and film to the classroom. To the question "Why should we do this?" the answer is: "To make learning more efficient and more interesting." Such an answer is considered entirely adequate, since in Technopoly efficiency and interest need no justification. It is, therefore, usually not noticed that this answer does not address the question "What is learning for?"
  • What this means is that somewhere near the core of Technopoly is a vast industry with license to use all available symbols to further the interests of commerce, by devouring the psyches of consumers.
  • In the twentieth century, such metaphors and images have come largely from the pens of social historians and researchers. Think of John Dewey, William James, Erik Erikson, Alfred Kinsey, Thorstein Veblen, Margaret Mead, Lewis Mumford, B. F. Skinner, Carl Rogers, Marshall McLuhan, Barbara Tuchman, Noam Chomsky, Robert Coles, even Stanley Milgram, and you must acknowledge that our ideas of what we are like and what kind of country we live in come from their stories to a far greater extent than from the stories of our most renowned novelists.
  • social idea that must be advanced through education.
  • Confucius advocated teaching "the Way" because in tradition he saw the best hope for social order. As our first systematic fascist, Plato wished education to produce philosopher kings. Cicero argued that education must free the student from the tyranny of the present. Jefferson thought the purpose of education is to teach the young how to protect their liberties. Rousseau wished education to free the young from the unnatural constraints of a wicked and arbitrary social order. And among John Dewey's aims was to help the student function without certainty in a world of constant change and puzzling ambiguities.
  • The point is that cultures must have narratives and will find them where they will, even if they lead to catastrophe. The alternative is to live without meaning, the ultimate negation of life itself.
  • It is also to the point to say that each narrative is given its form and its emotional texture through a cluster of symbols that call for respect and allegiance, even devotion.
  • by definition, there can be no education philosophy that does not address what learning is for. Confucius, Plato, Quintilian, Cicero, Comenius, Erasmus, Locke, Rousseau, Jefferson, Russell, Montessori, Whitehead, and Dewey--each believed that there was some transcendent political, spiritual, or
  • The importance of the American Constitution is largely in its function as a symbol of the story of our origins. It is our political equivalent of Genesis. To mock it, to ignore it, to circumvent it is to declare the irrelevance of the story of the United States as a moral light unto the world. In like fashion, the Statue of Liberty is the key symbol of the story of America as the natural home of the teeming masses, from anywhere, yearning to be free.
  • There are those who believe--as did the great historian Arnold Toynbee--that without a comprehensive religious narrative at its center a culture must decline. Perhaps. There are, after all, other sources--mythology, politics, philosophy, and science, for example--but it is certain that no culture can flourish without narratives of transcendent origin and power.
  • This does not mean that the mere existence of such a narrative ensures a culture's stability and strength. There are destructive narratives. A narrative provides meaning, not necessarily survival-as, for example, the story provided by Adolf Hitler to the German nation in the 1930s.
  • What story does American education wish to tell now? In a growing Technopoly, what do we believe education is for?
  • The answers are discouraging, and one of them can be inferred from any television commercial urging the young to stay in school. The commercial will either imply or state explicitly that education will help the persevering student to get a good job. And that's it. Well, not quite. There is also the idea that we educate ourselves to compete with the Japanese or the Germans in an economic struggle to be number one.
  • Young men, for example, will learn how to make lay-up shots when they play basketball. To be able to make them is part of the definition of what good players are. But they do not play basketball for that purpose. There is usually a broader, deeper, and more meaningful reason for wanting to play-to assert their manhood, to please their fathers, to be acceptable to their peers, even for the sheer aesthetic pleasure of the game itself. What you have to do to be a success must be addressed only after you have found a reason to be successful.
  • Bloom's solution is that we go back to the basics of Western thought.
  • He wants us to teach our students what Plato, Aristotle, Cicero, Saint Augustine, and other luminaries have had to say on the great ethical and epistemological questions. He believes that by acquainting themselves with great books our students will acquire a moral and intellectual foundation that will give meaning and texture to their lives.
  • Hirsch's encyclopedic list is not a solution but a description of the problem of information glut. It is therefore essentially incoherent. But it also confuses a consequence of education with a purpose. Hirsch attempted to answer the question "What is an educated person?" He left unanswered the question "What is an education for?"
  • Those who reject Bloom's idea have offered several arguments against it. The first is that such a purpose for education is elitist: the mass of students would not find the great story of
  • Western civilization inspiring, are too deeply alienated from the past to find it so, and would therefore have difficulty connecting the "best that has been thought and said" to their own struggles to find meaning in their lives.
  • A second argument, coming from what is called a "leftist" perspective, is even more discouraging. In a sense, it offers a definition of what is meant by elitism. It asserts that the "story of Western civilization" is a partial, biased, and even oppressive one. It is not the story of blacks, American Indians, Hispanics, women, homosexuals-of any people who are not white heterosexual males of Judeo-Christian heritage. This claim denies that there is or can be a national culture, a narrative of organizing power and inspiring symbols which all citizens can identify with and draw sustenance from. If this is true, it means nothing less than that our national symbols have been drained of their power to unite, and that education must become a tribal affair; that is, each subculture must find its own story and symbols, and use them as the moral basis of education.
  • Into this void comes the Technopoly story, with its emphasis on progress without limits, rights without responsibilities, and technology without cost. The Technopoly story is without a moral center. It puts in its place efficiency, interest, and economic advance. It promises heaven on earth through the conveniences of technological progress. It casts aside all traditional narratives and symbols that suggest stability and orderliness, and tells, instead, of a life of skills, technical expertise, and the ecstasy of consumption. Its purpose is to produce functionaries for an ongoing Technopoly.
  • It answers Bloom by saying that the story of Western civilization is irrelevant; it answers the political left by saying there is indeed a common culture whose name is Technopoly and whose key symbol is now the computer, toward which there must be neither irreverence nor blasphemy. It even answers Hirsch by saying that there are items on his list that, if thought about too deeply and taken too seriously, will interfere with the progress of technology.
Javier E

"Wikipedia Is Not Truth" - The Dish | By Andrew Sullivan - The Daily Beast - 0 views

  • 20 Feb 2012 12:30 PM "Wikipedia Is Not Truth" Timothy Messer-Kruse tried to update the Wiki page on the Haymarket riot of 1886 to correct a long-standing inaccurate claim. Even though he's written two books and numerous articles on the subject, his changes were instantly rejected: I had cited the documents that proved my point, including verbatim testimony from the trial published online by the Library of Congress. I also noted one of my own peer-reviewed articles. One of the people who had assumed the role of keeper of this bit of history for Wikipedia quoted the Web site's "undue weight" policy, which states that "articles should not give minority views as much or as detailed a description as more popular views."
  • "Explain to me, then, how a 'minority' source with facts on its side would ever appear against a wrong 'majority' one?" I asked the Wiki-gatekeeper. ...  Another editor cheerfully tutored me in what this means: "Wikipedia is not 'truth,' Wikipedia is 'verifiability' of reliable sources. Hence, if most secondary sources which are taken as reliable happen to repeat a flawed account or description of something, Wikipedia will echo that."
pier-paolo

Reasons for Reason - The New York Times - 0 views

  • How do we rationally defend our most fundamental epistemic principles? Like many of the best philosophical mysteries, this is a problem that can seem both unanswerable and yet extremely important to solve.
  • Any way you go, it seems you must admit you can give no reason for trusting your methods, and hence can give no reason to defend your most fundamental epistemic principles.
  • A legitimate challenge is presumably a rational challenge. Disagreements over epistemic principles are disagreements over which methods and sources to trust
  • ...7 more annotations...
  • That is, whether I can give reasons for them that can be appreciated from what Hume called a “common point of view” — reasons that can “move some universal principle of the human frame, and touch a string, to which all mankind have an accord and symphony.”
  • Democracies are, or should be, spaces of reasons.
  • we should take the project of defending our epistemic principles seriously is that the ideal of civility demands it.
  • We need to justify our epistemic principles from a common point of view because we need shared epistemic principles in order to even have a common point of view.
  • Without a common background of standards against which we measure what counts as a reliable source of information, or a reliable method of inquiry, and what doesn’t, we won’t be able to agree on the facts, let alone values.
  • But we can’t decide every issue that way, and we certainly can’t decide on our epistemic principles — which methods and sources are actually rationally worthy of trust — by voting
  • They are valuable because almost everyone can appeal to them. They have roots in our natural instincts, as Hume might have said. If so, then perhaps we can hope to give reasons for our epistemic principles. Such reasons will be “merely” practical, but reasons — reasons for reason, as it were — all the same.
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
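The callback gap in the Bertrand–Mullainathan study reduces to simple arithmetic: if résumés with black-sounding names draw callbacks at two-thirds the rate of otherwise-identical ones, reaching the same number of callbacks takes half again as many mailings. A quick sketch — the absolute 10 percent callback rate is an invented illustration, not a figure from the study; only the 1.5x ratio comes from the article:

```python
import math

def resumes_needed(target_callbacks, callback_rate):
    """How many resumes must be sent to expect a given number of callbacks."""
    return math.ceil(target_callbacks / callback_rate)

# Hypothetical rates: the 1.5x ratio matches the study's finding;
# the 10% baseline is invented for illustration.
white_rate = 0.10
black_rate = white_rate / 1.5

n_white = resumes_needed(10, white_rate)  # 100 resumes
n_black = resumes_needed(10, black_rate)  # 150 resumes: half again as many
```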
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
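The color-coded screening score described above can be sketched as a toy rule-based model. Every feature name, weight, and cutoff below is invented; the only details taken from the article are that moderate social-network use and a "creative but not overly inquisitive" personality scored well, and that prior experience carried no weight:

```python
def rate_candidate(profile):
    """Toy red/yellow/green screening score. All weights and
    thresholds are hypothetical; Xerox's actual model is proprietary."""
    score = 0.0
    # Participation in one to four social networks scores favorably.
    if 1 <= profile["social_networks"] <= 4:
        score += 1.0
    # Trait scores assumed normalized to the 0-1 range.
    score += 0.5 * profile["creativity"]
    score -= 0.5 * profile["inquisitiveness"]
    # Note: prior experience deliberately contributes nothing.
    if score >= 1.2:
        return "green"   # hire away
    elif score >= 0.6:
        return "yellow"  # middling
    return "red"         # poor candidate

rate_candidate({"social_networks": 2, "creativity": 0.9,
                "inquisitiveness": 0.2})  # → "green"
```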
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
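The finding that "too many is as much of a problem as too few" describes an inverted-U relationship between face-to-face exchanges and team performance. A minimal sketch, with the optimum and width invented as placeholders (Pentland's actual models are far richer):

```python
def predicted_performance(exchanges, optimum=50, width=30):
    """Toy inverted-U model: performance peaks at an optimal number
    of face-to-face exchanges and falls off on either side.
    The optimum (50) and width (30) are invented placeholders."""
    return max(0.0, 1.0 - ((exchanges - optimum) / width) ** 2)

peak = predicted_performance(50)      # 1.0 at the optimum
too_few = predicted_performance(5)    # low: not enough interaction
too_many = predicted_performance(95)  # low: too much interaction
```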
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
Javier E

When a Shitposter Runs a Social Media Platform - The Bulwark - 0 views

  • This is an unfortunate and pernicious pattern. Musk often refers to himself as moderate or independent, but he routinely treats far-right fringe figures as people worth taking seriously—and, more troublingly, as reliable sources of information.
  • By doing so, he boosts their messages: A message retweeted by or receiving a reply from Musk will potentially be seen by millions of people.
  • Also, people who pay for Musk’s Twitter Blue badges get a lift in the algorithm when they tweet or reply; because of the way Twitter Blue became a culture war front, its subscribers tend to skew to the right

  • ...19 more annotations...
  • The important thing to remember amid all this, and the thing that has changed the game when it comes to the free speech/content moderation conversation, is that Elon Musk himself loves conspiracy theories
  • The media isn’t just unduly critical—a perennial sore spot for Musk—but “all news is to some degree propaganda,” meaning he won’t label actual state-affiliated propaganda outlets on his platform to distinguish their stories from those of the New York Times.
  • In his mind, they’re engaged in the same activity, so he strikes the faux-populist note that the people can decide for themselves what is true, regardless of objectively very different track records from different sources.
  • Musk’s “just asking questions” maneuver is a classic Trump tactic that enables him to advertise conspiracy theories while maintaining a sort of deniability.
  • At what point should we infer that he’s taking the concerns of someone like Loomer seriously not despite but because of her unhinged beliefs?
  • Musk’s skepticism seems largely to extend to criticism of the far-right, while his credulity for right-wing sources is boundless.
  • Brandolini’s Law holds that the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
  • Refuting bullshit requires some technological literacy, perhaps some policy knowledge, but most of all it requires time and a willingness to challenge your own prior beliefs, two things that are in precious short supply online.
  • This is part of the argument for content moderation that limits the dispersal of bullshit: People simply don’t have the time, energy, or inclination to seek out the boring truth when stimulated by some online outrage.
  • Here we can return to the example of Loomer’s tweet. People did fact-check her, but it hardly matters: Following Musk’s reply, she ended up receiving over 5 million views, an exponentially larger online readership than is normal for her. In the attention economy, this counts as a major win. “Thank you so much for posting about this, @elonmusk!” she gushed in response to his reply. “I truly appreciate it.”
  • the problem isn’t limited to elevating Loomer. Musk had his own stock of misinformation to add to the pile. After interacting with her account, Musk followed up last Tuesday by tweeting out a 2021 Federalist article claiming that Facebook founder Mark Zuckerberg had “bought” the 2020 election, an allegation previously raised by Trump and others, and which Musk had also brought up during his recent interview with Tucker Carlson.
  • If Zuckerberg wanted to use his vast fortune to tip the election, it would have been vastly more efficient to create a super PAC with targeted get-out-the-vote operations and advertising. Notwithstanding legitimate criticisms one can make about Facebook’s effect on democracy, and whatever Zuckerberg’s motivations, you have to squint hard to see this as something other than a positive act addressing a real problem.
  • It’s worth mentioning that the refutations I’ve just sketched of the conspiratorial claims made by Loomer and Musk come out to around 1,200 words. The tweets they wrote, read by millions, consisted of fewer than a hundred words in total. That’s Brandolini’s Law in action—an illustration of why Musk’s cynical free-speech-over-all approach amounts to a policy in favor of disinformation and against democracy.
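Brandolini's Law can be read as a rough heuristic: refutation costs about an order of magnitude more effort than the bullshit took to produce. The word counts above are consistent with it: under a hundred words of tweets demanded roughly 1,200 words of rebuttal. A throwaway sketch, where the factor of 10 is the law's own order-of-magnitude claim rather than a measured constant:

```python
def refutation_cost(bs_words, brandolini_factor=10):
    """Estimated effort, in words, to refute a given amount of bullshit,
    per Brandolini's order-of-magnitude heuristic (factor is not measured)."""
    return bs_words * brandolini_factor

# The article's example: <100 words of tweets vs. ~1,200 words of rebuttal.
tweet_words = 100
rebuttal_words = 1200
assert rebuttal_words >= refutation_cost(tweet_words)
```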
  • Moderation is a subject where Zuckerberg’s actions provide a valuable point of contrast with Musk. Through Facebook’s independent oversight board, which has the power to overturn the company’s own moderation decisions, Zuckerberg has at least made an effort to have credible outside actors inform how Facebook deals with moderation issues
  • Meanwhile, we are still waiting on the content moderation council that Elon Musk promised last October:
  • The problem is about to get bigger than unhinged conspiracy theorists occasionally receiving a profile-elevating reply from Musk. Twitter is the venue that Tucker Carlson, whom advertisers fled and Fox News fired after it agreed to pay $787 million to settle a lawsuit over its election lies, has chosen to make his comeback. Carlson and Musk are natural allies: They share an obsessive anti-wokeness, a conspiratorial mindset, and an unaccountable sense of grievance peculiar to rich, famous, and powerful men who have taken it upon themselves to rail against the “elites,” however idiosyncratically construed
  • If the rumors are true that Trump is planning to return to Twitter after an exclusivity agreement with Truth Social expires in June, Musk’s social platform might be on the verge of becoming a gigantic rec room for the populist right.
  • These days, Twitter increasingly feels like a neighborhood where the amiable guy-next-door is gone and you suspect his replacement has a meth lab in the basement.
  • even if Twitter’s increasingly broken information environment doesn’t sway the results, it is profoundly damaging to our democracy that so many people have lost faith in our electoral system. The sort of claims that Musk is toying with in his feed these days do not help. It is one thing for the owner of a major source of information to be indifferent to the content that gets posted to that platform. It is vastly worse for an owner to actively fan the flames of disinformation and doubt.
oliviaodon

What is George Orwell's 1984 about, why have sales soared since Trump adviser Kellyanne... - 0 views

  • GEORGE Orwell’s dystopian novel 1984 has had “doublegood” sales this week after one of Trump’s advisers used the phrase “alternative facts” in an interview.
  • Orwell's novel 1984 is a bleak portrayal of Great Britain re-imagined as a dystopian superstate governed by a dictatorial regime.
  • Many concepts of the novel have crossed over to popular culture or have entered common use in everyday life - the repressive regime is overseen by Big Brother, and the government's invented language "newspeak" was designed to limit freedom of thought. The term "doublethink" - where a person can accept two contradicting beliefs as both being correct - first emerged in the dystopian landscape of Airstrip One.
  • ...2 more annotations...
  • The public started drawing comparisons between the Inner Party's regime and Trump's presidency when his adviser used the phrase "alternative facts" in an interview. Kellyanne Conway was being quizzed after the White House press secretary Sean Spicer apparently lied about the number of people who attended Trump's inauguration. The presenter asked why President Trump had asked Spicer to come out to speak to the press and "utter a falsehood". Conway responded that Spicer didn't utter a falsehood but gave "alternative facts". People drew comparisons with "newspeak" which was aimed at wiping out original thought. Her choice of language was also accused of representing "doublethink" - which Orwell wrote "means the power of holding two contradictory beliefs in one's mind simultaneously." Washington Post reporter Karen Tumulty said: "Alternative facts is a George Orwell phrase".
  • Sales of 1984 also soared in 2013 when news broke of the National Security Agency's Prism surveillance scandal.
  • *note: the Sun is not a reliable source, but I thought this was an interesting read nonetheless
Javier E

After the Fact - The New Yorker - 1 views

  • newish is the rhetoric of unreality, the insistence, chiefly by Democrats, that some politicians are incapable of perceiving the truth because they have an epistemological deficit: they no longer believe in evidence, or even in objective reality.
  • the past of proof is strange and, on its uncertain future, much in public life turns. In the end, it comes down to this: the history of truth is cockamamie, and lately it’s been getting cockamamier.
  • Michael P. Lynch is a philosopher of truth. His fascinating new book, “The Internet of Us: Knowing More and Understanding Less in the Age of Big Data,” begins with a thought experiment: “Imagine a society where smartphones are miniaturized and hooked directly into a person’s brain.” As thought experiments go, this one isn’t much of a stretch. (“Eventually, you’ll have an implant,” Google’s Larry Page has promised, “where if you think about a fact it will just tell you the answer.”) Now imagine that, after living with these implants for generations, people grow to rely on them, to know what they know and forget how people used to learn—by observation, inquiry, and reason. Then picture this: overnight, an environmental disaster destroys so much of the planet’s electronic-communications grid that everyone’s implant crashes. It would be, Lynch says, as if the whole world had suddenly gone blind. There would be no immediate basis on which to establish the truth of a fact. No one would really know anything anymore, because no one would know how to know. I Google, therefore I am not.
  • ...20 more annotations...
  • In England, the abolition of trial by ordeal led to the adoption of trial by jury for criminal cases. This required a new doctrine of evidence and a new method of inquiry, and led to what the historian Barbara Shapiro has called “the culture of fact”: the idea that an observed or witnessed act or thing—the substance, the matter, of fact—is the basis of truth and the only kind of evidence that’s admissible not only in court but also in other realms where truth is arbitrated. Between the thirteenth century and the nineteenth, the fact spread from law outward to science, history, and journalism.
  • Lynch isn’t terribly interested in how we got here. He begins at the arrival gate. But altering the flight plan would seem to require going back to the gate of departure.
  • Lynch thinks we are frighteningly close to this point: blind to proof, no longer able to know. After all, we’re already no longer able to agree about how to know. (See: climate change, above.)
  • Empiricists believed they had deduced a method by which they could discover a universe of truth: impartial, verifiable knowledge. But the movement of judgment from God to man wreaked epistemological havoc.
  • For the length of the eighteenth century and much of the nineteenth, truth seemed more knowable, but after that it got murkier. Somewhere in the middle of the twentieth century, fundamentalism and postmodernism, the religious right and the academic left, met up: either the only truth is the truth of the divine or there is no truth; for both, empiricism is an error.
  • That epistemological havoc has never ended: much of contemporary discourse and pretty much all of American politics is a dispute over evidence. An American Presidential debate has a lot more in common with trial by combat than with trial by jury,
  • came the Internet. The era of the fact is coming to an end: the place once held by “facts” is being taken over by “data.” This is making for more epistemological mayhem, not least because the collection and weighing of facts require investigation, discernment, and judgment, while the collection and analysis of data are outsourced to machines
  • “Most knowing now is Google-knowing—knowledge acquired online,”
  • We now only rarely discover facts, Lynch observes; instead, we download them.
  • “The Internet didn’t create this problem, but it is exaggerating it,”
  • nothing could be less well settled in the twenty-first century than whether people know what they know from faith or from facts, or whether anything, in the end, can really be said to be fully proved.
  • In his 2012 book, “In Praise of Reason,” Lynch identified three sources of skepticism about reason: the suspicion that all reasoning is rationalization, the idea that science is just another faith, and the notion that objectivity is an illusion. These ideas have a specific intellectual history, and none of them are on the wane.
  • Their consequences, he believes, are dire: “Without a common background of standards against which we measure what counts as a reliable source of information, or a reliable method of inquiry, and what doesn’t, we won’t be able to agree on the facts, let alone values.
  • When we Google-know, Lynch argues, we no longer take responsibility for our own beliefs, and we lack the capacity to see how bits of facts fit into a larger whole
  • Essentially, we forfeit our reason and, in a republic, our citizenship. You can see how this works every time you try to get to the bottom of a story by reading the news on your smartphone.
  • what you see when you Google “Polish workers” is a function of, among other things, your language, your location, and your personal Web history. Reason can’t defend itself. Neither can Google.
  • Trump doesn't reason. He's a lot like that kid who stole my bat. He wants combat. Cruz's appeal is to the judgment of God. "Father God, please . . . awaken the body of Christ, that we might pull back from the abyss," he preached on the campaign trail. Rubio's appeal is to Google.
  • Is there another appeal? People who care about civil society have two choices: find some epistemic principles other than empiricism on which everyone can agree or else find some method other than reason with which to defend empiricism
  • Lynch suspects that doing the first of these things is not possible, but that the second might be. He thinks the best defense of reason is a common practical and ethical commitment.
  • That, anyway, is what Alexander Hamilton meant in the Federalist Papers, when he explained that the United States is an act of empirical inquiry: “It seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.”
sissij

Unsealed Documents Raise Questions on Monsanto Weed Killer - The New York Times - 0 views

  • The court documents included Monsanto’s internal emails and email traffic between the company and federal regulators. The records suggested that Monsanto had ghostwritten research that was later attributed to academics and indicated that a senior official at the Environmental Protection Agency had worked to quash a review of Roundup’s main ingredient, glyphosate, that was to have been conducted by the United States Department of Health and Human Services.
  • Monsanto also rebutted suggestions that the disclosures highlighted concerns that the academic research it underwrites is compromised.
  • In a statement, Monsanto said, “Glyphosate is not a carcinogen.”
  • The safety of glyphosate is not settled science.
  • they could ghostwrite research on glyphosate by hiring academics to put their names on papers that were actually written by Monsanto.
  • The issue of glyphosate’s safety is not a trivial one for Americans. Over the last two decades, Monsanto has genetically re-engineered corn, soybeans and cotton so it is much easier to spray them with the weed killer, and some 220 million pounds of glyphosate were used in 2015 in the United States.
  •  
    This news shows that there are many cases where companies use science as a shield to convince people that their products are safe and good. Honesty in scientific papers has always been an important issue when we talk about their reliability. As we discussed in TOK, science is more like a social project that involves a lot of people, and all human works are more or less biased and subjective. Now science is intertwined with profit and economics, so the issue becomes much more complicated. I think we should identify the source of a paper before citing any word from it, because who writes the paper is a big factor in which side the paper stands on. --Sissi (3/14/2017)
Javier E

A Push to Redefine Knowledge at Wikipedia - NYTimes.com - 0 views

  • lately Wikipedia has been criticized from without and within for reflecting a Western, male-dominated mindset similar to the perspective behind the encyclopedias it has replaced.
  • If Wikipedia purports to collect the “sum of all human knowledge,” in the words of one of its founders, Jimmy Wales, that, by definition, means more than printed knowledge
  • the article would have been deleted from English Wikipedia if it didn’t have any sources to cite. Those are the rules of the game, and those are the rules he would like to change, or at least bend, or, if all else fails, work around.
  • There are whole cultures, he said, that have little to no printed material to cite as proof about the way life is lived.
  • he and the video’s directors, Priya Sen and Zen Marie, spoke with people in African and Indian villages either in person or over the phone and had them describe basic activities. These recordings were then uploaded and linked to the article as sources, and suddenly an article that seems like it could be a personal riff looks a bit more academic.
  • After a series of hoaxes, culminating in a Wikipedia article in 2005 that maligned the newspaper editor John Seigenthaler for no discernible reason other than because a Wikipedia contributor could, the site tried to ensure that every statement could be traced to a source.
  • Then there is the rule "no original research," which was meant to say that Wikipedia doesn't care if you are writing about the subway station you visit every day: find someone who has written reliably on the color of the walls there.
  • Perhaps Mr. Prabhala’s most challenging argument is that by being text-focused, and being locked into the Encyclopedia Britannica model, Wikipedia risks being behind the times.
  • An 18-year-old is comfortable using “objects of trust that have been created on the Internet,” he said, and “Wikipedia isn’t taking advantage of that.”
Javier E

Roots of Memory Aren't Fully Developed Until Adulthood - NYTimes.com - 1 views

  • Although memory performance generally improved with age, the ability to trace the source of a memory — evaluated by the second test — was particularly weak in children. Adolescents and adults performed equally well, but with a significant difference.
  • The participants wore electroencephalogram caps that measured their neural activity. Only adults showed a sophisticated pattern of activity when they were retrieving source memory information
  • when children and adolescents are asked to testify, the reliability of their source memory — for example, recalling the first time a certain person was encountered, and where — should be carefully questioned.
  •  
    That's insane!! Now, when the author says "the ability to remember the origin of memories," is he referring to the actual experience when the information was received and the context it was in, or just the manner in which the information was received, like the WOK? I'm still not clear on that.
Javier E

The View from Nowhere: Questions and Answers » Pressthink - 2 views

  • In pro journalism, American style, the View from Nowhere is a bid for trust that advertises the viewlessness of the news producer. Frequently it places the journalist between polarized extremes, and calls that neither-nor position “impartial.” Second, it’s a means of defense against a style of criticism that is fully anticipated: charges of bias originating in partisan politics and the two-party system. Third: it’s an attempt to secure a kind of universal legitimacy that is implicitly denied to those who stake out positions or betray a point of view. American journalists have almost a lust for the View from Nowhere because they think it has more authority than any other possible stance.
  • Who gets credit for the phrase, “view from nowhere?” # A. The philosopher Thomas Nagel, who wrote a very important book with that title.
  • Q. What does it say? # A. It says that human beings are, in fact, capable of stepping back from their position to gain an enlarged understanding, which includes the more limited view they had before the step back. Think of the cinema: when the camera pulls back to reveal where a character had been standing and shows us a fuller tableau. To Nagel, objectivity is that kind of motion. We try to “transcend our particular viewpoint and develop an expanded consciousness that takes in the world more fully.” #
  • But there are limits to this motion. We can’t transcend all our starting points. No matter how far it pulls back the camera is still occupying a position. We can’t actually take the “view from nowhere,” but this doesn’t mean that objectivity is a lie or an illusion. Our ability to step back and the fact that there are limits to it– both are real. And realism demands that we acknowledge both.
  • Q. So is objectivity a myth… or not? # A. One of the many interesting things Nagel says in that book is that “objectivity is both underrated and overrated, sometimes by the same persons.” It’s underrated by those who scoff at it as a myth. It is overrated by people who think it can replace the view from somewhere or transcend the human subject. It can’t.
  • When MSNBC suspends Keith Olbermann for donating without company permission to candidates he supports– that’s dumb. When NPR forbids its “news analysts” from expressing a view on matters they are empowered to analyze– that’s dumb. When reporters have to “launder” their views by putting them in the mouths of think tank experts: dumb. When editors at the Washington Post decline even to investigate whether the size of rallies on the Mall can be reliably estimated because they want to avoid charges of “leaning one way or the other,” as one of them recently put it, that is dumb. When CNN thinks that, because it’s not MSNBC and it’s not Fox, it’s the only “real news network” on cable, CNN is being dumb about itself.
  • Let some in the press continue on with the mask of impartiality, which has advantages for cultivating sources and soothing advertisers. Let others experiment with transparency as the basis for trust. When you click on their by-line it takes you to a disclosure page where there is a bio, a kind of mission statement, and a creative attempt to say: here’s where I’m coming from (one example) along with campaign contributions, any affiliations or memberships, and–I’m just speculating now–a list of heroes and villains, or major influences, along with an archive of the work, plus anything else that might assist the user in placing this person on the user’s mattering map.
  • if objectivity means trying to ground truth claims in verifiable facts, I am definitely for that. If it means there’s a “hard” reality out there that exists beyond any of our descriptions of it, sign me up. If objectivity is the requirement to acknowledge what is, regardless of whether we want it to be that way, then I want journalists who can be objective in that sense.
  • If it means trying to see things in that fuller perspective Thomas Nagel talked about–pulling the camera back, revealing our previous position as only one of many–I second the motion. If it means the struggle to get beyond the limited perspective that our experience and upbringing afford us… yeah, we need more of that, not less. I think there is value in acts of description that do not attempt to say whether the thing described is good or bad
  • I think we are in the midst of shift in the system by which trust is sustained in professional journalism. David Weinberger tried to capture it with his phrase: transparency is the new objectivity. My version of that: it’s easier to trust in “here’s where I’m coming from” than the View from Nowhere. These are two different ways of bidding for the confidence of the users.
  • In the newer way, the logic is different. “Look, I’m not going to pretend that I have no view. Instead, I am going to level with you about where I’m coming from on this. So factor that in when you evaluate my report. Because I’ve done the work and this is what I’ve concluded…”
  • it has unearned authority in the American press. If in doing the serious work of journalism–digging, reporting, verification, mastering a beat–you develop a view, expressing that view does not diminish your authority. It may even add to it. The View from Nowhere doesn’t know from this. It also encourages journalists to develop bad habits. Like: criticism from both sides is a sign that you’re doing something right, when you could be doing everything wrong.
huffem4

For years, newspaper design helped us identify fake news. Not anymore | Prospect Magazine - 2 views

  • The evidence suggests that despite all the controversies of the last few years, we are becoming more, not less trusting of our news feeds
  • Many of the shorthand pieces of visual language which help us distinguish print publications—font choices, paper size, image choice and colour schemes—are not present or less prominent online.
  • increasing numbers of people over the long term are coming to trust social media as a source of news
  • fake news is arguably less dangerous than ultra-partisan sites, which tend to rely on confirming prejudices, spinning stories well beyond the bounds of normal journalistic practice, and wearing partisan leanings on their sleeves
  • Social media is the platform where the war against misleading news is being fought—and it should be the primary focus for these changes, with increased prominence given to trusted outlets, and toning down or turning off the ability for untrustworthy sources to be liked or shared.
kaylynfreeman

Why We Already Have False Memories of the COVID-19 Crisis | Psychology Today - 1 views

  • New research sheds light on spotting false memories of emotional events.
  • Have you heard people 'remember' things about the COVID-19 crisis that you know can't be right?
  • Or maybe you have a friend who keeps merging memories of social distancing guidelines?
  • Research has repeatedly shown that false memories can feel incredibly real and can be multi-sensory: In some, we can hear, feel, or see things, just like in real memories. Indeed, to be considered a false memory it can't be a lie, it needs to be part of our reality.
  • I found that if someone has a false memory, we probably can't tell just by looking at the memory. What may be more surprising is that people also seem to be no better than chance at identifying true memories.
  • Source confusion. What we learn from multiple sources about the same topic can very quickly become confused.
  • In both studies, participants were no better than chance at spotting whether a memory was true or false
  • It seems people were actively relying on bad cues when listening to the memories, and somehow these cues made them worse than had they tossed a coin.
  • Participants watched a video of a person recounting a true emotional memory, and the same person recounting a false memory. They were told: "All, some, or none of the videos you are about to watch involve memories of real accounts. Your task is to identify after each video whether you think the account described actually happened or not."
  • What this means is that if you already have false memories of what you have done, heard, or seen during the COVID-19 pandemic, you probably can't spot them.
  • This can lead to creating false memories based on source confusion, which is when we misattribute where you learned something.
  • During these unusual times you may want to consider implementing additional safeguards to keep your memories safe.
  • Co-witness contamination. We are all witnesses of this world event, witnesses who are talking to each other all the time. If the COVID-19 pandemic were a crime scene, this would be really bad news.
  • Sameness. Every day we hear unprecedented news or horrific medical stories. But after weeks or months of the same type of information, with a reduction in the amount of new and exciting things happening elsewhere, it gets difficult to separate this long stream of information into meaningful bits.
  • Too late? Already think you or someone else has false memories? Look for independent evidence. You can check the news to see when you actually went into lockdown, or whether you tweeted something about how you felt, or maybe ask a friend what you said to them. That can help establish whether you have a false memory or not.
  • If you want reliable memories of this highly emotional time you need to keep a journal.
  • Assume that those feelings, ideas, fears, beliefs, and experiences will be forgotten. People are generally bad at remembering these details later on.
  • Fake news. Some of the content we see online will be false or misleading.
  • New research now takes these findings even further, showing that false memories also look real to other people.
  • Both your memory of the news, and your memories of emotional events that are happening in your life are possibly being changed or contaminated.
  • Prevention and evidence are what you need, because it's likely that once you have a false memory, neither you nor anyone else will be able to spot it.