
Javier E

Lawsuit Against Fox Is Shaping Up to Be a Major First Amendment Case - The New York Times - 0 views

  • The case threatens a huge financial and reputational blow to Fox, by far the most powerful conservative media company in the country. But legal scholars say it also has the potential to deliver a powerful verdict on the kind of pervasive and pernicious falsehoods — and the people who spread them — that are undermining the country’s faith in democracy.
  • “We’re litigating history in a way: What is historical truth?” said Lee Levine, a noted First Amendment lawyer.
  • “Here you’re taking very recent current events and going through a process which, at the end, is potentially going to declare what the correct version of history is.”
  • The case has caused palpable unease at the Fox News Channel, said several people there.
  • Dominion is trying to build a case that aims straight at the top of the Fox media empire and the Murdochs. In court filings and depositions, Dominion lawyers have laid out how they plan to show that senior Fox executives hatched a plan after the election to lure back viewers who had switched to rival hard-right networks, which were initially more sympathetic than Fox was to Mr. Trump’s voter-fraud claims.
  • Libel law doesn’t protect lies. But it does leave room for the media to cover newsworthy figures who tell them. And Fox is arguing, in part, that’s what shields it from liability.
  • A spokesman for Dominion declined to comment. In its initial complaint, the company’s lawyers wrote that “The truth matters,” adding, “Lies have consequences.”
  • For Dominion to convince a jury that Fox should be held liable for defamation and pay damages, it has to clear an extremely high legal bar known as the “actual malice” standard. Dominion must show either that people inside Fox knew that what hosts and guests were saying about the election technology company was false, or that they effectively ignored information proving that the statements in question were wrong — which is known in legal terms as displaying a reckless disregard for the truth.
  • Dominion’s lawyers have focused some of their questioning in depositions on the decision-making hierarchy at Fox News, according to one person with direct knowledge of the case, showing a particular interest in what happened on election night inside the network in the hours after it projected Mr. Trump would lose Arizona.
  • Fox has also been searching for evidence that could, in effect, prove the Dominion conspiracy theories weren’t really conspiracy theories. Behind the scenes, Fox’s lawyers have pursued documents that would support numerous unfounded claims about Dominion, including its supposed connections to Hugo Chávez, the Venezuelan dictator who died in 2013, and software features that were ostensibly designed to make vote manipulation easier.
  • In one interview, Mr. Giuliani falsely claimed that Dominion was owned by a Venezuelan company with close ties to Mr. Chávez, and that it was formed “to fix elections.” (Dominion was founded in Canada in 2002 by a man who wanted to make it easier for blind people to vote.)
  • “The harm to Dominion from the lies told by Fox is unprecedented and irreparable because of how fervently millions of people believed them — and continue to believe them,”
  • The hurdle Dominion must clear is whether it can persuade a jury to believe that people at Fox knew they were spreading lies. “Disseminating ‘The Big Lie’ isn’t enough,” said RonNell Andersen Jones, a law professor and First Amendment scholar at the University of Utah’s S.J. Quinney College of Law. “It has to be a knowing lie.”
Javier E

An Unholy Alliance Between Ye, Musk, and Trump - The Atlantic - 0 views

  • Musk, Trump, and Ye are after something different: They are all obsessed with setting the rules of public spaces.
  • An understandable consensus began to form on the political left that large social networks, but especially Facebook, helped Trump rise to power. The reasons were multifaceted: algorithms that gave a natural advantage to the most shameless users, helpful marketing tools that the campaign made good use of, a confusing tangle of foreign interference (the efficacy of which has always been tough to suss out), and a basic attentional architecture that helps polarize and pit Americans against one another (no foreign help required).
  • The misinformation industrial complex—a loosely knit network of researchers, academics, journalists, and even government entities—coalesced around this moment. Different phases of the backlash homed in on bots, content moderation, and, after the Cambridge Analytica scandal, data privacy
  • the broad theme was clear: Social-media platforms are the main communication tools of the 21st century, and they matter.
  • With Trump at the center, the techlash morphed into a culture war with a clear partisan split. One could frame the position from the left as: We do not want these platforms to give a natural advantage to the most shameless and awful people who stoke resentment and fear to gain power
  • On the right, it might sound more like: We must preserve the power of the platforms to let outsiders have a natural advantage (by stoking fear and resentment to gain power).
  • the political world realized that platforms and content-recommendation engines decide which cultural objects get amplified. The left found this troubling, whereas the right found it to be an exciting prospect and something to leverage, exploit, and manipulate via the courts
  • Crucially, both camps resent the power of the technology platforms and believe the companies have a negative influence on our discourse and politics by either censoring too much or not doing enough to protect users and our political discourse.
  • one outcome of the techlash has been an incredibly facile public understanding of content moderation and a whole lot of culture warring.
  • Musk and Ye aren’t so much buying into the right’s overly simplistic Big Tech culture war as they are hijacking it for their own purposes; Trump, meanwhile, is mostly just mad
  • Each one casts himself as an antidote to a heavy-handed, censorious social-media apparatus that is either captured by progressive ideology or merely pressured into submission by it. But none of them has any understanding of thorny First Amendment or content-moderation issues.
  • They embrace a shallow posture of free-speech maximalism—the very kind that some social-media-platform founders first espoused, before watching their sites become overrun with harassment, spam, and other hateful garbage that drives away both users and advertisers
  • for those who can hit the mark without getting banned, social media is a force multiplier for cultural and political relevance and a way around gatekeeping media.
  • Musk, Ye, and Trump rely on their ability to pick up their phones, go direct, and say whatever they want
  • the moment they butt up against rules or consequences, they begin to howl about persecution and unfair treatment. The idea of being treated similarly to the rest of a platform’s user base is so galling to these men that they declare the entire system to be broken.
  • they also demonstrate how being the Main Character of popular and political culture can totally warp perspective. They’re so blinded by their own outlying experiences across social media that, in most cases, they hardly know what it is they’re buying
  • These are projects motivated entirely by grievance and conflict. And so they are destined to amplify grievance and conflict
Javier E

Elon Musk Is Not Playing Four-Dimensional Chess - 0 views

  • Musk is not wrong that Twitter is chock-full of noise and garbage, but the most pernicious stuff comes from real people and a media ecosystem that amplifies and rewards incendiary bullshit
  • This dynamic is far more of a problem for Twitter (but also the news media and the internet in general) than shadowy bot farms are. But it’s also a dilemma without much of a concrete solution
  • Were Musk actually curious or concerned with the health of the online public discourse, he might care about the ways that social media platforms like Twitter incentivize this behavior and create an information economy where our sense of proportion on a topic can be so easily warped. But Musk isn’t interested in this stuff, in part because he is a huge beneficiary of our broken information environment and can use it to his advantage to remain constantly in the spotlight.
  • Musk’s concern with bots isn’t only a bullshit tactic he’s using to snake out of a bad business deal and/or get a better price for Twitter; it’s also a great example of his shallow thinking. The man has at least some ability to oversee complex engineering systems that land rockets, but his narcissism affords him a two-dimensional understanding of the way information travels across social media.
  • He is drawn to the conspiratorial nature of bots and information manipulation, because it is a more exciting and easier-to-understand solution to more complex or uncomfortable problems. Instead of facing the reality that many people dislike him as a result of his personality, behavior, politics, or shitty management style, he blames bots. Rather than try to understand the gnarly mechanics and hard-to-solve problems of democratized speech, he sorts them into overly simplified boxes like censorship and spam and then casts himself as the crusading hero who can fix it all. But he can’t and won’t, because he doesn’t care enough to find the answers.
  • Musk isn’t playing chess or even checkers. He’s just the richest man in the world, bored, mad, and posting like your great-uncle.
Javier E

Opinion | How Behavioral Economics Took Over America - The New York Times - 0 views

  • Some behavioral interventions do seem to lead to positive changes, such as automatically enrolling children in school free lunch programs or simplifying mortgage information for aspiring homeowners. (Whether one might call such interventions “nudges,” however, is debatable.)
  • it’s not clear we need to appeal to psychology studies to make some common-sense changes, especially since the scientific rigor of these studies is shaky at best.
  • Nudges are related to a larger area of research on “priming,” which tests how behavior changes in response to what we think about or even see without noticing
  • Behavioral economics is at the center of the so-called replication crisis, a euphemism for the uncomfortable fact that the results of a significant percentage of social science experiments can’t be reproduced in subsequent trials
  • this key result was not replicated in similar experiments, undermining confidence in a whole area of study. It’s obvious that we do associate old age and slower walking, and we probably do slow down sometimes when thinking about older people. It’s just not clear that that’s a law of the mind.
  • And these attempts to “correct” human behavior are based on tenuous science. The replication crisis doesn’t have a simple solution
  • Journals have instituted reforms like having scientists preregister their hypotheses to avoid the possibility of results being manipulated during the research. But that doesn’t change how many uncertain results are already out there, with a knock-on effect that ripples through huge segments of quantitative social science.
  • The Johns Hopkins science historian Ruth Leys, author of a forthcoming book on priming research, points out that cognitive science is especially prone to building future studies off disputed results. Despite the replication crisis, these fields are a “train on wheels, the track is laid and almost nothing stops them,” Dr. Leys said.
  • These cases result from lax standards around data collection, which will hopefully be corrected. But they also result from strong financial incentives: the possibility of salaries, book deals and speaking and consulting fees that range into the millions. Researchers can get those prizes only if they can show “significant” findings.
  • It is no coincidence that behavioral economics, from Dr. Kahneman to today, tends to be pro-business. Science should be not just reproducible, but also free of obvious ideology.
  • Technology and modern data science have only further entrenched behavioral economics. Its findings have greatly influenced algorithm design.
  • The collection of personal data about our movements, purchases and preferences informs interventions in our behavior from the grocery store to who is arrested by the police.
  • Setting people up for safety and success and providing good default options isn’t bad in itself, but there are more sinister uses as well. After all, not everyone who wants to exploit your cognitive biases has your best interests at heart.
  • Despite all its flaws, behavioral economics continues to drive public policy, market research and the design of digital interfaces.
  • One might think that a kind of moratorium on applying such dubious science would be in order — except that enacting one would be practically impossible. These ideas are so embedded in our institutions and everyday life that a full-scale audit of the behavioral sciences would require bringing much of our society to a standstill.
  • There is no peer review for algorithms that determine entry to a stadium or access to credit. To perform even the most banal, everyday actions, you have to put implicit trust in unverified scientific results.
  • We can’t afford to defer questions about human nature, and the social and political policies that come from them, to commercialized “research” that is scientifically questionable and driven by ideology. Behavioral economics claims that humans aren’t rational.
  • That’s a philosophical claim, not a scientific one, and it should be fought out in a rigorous marketplace of ideas. Instead of unearthing real, valuable knowledge of human nature, behavioral economics gives us “one weird trick” to lose weight or quit smoking.
  • Humans may not be perfectly rational, but we can do better than the predictably irrational consequences that behavioral economics has left us with today.
Javier E

The Constitution of Knowledge - Persuasion - 0 views

  • When Americans think about how we find truth amid a world full of discordant viewpoints, we usually turn to a metaphor, that of the marketplace of ideas
  • It is a good metaphor as far as it goes, yet woefully incomplete. It conjures up an image of ideas being traded by individuals in a kind of flea market, or of disembodied ideas clashing and competing in some ethereal realm of their own
  • But ideas in the marketplace do not talk directly to each other, and for the most part neither do individuals.
  • Rather, our conversations are mediated through institutions like journals and newspapers and social-media platforms. They rely on a dense network of norms and rules, like truthfulness and fact-checking. They depend on the expertise of professionals, like peer reviewers and editors. The entire system rests on a foundation of values: a shared understanding that there are right and wrong ways to make knowledge.
  • Those values and rules and institutions do for knowledge what the U.S. Constitution does for politics: They create a governing structure, forcing social contestation onto peaceful and productive pathways.
  • I call them, collectively, the Constitution of Knowledge. If we want to defend that system from its many persistent attackers, we need to understand it—and its very special notion of reality.
  • What reality really is
  • The question “What is reality?” may seem either too metaphysical to answer meaningfully or too obvious to need answering
  • The whole problem is that humans have no direct access to an objective world independent of our minds and senses, and subjective certainty is no guarantee of truth. Faced with those problems and others, philosophers and practitioners think of reality as a set of propositions (or claims, or statements) that have been validated in some way, and that have thereby been shown to be at least conditionally true—true, that is, unless debunked
  • Some propositions reflect reality as we perceive it in everyday life (“The sky is blue”). Others, like the equations on a quantum physicist’s blackboard, are incomprehensible to intuition. Many fall somewhere in between.
  • a phrase I used a few sentences ago, “validated in some way,” hides a cheat. In epistemology, the whole question is, validated in what way? If we care about knowledge, freedom, and peace, then we need to stake a strong claim: Anyone can believe anything, but liberal science—open-ended, depersonalized checking by an error-seeking social network—is the only legitimate validator of knowledge, at least in the reality-based community.
  • That is a very bold, very broad, very tough claim, and it goes down very badly with lots of people and communities who feel ignored or oppressed by the Constitution of Knowledge: creationists, Christian Scientists, homeopaths, astrologists, flat-earthers, anti-vaxxers, birthers, 9/11 truthers, postmodern professors, political partisans, QAnon followers, and adherents of any number of other belief systems and religions.
  • But, like the U.S. Constitution’s claim to exclusivity in governing (“unconstitutional” means “illegal,” period), the Constitution of Knowledge’s claim to exclusivity is its sine qua non.
  • Rules for reality
  • Say you believe something (X) to be true, and you believe that its acceptance as true by others is important or at least warranted
  • The specific proposition does not matter. What does matter is that the only way to validate it is to submit it to the reality-based community. Otherwise, you could win dominance for your proposition by, say, brute force, threatening and jailing and torturing and killing those who see things differently—a standard method down through history
  • Or you and your like-minded friends could go off and talk only to each other, in which case you would have founded a cult—which is lawful but socially divisive and epistemically worthless.
  • Or you could engage in a social-media campaign to shame and intimidate those who disagree with you—a very common method these days, but one that stifles debate and throttles knowledge (and harms a lot of people).
  • What the reality-based community does is something else again. Its distinctive qualities derive from two core rules: 
  • The fallibilist rule: No one gets the final say. You may claim that a statement is established as knowledge only if it can be debunked, in principle, and only insofar as it withstands attempts to debunk it.
  • What counts is the way the rule directs us to behave: You must assume your own and everyone else’s fallibility and you must hunt for your own and others’ errors, even if you are confident you are right. Otherwise, you are not reality-based.
  • The empirical rule: No one has personal authority. You may claim that a statement has been established as knowledge only insofar as the method used to check it gives the same result regardless of the identity of the checker, and regardless of the source of the statement
  • Who you are does not count; the rules apply to everybody and persons are interchangeable. If your method is valid only for you or your affinity group or people who believe as you do, then you are not reality-based.
  • Whatever you do to check a proposition must be something that anyone can do, at least in principle, and get the same result. Also, no one proposing a hypothesis gets a free pass simply because of who she is or what group she belongs to.
  • Both rules have very profound social implications. “No final say” insists that to be knowledge, a statement must be checked; and it also says that knowledge is always provisional, standing only as long as it withstands checking.
  • “No personal authority” adds a crucial second step by defining what properly counts as checking. The point, as the great American philosopher Charles Sanders Peirce emphasized more than a century ago, is not that I look or you look but that we look; and then we compare, contest, and justify our views. Critically, then, the empirical rule is a social principle that forces us into the same conversation—a requirement that all of us, however different our viewpoints, agree to discuss what is in principle only one reality.
  • By extension, the empirical rule also dictates what does not count as checking: claims to authority by dint of a personally or tribally privileged perspective.
  • In principle, persons and groups are interchangeable. If I claim access to divine revelation, or if I claim the support of miracles that only believers can witness, or if I claim that my class or race or historically dominant status or historically oppressed status allows me to know and say things that others cannot, then I am breaking the empirical rule by exempting my views from contestability by others.
  • Though seemingly simple, the two rules define a style of social learning that prohibits a lot of the rhetorical moves we see every day.
  • Claiming that a conversation is too dangerous or blasphemous or oppressive or traumatizing to tolerate will almost always break the fallibilist rule.
  • Claims which begin “as a Jew,” or “as a queer,” or for that matter “as minister of information” or “as Pope” or “as head of the Supreme Soviet,” can be valid if they provide useful information about context or credentials; but if they claim to settle an argument by appealing to personal or tribal authority, rather than earned authority, they violate the empirical rule. 
  • “No personal authority” says nothing against trying to understand where people are coming from. If we are debating same-sex marriage, I may mention my experience as a gay person, and my experience may (I hope) be relevant.
  • But statements about personal standing and interest inform the conversation; they do not control it, dominate it, or end it. The rule acknowledges, and to an extent accepts, that people’s social positions and histories matter; but it asks its adherents not to burrow into their social identities, and not to play them as rhetorical trump cards, but to bring them to the larger project of knowledge-building and thereby transcend them.
  • the fallibilist and empirical rules are the common basis of science, journalism, law, and all the other branches of today’s reality-based community. For that reason, both rules also attract hostility, defiance, interference, and open warfare from those who would rather manipulate truth than advance it.
Javier E

Opinion | The Imminent Danger of A.I. Is One We're Not Talking About - The New York Times - 0 views

  • a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
  • “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.
  • Who will these machines serve?
  • The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?
  • Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.
  • the dark secret of the digital advertising industry is that the ads mostly don’t work
  • These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”
  • So why are they ending up in search first? Because there are gobs of money to be made in search
  • That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment
  • this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users.
  • What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell?
  • What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,”
  • I think it’s just going to get worse and worse.”
  • Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji
  • They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers
  • A.I. researchers get annoyed when journalists anthropomorphize their creations
  • They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
  • I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free
  • It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models
  • Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former
  • We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.
  • One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I.
  • wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation
  • Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
  • Most fears about capitalism are best understood as fears about our inability to regulate capitalism.