TOK Friends: Group items tagged "machine"

Male Stock Analysts With 'Dominant' Faces Get More Information-and Have Better Forecast... - 0 views

  • “People form impressions after extremely brief exposure to faces—within a hundred milliseconds,” says Alexander Todorov, a behavioral-science professor at the University of Chicago Booth School of Business. “They take actions based on those impressions.”
  • Under most circumstances, such quick impressions aren’t accurate and shouldn’t be trusted, he says.
  • Prof. Teoh and her fellow researchers analyzed the facial traits of nearly 800 U.S. sell-side stock analysts working between January 1990 and December 2017 who also had a LinkedIn profile photo as of 2018. They pulled their sample of analysts from Thomson Reuters, and the firms they covered from the merged Center for Research in Security Prices and Compustat, a database of financial, statistical and market information.
  • ...8 more annotations...
  • The researchers used facial-recognition software to map out specific points on a person’s face, then applied machine-learning algorithms to the facial points to obtain empirical measures for three key face impressions—trustworthiness, dominance and attractiveness.  
  • They examined the association of these impressions with the accuracy of analysts’ quarterly forecasts, drawn from the Institutional Brokers Estimate System
  • Analyst accuracy was determined by comparing each analyst’s prediction error—the difference between their prediction and the actual earnings—with that of all analysts for that same company and quarter. (A toy sketch of this measure follows the list.)
  • For an average stock valued at $100, Prof. Teoh says, analysts ranked as looking most trustworthy were 25 cents more accurate in earnings-per-share forecasts than the analysts who were ranked as looking least trustworthy.
  • Similarly, most-dominant-looking analysts were 52 cents more accurate in their EPS forecasts than least-dominant-looking analysts.
  • The relation between a dominant face and accuracy, meanwhile, was significant both before and after the regulation was enacted, the researchers say. This suggests that dominant-looking male analysts are always able to obtain information.
  • While the forecasts of female analysts, regardless of facial characteristics, were on average more accurate than those of their male counterparts, the forecasts of women who were seen as more-dominant-looking were significantly less accurate than those of their male counterparts.
  • Says Prof. Todorov: “Women who look dominant are more likely to be viewed negatively because it goes against the cultural stereotype.”
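
A minimal sketch, in Python, of the relative-accuracy measure described in the bullets above: each analyst’s absolute EPS forecast error is compared with the average error of all analysts covering the same company and quarter. All data and column names are invented for illustration; this is not the researchers’ code, and the facial-landmark and machine-learning steps that produced the impression scores are not reproduced here.

```python
# Toy illustration (not the study's actual pipeline): compare each analyst's
# absolute EPS forecast error with the mean error of all analysts covering
# the same company-quarter. All names and numbers are made up.
import pandas as pd

forecasts = pd.DataFrame({
    "analyst":  ["A", "B", "C", "A", "B"],
    "firm":     ["XYZ", "XYZ", "XYZ", "QRS", "QRS"],
    "quarter":  ["2017Q4", "2017Q4", "2017Q4", "2017Q4", "2017Q4"],
    "forecast": [1.10, 1.25, 0.90, 2.00, 2.40],
    "actual":   [1.00, 1.00, 1.00, 2.20, 2.20],
})

forecasts["abs_error"] = (forecasts["forecast"] - forecasts["actual"]).abs()

# Average error across all analysts for the same firm and quarter
peer_mean = forecasts.groupby(["firm", "quarter"])["abs_error"].transform("mean")

# Negative values mean the analyst beat the peer average for that firm-quarter
forecasts["relative_error"] = forecasts["abs_error"] - peer_mean
print(forecasts[["analyst", "firm", "relative_error"]])
```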

There Is More to Us Than Just Our Brains - The New York Times - 0 views

  • we are less like data processing machines and more like soft-bodied mollusks, picking up cues from within and without and transforming ourselves accordingly.
  • Still, we “insist that the brain is the sole locus of thinking, a cordoned-off space where cognition happens, much as the workings of my laptop are sealed inside its aluminum case,”
  • We get constant messages about what’s going on inside our bodies, sensations we can either attend to or ignore. And we belong to tribes that cosset and guide us
  • we’re networked organisms who move around in shifting surroundings, environments that have the power to transform our thinking
  • Annie Murphy Paul’s new book, “The Extended Mind,” which exhorts us to use our entire bodies, our surroundings and our relationships to “think outside the brain.”
  • In 2011, she published “Origins,” which focused on all the ways we are shaped by the environment, before birth and minute to minute thereafter.
  • “In the nature-nurture dynamic, nurture begins at the time of conception. The food the mother eats, the air she breathes, the water she drinks, the stress or trauma she experiences — all may affect her child for better or worse, over the decades to come.”
  • a down-to-earth take on the science of epigenetics — how environmental signals become catalysts for gene expression
  • the parallel to this latest book is that the boundaries we commonly assume to be fixed are actually squishy. The moment of a child’s birth, her I.Q. scores or fMRI snapshots of what’s going on inside her brain — all are encroached upon and influenced by outside forces.
  • awareness of our internal signals, such as exactly when our hearts beat, or how cold and clammy our hands are, can boost our performance at the poker table or in the financial markets, and even improve our pillow talk
  • “Though we typically think of the brain as telling the body what to do, just as much does the body guide the brain with an array of subtle nudges and prods. One psychologist has called this guide our ‘somatic rudder.’”
  • The “body scan” aspect of mindfulness meditation that has been deployed by the behavioral medicine pioneer Jon Kabat-Zinn may help people lower their heart rates and blood pressure,
  • techniques that help us pinpoint their signals can foster well-being
  • Tania Singer has shown how the neural circuitry underlying compassion is strengthened by meditation practice
  • our thoughts “are powerfully shaped by the way we move our bodies.” Gestures help us understand spatial concepts; indeed, “without gesture as an aid, students may fail to understand spatial ideas at all,”
  • looking out on grassy expanses near loose clumps of trees and a source of water helps us solve problems. “Passive attention,” she writes, is “effortless: diffuse and unfocused, it floats from object to object, topic to topic. This is the kind of attention evoked by nature, with its murmuring sounds and fluid motions; psychologists working in the tradition of James call this state of mind ‘soft fascination.’”
  • The chapters on the ways natural and built spaces reflect universal preferences and enhance the thinking process felt like a respite

Opinion | David Brooks: I Was Wrong About Capitalism - The New York Times - 0 views

  • sometimes I’m just slow. I suffer an intellectual lag.
  • Reality has changed, but my mental frameworks just sit there. Worse, they prevent me from even seeing the change that is already underway — what the experts call “conceptual blindness.”
  • For a while this bet on free-market economic dynamism seemed to be paying off. It was the late 1980s and 1990s
  • In the early 1990s, The Journal sent me on many reporting trips to the U.S.S.R.
  • I saw but did not see the enormous amount of corruption that was going on. I saw but did not see that property rights alone do not spontaneously make a decent society. The primary problem in all societies is order — moral, legal and social order.
  • By the time I came to this job, in 2003, I was having qualms about the free-market education I’d received — but not fast enough. It took me a while to see that the postindustrial capitalism machine — while innovative, dynamic and wonderful in many respects — had some fundamental flaws.
  • The most educated Americans were amassing more and more wealth, dominating the best living areas, pouring advantages into their kids. A highly unequal caste system was forming. Bit by bit it dawned on me that the government would have to get much more active if every child was going to have an open field and a fair chance.
  • the financial crisis hit, the flaws in modern capitalism were blindingly obvious, but my mental frames still didn’t shift fast enough.
  • Sometimes in life you should stick to your worldview and defend it against criticism. But sometimes the world is genuinely different than it was before. At those moments the crucial skills are the ones nobody teaches you: how to reorganize your mind, how to see with new eyes.

Opinion | The Book That Explains Our Cultural Stagnation - The New York Times - 0 views

  • The best explanation I’ve read for our current cultural malaise comes at the end of W. David Marx’s forthcoming “Status and Culture: How Our Desire for Social Rank Creates Taste, Identity, Art, Fashion, and Constant Change,” a book that is not at all boring and that subtly altered how I see the world.
  • Marx posits cultural evolution as a sort of perpetual motion machine driven by people’s desire to ascend the social hierarchy. Artists innovate to gain status, and people unconsciously adjust their tastes to either signal their status tier or move up to a new one.
  • “Status struggles fuel cultural creativity in three important realms: competition between socioeconomic classes, the formation of subcultures and countercultures, and artists’ internecine battles.”
  • avant-garde composer John Cage. When Cage presented his discordant orchestral piece “Atlas Eclipticalis” at Lincoln Center in 1964, many patrons walked out. Members of the orchestra hissed at Cage when he took his bow; a few even smashed his electronic equipment. But Cage’s work inspired other artists, leading “historians and museum curators to embrace him as a crucial figure in the development of postmodern art,” which in turn led audiences to pay respectful attention to his work
  • “There was a virtuous cycle for Cage: His originality, mystery and influence provided him artist status; this encouraged serious institutions to explore his work; the frequent engagement with his work imbued Cage with cachet among the public, who then received a status boost for taking his work seriously,” writes Marx.
  • The internet, Marx writes in his book’s closing section, changes this dynamic. With so much content out there, the chance that others will recognize the meaning of any obscure cultural signal declines
  • in the age of the internet, taste tells you less about a person. You don’t need to make your way into any social world to develop a familiarity with Cage — or, for that matter, with underground hip-hop, weird performance art, or rare sneakers.
  • people are, obviously, no less obsessed with their own status today than they were during times of fecund cultural production.
  • the markers of high social rank have become more philistine. When the value of cultural capital is debased, writes Marx, it makes “popularity and economic capital even more central in marking status.”
  • there’s “less incentive for individuals to both create and celebrate culture with high symbolic complexity.”
  • It makes more sense for a parvenu to fake a ride on a private jet than to fake an interest in contemporary art. We live in a time of rapid and disorienting shifts in gender, religion and technology. Aesthetically, thanks to the internet, it’s all quite dull.

Among the Disrupted - The New York Times - 0 views

  • even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science.
  • The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university,
  • So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods
  • The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy.
  • Greif’s book is a prehistory of our predicament, of our own “crisis of man.” (The “man” is archaic, the “crisis” is not.) It recognizes that the intellectual history of modernity may be written in part as the epic tale of a series of rebellions against humanism
  • We are not becoming transhumanists, obviously. We are too singular for the Singularity. But are we becoming posthumanists?
  • In American culture right now, as I say, the worldview that is ascendant may be described as posthumanism.
  • The posthumanism of the 1970s and 1980s was more insular, an academic affair of “theory,” an insurgency of professors; our posthumanism is a way of life, a social fate.
  • In “The Age of the Crisis of Man: Thought and Fiction in America, 1933-1973,” the gifted essayist Mark Greif, who reveals himself to be also a skillful historian of ideas, charts the history of the 20th-century reckonings with the definition of “man.”
  • Here is his conclusion: “Anytime your inquiries lead you to say, ‘At this moment we must ask and decide who we fundamentally are, our solution and salvation must lie in a new picture of ourselves and humanity, this is our profound responsibility and a new opportunity’ — just stop.” Greif seems not to realize that his own book is a lasting monument to precisely such inquiry, and to its grandeur
  • “Answer, rather, the practical matters,” he counsels, in accordance with the current pragmatist orthodoxy. “Find the immediate actions necessary to achieve an aim.” But before an aim is achieved, should it not be justified? And the activity of justification may require a “picture of ourselves.” Don’t just stop. Think harder. Get it right.
  • — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.
  • Who has not felt superior to humanism? It is the cheapest target of all: Humanism is sentimental, flabby, bourgeois, hypocritical, complacent, middlebrow, liberal, sanctimonious, constricting and often an alibi for power
  • what is humanism? For a start, humanism is not the antithesis of religion, as Pope Francis is exquisitely demonstrating
  • The worldview takes many forms: a philosophical claim about the centrality of humankind to the universe, and about the irreducibility of the human difference to any aspect of our animality
  • Here is a humanist proposition for the age of Google: The processing of information is not the highest aim to which the human spirit can aspire, and neither is competitiveness in a global economy. The character of our society cannot be determined by engineers.
  • And posthumanism? It elects to understand the world in terms of impersonal forces and structures, and to deny the importance, and even the legitimacy, of human agency.
  • There have been humane posthumanists and there have been inhumane humanists. But the inhumanity of humanists may be refuted on the basis of their own worldview
  • the condemnation of cruelty toward “man the machine,” to borrow the old but enduring notion of an 18th-century French materialist, requires the importation of another framework of judgment. The same is true about universalism, which every critic of humanism has arraigned for its failure to live up to the promise of a perfect inclusiveness
  • there has never been a universalism that did not exclude. Yet the same is plainly the case about every particularism, which is nothing but a doctrine of exclusion; and the correction of particularism, the extension of its concept and its care, cannot be accomplished in its own name. It requires an idea from outside, an idea external to itself, a universalistic idea, a humanistic idea.
  • Asking universalism to keep faith with its own principles is a perennial activity of moral life. Asking particularism to keep faith with its own principles is asking for trouble.
  • there is no more urgent task for American intellectuals and writers than to think critically about the salience, even the tyranny, of technology in individual and collective life
  • a methodological claim about the most illuminating way to explain history and human affairs, and about the essential inability of the natural sciences to offer a satisfactory explanation; a moral claim about the priority, and the universal nature, of certain values, not least tolerance and compassion
  • “Our very mastery seems to escape our mastery,” Michel Serres has anxiously remarked. “How can we dominate our domination; how can we master our own mastery?”
  • universal accessibility is not the end of the story, it is the beginning. The humanistic methods that were practiced before digitalization will be even more urgent after digitalization, because we will need help in navigating the unprecedented welter
  • Searches for keywords will not provide contexts for keywords. Patterns that are revealed by searches will not identify their own causes and reasons
  • The new order will not relieve us of the old burdens, and the old pleasures, of erudition and interpretation.
  • Is all this — is humanism — sentimental? But sentimentality is not always a counterfeit emotion. Sometimes sentiment is warranted by reality.
  • The persistence of humanism through the centuries, in the face of formidable intellectual and social obstacles, has been owed to the truth of its representations of our complexly beating hearts, and to the guidance that it has offered, in its variegated and conflicting versions, for a soulful and sensitive existence
  • a complacent humanist is a humanist who has not read his books closely, since they teach disquiet and difficulty. In a society rife with theories and practices that flatten and shrink and chill the human subject, the humanist is the dissenter.

How Can 'Absurd' Luxury Prices be Justified? - The New York Times - 1 views

  • According to the data company EDITED, average luxury prices are up by 25 percent since 2019. Many brands attribute their increases to everything from inflation and the balancing out of regional price disparities to the fallout of the pandemic and impact of the war in Ukraine. But for many luxury fashion brands, the uptick has been taking place over a longer timeline.
  • Take Chanel, where handbag prices have more than doubled since 2016. The auction house Sotheby’s found a classic 2.55 Chanel bag sold for around $1,650 in 2008. In 2023, that figure from Chanel is closer to $10,200. (If the cost had risen in line with inflation over the 15-year period, it would be expected to cost $2,359; a quick back-of-the-envelope check follows the list.)
  • “Industrywide, the pricing has gotten truly absurd,” the influencer Bryan Yambao noted in an Instagram post this week when discussing a $6,000 woolen Miu Miu coat.
  • The answer is the 1 percent, obviously (or those willing to incur credit card debt). Luca Solca, an analyst for Bernstein, estimates that the top 5 percent of luxury clients now account for more than 40 percent of sales for most luxury goods brands
  • As wealth inequality increases globally, luxury brands are doubling down on an ever smaller slice of their clientele
  • VF: There’s a sort of warring imperative here between people who see it as a badge of honor not to buy full price and those who belong to the very tiny club that can. Economics are involved, but as much as anything, it’s also psychology.
  • VF: Which brings me back to Phoebe Philo’s current scarcity model: We don’t even know how much of any piece they produced (the company wouldn’t say), but the message is the market will bear the prices, since the products are sold out. We think because it is more expensive, it has more value.
  • GT: This is the stuff for which Daniel Kahneman won the Nobel. Subjects shown two identical cashmere sweaters, one at a far higher price, inevitably chose the more costly of the two. His point was that we don’t think. We act irrationally.
  • EP: Speaking of dollars, a quarter of a trillion of them have been wiped off the value of Europe’s seven biggest luxury businesses since April, and sales are generally down across most fashion groups. That is the aspirational middle class stepping away from the credit card machine.
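
A quick back-of-the-envelope check of the Chanel arithmetic quoted above, using only the figures given: the inflation-adjusted price implies roughly 43 percent cumulative inflation over the 15 years, while the actual price rose about sixfold.

```python
# Back-of-the-envelope check using only the numbers quoted above.
price_2008 = 1_650               # classic 2.55 bag at auction, 2008
inflation_adjusted_2023 = 2_359  # quoted inflation-adjusted price
actual_2023 = 10_200             # quoted 2023 Chanel price

implied_inflation = inflation_adjusted_2023 / price_2008 - 1
beyond_inflation = actual_2023 / inflation_adjusted_2023
print(f"Implied cumulative inflation 2008-2023: {implied_inflation:.0%}")  # ~43%
print(f"Price growth beyond inflation: {beyond_inflation:.1f}x")           # ~4.3x
```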

His Job Was to Make Instagram Safe for Teens. His 14-Year-Old Showed Him What the App W... - 0 views

  • The experience of young users on Meta’s Instagram—where Bejar had spent the previous two years working as a consultant—was especially acute. In a subsequent email to Instagram head Adam Mosseri, one statistic stood out: One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.
  • For Bejar, that finding was hardly a surprise. His daughter and her friends had been receiving unsolicited penis pictures and other forms of harassment on the platform since the age of 14, he wrote, and Meta’s systems generally ignored their reports—or responded by saying that the harassment didn’t violate platform rules.
  • “I asked her why boys keep doing that,” Bejar wrote to Zuckerberg and his top lieutenants. “She said if the only thing that happens is they get blocked, why wouldn’t they?”
  • For the well-being of its users, Bejar argued, Meta needed to change course, focusing less on a flawed system of rules-based policing and more on addressing such bad experiences
  • The company would need to collect data on what upset users and then work to combat the source of it, nudging those who made others uncomfortable to improve their behavior and isolating communities of users who deliberately sought to harm others.
  • “I am appealing to you because I believe that working this way will require a culture shift,” Bejar wrote to Zuckerberg—the company would have to acknowledge that its existing approach to governing Facebook and Instagram wasn’t working.
  • During and after Bejar’s time as a consultant, Meta spokesman Andy Stone said, the company has rolled out several product features meant to address some of the Well-Being Team’s findings. Those features include warnings to users before they post comments that Meta’s automated systems flag as potentially offensive, and reminders to be kind when sending direct messages to users like content creators who receive a large volume of messages. 
  • Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision.
  • Bejar was floored—all the more so when he learned that virtually all of his daughter’s friends had been subjected to similar harassment. “DTF?” a user they’d never met would ask, using shorthand for a vulgar proposition. Instagram acted so rarely on reports of such behavior that the girls no longer bothered reporting them. 
  • Meta’s own statistics suggested that big problems didn’t exist. 
  • Meta had come to approach governing user behavior as an overwhelmingly automated process. Engineers would compile data sets of unacceptable content—things like terrorism, pornography, bullying or “excessive gore”—and then train machine-learning models to screen future content for similar material.
  • While users could still flag things that upset them, Meta shifted resources away from reviewing them. To discourage users from filing reports, internal documents from 2019 show, Meta added steps to the reporting process. Meta said the changes were meant to discourage frivolous reports and educate users about platform rules. 
  • The outperformance of Meta’s automated enforcement relied on what Bejar considered two sleights of hand. The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed
  • “Please don’t talk about my underage tits,” Bejar’s daughter shot back before reporting his comment to Instagram. A few days later, the platform got back to her: The insult didn’t violate its community guidelines.
  • Also buttressing Meta’s statistics were rules written narrowly enough to ban only unambiguously vile material. Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s posts with kiss emojis or posting pictures of kids in their underwear, inviting their followers to “see more” in a private Facebook Messenger group. 
  • “Mark personally values freedom of expression first and foremost and would say this is a feature and not a bug,” Rosen responded
  • Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence.
  • Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced.
  • According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated, less than 0.05%, the threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was just as minimal, and rule violations for bullying and harassment occurred in just eight of 10,000 views. (A toy comparison of prevalence with survey-based rates follows the list.)
  • “There’s a grading-your-own-homework problem,”
  • Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.”
  • It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical
  • the team built a new questionnaire called BEEF, short for “Bad Emotional Experience Feedback.”
  • A recurring survey of issues 238,000 users had experienced over the past seven days, the effort identified problems with prevalence from the start: Users were 100 times more likely to tell Instagram they’d witnessed bullying in the last week than Meta’s bullying-prevalence statistics indicated they should.
  • “People feel like they’re having a bad experience or they don’t,” one presentation on BEEF noted. “Their perception isn’t constrained by policy.”
  • they seemed particularly common among teens on Instagram.
  • Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity
  • More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days. 
  • The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued
  • To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw.
  • Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it
  • And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them.
  • One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.”
  • But among the ranks of Meta’s senior middle management, Bejar and Krieger said, BEEF hit a wall. Managers who had made their careers on incrementally improving prevalence statistics weren’t receptive to the suggestion that the approach wasn’t working. 
  • After three decades in Silicon Valley, he understood that members of the company’s C-Suite might not appreciate a damning appraisal of the safety risks young users faced from its product—especially one citing the company’s own data. 
  • “This was the email that my entire career in tech trained me not to send,” he says. “But a part of me was still hoping they just didn’t know.”
  • “Policy enforcement is analogous to the police,” he wrote in the email Oct. 5, 2021—arguing that it’s essential to respond to crime, but that it’s not what makes a community safe. Meta had an opportunity to do right by its users and take on a problem that Bejar believed was almost certainly industrywide.
  • After Haugen’s airing of internal research, Meta had cracked down on the distribution of anything that would, if leaked, cause further reputational damage. With executives privately asserting that the company’s research division harbored a fifth column of detractors, Meta was formalizing a raft of new rules for employees’ internal communication.
  • Among the mandates for achieving “Narrative Excellence,” as the company called it, was to keep research data tight and never assert a moral or legal duty to fix a problem.
  • “I had to write about it as a hypothetical,” Bejar said. Rather than acknowledging that Instagram’s survey data showed that teens regularly faced unwanted sexual advances, the memo merely suggested how Instagram might help teens if they faced such a problem.
  • The hope that the team’s work would continue didn’t last. The company stopped conducting the specific survey behind BEEF, then laid off most everyone who’d worked on it as part of what Zuckerberg called Meta’s “year of efficiency.”
  • If Meta was to change, Bejar told the Journal, the effort would have to come from the outside. He began consulting with a coalition of state attorneys general who filed suit against the company late last month, alleging that the company had built its products to maximize engagement at the expense of young users’ physical and mental health. Bejar also got in touch with members of Congress about where he believes the company’s user-safety efforts fell short. 
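
To make the measurement gap concrete, here is a toy comparison of a view-weighted prevalence metric against a per-user survey rate, with made-up volumes loosely based on the figures quoted above; the two diverge because they count different things (violating views versus affected users).

```python
# Toy numbers loosely based on the figures quoted above: prevalence counts
# violating *views*, while a BEEF-style survey counts affected *users*.
total_views = 10_000_000   # hypothetical weekly views
violating_views = 8_000    # 8 in 10,000 views, per the bullying figure
prevalence = violating_views / total_views
print(f"Prevalence (share of views): {prevalence:.2%}")  # 0.08%

# Survey side: 26% of under-16s reported witnessing hostility in a week
# (the recurring survey covered roughly 238,000 users).
bad_experience_rate = 0.26
print(f"Survey (share of users): {bad_experience_rate:.0%}")
print(f"Gap between the two metrics: {bad_experience_rate / prevalence:.0f}x")
```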

How 2020 Forced Facebook and Twitter to Step In - The Atlantic - 0 views

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. Facebook is reversing some of these steps now, but it cannot make people forget this toolbox exists in the future
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and is introducing pop-up nudges to encourage users
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.

Two recent surveys show AI will do more harm than good - The Washington Post - 0 views

  • A Monmouth University poll released last week found that only 9 percent of Americans believed that computers with artificial intelligence would do more good than harm to society.
  • When the same question was asked in a 1987 poll, a higher share of respondents – about one in five – said AI would do more good than harm,
  • In other words, people have less unqualified confidence in AI now than they did 35 years ago, when the technology was more science fiction than reality.
  • The Pew Research Center survey asked people different questions but found similar doubts about AI. Just 15 percent of respondents said they were more excited than concerned about the increasing use of AI in daily life.
  • “It’s fantastic that there is public skepticism about AI. There absolutely should be,” said Meredith Broussard, an artificial intelligence researcher and professor at New York University.
  • Broussard said there can be no way to design artificial intelligence software to make inherently human decisions, like grading students’ tests or determining the course of medical treatment.
  • Most Americans essentially agree with Broussard that AI has a place in our lives, but not for everything.
  • Most people said it was a bad idea to use AI for military drones that try to distinguish between enemies and civilians or trucks making local deliveries without human drivers. Most respondents said it was a good idea for machines to perform risky jobs such as coal mining.
  • Roman Yampolskiy, an AI specialist at the University of Louisville engineering school, told me he’s concerned about how quickly technologists are building computers that are designed to “think” like the human brain and apply knowledge not just in one narrow area, like recommending Netflix movies, but for complex tasks that have tended to require human intelligence.
  • “We have an arms race between multiple untested technologies. That is my concern,” Yampolskiy said. (If you want to feel terrified, I recommend Yampolskiy’s research paper on the inability to control advanced AI.)
  • The term “AI” is a catch-all for everything from relatively uncontroversial technology, such as autocomplete in your web search queries, to the contentious software that promises to predict crime before it happens. Our fears about the latter might be overwhelming our beliefs about the benefits from more mundane AI.

Opinion | Chatbots Are a Danger to Democracy - The New York Times - 0 views

  • longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process
  • Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.
  • In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.
  • In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.
  • around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots.
  • a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side.
  • It’s irrelevant that current bots are not “smart” like we are, or that they have not achieved the consciousness and creativity hoped for by A.I. purists. What matters is their impact
  • In the past, despite our differences, we could at least take for granted that all participants in the political process were human beings. This is no longer true.
  • Increasingly we share the online debate chamber with nonhuman entities that are rapidly growing more advanced
  • a bot developed by the British firm Babylon reportedly achieved a score of 81 percent in the clinical examination for admission to the Royal College of General Practitioners. The average score for human doctors? 72 percent.
  • If chatbots are approaching the stage where they can answer diagnostic questions as well or better than human doctors, then it’s possible they might eventually reach or surpass our levels of political sophistication
  • chatbots could seriously endanger our democracy, and not just when they go haywire.
  • They’ll likely have faces and voices, names and personalities — all engineered for maximum persuasion. So-called “deep fake” videos can already convincingly synthesize the speech and appearance of real politicians.
  • The most obvious risk is that we are crowded out of our own deliberative processes by systems that are too fast and too ubiquitous for us to keep up with.
  • A related risk is that wealthy people will be able to afford the best chatbots.
  • in a world where, increasingly, the only feasible way of engaging in debate with chatbots is through the deployment of other chatbots also possessed of the same speed and facility, the worry is that in the long run we’ll become effectively excluded from our own party.
  • the wholesale automation of deliberation would be an unfortunate development in democratic history.
  • A blunt approach — call it disqualification — would be an all-out prohibition of bots on forums where important political speech takes place, and punishment for the humans responsible
  • The Bot Disclosure and Accountability Bill would amend the Federal Election Campaign Act of 1971 to prohibit candidates and political parties from using any bots intended to impersonate or replicate human activity for public communication. It would also stop PACs, corporations and labor organizations from using bots to disseminate messages advocating candidates, which would be considered “electioneering communications.”
  • A subtler method would involve mandatory identification: requiring all chatbots to be publicly registered and to state at all times the fact that they are chatbots, and the identity of their human owners and controllers.
  • We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make only up to a specific number of online contributions per day, or a specific number of responses to a particular human? (A hypothetical sketch of such a cap follows the list.)
  • We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate
  • the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake.
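
A purely hypothetical sketch of the per-day cap proposed above; the class, names, and limit are invented, and a real platform would enforce such a rule server-side against registered bot identities.

```python
# Hypothetical sketch of the article's proposed rule: a registered bot may
# make only a fixed number of contributions per day. All names are invented.
from collections import defaultdict
from datetime import date

class BotRateLimiter:
    def __init__(self, daily_limit: int = 50):
        self.daily_limit = daily_limit
        self.counts = defaultdict(int)  # (bot_id, day) -> posts made so far

    def allow_post(self, bot_id: str) -> bool:
        """Return True and count the post if the bot is under today's cap."""
        key = (bot_id, date.today())
        if self.counts[key] >= self.daily_limit:
            return False
        self.counts[key] += 1
        return True

limiter = BotRateLimiter(daily_limit=2)
print([limiter.allow_post("bot-123") for _ in range(3)])  # [True, True, False]
```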

Opinion | The Imminent Danger of A.I. Is One We're Not Talking About - The New York Times - 0 views

  • a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
  • “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.
  • Who will these machines serve?
  • The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?
  • Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.
  • the dark secret of the digital advertising industry is that the ads mostly don’t work
  • These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”
  • So why are they ending up in search first? Because there are gobs of money to be made in search
  • That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment
  • this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users.
  • What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell?
  • What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,”
  • I think it’s just going to get worse and worse.”
  • Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji
  • They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers
  • A.I. researchers get annoyed when journalists anthropomorphize their creations
  • They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
  • I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free
  • It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models
  • Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former
  • We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.
  • One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I.
  • wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation
  • Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
  • Most fears about capitalism are best understood as fears about our inability to regulate capitalism.

GPT-4 has arrived. It will blow ChatGPT out of the water. - The Washington Post - 0 views

  • GPT-4, in contrast, is a state-of-the-art system capable of creating not just words but describing images in response to a person’s simple written commands.
  • When shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one side, for instance, a person can ask what will happen if the glove drops, and GPT-4 will respond that it would hit the seesaw and cause the ball to fly up.
  • an AI program, known as a large language model, that early testers had claimed was remarkably advanced in its ability to reason and learn new things
  • Those promises have also fueled anxiety over how people will be able to compete for jobs outsourced to eerily refined machines or trust the accuracy of what they see online.
  • Officials with the San Francisco lab said GPT-4’s “multimodal” training across text and images would allow it to escape the chat box and more fully emulate a world of color and imagery, surpassing ChatGPT in its “advanced reasoning capabilities.”
  • A person could upload an image and GPT-4 could caption it for them, describing the objects and scene.
  • AI language models often confidently offer wrong answers because they are designed to spit out cogent phrases, not actual facts. And because they have been trained on internet text and imagery, they have also learned to emulate human biases of race, gender, religion and class.
  • GPT-4 still makes many of the errors of previous versions, including “hallucinating” nonsense, perpetuating social biases and offering bad advice. It also lacks knowledge of events that happened after about September 2021, when its training data was finalized, and “does not learn from its experience,” limiting people’s ability to teach it new things.
  • Microsoft has invested billions of dollars in OpenAI in the hope its technology will become a secret weapon for its workplace software, search engine and other online ambitions. It has marketed the technology as a super-efficient companion that can handle mindless work and free people for creative pursuits, helping one software developer to do the work of an entire team or allowing a mom-and-pop shop to design a professional advertising campaign without outside help.
  • it could lead to business models and creative ventures no one can predict.
  • sparked criticism that the companies are rushing to exploit an untested, unregulated and unpredictable technology that could deceive people, undermine artists’ work and lead to real-world harm.
  • the company held back the feature to better understand potential risks. As one example, she said, the model might be able to look at an image of a big group of people and offer up known information about them, including their identities — a possible facial recognition use case that could be used for mass surveillance.
  • OpenAI researchers wrote, “As GPT-4 and AI systems like it are adopted more widely,” they “will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in.”
  • “We can agree as a society broadly on some harms that a model should not contribute to,” such as building a nuclear bomb or generating child sexual abuse material, she said. “But many harms are nuanced and primarily affect marginalized groups,” she added, and those harmful biases, especially across other languages, “cannot be a secondary consideration in performance.”
  • OpenAI said its new model would be able to handle more than 25,000 words of text, a leap forward that could facilitate longer conversations and allow for the searching and analysis of long documents.
  • OpenAI developers said GPT-4 was more likely to provide factual responses and less likely to refuse harmless requests
  • Duolingo, the language learning app, has already used GPT-4 to introduce new features, such as an AI conversation partner and a tool that tells users why an answer was incorrect.
  • The company did not share evaluations around bias that have become increasingly common after pressure from AI ethicists.
  • GPT-4 will have competition in the growing field of multisensory AI. DeepMind, an AI firm owned by Google’s parent company Alphabet, last year released a “generalist” model named Gato that can describe images and play video games. And Google this month released a multimodal system, PaLM-E, that folded AI vision and language expertise into a one-armed robot on wheels: If someone told it to go fetch some chips, for instance, it could comprehend the request, wheel over to a drawer and choose the right bag.
  • The systems, though — as critics and the AI researchers are quick to point out — are merely repeating patterns and associations found in their training data without a clear understanding of what it’s saying or when it’s wrong.
  • GPT-4, the fourth “generative pre-trained transformer” since OpenAI’s first release in 2018, relies on a breakthrough neural-network technique in 2017 known as the transformer that rapidly advanced how AI systems can analyze patterns in human speech and imagery.
  • The systems are “pre-trained” by analyzing trillions of words and images taken from across the internet: news articles, restaurant reviews and message-board arguments; memes, family photos and works of art.
  • Giant supercomputer clusters of graphics processing chips are used to map out their statistical patterns — learning which words tended to follow each other in phrases, for instance — so that the AI can mimic those patterns, automatically crafting long passages of text or detailed images, one word or pixel at a time. (A toy illustration of next-word statistics follows the list.)
  • In 2019, the company refused to publicly release GPT-2, saying it was so good they were concerned about the “malicious applications” of its use, from automated spam avalanches to mass impersonation and disinformation campaigns.
  • Altman has also marketed OpenAI’s vision with the aura of science fiction come to life. In a blog post last month, he said the company was planning for ways to ensure that “all of humanity” benefits from “artificial general intelligence,” or AGI — an industry term for the still-fantastical idea of an AI superintelligence that is generally as smart as, or smarter than, the humans themselves.
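
The idea of “learning which words tended to follow each other” can be illustrated with a deliberately tiny toy: a bigram counter that generates text by always choosing the most frequent successor. This is not how GPT-4 works (transformers learn much longer-range, context-dependent patterns), but the next-token intuition is the same.

```python
# Deliberately tiny toy: count which word follows which, then generate text
# by always picking the most frequent successor. Transformers learn far
# richer patterns, but the next-token-prediction intuition is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

word, generated = "the", ["the"]
for _ in range(4):
    if word not in successors:
        break
    word = successors[word].most_common(1)[0][0]  # greedy next-word choice
    generated.append(word)
print(" ".join(generated))  # -> "the cat sat on the"
```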

Twitter is dying | TechCrunch - 0 views

  • if the point is simply pure destruction — building a chaos machine by removing a source of valuable information from our connected world, where groups of all stripes could communicate and organize, and replacing that with a place of parody that rewards insincerity, time-wasting and the worst forms of communication in order to degrade the better half — then he’s done a remarkable job in very short order. Truly it’s an amazing act of demolition. But, well, $44 billion can buy you a lot of wrecking balls.
  • That our system allows wealth to be turned into a weapon to nuke things of broad societal value is one hard lesson we should take away from the wreckage of downed turquoise feathers.
  • We should also consider how the ‘rules based order’ we’ve devised seems unable to stand up to a bully intent on replacing free access to information with paid disinformation — and how our democratic systems seem so incapable and frozen in the face of confident vandals running around spray-painting ‘freedom’ all over the walls as they burn the library down.
  • The simple truth is that building something valuable — whether that’s knowledge, experience or a network worth participating in — is really, really hard. But tearing it all down is piss easy.
  • It almost doesn’t matter if this is deliberate sabotage by Musk or the blundering stupidity of a clueless idiot.

Elon Musk's Disastrous Weekend on Twitter - The Atlantic - 0 views

  • It’s useful to keep in mind that Twitter is an amplification machine. It is built to allow people, with astonishingly little effort, to reach many other people. (This is why brands like it.)
  • There are a million other ways to express yourself online: This has nothing to do with free speech, and Twitter is not obligated to protect your First Amendment rights.
  • When Elon Musk and his fans talk about free speech on Twitter, they’re actually talking about loud speech. Who is allowed to use this technology to make their message very loud, to the exclusion of other messages?
  • ...6 more annotations...
  • Musk seems willing to grant this power to racists, conspiracy theorists, and trolls. This isn’t great for reasonable people who want to have nuanced conversations on social media, but the joke has always been on them. Twitter isn’t that place, and it never will be.
  • one of Musk’s first moves after taking over was to fire the company’s head of policy—an individual who had publicly stated a commitment to both free speech and preventing abuse.
  • On Friday, Musk tweeted that Twitter would be “forming a content moderation council with widely diverse viewpoints,” noting that “no major content decisions [would] happen before that council convenes.” Just three hours later, replying to a question about lifting a suspension on The Daily Wire’s Jordan Peterson, Musk signaled that maybe that wasn’t exactly right; he tweeted: “Anyone suspended for minor & dubious reasons will be freed from Twitter jail.” He says he wants a democratic council, yet he’s also setting policy by decree.
  • Perhaps most depressingly, this behavior is quite familiar. As Techdirt’s Mike Masnick has pointed out, we are all stuck “watching Musk speed run the content moderation learning curve” and making the same mistakes that social-media executives made with their platforms in their first years at the helm.
  • Musk has charged himself with solving the central, seemingly intractable issue at the core of hundreds of years of debate about free speech. In the social-media era, no entity has managed to balance preserving both free speech and genuine open debate across the internet at scale.
  • Musk hasn’t just given himself a nearly impossible task; he’s also created conditions for his new company’s failure. By acting incoherently as a leader and lording the prospect of mass terminations over his employees, he’s created a dysfunctional and chaotic work environment for the people who will ultimately execute his changes to the platform

It's Not Just the Discord Leak. Group Chats Are the Internet's New Chaos Machine. - The Atlantic - 0 views

  • Digital bulletin-board systems—proto–group chats, you could say—date back to the 1970s, and SMS-style group chats popped up in WhatsApp and iMessage in 2011.
  • As New York magazine put it in 2019, group chats became “an outright replacement for the defining mode of social organization of the past decade: the platform-centric, feed-based social network.”
  • unlike the Facebook feed or Twitter, where posts can be linked to wherever, group chats are a closed system—a safe and (ideally) private space. What happens in the group chat ought to stay there.
  • In every group chat, no matter the size, participants fall into informal roles. There is usually a leader—a person whose posting frequency drives the group or sets the agenda. Often, there are lurkers who rarely chime in
  • Larger group chats are not immune to the more toxic dynamics of social media, where competition for attention and herd behavior cause infighting, splintering, and back-channeling.
  • It’s enough to make one think, as the writer Max Read argued, that “venture-capitalist group chats are a threat to the global economy.” Now you might also say they are a threat to national security.
  • thanks to the private nature of the group chats, this information largely stayed out of the public eye. As Bloomberg reported, “By the time most people figured out that a bank run was a possibility … it was already well underway.”
  • The investor panic that led to the swift collapse of Silicon Valley Bank in March was effectively caused by runaway group-chat dynamics. “It wasn’t phone calls; it wasn’t social media,” a start-up founder told Bloomberg in March. “It was private chat rooms and message groups.”
  • Unlike traditional social media or even forums and message boards, group chats are nearly impossible to monitor.
  • as our digital social lives start to splinter off from feeds and large audiences and into siloed areas, a different kind of unpredictability and chaos awaits. Where social networks create a context collapse—a process by which information meant for one group moves into unfamiliar networks and is interpreted by outsiders—group chats seem to be context amplifiers
  • group chats provide strong relationship dynamics, and create in-jokes and lore. For decades, researchers have warned of the polarizing effects of echo chambers across social networks; group chats realize this dynamic fully.
  • Weird things happen in echo chambers. Constant reinforcement of beliefs or ideas might lead to group polarization or radicalization. It may trigger irrational herd behavior such as, say, attempting to purchase a copy of the Constitution through a decentralized autonomous organization
  • Obsession with in-group dynamics might cause people to lose touch with the reality outside the walls of a particular community; the private-seeming nature of a closed group might also lull participants into a false sense of security, as it did with Teixeira.
  • the age of the group chat appears to be at least as unpredictable, swapping a very public form of volatility for a more siloed, incalculable version

'Meta-Content' Is Taking Over the Internet - The Atlantic - 0 views

  • Jenn, however, has complicated things by adding an unexpected topic to her repertoire: the dangers of social media. She recently spoke about disengaging from it for her well-being; she also posted an Instagram Story about the risks of ChatGPT
  • and, in none other than a YouTube video, recommended Neil Postman’s Amusing Ourselves to Death, a seminal piece of media critique from 1985 that denounces television’s reduction of life to entertainment.
  • (Her other book recommendations included Stolen Focus, by Johann Hari, and Recapture the Rapture, by Jamie Wheal.)
  • Social-media platforms are “preying on your insecurities; they’re preying on your temptations,” Jenn explained to me in an interview that shifted our parasocial connection, at least for an hour, to a mere relationship. “And, you know, I do play a role in this.” Jenn makes money through aspirational advertising, after all—a familiar part of any influencer’s job.
  • She’s pro–parasocial relationships, she explains to the camera, but only if we remain aware that we’re in one. “This relationship does not replace existing friendships, existing relationships,” she emphasizes. “This is all supplementary. Like, it should be in addition to your life, not a replacement.” I sat there watching her talk about parasocial relationships while absorbing the irony of being in one with her.
  • The open acknowledgment of social media’s inner workings, with content creators exposing the foundations of their content within the content itself, is what Alice Marwick, an associate communications professor at the University of North Carolina at Chapel Hill, described to me as “meta-content.”
  • Meta-content can be overt, such as the vlogger Casey Neistat wondering, in a vlog, if vlogging your life prevents you from being fully present in it;
  • But meta-content can also be subtle: a vlogger walking across the frame before running back to get the camera. Or influencers vlogging themselves editing the very video you’re watching, in a moment of space-time distortion.
  • Viewers don’t seem to care. We keep watching, fully accepting the performance. Perhaps that’s because the rise of meta-content promises a way to grasp authenticity by acknowledging artifice; especially in a moment when artifice is easier to create than ever before, audiences want to know what’s “real” and what isn’t.
  • “The idea of a space where you can trust no sources, there’s no place to sort of land, everything is put into question, is a very unsettling, unsatisfying way to live.”
  • So we continue to search for, as Murray observes, the “agreed-upon things, our basic understandings of what’s real, what’s true.” But when the content we watch becomes self-aware and even self-critical, it raises the question of whether we can truly escape the machinations of social media. Maybe when we stare directly into the abyss, we begin to enjoy its company.
  • “The difference between BeReal and the social-media giants isn’t the former’s relationship to truth but the size and scale of its deceptions.” BeReal users still angle their camera and wait to take their daily photo at an aesthetic time of day. The snapshots merely remind us how impossible it is to stop performing online.
  • Jenn’s concern over the future of the internet stems, in part, from motherhood. She recently had a son, Lennon (whose first birthday party I watched on YouTube), and worries about the digital world he’s going to inherit.
  • Back in the age of MySpace, she had her own internet friends and would sneak out to parking lots at 1 a.m. to meet them in real life: “I think this was when technology was really used as a tool to connect us.” Now, she explained, it’s beginning to ensnare us. Posting content online is no longer a means to an end so much as the end itself.
  • We used to view influencers’ lives as aspirational, a reality that we could reach toward. Now both sides acknowledge that they’re part of a perfect product that the viewer understands is unattainable and the influencer acknowledges is not fully real.
  • “I forgot to say this to her in the interview, but I truly think that my videos are less about me and more of a reflection of where you are currently … You are kind of reflecting on your own life and seeing what resonates [with] you, and you’re discarding what doesn’t. And I think that’s what’s beautiful about it.”
  • meta-content is fundamentally a compromise. Recognizing the delusion of the internet doesn’t alter our course within it so much as remind us how trapped we truly are—and how we wouldn’t have it any other way.

Unraveling the Mysteries of Wireshark: A Beginner's Guide - 2 views

started by karenmcgregor on 14 Mar 24; no follow-up yet
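
The guide itself isn't excerpted above, but as a quick, hedged illustration of the territory it likely covers: Wireshark's dissection engine can also be scripted from Python through the pyshark wrapper around tshark. This is a minimal sketch, not a substitute for the guide; the capture file name is a placeholder, and the display filter uses Wireshark's own filter syntax.

    import pyshark

    # Open a saved capture and keep only DNS traffic.
    # 'sample.pcap' is a hypothetical file name; display_filter takes the
    # same expressions as Wireshark's GUI filter bar.
    cap = pyshark.FileCapture('sample.pcap', display_filter='dns')

    for pkt in cap:
        try:
            # pyshark exposes each dissected layer as an attribute (pkt.ip, pkt.dns, ...)
            print(pkt.ip.src, '->', pkt.ip.dst, pkt.dns.qry_name)
        except AttributeError:
            pass  # frames without an IPv4 layer are skipped in this sketch

    cap.close()

Typing the same filter, dns, into Wireshark's filter bar gives the equivalent view interactively.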

Musk Peddles Fake News on Immigration and the Media Exaggerates Biden's Decline - 0 views

  • There’s little indication that Biden’s remarks on this occasion—which were lucid, thoughtful, and, as Yglesias noted, cogent—or any of the countless hours of footage from this past year alone of Biden being oratorically and rhetorically compelling have meaningfully factored into the media’s appraisal of Biden’s cognitive state
  • Instead, the media has run headlong toward a narrative constructed by the very people politically incentivized to paint Biden in as unflattering a light as possible. When news organizations uncritically accept, rather than journalistically evaluate, the assumption that Biden is severely cognitively compromised in the first place, they effectively grant the right-wing influencers who spend their days curating Biden gaffe supercuts the opportunity to set the terms of the debate
  • Why does the media take at face value that the viral posts showcasing Biden’s gaffes and slip-ups are truly representative of his current state? 
  • Because right-wing commentators aren’t the only ones who think Biden’s mind is basically gone—lots of voters think so too
  • Of course, a major reason why the public thinks this is because the entirety of the right-wing information superstructure is devoted, on a daily basis, to depicting Biden as severely cognitively compromised
  • By contrast, most of the news sources the right sees as hyperpartisan Biden spin machines actually strain at being fair-minded and objective, which disinclines them toward producing any sort of muscular pushback against the right’s relentless mischaracterizations.
  • Since mainstream media venues by and large epistemically rely on the views of the masses to supply journalists with their coverage frames, news operations end up treating popular concerns about Biden’s age as a kind of sacrosanct window into reality rather than as a hype cycle perpetually fed into the ambient collective consciousness by anti-Biden voices intending to sink his reelection chances.
  • even if we grant every single concern that Klein and others have voiced, it is indisputably true that Joe Biden remains an intellectual giant next to Donald Trump