History Readings: group items tagged “guardrails”

Is Argentina the First A.I. Election? - The New York Times

  • Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch.
  • A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real.
  • A.I.’s prominent role in Argentina’s campaign and the political debate it has set off underscore the technology’s growing prevalence and show that, with its expanding power and falling cost, it is now likely to be a factor in many democratic elections around the globe.
  • Experts compare the moment to the early days of social media, a technology offering tantalizing new tools for politics — and unforeseen threats.
  • For years, those fears had largely been speculative because the technology to produce such fakes was too complicated, expensive and unsophisticated.
  • His spokesman later stressed that the post was in jest and clearly labeled A.I.-generated. His campaign said in a statement that its use of A.I. is to entertain and make political points, not deceive.
  • Researchers have long worried about the impact of A.I. on elections. The technology can deceive and confuse voters, casting doubt over what is real, adding to the disinformation that can be spread by social networks.
  • Much of the content has been clearly fake. But a few creations have toed the line of disinformation. The Massa campaign produced one “deepfake” video in which Mr. Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views.
  • So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled A.I. generated or is so clearly fabricated that it is unlikely it would deceive even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that previously would have taken teams of graphic designers days or weeks to complete.
  • To do so, campaign engineers and artists fed photos of Argentina’s various political players into open-source software called Stable Diffusion to train their own A.I. system so that it could create fake images of those real people. They can now quickly produce an image or video of more than a dozen top political players in Argentina doing almost anything they ask.
  • For Halloween, the Massa campaign told its A.I. to create a series of cartoonish images of Mr. Milei and his allies as zombies. The campaign also used A.I. to create a dramatic movie trailer, featuring Buenos Aires, Argentina’s capital, burning, Mr. Milei as an evil villain in a straitjacket and Mr. Massa as the hero who will save the country.

AI could change the 2024 elections. We need ground rules. - The Washington Post

  • New York Mayor Eric Adams doesn’t speak Spanish. But it sure sounds like he does. He’s been using artificial intelligence software to send prerecorded calls about city events to residents in Spanish, Mandarin Chinese, Urdu and Yiddish. The voice in the messages mimics the mayor but was generated with AI software from a company called ElevenLabs.
  • Experts have warned for years that AI will change our democracy by distorting reality. That future is already here. AI is being used to fabricate voices, fundraising emails and “deepfake” images of events that never occurred.
  • I’m writing this to urge elected officials, candidates and their supporters to pledge not to use AI to deceive voters. I’m not suggesting a ban, but rather calling for politicians to commit to some common values while our democracy adjusts to a world with AI.
  • If we don’t draw some lines now, legions of citizens could be manipulated, disenfranchised or lose faith in the whole system — opening doors to foreign adversaries who want to do the same. AI might break us in 2024.
  • “The ability of AI to interfere with our elections, to spread misinformation that’s extremely believable is one of the things that’s preoccupying us,” Schumer said, after watching me so easily create a deepfake of him. “Lots of people in the Congress are examining this.”
  • Of course, fibbing politicians are nothing new, but examples keep multiplying of how AI supercharges misinformation in ways we haven’t seen before. Two examples: The presidential campaign of Florida Gov. Ron DeSantis (R) shared an AI-generated image of former president Donald Trump embracing Anthony S. Fauci. That hug never happened. In Chicago’s mayoral primary, someone used AI to clone the voice of candidate Paul Vallas in a fake news report, making it look like he approved of police brutality.
  • But what will happen when a shocking image or audio clip goes viral in a battleground state shortly before an election? What kind of chaos will ensue when someone uses a bot to send out individually tailored lies to millions of different voters?
  • A wide 85 percent of U.S. citizens said they were “very” or “somewhat” concerned about the spread of misleading AI video and audio, in an August survey by YouGov. And 78 percent were concerned about AI contributing to the spread of political propaganda.
  • We can’t put the genie back in the bottle. AI is already embedded in tech tools and campaigns that all of us use every day. AI creates our Facebook feeds and picks what ads we see. AI built into our phone cameras brightens faces and smooths skin.
  • What’s more, there are many political uses for AI that are unobjectionable, and even empowering for candidates with fewer resources. Politicians can use AI to manage the grunt work of sorting through databases and responding to constituents. Republican presidential candidate Asa Hutchinson has an AI chatbot trained to answer questions like him. (I’m not sure politician bots are very helpful, but fine, give it a try.)
  • Clarke’s solution, included in a bill she introduced on political ads: Candidates should disclose when they use AI to create communications. You know the “I approve this message” notice? Now add, “I used AI to make this message.”
  • But labels aren’t enough. If AI disclosures become commonplace, we may become blind to them, like so much other fine print.
  • The bigger ask: We want candidates and their supporting parties and committees not to use AI to deceive us.
  • So what’s the difference between a dangerous deepfake and an AI facetune that makes an octogenarian candidate look a little less octogenarian?
  • “The core definition is showing a candidate doing or saying something they didn’t do or say,”
  • Sure, give Biden or Trump a facetune, or even show them shaking hands with Abraham Lincoln. But don’t use AI to show your competitor hugging an enemy or fake their voice commenting on current issues.
  • The pledge also includes not using AI to suppress voting, such as using an authoritative voice or image to tell people a polling place has been closed. That is already illegal in many states, but it’s still concerning how believable AI might make these efforts seem.
  • Don’t deepfake yourself. Making yourself or your favorite candidate appear more knowledgeable, experienced or culturally capable is also a form of deception.
  • (Pressed on the ethics of his use of AI, Adams just proved my point that we desperately need some ground rules. “These are part of the broader conversations that the philosophical people will have to sit down and figure out, ‘Is this ethically right or wrong?’ I’ve got one thing: I’ve got to run the city,” he said.)
  • The golden rule in my pledge — don’t use AI to be materially deceptive — is similar to the one in an AI regulation proposed by a bipartisan group of lawmakers
  • Such proposals have faced resistance in Washington on First Amendment grounds. The free speech of politicians is important. It’s not against the law for politicians to lie, whether they’re using AI or not. An effort to get the Federal Election Commission to count AI deepfakes as “fraudulent misrepresentation” under its existing authority has faced similar pushback.
  • But a pledge like the one I outline here isn’t a law restraining speech. It’s asking politicians to take a principled stand on their own use of AI
  • Schumer said he thinks my pledge is just a start of what’s needed. “Maybe most candidates will make that pledge. But the ones that won’t will drive us to a lower common denominator, and that’s true throughout AI,” he said. “If we don’t have government-imposed guardrails, the lowest common denominator will prevail.”

Opinion | One Year In and ChatGPT Already Has Us Doing Its Bidding - The New York Times

  • haven’t we been adapting to new technologies for most of human history? If we’re going to use them, shouldn’t the onus be on us to be smart about it
  • This line of reasoning avoids what should be a central question: Should lying chatbots and deepfake engines be made available in the first place?
  • A.I.’s errors have an endearingly anthropomorphic name — hallucinations — but this year made clear just how high the stakes can be
  • We got headlines about A.I. instructing killer drones (with the possibility for unpredictable behavior), sending people to jail (even if they’re innocent), designing bridges (with potentially spotty oversight), diagnosing all kinds of health conditions (sometimes incorrectly) and producing convincing-sounding news reports (in some cases, to spread political disinformation).
  • Focusing on those benefits, however, while blaming ourselves for the many ways that A.I. technologies fail us, absolves the companies behind those technologies — and, more specifically, the people behind those companies.
  • Events of the past several weeks highlight how entrenched those people’s power is. OpenAI, the entity behind ChatGPT, was created as a nonprofit to allow it to maximize the public interest rather than just maximize profit. When, however, its board fired Sam Altman, the chief executive, amid concerns that he was not taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.
  • It occurs to me in retrospect that in my early games with ChatGPT, I misidentified my rival. I thought it was the technology itself. What I should have remembered is that technologies themselves are value neutral. The wealthy and powerful humans behind them — and the institutions created by those humans — are not.
  • The truth is that no matter what I asked ChatGPT, in my early attempts to confound it, OpenAI came out ahead. Engineers had designed it to learn from its encounters with users. And regardless of whether its answers were good, they drew me back to engage with it again and again.
  • the power imbalance between A.I.’s creators and its users should make us wary of its insidious reach. ChatGPT’s seeming eagerness not just to introduce itself, to tell us what it is, but also to tell us who we are and what to think is a case in point. Today, when the technology is in its infancy, that power seems novel, even funny. Tomorrow it might not.
  • I asked ChatGPT what I — that is, the journalist Vauhini Vara — think of A.I. It demurred, saying it didn’t have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who is writing an opinion piece for The New York Times about A.I. “As the rain continued to tap against the windows,” it wrote, “Vauhini Vara’s words echoed the sentiment that, much like a symphony, the integration of A.I. into our lives could be a beautiful and collaborative composition if conducted with care.”

Inside the porn industry, AI looms large - The Washington Post

  • Since the first AVN “expo” in 1998, adult entertainment has been overtaken by two business models: Pornhub, a free site supported by ads, and OnlyFans, a subscription platform where individual actors control their businesses and their fate.
  • Now, a new shift is on the horizon: Artificial intelligence models that spin up photorealistic images and videos that put viewers in the director’s chair, letting them create whatever porn they like.
  • Some site owners think it’s a privilege people will pay for, and they are racing to build custom AI models that — unlike the sanitized content on OpenAI’s video engine Sora — draw on a vast repository of porn images and videos.
  • The trickiest question may be how to prevent abuse. AI generators have technological boundaries, but not morals, and it’s relatively easy for users to trick them into creating content that depicts violence, rape, sex with children or a celebrity — or even a crush from work who never consented to appear
  • In some cases, the engines themselves are trained on porn images whose subjects didn’t explicitly agree to the new use. Currently, no federal laws protect the victims of nonconsensual deepfakes.
  • Adult entertainment is a giant industry accounting for a substantial chunk of all internet traffic: Major porn sites get more monthly visitors and page views than Amazon, Netflix, TikTok or Zoom
  • The industry is a habitual early adopter of new technology, from VHS to DVD to dot com. In the mid-2000s, porn companies set up massive sites where users upload and watch free videos, and ad sales foot the bills.
  • At last year’s AVN conference, Steven Jones said his peers looked at him “like he was crazy” when he talked about AI opportunities: “Nobody was interested.” This year, Jones said, he’s been “the belle of the ball.”
  • He called up his old business partner, and the two immediately spent about $550,000 securing the web domains for porn dot ai, deepfake dot com and deepfakes dot com, Jones said. “Lightspeed” was back.
  • One major model, Stable Diffusion, shares its code publicly, and some technologists have figured out how to edit the code to allow for sexual images
  • What keeps Jones up at night is people trying to use his company’s tools to generate images of abuse, he said. The models have some technological guardrails that make it difficult for users to render children, celebrities or acts of violence. But people are constantly looking for workarounds.
  • So with help from an angel investor he will not name, Jones hired five employees and a handful of offshore contractors and started building an image engine trained on bundles of freely available pornographic images, as well as thousands of nude photos from Jones’s own collection
  • Users create what Jones calls a “dream girl,” prompting the AI with descriptions of the character’s appearance, pose and setting. The nudes don’t portray real people, he said. Rather, the goal is to re-create a fantasy from the user’s imagination.
  • The AI-generated images got better, their computerized sheen growing steadily less noticeable. Jones grew his user base to 500,000 people, many of whom pay to generate more images than the five per day allotted to free accounts, he said. The site’s “power users” generate AI porn for 10 hours a day, he said.
  • Jones described the site as an “artists’ community” where people can explore their sexualities and fantasies in a safe space. Unlike some corners of the traditional adult industry, no performers are being pressured, underpaid or placed in harm’s way
  • And critically, consumers don’t have to wait for their favorite OnlyFans performer to come online or trawl through Pornhub to find the content they like.
  • Next comes AI-generated video — “porn’s holy grail,” Jones said. Eventually, he sees the technology becoming interactive, with users giving instructions to lifelike automated “performers.” Within two years, he said, there will be “fully AI cam girls,” a reference to creators who make solo sex content.
  • It costs $12 per day to rent a server from Amazon Web Services, he said, and generating a single picture requires users to have access to a corresponding server. His users have so far generated more than 1.6 million images.
  • Copyright holders including newspapers, photographers and artists have filed a slew of lawsuits against AI companies, claiming the companies trained their models on copyrighted content. If plaintiffs win, it could cut off the free-for-all that benefits entrepreneurs such as Jones.
  • But Jones’s plan to create consumer-friendly AI porn engines faced significant obstacles. The companies behind major image-generation models used technical boundaries to block “not safe for work” content and, without racy images to learn from, the models weren’t good at re-creating nude bodies or scenes.
  • Jones said his team takes down images that other users flag as abusive. Their list of blocked prompts currently contains 1,000 terms including “high school.” (A minimal sketch of this kind of term filter appears after this list.)
  • “I see certain things people type in, and I just hope to God they’re trying to test the model, like we are. I hope they don’t actually want to see the things they’re typing in.”
  • Peter Acworth, the owner of kink dot com, is trying to teach an AI porn generator to understand even subtler concepts, such as the difference between torture and consensual sexual bondage. For decades Acworth has pushed for spaces — in the real world and online — for consenting adults to explore nonconventional sexual interests. In 2006, he bought the San Francisco Armory, a castle-like building in the city’s Mission neighborhood, and turned it into a studio where his company filmed fetish porn until shuttering in 2017.
  • Now, Acworth is working with engineers to train an image-generation model on pictures of BDSM, an acronym for bondage and discipline, dominance and submission, sadism and masochism.
  • Others alluded to a porn apocalypse, with AI wiping out existing models of adult entertainment. “Look around,” said Christian Burke, head of engineering at the adult-industry payment app Melon, gesturing at performers huddled, laughing and hugging across the show floor. “This could look entirely different in a few years.”
  • But the age of AI brings few guarantees for the people, largely women, who appear in porn. Many have signed broad contracts granting companies the rights to reproduce their likeness in any medium for the rest of time
  • Not only could performers lose income, Walters said, they could find themselves in offensive or abusive scenes they never consented to.
  • Lana Smalls, a 23-year-old performer whose videos have been viewed 20 million times on Pornhub, said she’s had colleagues show up to shoots with major studios only to be surprised by sweeping AI clauses in their contracts.
  • “This industry is too fragmented for collective bargaining,” Spiegler said. “Plus, this industry doesn’t like rules.”
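
Below is a minimal sketch, assuming Python and made-up data, of the kind of blocked-term prompt filter the excerpt above describes: the site reportedly keeps roughly 1,000 blocked terms (including “high school”) and tries to stop prompts involving children, celebrities or violence. The term list and function name here are illustrative assumptions, not the company’s actual code.

```python
import re

# Hypothetical stand-ins for blocklist entries; the article reports the real
# list holds about 1,000 terms, including "high school".
BLOCKED_TERMS = {"high school", "child", "celebrity"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject a generation request if the prompt contains any blocked term,
    matched case-insensitively on word boundaries."""
    text = prompt.lower()
    return not any(
        re.search(r"\b" + re.escape(term) + r"\b", text) for term in BLOCKED_TERMS
    )

if __name__ == "__main__":
    print(is_prompt_allowed("a castle courtyard at dusk, oil painting"))  # True
    print(is_prompt_allowed("students in a high school classroom"))      # False
```

A list like this is easy to evade with misspellings or synonyms, which is one reason the article notes that people are constantly looking for workarounds.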

How We Can Control AI - WSJ

  • What’s still difficult is to encode human values
  • That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate. Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs. (A rough sketch of the preference-scoring step behind this human-feedback training appears after this list.)
  • This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.
  • But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behavior: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies.
  • At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks—all based on publicly available knowledge.
  • We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.
  • What’s much harder to test for is what’s known as “capability overhang”—meaning not just the model’s current knowledge, but the derived knowledge it could potentially generate on its own.
  • Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves
  • This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch. But, the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.
  • Another example would be “multi-agent systems,” where multiple independent AI systems are able to coordinate with each other to build something new.
  • This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight.
  • Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur
  • Current regulatory approaches are based on individual model size and training effort, and are based on passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive
  • AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.
  • Europe has so far attempted the most ambitious regulatory regime with its AI Act,
  • But the AI Act has already fallen behind the frontier of innovation, as open-source AI models—which are largely exempt from the legislation—expand in scope and number
  • both Biden’s order and Europe’s AI Act lack intrinsic mechanisms to rapidly adapt to an AI landscape that will continue to change quickly and often.
  • a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing
  • To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models.
  • To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support by philanthropy organizations
  • The field is moving too quickly and the stakes are too high for exclusive reliance on typical government processes and timeframes.
  • One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements.
  • As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators
  • Much ink has been spilled over presumed threats of AI. Advanced AI systems could end up misaligned with human values and interests, able to cause chaos and catastrophe either deliberately or (often) despite efforts to make them safe. And as they advance, the threats we face today will only expand as new systems learn to self-improve, collaborate and potentially resist human oversight.
  • If we can bring about an ecosystem of nimble, sophisticated, independent testing companies who continuously develop and improve their skill evaluating AI testing, we can help bring about a future in which society benefits from the incredible power of AI tools while maintaining meaningful safeguards against destructive outcomes.
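
For context on the human-feedback step described in the annotations above, here is a minimal sketch, assuming Python, of the pairwise preference loss commonly used to train a reward model in RLHF. It is a generic textbook formulation rather than any particular lab’s implementation, and the reward scores below are made-up numbers.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise (Bradley-Terry style) loss for training a reward model from
    human feedback: a rater marks one of two model responses as better, and
    the loss is small when the reward model already scores the preferred
    response higher, large when it disagrees with the rater."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# Made-up scores: the human rater preferred response A over response B.
print(round(preference_loss(2.0, -1.0), 3))  # ~0.049, reward model agrees with the rater
print(round(preference_loss(-1.0, 2.0), 3))  # ~3.049, reward model disagrees, so the loss is large
```

The trained reward model then steers the chat model during reinforcement learning; red-teaming, as the excerpt notes, is the separate, adversarial half of the guardrails.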

Opinion | MAGA Turns Against the Constitution - The New York Times

  • the problem of public ignorance and fake crises transcends politics. Profound pessimism about the state of the nation is empowering the radical, revolutionary politics that fuels extremists on the right and left.
  • now, for parts of MAGA, the Constitution itself is part of the crisis. If it doesn’t permit Trump to take control, then it must be swept aside.
  • Elements of this argument are now bubbling up across the reactionary, populist right
  • Still others believe that the advent of civil rights laws created, in essence, a second Constitution entirely, one that privileges group identity over individual liberty.
  • Protestant Christian nationalists tend to have a higher regard for the American founding, but they believe it’s been corrupted. They claim that the 1787 Constitution is essentially dead, replaced by progressive power politics that have destroyed constitutional government.
  • Catholic post-liberals believe that liberal democracy itself is problematic. According to their critique, the Constitution’s emphasis on individual liberty “atomizes” American life and degrades the traditional institutions of church and family that sustain human flourishing.
  • The original Constitution and Bill of Rights, while a tremendous advance from the Articles of Confederation, suffered from a singular, near-fatal flaw. They protected Americans from federal tyranny, but they also left states free to oppress American citizens in the most horrific ways
  • if your ultimate aim is the destruction of your political enemies, then the Constitution does indeed stand in your way.
  • Right-wing constitutional critics do get one thing right: The 1787 Constitution is mostly gone, and America’s constitutional structure is substantially different from the way it was at the founding. But that’s a good thing
  • its guardrails against tyranny remain vital and relevant today.
  • Individual states ratified their own constitutions that often purported to protect individual liberty, at least for some citizens, but states were also often violently repressive and fundamentally authoritarian.
  • The criminal justice system could be its own special form of hell. Indigent criminal defendants lacked lawyers, prison conditions were often brutal at a level that would shock the modern conscience, and local law enforcement officers had no real constitutional constraints on their ability to search American citizens and seize their property.
  • Through much of American history, various American states protected slavery, enforced Jim Crow, suppressed voting rights, blocked free speech, and established state churches.
  • As a result, if you were traditionally part of the local ruling class — a white Protestant in the South, like me — you experienced much of American history as a kind of golden era of power and control.
  • The Civil War Amendments changed everything. The combination of the 13th, 14th, and 15th Amendments ended slavery once and for all, extended the reach of the Bill of Rights to protect against government actions at every level, and expanded voting rights.
  • But all of this took time. The end of Reconstruction and the South’s “massive resistance” to desegregation delayed the quest for justice.
  • decades of litigation, activism and political reform have yielded a reality in which contemporary Americans enjoy greater protection for the most fundamental civil liberties than any generation that came before.
  • And those who believe that the civil rights movement impaired individual liberty have to reckon with the truth that Americans enjoy greater freedom from both discrimination and censorship than they did before the movement began.
  • So why are parts of the right so discontent? The answer lies in the difference between power and liberty
  • One of the most important stories of the last century — from the moment the Supreme Court applied the First Amendment to state power in 1925, until the present day — is the way in which white Protestants lost power but gained liberty. Many millions are unhappy with the exchange.
  • Consider the state of the law a century ago. Until the expansion of the Bill of Rights (called “incorporation”) to apply to the states, if you controlled your state and wanted to destroy your enemies, you could oppress them to a remarkable degree. You could deprive them of free speech, you could deprive them of due process, you could force them to pray and read state-approved versions of the Bible.
  • The argument that the Constitution is failing is just as mistaken as the argument that the economy is failing, but it’s politically and culturally more dangerous
  • Powerful people often experience their power as a kind of freedom. A king can feel perfectly free to do what he wants, for example, but that’s not the same thing as liberty.
  • Looked at properly, liberty is the doctrine that defies power. It’s liberty that enables us to exercise our rights.
  • Think of the difference between power and liberty like this — power gives the powerful freedom of action. Liberty, by contrast, protects your freedom of action from the powerful.
  • At their core, right-wing attacks on the modern Constitution are an attack on liberty for the sake of power.
  • An entire class of Americans looks back at decades past and has no memory (or pretends to have no memory) of marginalization and oppression. They could do what they wanted, when they wanted and to whom they wanted.
  • Now they don’t have that same control
  • Muslims, Sikhs, Jews, Buddhists and atheists all approach the public square with the same liberties. Drag queens have the same free speech rights as pastors, and many Americans are livid as a result.
  • when a movement starts to believe that America is in a state of economic crisis, criminal chaos and constitutional collapse, then you can start to see the seeds for revolutionary violence and profound political instability. They believe we live in desperate times, and they turn to desperate measures.
  • “You shall know the truth, and the truth shall set you free.” So much American angst and anger right now is rooted in falsehoods. But the truth can indeed set us free from the rage that tempts American hearts toward tyranny.

Opinion | How We’ve Lost Our Moorings as a Society - The New York Times

  • To my mind, one of the saddest things that has happened to America in my lifetime is how much we’ve lost so many of our mangroves. They are endangered everywhere today — but not just in nature.
  • Our society itself has lost so many of its social, normative and political mangroves as well — all those things that used to filter toxic behaviors, buffer political extremism and nurture healthy communities and trusted institutions for young people to grow up in and which hold our society together.
  • You see, shame used to be a mangrove
  • That shame mangrove has been completely uprooted by Trump.
  • The reason people felt ashamed is that they felt fidelity to certain norms — so their cheeks would turn red when they knew they had fallen short
  • “in the kind of normless world we have entered where societal, institutional and leadership norms are being eroded,” Seidman said to me, “no one has to feel shame anymore because no norm has been violated.”
  • People in high places doing shameful things is hardly new in American politics and business. What is new, Seidman argued, “is so many people doing it so conspicuously and with such impunity: ‘My words were perfect,’ ‘I’d do it again.’ That is what erodes norms — that and making everyone else feel like suckers for following them.”
  • Nothing is more corrosive to a vibrant democracy and healthy communities, added Seidman, than “when leaders with formal authority behave without moral authority.”
  • Without leaders who, through their example and decisions, safeguard our norms and celebrate them and affirm them and reinforce them, the words on paper — the Bill of Rights, the Constitution or the Declaration of Independence — will never unite us.”
  • Trump wants to destroy our social and legal mangroves and leave us in a broken ethical ecosystem, because he and people like him best thrive in a broken system.
  • He keeps pushing our system to its breaking point, flooding the zone with lies so that the people trust only him and the truth is only what he says it is. In nature, as in society, when you lose your mangroves, you get flooding with lots of mud.
  • Responsibility, especially among those who have taken oaths of office — another vital mangrove — has also experienced serious destruction.
  • It used to be that if you had the incredible privilege of serving as U.S. Supreme Court justice, in your wildest dreams you would never have an American flag hanging upside down
  • Your sense of responsibility to appear above partisan politics to uphold the integrity of the court’s rulings would not allow it.
  • Civil discourse and engaging with those with whom you disagree — instead of immediately calling for them to be fired — also used to be a mangrove.
  • when moral arousal manifests as moral outrage — and immediate demands for firings — “it can result in a vicious cycle of moral outrage being met with equal outrage, as opposed to a virtuous cycle of dialogue and the hard work of forging real understanding.”
  • In November 2022, the Heterodox Academy, a nonprofit advocacy group, surveyed 1,564 full-time college students ages 18 to 24. The group found that nearly three in five students (59 percent) hesitate to speak about controversial topics like religion, politics, race, sexual orientation and gender for fear of negative backlashes by classmates.
  • Locally owned small-town newspapers used to be a mangrove buffering the worst of our national politics. A healthy local newspaper is less likely to go too far to one extreme or another, because its owners and editors live in the community and they know that for their local ecosystem to thrive, they need to preserve and nurture healthy interdependencies
  • in 2023, the loss of local newspapers accelerated to an average of 2.5 per week, “leaving more than 200 counties as ‘news deserts’ and meaning that more than half of all U.S. counties now have limited access to reliable local news and information.”
  • As in nature, it leaves the local ecosystem with fewer healthy interdependencies, making it more vulnerable to invasive species and disease — or, in society, diseased ideas.
  • “It’s not that the people in these communities have changed. It’s that if that’s what you are being fed, day in and day out, then you’re going to come to every conversation with a certain set of predispositions that are really hard to break through.”
  • we have gone from you’re not supposed to say “hell” on the radio to a nation that is now being permanently exposed to for-profit systems of political and psychological manipulation (and throw in Russia and China stoking the fires today as well), so people are not just divided, but being divided. Yes, keeping Americans morally outraged is big business at home now and war by other means by our geopolitical rivals.
  • More than ever, we are living in the “never-ending storm” that Seidman described to me back in 2016, in which moral distinctions, context and perspective — all the things that enable people and politicians to make good judgments — get blown away.
  • Blown away — that is exactly what happens to the plants, animals and people in an ecosystem that loses its mangroves.
  • a trend ailing America today: how much we’ve lost our moorings as a society.
  • civility itself also used to be a mangrove.
  • “Why the hell not?” Drummond asks. “You’re not supposed to say ‘hell,’ either,” the announcer says. You are not supposed to say “hell,” either. What a quaint thought. That is a polite exclamation point in today’s social media.
  • Another vital mangrove is religious observance. It has been declining for decades:
  • So now the most partisan national voices on Fox News, or MSNBC — or any number of polarizing influencers like Tucker Carlson — go straight from their national studios direct to small-town America, unbuffered by a local paper’s or radio station’s impulse to maintain a community where people feel some degree of connection and mutual respect
  • In a 2021 interview with my colleague Ezra Klein, Barack Obama observed that when he started running for the presidency in 2007, “it was still possible for me to go into a small town, in a disproportionately white conservative town in rural America, and get a fair hearing because people just hadn’t heard of me. … They didn’t have any preconceptions about what I believed. They could just take me at face value.”