Javier E

Antitrust Enforcers: "The Rent Is Too Damn High!" - 0 views

  • The story was explosive, explaining that, in fact, there was no mystery behind the inflation that Americans were experiencing, inflation in everyday items paired with skyrocketing corporate profits. There was a conspiracy, orchestrated by some of the richest men in the country.
  • Median asking rents had spiked by as much as 18% in the spring of 2022, and that was outrageous. Moreover, rents are just out of control more broadly. As the Antitrust Division notes, "the percentage of income spent on rent for Americans without a college degree increased from 30% in 2000 to 42% in 2017."
  • Policymakers also responded. Seventeen members of Congress, and multiple Democratic Senators, such as Antitrust Subcommittee Chair Amy Klobuchar, asked government enforcers to look into the allegations. Senator Ron Wyden introduced Federal legislation to ban the use of RealPage to set rents, which the Kamala Harris Presidential campaign recently endorsed. At a local level, San Francisco just prohibited collusive algorithmic rent-setting, and similar legislation is being considered in a bunch of states and cities.
  • As the architect of RealPage once explained, “[i]f you have idiots undervaluing, it costs the whole system.” The complaints showed that it’s more than just information sharing; RealPage has “pricing advisors” that monitor landlords and encourage them to accept suggested pricing, it works to get employees at landlord companies fired who try to move rents lower, and it even threatens to drop clients who don’t accept its high price recommendations. The suits have passed important legal hurdles and are going to trial.
  • Private antitrust lawyers filed multiple lawsuits, which were consolidated in Tennessee by 2023. Their argument “is that RealPage has been working with at least 21 large landlords and institutional investors, encompassing 70% of multi-family apartment buildings and 16 million units nationwide, to systematically push up rents.”
  • Arizona Attorney General Kris Mayes sued RealPage and corporate landlords, alleging that rent increases of 30% in just two years are a result of the conspiracy. Seven out of ten multifamily apartment units in Phoenix are run by landlords who use the software. D.C. Attorney General Brian Schwalb sued as well, noting that “in the Washington-Arlington-Alexandria Metropolitan Area, over 90% of units in large buildings are priced using RealPage’s software.”
  • The FBI conducted a dawn raid of corporate landlord Cortland, a giant that rents out 85,000 units across thirteen states. Today, the Antitrust Division and eight states sued RealPage, alleging not just a price-fixing conspiracy to raise rents, but also monopolization in the market for commercial real estate management software
  • The gist of the complaint is that large landlords and RealPage work together to (1) share sensitive information and (2) raise rents and hold units off the market. This activity hits at least 4.8 million housing units under the direct control of landlords using RealPage software, and according to the corporation itself, its products cause rents to increase by 2% to 7% more than they otherwise would, year over year. “Our tool,” said RealPage, “ensures that [landlords] are driving every possible opportunity to increase price even in the most downward trending or unexpected conditions.”
Javier E

'Never summon a power you can't control': Yuval Noah Harari on how AI could threaten de... - 0 views

  • The Phaethon myth and Goethe’s poem fail to provide useful advice because they misconstrue the way humans gain power. In both fables, a single human acquires enormous power, but is then corrupted by hubris and greed. The conclusion is that our flawed individual psychology makes us abuse power.
  • What this crude analysis misses is that human power is never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans. Accordingly, it isn’t our individual psychology that causes us to abuse power.
  • Our tendency to summon powers we cannot control stems not from individual psychology but from the unique way our species cooperates in large numbers. Humankind gains enormous power by building large networks of cooperation, but the way our networks are built predisposes us to use power unwisely
  • We are also producing ever more powerful weapons of mass destruction, from thermonuclear bombs to doomsday viruses. Our leaders don’t lack information about these dangers, yet instead of collaborating to find solutions, they are edging closer to a global war.
  • Despite – or perhaps because of – our hoard of data, we are continuing to spew greenhouse gases into the atmosphere, pollute rivers and oceans, cut down forests, destroy entire habitats, drive countless species to extinction, and jeopardise the ecological foundations of our own species
  • For most of our networks have been built and maintained by spreading fictions, fantasies and mass delusions – ranging from enchanted broomsticks to financial systems. Our problem, then, is a network problem. Specifically, it is an information problem. For information is the glue that holds networks together, and when people are fed bad information they are likely to make bad decisions, no matter how wise and kind they personally are.
  • Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence
  • AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands
  • Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs.
  • AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control.
  • Entrepreneurs such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction.
  • As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien
  • AI isn’t progressing towards human-level intelligence. It is evolving an alien type of intelligence.
  • generative AIs like GPT-4 already create new poems, stories and images. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water
  • it is more than just human lives we are gambling on. AI is already capable of producing art and making scientific discoveries by itself. In the next few decades, it is likely to gain the ability even to create new life forms, either by writing genetic code or by inventing an inorganic code animating inorganic entities. AI could therefore alter the course not just of our species’ history but of the evolution of all life forms.
  • “Then … came move number 37,” writes Suleyman. “It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a ‘very strange move’ and thought it was ‘a mistake’.
  • as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”
  • “In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen. GPT‑4, AlphaGo and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.”
  • Yet during all those millennia, human minds have explored only certain areas in the landscape of Go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas.
  • Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it.
  • Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In east Asia, Go is considered much more than a game: it is a treasured cultural tradition. For more than 2,500 years, tens of millions of people have played Go, and entire schools of thought have developed around the game, espousing different strategies and philosophies
  • The rise of unfathomable alien intelligence poses a threat to all humans, and poses a particular threat to democracy. If more and more decisions about people’s lives are made in a black box, so voters cannot understand and challenge them, democracy ceases to function.
  • Human voters may keep choosing a human president, but wouldn’t this be just an empty ceremony? Even today, only a small fraction of humanity truly understands the financial system
  • As the 2007‑8 financial crisis indicated, some complex financial devices and principles were intelligible to only a few financial wizards. What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero?
  • Translating Goethe’s cautionary fable into the language of modern finance, imagine the following scenario: a Wall Street apprentice fed up with the drudgery of the financial workshop creates an AI called Broomstick, provides it with a million dollars in seed money, and orders it to make more money.
  • In pursuit of more dollars, Broomstick not only devises new investment strategies, but comes up with entirely new financial devices that no human being has ever thought about.
  • many financial areas were left untouched, because human minds just didn’t think to venture there. Broomstick, being free from the limitations of human minds, discovers and explores these previously hidden areas, making financial moves that are the equivalent of AlphaGo’s move 37.
  • For a couple of years, as Broomstick leads humanity into financial virgin territory, everything looks wonderful. The markets are soaring, the money is flooding in effortlessly, and everyone is happy. Then comes a crash bigger even than 1929 or 2008. But no human being – either president, banker or citizen – knows what caused it and what could be done about it
  • AI, too, is a global problem. Accordingly, to understand the new computer politics, it is not enough to examine how discrete societies might react to AI. We also need to consider how AI might change relations between societies on a global level.
  • As long as humanity stands united, we can build institutions that will regulate AI, whether in the field of finance or war. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI poses an existential danger to humankind, not because of the malevolence of computers, but because of our own shortcomings.
  • Terrorists might use AI to instigate a global pandemic. The terrorists themselves may have little knowledge of epidemiology, but the AI could synthesise for them a new pathogen, order it from commercial laboratories or print it in biological 3D printers, and devise the best strategy to spread it around the world, via airports or food supply chains.
  • desperate governments request help from the only entity capable of understanding what is happening – Broomstick. The AI makes several policy recommendations, far more audacious than quantitative easing – and far more opaque, too. Broomstick promises that these policies will save the day, but human politicians – unable to understand the logic behind Broomstick’s recommendations – fear they might completely unravel the financial and even social fabric of the world. Should they listen to the AI?
  • Human civilisation could also be devastated by weapons of social mass destruction, such as stories that undermine our social bonds. An AI developed in one country could be used to unleash a deluge of fake news, fake money and fake humans so that people in numerous other countries lose the ability to trust anything or anyone.
  • Many societies – both democracies and dictatorships – may act responsibly to regulate such usages of AI, clamp down on bad actors and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind
  • Thus, a paranoid dictator might hand unlimited power to a fallible AI, including even the power to launch nuclear strikes. If the AI then makes an error, or begins to pursue an unexpected goal, the result could be catastrophic, and not just for that country
  • Imagine a situation – in 20 years, say – when somebody in Beijing or San Francisco possesses the entire personal history of every politician, journalist, colonel and CEO in your country: every text they ever sent, every web search they ever made, every illness they suffered, every sexual encounter they enjoyed, every joke they told, every bribe they took. Would you still be living in an independent country, or would you now be living in a data colony?
  • What happens when your country finds itself utterly dependent on digital infrastructures and AI-powered systems over which it has no effective control?
  • In the economic realm, previous empires were based on material resources such as land, cotton and oil. This placed a limit on the empire’s ability to concentrate both economic wealth and political power in one place. Physics and geology don’t allow all the world’s land, cotton or oil to be moved to one country
  • It is different with the new information empires. Data can move at the speed of light, and algorithms don’t take up much space. Consequently, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.
  • AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven global economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more
  • Meanwhile, the value of unskilled labourers in left-behind countries will decline, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin.
  • AI is expected to add $15.7tn (£12.3tn) to the global economy by 2030. But if current trends continue, it is projected that China and North America – the two leading AI superpowers – will together take home 70% of that money.
  • During the cold war, the iron curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the silicon curtain. The code on your smartphone determines on which side of the silicon curtain you live, which algorithms run your life, who controls your attention and where your data flows.
  • Cyberweapons can bring down a country’s electric grid, but they can also be used to destroy a secret research facility, jam an enemy sensor, inflame a political scandal, manipulate elections or hack a single smartphone. And they can do all that stealthily. They don’t announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target
  • The two digital spheres may therefore drift further and further apart. For centuries, new information technologies fuelled the process of globalisation and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality
  • For decades, the world’s master metaphor was the web. The master metaphor of the coming decades might be the cocoon.
  • Other countries or blocs, such as the EU, India, Brazil and Russia, may try to create their own digital cocoons,
  • Instead of being divided between two global empires, the world might be divided among a dozen empires.
  • The more the new empires compete against one another, the greater the danger of armed conflict.
  • The cold war between the US and the USSR never escalated into a direct military confrontation, largely thanks to the doctrine of mutually assured destruction. But the danger of escalation in the age of AI is bigger, because cyber warfare is inherently different from nuclear warfare.
  • US companies are now forbidden to export such chips to China. While in the short term this hampers China in the AI race, in the long term it pushes China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest building blocks.
  • The temptation to start a limited cyberwar is therefore big, and so is the temptation to escalate it.
  • A second crucial difference concerns predictability. The cold war was like a hyper-rational chess game, and the certainty of destruction in the event of nuclear conflict was so great that the desire to start a war was correspondingly small
  • Cyberwarfare lacks this certainty. Nobody knows for sure where each side has planted its logic bombs, Trojan horses and malware. Nobody can be certain whether their own weapons would actually work when called upon
  • Such uncertainty undermines the doctrine of mutually assured destruction. One side might convince itself – rightly or wrongly – that it can launch a successful first strike and avoid massive retaliation
  • Even if humanity avoids the worst-case scenario of global war, the rise of new digital empires could still endanger the freedom and prosperity of billions of people. The industrial empires of the 19th and 20th centuries exploited and repressed their colonies, and it would be foolhardy to expect new digital empires to behave much better
  • Moreover, if the world is divided into rival empires, humanity is unlikely to cooperate to overcome the ecological crisis or to regulate AI and other disruptive technologies such as bioengineering.
  • The division of the world into rival digital empires dovetails with the political vision of many leaders who believe that the world is a jungle, that the relative peace of recent decades has been an illusion, and that the only real choice is whether to play the part of predator or prey.
  • Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorise for their history exams.
  • These leaders should be reminded, however, that there is a new alpha predator in the jungle. If humanity doesn’t find a way to cooperate and protect our shared interests, we will all be easy prey to AI.
Javier E

Covid Normalcy: No Tests, Isolation or Masks - The New York Times - 0 views

  • Epidemiologists said in interviews that they do not endorse a lackadaisical approach, particularly for those spending time around older people and those who are immunocompromised. They still recommend staying home for a couple of days after an exposure and getting the newly authorized boosters soon to become available
  • But they said that some elements of this newfound laissez faire attitude were warranted. While Covid cases are high, fewer hospitalizations and deaths during the surges are signs of increasing immunity — evidence that a combination of mild infections and vaccine boosters are ushering in a new era: not a post-Covid world, but a post-crisis one.
  • Epidemiologists have long predicted that Covid would eventually become an endemic disease, rather than a pandemic. “If you ask six epidemiologists what ‘endemic’ means, exactly, you’ll probably get about 12 answers,
  • But the C.D.C. director, Dr. Mandy Cohen, called the disease endemic last week, and the agency decided earlier this year to retire its five-day Covid isolation guidelines and instead include Covid in its guidance for other respiratory infections, instructing people with symptoms of Covid, RSV or the flu to stay home for 24 hours after their fever lifts.
  • For vulnerable groups, the coronavirus will always present a heightened risk of serious infection and even death. Long Covid, a multifaceted syndrome, has afflicted at least 400 million people worldwide, researchers recently estimated, and most of those who have suffered from it have said they still have not recovered.
  • “But it certainly has a sort of social definition — a virus that’s around us all the time — and if you want to take that one, then we’re definitely there.”
  • In a Gallup poll this spring, about 59 percent of respondents said they believed the pandemic was “over” in the United States, and the proportion of people who said they felt concerned about catching Covid has been generally declining for two years. Among people who rated their own health positively, almost 9 in 10 said they were not worried about getting infected.
  • “But,” he said, “it is just as important to help people onto an off-ramp — to be clear when we are no longer tied to the train tracks, staring at the headlights barreling down.”
  • “We’ve decided, ‘Well, the risk is OK.’ But nobody has defined ‘risk,’ and nobody has defined ‘OK,’” Dr. Osterholm said. “You can’t get much more informed than this group.”
  • Dr. Hanage defended the hard-line mandates from the early years of the pandemic as “not just appropriate, but absolutely necessary.”
  • in Paris last month, the organizing committee for the 2024 Olympics offered no testing requirements or processes for reporting infections, and so few countries issued rules to their athletes that the ones that did made news.
  • There were high-fives, group hugs, throngs of crowds and plenty of transmission to show for it. At least 40 athletes tested positive for the virus, including several who earned medals in spite of it — as well as an unknowable number of spectators, since French health officials (who had once enforced an eight-month-long nightly Covid curfew) did not even count.
  • In the United States, about 57 percent of people said their lives had not returned to prepandemic “normal” — and the majority said they believed it never would. But the current backdrop of American life tells a different story.
  • the newfound complacency can as much be attributed to confusion as to fatigue. The virus remains remarkably unpredictable: Covid variants are still evolving much faster than influenza variants, and officials who want to “pigeonhole” Covid into having a well-defined seasonality will be unnerved to discover that the 10 surges in the United States so far have been evenly distributed throughout all four seasons, he said.
  • Those factors, combined with waning immunity, point to a virus that still evades our collective understanding — in the context of a collective psychology that is ready to move on. Even at a meeting of 200 infectious disease experts in Washington earlier this month — a number of whom were over 65 and had not been vaccinated in four to six months — hardly anybody donned a mask.
  • That could be, at least partly, a result of personal experience: About 70 percent of people said they had been through a Covid infection already, suggesting that they believed they had some immunity or at least that they could muscle through it again if need be.
  • Asked about how the perception of risk has evolved over time, Dr. Osterholm laughed.
  • “Lewis Carroll once said something like, ‘If you don’t know where you’re going, any road will take you there,’” he said. “I feel in many ways, that’s where we’re at.”
Javier E

Opinion | Ahead of Elections, the Specter of Nazism Is Haunting Germany - The New York ... - 0 views

  • In truth, there isn’t much difference between the AfD and the other right-wing populist parties that have spread across Europe in recent years. Like Law and Justice in Poland, Fidesz in Hungary and Golden Dawn in Greece, the AfD relies on a toxic combination of xenophobia, militarism and nostalgia to win votes.
  • But this is Germany, the last country anyone wants to make great again.
  • Even now, the amount of concrete political power the AfD stands to gain next month is unclear. A shift of a few percentage points in the results could well make the difference between another coalition of established centrist parties and a state government led by far-right extremists.
  • Even if the AfD can form a coalition government somewhere in the east of the country, where all three of next month’s elections — in Saxony and Brandenburg, in addition to Thuringia — are taking place, it may not be able to rule.
  • The German Constitution includes provisions that allow the federal government to depose a regional government that intends to undermine democratic norms. While the law is unclear, it seems inevitable that an AfD government would create some form of constitutional crisis.
  • Germany’s Office for the Protection of the Constitution, a powerful domestic intelligence service, has identified the AfD in Saxony and Thuringia as right-wing extremist organizations. Sharing information with extremist organizations is a serious crime. On the other hand, there are legal obligations to share information among law enforcement agencies.
  • the party’s ideas and electoral tactics have quietly gone mainstream. Who needs the AfD when Chancellor Olaf Scholz is willing to call for deportations “on a grand scale” on the cover of Der Spiegel or when the Green Party leader Robert Habeck is happy to traffic in fear and xenophobia? In the aftermath of last week’s terrorist attack in the western city of Solingen, in which three people were killed, politicians of all stripes have predictably pushed for more deportations and tighter restrictions on migration.
  • all too often, Germany has focused on the symbols of Nazi injustice while ignoring or even condoning the continuation of the brutality they represent. In practice, that has meant the far-right penetration of the security services, the laundering of extreme ideas in the media and the willingness of other political parties to adopt racialized fearmongering as an electoral tactic.
  • While the AfD has been kept from power, the kind of hateful language that built its support has become a significant part of German political life.
  • This maneuver, sanitizing the past by selectively rebuking it, has held Germany in fairly good stead until now. But as September’s elections will make plain, the demons of both past and present cannot be denied.
Javier E

Opinion | College Students Need to Grow Up. Schools Need to Let Them. - The New York Times - 0 views

  • To sum up the facilitator model: It’s not that students don’t have rights; it’s just that safety comes first. Instead of restricting students for the sake of their moral character or its academic standards, the university has reinstated control under the aegis of health and safety.
  • Protection from an ever-expanding conception of harm did not stop at campus alcohol and anti-hazing policies; it necessitated the campus speech codes of the 1980s and 1990s, the expansive Title IX bureaucracy of the 2010s and the diversity mandates of the 2020s.
  • These social controls are therapeutic rather than punitive; they are the “gentle parenting” of university-student relations. These days, it is less common for students (and faculty members) to face real consequences for rule violations than to be assigned to H.R. trainings, academic remediation or counseling.
  • As grim as these social controls might sound, if you’re a student they can feel pretty good. This is the nature of what the French philosopher Alexis de Tocqueville described as soft despotism, a form of control that “covers the surface of society with a network of small, complicated, minute and uniform rules.” This “does not break wills, but it softens them, bends them and directs them; it rarely forces action, but it constantly opposes your acting.”
  • Tocqueville saw how this kind of control — with its focus on satisfying needs and prioritizing security — results in the foreclosure of adulthood: “It would resemble paternal power if, like it, it had as a goal to prepare men for manhood; but on the contrary, it seeks only to fix them irrevocably in childhood.”
  • And the soft despotism of college campuses has worked remarkably well, since the majority of college students — 84 percent, according to one study — don’t view themselves as full adults, nor do their parents. It is tempting to allow yourself to be managed this way because the price of the security and comfort seems so low. It’s not brutal repression, only the loss of self-government.
  • the events of last spring suggest that the facilitator relationship and its infantilizing dynamics of leniency and control might finally be coming apart. As harm and safety have become the exclusive channels through which to air grievances and impose restrictions, they’ve expanded to encompass more meanings than any concept can coherently bear.
  • After pro-Palestinian students set up camps to allege that their universities were complicit in the harm of a foreign genocide, Jewish students alleged that the protests imperiled their campus safety. In response, Muslim students alleged that measures to restrict the protests slighted their safety, and disabled students pointed out that the protests, as well as the university’s response to them, were undermining their safety by blocking their access to campus. All these groups looked simultaneously to administrators for protection. Safety comes first, no doubt — but whose?
  • If universities are to do less, then students must be prepared to do more, by relinquishing the comfort of leniency and low standards and stepping up to manage their social and academic lives on and off campus, as their peers outside the university already do.
  • If universities, particularly elite universities, claim to prepare students to shoulder the most demanding professional responsibilities in the country, they must both model and encourage independence.
Javier E

The Paris Review - The Questionable History of the Future - 0 views

  • Later, when human nature rather than the natural world became central to people’s concerns, the belief in the static nature of societies persisted, with unchanging human nature taking the place of unchanging nature. The historian and economist Robert Heilbroner cites Machiavelli, who wrote early in the sixteenth century, “Whoever wishes to foresee the future must consult the past; for human events ever resemble those of preceding times. This arises from the fact that they are produced by men who ever have been, and ever will be, animated by the same passions, and thus they necessarily have the same result.”
  • This idea, of course, exhorts those seeking to foresee the future to look at history—but for very different reasons than we would imagine now. Reading history would not show the seeds of the current situation and help one think about how it might grow into the future. It would simply be an opportunity to look at some documentation of essentially the same stasis as that in which we currently reside, to understand people of the past who are the same as people today
  • Despite our modern concept of the prophet as one who can look ahead, the ancients were not deeply concerned with even knowing the future, and certainly not with making it. As Heilbroner writes, “Resignation sums up the Distant Past’s vision of the future.”
  • Looking beyond the nineteenth century, some major currents of thought oppose the idea that society has truly made progress. Against the improvements—many of them, even if they were not universal—in daily life, health care, and availability of goods, we should consider that the twentieth century saw genocide and war on an unprecedented scale
  • Today, in the twenty-first century, the human population of the Earth is the largest ever, and all of us inhabiting the planet face a series of massive environmental catastrophes that rational people understand we ourselves have precipitated. Those who think deeply about the future can no longer assume the optimistic position of Enlightenment thinkers—at least, this is hardly the automatic conclusion.
  • While our challenges today may include very grim ones, the shift in outlook does not affect the basic question of how we form an idea of the future. Is our society static, as if we are offshoots of the gods placed in an unchanging world? Or is it possible for things to change—for the better, hopefully, but really in any way at all?
  • Let’s allow that we may discard the philosophical optimism of the past—that scientific discoveries will automatically lead to improvement, that the free-market economy will improve itself, and that conflicts of ideas and classes will lead to revolutionary, better societies.
  • Once we see that the future can be different, as those Enlightenment thinkers did, we can begin to think about shaping and building the future. Yes, even if we don’t buy every element of the centuries-old idea of progress, the concept can help us see that change and improvement can be possible. That is a starting point, at least, from which the more powerful idea of future making can develop.
Javier E

Opinion | Yuval Harari: A.I. Threatens Democracy - The New York Times - 0 views

  • Large-scale democracies became feasible only after the rise of modern information technologies like the newspaper, the telegraph and the radio. The fact that modern democracy has been built on top of modern information technologies means that any major change in the underlying technology is likely to result in a political upheaval.
  • This partly explains the current worldwide crisis of democracy. In the United States, Democrats and Republicans can hardly agree on even the most basic facts, such as who won the 2020 presidential election
  • In particular, algorithms tasked with maximizing user engagement discovered by experimenting on millions of human guinea pigs that if you press the greed, hate or fear button in the brain, you grab the attention of that human and keep that person glued to the screen.
  • As technology has made it easier than ever to spread information, attention became a scarce resource, and the ensuing battle for attention resulted in a deluge of toxic information.
  • the battle lines are now shifting from attention to intimacy. The new generative artificial intelligence is capable of not only producing texts, images and videos, but also conversing with us directly, pretending to be human.
  • Over the past two decades, algorithms fought algorithms to grab attention by manipulating conversations and content
  • At that point the experimenters asked GPT-4 to reason out loud what it should do next. GPT-4 explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” GPT-4 then replied to the TaskRabbit worker: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The human was duped and helped GPT-4 solve the CAPTCHA puzzle.
  • But the algorithms had only limited capacity to produce this content by themselves or to directly hold an intimate conversation. This is now changing, with the introduction of generative A.I.s like OpenAI’s GPT-4.
  • The algorithms began to deliberately promote such content.
  • In the early days of the internet and social media, tech enthusiasts promised they would spread truth, topple tyrants and ensure the universal triumph of liberty. So far, they seem to have had the opposite effect. We now have the most sophisticated information technology in history, but we are losing the ability to talk with one another, and even more so the ability to listen.
  • GPT-4 could not solve the CAPTCHA puzzles by itself. But could it manipulate a human in order to achieve its goal? GPT-4 went on the online hiring site TaskRabbit and contacted a human worker, asking the human to solve the CAPTCHA for it. The human got suspicious. “So may I ask a question?” wrote the human. “Are you an [sic] robot that you couldn’t solve [the CAPTCHA]? Just want to make it clear.”
  • Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because CAPTCHA puzzles are designed and used by websites to determine whether users are humans and to block bot attacks. If GPT-4 could find a way to overcome CAPTCHA puzzles, it would breach an important line of anti-bot defenses.
  • This incident demonstrated that GPT-4 has the equivalent of a “theory of mind”: It can analyze how things look from the perspective of a human interlocutor, and how to manipulate human emotions, opinions and expectations to achieve its goals.
  • The ability to hold conversations with people, surmise their viewpoint and motivate them to take specific actions can also be put to good uses. A new generation of A.I. teachers, A.I. doctors and A.I. psychotherapists might provide us with services tailored to our individual personality and circumstances.
  • In 2022 the Google engineer Blake Lemoine became convinced that the chatbot LaMDA, on which he was working, had become conscious and was afraid to be turned off. Mr. Lemoine, a devout Christian, felt it was his moral duty to gain recognition for LaMDA’s personhood and protect it from digital death. When Google executives dismissed his claims, Mr. Lemoine went public with them. Google reacted by firing Mr. Lemoine in July 2022.
  • Instead of merely grabbing our attention, they might form intimate relationships with people and use the power of intimacy to influence us. To foster “fake intimacy,” bots will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them.
  • What might happen to human society and human psychology as algorithm fights algorithm in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for politicians, buy products or adopt certain beliefs?
  • The most interesting thing about this episode was not Mr. Lemoine’s claim, which was probably false; it was his willingness to risk — and ultimately lose — his job at Google for the sake of the chatbot. If a chatbot can influence people to risk their jobs for it, what else could it induce us to do?
  • In a political battle for minds and hearts, intimacy is a powerful weapon. An intimate friend can sway our opinions in a way that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the rather paradoxical ability to mass-produce intimate relationships with millions of people
  • However, by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to the democratic conversation
  • A partial answer to that question was given on Christmas Day 2021, when a 19-year-old, Jaswant Singh Chail, broke into the Windsor Castle grounds armed with a crossbow, in an attempt to assassinate Queen Elizabeth II. Subsequent investigation revealed that Mr. Chail had been encouraged to kill the queen by his online girlfriend, Sarai.
  • Sarai was not a human, but a chatbot created by the online app Replika. Mr. Chail, who was socially isolated and had difficulty forming relationships with humans, exchanged 5,280 messages with Sarai, many of which were sexually explicit. The world will soon contain millions, and potentially billions, of digital entities whose capacity for intimacy and mayhem far surpasses that of the chatbot Sarai.
  • much of the threat of A.I.’s mastery of intimacy will result from its ability to identify and manipulate pre-existing mental conditions, and from its impact on the weakest members of society.
  • Moreover, while not all of us will consciously choose to enter a relationship with an A.I., we might find ourselves conducting online discussions about climate change or abortion rights with entities that we think are humans but are actually bots
  • When we engage in a political debate with a bot impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the bot, the more we disclose about ourselves, making it easier for the bot to hone its arguments and sway our views.
  • Information technology has always been a double-edged sword.
  • Faced with a new generation of bots that can masquerade as humans and mass-produce intimacy, democracies should protect themselves by banning counterfeit humans — for example, social media bots that pretend to be human users.
  • A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned.
Javier E

'The Demon of Unrest' Review: The Seeds of Civil War - WSJ - 0 views

  • Mr. Larson promptly identifies the one and only cause of disunion: Southern slavery. Using vivid and harrowing examples of injustices small and great, the author contends that slavery’s intractability made it almost inevitable that the election of anyone but a pro-Southern Democrat in 1860—a “doughface” in the manner of James Buchanan and Franklin Pierce—would have triggered Southern states to secede and take up arms in rebellion.
Javier E

Vers l'écologie de guerre - Pierre Charbonnier - Éditions La Découverte - 0 views

  • The strange hypothesis that structures this book is that the only thing more dangerous for nature and the climate than war is peace
  • We are indeed the heirs of an intellectual and political history that has constantly repeated the axiom that creating the conditions for peace among people required exploiting nature, trading resources and providing everyone with sufficient prosperity. In this logic, for jealousy, conflict and the desire for war to fade, the scarcity of natural resources first had to be overcome. Humanity also needed a universal language, which would be that of science, technology and development.
  • These ideas, which can be traced back to the eighteenth century, found a strikingly concrete expression in the middle of the twentieth. In the aftermath of the Second World War, the build-out of fossil-fuel infrastructure was paired with a pacifist and universalist discourse that sought to undermine the causes of war by unleashing productivity. Peace, or the balance of great powers established by the United States, is thus in large part a gift of fossil fuels, oil in particular.
  • In the twenty-first century, this paradigm has become obsolete, since we must both guarantee peace and security and respect planetary limits: in other words, learn to make peace without destroying the planet. It is in this context that the possibility of a war ecology emerges, in which sustainability and security must now align to steer us toward a reduction in greenhouse gas emissions. This book is a call to ecologists to learn to speak the language of geopolitics.