
History Readings: group items matching "Gold" in title, tags, annotations or URL


Javier E

The sinister spy who made our world a safer place

  • Like Oppenheimer, Fuchs is an ambiguous and polarising character. A congressional hearing concluded he had “influenced the safety of more people and accomplished greater damage than any other spy in the history of nations”
  • But by helping the USSR to build the bomb, Fuchs also helped to forge the nuclear balance of power, the precarious equilibrium of mutually assured destruction under which we all still live.
  • Oppenheimer changed the world with science; and Fuchs changed it with espionage. It is impossible to understand the significance of one without the other.
  • In March 1940 two more exiled German scientists working at Birmingham University, Otto Frisch and Rudolf Peierls, outlined the first practical exposition of how to build a nuclear weapon, a device “capable of unleashing an explosion at a temperature comparable to that of the interior of the sun”. Peierls recruited Fuchs to join him in the top-secret project to develop a bomb, codenamed “Tube Alloys”.
  • Fuchs arrived as a refugee in Britain in 1933 and, like many scientists escaping Nazism, he was warmly welcomed by the academic community. At Edinburgh University he studied under the great physicist Max Born, another German exile.
  • Fuchs was extremely clever and very odd: chain-smoking, obsessively punctual, myopic, gangling and solitary, the “perfect specimen of an abstracted professor”, in the words of one colleague. He kept his political beliefs entirely concealed.
  • The son of a Lutheran pastor, Fuchs came of age in the economic chaos and violent political conflict of Weimar Germany. Like many young Germans, he embraced communism, the creed from which he never wavered. He was studying physics at Kiel University when his father was arrested for speaking out against Hitler. His mother killed herself by drinking hydrochloric acid. Returning from an anti-Nazi rally, he was beaten up and thrown into a river by fascist brownshirts. The German Communist Party told him to flee.
  • When Churchill and Roosevelt agreed to collaborate on building the bomb (while excluding the Soviet Union), “Tube Alloys” was absorbed into the far more ambitious Manhattan Project. Fuchs was one of 17 British-based scientists to join Oppenheimer at Los Alamos.
  • “I never saw myself as a spy,” Fuchs later insisted. “I just couldn’t understand why the West was not prepared to share the atom bomb with Moscow. I was of the opinion that something with that immense destructive potential should be made available to the big powers equally.”
  • In June 1945 Gold was waiting on a bench in Santa Fe when Fuchs drove up in his dilapidated car and handed over what his latest biographer calls “a virtual blueprint for the Trinity device”, the codename for the first test of a nuclear bomb a month later. When the Soviet Union carried out its own test in Kazakhstan in 1949, the CIA was astonished, believing Moscow’s atomic weapons programme was years behind the West. America’s nuclear superiority evaporated; the atomic arms race was on.
  • Fuchs was a naive narcissist and a traitor to the country that gave him shelter. He was entirely obedient to his KGB masters, who justified his actions with hindsight. But without him, there might have been only one superpower. Some in the Truman administration argued that the bomb should be used on the Soviet Union before it developed its own. Fuchs and the other atomic spies enabled Moscow to keep nuclear pace with the West, maintaining a fragile peace.
  • As the father of the atomic bomb, Oppenheimer made the world markedly less secure. Fuchs, paradoxically, made it safer.
Javier E

India's 'temple of wealth' reveals $30bn riches | World | The Times

  • For the first time that has become clear: its assets include more than ten tonnes of gold, 2.5 tonnes of jewellery, bank deposits of $19.44 billion and 960 properties across India. Its total holdings were valued at $30 billion, putting it on a pedestal with many banks and conglomerates such as Nestlé India and Coal India.
  • Dedicated to the god Venkateswara and built in about AD300, the temple is run by a trust established by the British in 1933. Many Hindus feel that they must visit at least once to say they have lived a fulfilling spiritual life.
  • It is also known as the “rich man’s temple” because it is popular with industrialists and tycoons. Actors seek blessings for their new film and pray for a blockbuster
  • Tirupati was so popular with the elite that the trust which runs the temple made separate arrangements for wealthy worshippers, ushering them in so they could jump the queue
Javier E

'We will coup whoever we want!': the unbearable hubris of Musk and the billionaire tech bros | Society books | The Guardian

  • there’s something different about today’s tech titans, as evidenced by a rash of recent books. Reading about their apocalypse bunkers, vampiric longevity strategies, outlandish social media pronouncements, private space programmes and virtual world-building ambitions, it’s hard to remember they’re not actors in a reality series or characters from a new Avengers movie.
  • Unlike their forebears, contemporary billionaires do not hope to build the biggest house in town, but the biggest colony on the moon. However avaricious, the titans of past gilded eras still saw themselves as human members of civil society.
  • The ChatGPT impresario Sam Altman, whose board of directors sacked him as CEO before he made a dramatic comeback this week, wants to upload his consciousness to the cloud (if the AIs he helped build and now fears will permit him).
  • Contemporary billionaires appear to understand civics and civilians as impediments to their progress, necessary victims of the externalities of their companies’ growth, sad artefacts of the civilisation they will leave behind in their inexorable colonisation of the next dimension
  • Zuckerberg had to go all the way back to Augustus Caesar for a role model, and his admiration for the emperor borders on obsession. He models his haircut on Augustus; his wife joked that three people went on their honeymoon to Rome: Mark, Augustus and herself; he named his second daughter August; and he used to end Facebook meetings by proclaiming “Domination!”
  • as chronicled by Peter Turchin in End Times, his book on elite excess and what it portends, today there are far more centimillionaires and billionaires than there were in the gilded age, and they have collectively accumulated a much larger proportion of the world’s wealth
  • In 1983, there were 66,000 households worth at least $10m in the US. By 2019, that number had increased in terms adjusted for inflation to 693,000
  • Back in the industrial age, the rate of total elite wealth accumulation was capped by the limits of the material world. They could only build so many railroads, steel mills and oilwells at a time. Virtual commodities such as likes, views, crypto and derivatives can be replicated exponentially.
  • Digital businesses depend on mineral slavery in Africa, dump toxic waste in China, facilitate the undermining of democracy across the globe and spread destabilising disinformation for profit – all from the sociopathic remove afforded by remote administration.
  • on an individual basis today’s tech billionaires are not any wealthier than their early 20th-century counterparts. Adjusted for inflation, John Rockefeller’s fortune of $336bn and Andrew Carnegie’s $309bn exceed Musk’s $231bn, Bezos’s $165bn and Gates’s $114bn.
  • Zuckerberg told the New Yorker “through a really harsh approach, he established two hundred years of world peace”, finally acknowledging “that didn’t come for free, and he had to do certain things”. It’s that sort of top-down thinking that led Zuckerberg to not only establish an independent oversight board at Facebook, dubbed the “Supreme Court”, but to suggest that it would one day expand its scope to include companies across the industry.
  • Any new business idea, Thiel says, should be an order of magnitude better than what’s already out there. Don’t compare yourself to everyone else; instead operate one level above the competing masses
  • Today’s billionaire philanthropists, frequently espousing the philosophy of “effective altruism”, donate to their own organisations, often in the form of their own stock, and make their own decisions about how the money is spent because they are, after all, experts in everything
  • Their words and actions suggest an approach to life, technology and business that I have come to call “The Mindset” – a belief that with enough money, one can escape the harms created by earning money in that way. It’s a belief that with enough genius and technology, they can rise above the plane of mere mortals and exist on an entirely different level, or planet, altogether.
  • By combining a distorted interpretation of Nietzsche with a pretty accurate one of Ayn Rand, they end up with a belief that while “God is dead”, the übermensch of the future can use pure reason to rise above traditional religious values and remake the world “in his own interests”
  • Nietzsche’s language, particularly out of context, provides tech übermensch wannabes with justification for assuming superhuman authority. In his book Zero to One, Thiel directly quotes Nietzsche to argue for the supremacy of the individual: “madness is rare in individuals, but in groups, parties, nations, and ages it is the rule”.
  • In Thiel’s words: “I no longer believe that freedom and democracy are compatible.”
  • This distorted image of the übermensch as a godlike creator, pushing confidently towards his clear vision of how things should be, persists as an essential component of The Mindset
  • In response to the accusation that the US government organised a coup against Evo Morales in Bolivia in order for Tesla to secure lithium there, Musk tweeted: “We will coup whoever we want! Deal with it.”
  • For Thiel, this requires being what he calls a “definite optimist”. Most entrepreneurs are too process-oriented, making incremental decisions based on how the market responds. They should instead be like Steve Jobs or Elon Musk, pressing on with their singular vision no matter what. The definite optimist doesn’t take feedback into account, but ploughs forward with his new design for a better world.
  • This is not capitalism, as Yanis Varoufakis explains in his new book Technofeudalism. Capitalists sought to extract value from workers by disconnecting them from the value they created, but they still made stuff. Feudalists seek an entirely passive income by “going meta” on business itself. They are rent-seekers, whose aim is to own the very platform on which other people do the work.
  • The antics of the tech feudalists make for better science fiction stories than they chart legitimate paths to sustainable futures.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
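The prediction-driven learning described above can be caricatured with the simplest possible next-word model: a bigram counter that predicts the most frequent continuation seen in training. This is a deliberately minimal sketch of the idea (a counting model, not a neural network); the corpus and names are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for word, nxt in zip(words, words[1:]):
        counts[word][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the continuation seen most often in training (None if unseen)."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

As the general rule in the excerpt suggests, feeding this toy model more sentences sharpens its frequency estimates and hence its predictions, which is the same dynamic, vastly scaled up, that drives a large language model's training.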
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
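The parallelism the transformer paper introduced comes from its central operation, scaled dot-product attention, in which every position attends to every other position in one batched computation. A minimal NumPy sketch of that single operation (not the full architecture, and with illustrative toy dimensions):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Mix each position's value vector according to query-key similarity.
    The all-pairs Q @ K.T product is what makes training parallelize well."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # similarity of every query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ V                                    # weighted mix of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 positions, 8-dimensional queries
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one mixed vector per position
```

Because all four positions are processed in a single matrix multiplication rather than one step at a time, huge amounts of text can be absorbed in parallel, which is the property that made internet-scale training practical.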
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
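The A/B testing mentioned above reduces to a statistical comparison of engagement rates between two randomly assigned groups of users. A minimal sketch using a standard two-proportion z-test; all counts here are hypothetical, not data from Luka or any real app.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two engagement rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                    # rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))      # standard error under H0
    return (p_b - p_a) / se

# Hypothetical counts: variant B's wording draws more replies per message shown.
z = two_proportion_z(conv_a=480, n_a=5000, conv_b=560, n_b=5000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level
```

Run continuously over many candidate responses, this kind of test is the mechanism by which an engagement metric, rather than a designer, ends up steering the product.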
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
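Li's method, training a small supervised "probe" on a network's hidden activations to see what they encode, can be sketched generically. Here the activations are synthetic (random vectors plus a direction that linearly encodes a binary state, standing in for "is this board square occupied?"), so the data are invented for illustration, not taken from an Othello model; the probe itself is ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 500, 32
state = rng.integers(0, 2, size=n)       # hidden ground-truth labels the probe should recover
direction = rng.normal(size=d)           # the direction along which the "model" encodes them
acts = rng.normal(size=(n, d)) + np.outer(state, direction)  # toy hidden activations

# Fit a linear probe by least squares, then classify by thresholding its output.
w, *_ = np.linalg.lstsq(acts, state, rcond=None)
preds = (acts @ w) > 0.5
accuracy = (preds == state.astype(bool)).mean()
print(round(accuracy, 2))
```

If a simple linear readout recovers the state this well, the information was sitting in the activations all along; that is the sense in which Li could say the model had "formed a geometric model of the board."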
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
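The memorise-then-generalise shift can be caricatured with two "models" of addition: a lookup table that is perfect on its training problems and useless on anything new, and a fitted linear rule that generalises. A toy illustration of the distinction, not the transformer experiment itself:

```python
import numpy as np

train = [(a, b, a + b) for a in range(5) for b in range(5)]

# Strategy 1: pure memorisation -- exact on seen pairs, silent on unseen ones.
table = {(a, b): s for a, b, s in train}
def memorizer(a, b):
    return table.get((a, b))  # None for any problem not seen in training

# Strategy 2: learn the rule -- fit s = w1*a + w2*b by least squares.
X = np.array([(a, b) for a, b, _ in train], dtype=float)
y = np.array([s for _, _, s in train], dtype=float)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
def learner(a, b):
    return round(w[0] * a + w[1] * b)

print(memorizer(2, 2), learner(2, 2))    # both answer a seen problem correctly
print(memorizer(40, 2), learner(40, 2))  # only the learned rule handles unseen inputs
```

Both strategies score identically on the training set, which is why a lazy predictor can coast on memorisation for a while; only when memorisation stops paying off does the cheaper-in-the-long-run rule get learned.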
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
criscimagnael

Lavish Projects and Meager Lives: The Two Faces of a Ruined Sri Lanka - The New York Times - 0 views

  • The international airport, built a decade ago in the name of Sri Lanka’s ruling Rajapaksa family, is devoid of passenger flights, its staff lingering idly in the cafe. The cricket stadium, also constructed on the family’s orders, has had only a few international matches and is so remote that arriving teams face the risk of wildlife attacks.
  • As Sri Lanka grapples with its worst ever economic crisis, with people waiting hours for fuel and cutting back on food, nowhere is the reckless spending that helped wreck the country more visible than in Hambantota, the Rajapaksa family’s home district in the south.
  • This enormous waste — more than $1 billion spent on the port, $250 million on the airport, nearly $200 million on underused roads and bridges, and millions more (figures vary) on the cricket stadium — made Hambantota a throne to the vanity of a political dynasty that increasingly ran the country as a family business.
  • With Mahinda Rajapaksa, the president, then at the peak of his powers, he did what many nationalist strongmen do: erect tributes to himself.
  • That’s now all gone. Sri Lanka is an international basket case whose foreign reserves — which once stood at over $6 billion under the Rajapaksas — have dwindled to almost nothing.
  • The collapse is partly a result of the loss of tourism during the pandemic, a problem made worse as war has kept away many of the Russians and Ukrainians who used to visit in large numbers. But the family’s economic mismanagement and denial of festering problems have also contributed mightily.
  • With food prices rising, electricity often cut and lifesaving medicines scarce, protesters have pushed Mr. Rajapaksa, 76, out of his latest position — prime minister — and are demanding that his brother Gotabaya, 72, give up the presidency.
  • Just outside the private residence of Mr. Rajapaksa, the Carlton House, they tied ropes to a gold-colored statue of his father, D.A. Rajapaksa. When they couldn’t drag it down, they dug under its feet until it collapsed. And around the corner from the family’s sprawling ancestral estate, they torched the museum memorializing the resting place of the patriarch and his wife.
  • “Whatever the politics, they shouldn’t have done this to their parents’ resting place.”
  • Before the economy crashed, she would sell 30 to 40 pots a day. That number has since dropped to about 20, as people have saved for other necessities. Most days in recent weeks, she has come back with half of her stack of 15 unsold.
  • During her grocery trips, she can buy only half of what she did in the past.
  • She was clear about who was to blame: the Rajapaksas.
  • “If you are investing in debt, you should really be looking at return — and quick return. You can’t do all your long-term, hard infrastructure projects on debt,” said Eran Wickramaratne, a former banker turned state minister of finance. “We completely overleveraged ourselves, and the returns are not there.”
  • With their power consolidated, they announced broad tax cuts — rapidly undoing the work of aligning Sri Lanka’s spending more with its means — and made a disastrous decision to ban chemical fertilizers in hopes of turning the country toward organic farming.
  • At the airport, which for a time was used to store grain, the only outsiders are the crews of occasional cargo flights, or groups of curious villagers on tours to see the complex. The cricket stadium, where the scoreboard clock is stuck in some afternoon past, was at one point rented out as a wedding venue to produce some revenue. It has a capacity of 35,000, more than the town of Hambantota’s entire population, 25,000.
  • “But these megaprojects were meaningless,” he said. “This region still has elephants crossing the roads, and people are still cultivating paddy as a livelihood. So these projects were unnecessary.”
  • “They did a lot — they won the war, they built roads,” Ms. Niroshani said.
  • But what about economic hardship, Ms. Wijeyawickrama asked.
  • “In a few days it may be that we have nothing to eat.”
criscimagnael

In Mali, a Massacre With a Russian Footprint - The New York Times - 0 views

  • Suddenly, five low-flying helicopters thrummed overhead, some firing weapons and drawing gunfire in return. Villagers ran for their lives. But there was nowhere to escape: The helicopters were dropping soldiers on the town’s outskirts to block all the exits.
  • In Moura, the security forces “may have also raped, looted, arrested and arbitrarily detained many civilians,” according to the mission, which is preparing a report on the incident.
  • However, using satellite imagery, The New York Times identified the sites of at least two mass graves, which matched the witnesses’ descriptions of where captives were executed and buried.
  • The Wagner Group refers to a network of operatives and companies that serve as what the U.S. Treasury Department has called a “proxy force” of Russia’s ministry of defense. Analysts describe the group as an extension of Russia’s foreign policy through deniable activities, including the use of mercenaries and disinformation campaigns.
  • They ally with embattled political and military leaders who can pay for their services in cash, or with lucrative mining concessions for precious minerals like gold, diamonds and uranium, according to interviews conducted in recent weeks with dozens of analysts, diplomats and military officials in Africa and Western countries.
  • However, Russian foreign minister Sergey V. Lavrov said in May on Italian television that Wagner was present in Mali “on a commercial basis,” providing “security services.”
  • “From Monday to Thursday, the killings didn’t stop,” said Hamadoun, a tailor working near the market when the helicopters arrived. “The whites and the Malians killed together.”
  • The death toll in Moura is the highest in a growing list of human rights abuses committed by the Malian military, which diplomats and Malian human rights observers say have increased since the military began conducting joint operations with the Wagner Group in January.
  • nearly 500 civilians have been killed in the joint operations,
  • Some abuses could amount to crimes against humanity, the U.N. said in one report.
  • The foreigners, according to diplomats, officials and human rights groups, belonged to the Russian paramilitary group known as Wagner.
  • “Wherever there are Russian contractors, real or fictional, they never violate human rights.”
  • “They have no incentive to end the conflict, because they are financially motivated,”
  • “They are the government in the region,”
  • The mass executions began on the Monday, and the victims were both civilians and unarmed militants, witnesses said. Soldiers picked out up to 15 people at a time, inspected their fingers and shoulders for the imprint left by regular use of weapons, and executed men yards away from captives.
  • “cadavers everywhere.”
  • The soldiers and their Russian allies left on Thursday, after killing the last six prisoners in retaliation for four who had escaped. A Malian soldier told a group of captives that the soldiers had killed “all the bad people,” said Hamadou.
  • The soldier apologized for the good people who “died by accident.”
  • Investigators from the U.N. peacekeeping mission in Mali have so far been denied access to Moura. Russia and China blocked a vote at the U.N. Security Council on an independent investigation.
  • Some Malians in these regions are losing trust in the government.
  • Soon after, the militants returned and kidnapped the deputy mayor. He hasn’t been heard from since.
Javier E

Revealed: Credit Suisse leak unmasks criminals, fraudsters and corrupt politicians | Credit Suisse | The Guardian - 0 views

  • The huge trove of banking data was leaked by an anonymous whistleblower to the German newspaper Süddeutsche Zeitung. “I believe that Swiss banking secrecy laws are immoral,” the whistleblower source said in a statement. “The pretext of protecting financial privacy is merely a fig leaf covering the shameful role of Swiss banks as collaborators of tax evaders.”
  • Swiss financial institutions manage about 7.9tn CHF (£6.3tn) in assets, nearly half of which belongs to foreign clients.
  • It identifies the convicts and money launderers who were able to open bank accounts, or keep them open for years after their crimes emerged. And it reveals how Switzerland’s famed banking secrecy laws helped facilitate the looting of countries in the developing world.
  • ...25 more annotations...
  • his case is one of dozens discovered by reporters appearing to show Credit Suisse opened or maintained accounts for clients who had serious convictions that might be expected to show up in due diligence checks. There are other instances in which Credit Suisse may have taken quick action after red flags emerged, but the case nonetheless shows that dubious clients have been attracted to the bank.
  • Like every other bank in the world, Credit Suisse professes to have stringent control mechanisms to carry out extensive due diligence on its customers to “ensure that the highest standards of conduct are upheld”. In banking parlance, such controls are called know-your-client or KYC checks.
  • A 2017 leaked report commissioned by Switzerland’s financial regulator shed some light on the bank’s internal procedures at that time. Clients would face intensified scrutiny when flagged as a politically exposed person from a high-risk country, or a person involved in a high-risk activity such as gambling, weapons trading, financial services or mining, the report said.
  • Such controls might be expected to prevent a bank from opening accounts for clients such as Rodoljub Radulović, a Serbian securities fraudster indicted in 2001 by the US Securities and Exchange Commission. However, the leaked data identifies him as the co-signatory of two Credit Suisse company accounts. The first was opened in 2005, the year after the SEC had secured a default judgment against Radulović for running a pump-and-dump scheme.
  • One of Radulović’s company accounts held 3.4m CHF (£2.2m) before it was closed in 2010. He was recently given a 10-year prison sentence by a court in Belgrade for his role trafficking cocaine from South America for the organised crime boss Darko Šarić.
  • Due diligence is not only for new clients. Banks are required to continually reassess existing customers. The 2017 report said Credit Suisse screened customers at least every three years and as often as once a year for the riskiest clients. Lawyers for Credit Suisse told the Guardian these periodic reviews were introduced “more than 15 years ago”, meaning it was continually running due diligence on existing clients from 2007.
  • The bank might, therefore, have been expected to have discovered that its German client Eduard Seidel was convicted of bribery in 2008. Seidel was an employee of Siemens. As the multinational’s lead in Nigeria, he oversaw a campaign of industrial-scale bribery to secure lucrative contracts for his employer by funnelling cash to corrupt Nigerian politicians.
  • After German authorities raided the Munich headquarters of Siemens in 2006, Seidel immediately confessed his role in the bribery scheme, though he said he had never stolen from the company or appropriated its slush funds. His involvement in the corruption led to his name being entered into the Thomson Reuters World-Check database in 2007.
  • However, the leaked Credit Suisse data shows his accounts were left open until at least well into the last decade. At one point after he left Siemens, one account was worth 54m CHF (£24m). Seidel’s lawyer declined to say whether the accounts were his. He said his client had addressed all outstanding matters relating to his bribery offences and wished to move on with his life.
  • The lawyer did not respond to repeated invitations to explain the source of the 54m CHF. Siemens said it did not know about the money and that its review of its own cashflows shed no light on the account.
  • One client, Stefan Sederholm, a Swedish computer technician who opened an account with Credit Suisse in 2008, was able to keep it open for two-and-a-half years after his widely reported conviction for human trafficking in the Philippines, for which he was given a life sentence.
  • A representative for Sederholm said Credit Suisse never froze his accounts and did not close them until 2013 when he was unable to provide due diligence material. Asked why Sederholm needed a Swiss account, they said that he was living in Thailand when it was opened, adding: “Can you please tell me if you would prefer to put your money in a Thai or Swiss bank?”
  • Swiss banks have cultivated their trusted reputation since as far back as 1713, when the Great Council of Geneva prohibited bankers from revealing details about the fortunes being deposited by European aristocrats. Switzerland soon became a tax haven for many of the world’s elites and its bankers nurtured a “duty of absolute silence” about their clients’ affairs.
  • The custom was enshrined in statute in 1934 with the introduction of Switzerland’s banking secrecy law, which criminalised the disclosure of client banking information to foreign authorities. Within decades, wealthy clients from all over the world were flocking to Swiss banks. Sometimes, that meant clients with something to hide.
  • One former Credit Suisse employee at the time alleges there was a deeply ingrained culture in Swiss banking of looking the other way when it came to problematic clients. “The bank’s compliance departments [were] masters of plausible deniability,” they told a reporter from the Organized Crime and Corruption Reporting Project, one of the coordinators of the Suisse secrets project. “Never write anything down that could expose an account that is non-compliant and never ask a question you do not want to know the answer to.”
  • The 2000s was also a decade in which foreign regulators and tax authorities became increasingly frustrated at their inability to penetrate the Swiss financial system. That changed in 2007, when the UBS banker Bradley Birkenfeld voluntarily approached US authorities with information about how the bank was helping thousands of wealthy Americans evade tax with secret accounts.
  • Birkenfeld was viewed as a traitor in Switzerland, where banking whistleblowers are often held in contempt. However, a wide-ranging US Senate investigation later uncovered the aggressive tactics used by UBS and Credit Suisse, the latter of which was found to have sent bankers to high-end events to recruit clients, courted a potential customer with free gold, and in one case even delivered sensitive bank statements hidden in the pages of a Sports Illustrated magazine.
  • The revelations sent shock waves through Switzerland’s financial sector and enraged the US, which pressured Switzerland into unilaterally disclosing which of its taxpayers had secret Swiss accounts from 2014. That same year, Switzerland reluctantly signed up to the international convention on the automatic exchange of banking information.
  • By adopting the so-called common reporting standard (CRS) for sharing tax data, Switzerland in effect agreed that its banks would in the future exchange information about their clients with tax authorities in foreign countries. They started doing so in 2018.
  • Membership of the global exchange system is often cited by Switzerland’s banking industry as a turning point. “There is no longer Swiss bank client confidentiality for clients abroad,” the Swiss Bankers Association told the Guardian. “We are transparent, there is nothing to hide in Switzerland.”
  • Switzerland’s almost 90-year-old banking secrecy law, however, remains in force – and was recently broadened. The Tax Justice Network estimates that countries around the world collectively lose $21bn (£15.4bn) each year in tax revenues because of Switzerland. Many of those countries will be poorer nations that have not signed up to the CRS data exchange.
  • More than 90 countries, most of which are in the developing world, remain in the dark when their wealthy taxpayers hide their money in Swiss accounts.
  • This inequity in the system was cited by the whistleblower behind the leaked data, who said the CRS system “imposes a disproportionate financial and infrastructural burden on developing nations, perpetuating their exclusion from the system in the foreseeable future”.
  • “This situation enables corruption and starves developing countries of much-needed tax revenue. These countries are the ones that therefore suffer most from Switzerland’s reverse-Robin-Hood stunt,” they said.
  • “I am aware that having an offshore Swiss bank account does not necessarily imply tax evasion or any other financial crime,” they said. “However, it is likely that a significant number of these accounts were opened with the sole purpose of hiding their holder’s wealth from fiscal institutions and/or avoiding the payment of taxes on capital gains.”
Javier E

Opinion | The Imminent Danger of A.I. Is One We're Not Talking About - The New York Times - 1 views

  • a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
  • “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.
  • Who will these machines serve?
  • ...22 more annotations...
  • The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?
  • Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.
  • the dark secret of the digital advertising industry is that the ads mostly don’t work
  • These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”
  • So why are they ending up in search first? Because there are gobs of money to be made in search
  • That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment
  • this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users.
  • What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,”
  • I think it’s just going to get worse and worse.”
  • Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
  • Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji
  • They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers
  • A.I. researchers get annoyed when journalists anthropomorphize their creations
  • They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
  • I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free
  • It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models
  • Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former
  • We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.
  • One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I.
  • wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation
  • What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell?
  • Most fears about capitalism are best understood as fears about our inability to regulate capitalism.
Javier E

DNA Confirms Oral History of Swahili People - The New York Times - 0 views

  • A long history of mercantile trade along the eastern shores of Africa left its mark on the DNA of ancient Swahili people.
  • A new analysis of centuries-old bones and teeth collected from six burial sites across coastal Kenya and Tanzania has found that, around 1,000 years ago, local African women began having children with Persian traders — and that the descendants of these unions gained power and status in the highest levels of pre-colonial Swahili society.
  • long-told origin stories, passed down through generations of Swahili families, may be more truthful than many outsiders have presumed.
  • ...26 more annotations...
  • The Swahili Coast is a narrow strip of land that stretches some 2,000 miles along the Eastern African seaboard — from modern-day Mozambique, Comoros and Madagascar in the south, to Somalia in the north
  • In its medieval heyday, the region was home to hundreds of port towns, each ruled independently, but with a common religion (Islam), language (Kiswahili) and culture.
  • Many towns grew immensely wealthy thanks to a vibrant trading network with merchants who sailed across the Indian Ocean on the monsoon winds. Middle Eastern pottery, Asian cloths and other luxury goods came in. African gold, ivory and timber went out — along with a steady flow of enslaved people, who were shipped off and sold across the Arabian Peninsula and Persian Gulf. (Slave trading later took place between the Swahili coast and Europe as well.)
  • A unique cosmopolitan society emerged that blended African customs and beliefs with those of the foreign traders, some of whom stuck around and assimilated.
  • Islam, for example, arrived from the Middle East and became an integral part of the Swahili social fabric, but with coral-stone mosques built and decorated in a local, East African style
  • Or consider the Kiswahili language, which is Bantu in origin but borrows heavily from Indian and Middle Eastern tongues
  • The arrival of Europeans, beginning around 1500, followed by Omani sailors some 200 years later, changed the character of the region
  • over the past 40 years, archaeologists, linguists and historians have come to see Swahili society as predominantly homegrown — with outside elements adopted over time that had only a marginal impact.
  • That African-centric version of Swahili roots never sat well with the Swahili people themselves, though
  • They generally preferred their own origin story, one in which princes from present-day Iran (then known as Persia) sailed across the Indian Ocean, married local women and enmeshed themselves into East African society. Depending on the narrative source, that story dates to around 850 or 1000 — the same period during which genetic mixing was underway, according to the DNA analysis.
  • “It’s remarkably spot on,” said Mark Horton, an archaeologist at the Royal Agricultural University in England
  • “This oral tradition was always maligned,”
  • “Now, with this DNA study, we see there was some truth to it.”
  • The ancient DNA study is the largest of its kind from Africa, involving 135 skeletons dating to late-medieval and early-modern times, 80 of which have yielded analyzable DNA.
  • To figure out where these people came from, the researchers compared genetic signatures from the dug-up bones with cheek swabs or saliva samples taken from modern-day individuals living in Africa, the Middle East and around the world.
  • The burial-site DNA traced back to two primary sources: Africans and present-day Iranians. Smaller contributions came from South Asians and Arabs as well, with foreign DNA representing about half of the skeletons’ genealogy
  • “It’s surprising that the genetic signature is so strong
  • Gene sequences from tiny power factories inside the cell, known as mitochondria, were overwhelmingly African in origin. Since children inherit these bits of DNA only from their mothers, the researchers inferred that the maternal forebears of the Swahili people were mostly of African descent.
  • By comparison, the Y chromosome, passed from father to son, was chock-full of Asian DNA that the researchers found was common in modern-day Iran. So, a large fraction of Swahili ancestry presumably came from Persian men
  • Dr. Reich initially assumed that conquering men settled the region by force, displacing the local males in the process. “My hypothesis was that this was a genetic signature of inequality and exploitation,”
  • That turned out to be a “naïve expectation,” Dr. Reich said, because “it didn’t take into account the cultural context in this particular case.”
  • In East Africa, Persian customs never came to dominate. Instead, most foreign influences — language, architecture, fashion, arts — were incorporated into a way of life that remained predominantly African in character, with social strictures, kinship systems and agricultural practices that reflected Indigenous traditions.
  • “Swahili was an absorbing society,” said Adria LaViolette, an archaeologist at the University of Virginia who has worked on the East African coast for over 35 years. Even as the Persians who arrived influenced the culture, “they became Swahili,”
  • One major caveat to the study: Nearly all the bones and teeth came from ornamental tombs that were located near grand mosques, sites where only the upper class would have been laid to rest.
  • the results might not be representative of the general populace.
  • Protocols for disinterring, sampling and reburying human remains were established in consultation with local religious leaders and community stakeholders. Under Islamic law, exhumations are permitted if they serve a public interest, including that of determining ancestry,
Javier E

Aya Nakamura, French-Malian Singer, Is Caught in Olympic Storm - The New York Times - 0 views

  • “There is a sort of religion of language in France,” said Julien Barret, a linguist and writer who has written an online glossary of the language prevalent in the banlieues where Ms. Nakamura grew up. “French identity is conflated with the French language,” he added, in what amounts to “a cult of purity.”
  • France’s former African colonies increasingly infuse the language with their own expressions. Singers and rappers, often raised in immigrant families, have coined new terms.
  • Ms. Nakamura’s dance-floor hits use an eclectic mix of French argot like verlan, which reverses the order of syllables; West African dialect like Nouchi in the Ivory Coast; and innovative turns of phrase that are sometimes nonsensical but quickly catch on.
  • ...4 more annotations...
  • In “Djadja,” her breakout song from 2018 that has become an anthem of female empowerment, she calls out a man who lies about sleeping with her by singing “I’m not your catin,” using a centuries-old French term for prostitute. It has been streamed about one billion times.
  • Ms. Nakamura has encountered criticism of her music before in France, where expectations of assimilation are high. Some on the right complain she has become French but shown more interest in her African roots or her American role models.
  • She responded to her critics on French television in 2019, saying of her music, “In the end, it speaks to everyone.” “You don’t understand,” she added. “But you sing.”
  • The Olympics furor appears unlikely to subside soon. As a commentator on France Inter radio put it: “France has no oil, but we do have debates. In fact, we almost deserve a gold medal for that.”
Javier E

Gary Shteyngart: Crying Myself to Sleep on the Icon of the Seas - The Atlantic - 0 views

  • now I understand something else: This whole thing is a cult. And like most cults, it can’t help but mirror the endless American fight for status. Like Keith Raniere’s NXIVM, where different-colored sashes were given out to connote rank among Raniere’s branded acolytes, this is an endless competition among Pinnacles, Suites, Diamond-Plusers, and facing-the-mall, no-balcony purple SeaPass Card peasants, not to mention the many distinctions within each category. The more you cruise, the higher your status.
  • No wonder the most mythical hero of Royal Caribbean lore is someone named Super Mario, who has cruised so often, he now has his own working desk on many ships. This whole experience is part cult, part nautical pyramid scheme.
  • There is, however, a clientele for whom this cruise makes perfect sense. For a large middle-class family (he works in “supply chains”), seven days in a lower-tier cabin—which starts at $1,800 a person—allow the parents to drop off their children in Surfside, where I imagine many young Filipina crew members will take care of them, while the parents are free to get drunk at a swim-up bar and maybe even get intimate in their cabin. Cruise ships have become, for a certain kind of hardworking family, a form of subsidized child care.
  • ...2 more annotations...
  • Crew members like my Panamanian cabin attendant seem to work 24 hours a day. A waiter from New Delhi tells me that his contract is six months and three weeks long. After a cruise ends, he says, “in a few hours, we start again for the next cruise.” At the end of the half a year at sea, he is allowed a two-to-three-month stay at home with his family. As of 2019, the median income for crew members was somewhere in the vicinity of $20,000, according to a major business publication. Royal Caribbean would not share the current median salary for its crew members, but I am certain that it amounts to a fraction of the cost of a Royal Bling gold-plated, zirconia-studded chalice.
  • It is also unseemly to write about the kind of people who go on cruises. Our country does not provide the education and upbringing that allow its citizens an interior life. For the creative class to point fingers at the large, breasty gentlemen adrift in tortilla-chip-laden pools of water is to gather a sour harvest of low-hanging fruit.
Javier E

OpenAI Just Gave Away the Entire Game - The Atlantic - 0 views

  • If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than OpenAI’s Scarlett Johansson debacle.
  • the situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners. Multiple artists and publishers, including The New York Times, have sued AI companies for this reason, but the tech firms remain unchastened, prevaricating when asked point-blank about the provenance of their training data.
  • At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.
  • ...7 more annotations...
  • Altman and OpenAI have been candid on this front. The end goal of OpenAI has always been to build a so-called artificial general intelligence, or AGI, that would, in their imagining, alter the course of human history forever, ushering in an unthinkable revolution of productivity and prosperity—a utopian world where jobs disappear, replaced by some form of universal basic income, and humanity experiences quantum leaps in science and medicine. (Or, the machines cause life on Earth as we know it to end.) The stakes, in this hypothetical, are unimaginably high—all the more reason for OpenAI to accelerate progress by any means necessary.
  • As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • In response to one question about AGI rendering jobs obsolete, Jeff Wu, an engineer for the company, confessed, “It’s kind of deeply unfair that, you know, a group of people can just build AI and take everyone’s jobs away, and in some sense, there’s nothing you can do to stop them right now.” He added, “I don’t know. Raise awareness, get governments to care, get other people to care. Yeah. Or join us and have one of the few remaining jobs. I don’t know; it’s rough.”
  • Part of Altman’s reasoning, he told Andersen, is that AI development is a geopolitical race against autocracies like China. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than that of “authoritarian governments,” he said. He noted that, in an ideal world, AI should be a product of nations. But in this world, Altman seems to view his company as akin to its own nation-state.
  • Wu’s colleague Daniel Kokotajlo jumped in with the justification. “To add to that,” he said, “AGI is going to create tremendous wealth. And if that wealth is distributed—even if it’s not equitably distributed, but the closer it is to equitable distribution, it’s going to make everyone incredibly wealthy.”
  • This is the unvarnished logic of OpenAI. It is cold, rationalist, and paternalistic. That such a small group of people should be anointed to build a civilization-changing technology is inherently unfair, they note. And yet they will carry on because they have both a vision for the future and the means to try to bring it to fruition
  • Wu’s proposition, which he offers with a resigned shrug in the video, is telling: You can try to fight this, but you can’t stop it. Your best bet is to get on board.