
rachelramirez

The Linguistic Evolution of 'Like' - The Atlantic - 0 views

  • The Evolution of 'Like'
  • It’s under this view of language—as something becoming rather than being, a film rather than a photo, in motion rather than at rest—that we should consider the way young people use (drum roll, please) like.
  • To an Old English speaker, the word that later became like was the word for, of all things, “body.”
  • ...11 more annotations...
  • The word was lic, and lic was part of a word, gelic, that meant “with the body,” as in “with the body of,” which was a way of saying “similar to”—as in like.
  • Like has become a piece of grammar: It is the source of the suffix -ly. To the extent that slowly means “in a slow fashion,” as in “with the quality of slowness,” it is easy (and correct) to imagine that slowly began as “slow-like,”
  • Therefore, like is ever so much more than some isolated thing clinically described in a dictionary with a definition like “(preposition) ‘having the same characteristics or qualities as; similar to.’
  • Because we think of like as meaning “akin to” or “similar to,” kids decorating every sentence or two with it seems like overuse.
  • So today’s like did not spring mysteriously from a crowd on the margins of unusual mind-set and then somehow jump the rails from them into the general population. The seeds of the modern like lay among ordinary people; the Beatniks may not even have played a significant role in what happened later. The point is that like transformed from something occasional into something more regular.
  • It’s real-life usage of this kind—to linguists it is data, just like climate patterns are to meteorologists—that suggests that the idea of like as the linguistic equivalent to slumped shoulders is off.
  • In that light, what has happened to like is that it has morphed into a modal marker—actually, one that functions as a protean indicator of the human mind at work in conversation.
  • There are actually two modal marker likes—that is, to be fluent in modern American English is to have subconsciously internalized not one but two instances of grammar involving like.
  • The like acknowledges—imagine even a little curtsey—the discomfort. It softens the blow—that is, eases—by swathing the statement in the garb of hypotheticality that the basic meaning of like lends.
  • Something “like” x is less threatening than x itself; to phrase things as if x were only “like,” x is thus like offering a glass of water, a compress, or a warm little blanket.
  • People’s sense of how they talk tends to differ from the reality, and the person of a certain age who claims never to use like “that way” as often as not, like, does—and often.
Emilio Ergueta

Feast Your Eyes on This Beautiful Linguistic Family Tree | Mental Floss - 0 views

  • When linguists talk about the historical relationship between languages, they use a tree metaphor.
  • Minna Sundberg, creator of the webcomic Stand Still. Stay Silent, a story set in a lushly imagined post-apocalyptic Nordic world, has drawn the antidote to the boring linguistic tree diagram.
Javier E

J.G.A. Pocock, Historian Who Argued for Historical Context, Dies at 99 - The New York T... - 0 views

  • J.G.A. Pocock, who brought new perspectives to historical scholarship by arguing that the first step in understanding events of the past is to identify their linguistic and intellectual context
  • Among the most important were “The Ancient Constitution and the Feudal Law: A Study of English Historical Thought in the Seventeenth Century” (1957), “The Machiavellian Moment: Florentine Political Thought and the Atlantic Republican Tradition” (1975) and, most notably, “Barbarism and Religion,” a six-volume study of the life and times of Edward Gibbon,
  • Professor Pocock, Quentin Skinner and other like-minded scholars, known collectively as the Cambridge School, came to prominence in the late 1960s with a fresh approach to the study of political thought, characterized by an emphasis on context and an unwillingness to assume that all ideas and problems were viewed in the past as they would be viewed today.
  • ...9 more annotations...
  • “Pocock rejected the idea that politics or philosophy addressed the same problems over time — what justice meant for Aristotle did not mean the same for Hobbes or for Rousseau,”
  • “So explaining what political ideas meant in theory and in practice became the historian’s task.”
  • The Cambridge School attracted devotees across the world in departments of politics, history, philosophy, literature and language — scholars who were admonished to set aside any modern-day assumptions and prejudices they might hold when delving into the past.
  • “Readers, Christian or non-believing, who may find themselves involved in analyses of thought they consider obsolete or false, are asked to remember that they are studying the history of a time when such thinking was offered and read seriously,” he wrote.
  • Professor Pocock’s first book, “The Ancient Constitution and the Feudal Law,” made clear that he would not be a conventional historian. The book asked how people in the 17th century viewed their past, and he wasn’t satisfied with drawing on the go-to philosopher of the period, John Locke. As Colin Kidd wrote in The London Review of Books in 2008, the book “drove a bypass around Locke” and “concentrated instead on a set of debates among such obscure antiquaries as William Petyt, James Tyrrell, William Atwood and Robert Brady.”
  • “The Machiavellian Moment” cemented Professor Pocock’s reputation among historians, and it continued to grow from there. The first volume of “Barbarism and Religion” came out in 1999, when Professor Pocock was in his mid-70s. Volume 6 appeared in 2015. He also edited or co-edited “The Political Works of James Harrington” (1977), “Edmund Burke: Reflections on the Revolution in France” (1987) and “The Varieties of British Political Thought, 1500-1800” (1993), among other books.
  • “Pocock’s central contention,” the Oxford historian Keith Thomas wrote in The New York Review of Books in 1986, “is that a work of political thought can only be understood if the reader is aware of the contemporary linguistic constraints to which its author was subject, for these constraints prescribed both his subject matter and the way in which that subject matter was conceptualized.”
  • its application to the history of political ideas forms a great contrast to the assumptions of the 1950s, when it was widely thought that the close reading of a text by an analytic philosopher was sufficient to establish its meaning, even though the philosopher was quite innocent of any knowledge of the period in which the text was written or of the linguistic traditions within which its author operated.”
  • “Historians need to understand that the history of discourse is not a simple linear sequence in which new patterns overcome and replace the old,” he wrote in 1988 in a preface to a reissue of “Politics, Language and Time,” a 1971 essay collection, “but a complex dialogue in which these patterns persist in transforming one another.”
Javier E

How did Neanderthals and other ancient humans learn to count? - 0 views

  • Rafael Núñez, a cognitive scientist at the University of California, San Diego, and one of the leaders of QUANTA, accepts that many animals might have an innate appreciation of quantity. However, he argues that the human perception of numbers is typically much more sophisticated, and can’t have arisen through a process such as natural selection. Instead, many aspects of numbers, such as the spoken words and written signs that are used to represent them, must be produced by cultural evolution — a process in which individuals learn through imitation or formal teaching to adopt a new skill (such as how to use a tool).
  • Although many animals have culture, one that involves numbers is essentially unique to humans. A handful of chimpanzees have been taught in captivity to use abstract symbols to represent quantities, but neither chimps nor any other non-human species use such symbols in the natural world.
  • during excavations at Border Cave in South Africa, archaeologists discovered an approximately 42,000-year-old baboon fibula that was also marked with notches. D’Errico suspects that anatomically modern humans living there at the time used the bone to record numerical information. In the case of this bone, microscopic analysis of its 29 notches suggests they were carved using four distinct tools and so represent four counting events, which D’Errico thinks took place on four separate occasions [1].
  • ...14 more annotations...
  • D’Errico has developed a scenario to explain how number systems might have arisen through the very act of producing such artefacts. His hypothesis is one of only two published so far for the prehistoric origin of numbers.
  • It all started by accident, he suggests, as early hominins unintentionally left marks on bones while they were butchering animal carcasses. Later, the hominins made a cognitive leap when they realized that they could deliberately mark bones to produce abstract designs — such as those seen on an approximately 430,000-year-old shell found in Trinil, Indonesia [6]. At some point after that, another leap occurred: individual marks began to take on meaning, with some of them perhaps encoding numerical information
  • The Les Pradelles hyena bone is potentially the earliest known example of this type of mark-making, says D’Errico. He thinks that with further leaps, or what he dubs cultural exaptations, such notches eventually led to the invention of number signs such as 1, 2 and 3 [7].
  • Overmann has developed her own hypothesis to explain how number systems might have emerged in prehistory — a task made easier by the fact that a wide variety of number systems are still in use around the world. For example, linguists Claire Bowern and Jason Zentz at Yale University in New Haven, Connecticut, reported in a 2012 survey that 139 Aboriginal Australian languages have an upper limit of ‘three’ or ‘four’ for specific numerals. Some of those languages use natural quantifiers such as ‘several’ and ‘many’ to indicate higher values
  • There is even one group, the Pirahã people of the Brazilian Amazon, that is sometimes claimed not to use numbers at all [10].
  • In a 2013 study [11], Overmann analysed anthropological data relating to 33 contemporary hunter-gatherer societies across the world. She discovered that those with simple number systems (an upper limit not much higher than ‘four’) often had few material possessions, such as weapons, tools or jewellery. Those with elaborate systems (an upper numeral limit much higher than ‘four’) always had a richer array of possessions.
  • In societies with complex number systems, there were clues to how those systems developed. Significantly, Overmann noted that it was common for these societies to use quinary (base 5), decimal or vigesimal (base 20) systems. This suggested to her that many number systems began with a finger-counting stage.
  • This finger-counting stage is important, according to Overmann. She is an advocate of material engagement theory (MET), a framework devised about a decade ago by cognitive archaeologist Lambros Malafouris at the University of Oxford, UK [12]. MET maintains that the mind extends beyond the brain and into objects, such as tools or even a person’s fingers. This extension allows ideas to be realized in physical form; so, in the case of counting, MET suggests that the mental conceptualization of numbers can include the fingers. That makes numbers more tangible and easier to add or subtract.
  • The societies that moved beyond finger-counting did so, argues Overmann, because they developed a clearer social need for numbers. Perhaps most obviously, a society with more material possessions has a greater need to count (and to count much higher than ‘four’) to keep track of objects.
  • An artefact such as a tally stick also becomes an extension of the mind, and the act of marking tally notches on the stick helps to anchor and stabilize numbers as someone counts.
  • some societies moved beyond tally sticks. This first happened in Mesopotamia around the time when cities emerged there, creating an even greater need for numbers to keep track of resources and people. Archaeological evidence suggests that by 5,500 years ago, some Mesopotamians had begun using small clay tokens as counting aids.
  • Overmann acknowledges that her hypothesis is silent on one issue: when in prehistory human societies began developing number systems. Linguistics might offer some help here. One line of evidence suggests that number words could have a history stretching back at least tens of thousands of years.
  • Evolutionary biologist Mark Pagel at the University of Reading, UK, and his colleagues have spent many years exploring the history of words in extant language families, with the aid of computational tools that they initially developed to study biological evolution. Essentially, words are treated as entities that either remain stable or are outcompeted and replaced as languages spread and diversify.
  • Using this approach, Pagel and Andrew Meade at Reading showed that low-value number words (‘one’ to ‘five’) are among the most stable features of spoken languages [14]. Indeed, they change so infrequently across language families — such as the Indo-European family, which includes many modern European and southern Asian languages — that they seem to have been stable for anywhere between 10,000 and 100,000 years.
malonema1

What's A Woggin? A Bird, a Word, and a Linguistic Mystery | Atlas Obscura - 0 views

  • "She returned with A Plenty of Woggins we Cooked Some for Supper."
  • What in the world is a woggin?
lindsayweber1

Donald Trump's unique speaking style, explained by linguists - Vox - 0 views

  • Watching Trump, it’s easy to see how this plays out. He makes vague implications with a raised eyebrow or a shrug, allowing his audience to reach their own conclusions. And that conversational style can be effective. It’s more intimate than a scripted speech. People walk away from Trump feeling as though he were casually talking to them, allowing them to finish his thoughts.
  • "Trump's frequency of divergence is unusual," Liberman says. In other words, he goes off topic way more often than the average person in conversation.
  • "His speech suggests a man with scattered thoughts, a short span of attention, and a lack of intellectual discipline and analytical skills," Pullum says.
  • ...3 more annotations...
  • Many of Trump’s most famous catchphrases are actually versions of time-tested speech mechanisms that salespeople use.
  • Trump’s frequent use of "Many people are saying…" or "Believe me" — often right after saying something that is baseless or untrue. This tends to sound more trustworthy to listeners than just outright stating the baseless claim, since Trump implies that he has direct experience with what he’s talking about. At a base level, Lakoff argues, people are more inclined to believe something that seems to have been shared.
  • And when Trump kept calling Clinton "crooked," or referring to terrorists as "radical Muslims," he strengthened the association through repetition
Javier E

DNA Deciphers Roots of Modern Europeans - NYTimes.com - 0 views

  • today’s Europeans descend from three groups who moved into Europe at different stages of history.
  • The first were hunter-gatherers who arrived some 45,000 years ago in Europe. Then came farmers who arrived from the Near East about 8,000 years ago.
  • Finally, a group of nomadic sheepherders from western Russia called the Yamnaya arrived about 4,500 years ago. The authors of the new studies also suggest that the Yamnaya language may have given rise to many of the languages spoken in Europe today.
  • ...18 more annotations...
  • the new studies were “a major game-changer. To me, it marks a new phase in ancient DNA research.”
  • Until about 9,000 years ago, Europe was home to a genetically distinct population of hunter-gatherers, the researchers found. Then, between 9,000 and 7,000 years ago, the genetic profiles of the inhabitants in some parts of Europe abruptly changed, acquiring DNA from Near Eastern populations.
  • Archaeologists have long known that farming practices spread into Europe at the time from Turkey. But the new evidence shows that it wasn’t just the ideas that spread — the farmers did, too.
  • the Yamnaya, who left behind artifacts on the steppes of western Russia and Ukraine dating from 5,300 to 4,600 years ago. The Yamnaya used horses to manage huge herds of sheep, and followed their livestock across the steppes with wagons full of food and water.
  • “You have groups which are as genetically distinct as Europeans and East Asians. And they’re living side by side for thousands of years.”
  • Between 7,000 and 5,000 years ago, however, hunter-gatherer DNA began turning up in the genes of European farmers. “There’s a breakdown of these cultural barriers, and they mix,”
  • About 4,500 years ago, the final piece of Europe’s genetic puzzle fell into place. A new infusion of DNA arrived — one that is still very common in living Europeans, especially in central and northern Europe.
  • The hunter-gatherers didn’t disappear, however. They managed to survive in pockets across Europe between the farming communities.
  • The closest match to this new DNA, both teams of scientists found, comes from skeletons found in Yamnaya graves in western Russia and Ukraine.
  • it was likely that the expansion of Yamnaya into Europe was relatively peaceful. “It wasn’t Attila the Hun coming in and killing everybody,”
  • the most likely scenario was that the Yamnaya “entered into some kind of stable opposition” with the resident Europeans that lasted for a few centuries. But then gradually the barriers between the cultures eroded.
  • the Yamnaya didn’t just expand west into Europe, however. The scientists examined DNA from 4,700-year-old skeletons from a Siberian culture called the Afanasievo. It turns out that they inherited Yamnaya DNA, too.
  • was surprised by the possibility that Yamnaya pushed out over a range of about 4,000 miles.
  • For decades, linguists have debated how Indo-European got to Europe. Some favor the idea that the original farmers brought Indo-European into Europe from Turkey. Others think the language came from the Russian steppes thousands of years later.
  • he did think the results were consistent with the idea that the Yamnaya brought Indo-European from the steppes to Europe.
  • The eastward expansion of Yamnaya, evident in the genetic findings, also supports the theory, Dr. Willerslev said. Linguists have long puzzled over an Indo-European language once spoken in western China called Tocharian. It is only known from 1,200-year-old manuscripts discovered in ancient desert towns. It is possible that Tocharian was a vestige of the eastern spread of the Yamnaya.
  • the new studies were important, but were still too limited to settle the debate over the origins of Indo-European. “I don’t think we’re there yet,” he said.
  • Dr. Heggarty speculated instead that early European farmers, the second wave of immigrants, may have brought Indo-European to Europe from the Near East. Then, thousands of years later, the Yamnaya brought the language again to Central Europe.
Javier E

History's Heroic Failures - Talking Points Memo - 0 views

  • Scholars have been applying these source-critical tools to the origins of Christianity and Judaism for going on two centuries. But it is only within the last couple of decades that scholars have begun to apply them to the origins of Islam
  • Many still believe that Islam was, as the historian Ernest Renan once put it, “born in the full light of history.” But this is far from the case. Montgomery Watt’s standard short biography of Muhammad is the product of deep scholarship but operates largely within the canonical historiography of the Islamic tradition
  • The level of detail we seem to know about Muhammad’s life in Mecca and Medina, his prophetic call and early battles is astonishing. But there is a big problem. This level of detail is based on source traditions that don’t meet any kind of modern historical muster.
  • ...13 more annotations...
  • Our earliest written accounts of who Muhammad was, what he did and how Islam began come more than a century after his death. This is an authorized, canonical biography written by ibn Ishaq in Baghdad in the mid-8th century. But even that book has been lost. It only survives in editions and excerpts from two other Muslim scholars (ibn Hisham and al-Tabari) writing in the 9th and early 10th centuries, respectively.
  • These narratives come far too long after the events in question to be taken at anything like face value in historical terms. Memories of who Muhammad and his milieu were and what they did were passed down as oral traditions for three or four generations before being written down. They then passed through new rounds of editing and reshaping and recension for another century. We know from ancient and contemporary examples that such oral traditions usually change radically over even short periods of time to accommodate the present realities and needs of the communities in which they are passed down
  • To get your head around the challenge, imagine we had no written records of the American Civil War and our only knowledge of it came from stories passed down orally for generations until the US government had a scholar at the Library of Congress pull them together and create an authorized history in, say, 1985.
  • Unsurprisingly, recent studies using the tools and standards historians apply to other eras suggest that the beginnings of Islam were quite different from the traditional or canonical accounts most of us are familiar with.
  • If these two things, the conventional narrative of events and the process of making sense of the sources and weighing their credibility, can be woven together, the end result is fascinating and more compelling than a more cinematic narrative. This Morris accomplishes very well
  • Morris, the author, begins with a discussion of just this issue, how the historian can balance the need for comprehensible narrative of William’s conquest of England with a candid explanation of how little we know with certainty and how much we don’t know at all. This is a literary challenge of the first order, simultaneously building your narrative and undermining it
  • Like the Norman Conquest but on a vastly greater scale, the victory at Yarmouk would have a profound linguistic and cultural impact which lasts down to today. It is why the Levant, Syria and Egypt now all speak Arabic. It is also why the Middle East is dominated by Islam rather than Christianity.
  • History has a small number of these critical linguistic inflection points. Alexander the Great’s conquests made Greek the lingua franca and the language of government and most cities throughout the eastern Mediterranean, which it remained for almost a thousand years.
  • The fact that the Arab conquests so rapidly erased Greek from this region is a clue that its roots never ran that deep and that the Semitic languages that remained the spoken language of the rural masses, particularly Aramaic, may have been more porous and receptive to the related language of Arabic.
  • Earlier I referred to Heraclius as perhaps the last Roman Emperor. This is because most historians mark the change from the Roman to the Byzantine periods at the Muslim conquests
  • The people we call the Byzantines never called themselves that. They remained “Romans” for the next eight centuries until the Ottoman Turks finally conquered Constantinople in 1453.
  • But after the Muslim conquest, the Roman state starts to look less like a smaller Roman empire and more like the other post-Roman successor states in the West. It is smaller and more compact, more tightly integrated in language, religion, economy and culture. From here what we can now call Byzantium enters into a dark, smoldering and more opaque period lasting some two centuries of holding on against repeated Muslim incursions and threat.
  • What captures my attention is this peculiar kind of reversal of fortune, spectacular victories which are not so much erased as rendered moot because they are rapidly followed by far more cataclysmic defeats.
Javier E

Trump's Brazen, Effective Lie - The Atlantic - 0 views

  • Traditionally, magazines have given informed staffers the leeway to share considered judgments of that sort with readers in service of helping them to understand the world. In contrast, newspapers, TV networks, and NPR have shied away from rendering such judgments in deference to longstanding aspirations to “objectivity.”
  • In the case of the physician’s letter, the norms of some major news organizations caused journalists confronted with obvious bullshit to publish under headlines like these:
  • Some outlets signaled in the body of their stories that readers should be skeptical. “The full letter is written in true Trumpian fashion, full of hyperbole and boasting of greatness,” NPR noted.
  • ...11 more annotations...
  • Others, like ABC, published credulous items.
  • What matters most are the actions of Trump, now the most powerful person in the world. If he indeed dictated this letter—and this is well supported even by a glancing linguistic analysis—then it is his ethics that should be called to question … Billions more people are implicated if this letter is evidence of Trump’s willingness to lie to circumvent and subvert a critical vetting process, to baldly misrepresent himself by using people like Bornstein for his own gain.
  • all of this sort of data pales compared to what such an act of forgery would say about his morality; his sense of honesty, transparency, decency, and accountability; his actual fitness to serve as president of the United States.
  • During his rise, Trump put the press and the public in an impossible position by lying in a manner that was both flagrantly obvious to anyone paying close attention and often impossible for news organizations to prove as a settled matter of fact.
  • That his lie is now exposed, like so many before it, is the latest opportunity for Republican elites to level with their base: The president and many of his allies are liars—and while they are hardly the first political elites to ever tell lies in national politics, it is partly their unusually flagrant and shameless mendacity that causes the press to treat them with more skepticism and hostility than bygone GOP presidents.
  • In politics, the skeptical approach that Hamblin took to Trump’s mendacious claims of yore is preferable, I’d argue, to the credulous headlines and articles that some others wrote
  • Still, on other occasions, different journalists have made regrettable errors by going beyond what they could prove empirically and offering analysis
  • there is no perfect journalistic approach to deploy in all cases—and something to be said for a diversity of approaches
  • Trump had flagrantly told so many decades of untruths to the public by December of 2015 that he should long before have ceded the benefit of the doubt that allowed any unverified, advantageous claim about him to make headlines, even atop stories that went on to hint at their dubiousness.
  • Most people, even in politics, are too decent to lie as he did. They possess normal consciences and senses of shame. Trump was willing to exploit the fact that humans extend some general presumptions of trust to function in this world. Like a con man, he benefitted by betraying that trust more shamelessly than others.
  • Trump is the root of the problem. And his minor enablers, like Bornstein, and his major enablers, like Vice President Mike Pence, harm America with their complicity in the lies that the president tells the citizens he is meant to serve
Javier E

Getting Radical About Inequality - The New York Times - 0 views

  • Pierre Bourdieu is helpful reading in the age of Trump. He was born in 1930, the son of a small-town postal worker. By the time he died in 2002, he had become perhaps the world’s most influential sociologist within the academy
  • His great subject was the struggle for power in society, especially cultural and social power. We all possess, he argued, certain forms of social capital. A person might have academic capital (the right degrees from the right schools), linguistic capital (a facility with words), cultural capital (knowledge of cuisine or music or some such) or symbolic capital (awards or markers of prestige). These are all forms of wealth you bring to the social marketplace.
  • In addition, and more important, we all possess and live within what Bourdieu called a habitus. A habitus is a body of conscious and tacit knowledge of how to travel through the world, which gives rise to mannerisms, tastes, opinions and conversational style
  • ...14 more annotations...
  • A habitus is an intuitive feel for the social game. It’s the sort of thing you get inculcated with unconsciously, by growing up in a certain sort of family or by sharing a sensibility with a certain group of friends.
  • Your habitus is what enables you to decode cultural artifacts, to feel comfortable in one setting but maybe not in another. Taste overlaps with social position; taste classifies the classifier.
  • Bourdieu used the phrase “symbolic violence” to suggest how vicious this competition can get
  • The symbolic marketplace is like the commercial marketplace; it’s a billion small bids for distinction, prestige, attention and superiority.
  • Every minute or hour, in ways we’re not even conscious of, we as individuals and members of our class are competing for dominance and respect. We seek to topple those who have higher standing than us and we seek to wall off those who are down below. Or, we seek to take one form of capital, say linguistic ability, and convert it into another kind of capital, a good job.
  • Most groups conceal their naked power grabs under a veil of intellectual or aesthetic purity
  • Every day, Bourdieu argued, we take our stores of social capital and our habitus and we compete in the symbolic marketplace. We vie as individuals and as members of our class for prestige, distinction and, above all, the power of consecration — the power to define for society what is right, what is “natural,” what is “best.”
  • People at the top, he observed, tend to adopt a reserved and understated personal style that shows they are far above the “assertive, attention-seeking strategies which expose the pretensions of the young pretenders.”
  • People at the bottom of any field, on the other hand, don’t have a lot of accomplishment to wave about, but they can use snark and sarcasm to demonstrate the superior sensibilities.
  • Trump is not much of a policy maven, but he’s a genius at the symbolic warfare Bourdieu described. He’s a genius at upending the social rules and hierarchies that the establishment classes (of both right and left) have used to maintain dominance.
  • Bourdieu didn’t argue that cultural inequality creates economic inequality, but that it widens and it legitimizes it.
  • as the information economy has become more enveloping, cultural capital and economic capital have become ever more intertwined. Individuals and classes that are good at winning the cultural competitions Bourdieu described tend to dominate the places where economic opportunity is richest; they tend to harmonize with affluent networks and do well financially.
  • the drive to create inequality is an endemic social sin. Every hour most of us, unconsciously or not, try to win subtle status points, earn cultural affirmation, develop our tastes, promote our lifestyles and advance our class. All of those microbehaviors open up social distances, which then, by the by, open up geographic and economic gaps.
  • Bourdieu radicalizes, widens and deepens one’s view of inequality. His work suggests that the responses to it are going to have to be more profound, both on a personal level — resisting the competitive, ego-driven aspects of social networking and display — and on a national one.
krystalxu

The Push to Make French Gender-Neutral - The Atlantic - 0 views

  • Feminists who believe that these features of the French language put women at a disadvantage disagree about how best to remedy them.
  • It’s not that speaking French makes it impossible for you to conceive of something as gender-neutral, he suggested
  • those studies are limited in that they can’t control for outside factors like culture, which are extremely important in determining sexist attitudes.
  • ...4 more annotations...
  • The statement also warned that inclusive writing would “destroy” the promises of the Francophonie (the linguistic zone encompassing all countries that use French at the administrative level, or whose first or majority language is French).
  • this subject is still on the margins of cognitive science and linguistics research, and critics of inclusive writing say that not enough work has been done to prove conclusively that changing a language will improve gender equality.
  • also the speakers of minority languages in France (like Catalan, Occitan, and Gascon), who have always used the median-period as a phonetic marker.
  • So the uproar was almost instantaneous when, this fall, the first-ever school textbook promoting a gender-neutral version of French was released.
Javier E

How Steven Pinker Became a Target Over His Tweets - The New York Times - 0 views

  • In an era of polarizing ideologies, Professor Pinker, a linguist and social psychologist, is tough to pin down. He is a big supporter of Democrats, and donated heavily to former President Barack Obama, but he has denounced what he sees as the close-mindedness of heavily liberal American universities. He likes to publicly entertain ideas outside the academic mainstream, including the question of innate differences between the sexes and among different ethnic and racial groups. And he has suggested that the political left’s insistence that certain subjects are off limits contributed to the rise of the alt-right.
  • John McWhorter, a Columbia University professor of English and linguistics, cast the Pinker controversy within a moment when, he said, progressives look suspiciously at anyone who does not embrace the politics of racial and cultural identity.
  • He described his critics as “speech police” who “have trolled through my writings to find offensive lines and adjectives.”
  • ...3 more annotations...
  • “I have a mind-set that the world is a complex place we are trying to understand,” he said. “There is an inherent value to free speech, because no one knows the solution to problems a priori.”
  • “Steve is too big for this kerfuffle to affect him,” Professor McWhorter said. “But it’s depressing that an erudite and reasonable scholar is seen by a lot of intelligent people as an undercover monster.”
  • “We’re in this moment that’s like a collective mic drop, and civility and common sense go out the window,” he said. “It’s enough to cry racism or sexism, and that’s that.”
Javier E

Andrew Sullivan: You Say You Want A Revolution? - 0 views

  • One of the things you know if you were brought up as a Catholic in a Protestant country, as I was, is how the attempted extirpation of England’s historic Catholic faith was enforced not just by executions, imprisonments, and public burnings but also by the destruction of monuments, statues, artifacts, paintings, buildings, and sacred sculptures. The shift in consciousness that the religious revolution required could not be sustained by words or terror alone. The new regime — an early pre-totalitarian revolution imposed from the top down — had to remove all signs of what had come before.
  • The impulse for wiping the slate clean is universal. Injustices mount; moderation seems inappropriate; radicalism wins and then tries to destroy the legacy of the past as a whole.
  • for true revolutionary potential, it’s helpful if these monuments are torn down by popular uprisings. That adds to the symbolism of a new era, even if it also adds to the chaos. That was the case in Mao’s Cultural Revolution, when the younger generation, egged on by the regime, went to work on any public symbols or statues they deemed problematically counterrevolutionary, creating a reign of terror that even surpassed France’s.
  • ...22 more annotations...
  • Mao’s model is instructive in another way. It shows you what happens when a mob is actually quietly supported by elites, who use it to advance their own goals. The Red Guards did what they did — to their friends, and parents, and teachers — in the spirit of the Communist regime itself.
  • Ibram X. Kendi, the New York Times best seller who insists that everyone is either racist or anti-racist, now has a children’s book to indoctrinate toddlers on one side of this crude binary
  • Revolutionary moments also require public confessions of iniquity by those complicit in oppression.
  • These now seem to come almost daily. I’m still marveling this week at the apology the actress Jenny Slate gave for voicing a biracial cartoon character. It’s a classic confession of counterrevolutionary error: “I acknowledge how my original reasoning was flawed and that it existed as an example of white privilege and unjust allowances made within a system of societal white supremacy … Ending my portrayal of ‘Missy’ is one step in a life-long process of uncovering the racism in my actions.” For Slate to survive in her career, she had to go full Cersei in her walk of shame.
  • They murdered and tortured, and subjected opponents to public humiliations — accompanied by the gleeful ransacking of religious and cultural sites. In their attack on the Temple of Confucius, almost 7,000 priceless artifacts were destroyed. By the end of the revolution, almost two-thirds of Beijing’s historical sites had been destroyed in a frenzy of destruction against “the four olds: old customs, old habits, old culture, and old ideas.” Mao first blessed, then reined in these vandals.
  • take this position voiced on Twitter by a chemistry professor at Queen’s University in Canada this week: “Here’s the thing: If whatever institution you are a part of is not COMPLETELY representative of the population you can draw from, you can draw only two conclusions. 1) Bias against the underrepresented groups exists or 2) the underrepresented groups are inherently less qualified.”
  • Other factors — such as economics or culture or individual choice or group preference — are banished from consideration.
  • Revolutions also encourage individuals to take matters in their own hands. The distinguished liberal philosopher Michael Walzer recently noted how mutual social policing has a long and not-so-lovely history — particularly in post–Reformation Europe, in what he has called “the revolution of the saints.”
  • Revolutionaries also create new forms of language to dismantle the existing order. Under Mao, “linguistic engineering” was integral to identifying counterrevolutionaries, and so it is today.
  • The use of the term “white supremacy” to mean not the KKK or the antebellum South but American society as a whole in the 21st century has become routine on the left, as if it were now beyond dispute.
  • The word “women,” J.K. Rowling had the temerity to point out, is now being replaced by “people who menstruate.”
  • The word “oppression” now includes not only being herded into Uighur reeducation camps but also feeling awkward as a sophomore in an Ivy League school.
  • The word “racist,” which was widely understood quite recently to be prejudicial treatment of an individual based on the color of their skin, now requires no intent to be racist in the former sense, just acquiescence in something called “structural racism,” which can mean any difference in outcomes among racial groupings. Being color-blind is therefore now being racist.
  • And there is no escaping this. The woke shift their language all the time, so that words that were one day fine are now utterly reprehensible.
  • You can’t keep up — which is the point. (A good resource for understanding this new constantly changing language of ideology is “Translations From the Wokish.”) The result is an exercise of cultural power through linguistic distortion.
  • So, yes, this is an Orwellian moment
  • It’s not a moment of reform but of a revolutionary break, sustained in part by much of the liberal Establishment.
  • Even good and important causes, like exposing and stopping police brutality, can morph very easily from an exercise in overdue reform into a revolutionary spasm. There has been much good done by the demonstrations forcing us all to understand better how our fellow citizens are mistreated by the agents of the state or worn down by the residue of past and present inequality.
  • But the zeal and certainty of its more revolutionary features threaten to undo a great deal of that goodwill.
  • The movement’s destruction of even abolitionist statues, its vandalism of monuments to even George Washington, its crude demonization of figures like Jefferson, its coerced public confessions, its pitiless wreckage of people’s lives and livelihoods, its crude ideological Manichaeanism, its struggle sessions and mandated anti-racism courses, its purging of cultural institutions of dissidents, its abandonment of objective tests in higher education (replacing them with quotas and a commitment to ideology), and its desire to upend a country’s sustained meaning and practices are deeply reminiscent of some very ugly predecessors.
  • But the erasure of the past means a tyranny of the present. In the words of Orwell, a truly successful ideological revolution means that “every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.”
  • We are not there yet. But unless we recognize the illiberal malignancy of some of what we face, and stand up to it with courage and candor, we soon will be.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions. (See the first sketch after this list.)
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ” (See the second sketch after this list.)
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers. (See the third sketch after this list.)
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
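Li's result came from "probing": fitting a simple classifier to read the board state out of the model's hidden activations. The snippet below is only a schematic of that idea, with random arrays standing in for the real Othello-model activations and board labels (harvesting the real ones requires the trained model and a game engine); the probe itself, one linear classifier per square, is the part being illustrated.

```python
# Schematic of a linear probe for a learned "world model".
# `acts` stands in for hidden activations captured mid-game from a sequence model
# trained only on move text; `board` stands in for the true contents of each of
# the 64 squares (0 empty, 1 black, 2 white). Both are placeholders here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_positions, hidden_dim, n_squares = 2000, 128, 64
acts = rng.normal(size=(n_positions, hidden_dim))          # placeholder activations
board = rng.integers(0, 3, size=(n_positions, n_squares))  # placeholder square labels

accs = []
for sq in range(n_squares):
    X_tr, X_te, y_tr, y_te = train_test_split(acts, board[:, sq], random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accs.append(probe.score(X_te, y_te))

# With random stand-in data this hovers near chance (about 1/3); with activations
# from a model that has internalized the board, probe accuracy rises sharply.
print(f"mean probe accuracy over squares: {np.mean(accs):.2f}")
```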
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
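The "lazy network" behavior described above can be reproduced in miniature. The sketch below is an assumption-laden toy, not the experiment Millière describes: it uses a small MLP on modular addition rather than a transformer, and whether the memorize-then-generalize jump actually appears depends on the train/test split, weight decay, and training length. It does show the standard setup: train on only half of all a+b problems and watch train accuracy saturate while test accuracy lags.

```python
# Toy memorization-vs-generalization experiment on modular addition.
# Hypothetical setup: an MLP sees one-hot (a, b) pairs and must predict (a + b) mod P,
# training on only half of all possible pairs.
import torch
import torch.nn as nn

P = 67
pairs = [(a, b) for a in range(P) for b in range(P)]
perm = torch.randperm(len(pairs)).tolist()
split = len(pairs) // 2

def encode(subset):
    x = torch.zeros(len(subset), 2 * P)
    y = torch.empty(len(subset), dtype=torch.long)
    for i, (a, b) in enumerate(subset):
        x[i, a] = 1.0          # one-hot for a
        x[i, P + b] = 1.0      # one-hot for b
        y[i] = (a + b) % P
    return x, y

x_tr, y_tr = encode([pairs[i] for i in perm[:split]])
x_te, y_te = encode([pairs[i] for i in perm[split:]])

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20001):
    opt.zero_grad()
    loss = loss_fn(model(x_tr), y_tr)
    loss.backward()
    opt.step()
    if step % 2000 == 0:
        with torch.no_grad():
            tr_acc = (model(x_tr).argmax(1) == y_tr).float().mean()
            te_acc = (model(x_te).argmax(1) == y_te).float().mean()
        # Early on, train accuracy can reach 100% (memorization) while test accuracy
        # stays low; if generalization kicks in, test accuracy jumps much later.
        print(f"step {step:5d}  train {tr_acc:.2f}  test {te_acc:.2f}")
```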
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

They Wanted to Write the History of Modern China. But How? - The New York Times - 0 views

  • this is the key message of Tsu’s book: The story of how linguists, activists, librarians, scholars and ordinary citizens adapted Chinese writing to the modern world is the story of how China itself became modern.
  • Following the history of the script helps explain China’s past, present — and future. “More than a century’s effort at learning how to standardize and transform its language into a modern technology has landed China here,” writes Tsu, a professor of East Asian languages and literature at Yale, “at the beginning — not the end — of becoming a standard setter, from artificial intelligence to quantum natural language processing, automation to machine translation.”
  • With their “ad hoc efforts to retrofit Chinese characters” to typewriters and telegraphs, Chinese inventors sought to resolve the difficulties “that accompanied being late entrants in systems intended for a different kind of written language. But many wondered if the Chinese script itself was the problem.”
  • ...9 more annotations...
  • This book tells the stories of those who decided otherwise.
  • Tsu weaves linguistic analysis together with biographical and historical context — the ravages of imperialism, civil war, foreign invasions, diplomatic successes and disappointments. This approach not only adds background and meaning to the script debate, but also terrific color to what might have otherwise read like a textbook.
  • Could any alphabet account for the tones needed to differentiate among characters?
  • Each step of the way, these innovators had to ask questions like: How can the Chinese script be organized in a rational way? Could the language be written with an alphabet?
  • By examining these questions closely, Tsu helps the novice to Chinese understand both the underlying challenges and how they were conquered.
  • Mao, Tsu notes, “went down in history as, among other things, the political figure who guided the Chinese language through its two greatest transformations in modern history.”
  • With more than 90 percent of the population illiterate, Mao embraced the movement to reduce the number of strokes in more than 2,200 characters to render them easier to learn and write. (Taiwan, rejecting simplification, still sees itself as the guardian of traditional Chinese culture.)
  • Mao also spurred the creation of Pinyin, a phonetic, Romanized Chinese alphabet designed as an auxiliary aid to learning Chinese script, rather than a replacement.
  • in the end, the Chinese script did not die; instead, it flourished. As Tsu writes, “Every technology that has ever confronted the Chinese script, or challenged it, also had to bow before it.”
Javier E

A Terribly Serious Adventure, by Nikhil Krishnan review - The Washington Post - 0 views

  • he traces the affiliations, rivalries and intellectual spats among the eminences of mid-20th-century philosophy at Oxford: Gilbert Ryle, A.J. Ayer, J.L. Austin, R.M. Hare, Elizabeth Anscombe, Peter Strawson
  • All these thinkers focused their considerable intellectual powers on doing something similar to what I’ve done above: analyzing the words people use to probe the character and limits of how we perceive and understand the world. Such “linguistic philosophy” aimed, in Krishnan’s formulation, “to scrape away at sentences until the content of the thoughts underlying them was revealed, their form unobstructed by the distorting structures of language and idiom.”
  • ‘What’s the good of having one philosophical discussion,’ he told her once. ‘It’s like having one piano lesson.’”
  • ...10 more annotations...
  • Reading their books, he tells us, can be as exciting as reading a great novel or poem. In particular, Krishnan emphasizes the virtues they embodied in themselves and their work. “Some of these virtues were, by any reckoning, moral ones: humility, self-awareness, collegiality, restraint. Others are better thought aesthetic: elegance, concision, directness.”
  • Consider Gilbert Ryle. As the Waynflete professor of metaphysics, he was asked if he ever read novels, to which he replied, “All six of them, once a year.” Jane Austen’s, of course.
  • Another time, an American visitor wondered if it was true that Ryle, as editor of the journal Mind, would “accept or reject an article on the basis of reading just the first paragraph.” “That used to be true at one time,” Ryle supposedly answered. “I had a lot more time in those days.”
  • J.L. Austin gradually emerges as the central figure. “He listened, he understood, and when he started to speak, with the piercing clarity he brought to all things, philosophical or not, it ‘made one’s thoughts race.’”
  • Oxford discussion groups and tutorials tried to avoid those “cheap rhetorical ploys” that aim “at victory and humiliation rather than truth.” Instead their unofficial motto stressed intellectual fraternity: “Let no one join this conversation who is unwilling to be vulnerable.”
  • their meetings continually resounded with “short, punchy interrogations” that aimed “to clarify positions, pose objections and expose inconsistencies.”
  • All Austin “wanted to be, all he wanted other people to be, was rational.” His highest praise was to call someone “sensible.”
  • When Oxford announced plans to award Truman an honorary degree, Anscombe objected. She wasn’t protesting against nuclear weapons per se (as was Bertrand Russell) but simply standing up for what she regarded as an inviolable principle: “Choosing to kill the innocent as a means to your ends is always murder.” End of argument. Anscombe’s was a lonely voice, however, except for the support of her philosopher friend, Philippa Foot, whose imposing manner Krishnan brilliantly captures: “She looked like the sort of young woman who knew how to get a boisterous dog to sit.”
  • Despite the sheer entertainment available in “A Terribly Serious Adventure,” readers will want to slow down for its denser pages outlining erudite theories or explaining category mistakes and other specialized terms
  • All these philosophers, as well as a half-dozen others I haven’t been able to mention, come across as both daunting and charismatic
Javier E

Donald Trump will win in a landslide. *The mind behind 'Dilbert' explains why. - The Wa... - 0 views

  • What the Bay Area-based cartoonist recognizes, he says, is the careful art behind Trump’s rhetorical techniques.
  • Adams believes Trump will win because he’s “a master persuader.”
  • His stated credentials in this arena, says Adams — who holds an MBA from UC Berkeley — largely involve being a certified hypnotist and, as a writer and business author, an eternal student in the techniques of persuasive rhetoric.
  • ...18 more annotations...
  • he bolsters that approach, Adams says, by “exploiting the business model” like an entrepreneur. In this model, which “the news industry doesn’t have the ability to change … the media doesn’t really have the option of ignoring the most interesting story,” says Adams, contending that Trump “can always be the most interesting story if he has nothing to fear and nothing to lose.”
  • what Trump is doing? He is acknowledging the suffering of some, Adams says, and then appealing emotionally to that.
  • “The most important thing when you study hypnosis is that you learn that humans are irrational,
  • Having nothing to lose essentially then increases his chance of winning, because it opens up his field of rhetorical play.
  • Within that context, here is what Candidate Trump is doing to win campaign hearts and minds
  • 1. Trump knows people are basically irrational.
  • 2. Knowing that people are irrational, Trump aims to appeal on an emotional level.
  • “The evidence is that Trump completely ignores reality and rational thinking in favor of emotional appeal,” Adams writes. “Sure, much of what Trump says makes sense to his supporters, but I assure you that is coincidence. Trump says whatever gets him the result he wants. He understands humans as 90-percent irrational and acts accordingly.”
  • 3. By running on emotion, facts don’t matter.
  • “There are plenty of important facts Trump does not know. But the reason he doesn’t know those facts is – in part – because he knows facts don’t matter. They never have and they never will. So he ignores them.
  • 4. If facts don’t matter, you can’t really be “wrong.”
  • “If you understand persuasion, Trump is pitch-perfect most of the time. He ignores unnecessary rational thought and objective data and incessantly hammers on what matters (emotions).”
  • “Did Trump’s involvement in the birther thing confuse you?” Adams goes on to ask. “Were you wondering how Trump could believe Obama was not a citizen? The answer is that Trump never believed anything about Obama’s place of birth. The facts were irrelevant, so he ignored them while finding a place in the hearts of conservatives.
  • 5. With fewer facts in play, it’s easier to bend reality.
  • Among the persuasive techniques that Trump uses to help bend reality, Adams says, are repetition of phrases; “thinking past the sale” so the initial part of his premise is stated as a given; and knowing the appeal of the simplest answer, which relates to the concept of Occam’s razor.
  • 6. To bend reality, Trump is a master of identity politics — and identity is the strongest persuader.
  • “The best Trump linguistic kill shots,” Adams writes, “have the following qualities: 1. Fresh word that is not generally used in politics; 2. Relates to the physicality of the subject (so you are always reminded).”
  • “Identity is always the strongest level of persuasion. The only way to beat it is with dirty tricks or a stronger identity play. … [And] Trump is well on his way to owning the identities of American, Alpha Males, and Women Who Like Alpha Males. Clinton is well on her way to owning the identities of angry women, beta males, immigrants, and disenfranchised minorities.
Javier E

Barry Latzer on Why Crime Rises and Falls - The Atlantic - 0 views

  • Barry Latzer: The optimistic view is that the late ‘60s crime tsunami, which ended in the mid-1990s, was sui generis, and we are now in a period of "permanent peace," with low crime for the foreseeable future
  • Pessimists rely on the late Eric Monkkonen's cyclical theory of crime, which suggests that the successive weakening and strengthening of social controls on violence lead to a crime roller coaster. The current zeitgeist favors a weakening of social controls, including reductions in incarcerative sentences and restrictions on police, on the grounds that the criminal-justice system is too racist, unfair, and expensive. If Monkkonen were correct, we will get a crime rise before long.
  • the most provocative feature of your book: your belief that different cultural groups show different propensities for crime, enduring over time, and that these groups carry these propensities with them when they migrate from place to place.
  • ...21 more annotations...
  • this idea and its implications stir more controversy among criminologists than any other. Would you state your position as precisely as possible in this brief space?
  • Latzer: First of all, culture and race, in the biological or genetic sense, are very different. Were it not for the racism of the 18th and 19th centuries, we might not have had a marked cultural difference between blacks and whites in the U.S. But history cannot be altered, only studied and sometimes deplored.
  • Different groups of people, insofar as they consider themselves separate from others, share various cultural characteristics: dietary, religious, linguistic, artistic, etc. They also share common beliefs and values. There is nothing terribly controversial about this. If it is mistaken then the entire fields of sociology and anthropology are built on mistaken premises.
  • With respect to violent crime, scholars are most interested in a group's preference for violence as a way of resolving interpersonal conflict. Some groups, traditionally rural, developed cultures of “honor”—strong sensitivities to personal insult. We see this among white and black southerners in the 19th century, and among southern Italian and Mexican immigrants to the U.S. in the early 20th century. These groups engaged in high levels of assaultive crimes in response to perceived slights, mainly victimizing their own kind.
  • This honor culture explains the high rates of violent crime among African Americans who, living amidst southern whites for over a century, incorporated those values. When blacks migrated north in the 20th century, they transported these rates of violence. Elijah Anderson's book, The Code of the Streets, describes the phenomenon, and Thomas Sowell, in Black Rednecks and White Liberals, helps explain it.
  • Theories of crime that point to poverty and racism have the advantage of explaining why low-income groups predominate when it comes to violent crime. What they really explain, though, is why more affluent groups refrain from such crime. And the answer is that middle-class people (regardless of race) stand to lose a great deal from such behavior.
  • Likewise, the lead removal theory. The same "lead-free" generation that engaged in less crime from 1993 on committed high rates of violent crime between 1987 and 1992.
  • Frum: Let’s flash forward to the present day. You make short work of most of the theories explaining the crime drop-off since the mid-1990s: the Freakonomics theory that attributes the crime decline to easier access to abortion after 1970; the theory that credits reductions in lead poisoning; and the theory that credits the mid-1990s economic spurt. Why are these ideas wrong? And what would you put in their place?
  • both the abortion and leaded-gasoline theories are mistaken because of a failure to explain the crime spike that immediately preceded the great downturn. Abortions became freely available starting in the 1970s, which is also when lead was removed from gasoline. Fast-forward 15 to 20 years to the period in which unwanted babies had been removed from the population and were not part of the late-adolescent, early-adult cohort. This cohort was responsible for the huge spike in crime in the late 1980s and early 1990s, the crack-cocaine crime rise. Why didn't the winnowing through abortion of this population reduce crime?
  • The cultural explanation for violence is superior to explanations that rest on poverty or racism, however, because it can account for the differentials in the violent-crime rates of groups with comparable adversities
  • As for economic booms, it is tempting to argue that they reduce crime on the theory that people who have jobs and higher incomes have less incentive to rob and steal. This is true. But violent crimes, such as murder and manslaughter, assault, and rape, are not motivated by pecuniary interests. They are motivated by arguments, often of a seemingly petty nature, desires for sexual conquest by violence in the case of rape, or domestic conflicts, none of which are related to general economic conditions
  • Rises in violent crime have much more to do with migrations of high-crime cultures, especially to locations in which governments, particularly crime-control agents, are weak.
  • Declines are more likely when crime controls are strong, and there are no migrations or demographic changes associated with crime rises
  • In short, the aging of the violent boomer generation followed by the sudden rise and demise of the crack epidemic best explains the crime trough that began in the mid-1990s and seems to be continuing even today.
  • Contrary to leftist claims, strengthened law enforcement played a major role in the crime decline. The strengthening was the result of criminal-justice policy changes demanded by the public, black and white, and was necessitated by the weakness of the criminal justice system in the late ‘60s
  • On the other hand, conservatives tend to rely too much on the strength of the criminal-justice system in explaining crime oscillations, which, as I said, have a great deal to do with migrations and demographics
  • The contemporary challenge is to keep law enforcement strong without alienating African Americans, an especially difficult proposition given the outsized violent-crime rates in low-income black communities.
  • Frum: The sad exception to the downward trend in crime since 1990 is the apparent increase in mass shootings
  • Should such attacks be included in our thinking about crime? If so, how should we think about them?
  • If we separate out the ideologically motivated mass killings, such as Orlando (apparently) and San Bernardino, then we have a different problem. Surveilling potential killers who share a violent ideology will be extremely difficult but worthwhile. Limiting the availability of rapid-fire weapons with high-capacity ammunition clips is also worth doing, but politically divisive.
  • of course, developments abroad will affect the number of incidents, as will the copycat effect in the immediate aftermath of an incident. This is a complex problem, different from ordinary killings, which, by the way, take many more lives.
Javier E

Clovis People Probably Not Alone in North America - NYTimes.com - 0 views

  • “The colonization of the Americas involved multiple technologically divergent, and possibly genetically divergent, founding groups.”
  • Indeed, new genetic evidence described in the current issue of the journal Nature shows that the Americas appeared to be first populated by three waves of migrants from Siberia: one large migration about 15,000 years ago, followed by two lesser migrations. Such a pattern had been hypothesized 25 years ago on the basis of Native American language groups spoken today, but had not been widely accepted by linguistics scholars.
  • human DNA from the cave, extracted from coprolites, or dried feces, pointed to Siberian-East Asian origins of the people.
  • ...1 more annotation...
  • The findings lend support to an emerging hypothesis that the Clovis technology, named for the town in New Mexico where the first specimens were discovered, actually arose in what is now the Southeastern United States and moved west to the Plains and the Southwest. The Western Stemmed technology began, perhaps earlier, in the West. Most artifacts of that kind have been found on the West Coast and in Idaho, Nevada, Utah and Wyoming. “We seem to have two different traditions coexisting in the United States that did not blend for a period of hundreds of years,” Dr. Jenkins said.
Javier E

The Technocratic Nightmare - NYTimes.com - 0 views

  • The European Union is an attempt to build an economic and legal superstructure without a linguistic, cultural, historic and civic base. It was the final of the post-World War II efforts — the United Nations was among the first — to build governments that were transnational, passionless and safe.
  • At this moment of crisis, it is obvious how little moral solidarity undergirds the European pseudostate. Americans in Oregon are barely aware when their tax dollars go to Americans in Arizona. We are one people with one shared destiny. West Germans were willing to pay enormous subsidies to build the former East Germany. They, too, are one people.
  • But that shared identity doesn’t exist between Germans and Greeks, or even between French and Germans. It was easy to be European when it didn’t cost anything. When sacrifices are necessary, the European identity dissolves away.
  • ...1 more annotation...
  • A European central banker said he had always wondered how Europe’s leaders could have stumbled into World War I. “From the middle of a crisis,” he said recently, “you can see how easy it is to make mistakes.”