
TOK Friends: Group items tagged "human development"


Javier E

The Future of Sex - The European - 1 view

  • Consider the most likely scenario for how human sexual behavior will develop over the next hundred years or so in the absence of cataclysm. Here’s what I see if we continue on our current path:
  • Like every other aspect of human life, our sexuality will become increasingly mediated by technology. The technology of pornography will become ever more sophisticated—even if the subject matter of porn itself will remain as primal as ever.
  • As the technology improves, society continues to grow ever more fragmented, and hundreds of millions of Chinese men with no hope of marrying a bona-fide, flesh-and-blood woman come of age, sex robots will become as common and acceptable as dildos and vibrators are today. After all, the safest sex is that which involves no other living things…
  • ...4 more annotations...
  • As our sexuality becomes ever more divorced from emotion and intimacy, a process already well underway, sex will increasingly be seen as simply a matter of provoking orgasm in the most efficient, reliable ways possible.
  • Human sexuality will continue to be subjected to the same commodification and mechanization as other aspects of our lives. Just as the 21st century saw friends replaced by Facebook friends, nature replaced by parks, ocean fisheries replaced by commercially farmed seafood, and sunshine largely supplanted by tanning salons, we’ll see sexual interaction reduced to mechanically provoked orgasm as human beings become ever more dominated by the machines and mechanistic thought processes that developed in our brains and societies like bacteria in a petri dish.
  • Gender identity will fade away as sexual interaction becomes less “human” and we grow less dependent upon binary interactions with other people. As more and more of our interactions take place with non-human partners, others’ expectations and judgments will become less relevant to the development of sexual identity, leading to greater fluidity and far less urgency and passion concerning sexual expression.
  • the collapse of western civilization may well be the best thing that could happen for human sexuality. Following the collapse of the consumerist, competitive mind-set that now dominates so much of human thought, we’d possibly be free to rebuild a social world more in keeping with our preagricultural origins, characterized by economies built upon sharing rather than hoarding, a politics of respect rather than of power, and a sexuality of intimacy rather than alienation.
anonymous

Human Brain: facts and information - 0 views

  • The human brain is more complex than any other known structure in the universe.
  • Weighing in at three pounds, on average, this spongy mass of fat and protein is made up of two overarching types of cells—called glia and neurons—and it contains many billions of each.
  • The cerebrum is the largest part of the brain, accounting for 85 percent of the organ's weight. The distinctive, deeply wrinkled outer surface is the cerebral cortex. It's the cerebrum that makes the human brain—and therefore humans—so formidable. Animals such as elephants, dolphins, and whales actually have larger brains, but humans have the most developed cerebrum. It's packed to capacity inside our skulls, with deep folds that cleverly maximize the total surface area of the cortex.
  • ...18 more annotations...
  • The cerebrum has two halves, or hemispheres, that are further divided into four regions, or lobes. The frontal lobes, located behind the forehead, are involved with speech, thought, learning, emotion, and movement.
  • Behind them are the parietal lobes, which process sensory information such as touch, temperature, and pain.
  • At the rear of the brain are the occipital lobes, dealing with vision
  • Lastly, there are the temporal lobes, near the temples, which are involved with hearing and memory.
  • The second-largest part of the brain is the cerebellum, which sits beneath the back of the cerebrum.
  • The diencephalon is located in the core of the brain. A complex of structures roughly the size of an apricot, its two major sections are the thalamus and hypothalamus
  • The brain is extremely sensitive and delicate, and so it requires maximum protection, which is provided by the hard bone of the skull and three tough membranes called meninges.
  • Want more proof that the brain is extraordinary? Look no further than the blood-brain barrier.
  • This led scientists to learn that the brain has an ingenious, protective layer. Called the blood-brain barrier, it’s made up of special, tightly bound cells that together function as a kind of semi-permeable gate throughout most of the organ. It keeps the brain environment safe and stable by preventing some toxins, pathogens, and other harmful substances from entering the brain through the bloodstream, while simultaneously allowing oxygen and vital nutrients to pass through.
  • One in five Americans suffers from some form of neurological damage, a wide-ranging list that includes stroke, epilepsy, and cerebral palsy, as well as dementia.
  • Alzheimer’s disease, which is characterized in part by a gradual progression of short-term memory loss, disorientation, and mood swings, is the most common cause of dementia. It is the sixth leading cause of death in the United States
  • 50 million people suffer from Alzheimer’s or some form of dementia. While there are a handful of drugs available to mitigate Alzheimer’s symptoms, there is no cure.
  • Unfortunately, negative attitudes toward people who suffer from mental illness are widespread. The stigma attached to mental illness can create feelings of shame, embarrassment, and rejection, causing many people to suffer in silence.
  • In the United States, where anxiety disorders are the most common forms of mental illness, only about 40 percent of sufferers receive treatment. Anxiety disorders often stem from abnormalities in the brain’s hippocampus and prefrontal cortex.
  • Attention-deficit/hyperactivity disorder, or ADHD, is a mental health condition that also affects adults but is far more often diagnosed in children.
  • ADHD is characterized by hyperactivity and an inability to stay focused.
  • Depression is another common mental health condition. It is the leading cause of disability worldwide and is often accompanied by anxiety. Depression can be marked by an array of symptoms, including persistent sadness, irritability, and changes in appetite.
  • The good news is that in general, anxiety and depression are highly treatable through various medications—which help the brain use certain chemicals more efficiently—and through forms of therapy
  • Here is some anatomy of the brain and descriptions of diseases like Alzheimer's and conditions like ADHD, depression, and anxiety.
runlai_jiang

Human Microbiome and Microbiota - 0 views

  • The human microbiota consists of the entire collection of microbes that live in and on the body. In fact, there are 10 times as many microbial inhabitants of the body as there are body cells. Study of the human microbiome is inclusive of inhabitant microbes as well as the entire genomes of the body's microbial communities. These microbes reside in distinct locations in the ecosystem of the human body and perform important functions that are necessary for healthy human development.
  • Microbes of the BodyMicroscopic organisms that inhabit the body include archaea, bacteria, fungi, protists, and viruses. Microbes start to colonize the body from the moment of birth. An individual's microbiome changes in number and type throughout his or her lifetime, with the numbers of species increasing from birth to adulthood and decreasing in old age. These microbes are unique from person to person and can be impacted by certain activities, such as hand washing or taking antibiotics. Bacteria are the most numerous microbes in the human microbiome.
  • Human skin is populated by a number of different microbes that reside on the surface of the skin, as well as within glands and hair. Our skin is in constant contact with our external environment and serves as the body's first line of defense against potential pathogens. Skin microbiota help to prevent pathogenic microbes from colonizing the skin by occupying skin surfaces. They also help to educate our immune system by alerting immune cells to the presence of pathogens and initiating an immune response
  • ...2 more annotations...
  • The human gut microbiome is diverse and dominated by trillions of bacteria with as many as one-thousand different bacterial species. These microbes thrive in the harsh conditions of the gut and are heavily involved in maintaining healthy nutrition, normal metabolism, and proper immune function. They aid in the digestion of non-digestible carbohydrates, the metabolism of bile acid and drugs, and in the synthesis of amino acids and many vitamins. A number of gut microbes also produce antimicrobial substances that protect against pathogenic bacteria.
  • Microbiota of the oral cavity number in the millions and include archaea, bacteria, fungi, protists, and viruses. These organisms exist together, most in a mutualistic relationship with the host, where both the microbes and the host benefit from the relationship. While the majority of oral microbes are beneficial, preventing harmful microbes from colonizing the mouth, some have been known to become pathogenic in response to environmental changes. Bacteria are the most numerous of the oral microbes and include Streptococcus, Actinomyces, Lactobacterium, Staphylococcus, and Propionibacterium.
Ellie McGinnis

Role of Humanities, in School and Life - NYTimes.com - 0 views

  • the major value of a college curriculum, and the reason an undergraduate degree is still preferable to a random menu of massive online open courses, is the opportunity it offers students through a variety of disciplines and the different skills specific to each
  • most colleges do not view humanities and sciences as in competition with each other. Today’s students need to develop the capacity for open-ended inquiry cultivated by the liberal arts, and also the problem-solving skills associated with science and technology.
  • a major factor that’s reshaped humanities education since 1970, when the decline began: postmodernism.
  • ...5 more annotations...
  • I fled my passion, literature, for a practical and rational-minded career in medicine.
  • More important, studying the humanities helps us make sense of our lives and our world, whether the times are good or bad.
  • But the humanities are not on life support. They are alive and well, and remain vitally important in preparing graduates to lead meaningful, considered lives, to flourish in multiple careers and to be informed, engaged citizens of our democracy and our rapidly evolving world
  • While the professors justifiably cite inadequate funding and marketplace demand for scientists and engineers as causes of the marginalization of the humanities, they also ought to look inward at their profession’s rejection of the rational ideals that make the educated world go round.
  • The narrow focus on STEM education can produce a well-trained work force. What the country and the world need are well-educated citizens.
Javier E

Joshua Foer: John Quijada and Ithkuil, the Language He Invented : The New Yorker - 2 views

  • Languages are something of a mess. They evolve over centuries through an unplanned, democratic process that leaves them teeming with irregularities, quirks, and words like “knight.” No one who set out to design a form of communication would ever end up with anything like English, Mandarin, or any of the more than six thousand languages spoken today.“Natural languages are adequate, but that doesn’t mean they’re optimal,” John Quijada, a fifty-four-year-old former employee of the California State Department of Motor Vehicles, told me. In 2004, he published a monograph on the Internet that was titled “Ithkuil: A Philosophical Design for a Hypothetical Language.” Written like a linguistics textbook, the fourteen-page Web site ran to almost a hundred and sixty thousand words. It documented the grammar, syntax, and lexicon of a language that Quijada had spent three decades inventing in his spare time. Ithkuil had never been spoken by anyone other than Quijada, and he assumed that it never would be.
  • his “greater goal” was “to attempt the creation of what human beings, left to their own devices, would never create naturally, but rather only by conscious intellectual effort: an idealized language whose aim is the highest possible degree of logic, efficiency, detail, and accuracy in cognitive expression via spoken human language, while minimizing the ambiguity, vagueness, illogic, redundancy, polysemy (multiple meanings) and overall arbitrariness that is seemingly ubiquitous in natural human language.”
  • Ithkuil, one Web site declared, “is a monument to human ingenuity and design.” It may be the most complete realization of a quixotic dream that has entranced philosophers for centuries: the creation of a more perfect language.
  • ...25 more annotations...
  • Since at least the Middle Ages, philosophers and philologists have dreamed of curing natural languages of their flaws by constructing entirely new idioms according to orderly, logical principles.
  • What if, they wondered, you could create a universal written language that could be understood by anyone, a set of “real characters,” just as the creation of Arabic numerals had done for counting? “This writing will be a kind of general algebra and calculus of reason, so that, instead of disputing, we can say that ‘we calculate,’ ” Leibniz wrote, in 1679.
  • Inventing new forms of speech is an almost cosmic urge that stems from what the linguist Marina Yaguello, the author of “Lunatic Lovers of Language,” calls “an ambivalent love-hate relationship.” Language creation is pursued by people who are so in love with what language can do that they hate what it doesn’t. “I don’t believe any other fantasy has ever been pursued with so much ardor by the human spirit, apart perhaps from the philosopher’s stone or the proof of the existence of God; or that any other utopia has caused so much ink to flow, apart perhaps from socialism,”
  • Quijada began wondering, “What if there were one single language that combined the coolest features from all the world’s languages?”
  • Solresol, the creation of a French musician named Jean-François Sudre, was among the first of these universal languages to gain popular attention. It had only seven syllables: Do, Re, Mi, Fa, So, La, and Si. Words could be sung, or performed on a violin. Or, since the language could also be translated into the seven colors of the rainbow, sentences could be woven into a textile as a stream of colors.
  • “I had this realization that every individual language does at least one thing better than every other language,” he said. For example, the Australian Aboriginal language Guugu Yimithirr doesn’t use egocentric coördinates like “left,” “right,” “in front of,” or “behind.” Instead, speakers use only the cardinal directions. They don’t have left and right legs but north and south legs, which become east and west legs upon turning ninety degrees
  • Among the Wakashan Indians of the Pacific Northwest, a grammatically correct sentence can’t be formed without providing what linguists refer to as “evidentiality,” inflecting the verb to indicate whether you are speaking from direct experience, inference, conjecture, or hearsay.
  • In his “Essay Towards a Real Character, and a Philosophical Language,” from 1668, Wilkins laid out a sprawling taxonomic tree that was intended to represent a rational classification of every concept, thing, and action in the universe. Each branch along the tree corresponded to a letter or a syllable, so that assembling a word was simply a matter of tracing a set of forking limbs
  • he started scribbling notes on an entirely new grammar that would eventually incorporate not only Wakashan evidentiality and Guugu Yimithirr coördinates but also Niger-Kordofanian aspectual systems, the nominal cases of Basque, the fourth-person referent found in several nearly extinct Native American languages, and a dozen other wild ways of forming sentences.
  • he discovered “Metaphors We Live By,” a seminal book, published in 1980, by the cognitive linguists George Lakoff and Mark Johnson, which argues that the way we think is structured by conceptual systems that are largely metaphorical in nature. Life is a journey. Time is money. Argument is war. For better or worse, these figures of speech are profoundly embedded in how we think.
  • I asked him if he could come up with an entirely new concept on the spot, one for which there was no word in any existing language. He thought about it for a moment. “Well, no language, as far as I know, has a single word for that chin-stroking moment you get, often accompanied by a frown on your face, when someone expresses an idea that you’ve never thought of and you have a moment of suddenly seeing possibilities you never saw before.” He paused, as if leafing through a mental dictionary. “In Ithkuil, it’s ašţal.”
  • Neither Sapir nor Whorf formulated a definitive version of the hypothesis that bears their names, but in general the theory argues that the language we speak actually shapes our experience of reality. Speakers of different languages think differently. Stronger versions of the hypothesis go even further than this, to suggest that language constrains the set of possible thoughts that we can have. In 1955, a sociologist and science-fiction writer named James Cooke Brown decided he would test the Sapir-Whorf hypothesis by creating a “culturally neutral” “model language” that might recondition its speakers’ brains.
  • most conlangers come to their craft by way of fantasy and science fiction. J. R. R. Tolkien, who called conlanging his “secret vice,” maintained that he created the “Lord of the Rings” trilogy for the primary purpose of giving his invented languages, Quenya, Sindarin, and Khuzdul, a universe in which they could be spoken. And arguably the most commercially successful invented language of all time is Klingon, which has its own translation of “Hamlet” and a dictionary that has sold more than three hundred thousand copies.
  • He imagined that Ithkuil might be able to do what Lakoff and Johnson said natural languages could not: force its speakers to precisely identify what they mean to say. No hemming, no hawing, no hiding true meaning behind jargon and metaphor. By requiring speakers to carefully consider the meaning of their words, he hoped that his analytical language would force many of the subterranean quirks of human cognition to the surface, and free people from the bugs that infect their thinking.
  • Brown based the grammar for his ten-thousand-word language, called Loglan, on the rules of formal predicate logic used by analytical philosophers. He hoped that, by training research subjects to speak Loglan, he might turn them into more logical thinkers. If we could change how we think by changing how we speak, then the radical possibility existed of creating a new human condition.
  • today the stronger versions of the Sapir-Whorf hypothesis have “sunk into . . . disrepute among respectable linguists,” as Guy Deutscher writes, in “Through the Language Glass: Why the World Looks Different in Other Languages.” But, as Deutscher points out, there is evidence to support the less radical assertion that the particular language we speak influences how we perceive the world. For example, speakers of gendered languages, like Spanish, in which all nouns are either masculine or feminine, actually seem to think about objects differently depending on whether the language treats them as masculine or feminine
  • The final version of Ithkuil, which Quijada published in 2011, has twenty-two grammatical categories for verbs, compared with the six—tense, aspect, person, number, mood, and voice—that exist in English. Eighteen hundred distinct suffixes further refine a speaker’s intent. Through a process of laborious conjugation that would befuddle even the most competent Latin grammarian, Ithkuil requires a speaker to home in on the exact idea he means to express, and attempts to remove any possibility for vagueness.
  • Every language has its own phonemic inventory, or library of sounds, from which a speaker can string together words. Consonant-poor Hawaiian has just thirteen phonemes. English has around forty-two, depending on dialect. In order to pack as much meaning as possible into each word, Ithkuil has fifty-eight phonemes. The original version of the language included a repertoire of grunts, wheezes, and hacks that are borrowed from some of the world’s most obscure tongues. One particular hard-to-make clicklike sound, a voiceless uvular ejective affricate, has been found in only a few other languages, including the Caucasian language Ubykh, whose last native speaker died in 1992.
  • Human interactions are governed by a set of implicit codes that can sometimes seem frustratingly opaque, and whose misreading can quickly put you on the outside looking in. Irony, metaphor, ambiguity: these are the ingenious instruments that allow us to mean more than we say. But in Ithkuil ambiguity is quashed in the interest of making all that is implicit explicit. An ironic statement is tagged with the verbal affix ’kçç. Hyperbolic statements are inflected by the letter ’m.
  • “I wanted to use Ithkuil to show how you would discuss philosophy and emotional states transparently,” Quijada said. To attempt to translate a thought into Ithkuil requires investigating a spectrum of subtle variations in meaning that are not recorded in any natural language. You cannot express a thought without first considering all the neighboring thoughts that it is not. Though words in Ithkuil may sound like a hacking cough, they have an inherent and unavoidable depth. “It’s the ideal language for political and philosophical debate—any forum where people hide their intent or obfuscate behind language,” Quijada said.
  • In Ithkuil, the difference between glimpsing, glancing, and gawking is the mere flick of a vowel. Each of these distinctions is expressed simply as a conjugation of the root word for vision. Hunched over the dining-room table, Quijada showed me how he would translate “gawk” into Ithkuil. First, though, since words in Ithkuil are assembled from individual atoms of meaning, he had to engage in some introspection about what exactly he meant to say. For fifteen minutes, he flipped backward and forward through his thick spiral-bound manuscript, scratching his head, pondering each of the word’s aspects, as he packed the verb with all of gawking’s many connotations. As he assembled the evolving word from its constituent meanings, he scribbled its pieces on a notepad. He added the “second degree of the affix for expectation of outcome” to suggest an element of surprise that is more than mere unpreparedness but less than outright shock, and the “third degree of the affix for contextual appropriateness” to suggest an element of impropriety that is less than scandalous but more than simply eyebrow-raising. As he rapped his pen against the notepad, he paged through his manuscript in search of the third pattern of the first stem of the root for “shock” to suggest a “non-volitional physiological response,” and then, after several moments of contemplation, he decided that gawking required the use of the “resultative format” to suggest “an event which occurs in conjunction with the conflated sense but is also caused by it.” He eventually emerged with a tiny word that hardly rolled off the tongue: apq’uxasiu. He spoke the first clacking syllable aloud a couple of times before deciding that he had the pronunciation right, and then wrote it down in the script he had invented for printed Ithkuil.
  • “You can make up words by the millions to describe concepts that have never existed in any language before,” he said.
  • Many conlanging projects begin with a simple premise that violates the inherited conventions of linguistics in some new way. Aeo uses only vowels. Kēlen has no verbs. Toki Pona, a language inspired by Taoist ideals, was designed to test how simple a language could be. It has just a hundred and twenty-three words and fourteen basic sound units. Brithenig is an answer to the question of what English might have sounded like as a Romance language, if vulgar Latin had taken root on the British Isles. Láadan, a feminist language developed in the early nineteen-eighties, includes words like radíidin, defined as a “non-holiday, a time allegedly a holiday but actually so much a burden because of work and preparations that it is a dreaded occasion; especially when there are too many guests and none of them help.”
  • “We think that when a person learns Ithkuil his brain works faster,” Vishneva told him, in Russian. She spoke through a translator, as neither she nor Quijada was yet fluent in their shared language. “With Ithkuil, you always have to be reflecting on yourself. Using Ithkuil, we can see things that exist but don’t have names, in the same way that Mendeleyev’s periodic table showed gaps where we knew elements should be that had yet to be discovered.”
  • Lakoff, who is seventy-one, bearded, and, like Quijada, broadly built, seemed to have read a fair portion of the Ithkuil manuscript and familiarized himself with the language’s nuances. “There are a whole lot of questions I have about this,” he told Quijada, and then explained how he felt Quijada had misread his work on metaphor. “Metaphors don’t just show up in language,” he said. “The metaphor isn’t in the word, it’s in the idea,” and it can’t be wished away with grammar. “For me, as a linguist looking at this, I have to say, ‘O.K., this isn’t going to be used.’ It has an assumption of efficiency that really isn’t efficient, given how the brain works. It misses the metaphor stuff. But the parts that are successful are really nontrivial. This may be an impossible language,” he said. “But if you think of it as a conceptual-art project I think it’s fascinating.”
Javier E

Accelerationism: how a fringe philosophy predicted the future we live in | World news |... - 1 view

  • Roger Zelazny published his third novel. In many ways, Lord of Light was of its time, shaggy with imported Hindu mythology and cosmic dialogue. Yet there were also glints of something more forward-looking and political.
  • accelerationism has gradually solidified from a fictional device into an actual intellectual movement: a new way of thinking about the contemporary world and its potential.
  • Accelerationists argue that technology, particularly computer technology, and capitalism, particularly the most aggressive, global variety, should be massively sped up and intensified – either because this is the best way forward for humanity, or because there is no alternative.
  • ...31 more annotations...
  • Accelerationists favour automation. They favour the further merging of the digital and the human. They often favour the deregulation of business, and drastically scaled-back government. They believe that people should stop deluding themselves that economic and technological progress can be controlled.
  • Accelerationism, therefore, goes against conservatism, traditional socialism, social democracy, environmentalism, protectionism, populism, nationalism, localism and all the other ideologies that have sought to moderate or reverse the already hugely disruptive, seemingly runaway pace of change in the modern world
  • Robin Mackay and Armen Avanessian in their introduction to #Accelerate: The Accelerationist Reader, a sometimes baffling, sometimes exhilarating book, published in 2014, which remains the only proper guide to the movement in existence.
  • “We all live in an operating system set up by the accelerating triad of war, capitalism and emergent AI,” says Steve Goodman, a British accelerationist
  • A century ago, the writers and artists of the Italian futurist movement fell in love with the machines of the industrial era and their apparent ability to invigorate society. Many futurists followed this fascination into war-mongering and fascism.
  • One of the central figures of accelerationism is the British philosopher Nick Land, who taught at Warwick University in the 1990s
  • Land has published prolifically on the internet, not always under his own name, about the supposed obsolescence of western democracy; he has also written approvingly about “human biodiversity” and “capitalistic human sorting” – the pseudoscientific idea, currently popular on the far right, that different races “naturally” fare differently in the modern world; and about the supposedly inevitable “disintegration of the human species” when artificial intelligence improves sufficiently.
  • In our politically febrile times, the impatient, intemperate, possibly revolutionary ideas of accelerationism feel relevant, or at least intriguing, as never before. Noys says: “Accelerationists always seem to have an answer. If capitalism is going fast, they say it needs to go faster. If capitalism hits a bump in the road, and slows down” – as it has since the 2008 financial crisis – “they say it needs to be kickstarted.”
  • On alt-right blogs, Land in particular has become a name to conjure with. Commenters have excitedly noted the connections between some of his ideas and the thinking of both the libertarian Silicon Valley billionaire Peter Thiel and Trump’s iconoclastic strategist Steve Bannon.
  • “In Silicon Valley,” says Fred Turner, a leading historian of America’s digital industries, “accelerationism is part of a whole movement which is saying, we don’t need [conventional] politics any more, we can get rid of ‘left’ and ‘right’, if we just get technology right. Accelerationism also fits with how electronic devices are marketed – the promise that, finally, they will help us leave the material world, all the mess of the physical, far behind.”
  • In 1972, the philosopher Gilles Deleuze and the psychoanalyst Félix Guattari published Anti-Oedipus. It was a restless, sprawling, appealingly ambiguous book, which suggested that, rather than simply oppose capitalism, the left should acknowledge its ability to liberate as well as oppress people, and should seek to strengthen these anarchic tendencies, “to go still further … in the movement of the market … to ‘accelerate the process’”.
  • By the early 90s Land had distilled his reading, which included Deleuze and Guattari and Lyotard, into a set of ideas and a writing style that, to his students at least, were visionary and thrillingly dangerous. Land wrote in 1992 that capitalism had never been properly unleashed, but instead had always been held back by politics, “the last great sentimental indulgence of mankind”. He dismissed Europe as a sclerotic, increasingly marginal place, “the racial trash-can of Asia”. And he saw civilisation everywhere accelerating towards an apocalypse: “Disorder must increase... Any [human] organisation is ... a mere ... detour in the inexorable death-flow.”
  • With the internet becoming part of everyday life for the first time, and capitalism seemingly triumphant after the collapse of communism in 1989, a belief that the future would be almost entirely shaped by computers and globalisation – the accelerated “movement of the market” that Deleuze and Guattari had called for two decades earlier – spread across British and American academia and politics during the 90s. The Warwick accelerationists were in the vanguard.
  • In the US, confident, rainbow-coloured magazines such as Wired promoted what became known as “the Californian ideology”: the optimistic claim that human potential would be unlocked everywhere by digital technology. In Britain, this optimism influenced New Labour
  • The Warwick accelerationists saw themselves as participants, not traditional academic observers
  • The CCRU gang formed reading groups and set up conferences and journals. They squeezed into the narrow CCRU room in the philosophy department and gave each other impromptu seminars.
  • The main result of the CCRU’s frantic, promiscuous research was a conveyor belt of cryptic articles, crammed with invented terms, sometimes speculative to the point of being fiction.
  • At Warwick, however, the prophecies were darker. “One of our motives,” says Plant, “was precisely to undermine the cheery utopianism of the 90s, much of which seemed very conservative” – an old-fashioned male desire for salvation through gadgets, in her view.
  • K-punk was written by Mark Fisher, formerly of the CCRU. The blog retained some Warwick traits, such as quoting reverently from Deleuze and Guattari, but it gradually shed the CCRU’s aggressive rhetoric and pro-capitalist politics for a more forgiving, more left-leaning take on modernity. Fisher increasingly felt that capitalism was a disappointment to accelerationists, with its cautious, entrenched corporations and endless cycles of essentially the same products. But he was also impatient with the left, which he thought was ignoring new technology
  • Alex Williams co-wrote a Manifesto for an Accelerationist Politics. “Capitalism has begun to constrain the productive forces of technology,” they wrote. “[Our version of] accelerationism is the basic belief that these capacities can and should be let loose … repurposed towards common ends … towards an alternative modernity.”
  • What that “alternative modernity” might be was barely, but seductively, sketched out, with fleeting references to reduced working hours, to technology being used to reduce social conflict rather than exacerbate it, and to humanity moving “beyond the limitations of the earth and our own immediate bodily forms”. On politics and philosophy blogs from Britain to the US and Italy, the notion spread that Srnicek and Williams had founded a new political philosophy: “left accelerationism”.
  • Two years later, in 2015, they expanded the manifesto into a slightly more concrete book, Inventing the Future. It argued for an economy based as far as possible on automation, with the jobs, working hours and wages lost replaced by a universal basic income. The book attracted more attention than a speculative leftwing work had for years, with interest and praise from intellectually curious leftists
  • Even the thinking of the arch-accelerationist Nick Land, who is 55 now, may be slowing down. Since 2013, he has become a guru for the US-based far-right movement neoreaction, or NRx as it often calls itself. Neoreactionaries believe in the replacement of modern nation-states, democracy and government bureaucracies by authoritarian city states, which on neoreaction blogs sound as much like idealised medieval kingdoms as they do modern enclaves such as Singapore.
  • Land argues now that neoreaction, like Trump and Brexit, is something that accelerationists should support, in order to hasten the end of the status quo.
  • In 1970, the American writer Alvin Toffler, an exponent of accelerationism’s more playful intellectual cousin, futurology, published Future Shock, a book about the possibilities and dangers of new technology. Toffler predicted the imminent arrival of artificial intelligence, cryonics, cloning and robots working behind airline check-in desks
  • Land left Britain. He moved to Taiwan “early in the new millennium”, he told me, then to Shanghai “a couple of years later”. He still lives there now.
  • In a 2004 article for the Shanghai Star, an English-language paper, he described the modern Chinese fusion of Marxism and capitalism as “the greatest political engine of social and economic development the world has ever known”
  • Once he lived there, Land told me, he realised that “to a massive degree” China was already an accelerationist society: fixated by the future and changing at speed. Presented with the sweeping projects of the Chinese state, his previous, libertarian contempt for the capabilities of governments fell away
  • Without a dynamic capitalism to feed off, as Deleuze and Guattari had in the early 70s, and the Warwick philosophers had in the 90s, it may be that accelerationism just races up blind alleys. In his 2014 book about the movement, Malign Velocities, Benjamin Noys accuses it of offering “false” solutions to current technological and economic dilemmas. With accelerationism, he writes, a breakthrough to a better future is “always promised and always just out of reach”.
  • “The pace of change accelerates,” concluded a documentary version of the book, with a slightly hammy voiceover by Orson Welles. “We are living through one of the greatest revolutions in history – the birth of a new civilisation.”
  • Shortly afterwards, the 1973 oil crisis struck. World capitalism did not accelerate again for almost a decade. For much of the “new civilisation” Toffler promised, we are still waiting
Javier E

The Coming Software Apocalypse - The Atlantic - 1 view

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • Software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
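
A rough sketch of why a single flipped bit matters so much; the variable name and numbers below are invented for illustration and have nothing to do with Toyota's actual code.

```python
# Hypothetical illustration: one flipped bit can turn a small value into a huge one.
# This is not Toyota's code; it only shows why a single memory fault is dangerous.

def flip_bit(value: int, bit: int) -> int:
    """Return value with the given bit inverted."""
    return value ^ (1 << bit)

throttle_percent = 5                        # intended: engine nearly at idle
corrupted = flip_bit(throttle_percent, 6)   # a single bit in memory flips

print(throttle_percent)  # 5
print(corrupted)         # 69 -- the controller now believes the pedal is well over half open
```
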
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
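
A bare-bones sketch of the "liveness" idea described above, assuming a hypothetical sketch.py file the user is editing; real tools such as Light Table or Swift Playgrounds go much further, but the feedback loop is the same.

```python
# Re-run a script every time it is saved, so edits are reflected immediately.
# SCRIPT is a hypothetical file the user is editing in another window.

import time
from pathlib import Path

SCRIPT = Path("sketch.py")

last_mtime = 0.0
while True:
    mtime = SCRIPT.stat().st_mtime
    if mtime != last_mtime:          # the file was saved: re-evaluate it
        last_mtime = mtime
        print("--- re-running ---")
        exec(compile(SCRIPT.read_text(), str(SCRIPT), "exec"), {})
    time.sleep(0.2)                  # poll a few times per second
```
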
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
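
A minimal sketch of the elevator rules above written as an explicit state machine; the state and event names are assumptions for illustration, not the output of any real model-based design tool.

```python
# Minimal state-machine sketch of the elevator rule described above.
# The allowed transitions are spelled out as data, so the "model" itself is
# what you read and reason about, rather than scattered control-flow code.

TRANSITIONS = {
    ("door_open", "close_door"): "door_closed",
    ("door_closed", "open_door"): "door_open",
    ("door_closed", "move"): "moving",    # the only way to get moving is to close the door
    ("moving", "stop"): "door_closed",    # the only way to open the door is to stop first
}

def step(state: str, event: str) -> str:
    """Apply an event; illegal transitions are rejected rather than silently ignored."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")

state = "door_open"
for event in ["close_door", "move", "stop", "open_door"]:
    state = step(state, event)
print(state)  # door_open
```
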
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
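
To make the quoted point concrete, a back-of-the-envelope calculation; the per-request probability is an invented placeholder, not a figure from the paper.

```python
# Back-of-the-envelope: an event that is "one in a billion" per request
# stops being rare at web scale. The probability below is an invented placeholder.

p_per_request = 1e-9              # "extremely rare" combination of events
requests_per_second = 1_000_000
seconds_per_day = 86_400

expected_per_day = p_per_request * requests_per_second * seconds_per_day
print(expected_per_day)           # ~86 occurrences per day, not a once-in-a-lifetime fluke
```
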
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
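
A toy sketch, in Python rather than TLA+, of what "exhaustively" means here: enumerate every interleaving of a tiny two-process design and check a property against all of them. The design and property are invented for illustration.

```python
# Toy sketch of exhaustive checking in the spirit of a model checker (not TLA+ itself).
# Two processes each perform read -> increment -> write on a shared counter.
# We enumerate EVERY valid interleaving and check the property "final counter == 2".

from itertools import permutations

def run(schedule):
    counter = 0
    local = {"A": None, "B": None}
    for proc, op in schedule:
        if op == "read":
            local[proc] = counter
        else:                      # "write": store the incremented local copy
            counter = local[proc] + 1
    return counter

steps = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]

violations = []
for schedule in permutations(steps):
    # keep only schedules where each process reads before it writes
    if schedule.index(("A", "read")) < schedule.index(("A", "write")) and \
       schedule.index(("B", "read")) < schedule.index(("B", "write")):
        if run(schedule) != 2:
            violations.append(schedule)

print(len(violations))  # > 0: exhaustive search finds interleavings that lose an update
```
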
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
Javier E

Stephen Hawking just gave humanity a due date for finding another planet - The Washingt... - 0 views

  • Hawking told the audience that Earth's cataclysmic end may be hastened by humankind, which will continue to devour the planet’s resources at unsustainable rates
  • “Although the chance of a disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next thousand or ten thousand years. By that time we should have spread out into space, and to other stars, so a disaster on Earth would not mean the end of the human race.”
  • “I think the development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in a 2014 interview that touched upon everything from online privacy to his affinity for his robotic-sounding voice.
  • ...1 more annotation...
  • “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate,” Hawking warned in recent months. “Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.”
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • ...24 more annotations...
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
Javier E

Psychological nativism - Wikipedia - 0 views

  • In the field of psychology, nativism is the view that certain skills or abilities are "native" or hard-wired into the brain at birth. This is in contrast to the "blank slate" or tabula rasa view, which states that the brain has inborn capabilities for learning from the environment but does not contain content such as innate beliefs.
  • Some nativists believe that specific beliefs or preferences are "hard-wired". For example, one might argue that some moral intuitions are innate or that color preferences are innate. A less established argument is that nature supplies the human mind with specialized learning devices. This latter view differs from empiricism only to the extent that the algorithms that translate experience into information may be more complex and specialized in nativist theories than in empiricist theories. However, empiricists largely remain open to the nature of learning algorithms and are by no means restricted to the historical associationist mechanisms of behaviorism.
  • Nativism has a history in philosophy, particularly as a reaction to the straightforward empiricist views of John Locke and David Hume. Hume had given persuasive logical arguments that people cannot infer causality from perceptual input. The most one could hope to infer is that two events happen in succession or simultaneously. One response to this argument involves positing that concepts not supplied by experience, such as causality, must exist prior to any experience and hence must be innate.
  • ...14 more annotations...
  • The philosopher Immanuel Kant (1724–1804) argued in his Critique of Pure Reason that the human mind knows objects in innate, a priori ways. Kant claimed that humans, from birth, must experience all objects as being successive (time) and juxtaposed (space). His list of inborn categories describes predicates that the mind can attribute to any object in general. Arthur Schopenhauer (1788–1860) agreed with Kant, but reduced the number of innate categories to one—causality—which presupposes the others.
  • Modern nativism is most associated with the work of Jerry Fodor (1935–2017), Noam Chomsky (b. 1928), and Steven Pinker (b. 1954), who argue that humans from birth have certain cognitive modules (specialised genetically inherited psychological abilities) that allow them to learn and acquire certain skills, such as language.
  • For example, children demonstrate a facility for acquiring spoken language but require intensive training to learn to read and write. This poverty of the stimulus observation became a principal component of Chomsky's argument for a "language organ"—a genetically inherited neurological module that confers a somewhat universal understanding of syntax that all neurologically healthy humans are born with, which is fine-tuned by an individual's experience with their native language
  • In The Blank Slate (2002), Pinker similarly cites the linguistic capabilities of children, relative to the amount of direct instruction they receive, as evidence that humans have an inborn facility for speech acquisition (but not for literacy acquisition).
  • A number of other theorists[1][2][3] have disagreed with these claims. Instead, they have outlined alternative theories of how modularization might emerge over the course of development, as a result of a system gradually refining and fine-tuning its responses to environmental stimuli.[4]
  • Many empiricists are now also trying to apply modern learning models and techniques to the question of language acquisition, with marked success.[20] Similarity-based generalization marks another avenue of recent research, which suggests that children may be able to rapidly learn how to use new words by generalizing about the usage of similar words that they already know (see also the distributional hypothesis).[14][21][22][23]
  • The term universal grammar (or UG) is used for the purported innate biological properties of the human brain, whatever exactly they turn out to be, that are responsible for children's successful acquisition of a native language during the first few years of life. The person most strongly associated with the hypothesising of UG is Noam Chomsky, although the idea of Universal Grammar has clear historical antecedents at least as far back as the 1300s, in the form of the Speculative Grammar of Thomas of Erfurt.
  • This evidence is all the more impressive when one considers that most children do not receive reliable corrections for grammatical errors.[9] Indeed, even children who for medical reasons cannot produce speech, and therefore have no possibility of producing an error in the first place, have been found to master both the lexicon and the grammar of their community's language perfectly.[10] The fact that children succeed at language acquisition even when their linguistic input is severely impoverished, as it is when no corrective feedback is available, is related to the argument from the poverty of the stimulus, and is another claim for a central role of UG in child language acquisition.
  • Researchers at Blue Brain discovered a network of about fifty neurons which they believed were building blocks of more complex knowledge but contained basic innate knowledge that could be combined in different more complex ways to give way to acquired knowledge, like memory.[11]
  • If knowledge were acquired purely through experience, the tests would bring about very different characteristics for each rat. However, the rats all displayed similar characteristics, which suggests that their neuronal circuits must have been established previously to their experiences. The Blue Brain Project research suggests that some of the "building blocks" of knowledge are genetic and present at birth.[11]
  • modern nativist theory makes little in the way of specific falsifiable and testable predictions, and has been compared by some empiricists to a pseudoscience or nefarious brand of "psychological creationism". As the influential psychologist Henry L. Roediger III remarked, "Chomsky was and is a rationalist; he had no uses for experimental analyses or data of any sort that pertained to language, and even experimental psycholinguistics was and is of little interest to him".[13]
  • Chomsky's poverty of the stimulus argument is controversial within linguistics.[14][15][16][17][18][19]
  • Neither the five-year-old nor the adults in the community can easily articulate the principles of the grammar they are following. Experimental evidence shows that infants come equipped with presuppositions that allow them to acquire the rules of their language.[6]
  • Paul Griffiths, in "What is Innateness?", argues that innateness is too confusing a concept to be fruitfully employed as it confuses "empirically dissociated" concepts. In a previous paper, Griffiths argued that innateness specifically confuses these three distinct biological concepts: developmental fixity, species nature, and intended outcome. Developmental fixity refers to how insensitive a trait is to environmental input, species nature reflects what it is to be an organism of a certain kind, and the intended outcome is how an organism is meant to develop.[24]
grayton downing

Lab-Grown Model Brains | The Scientist Magazine® - 0 views

  • In an Austrian laboratory, a team of scientists has grown three-dimensional models of embryonic human brains
  • “Even the most complex organ—the human brain—can start to form without any micro-manipulation.”
  • Knoblich cautioned that the organoids are not “brains-in-a-jar.” “We’re talking about the very first steps of embryonic brain development, like in the first nine weeks of pregnancy,” he said. “They’re nowhere near an adult human brain and they don’t form anything that resembles a neuronal network.”
  • ...6 more annotations...
  • It took a huge amount of work to fine-tune the conditions, but once the team did, the organoids grew successfully within just 20 to 30 days.
  • scientists have developed organoids that mimic several human organs, including eyes, kidneys, intestines, and even brains.
  • They really highlight the ability to just nudge these human embryonic cells and allow them to self-assemble
  • “The mouse brain isn’t good enough for studying microcephaly,” said Huttner. “You need to put those genes into an adequate model like this one. It is, after all, human. It definitely enriches the field. There’s no doubt about that.”
  • organoids are unlikely to replace animal experiments entirely. “We can’t duplicate the elegance with which one can do genetics in animal models,” he said, “but we might be able to reduce the number of animal experiments, especially when it comes to toxicology or drug testing.”
  • In the future, he hopes to develop larger organoids.
Javier E

Technopoly-Chs. 4.5--The Broken Defenses - 0 views

  • The world we live in is very nearly incomprehensible to most of us. There is almost no fact, whether actual or imagined, that will surprise us for very long, since we have no comprehensive and consistent picture of the world that would make the fact appear as an unacceptable contradiction.
  • The belief system of a tool-using culture is rather like a brand-new deck of cards. Whether it is a culture of technological simplicity or sophistication, there always exists a more or less comprehensive, ordered world-view, resting on a set of metaphysical or theological assumptions. Ordinary men and women might not clearly grasp how the harsh realities of their lives fit into the grand and benevolent design of the universe, but they have no doubt that there is such a design, and their priests and shamans are well able, by deduction from a handful of principles, to make it, if not wholly rational, at least coherent.
  • From the early seventeenth century, when Western culture undertook to reorganize itself to accommodate the printing press, until the mid-nineteenth century, no significant technologies were introduced that altered the form, volume, or speed of information. As a consequence, Western culture had more than two hundred years to accustom itself to the new information conditions created by the press.
  • ...86 more annotations...
  • That is especially the case with technical facts.
  • as incomprehensible problems mount, as the concept of progress fades, as meaning itself becomes suspect, the Technopolist stands firm in believing that what the world needs is yet more information. It is like the joke about the man who complains that the food he is being served in a restaurant is inedible and also that the portions are too small
  • The faith of those who believed in Progress was based on the assumption that one could discern a purpose to the human enterprise, even without the theological scaffolding that supported the Christian edifice of belief. Science and technology were the chief instruments of Progress, and the accumulation of reliable information about nature and people would bring ignorance, superstition, and suffering to an end.
  • In Technopoly, we are driven to fill our lives with the quest to "access" information.
  • But the genie that came out of the bottle proclaiming that information was the new god of culture was a deceiver. It solved the problem of information scarcity, the disadvantages of which were obvious. But it gave no warning about the dangers of information glut,
  • The invention of what is called a curriculum was a logical step toward organizing, limiting, and discriminating among available sources of information. Schools became technocracy's first secular bureaucracies, structures for legitimizing some parts of the flow of information and discrediting other parts. Schools were, in short, a means of governing the ecology of information.
  • James Beniger's The Control Revolution, which is among the three or four most important books we have on the subject of the relation of information to culture. In the next chapter, I have relied to a considerable degree on The Control Revolution in my discussion of the breakdown of the control mechanisms,
  • most of the methods by which technocracies have hoped to keep information from running amok are now dysfunctional. Indeed, one way of defining a Technopoly is to say that its information immune system is inoperable.
  • Very early on, it was understood that the printed book had created an information crisis and that something needed to be done to maintain a measure of control.
  • it is why in a Technopoly there can be no transcendent sense of purpose or meaning, no cultural coherence.
  • In 1480, before the information explosion, there were thirty-four schools in all of England. By 1660, there were 444, one school for every twelve square miles.
  • There were several reasons for the rapid growth of the common school, but none was more obvious than that it was a necessary response to the anxieties and confusion aroused by information on the loose.
  • The milieu in which Technopoly flourishes is one in which the tie between information and human purpose has been severed, i.e., information appears indiscriminately, directed at no one in particular, in enormous volume and at high speeds, and disconnected from theory, meaning, or purpose.
  • Abetted by a form of education that in itself has been emptied of any coherent world-view, Technopoly deprives us of the social, political, historical, metaphysical, logical, or spiritual bases for knowing what is beyond belief.
  • It developed new institutions, such as the school and representative government. It developed new conceptions of knowledge and intelligence, and a heightened respect for reason and privacy. It developed new forms of economic activity, such as mechanized production and corporate capitalism, and even gave articulate expression to the possibilities of a humane socialism.
  • There is not a single line written by Jefferson, Adams, Paine, Hamilton, or Franklin that does not take for granted that when information is made available to citizens they are capable of managing it. This is not to say that the Founding Fathers believed information could not be false, misleading, or irrelevant. But they believed that the marketplace of information and ideas was sufficiently ordered so that citizens could make sense of what they read and heard and, through reason, judge its usefulness to their lives. Jefferson's proposals for education, Paine's arguments for self-governance, Franklin's arrangements for community affairs assume coherent, commonly shared principles that allow us to debate such questions as: What are the responsibilities of citizens? What is the nature of education? What constitutes human progress? What are the limitations of social structures?
  • New forms of public discourse came into being through newspapers, pamphlets, broadsides, and books.
  • It is no wonder that the eighteenth century gave us our standard of excellence in the use of reason, as exemplified in the work of Goethe, Voltaire, Diderot, Kant, Hume, Adam Smith, Edmund Burke, Vico, Edward Gibbon, and, of course, Jefferson, Madison, Franklin, Adams, Hamilton, and Thomas Paine.
  • I weight the list with America's "Founding Fathers" because technocratic-typographic America was the first nation ever to be argued into existence in print. Paine's Common Sense and The Rights of Man, Jefferson's Declaration of Independence, and the Federalist Papers were written and printed efforts to make the American experiment appear reasonable to the people, which to the eighteenth-century mind was both necessary and sufficient. To any people whose politics were the politics of the printed page, as Tocqueville said of America, reason and printing were inseparable.
  • The presumed close connection among information, reason, and usefulness began to lose its legitimacy toward the mid-nineteenth century with the invention of the telegraph. Prior to the telegraph, information could be moved only as fast as a train could travel: about thirty-five miles per hour. Prior to the telegraph, information was sought as part of the process of understanding and solving particular problems. Prior to the telegraph, information tended to be of local interest.
  • First Amendment to the United States Constitution stands as a monument to the ideology of print. It says: "Congress shall make no law respecting the establishment of religion, or prohibiting the free exercise thereof; or abridging freedom of speech or of the press; or of the right of the people peaceably to assemble, and to petition the government for a redress of grievances." In these forty-five words we may find the fundamental values of the literate, reasoning mind as fostered by the print revolution: a belief in privacy, individuality, intellectual freedom, open criticism, and community action.
  • telegraphy created the idea of context-free information: the idea that the value of information need not be tied to any function it might serve in social and political decision-making and action. The telegraph made information into a commodity, a "thing" that could be bought and sold irrespective of its uses or meaning.
  • a new definition of information came into being. Here was information that rejected the necessity of interconnectedness, proceeded without context, argued for instancy against historic continuity, and offered fascination in place of complexity and coherence.
  • The potential of the telegraph to transform information into a commodity might never have been realized except for its partnership with the penny press, which was the first institution to grasp the significance of the annihilation of space and the saleability of irrelevant information.
  • the fourth stage of the information revolution occurred, broadcasting. And then the fifth, computer technology. Each of these brought with it new forms of information, unprecedented amounts of it, and increased speeds
  • photography was invented at approximately the same time as telegraphy, and initiated the third stage of the information revolution. Daniel Boorstin has called it "the graphic revolution," because the photograph and other iconographs brought on a massive intrusion of images into the symbolic environment.
  • The new imagery, with photography at its forefront, did not merely function as a supplement to language but tended to replace it as our dominant means for construing, understanding, and testing reality.
  • By the beginning of the seventeenth century, an entirely new information environment had been created by print
  • It is an improbable world. It is a world in which the idea of human progress, as Bacon expressed it, has been replaced by the idea of technological progress.
  • The aim is not to reduce ignorance, superstition, and suffering but to accommodate ourselves to the requirements of new technologies.
  • Technopoly is a state of culture. It is also a state of mind. It consists in the deification of technology, which means that the culture seeks its authorization in technology, finds its satisfactions in technology, and takes its orders from technology.
  • We proceed under the assumption that information is our friend, believing that cultures may suffer grievously from a lack of information, which, of course, they do. It is only now beginning to be understood that cultures may also suffer grievously from information glut, information without meaning, information without control mechanisms.
  • Those who feel most comfortable in Technopoly are those who are convinced that technical progress is humanity's supreme achievement and the instrument by which our most profound dilemmas may be solved. They also believe that information is an unmixed blessing, which through its continued and uncontrolled production and dissemination offers increased freedom, creativity, and peace of mind.
  • The relationship between information and the mechanisms for its control is fairly simple: technology increases the available supply of information. As the supply is increased, control mechanisms are strained. Additional control mechanisms are needed to cope with new information. When additional control mechanisms are themselves technical, they in turn further increase the supply of information. When the supply of information is no longer controllable, a general breakdown in psychic tranquillity and social purpose occurs. Without defenses, people have no way of finding meaning in their experiences, lose their capacity to remember, and have difficulty imagining reasonable futures.
  • any decline in the force of institutions makes people vulnerable to information chaos. To say that life is destabilized by weakened institutions is merely to say that information loses its use and therefore becomes a source of confusion rather than coherence.
  • One way of defining Technopoly, then, is to say it is what happens to society when the defenses against information glut have broken down.
  • Social institutions sometimes do their work simply by denying people access to information, but principally by directing how much weight and, therefore, value one must give to information. Social institutions are concerned with the meaning of information and can be quite rigorous in enforcing standards of admission.
  • It is what happens when a culture, overcome by information generated by technology, tries to employ technology itself as a means of providing clear direction and humane purpose. The effort is mostly doomed to failure
  • although legal theory has been taxed to the limit by new information from diverse sources (biology, psychology, and sociology, among them), the rules governing relevance have remained fairly stable. This may account for Americans' overuse of the courts as a means of finding coherence and stability. As other institutions become unusable mechanisms for the control of wanton information, the courts stand as a final arbiter of truth.
  • the school as a mechanism for information control. What its standards are can usually be found in a curriculum or, with even more clarity, in a course catalogue. A college catalogue lists courses, subjects, and fields of study that, taken together, amount to a certified statement of what a serious student ought to think about.
  • The Republican Party represented the interests of the rich, who, by definition, had no concern for us.
  • More to the point, in what is omitted from a catalogue, we may learn what a serious student ought not to think about. A college catalogue, in other words, is a formal description of an information management program; it defines and categorizes knowledge, and in so doing systematically excludes, demeans, labels as trivial (in a word, disregards) certain kinds of information.
  • In the West, the family as an institution for the management of nonbiological information began with the ascendance of print. As books on every conceivable subject became available, parents were forced into the roles of guardians, protectors, nurturers, and arbiters of taste and rectitude. Their function was to define what it means to be a child by excluding from the family's domain information that would undermine its purpose.
  • all theories are oversimplifications, or at least lead to oversimplification. The rule of law is an oversimplification. A curriculum is an oversimplification. So is a family's conception of a child. That is the function of theories: to oversimplify, and thus to assist believers in organizing, weighting, and excluding information. Therein lies the power of theories.
  • That the family can no longer do this is, I believe, obvious to everyone.
  • Their weakness is that precisely because they oversimplify, they are vulnerable to attack by new information. When there is too much information to sustain any theory, information becomes essentially meaningless.
  • The political party is another.
  • As a young man growing up in a Democratic household, I was provided with clear instructions on what value to assign to political events and commentary.
  • The most imposing institutions for the control of information are religion and the state. They do their work in a somewhat more abstract way than do courts, schools, families, or political parties. They manage information through the creation of myths and stories that express theories about fundamental questions: why are we here, where have we come from, and where are we headed?
  • They followed logically from theory, which was, as I remember it, as follows: Because people need protection, they must align themselves with a political organization. The Democratic Party was entitled to our loyalty because it represented the social and economic interests of the working class, of which our family, relatives, and neighbors were members
  • the Bible also served as an information control mechanism, especially in the moral domain.
  • any educational institution, if it is to function well in the management of information, must have a theory about its purpose and meaning, must have the means to give clear expression to its theory, and must do so, to a large extent, by excluding information.
  • The Bible gives manifold instructions on what one must do and must not do, as well as guidance on what language to avoid (on pain of committing blasphemy), what ideas to avoid (on pain of committing heresy), what symbols to avoid (on pain of committing idolatry). Necessarily but perhaps unfortunately, the Bible also explained how the world came into being in such literal detail that it could not accommodate new information produced by the telescope and subsequent technologies.
  • in observing God's laws, and the detailed requirements of their enactment, believers receive guidance about what books they should not read, about what plays and films they should not see, about what music they should not hear, about what subjects their children should not study, and so on. For strict fundamentalists of the Bible, the theory and what follows from it seal them off from unwanted information, and in that way their actions are invested with meaning, clarity, and, they believe, moral authority.
  • Those who reject the Bible's theory and who believe, let us say, in the theory of Science are also protected from unwanted information. Their theory, for example, instructs them to disregard information about astrology, dianetics, and creationism, which they usually label as medieval superstition or subjective opinion.
  • Their theory fails to give any guidance about moral information and, by definition, gives little weight to information that falls outside the constraints of science. Undeniably, fewer and fewer people are bound in any serious way to Biblical or other religious traditions as a source of compelling attention and authority, the result of which is that they make no moral decisions, only practical ones. This is still another way of defining Technopoly. The term is aptly used for a culture whose available theories do not offer guidance about what is acceptable information in the moral domain.
  • a thought-world that functions not only without a transcendent narrative to provide moral underpinnings but also without strong social institutions to control the flood of information produced by technology.
  • In the case of the United States, the great eighteenth-century revolution was not indifferent to commodity capitalism but was nonetheless infused with profound moral content. The United States was not merely an experiment in a new form of governance; it was the fulfillment of God's plan. True, Adams, Jefferson, and Paine rejected the supernatural elements in the Bible, but they never doubted that their experiment had the imprimatur of Providence. People were to be free but for a purpose. Their God-given rights implied obligations and responsibilities, not only to God but to other nations, to which the new republic would be a guide and a showcase of what is possible when reason and spirituality commingle.
  • American Technopoly must rely, to an obsessive extent, on technical methods to control the flow of information. Three such means merit special attention.
  • The first is bureaucracy, which James Beniger in The Control Revolution ranks as "foremost among all technological solutions to the crisis of control."
  • It is an open question whether or not "liberal democracy" in its present form can provide a thought-world of sufficient moral substance to sustain meaningful lives.
  • Vaclav Havel, then newly elected as president of Czechoslovakia, posed the question in an address to the U.S. Congress. "We still don't know how to put morality ahead of politics, science, and economics," he said. "We are still incapable of understanding that the only genuine backbone of our actions-if they are to be moral-is responsibility. Responsibility to something higher than my family, my country, my firm, my success." What Havel is saying is that it is not enough for his nation to liberate itself from one flawed theory; it is necessary to find another, and he worries that Technopoly provides no answer.
  • Francis Fukuyama is wrong. There is another ideological conflict to be fought: between "liberal democracy" as conceived in the eighteenth century, with all its transcendent moral underpinnings, and Technopoly, a twentieth-century thought-world.
  • in attempting to make the most rational use of information, bureaucracy ignores all information and ideas that do not contribute to efficiency
  • bureaucracy has no intellectual, political, or moral theory, except for its implicit assumption that efficiency is the principal aim of all social institutions and that other goals are essentially less worthy, if not irrelevant. That is why John Stuart Mill thought bureaucracy a "tyranny" and C. S. Lewis identified it with Hell.
  • in principle a bureaucracy is simply a coordinated series of techniques for reducing the amount of information that requires processing.
  • The transformation of bureaucracy from a set of techniques designed to serve social institutions to an autonomous meta-institution that largely serves itself came as a result of several developments in the mid- and late-nineteenth century: rapid industrial growth, improvements in transportation and communication, the extension of government into ever-larger realms of public and business affairs, the increasing centralization of governmental structures.
  • The bureaucrat considers the implications of a decision only to the extent that the decision will affect the efficient operations of the bureaucracy, and takes no responsibility for its human consequences.
  • Along the way, it ceased to be merely a servant of social institutions and became their master. Bureaucracy now not only solves problems but creates them. More important, it defines what our problems are, and they are always, in the bureaucratic view, problems of efficiency.
  • expertise is a second important technical means by which Technopoly strives furiously to control information.
  • the expert in Technopoly has two characteristics that distinguish him or her from experts of the past. First, Technopoly's experts tend to be ignorant about any matter not directly related to their specialized area.
  • Technopoly's experts claim dominion not only over technical matters but also over social, psychological, and moral affairs.
  • "bureaucrat" has come to mean a person who \ by training, commitment, and even temperament is indifferent ~ ). to both the content and the fatality of a human problem. Th~ \ 'bureaucrat considers the implications of a decision only to the
  • Technical machinery is essential to both the bureaucrat and the expert, and may be regarded as a third mechanism of information control.
  • I have in mind "softer" technologies such as IQ tests, SATs, standardized forms, taxonomies, and opinion polls. Some of these I discuss in detail in chapter eight, "Invisible Technologies," but I mention them here because their role in reducing the types and quantity of information admitted to a system often goes unnoticed, and therefore their role in redefining traditional concepts also goes unnoticed. There is, for example, no test that can measure a person's intelligence.
  • The role of the expert is to concentrate on one field of knowledge, sift through all that is available, eliminate that which has no bearing on a problem, and use what is left to assist in solving a problem.
  • the expert relies on our believing in the reality of technical machinery, which means we will reify the answers generated by the machinery. We come to believe that our score is our intelligence, or our capacity for creativity or love or pain. We come to believe that the results of opinion polls are what people believe, as if our beliefs can be encapsulated in such sentences as "I approve" and "I disapprove."
  • it is disastrous when applied to situations that cannot be solved by technical means and where efficiency is usually irrelevant, such as in education, law, family life, and problems of personal maladjustment.
  • As the power of traditional social institutions to organize perceptions and judgment declines, bureaucracies, expertise, and technical machinery become the principal means by which Technopoly hopes to control information and thereby provide itself with intelligibility and order. The rest of this book tells the story of why this cannot work, and of the pain and stupidity that are the consequences.
  • Institutions can make decisions on the basis of scores and statistics, and there certainly may be occasions where there is no reasonable alternative. But unless such decisions are made with profound skepticism, that is, acknowledged as being made for administrative convenience, they are delusionary.
  • In Technopoly, the delusion is sanctified by our granting inordinate prestige to experts who are armed with sophisticated technical machinery. Shaw once remarked that all professions are conspiracies against the laity. I would go further: in Technopoly, all experts are invested with the charisma of priestliness
  • The god they serve does not speak of righteousness or goodness or mercy or grace. Their god speaks of efficiency, precision, objectivity. And that is why such concepts as sin and evil disappear in Technopoly. They come from a moral universe that is irrelevant to the theology of expertise. And so the priests of Technopoly call sin "social deviance," which is a statistical concept, and they call evil "psychopathology," which is a medical concept. Sin and evil disappear because they cannot be measured and objectified, and therefore cannot be dealt with by experts.
tornekm

Of bairns and brains | The Economist - 0 views

  • especially given the steep price at which it was bought. Humans’ outsized, power-hungry brains suck up around a quarter of their body’s oxygen supplies.
  • It was simply humanity’s good fortune that those big sexy brains turned out to be useful for lots of other things, from thinking up agriculture to building internal-combustion engines. Another idea is that human cleverness arose out of the mental demands of living in groups whose members are sometimes allies and sometimes rivals.
  • human infants take a year to learn even to walk, and need constant supervision for many years afterwards. That helplessness is thought to be one consequence of intelligence—or, at least, of brain size.
  • ...6 more annotations...
  • ever-more incompetent infants, requiring ever-brighter parents to ensure they survive childhood.
  • The self-reinforcing nature of the process would explain why intelligence is so strikingly overdeveloped in humans compared even with chimpanzees.
  • developed first in primates, a newish branch of the mammals, a group that is itself relatively young.
  • found that babies born to mothers with higher IQs had a better chance of surviving than those born to low-IQ women, which bolsters the idea that looking after human babies is indeed cognitively taxing.
  • none of this adds up to definitive proof.
  • Any such feedback loop would be a slow process (at least as reckoned by the humans themselves), most of which would have taken place in the distant past.
Javier E

Our Machine Masters - NYTimes.com - 0 views

  • the smart machines of the future won’t be humanlike geniuses like HAL 9000 in the movie “2001: A Space Odyssey.” They will be more modest machines that will drive your car, translate foreign languages, organize your photos, recommend entertainment options and maybe diagnose your illnesses. “Everything that we formerly electrified we will now cognitize,” Kelly writes. Even more than today, we’ll lead our lives enmeshed with machines that do some of our thinking tasks for us.
  • This artificial intelligence breakthrough, he argues, is being driven by cheap parallel computation technologies, big data collection and better algorithms. The upshot is clear, “The business plans of the next 10,000 start-ups are easy to forecast: Take X and add A.I.”
  • Two big implications flow from this. The first is sociological. If knowledge is power, we’re about to see an even greater concentration of power.
  • ...14 more annotations...
  • in 2001, the top 10 websites accounted for 31 percent of all U.S. page views, but, by 2010, they accounted for 75 percent of them.
  • The Internet has created a long tail, but almost all the revenue and power is among the small elite at the head.
  • Advances in artificial intelligence will accelerate this centralizing trend. That’s because A.I. companies will be able to reap the rewards of network effects. The bigger their network and the more data they collect, the more effective and attractive they become.
  • As a result, our A.I. future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
  • engineers at a few gigantic companies will have vast-though-hidden power to shape how data are collected and framed, to harvest huge amounts of information, to build the frameworks through which the rest of us make decisions and to steer our choices. If you think this power will be used for entirely benign ends, then you have not read enough history.
  • The second implication is philosophical. A.I. will redefine what it means to be human. Our identity as humans is shaped by what machines and other animals can’t do
  • On the other hand, machines cannot beat us at the things we do without conscious thinking: developing tastes and affections, mimicking each other and building emotional attachments, experiencing imaginative breakthroughs, forming moral sentiments.
  • For the last few centuries, reason was seen as the ultimate human faculty. But now machines are better at many of the tasks we associate with thinking — like playing chess, winning at Jeopardy, and doing math.
  • In the age of smart machines, we’re not human because we have big brains. We’re human because we have social skills, emotional capacities and moral intuitions.
  • I could paint two divergent A.I. futures, one deeply humanistic, and one soullessly utilitarian.
  • In the cold, utilitarian future, on the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.
  • In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it
  • In the humanistic one, machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much.
  • In the current issue of Wired, the technology writer Kevin Kelly says that we had all better get used to this level of predictive prowess. Kelly argues that the age of artificial intelligence is finally at hand.
Javier E

Denying Genetics Isn't Shutting Down Racism, It's Fueling It - 0 views

  • For many on the academic and journalistic left, genetics are deemed largely irrelevant when it comes to humans. Our large brains and the societies we have constructed with them, many argue, swamp almost all genetic influences.
  • Humans, in this view, are the only species on Earth largely unaffected by recent (or ancient) evolution, the only species where, for example, the natural division of labor between male and female has no salience at all, the only species, in fact, where natural variations are almost entirely social constructions, subject to reinvention.
  • if we assume genetics play no role, and base our policy prescriptions on something untrue, we are likely to overshoot and over-promise in social policy, and see our rhetoric on race become ever more extreme and divisive.
  • ...21 more annotations...
  • Reich simply points out that this utopian fiction is in danger of collapse because it is not true and because genetic research is increasingly proving it untrue.
  • “You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work.” Which means to say that the differences could be (and actually are) substantial.
  • If you don’t establish a reasonable forum for debate on this, Reich argues, if you don’t establish the principle is that we do not have to be afraid of any of this, it will be monopolized by truly unreasonable and indeed dangerous racists. And those racists will have the added prestige for their followers of revealing forbidden knowledge.
  • so there are two arguments against the suppression of this truth and the stigmatization of its defenders: that it’s intellectually dishonest and politically counterproductive.
  • Klein seems to back a truly extreme position: that only the environment affects IQ scores, and genes play no part in group differences in human intelligence. To this end, he cites the “Flynn effect,” which does indeed show that IQ levels have increased over the years, and are environmentally malleable up to a point. In other words, culture, politics, and economics do matter.
  • But Klein does not address the crucial point that even with increases in IQ across all races over time, the racial gap is still frustratingly persistent, that, past a certain level, IQ measurements have actually begun to fall in many developed nations, and that Flynn himself acknowledges that the effect does not account for other genetic influences on intelligence.
  • In an email exchange with me, in which I sought clarification, Klein stopped short of denying genetic influences altogether, but argued that, given rising levels of IQ, and given how brutal the history of racism against African-Americans has been, we should nonetheless assume “right now” that genes are irrelevant.
  • My own brilliant conclusion: Group differences in IQ are indeed explicable through both environmental and genetic factors and we don’t yet know quite what the balance is.
  • We are, in this worldview, alone on the planet, born as blank slates, to be written on solely by culture. All differences between men and women are a function of this social effect; as are all differences between the races. If, in the aggregate, any differences in outcome between groups emerge, it is entirely because of oppression, patriarchy, white supremacy, etc. And it is a matter of great urgency that we use whatever power we have to combat these inequalities.
  • Liberalism has never promised equality of outcomes, merely equality of rights. It’s a procedural political philosophy rooted in means, not a substantive one justified by achieving certain ends.
  • A more nuanced understanding of race, genetics, and environment would temper this polarization, and allow for more unifying, practical efforts to improve equality of opportunity, while never guaranteeing or expecting equality of outcomes.
  • In some ways, this is just a replay of the broader liberal-conservative argument. Leftists tend to believe that all inequality is created; liberals tend to believe we can constantly improve the world in every generation, forever perfecting our societies.
  • Rightists believe that human nature is utterly unchanging; conservatives tend to see the world as less plastic than liberals, and attempts to remake it wholesale dangerous and often counterproductive.
  • I think the genius of the West lies in having all these strands in our politics competing with one another.
  • Where I do draw the line is the attempt to smear legitimate conservative ideas and serious scientific arguments as the equivalent of peddling white supremacy and bigotry. And Klein actively contributes to that stigmatization and demonization. He calls the science of this “race science” as if it were some kind of illicit and illegitimate activity, rather than simply “science.”
  • He goes on to equate the work of these scientists with the “most ancient justification for bigotry and racial inequality.” He even uses racism to dismiss Murray and Harris: they are, after all, “two white men.
  • He still refuses to believe that Murray’s views on this are perfectly within the academic mainstream in studies of intelligence, as they were in 1994.
  • Klein cannot seem to hold the following two thoughts in his brain at the same time: that past racism and sexism are foul, disgusting, and have wrought enormous damage and pain and that unavoidable natural differences between races and genders can still exist.
  • It matters that we establish a liberalism that is immune to such genetic revelations, that can strive for equality of opportunity, and can affirm the moral and civic equality of every human being on the planet.
  • We may even embrace racial discrimination, as in affirmative action, that fuels deeper divides. All of which, it seems to me, is happening — and actively hampering racial progress, as the left defines the most multiracial and multicultural society in human history as simply “white supremacy” unchanged since slavery; and as the right viscerally responds by embracing increasingly racist white identity politics.
  • liberalism is integral to our future as a free society — and it should not falsely be made contingent on something that can be empirically disproven. It must allow for the truth of genetics to be embraced, while drawing the firmest of lines against any moral or political abuse of it
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
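As a purely illustrative sketch of the gameplay telemetry described in the excerpt above, the Python below logs timestamped game actions and reduces them to a few behavioral features (hesitation, error recovery). The event fields, feature names, and numbers are invented; nothing here reflects Knack's actual data pipeline or models.

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class GameEvent:
        t: float        # seconds since the session started
        action: str     # e.g. "move", "serve_sushi", "retry"
        correct: bool   # whether the action achieved its goal

    def extract_features(events: list[GameEvent]) -> dict:
        # Hesitation: average gap between consecutive actions.
        gaps = [b.t - a.t for a, b in zip(events, events[1:])]
        # Error recovery: how often a mistake is followed by a correct action.
        errors = [i for i, e in enumerate(events) if not e.correct]
        recovered = sum(1 for i in errors if i + 1 < len(events) and events[i + 1].correct)
        return {
            "mean_hesitation_s": mean(gaps) if gaps else 0.0,
            "error_recovery_rate": recovered / len(errors) if errors else 1.0,
            "total_actions": len(events),
        }

    demo = [GameEvent(0.5, "move", True), GameEvent(2.1, "serve_sushi", False), GameEvent(2.8, "retry", True)]
    print(extract_features(demo))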
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.)
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
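The color-coded output step in the Xerox excerpts just above can be pictured with a tiny, hypothetical mapping like the one below; the composite score and the thresholds are placeholders, since the article does not say how the vendor actually weights or cuts its inputs.

    def color_rating(score: float) -> str:
        # Map a normalized 0-1 composite score onto the article's three labels.
        if score >= 0.7:
            return "green"   # hire away
        if score >= 0.4:
            return "yellow"  # middling
        return "red"         # poor candidate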
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
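A small, hypothetical illustration of the kind of analysis behind the badge finding above: regress a team-performance measure on face-to-face exchange counts and check the variance explained (Pentland's "about a third"). The numbers are made up, and a straight line ignores his caveat that too many exchanges hurt as much as too few.

    from statistics import correlation, linear_regression  # Python 3.10+

    exchanges   = [12, 30, 45, 22, 60, 38, 51, 18]          # face-to-face exchanges per team
    performance = [2.1, 3.0, 3.8, 2.6, 3.4, 3.5, 4.0, 2.3]  # e.g. business-plan contest score

    r = correlation(exchanges, performance)
    fit = linear_regression(exchanges, performance)
    print(f"variance explained: {r ** 2:.2f}")
    print(f"predicted score at 40 exchanges: {fit.slope * 40 + fit.intercept:.2f}")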
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
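As a deliberately simplified, hypothetical sketch of the scoring idea in the Gild excerpts above, the snippet below combines a handful of public signals, each assumed pre-normalized to the 0-1 range, into one weighted score for ranking. The signal names and weights are invented; the article makes clear that the real model and its correlations are proprietary and far more involved.

    WEIGHTS = {
        "code_quality": 0.30,        # simplicity/elegance of open-source code
        "adoption_by_others": 0.25,  # how often other programmers reuse it
        "qa_reputation": 0.25,       # standing on forums such as Stack Overflow
        "language_signals": 0.20,    # phrasing patterns correlated with strong coders
    }

    def score_candidate(signals: dict[str, float]) -> float:
        # signals are assumed pre-normalized to the 0-1 range
        return sum(weight * signals.get(name, 0.0) for name, weight in WEIGHTS.items())

    print(score_candidate({"code_quality": 0.8, "adoption_by_others": 0.6,
                           "qa_reputation": 0.9, "language_signals": 0.5}))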
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
huffem4

Critical Theory - New Discourses - 1 views

  • According to these theorists, a “critical” theory may be distinguished from a “traditional” theory according to a specific practical purpose: a theory is critical to the extent that it seeks human “emancipation from slavery,” acts as a “liberating … influence,” and works “to create a world which satisfies the needs and powers” of human beings (Horkheimer 1972, 246).
  • Because such theories aim to explain and transform all the circumstances that enslave human beings, many “critical theories” in the broader sense have been developed. They have emerged in connection with the many social movements that identify varied dimensions of the domination of human beings in modern societies.
  • The Critical Theory of the “Institute for Social Research,” which is better known as the Frankfurt School, focused on power analyses that began from a Marxist (or Marxian) perspective with an aim to understand why Marxism wasn’t proving successful in Western contexts. It rapidly developed a “post-Marxist” position that criticized Marx’s primary focus on economics and expanded his views on power, alienation, and exploitation into all aspects of post-Enlightenment Western culture. These theorists sometimes referred to themselves as “cultural Marxists,” and were referred to that way by others, but the term “cultural Marxism” is now more commonly used to describe (a misconception of) postmodernism (see also, neo-Marxism) or a certain anti-Semitic conspiracy theory.
  • ...5 more annotations...
  • a Traditional Theory is meant to be descriptive of some phenomenon, usually social, and aims to understand how it works and why it works that way, a Critical Theory should proceed from a prescriptive normative moral vision for society, describe how the item being critiqued fails that vision (usually in a systemic sense), and prescribe activism to subvert, dismantle, disrupt, overthrow, or change it—that is, generally, to break and then remake society in accordance with the particular critical theory’s prescribed vision
  • One of the ambitions of the Critical Theorists of the Frankfurt School was to address cultural power in a way that allowed an awakening of working-class consciousness out of the ideology of capitalism in order to overcome it.
  • Critical theories in a broader sense are largely understood to be the critical study of various types of power relations within myriad aspects of culture, often under a broad rubric referred to in general as “cultural studies.”
  • They are to be found within many disciplines and subdisciplines within the theoretical humanities, including cultural studies, media studies, gender studies, ethnic/race/whiteness/black studies, sexuality/LGBT/trans studies, postcolonial, indigenous, and decolonial studies, disability studies, and fat studies. Critical theories of various kinds are also to be found within (but not necessarily dominant over) other fields of the humanities, social sciences, and arts, including English (literature), sociology, philosophy, art, history and, particularly, pedagogy (theory of education).
  • The focus on identity, experiences, and activism, rather than an attempt to find truth, leads to conflict with empirical scholars and undermines public confidence in the worth of scholarship that uses this approach.
Javier E

What's Wrong With the Teenage Mind? - WSJ.com - 1 views

  • What happens when children reach puberty earlier and adulthood later? The answer is: a good deal of teenage weirdness. Fortunately, developmental psychologists and neuroscientists are starting to explain the foundations of that weirdness.
  • The crucial new idea is that there are two different neural and psychological systems that interact to turn children into adults. Over the past two centuries, and even more over the past generation, the developmental timing of these two systems has changed. That, in turn, has profoundly changed adolescence and produced new kinds of adolescent woe. The big question for anyone who deals with young people today is how we can go about bringing these cogs of the teenage mind into sync once again
  • The first of these systems has to do with emotion and motivation. It is very closely linked to the biological and chemical changes of puberty and involves the areas of the brain that respond to rewards. This is the system that turns placid 10-year-olds into restless, exuberant, emotionally intense teenagers, desperate to attain every goal, fulfill every desire and experience every sensation. Later, it turns them back into relatively placid adults.
  • ...23 more annotations...
  • adolescents aren't reckless because they underestimate risks, but because they overestimate rewards—or, rather, find rewards more rewarding than adults do. The reward centers of the adolescent brain are much more active than those of either children or adults.
  • What teenagers want most of all are social rewards, especially the respect of their peers
  • Becoming an adult means leaving the world of your parents and starting to make your way toward the future that you will share with your peers. Puberty not only turns on the motivational and emotional system with new force, it also turns it away from the family and toward the world of equals.
  • The second crucial system in our brains has to do with control; it channels and harnesses all that seething energy. In particular, the prefrontal cortex reaches out to guide other parts of the brain, including the parts that govern motivation and emotion. This is the system that inhibits impulses and guides decision-making, that encourages long-term planning and delays gratification.
  • Today's adolescents develop an accelerator a long time before they can steer and brake.
  • Expertise comes with experience.
  • In gatherer-hunter and farming societies, childhood education involves formal and informal apprenticeship. Children have lots of chances to practice the skills that they need to accomplish their goals as adults, and so to become expert planners and actors.
  • In the past, to become a good gatherer or hunter, cook or caregiver, you would actually practice gathering, hunting, cooking and taking care of children all through middle childhood and early adolescence—tuning up just the prefrontal wiring you'd need as an adult. But you'd do all that under expert adult supervision and in the protected world of childhood
  • In contemporary life, the relationship between these two systems has changed dramatically. Puberty arrives earlier, and the motivational system kicks in earlier too. At the same time, contemporary children have very little experience with the kinds of tasks that they'll have to perform as grown-ups.
  • The experience of trying to achieve a real goal in real time in the real world is increasingly delayed, and the growth of the control system depends on just those experiences.
  • This control system depends much more on learning. It becomes increasingly effective throughout childhood and continues to develop during adolescence and adulthood, as we gain more experience.
  • An ever longer protected period of immaturity and dependence—a childhood that extends through college—means that young humans can learn more than ever before. There is strong evidence that IQ has increased dramatically as more children spend more time in school
  • children know more about more different subjects than they ever did in the days of apprenticeships.
  • Wide-ranging, flexible and broad learning, the kind we encourage in high-school and college, may actually be in tension with the ability to develop finely-honed, controlled, focused expertise in a particular skill, the kind of learning that once routinely took place in human societies.
  • this new explanation based on developmental timing elegantly accounts for the paradoxes of our particular crop of adolescents.
  • First, experience shapes the brain.
  • the brain is so powerful precisely because it is so sensitive to experience. It's as true to say that our experience of controlling our impulses makes the prefrontal cortex develop as it is to say that prefrontal development makes us better at controlling our impulses
  • Second, development plays a crucial role in explaining human nature
  • there is more and more evidence that genes are just the first step in complex developmental sequences, cascades of interactions between organism and environment, and that those developmental processes shape the adult brain. Even small changes in developmental timing can lead to big changes in who we become.
  • Brain research is often taken to mean that adolescents are really just defective adults—grown-ups with a missing part.
  • But the new view of the adolescent brain isn't that the prefrontal lobes just fail to show up; it's that they aren't properly instructed and exercised
  • Instead of simply giving adolescents more and more school experiences—those extra hours of after-school classes and homework—we could try to arrange more opportunities for apprenticeship
  • Summer enrichment activities like camp and travel, now so common for children whose parents have means, might be usefully alternated with summer jobs, with real responsibilities.
  • The two brain systems, the increasing gap between them, and the implications for adolescent education.
Javier E

'Our minds can be hijacked': the tech insiders who fear a smartphone dystopia | Technol... - 0 views

  • Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.
  • “It is very common,” Rosenstein says, “for humans to develop things with the best of intentions and for them to have unintended, negative consequences.”
  • most concerned about the psychological effects on people who, research shows, touch, swipe or tap their phone 2,617 times a day.
  • ...43 more annotations...
  • There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”
  • Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have completely upended the political system and, left unchecked, could even render democracy as we know it obsolete.
  • Without irony, Eyal finished his talk with some personal tips for resisting the lure of technology. He told his audience he uses a Chrome extension, called DF YouTube, “which scrubs out a lot of those external triggers” he writes about in his book, and recommended an app called Pocket Points that “rewards you for staying off your phone when you need to focus”.
  • “One reason I think it is particularly important for us to talk about this now is that we may be the last generation that can remember life before,” Rosenstein says. It may or may not be relevant that Rosenstein, Pearlman and most of the tech insiders questioning today’s attention economy are in their 30s, members of the last generation that can remember a world in which telephones were plugged into walls.
  • One morning in April this year, designers, programmers and tech entrepreneurs from across the world gathered at a conference centre on the shore of the San Francisco Bay. They had each paid up to $1,700 to learn how to manipulate people into habitual use of their products, on a course curated by conference organiser Nir Eyal.
  • Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, has spent several years consulting for the tech industry, teaching techniques he developed by closely studying how the Silicon Valley giants operate.
  • “The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”
  • He explains the subtle psychological tricks that can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation,” Eyal writes.
  • The most seductive design, Harris explains, exploits the same psychological susceptibility that makes gambling so compulsive: variable rewards. When we tap those apps with red icons, we don’t know whether we’ll discover an interesting email, an avalanche of “likes”, or nothing at all. It is the possibility of disappointment that makes it so compulsive.
  • Finally, Eyal confided the lengths he goes to protect his own family. He has installed in his house an outlet timer connected to a router that cuts off access to the internet at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.”
  • But are we? If the people who built these technologies are taking such radical steps to wean themselves free, can the rest of us reasonably be expected to exercise our free will?
  • Not according to Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.”
  • Harris, who has been branded “the closest thing Silicon Valley has to a conscience”, insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.
  • “I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” Harris went public – giving talks, writing papers, meeting lawmakers and campaigning for reform – after three years struggling to effect change inside Google’s Mountain View headquarters.
  • He explored how LinkedIn exploits a need for social reciprocity to widen its network; how YouTube and Netflix autoplay videos and next episodes, depriving users of a choice about whether or not they want to keep watching; how Snapchat created its addictive Snapstreaks feature, encouraging near-constant communication between its mostly teenage users.
  • The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a particular person”.
  • Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says. A company paying Facebook to use its levers of persuasion could be a car business targeting tailored advertisements to different types of users who want a new vehicle. Or it could be a Moscow-based troll farm seeking to turn voters in a swing county in Wisconsin.
  • It was Rosenstein’s colleague, Leah Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, who announced the feature in a 2009 blogpost. Now 35 and an illustrator, Pearlman confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
  • Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.
  • It’s this that explains how the pull-to-refresh mechanism, whereby users swipe down, pause and wait to see what content appears, rapidly became one of the most addictive and ubiquitous design features in modern technology. “Each time you’re swiping down, it’s like a slot machine,” Harris says. “You don’t know what’s coming next. Sometimes it’s a beautiful photo. Sometimes it’s just an ad.”
  • The reality TV star’s campaign, he said, had heralded a watershed in which “the new, digitally supercharged dynamics of the attention economy have finally crossed a threshold and become manifest in the political realm”.
  • “Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”
  • All of it, he says, is reward-based behaviour that activates the brain’s dopamine pathways. He sometimes finds himself clicking on the red icons beside his apps “to make them go away”, but is conflicted about the ethics of exploiting people’s psychological vulnerabilities. “It is not inherently evil to bring people back to your product,” he says. “It’s capitalism.”
  • He identifies the advent of the smartphone as a turning point, raising the stakes in an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”
  • McNamee chooses his words carefully. “The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”
  • But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?
  • McNamee believes the companies he invested in should be subjected to greater regulation, including new anti-monopoly rules. In Washington, there is growing appetite, on both sides of the political divide, to rein in Silicon Valley. But McNamee worries the behemoths he helped build may already be too big to curtail.
  • Rosenstein, the Facebook “like” co-creator, believes there may be a case for state regulation of “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies. “If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.”
  • James Williams does not believe talk of dystopia is far-fetched. The ex-Google strategist who built the metrics system for the company’s global search advertising business, he has had a front-row view of an industry he describes as the “largest, most standardised and most centralised form of attentional control in human history”.
  • It is a journey that has led him to question whether democracy can survive the new technological age.
  • He says his epiphany came a few years ago, when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realisation: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?”
  • That discomfort was compounded during a moment at work, when he glanced at one of Google’s dashboards, a multicoloured display showing how much of people’s attention the company had commandeered for advertisers. “I realised: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he recalls.
  • Williams and Harris left Google around the same time, and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. Williams finds it hard to comprehend why this issue is not “on the front page of every newspaper every day.”
  • “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.
  • “The attention economy incentivises the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”
  • That means privileging what is sensational over what is nuanced, appealing to emotion, anger and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalise, bait and entertain in order to survive”.
  • It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.
  • All of which has left Brichter, who has put his design work on the backburner while he focuses on building a house in New Jersey, questioning his legacy. “I’ve spent many hours and weeks and months and years thinking about whether anything I’ve done has made a net positive impact on society or humanity at all,” he says. He has blocked certain websites, turned off push notifications, restricted his use of the Telegram app to message only with his wife and two close friends, and tried to wean himself off Twitter. “I still waste time on it,” he confesses, “just reading stupid news I already know about.” He charges his phone in the kitchen, plugging it in at 7pm and not touching it until the next morning.
  • He stresses these dynamics are by no means isolated to the political right: they also play a role, he believes, in the unexpected popularity of leftwing politicians such as Bernie Sanders and Jeremy Corbyn, and the frequent outbreaks of internet outrage over issues that ignite fury among progressives.
  • All of which, Williams says, is not only distorting the way we view politics but, over time, may be changing the way we think, making us less rational and more impulsive. “We’ve habituated ourselves into a perpetual cognitive style of outrage, by internalising the dynamics of the medium,” he says.
  • It was another English science fiction writer, Aldous Huxley, who provided the more prescient observation when he warned that Orwellian-style coercion was less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.
  • If the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?
  • “The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.”
Javier E

Look At Me by Patricia Snow | Articles | First Things - 0 views

  • Maurice stumbles upon what is still the gold standard for the treatment of infantile autism: an intensive course of behavioral therapy called applied behavioral analysis that was developed by psychologist O. Ivar Lovaas at UCLA in the 1970s
  • in a little over a year’s time she recovers her daughter to the point that she is indistinguishable from her peers.
  • Let Me Hear Your Voice is not a particularly religious or pious work. It is not the story of a miracle or a faith healing
  • ...54 more annotations...
  • Maurice discloses her Catholicism, and the reader is aware that prayer undergirds the therapy, but the book is about the therapy, not the prayer. Specifically, it is about the importance of choosing methods of treatment that are supported by scientific data. Applied behavioral analysis is all about data: its daily collection and interpretation. The method is empirical, hard-headed, and results-oriented.
  • on a deeper level, the book is profoundly religious, more religious perhaps than its author intended. In this reading of the book, autism is not only a developmental disorder afflicting particular individuals, but a metaphor for the spiritual condition of fallen man.
  • Maurice’s autistic daughter is indifferent to her mother
  • In this reading of the book, the mother is God, watching a child of his wander away from him into darkness: a heartbroken but also a determined God, determined at any cost to bring the child back
  • the mother doesn’t turn back, concedes nothing to the condition that has overtaken her daughter. There is no political correctness in Maurice’s attitude to autism; no nod to “neurodiversity.” Like the God in Donne’s sonnet, “Batter my heart, three-personed God,” she storms the walls of her daughter’s condition
  • Like God, she sets her sights high, commits both herself and her child to a demanding, sometimes painful therapy (life!), and receives back in the end a fully alive, loving, talking, and laughing child
  • the reader realizes that for God, the harrowing drama of recovery is never a singular, or even a twice-told tale, but a perennial one. Every child of his, every child of Adam and Eve, wanders away from him into darkness
  • we have an epidemic of autism, or “autism spectrum disorder,” which includes classic autism (Maurice’s children’s diagnosis); atypical autism, which exhibits some but not all of the defects of autism; and Asperger’s syndrome, which is much more common in boys than in girls and is characterized by average or above average language skills but impaired social skills.
  • At the same time, all around us, we have an epidemic of something else. On the street and in the office, at the dinner table and on a remote hiking trail, in line at the deli and pushing a stroller through the park, people go about their business bent over a small glowing screen, as if praying.
  • This latter epidemic, or experiment, has been going on long enough that people are beginning to worry about its effects.
  • for a comprehensive survey of the emerging situation on the ground, the interested reader might look at Sherry Turkle’s recent book, Reclaiming Conversation: The Power of Talk in a Digital Age.
  • she also describes in exhaustive, chilling detail the mostly horrifying effects recent technology has had on families and workplaces, educational institutions, friendships and romance.
  • many of the promises of technology have not only not been realized, they have backfired. If technology promised greater connection, it has delivered greater alienation. If it promised greater cohesion, it has led to greater fragmentation, both on a communal and individual level.
  • If thinking that the grass is always greener somewhere else used to be a marker of human foolishness and a temptation to be resisted, today it is simply a possibility to be checked out. The new phones, especially, turn out to be portable Pied Pipers, irresistibly pulling people away from the people in front of them and the tasks at hand.
  • all it takes is a single phone on a table, even if that phone is turned off, for the conversations in the room to fade in number, duration, and emotional depth.
  • an infinitely malleable screen isn’t an invitation to stability, but to restlessness
  • Current media, and the fear of missing out that they foster (a motivator now so common it has its own acronym, FOMO), drive lives of continual interruption and distraction, of virtual rather than real relationships, and of “little” rather than “big” talk
  • if you may be interrupted at any time, it makes sense, as a student explains to Turkle, to “keep things light.”
  • we are reaping deficits in emotional intelligence and empathy; loneliness, but also fears of unrehearsed conversations and intimacy; difficulties forming attachments but also difficulties tolerating solitude and boredom
  • consider the testimony of the faculty at a reputable middle school where Turkle is called in as a consultant
  • The teachers tell Turkle that their students don’t make eye contact or read body language, have trouble listening, and don’t seem interested in each other, all markers of autism spectrum disorder
  • Like much younger children, they engage in parallel play, usually on their phones. Like autistic savants, they can call up endless information on their phones, but have no larger context or overarching narrative in which to situate it
  • Students are so caught up in their phones, one teacher says, “they don’t know how to pay attention to class or to themselves or to another person or to look in each other’s eyes and see what is going on.”
  • “It is as though they all have some signs of being on an Asperger’s spectrum. But that’s impossible. We are talking about a schoolwide problem.”
  • Can technology cause Asperger’s?
  • “It is not necessary to settle this debate to state the obvious. If we don’t look at our children and engage them in conversation, it is not surprising if they grow up awkward and withdrawn.”
  • In the protocols developed by Ivar Lovaas for treating autism spectrum disorder, every discrete trial in the therapy, every drill, every interaction with the child, however seemingly innocuous, is prefaced by this clear command: “Look at me!”
  • If absence of relationship is a defining feature of autism, connecting with the child is both the means and the whole goal of the therapy. Applied behavioral analysis does not concern itself with when exactly, how, or why a child becomes autistic, but tries instead to correct, do over, and even perhaps actually rewire what went wrong, by going back to the beginning
  • Eye contact—which we know is essential for brain development, emotional stability, and social fluency—is the indispensable prerequisite of the therapy, the sine qua non of everything that happens.
  • There are no shortcuts to this method; no medications or apps to speed things up; no machines that can do the work for us. This is work that only human beings can do
  • it must not only be started early and be sufficiently intensive, but it must also be carried out in large part by parents themselves. Parents must be trained and involved, so that the treatment carries over into the home and continues for most of the child’s waking hours.
  • there are foundational relationships that are templates for all other relationships, and for learning itself.
  • Maurice’s book, in other words, is not fundamentally the story of a child acquiring skills, though she acquires them perforce. It is the story of the restoration of a child’s relationship with her parents
  • it is also impossible to overstate the time and commitment that were required to bring it about, especially today, when we have so little time, and such a faltering, diminished capacity for sustained engagement with small children
  • The very qualities that such engagement requires, whether our children are sick or well, are the same qualities being bred out of us by technologies that condition us to crave stimulation and distraction, and by a culture that, through a perverse alchemy, has changed what was supposed to be the freedom to work anywhere into an obligation to work everywhere.
  • In this world of total work (the phrase is Josef Pieper’s), the work of helping another person become fully human may be work that is passing beyond our reach, as our priorities, and the technologies that enable and reinforce them, steadily unfit us for the work of raising our own young.
  • in Turkle’s book, as often as not, it is young people who are distressed because their parents are unreachable. Some of the most painful testimony in Reclaiming Conversation is the testimony of teenagers who hope to do things differently when they have children, who hope someday to learn to have a real conversation, and so on.
  • it was an older generation that first fell under technology’s spell. At the middle school Turkle visits, as at many other schools across the country, it is the grown-ups who decide to give every child a computer and deliver all course content electronically, meaning that they require their students to work from the very medium that distracts them, a decision the grown-ups are unwilling to reverse, even as they lament its consequences.
  • we have approached what Turkle calls the robotic moment, when we will have made ourselves into the kind of people who are ready for what robots have to offer. When people give each other less, machines seem less inhuman.
  • robot babysitters may not seem so bad. The robots, at least, will be reliable!
  • If human conversations are endangered, what of prayer, a conversation like no other? All of the qualities that human conversation requires—patience and commitment, an ability to listen and a tolerance for aridity—prayer requires in greater measure.
  • this conversation—the Church exists to restore. Everything in the traditional Church is there to facilitate and nourish this relationship. Everything breathes, “Look at me!”
  • there is a second path to God, equally enjoined by the Church, and that is the way of charity to the neighbor, but not the neighbor in the abstract.
  • “Who is my neighbor?” a lawyer asks Jesus in the Gospel of Luke. Jesus’s answer is, the one you encounter on the way.
  • Virtue is either concrete or it is nothing. Man’s path to God, like Jesus’s path on the earth, always passes through what the Jesuit Jean Pierre de Caussade called “the sacrament of the present moment,” which we could equally call “the sacrament of the present person,” the way of the Incarnation, the way of humility, or the Way of the Cross.
  • The tradition of Zen Buddhism expresses the same idea in positive terms: Be here now.
  • Both of these privileged paths to God, equally dependent on a quality of undivided attention and real presence, are vulnerable to the distracting eye-candy of our technologies
  • Turkle is at pains to show that multitasking is a myth, that anyone trying to do more than one thing at a time is doing nothing well. We could also call what she was doing multi-relating, another temptation or illusion widespread in the digital age. Turkle’s book is full of people who are online at the same time that they are with friends, who are texting other potential partners while they are on dates, and so on.
  • This is the situation in which many people find themselves today: thinking that they are special to someone because of something that transpired, only to discover that the other person is spread so thin, the interaction was meaningless. There is a new kind of promiscuity in the world, in other words, that turns out to be as hurtful as the old kind.
  • Who can actually multitask and multi-relate? Who can love everyone without diluting or cheapening the quality of love given to each individual? Who can love everyone without fomenting insecurity and jealousy? Only God can do this.
  • When an individual needs to be healed of the effects of screens and machines, it is real presence that he needs: real people in a real world, ideally a world of God’s own making
  • Nature is restorative, but it is conversation itself, unfolding in real time, that strikes these boys with the force of revelation. More even than the physical vistas surrounding them on a wilderness hike, unrehearsed conversation opens up for them new territory, open-ended adventures. “It was like a stream,” one boy says, “very ongoing. It wouldn’t break apart.”
  • in the waters of baptism, the new man is born, restored to his true parent, and a conversation begins that over the course of his whole life reminds man of who he is, that he is loved, and that someone watches over him always.
  • Even if the Church could keep screens out of her sanctuaries, people strongly attached to them would still be people poorly positioned to take advantage of what the Church has to offer. Anxious people, unable to sit alone with their thoughts. Compulsive people, accustomed to checking their phones, on average, every five and a half minutes. As these behaviors increase in the Church, what is at stake is man’s relationship with truth itself.