
Home/ TOK Friends/ Group items tagged humanism


Javier E

E. O. Wilson's Theory of Everything - Magazine - The Atlantic - 0 views

  • Wilson told me the new proposed evolutionary model pulls the field “out of the fever swamp of kin selection,” and he confidently predicted a coming paradigm shift that would promote genetic research to identify the “trigger” genes that have enabled a tiny number of cases, such as the ant family, to achieve complex forms of cooperation.
  • In the book, he proposes a theory to answer what he calls “the great unsolved problem of biology,” namely how roughly two dozen known examples in the history of life—humans, wasps, termites, platypodid ambrosia beetles, bathyergid mole rats, gall-making aphids, one type of snapping shrimp, and others—made the breakthrough to life in highly social, complex societies. Eusocial species, Wilson noted, are by far “the most successful species in the history of life.”
  • Summarizing parts of it for me, Wilson was particularly unsparing of organized religion, likening the Book of Revelation, for example, to the ranting of “a paranoid schizophrenic who was allowed to write down everything that came to him.” Toward philosophy, he was only slightly kinder. Generation after generation of students have suffered trying to “puzzle out” what great thinkers like Socrates, Plato, and Descartes had to say on the great questions of man’s nature, Wilson said, but this was of little use, because philosophy has been based on “failed models of the brain.”
  • His theory draws upon many of the most prominent views of how humans emerged. These range from our evolution of the ability to run long distances to our development of the earliest weapons, which involved the improvement of hand-eye coordination. Dramatic climate change in Africa over the course of a few tens of thousands of years also may have forced Australopithecus and Homo to adapt rapidly. And over roughly the same span, humans became cooperative hunters and serious meat eaters, vastly enriching our diet and favoring the development of more-robust brains. By themselves, Wilson says, none of these theories is satisfying. Taken together, though, all of these factors pushed our immediate prehuman ancestors toward what he called a huge pre-adaptive step: the formation of the earliest communities around fixed camps.
  • “Within groups, the selfish are more likely to succeed,” Wilson told me in a telephone conversation. “But in competition between groups, groups of altruists are more likely to succeed. In addition, it is clear that groups of humans proselytize other groups and accept them as allies, and that that tendency is much favored by group selection.” Taking in newcomers and forming alliances had become a fundamental human trait, he added, because “it is a good way to win.”
  • “The humans become consistent with all the others,” he said, and the evolutionary steps were likely similar—beginning with the formation of groups within a freely mixing population, followed by the accumulation of pre-adaptations that make eusociality more likely, such as the invention of campsites. Finally comes the rise to prevalence of eusocial alleles—one of two or more alternative forms of a gene that arise by mutation, and are found at the same place on a chromosome—which promote novel behaviors (like communal child care) or suppress old, asocial traits. Now it is up to geneticists, he adds, to “determine how many genes are involved in crossing the eusociality threshold, and to go find those genes.”
  • Wilson posits that two rival forces drive human behavior: group selection and what he calls “individual selection”—competition at the level of the individual to pass along one’s genes—with both operating simultaneously. “Group selection,” he said, “brings about virtue, and—this is an oversimplification, but—individual selection, which is competing with it, creates sin. That, in a nutshell, is an explanation of the human condition.
  • “When humans started having a camp—and we know that Homo erectus had campsites—then we know they were heading somewhere,” he told me. “They were a group progressively provisioned, sending out some individuals to hunt and some individuals to stay back and guard the valuable campsite. They were no longer just wandering through territory, emitting calls. They were on long-term campsites, maybe changing from time to time, but they had come together. They began to read intentions in each other’s behavior, what each other are doing. They started to learn social connections more solidly.”
  • If Wilson is right, the human impulse toward racism and tribalism could come to be seen as a reflection of our genetic nature as much as anything else—but so could the human capacity for altruism, and for coalition- and alliance-building. These latter possibilities may help explain Wilson’s abiding optimism—about the environment and many other matters. If these traits are indeed deeply written into our genetic codes, we might hope that we can find ways to emphasize and reinforce them, to build problem-solving coalitions that can endure, and to identify with progressively larger and more-inclusive groups over time.
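Wilson's two-level tug-of-war can be sketched as a toy simulation. This is an illustrative model of multilevel selection in general, not Wilson's own mathematics, and every parameter below is invented:

```python
# Toy model of group selection vs. individual selection.
# All parameters are invented for illustration.

def next_generation(groups, within=0.1, between=0.5):
    """Each group is (altruists, selfish). Selfish members get a
    within-group reproductive edge; a group's overall growth rises
    with its fraction of altruists."""
    new = []
    for a, s in groups:
        frac_altruist = a / (a + s)
        growth = 1.0 + between * frac_altruist     # group-level benefit of altruism
        new.append((a * growth,                    # altruists share the group's growth
                    s * growth * (1.0 + within)))  # selfish get an extra edge
    return new

groups = [(90.0, 10.0), (10.0, 90.0)]  # a mostly-altruist and a mostly-selfish group
for _ in range(15):
    groups = next_generation(groups)

total_altruists = sum(a for a, _ in groups)
total_selfish = sum(s for _, s in groups)

# Within each group the selfish fraction has risen, yet altruists dominate
# the combined population, because the altruist-heavy group grew much faster.
print(total_altruists > total_selfish)  # True
```

Raise `within` relative to `between`, or run more generations, and the selfish eventually take over everywhere; that sensitivity is exactly the tension between "virtue" and "sin" that Wilson describes.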
Javier E

Computer Algorithms Rely Increasingly on Human Helpers - NYTimes.com - 0 views

  • Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them. Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of reasoning.
  • And so, while programming experts still write the step-by-step instructions of computer code, additional people are needed to make more subtle contributions as the work the computers do has become more involved. People evaluate, edit or correct an algorithm’s work. Or they assemble online databases of knowledge and check and verify them — creating, essentially, a crib sheet the computer can call on for a quick answer. Humans can interpret and tweak information in ways that are understandable to both computers and other humans.
  • Even at Google, where algorithms and engineers reign supreme in the company’s business and culture, the human contribution to search results is increasing. Google uses human helpers in two ways. Several months ago, it began presenting summaries of information on the right side of a search page when a user typed in the name of a well-known person or place, like “Barack Obama” or “New York City.” These summaries draw from databases of knowledge like Wikipedia, the C.I.A. World Factbook and Freebase, whose parent company, Metaweb, Google acquired in 2010. These databases are edited by humans.
  • When Google’s algorithm detects a search term for which this distilled information is available, the search engine is trained to go fetch it rather than merely present links to Web pages. “There has been a shift in our thinking,” said Scott Huffman, an engineering director in charge of search quality at Google. “A part of our resources are now more human curated.”
  • “Our engineers evolve the algorithm, and humans help us see if a suggested change is really an improvement,” Mr. Huffman said.
  • Ben Taylor, 25, is a product manager at FindTheBest, a fast-growing start-up in Santa Barbara, Calif. The company calls itself a “comparison engine” for finding and comparing more than 100 topics and products, from universities to nursing homes, smartphones to dog breeds. Its Web site went up in 2010, and the company now has 60 full-time employees. Mr. Taylor helps design and edit the site’s education pages. He is not an engineer, but an English major who has become a self-taught expert in the arcane data found in Education Department studies and elsewhere. His research methods include talking to and e-mailing educators. He is an information sleuth.
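The two-tier design the article attributes to Google, serving a human-curated summary when one exists and ranked links otherwise, can be sketched in a few lines. The knowledge-base entries and queries below are invented for illustration; the real systems are vastly more elaborate:

```python
# Minimal sketch of "fetch a curated summary instead of links."
# The entries stand in for human-edited sources such as Wikipedia
# or Freebase; they are invented for illustration.
KNOWLEDGE_BASE = {
    "barack obama": "44th president of the United States.",
    "new york city": "Most populous city in the United States.",
}

def search(query, web_index):
    """Return a curated summary when the query names a known entity,
    otherwise fall back to ordinary ranked links."""
    key = query.strip().lower()
    if key in KNOWLEDGE_BASE:
        return ("summary", KNOWLEDGE_BASE[key])
    return ("links", web_index.get(key, []))

print(search("Barack Obama", {}))   # a curated, human-edited answer
print(search("weather tomorrow", {"weather tomorrow": ["site-a", "site-b"]}))  # plain links
```

The human contribution lives entirely in the curated table; the algorithm's job is only to recognize when that table applies.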
carolinewren

YaleNews | Yale researchers map 'switches' that shaped the evolution of the human brain - 0 views

  • Thousands of genetic “dimmer” switches, regions of DNA known as regulatory elements, were turned up high during human evolution in the developing cerebral cortex, according to new research from the Yale School of Medicine.
  • these switches show increased activity in humans, where they may drive the expression of genes in the cerebral cortex, the region of the brain that is involved in conscious thought and language. This difference may explain why the structure and function of that part of the brain is so unique in humans compared to other mammals.
  • Noonan and his colleagues pinpointed several biological processes potentially guided by these regulatory elements that are crucial to human brain development.
  • “Building a more complex cortex likely involves several things: making more cells, modifying the functions of cortical areas, and changing the connections neurons make with each other
  • Scientists have become adept at comparing the genomes of different species to identify the DNA sequence changes that underlie those differences. But many human genes are very similar to those of other primates, which suggests that changes in the way genes are regulated — in addition to changes in the genes themselves — is what sets human biology apart.
  • First, Noonan and his colleagues mapped active regulatory elements in the human genome during the first 12 weeks of cortical development by searching for specific biochemical, or “epigenetic” modifications
  • same in the developing brains of rhesus monkeys and mice, then compared the three maps to identify those elements that showed greater activity in the developing human brain.
  • wanted to know the biological impact of those regulatory changes.
  • They used those data to identify groups of genes that showed coordinated expression in the cerebral cortex.
  • “While we often think of the human brain as a highly innovative structure, it’s been surprising that so many of these regulatory elements seem to play a role in ancient processes important for building the cortex in all mammals,” said first author Steven Reilly.
paisleyd

Cadaver Experiment Suggests Human Hands Evolved for Fighting - 0 views

    • paisleyd
       
      Human brains and bodies have been designed to survive. The decisions we make to run from possibly dangerous situations are a clear example of this, and we have evolved to defend ourselves from possible dangers.
  • idea that human hands evolved not only for manual dexterity, but also for fistfights.
  • fist fighting might have helped to drive the evolution of not only the human hand, but also the human face and the human propensity to walk upright.
  • secured these lines to guitar-tuner knobs that helped apply tension to the tendons
  • hold them open for slaps, weakly clench them into "unbuttressed" fists or strongly curl them into "buttressed" fists.
  • humans can safely strike with 55 percent more force with a buttressed fist than with an unbuttressed fist, and with twice as much force with a buttressed fist as with an open-hand slap.
  • fists can protect hand bones from injuries and fractures
  • reducing the level of strain during striking
  • evolution favored lengthening the big toe and shortening other toes so that humans could run more easily
  • improved understanding of who we are, of human nature, should help us prevent violence of all kinds in the future."
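The force figures in the annotations above fit together as simple ratios. A quick arithmetic check, using an arbitrary baseline since no absolute forces are quoted here:

```python
# The reported percentages as simple ratios. The baseline force is an
# arbitrary unit; only the relative values come from the article.
unbuttressed = 100.0                # arbitrary units
buttressed = unbuttressed * 1.55    # "55 percent more force" than an unbuttressed fist
slap = buttressed / 2.0             # a buttressed fist delivers "twice as much" as a slap

print(round(buttressed, 1))  # 155.0
print(round(slap, 1))        # 77.5
```

The implied corollary, which the article does not state directly, is that an open-hand slap lands with somewhat less force than even an unbuttressed fist.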
Javier E

Opinion | Humans Are Animals. Let's Get Over It. - The New York Times - 0 views

  • The separation of people from, and the superiority of people to, members of other species is a good candidate for the originating idea of Western thought. And a good candidate for the worst.
  • Like Plato, Hobbes associates anarchy with animality and civilization with the state, which gives to our merely animal motion moral content for the first time and orders us into a definite hierarchy.
  • It is rationality that gives us dignity, that makes a claim to moral respect that no mere animal can deserve. “The moral law reveals to me a life independent of animality,” writes Immanuel Kant in “Critique of Practical Reason.” In this assertion, at least, the Western intellectual tradition has been remarkably consistent.
  • the devaluation of animals and disconnection of us from them reflect a deeper devaluation of the material universe in general
  • In this scheme of things, we owe nature nothing; it is to yield us everything. This is the ideology of species annihilation and environmental destruction, and also of technological development.
  • Further trouble is caused when the distinctions between humans and animals are then used to draw distinctions among human beings
  • Some of us, in short, are animals — and some of us are better than that. This, it turns out, is a useful justification for colonialism, slavery and racism.
  • The classical source for this distinction is certainly Aristotle. In the “Politics,” he writes, “Where then there is such a difference as that between soul and body, or between men and animals (as in the case of those whose business is to use their body, and who can do nothing better), the lower sort are by nature slaves.”
  • Every human hierarchy, insofar as it can be justified philosophically, is treated by Aristotle by analogy to the relation of people to animals.
  • One difficult thing to face about our animality is that it entails our deaths; being an animal is associated throughout philosophy with dying purposelessly, and so with living meaninglessly.
  • this line of thought also happens to justify colonizing or even extirpating the “savage,” the beast in human form.
  • Our supposed fundamental distinction from “beasts,” “brutes” and “savages” is used to divide us from nature, from one another and, finally, from ourselves
  • In Plato’s “Republic,” Socrates divides the human soul into two parts. The soul of the thirsty person, he says, “wishes for nothing else than to drink.” But we can restrain ourselves. “That which inhibits such actions,” he concludes, “arises from the calculations of reason.” When we restrain or control ourselves, Plato argues, a rational being restrains an animal.
  • In this view, each of us is both a beast and a person — and the point of human life is to constrain our desires with rationality and purify ourselves of animality
  • These sorts of systematic self-divisions come to be refigured in Cartesian dualism, which separates the mind from the body, or in Sigmund Freud’s distinction between id and ego, or in the neurological contrast between the functions of the amygdala and the prefrontal cortex.
  • I don’t know how to refute it, exactly, except to say that I don’t feel myself to be a logic program running on an animal body; I’d like to consider myself a lot more integrated than that.
  • And I’d like to repudiate every political and environmental conclusion ever drawn by our supposed transcendence of the order of nature
  • There is no doubt that human beings are distinct from other animals, though not necessarily more distinct than other animals are from one another. But maybe we’ve been too focused on the differences for too long. Maybe we should emphasize what all us animals have in common.
caelengrubb

How Did Language Begin? | Linguistic Society of America - 0 views

  • The question is not how languages gradually developed over time into the languages of the world today. Rather, it is how the human species developed over time so that we - and not our closest relatives, the chimpanzees and bonobos - became capable of using language.
  • Human language can express thoughts on an unlimited number of topics (the weather, the war, the past, the future, mathematics, gossip, fairy tales, how to fix the sink...). It can be used not just to convey information, but to solicit information (questions) and to give orders.
  • Every human language has a vocabulary of tens of thousands of words, built up from several dozen speech sounds
  • Animal communication systems, in contrast, typically have at most a few dozen distinct calls, and they are used only to communicate immediate issues such as food, danger, threat, or reconciliation. Many of the sorts of meanings conveyed by chimpanzee communication have counterparts in human 'body language'.
  • The basic difficulty with studying the evolution of language is that the evidence is so sparse. Spoken languages don't leave fossils, and fossil skulls only tell us the overall shape and size of hominid brains, not what the brains could do
  • All present-day languages, including those of hunter-gatherer cultures, have lots of words, can be used to talk about anything under the sun, and can express negation. As far back as we have written records of human language - 5000 years or so - things look basically the same.
  • According to current thinking, the changes crucial for language were not just in the size of the brain, but in its character: the kinds of tasks it is suited to do - as it were, the 'software' it comes furnished with.
  • So the properties of human language are unique in the natural world.
  • About the only definitive evidence we have is the shape of the vocal tract (the mouth, tongue, and throat): Until anatomically modern humans, about 100,000 years ago, the shape of hominid vocal tracts didn't permit the modern range of speech sounds. But that doesn't mean that language necessarily began then.
  • Some researchers even propose that language began as sign language, then (gradually or suddenly) switched to the vocal modality, leaving modern gesture as a residue.
  • In an early stage, sounds would have been used to name a wide range of objects and actions in the environment, and individuals would be able to invent new vocabulary items to talk about new things
  • In order to achieve a large vocabulary, an important advance would have been the ability to 'digitize' signals into sequences of discrete speech sounds - consonants and vowels - rather than unstructured calls.
  • These two changes alone would yield a communication system of single signals - better than the chimpanzee system but far from modern language. A next plausible step would be the ability to string together several such 'words' to create a message built out of the meanings of its parts.
  • This has led some researchers to propose that the system of 'protolanguage' is still present in modern human brains, hidden under the modern system except when the latter is impaired or not yet developed.
  • Again, it's very hard to tell. We do know that something important happened in the human line between 100,000 and 50,000 years ago: This is when we start to find cultural artifacts such as art and ritual objects, evidence of what we would call civilization.
  • One tantalizing source of evidence has emerged recently. A mutation in a gene called FOXP2 has been shown to lead to deficits in language as well as in control of the face and mouth. This gene is a slightly altered version of a gene found in apes, and it seems to have achieved its present form between 200,000 and 100,000 years ago.
  • Nevertheless, if we are ever going to learn more about how the human language ability evolved, the most promising evidence will probably come from the human genome, which preserves so much of our species' history. The challenge for the future will be to decode it.
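The "digitizing" step described above pays off combinatorially. A toy count, assuming free recombination of sounds (which real phonotactics heavily constrain), shows why a few dozen speech sounds suffice for tens of thousands of words:

```python
# With a typical inventory of ~40 phonemes, count the distinct sound
# sequences of length 1 through 5, assuming free recombination. Real
# phonotactics rule most of these out, but the combinatorial room for
# "tens of thousands of words" is obvious.
phonemes = 40
sequences = sum(phonemes ** length for length in range(1, 6))
print(sequences)  # 105025640
```

By contrast, a repertoire of unstructured, non-recombinable calls grows only one signal at a time, which is the limitation the annotations attribute to animal communication systems.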
Javier E

J. Robert Oppenheimer's Defense of Humanity - WSJ - 0 views

  • Von Neumann, too, was deeply concerned about the inability of humanity to keep up with its own inventions. “What we are creating now,” he said to his wife Klári in 1945, “is a monster whose influence is going to change history, provided there is any history left.” Moving to the subject of future computing machines he became even more agitated, foreseeing disaster if “people” could not “keep pace with what they create.”
  • Oppenheimer, Einstein, von Neumann and other Institute faculty channeled much of their effort toward what AI researchers today call the “alignment” problem: how to make sure our discoveries serve us instead of destroying us. Their approaches to this increasingly pressing problem remain instructive.
  • Von Neumann focused on applying the powers of mathematical logic, taking insights from games of strategy and applying them to economics and war planning. Today, descendants of his “game theory” running on von Neumann computing architecture are applied not only to our nuclear strategy, but also many parts of our political, economic and social lives. This is one approach to alignment: humanity survives technology through more technology, and it is the researcher’s role to maximize progress.
  • he also thought that this approach was not enough. “What are we to make of a civilization,” he asked in 1959, a few years after von Neumann’s death, “which has always regarded ethics as an essential part of human life, and…which has not been able to talk about the prospect of killing almost everybody, except in prudential and game-theoretical terms?”
  • In their biography “American Prometheus,” which inspired Nolan’s film, Martin Sherwin and Kai Bird document Oppenheimer’s conviction that “the safety” of a nation or the world “cannot lie wholly or even primarily in its scientific or technical prowess.” If humanity wants to survive technology, he believed, it needs to pay attention not only to technology but also to ethics, religions, values, forms of political and social organization, and even feelings and emotions.
  • Hence Oppenheimer set out to make the Institute for Advanced Study a place for thinking about humanistic subjects like Russian culture, medieval history, or ancient philosophy, as well as about mathematics and the theory of the atom. He hired scholars like George Kennan, the diplomat who designed the Cold War policy of Soviet “containment”; Harold Cherniss, whose work on the philosophies of Plato and Aristotle influenced many Institute colleagues; and the mathematical physicist Freeman Dyson, who had been one of the youngest collaborators in the Manhattan Project. Traces of their conversations and collaborations are preserved not only in their letters and biographies, but also in their research, their policy recommendations, and in their ceaseless efforts to help the public understand the dangers and opportunities technology offers the world.
  • to design a “fairness algorithm” we need to know what fairness is. Fairness is not a mathematical constant or even a variable. It is a human value, meaning that there are many often competing and even contradictory visions of it on offer in our societies.
  • Preserving any human value worthy of the name will therefore require not only a computer scientist, but also a sociologist, psychologist, political scientist, philosopher, historian, theologian. Oppenheimer even brought the poet T.S. Eliot to the Institute, because he believed that the challenges of the future could only be met by bringing the technological and the human together. The technological challenges are growing, but the cultural abyss separating STEM from the arts, humanities, and social sciences has only grown wider. More than ever, we need institutions capable of helping them think together.
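The point that fairness admits competing, sometimes contradictory definitions can be made concrete in code. The sketch below uses invented decision data and two standard criteria from the fairness literature (demographic parity and equal false-positive rates); the data are constructed so that one criterion is satisfied while the other is violated:

```python
# Two common fairness criteria applied to the same (invented) decisions.
# The same classifier can satisfy one and violate the other: "fairness"
# is a choice among competing definitions, not a computable constant.

# (group, true_label, predicted) triples -- illustration data only
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]

def positive_rate(group):
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def false_positive_rate(group):
    negatives = [r for r in records if r[0] == group and r[1] == 0]
    return sum(r[2] for r in negatives) / len(negatives)

# Demographic parity compares overall positive-decision rates...
print(positive_rate("A"), positive_rate("B"))              # 0.5 0.5 (parity holds)
# ...while the error-rate view compares mistakes among true negatives.
print(false_positive_rate("A"), false_positive_rate("B"))  # 0.5 0.0 (rates differ)
```

Deciding which of these numbers ought to be equalized is not a mathematical question, which is precisely why Oppenheimer's humanists belong in the room.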
Javier E

But What Would the End of Humanity Mean for Me? - James Hamblin - The Atlantic - 0 views

  • Tegmark is more worried about much more immediate threats, which he calls existential risks. That’s a term borrowed from physicist Nick Bostrom, director of Oxford University’s Future of Humanity Institute, a research collective modeling the potential range of human expansion into the cosmos
  • "I am finding it increasingly plausible that existential risk is the biggest moral issue in the world, even if it hasn’t gone mainstream yet,"
  • Existential risks, as Tegmark describes them, are things that are “not just a little bit bad, like a parking ticket, but really bad. Things that could really mess up or wipe out human civilization.”
  • The single existential risk that Tegmark worries about most is unfriendly artificial intelligence. That is, when computers are able to start improving themselves, there will be a rapid increase in their capacities, and then, Tegmark says, it’s very difficult to predict what will happen.
  • Tegmark told Lex Berko at Motherboard earlier this year, "I would guess there’s about a 60 percent chance that I’m not going to die of old age, but from some kind of human-caused calamity. Which would suggest that I should spend a significant portion of my time actually worrying about this. We should in society, too."
  • "Longer term—and this might mean 10 years, it might mean 50 or 100 years, depending on who you ask—when computers can do everything we can do," Tegmark said, “after that they will probably very rapidly get vastly better than us at everything, and we’ll face this question we talked about in the Huffington Post article: whether there’s really a place for us after that, or not.”
  • "This is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”
  • Tegmark and his op-ed co-author Frank Wilczek, the Nobel laureate, draw examples of cold-war automated systems that assessed threats and resulted in false alarms and near misses. “In those instances some human intervened at the last moment and saved us from horrible consequences,” Wilczek told me earlier that day. “That might not happen in the future.”
  • there are still enough nuclear weapons in existence to incinerate all of Earth’s dense population centers, but that wouldn't kill everyone immediately. The smoldering cities would send sun-blocking soot into the stratosphere that would trigger a crop-killing climate shift, and that’s what would kill us all
  • “We are very reckless with this planet, with civilization,” Tegmark said. “We basically play Russian roulette.” The key is to think more long term, “not just about the next election cycle or the next Justin Bieber album.”
  • “There are several issues that arise, ranging from climate change to artificial intelligence to biological warfare to asteroids that might collide with the earth,” Wilczek said of the group’s launch. “They are very serious risks that don’t get much attention.
  • a widely perceived issue is when intelligent entities start to take on a life of their own. They revolutionized the way we understand chess, for instance. That’s pretty harmless. But one can imagine if they revolutionized the way we think about warfare or finance, either those entities themselves or the people that control them. It could pose some disquieting perturbations on the rest of our lives.”
  • Wilczek’s particularly concerned about a subset of artificial intelligence: drone warriors. “Not necessarily robots,” Wilczek told me, “although robot warriors could be a big issue, too. It could just be superintelligence that’s in a cloud. It doesn’t have to be embodied in the usual sense.”
  • it’s important not to anthropomorphize artificial intelligence. It's best to think of it as a primordial force of nature—strong and indifferent. In the case of chess, an A.I. models chess moves, predicts outcomes, and moves accordingly. If winning at chess meant destroying humanity, it might do that.
  • Even if programmers tried to program an A.I. to be benevolent, it could destroy us inadvertently. Andersen’s example in Aeon is that an A.I. designed to try and maximize human happiness might think that flooding your bloodstream with heroin is the best way to do that.
  • “It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.”
  • Even within A.I. research, Tegmark admits, “There is absolutely not a consensus that we should be concerned about this.” But there is a lot of concern, and sense of lack of power. Because, concretely, what can you do? “The thing we should worry about is that we’re not worried.”
  • Tegmark brings it to Earth with a case-example about purchasing a stroller: If you could spend more for a good one or less for one that “sometimes collapses and crushes the baby, but nobody’s been able to prove that it is caused by any design flaw. But it’s 10 percent off! So which one are you going to buy?”
  • “There are seven billion of us on this little spinning ball in space. And we have so much opportunity," Tegmark said. "We have all the resources in this enormous cosmos. At the same time, we have the technology to wipe ourselves out.”
  • Ninety-nine percent of the species that have lived on Earth have gone extinct; why should we not? Seeing the biggest picture of humanity and the planet is the heart of this. It’s not meant to be about inspiring terror or doom. Sometimes that is what it takes to draw us out of the little things, where in the day-to-day we lose sight of enormous potentials.
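Tegmark's stroller analogy is, at bottom, an expected-cost calculation. With invented numbers:

```python
# The stroller analogy as a toy expected-cost comparison.
# All numbers are invented for illustration.
price_safe, price_cheap = 100.0, 90.0   # the cheap one is "10 percent off"
p_collapse = 0.001                      # small chance of catastrophic failure
catastrophe_cost = 1_000_000.0          # cost assigned to harming the baby

expected_safe = price_safe
expected_cheap = price_cheap + p_collapse * catastrophe_cost

print(expected_cheap > expected_safe)  # True: the discount doesn't cover the risk
```

The force of the analogy is that for civilization-level risks the catastrophe term is so large that even a very small probability dominates any savings, which is Tegmark's case for taking existential risk seriously now.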
Emily Freilich

All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines - Nicholas ... - 0 views

  • We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
  • On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York.
  • The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity.
  • The crash, which killed all 49 people on board as well as one person on the ground, should never have happened.
  • The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.”
  • Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes.
  • We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead.
  • And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes,
  • No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,”
  • “We’re forgetting how to fly.”
  • The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.
  • What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.
  • Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer
  • That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus.
  • A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part.
  • when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift.
  • Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears
  • Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans.
  • Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise.
  • Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them.
  • When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity.
  • What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.
  • In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so.
  • You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing.
  • Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge
  • Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation?
  • The cure for imperfect automation is total automation.
  • That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect.
  • conundrum of computer automation.
  • Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible.
  • People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at
  • people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”
  • a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching.
  • You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning.
  • What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages.
  • most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity.
  • Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience.
  • Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
  • The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter.
  • Inuit hunters have for some 4,000 years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.
  • The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why.
  • But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails.
  • The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid.
  • An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it
  • A unique talent that has distinguished a people for centuries may evaporate in a generation.
  • Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?
    Automation increases the efficiency and speed of tasks, but decreases the individual's knowledge of a task and a human's ability to learn. 
Javier E

Evolution and the American Myth of the Individual - NYTimes.com - 0 views

  • the country’s two main political parties have “fundamental philosophical differences.” But what exactly does that mean?
  • In a broad sense, Democrats, particularly the more liberal among them, are more likely to embrace the communal nature of individual lives and to strive for policies that emphasize that understanding.
  • Republicans, especially libertarians and Tea Party members on the ideological fringe, however, often trace their ideas about freedom and liberty back to Enlightenment thinkers of the 17th and 18th centuries, who argued that the individual is the true measure of human value, and each of us is naturally entitled to act in our own best interests free of interference by others. Self-described libertarians generally also pride themselves on their high valuation of logic and reasoning over emotion.
  • ...13 more annotations...
  • Philosophers from Aristotle to Hegel have emphasized that human beings are essentially social creatures, that the idea of an isolated individual is a misleading abstraction. So it is not just ironic but instructive that modern evolutionary research, anthropology, cognitive psychology and neuroscience have come down on the side of the philosophers who have argued that the basic unit of human social life is not and never has been the selfish, self-serving individual.
  • The irony here is that when it comes to our responsibilities to one another as human beings, religion and evolution nowadays are not necessarily on opposite sides of the fence.
  • in the eyes of many conservative Americans today, religion and evolution do not mix. You either accept what the Bible tells us or what Charles Darwin wrote, but not both
  • Contrary to libertarian and Tea Party rhetoric, evolution has made us a powerfully social species, so much so that the essential precondition of human survival is and always has been the individual plus his or her relationships with others.
  • as Matthew D. Lieberman, a social neuroscience researcher at the University of California, Los Angeles, has written: “we think people are built to maximize their own pleasure and minimize their own pain. In reality, we are actually built to overcome our own pleasure and increase our own pain in the service of following society’s norms.”
  • Why then did Rousseau and others make up stories about human history if they didn’t really believe them? The simple answer, at least during the Enlightenment, was that they wanted people to accept their claim that civilized life is based on social conventions, or contracts, drawn up at least figuratively speaking by free, sane and equal human beings — contracts that could and should be extended to cover the moral and working relationships that ought to pertain between rulers and the ruled.
  • Jean-Jacques Rousseau famously declared in “The Social Contract” (1762) that each of us is born free and yet everywhere we are in chains. He did not mean physical chains. He meant social ones. We now know he was dead wrong. Human evolution has made us obligate social creatures. Even if some of us may choose sooner or later to disappear into the woods or sit on a mountaintop in deep meditation, we humans are able to do so only if before such individualistic anti-social resolve we have first been socially nurtured and socially taught survival arts by others. The distinction Rousseau and others tried to draw between “natural liberty, which is bounded only by the strength of the individual” and “civil liberty, which is limited by the general will” is fanciful, not factual.
  • In short, their aims were political, not historical, scientific or religious.
  • what Rousseau and others crafted as arguments in favor of their ideas all had the earmarks of primitive mythology
  • Bronislaw Malinowski argued almost a century ago: “Myth fulfills in primitive culture an indispensable function: it expresses, enhances, and codifies belief, it safeguards and enforces morality, it vouches for the efficiency of ritual and contains practical rules for the guidance of man.”
  • not all myths make good charters for faith and wisdom. The sanctification of the rights of individuals and their liberties today by libertarians and Tea Party conservatives is contrary to our evolved human nature as social animals. There was never a time in history before civil society when we were each totally free to do whatever we elected to do. We have always been social and caring creatures. The thought that it is both rational and natural for each of us to care only for ourselves, our own preservation, and our own achievements is a treacherous fabrication. This is not how we got to be the kind of species we are today.
  • Myths achieve this social function, he observed, by serving as guides, or charters, for moral values, social order and magical belief.
  • Nor is this what the world’s religions would ask us to believe.
Javier E

The trouble with atheists: a defence of faith | Books | The Guardian - 1 views

  • My daughter has just turned six. Some time over the next year or so, she will discover that her parents are weird. We're weird because we go to church.
  • This means as she gets older there'll be voices telling her what it means, getting louder and louder until by the time she's a teenager they'll be shouting right in her ear. It means that we believe in a load of bronze-age absurdities. That we fetishise pain and suffering. That we advocate wishy-washy niceness. That we're too stupid to understand the irrationality of our creeds. That we build absurdly complex intellectual structures on the marshmallow foundations of a fantasy. That we're savagely judgmental.
  • that's not the bad news. Those are the objections of people who care enough about religion to object to it. Or to rent a set of recreational objections from Richard Dawkins or Christopher Hitchens. As accusations, they may be a hodge-podge, but at least they assume there's a thing called religion which looms with enough definition and significance to be detested.
  • ...25 more annotations...
  • the really painful message our daughter will receive is that we're embarrassing. For most people who aren't New Atheists, or old atheists, and have no passion invested in the subject, either negative or positive, believers aren't weird because we're wicked. We're weird because we're inexplicable; because, when there's no necessity for it that anyone sensible can see, we've committed ourselves to a set of awkward and absurd attitudes that obtrude, that stick out against the background of modern life, and not in some important or respectworthy or principled way, either.
  • Believers are people who try to insert Jee-zus into conversations at parties; who put themselves down, with writhings of unease, for perfectly normal human behaviour; who are constantly trying to create a solemn hush that invites a fart, a hiccup, a bit of subversion. Believers are people who, on the rare occasions when you have to listen to them, like at a funeral or a wedding, seize the opportunity to pour the liquidised content of a primary-school nativity play into your earhole, apparently not noticing that childhood is over.
  • What goes on inside believers is mysterious. So far as it can be guessed at it appears to be a kind of anxious pretending, a kind of continual, nervous resistance to reality.
  • to me, it's belief that involves the most uncompromising attention to the nature of things of which you are capable. Belief demands that you dispense with illusion after illusion, while contemporary common sense requires continual, fluffy pretending – pretending that might as well be systematic, it's so thoroughly incentivised by our culture.
  • The atheist bus says: "There's probably no God. So stop worrying and enjoy your life."
  • the word that offends against realism here is "enjoy". I'm sorry – enjoy your life?
  • If you based your knowledge of the human species exclusively on adverts, you'd think that the normal condition of humanity was to be a good-looking single person between 20 and 35, with excellent muscle-definition and/or an excellent figure, and a large disposable income. And you'd think the same thing if you got your information exclusively from the atheist bus
  • The implication of the bus slogan is that enjoyment would be your natural state if you weren't being "worried" by us believers and our hellfire preaching. Take away the malignant threat of God-talk, and you would revert to continuous pleasure
  • What's so wrong with this, apart from it being total bollocks? Well, in the first place, that it buys a bill of goods, sight unseen, from modern marketing. Given that human life isn't and can't be made up of enjoyment, it is in effect accepting a picture of human life in which those pieces of living where easy enjoyment is more likely become the only pieces that are visible.
  • But then, like every human being, I am not in the habit of entertaining only those emotions I can prove. I'd be an unrecognisable oddity if I did. Emotions can certainly be misleading: they can fool you into believing stuff that is definitely, demonstrably untrue. Yet emotions are also our indispensable tool for navigating, for feeling our way through, the much larger domain of stuff that isn't susceptible to proof or disproof, that isn't checkable against the physical universe. We dream, hope, wonder, sorrow, rage, grieve, delight, surmise, joke, detest; we form such unprovable conjectures as novels or clarinet concertos; we imagine. And religion is just a part of that, in one sense. It's just one form of imagining, absolutely functional, absolutely human-normal. It would seem perverse, on the face of it, to propose that this one particular manifestation of imagining should be treated as outrageous, should be excised if (which is doubtful) we can manage it.
  • suppose, as the atheist bus goes by, you are poverty-stricken, or desperate for a job, or a drug addict, or social services have just taken away your child. The bus tells you that there's probably no God so you should stop worrying and enjoy your life, and now the slogan is not just bitterly inappropriate in mood. What it means, if it's true, is that anyone who isn't enjoying themselves is entirely on their own. What the bus says is: there's no help coming.
  • Enjoyment is great. The more enjoyment the better. But enjoyment is one emotion. To say that life is to be enjoyed (just enjoyed) is like saying that mountains should only have summits, or that all colours should be purple, or that all plays should be by Shakespeare. This really is a bizarre category error.
  • A consolation you could believe in would be one that wasn't in danger of popping like a soap bubble on contact with the ordinary truths about us. A consolation you could trust would be one that acknowledged the difficult stuff rather than being in flight from it, and then found you grounds for hope in spite of it, or even because of it
  • The novelist Richard Powers has written that the Clarinet Concerto sounds the way mercy would sound, and that's exactly how I experienced it in 1997. Mercy, though, is one of those words that now requires definition. It does not only mean some tyrant's capacity to suspend a punishment he has himself inflicted. It can mean – and does mean in this case – getting something kind instead of the sensible consequences of an action, or as well as the sensible consequences of an action.
  • from outside, belief looks like a series of ideas about the nature of the universe for which a truth-claim is being made, a set of propositions that you sign up to; and when actual believers don't talk about their belief in this way, it looks like slipperiness, like a maddening evasion of the issue.
  • I am a fairly orthodox Christian. Every Sunday I say and do my best to mean the whole of the Creed, which is a series of propositions. But it is still a mistake to suppose that it is assent to the propositions that makes you a believer. It is the feelings that are primary. I assent to the ideas because I have the feelings; I don't have the feelings because I've assented to the ideas.
  • what I felt listening to Mozart in 1997 is not some wishy-washy metaphor for an idea I believe in, and it's not a front behind which the real business of belief is going on: it's the thing itself. My belief is made of, built up from, sustained by, emotions like that. That's what makes it real.
  • I think that Mozart, two centuries earlier, had succeeded in creating a beautiful and accurate report of an aspect of reality. I think that the reason reality is that way – that it is in some ultimate sense merciful as well as being a set of physical processes all running along on their own without hope of appeal, all the way up from quantum mechanics to the relative velocity of galaxies by way of "blundering, low and horridly cruel" biology (Darwin) – is that the universe is sustained by a continual and infinitely patient act of love. I think that love keeps it in being.
  • That's what I think. But it's all secondary. It all comes limping along behind my emotional assurance that there was mercy, and I felt it. And so the argument about whether the ideas are true or not, which is the argument that people mostly expect to have about religion, is also secondary for me.
  • No, I can't prove it. I don't know that any of it is true. I don't know if there's a God. (And neither do you, and neither does Professor Dawkins, and neither does anybody. It isn't the kind of thing you can know. It isn't a knowable item.)
  • let's be clear about the emotional logic of the bus's message. It amounts to a denial of hope or consolation on any but the most chirpy, squeaky, bubble-gummy reading of the human situation
  • It's got itself established in our culture, relatively recently, that the emotions involved in religious belief must be different from the ones involved in all the other kinds of continuous imagining, hoping, dreaming, and so on, that humans do. These emotions must be alien, freakish, sad, embarrassing, humiliating, immature, pathetic. These emotions must be quite separate from commonsensical us. But they aren't
  • The emotions that sustain religious belief are all, in fact, deeply ordinary and deeply recognisable to anybody who has ever made their way across the common ground of human experience as an adult.
  • It's just that the emotions in question are rarely talked about apart from their rationalisation into ideas. This is what I have tried to do in my new book, Unapologetic.
  • You can easily look up what Christians believe in. You can read any number of defences of Christian ideas. This, however, is a defence of Christian emotions – of their intelligibility, of their grown-up dignity.
Javier E

Will ChatGPT Kill the Student Essay? - The Atlantic - 0 views

  • Essay generation is neither theoretical nor futuristic at this point. In May, a student in New Zealand confessed to using AI to write their papers, justifying it as a tool like Grammarly or spell-check: ​​“I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalised because I don’t write eloquently and I didn’t feel that was right,” they told a student paper in Christchurch. They don’t feel like they’re cheating, because the student guidelines at their university state only that you’re not allowed to get somebody else to do your work for you. GPT-3 isn’t “somebody else”—it’s a program.
  • The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up
  • “You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.”
  • ...18 more annotations...
  • In the modern tech world, the value of a humanistic education shows up in evidence of its absence. Sam Bankman-Fried, the disgraced founder of the crypto exchange FTX who recently lost his $16 billion fortune in a few days, is a famously proud illiterate. “I would never read a book,” he once told an interviewer. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.”
  • Elon Musk and Twitter are another excellent case in point. It’s painful and extraordinary to watch the ham-fisted way a brilliant engineering mind like Musk deals with even relatively simple literary concepts such as parody and satire. He obviously has never thought about them before.
  • The extraordinary ignorance on questions of society and history displayed by the men and women reshaping society and history has been the defining feature of the social-media era. Apparently, Mark Zuckerberg has read a great deal about Caesar Augustus, but I wish he’d read about the regulation of the pamphlet press in 17th-century Europe. It might have spared America the annihilation of social trust.
  • These failures don’t derive from mean-spiritedness or even greed, but from a willful obliviousness. The engineers do not recognize that humanistic questions—like, say, hermeneutics or the historical contingency of freedom of speech or the genealogy of morality—are real questions with real consequences
  • Everybody is entitled to their opinion about politics and culture, it’s true, but an opinion is different from a grounded understanding. The most direct path to catastrophe is to treat complex problems as if they’re obvious to everyone. You can lose billions of dollars pretty quickly that way.
  • As the technologists have ignored humanistic questions to their peril, the humanists have greeted the technological revolutions of the past 50 years by committing soft suicide.
  • As of 2017, the number of English majors had nearly halved since the 1990s. History enrollments have declined by 45 percent since 2007 alone
  • the humanities have not fundamentally changed their approach in decades, despite technology altering the entire world around them. They are still exploding meta-narratives like it’s 1979, an exercise in self-defeat.
  • Contemporary academia engages, more or less permanently, in self-critique on any and every front it can imagine.
  • the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations.
  • The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?
  • Despite the clear value of a humanistic education, its decline continues. Over the past 10 years, STEM has triumphed, and the humanities have collapsed. The number of students enrolled in computer science is now nearly the same as the number of students enrolled in all of the humanities combined.
  • now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems
  • Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated?
  • despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.
  • The humanists will need to understand natural-language processing because it’s the future of language
  • that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.
  • But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.
Javier E

Opinion | Chatbots Are a Danger to Democracy - The New York Times - 0 views

  • longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process
  • Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.
  • In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.
  • ...21 more annotations...
  • In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.
  • around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots.
  • a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side.
  • It’s irrelevant that current bots are not “smart” like we are, or that they have not achieved the consciousness and creativity hoped for by A.I. purists. What matters is their impact
  • In the past, despite our differences, we could at least take for granted that all participants in the political process were human beings. This is no longer true.
  • Increasingly we share the online debate chamber with nonhuman entities that are rapidly growing more advanced
  • a bot developed by the British firm Babylon reportedly achieved a score of 81 percent in the clinical examination for admission to the Royal College of General Practitioners. The average score for human doctors? 72 percent.
  • If chatbots are approaching the stage where they can answer diagnostic questions as well or better than human doctors, then it’s possible they might eventually reach or surpass our levels of political sophistication
  • chatbots could seriously endanger our democracy, and not just when they go haywire.
  • They’ll likely have faces and voices, names and personalities — all engineered for maximum persuasion. So-called “deep fake” videos can already convincingly synthesize the speech and appearance of real politicians.
  • The most obvious risk is that we are crowded out of our own deliberative processes by systems that are too fast and too ubiquitous for us to keep up with.
  • A related risk is that wealthy people will be able to afford the best chatbots.
  • in a world where, increasingly, the only feasible way of engaging in debate with chatbots is through the deployment of other chatbots also possessed of the same speed and facility, the worry is that in the long run we’ll become effectively excluded from our own party.
  • the wholesale automation of deliberation would be an unfortunate development in democratic history.
  • A blunt approach — call it disqualification — would be an all-out prohibition of bots on forums where important political speech takes place, and punishment for the humans responsible
  • The Bot Disclosure and Accountability Bill
  • would amend the Federal Election Campaign Act of 1971 to prohibit candidates and political parties from using any bots intended to impersonate or replicate human activity for public communication. It would also stop PACs, corporations and labor organizations from using bots to disseminate messages advocating candidates, which would be considered “electioneering communications.”
  • A subtler method would involve mandatory identification: requiring all chatbots to be publicly registered and to state at all times the fact that they are chatbots, and the identity of their human owners and controllers.
  • We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make only up to a specific number of online contributions per day, or a specific number of responses to a particular human?
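A cap like the one proposed above is simple to prototype. The sketch below is a hypothetical server-side rate limiter, not any real platform's API; the identifiers, method names, and default caps are illustrative assumptions.

```python
from collections import defaultdict
from datetime import date

class BotContributionLimiter:
    """Caps how many posts a registered bot may make per day,
    and how many replies it may direct at any one human."""

    def __init__(self, daily_cap=10, per_human_cap=3):
        self.daily_cap = daily_cap            # assumed defaults; a real platform would tune these
        self.per_human_cap = per_human_cap
        self.daily_counts = defaultdict(int)  # (bot_id, date) -> posts today
        self.reply_counts = defaultdict(int)  # (bot_id, human_id, date) -> replies today

    def allow_post(self, bot_id, reply_to_human=None, today=None):
        today = today or date.today()
        if self.daily_counts[(bot_id, today)] >= self.daily_cap:
            return False  # bot has exhausted its daily quota
        if reply_to_human is not None:
            if self.reply_counts[(bot_id, reply_to_human, today)] >= self.per_human_cap:
                return False  # too many replies to this one person
            self.reply_counts[(bot_id, reply_to_human, today)] += 1
        self.daily_counts[(bot_id, today)] += 1
        return True

limiter = BotContributionLimiter(daily_cap=2, per_human_cap=1)
print(limiter.allow_post("bot_a"))  # True
print(limiter.allow_post("bot_a"))  # True
print(limiter.allow_post("bot_a"))  # False: daily cap reached
```

The point of the per-human cap is the article's second suggestion: even below its daily quota, a bot cannot pile onto a single person in a debate.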
  • We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate
  • the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake.
kortanekev

Reasons To Believe : Anthropic Principle: A Precise Plan for Humanity - 0 views

  • The anthropic principle says that the universe appears "designed" for the sake of human life
  • To state the principle more dramatically, a preponderance of physical evidence points to humanity as the central theme of the cosmos.
  • Evidence of specific preparation for human existence shows up in the characteristics of the solar system, as well
  •  
    The Anthropic Principle is arguably the most human-centered sentiment there is. The principle holds that, given the timeline of creation, the universe appears to have been made specifically for us. But what about monkeys? They are as much a result of the laws of nature as we are; perhaps the universe was created for them! This idea of human centrality appears in religion as well as early science, through the geocentric model. But as we look further into the universe, it is clear there lies much more than us, and such sentiments ... 
Emily Freilich

The Man Who Would Teach Machines to Think - James Somers - The Atlantic - 1 views

  • Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
  • “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done
  • Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
  • ...43 more annotations...
  • “It depends on what you mean by artificial intelligence.”
  • Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
  • Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter.
  • How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
  • Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.”
  • In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself.
  • But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.
  • By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications.
  • Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force.
  • Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”
  • AI started working when it ditched humans as a model. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?
  • It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something
  • “Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes”
  • That’s what it means to understand. But how does understanding work?
  • How do you make a search engine that understands if you don’t know how you understand?
  • analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
  • there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
  • in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,”
  • For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington
  • The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited
  • A project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits.
  • Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG.
  • Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why.
  • When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
  • But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.
  • “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
  • “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.
  • So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward,
  • The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence
  • What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.)
  • By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result
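The counting idea behind this can be sketched in a few lines. The toy below is loosely in the spirit of the statistical approach described here, not Candide's actual alignment model: it estimates word-translation probabilities purely from co-occurrence counts over a handful of hypothetical sentence pairs.

```python
from collections import defaultdict

def train_word_translation(pairs):
    """Estimate P(french_word | english_word) by counting how often
    words co-occur across aligned sentence pairs.
    A crude stand-in for IBM-style alignment models."""
    counts = defaultdict(lambda: defaultdict(int))
    for en, fr in pairs:
        for e in en.split():
            for f in fr.split():
                counts[e][f] += 1
    # normalize raw counts into conditional probabilities
    probs = {}
    for e, fdict in counts.items():
        total = sum(fdict.values())
        probs[e] = {f: c / total for f, c in fdict.items()}
    return probs

# Hypothetical aligned pairs, standing in for the parliamentary proceedings
pairs = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("the blue house", "la maison bleue"),
    ("my house", "ma maison"),
]
probs = train_word_translation(pairs)
best = max(probs["house"], key=probs["house"].get)
print(best)  # "maison": it co-occurs with "house" more often than any other French word
```

With millions of real sentence pairs instead of four toy ones, the same counting logic starts producing usable translation probabilities — which is the "brilliant, but straightforward" point the article is making.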
  • Google Translate team can be made up of people who don’t speak most of the languages their application translates. “It’s a bang-for-your-buck argument,” Estelle says. “You probably want to hire more engineers instead” of native speakers.
  • But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself.
  • “Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”
  • For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy.
  • “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something.
  • Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
  • Of course, the folly of being above the fray is that you’re also not a part of it
  • As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
  • Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything
  • Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas.
  • Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”
sissij

iPhone manufacturer Foxconn plans to replace almost every human worker with robots - Th... - 0 views

  • The first phase of Foxconn’s automation plans involve replacing the work that is either dangerous or involves repetitious labor humans are unwilling to do.
  • In the long term, robots are cheaper than human labor. However, the initial investment can be costly.
  • There is, however, a central side effect to automation that would specifically benefit a company like Foxconn.
  • ...2 more annotations...
  • So much so in fact that Foxconn had to install suicide netting at factories throughout China and take measures to protect itself against employee litigation.
  • But in doing so, it will ultimately end up putting hundreds of thousands, if not millions, of people out of work.
  •  
    It has always been debatable to what extent robots can replace humans. Foxconn has long been criticized for how it treats its workers. By replacing humans with robots, the company can save a lot of money and avoid a lot of condemnation and lawsuits. I think robots are definitely going to replace humans in dangerous and tiring work, but it is very important that society is prepared for that change. The government should improve education so that people can explore other possibilities for what they can do. --Sissi (12/31/2016)
Javier E

New Statesman - All machine and no ghost? - 0 views

  • More subtly, there are many who insist that consciousness just reduces to brain states - a pang of regret, say, is just a surge of chemicals across a synapse. They are collapsers rather than deniers. Though not avowedly eliminative, this kind of view is tacitly a rejection of the very existence of consciousness
  • The dualist, by contrast, freely admits that consciousness exists, as well as matter, holding that reality falls into two giant spheres. There is the physical brain, on the one hand, and the conscious mind, on the other: the twain may meet at some point but they remain distinct entities.
  • Dualism makes the mind too separate, thereby precluding intelligible interaction and dependence.
  • ...11 more annotations...
  • At this point the idealist swooshes in: ladies and gentlemen, there is nothing but mind! There is no problem of interaction with matter because matter is mere illusion
  • idealism has its charms but taking it seriously requires an antipathy to matter bordering on the maniacal. Are we to suppose that material reality is just a dream, a baseless fantasy, and that the Big Bang was nothing but the cosmic spirit having a mental sneezing fit?
  • panpsychism: even the lowliest of material things has a streak of sentience running through it, like veins in marble. Not just parcels of organic matter, such as lizards and worms, but also plants and bacteria and water molecules and even electrons. Everything has its primitive feelings and minute allotment of sensation.
  • The trouble with panpsychism is that there just isn't any evidence of the universal distribution of consciousness in the material world.
  • it occurred to me that the problem might lie not in nature but in ourselves: we just don't have the faculties of comprehension that would enable us to remove the sense of mystery. Ontologically, matter and consciousness are woven intelligibly together but epistemologically we are precluded from seeing how. I used Noam Chomsky's notion of "mysteries of nature" to describe the situation as I saw it. Soon, I was being labelled (by Owen Flanagan) a "mysterian"
  • The more we know of the brain, the less it looks like a device for creating consciousness: it's just a big collection of biological cells and a blur of electrical activity - all machine and no ghost.
  • mystery is quite pervasive, even in the hardest of sciences. Physics is a hotbed of mystery: space, time, matter and motion - none of it is free of mysterious elements. The puzzles of quantum theory are just a symptom of this widespread lack of understanding
  • The human intellect grasps the natural world obliquely and glancingly, using mathematics to construct abstract representations of concrete phenomena, but what the ultimate nature of things really is remains obscure and hidden. How everything fits together is particularly elusive, perhaps reflecting the disparate cognitive faculties we bring to bear on the world (the senses, introspection, mathematical description). We are far from obtaining a unified theory of all being and there is no guarantee that such a theory is accessible by finite human intelligence.
  • real naturalism begins with a proper perspective on our specifically human intelligence. Palaeoanthropologists have taught us that the human brain gradually evolved from ancestral brains, particularly in concert with practical toolmaking, centring on the anatomy of the human hand. This history shaped and constrained the form of intelligence now housed in our skulls (as the lifestyle of other species form their set of cognitive skills). What chance is there that an intelligence geared to making stone tools and grounded in the contingent peculiarities of the human hand can aspire to uncover all the mysteries of the universe? Can omniscience spring from an opposable thumb? It seems unlikely, so why presume that the mysteries of consciousness will be revealed to a thumb-shaped brain like ours?
  • The "mysterianism" I advocate is really nothing more than the acknowledgment that human intelligence is a local, contingent, temporal, practical and expendable feature of life on earth - an incremental adaptation based on earlier forms of intelligence that no one would regard as faintly omniscient. The current state of the philosophy of mind, from my point of view, is just a reflection of one evolutionary time-slice of a particular bipedal species on a particular humid planet at this fleeting moment in cosmic history - as is everything else about the human animal. There is more ignorance in it than knowledge.
Javier E

The psychology of hate: How we deny human beings their humanity - Salon.com - 0 views

  • The cross-cultural psychologist Gustav Jahoda catalogued how Europeans since the time of the ancient Greeks viewed those living in relatively primitive cultures as lacking a mind in one of two ways: either lacking self-control and emotions, like an animal, or lacking reason and intellect, like a child. So foreign in appearance, language, and manner, “they” did not simply become other people, they became lesser people. More specifically, they were seen as having lesser minds, diminished capacities to either reason or feel.
  • In the early 1990s, California State Police commonly referred to crimes involving young black men as NHI—No Humans Involved.
  • The essence of dehumanization is, therefore, failing to recognize the fully human mind of another person. Those who fight against dehumanization typically deal with extreme cases that can make it seem like a relatively rare phenomenon. It is not. Subtle versions are all around us.
  • ...15 more annotations...
  • Even doctors—those whose business is to treat others humanely— can remain disengaged from the minds of their patients, particularly when those patients are easily seen as different from the doctors themselves. Until the early 1990s, for instance, it was routine practice for infants to undergo surgery without anesthesia. Why? Because at the time, doctors did not believe that infants were able to experience pain, a fundamental capacity of the human mind.
  • Your sixth sense functions only when you engage it. When you do not, you may fail to recognize a fully human mind that is right before your eyes.
  • Although it is indeed true that the ability to read the minds of others exists along a spectrum with stable individual differences, I believe that the more useful knowledge comes from understanding the moment-to-moment, situational influences that can lead even the most social person—yes, even you and me—to treat others as mindless animals or objects.
  • None of the cases described in this chapter so far involve people with chronic and stable personality disorders. Instead, they all come from predictable contexts in which people’s sixth sense remained disengaged for one fundamental reason: distance.
  • This three-part chain—sharing attention, imitating action, and imitation creating experience—shows one way in which your sixth sense works through your physical senses. More important, it also shows how your sixth sense could remain disengaged, leaving you disconnected from the minds of others. Close your eyes, look away, plug your ears, stand too far away to see or hear, or simply focus your attention elsewhere, and your sixth sense may not be triggered.
  • Distance keeps your sixth sense disengaged for at least two reasons. First, your ability to understand the minds of others can be triggered by your physical senses. When you’re too far away in physical space, those triggers do not get pulled. Second, your ability to understand the minds of others is also engaged by your cognitive inferences. Too far away in psychological space—too different, too foreign, too other—and those triggers, again, do not get pulled
  • For psychologists, distance is not just physical space. It is also psychological space, the degree to which you feel closely connected to someone else. You are describing psychological distance when you say that you feel “distant” from your spouse, “out of touch” with your kids’ lives, “worlds apart” from a neighbor’s politics, or “separated” from your employees. You don’t mean that you are physically distant from other people; you mean that you feel psychologically distant from them in some way
  • Interviews with U.S. soldiers in World War II found that only 15 to 20 percent were able to discharge their weapons at the enemy in close firefights. Even when they did shoot, soldiers found it hard to hit their human targets. In the U.S. Civil War, muskets were capable of hitting a pie plate at 70 yards and soldiers could typically reload anywhere from 4 to 5 times per minute. Theoretically, a regiment of 200 soldiers firing at a wall of enemy soldiers 100 feet wide should be able to kill 120 on the first volley. And yet the kill rate during the Civil War was closer to 1 to 2 men per minute, with the average distance of engagement being only 30 yards.
  • Modern armies now know that they have to overcome these empathic urges, so soldiers undergo relentless training that desensitizes them to close combat, so that they can do their jobs. Modern technology also allows armies to kill more easily because it enables killing at such a great physical distance. Much of the killing by U.S. soldiers now comes through the hands of drone pilots watching a screen from a trailer in Nevada, with their sixth sense almost completely disengaged.
  • Other people obviously do not need to be standing right in front of you for you to imagine what they are thinking or feeling or planning. You can simply close your eyes and imagine it.
  • The MPFC and a handful of other brain regions undergird the inferential component of your sixth sense. When this network of brain regions is engaged, you are thinking about others’ minds. Failing to engage this region when thinking about other people is then a solid indication that you’re overlooking their minds.
  • Research confirms that the MPFC is engaged more when you’re thinking about yourself, your close friends and family, and others who have beliefs similar to your own. It is activated when you care enough about others to care what they are thinking, and not when you are indifferent to others
  • As people become more and more different from us, or more distant from our immediate social networks, they become less and less likely to engage our MPFC. When we don’t engage this region, others appear relatively mindless, something less than fully human.
  • The mistake that can arise when you fail to engage with the minds of others is that you may come to think of them as relatively mindless. That is, you may come to think that these others have less going on between their ears than, say, you do.
  • It’s not only free will that other minds might seem to lack. This lesser minds effect has many manifestations, including what appears to be a universal tendency to assume that others’ minds are less sophisticated and more superficial than one’s own. Members of distant out-groups, ranging from terrorists to poor hurricane victims to political opponents, are also rated as less able to experience complicated emotions, such as shame, pride, embarrassment, and guilt than close members of one’s own group.
Sophia C

BBC News - Viewpoint: Human evolution, from tree to braid - 0 views

  • What was, in my view, a logical conclusion reached by the authors was too much for some researchers to take.
  • The conclusion of the Dmanisi study was that the variation in skull shape and morphology observed in this small sample, derived from a single population of Homo erectus, matched the entire variation observed among African fossils ascribed to three species - H. erectus, H. habilis and H. rudolfensis.
  • ...13 more annotations...
  • They all had to be the same species.
  • It was not surprising to find that Neanderthals and modern humans interbred, a clear expectation of the biological species concept.
  • I wonder when the penny will drop: when we have five pieces of a 5,000-piece jigsaw puzzle, every new bit that we add is likely to change the picture.
  • The identity of the fourth player remains unknown but it was an ancient lineage that had been separate for probably over a million years. H. erectus seems a likely candidate. Whatever the name we choose to give this mystery lineage, what these results show is that gene flow was possible not just among contemporaries but also between ancient and more modern lineages.
  • Scientists succeeded in extracting the most ancient mitochondrial DNA so far, from the Sima de los Huesos site in Atapuerca, Spain.
  • We have built a picture of our evolution based on the morphology of fossils and it was wrong.
    • Sophia C
       
      Kuhn
  • when we know how plastic - or easily changeable - skull shape is in humans. And our paradigms must also change.
  • We must abandon, once and for all, views of modern human superiority over archaic (ancient) humans. The terms "archaic" and "modern" lose all meaning as do concepts of modern human replacement of all other lineages.
  • The deep-rooted shackles that have sought to link human evolution with stone tool-making technological stages - the Stone Ages - even when we have known that these have overlapped with each other for half-a-million years in some instances.
  • The world of our biological and cultural evolution was far too fluid for us to constrain it into a few stages linked by transitions.
  • We have to flesh out the genetic information and this is where archaeology comes into the picture.
  • Rather than focus on differences between modern humans and Neanderthals, what the examples show is the range of possibilities open to humans (Neanderthals included) in different circumstances.
  • research using new technology on old archaeological sites, as at La Chapelle; and
Keiko E

Book Review: The Moral Lives of Animals - WSJ.com - 0 views

  • There is often less to such accounts than meets the eye. What appear on the surface to be instances of insight, reflection, empathy or higher purpose frequently turn out to be a fairly simple learned behavior, of a kind that every sentient species from humans to earthworms exhibits all the time.
  • The deeper problem, as Mr. Peterson more frankly acknowledges, is that it is the height of anthropomorphic absurdity to project human values and behaviors onto other species—and then to judge them by their similarity to us
  • Recognizing the difficulty of boosting animals, his approach is instead to deflate humans: in particular, to suggest that there is much less to even so vaunted a human trait as morality than we like to believe. Rather than a sophisticated system of language-based laws, philosophical arguments and abstract values that sets mankind apart, morality is, in his view, a set of largely primitive psycho logical instincts.
  • ...2 more annotations...
  • And Mr. Peterson simply ignores several decades worth of recent studies in cognitive science by researchers such as David Povinelli, Bruce Hood, Michael Tomasello and Elisabetta Visalberghi, which have elucidated very real differences between human and nonhuman minds in the realm of conceptual reasoning, particularly with respect to what has been termed "theory of mind." This is the uniquely human ability to have thoughts about thoughts and to perceive that other minds exist and that they can hold ideas and beliefs different from one's own. While human and animal minds share a broadly similar ability to learn from experience, formulate intentions and store memories, careful experiments have repeatedly come up empty when attempting to establish the existence of a theory of mind in nonhumans.
  • This not only detracts from the argument Mr. Peterson seeks to make but reinforces the sense of intellectual parochialism that is the book's chief flaw. Modern evolutionary psychology and cognitive science have done much to illuminate the evolutionary instincts that animate complex human mental processes. Unfortunately, in his determination to level the playing field between human and nonhuman minds, Mr. Peterson has ignored at least half his story.