TOK Friends / Group items tagged syntax

Emily Freilich

How Brains See Music as Language - Adrienne LaFrance - The Atlantic - 0 views

  • jazz improvisation in Manhattan, an experience that's a bit like overhearing a great conversation.
  • putting a musician in a functional MRI machine with a keyboard, and having him play a memorized piece of music and then a made-up piece of music as part of an improvisation with another musician in a control room.
  • The brains of jazz musicians who are engaged with other musicians in spontaneous improvisation show robust activation in the same brain areas traditionally associated with spoken language and syntax.
  • "Though it's difficult to get to the point where you're comfortable enough with music as a language where you can speak freely."
  • "The answer to that probably lies more in figuring out what the nature of language is than what the nature of music is," said Mike Pope, a Baltimore-based pianist and bassist who participated in the study. "When you're talking about something, you're not thinking about how your mouth is moving and you're not thinking about how the words are spelled and you're not thinking about grammar. With music, it's the same thing."
  • "Music communication, we know it means something to the listener, but that meaning can't really be described,
  • So if music is a language without set meaning, what does that tell us about the nature of music?
  • During a spoken conversation, the brain is busy processing the structure and syntax of language, as well the semantics or meaning of the words. But Limb and his colleagues found that brain areas linked to meaning shut down during improvisational jazz interactions.
  • reason to suspect that the auditory brain may have been designed to hear music, and that speech is a happy byproduct
Javier E

Dog Might Provide Clues on How Language Is Acquired - NYTimes.com - 0 views

  • their experiments “provide clear evidence that Chaser acquired referential understanding of nouns, an ability normally attributed to children.”
  • Dr. Kaminski said she would not go as far as saying that Chaser’s accomplishments are a step toward language. They show that the dog can combine words for different actions with words for objects. A step toward syntax, she said, would be to show that changing the order of words alters the meaning that Chaser ascribes to them.
  • His goal is to develop methods that will help increase communication between people and dogs. “We are interested in teaching Chaser a receptive, rudimentary language,” he said.
  • Dr. Pilley said that most border collies, with special training, “could be pretty close to where Chaser is.” When he told Chaser’s dog breeder of the experiment, “he wasn’t surprised about the dog’s ability, just that I had had the patience to teach her,” Dr. Pilley said.
Javier E

One of Us - Lapham's Quarterly - 0 views

  • On what seems like a monthly basis, scientific teams announce the results of new experiments, adding to a preponderance of evidence that we’ve been underestimating animal minds, even those of us who have rated them fairly highly
  • an international group of prominent neuroscientists meeting at the University of Cambridge issued “The Cambridge Declaration on Consciousness in Non-Human Animals,” a document stating that “humans are not unique in possessing the neurological substrates that generate consciousness.” It goes further to conclude that numerous documented animal behaviors must be considered “consistent with experienced feeling states.”
  • Only with the Greeks does there enter the notion of a formal divide between our species, our animal, and every other on earth.
  • there’s that exquisite verse, one of the most beautiful in the Bible, the one that says if God cares deeply about sparrows, don’t you think He cares about you? One is so accustomed to dwelling on the second, human, half of the equation, the comforting part, but when you put your hand over that and consider only the first, it’s a little startling: God cares deeply about the sparrows. Not just that, He cares about them individually. “Are not five sparrows sold for two pennies?” Jesus says. “Yet not one of them is forgotten in God’s sight.”
  • The modern conversation on animal consciousness proceeds, with the rest of the Enlightenment, from the mind of René Descartes, whose take on animals was vividly (and approvingly) paraphrased by the French philosopher Nicolas Malebranche: they “eat without pleasure, cry without pain, grow without knowing it; they desire nothing, fear nothing, know nothing.” Descartes’ term for them was automata
  • In On the Origin of Species, Charles Darwin made the intriguing claim that among the naturalists he knew it was consistently the case that the better a researcher got to know a certain species, the more each individual animal’s actions appeared attributable to “reason and the less to unlearnt instinct.” The more you knew, the more you suspected that they were rational. That marks an important pivot, that thought, insofar as it took place in the mind of someone devoted to extremely close and meticulous study of living animals, a mind that had trained itself not to sentimentalize.
  • The sheer number and variety of experiments carried out in the twentieth century—and with, if anything, a renewed intensity in the twenty-first—exceeds summary. Reasoning, language, neurology, the science of emotions—every chamber where “consciousness” is thought to hide has been probed. Birds and chimps and dolphins have been made to look at themselves in mirrors—to observe whether, on the basis of what they see, they groom or preen (a measure, if somewhat arbitrary, of self-awareness). Dolphins have been found to grieve. Primates have learned symbolic or sign languages and then been interrogated with them. Their answers show thinking but have proved stubbornly open to interpretation on the issue of “consciousness,” with critics warning, as always, about the dangers of anthropomorphism, animal-rights bias, etc.
  • If we put aside the self-awareness standard—and really, how arbitrary and arrogant is that, to take the attribute of consciousness we happen to possess over all creatures and set it atop the hierarchy, proclaiming it the very definition of consciousness (Georg Christoph Lichtenberg wrote something wise in his notebooks, to the effect of: only a man can draw a self-portrait, but only a man wants to)—it becomes possible to say at least the following: the overwhelming tendency of all this scientific work, of its results, has been toward more consciousness. More species having it, and species having more of it than assumed.
  • The animal kingdom is symphonic with mental activity, and of its millions of wavelengths, we’re born able to understand the minutest sliver. The least we can do is have a proper respect for our ignorance.
  • The philosopher Thomas Nagel wrote an essay in 1974 titled, “What Is It Like To Be a Bat?”, in which he put forward perhaps the least overweening, most useful definition of “animal consciousness” ever written, one that channels Spinoza’s phrase about “that nature belonging to him wherein he has his being.” Animal consciousness occurs, Nagel wrote, when “there is something that it is to be that organism—something it is like for the organism.” The strangeness of his syntax carries the genuine texture of the problem. We’ll probably never be able to step far enough outside of our species-reality to say much about what is going on with them, beyond saying how like or unlike us they are. Many things are conscious on the earth, and we are one, and our consciousness feels like this; one of the things it causes us to do is doubt the existence of the consciousness of the other millions of species. But it also allows us to imagine a time when we might stop doing that.
Emily Freilich

The Man Who Would Teach Machines to Think - James Somers - The Atlantic - 1 views

  • Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
  • “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done.”
  • Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
  • “It depends on what you mean by artificial intelligence.”
  • Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
  • Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter.
  • How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
  • In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself.
  • Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.”
  • But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.
  • By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications.
  • Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force.
  • Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”
  • AI started working when it ditched humans as a model. That's the thrust of the analogy: Airplanes don't flap their wings; why should computers think?
  • It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something
  • How do you make a search engine that understands if you don’t know how you understand?
  • “Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes”
  • That’s what it means to understand. But how does understanding work?
  • analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
  • there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
  • in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,”
  • For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington
  • “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.
  • A project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits.
  • Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG.
  • Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why.
  • When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
  • But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.
  • “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
  • The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited
  • So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward,
  • The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence
  • What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.)
  • By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you'll be able to enter a sentence whose translation you don't know and get a reasonable result. (A toy sketch of this pairs-based training appears after these annotations.)
  • The Google Translate team can be made up of people who don't speak most of the languages their application translates. "It's a bang-for-your-buck argument," Estelle says. "You probably want to hire more engineers instead" of native speakers.
  • But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself.
  • “Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”
  • “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something.
  • For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy.
  • Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
  • Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything
  • As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
  • Of course, the folly of being above the fray is that you’re also not a part of it
  • Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas.
  • Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”
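
A note on the Candide technique highlighted above: the pairs-based training can be illustrated in miniature. The following Python sketch implements the statistical idea in an IBM Model 1 style (expectation-maximization over aligned sentence pairs). It is not IBM's actual Candide code, and the three toy pairs merely stand in for the 2.2 million parliamentary pairs mentioned in the annotations.

    from collections import defaultdict

    # Toy aligned corpus; a stand-in for Candide's 2.2 million sentence pairs.
    pairs = [
        ("the house is green".split(), "la maison est verte".split()),
        ("the house is small".split(), "la maison est petite".split()),
        ("the car is green".split(), "la voiture est verte".split()),
    ]

    # t[(f, e)]: strength of "English word e translates to French word f",
    # initialised uniformly over words that ever co-occur in a pair.
    t = {(f, e): 1.0 for en, fr in pairs for e in en for f in fr}

    for _ in range(20):  # expectation-maximization
        count = defaultdict(float)
        total = defaultdict(float)
        for en, fr in pairs:
            for f in fr:
                norm = sum(t[(f, e)] for e in en)
                for e in en:  # E-step: fractional alignment counts
                    c = t[(f, e)] / norm
                    count[(f, e)] += c
                    total[e] += c
        for (f, e), c in count.items():  # M-step: re-estimate t from the counts
            t[(f, e)] = c / total[e]

    def translate_word(e):
        """Pick the French word most strongly aligned with English word e."""
        return max((f for f, ee in t if ee == e), key=lambda f: t[(f, e)])

    print(translate_word("house"))  # -> maison
    print(translate_word("green"))  # -> verte

Calibrated on enough real pairs, the same count-and-re-estimate loop aligns words across languages without anyone writing down a rule of French grammar, which is exactly the trade Hofstadter laments and the engineers celebrate.
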
Javier E

AMA: How a Weird Internet Thing Became a Mainstream Delight - Alexis C. Madrigal - The ... - 0 views

  • hundreds of people have offered themselves up to be interrogated via Reddit's crowdsourced question-and-answer sessions. They open a new thread on the social network and say, for example, "IamA nanny for a super-rich family in China AMA!"
  • Then, the assembled Redditors ask whatever they want. Questions are voted up and down, and generally speaking, the most popular ones get answered. These interviews can last for as little as an hour or go on for several days. Politicians tend to play things pretty straight, but the regular people and niche celebrities tend to open up in fascinating ways.
  • Over the last several years, the IamA subreddit has gone from interesting curiosity to a juggernaut of a media brand. Its syntax and abbreviations have invaded the public consciousness like Wired's aged Wired/Tired/Expired rubric. It's a common Twitter joke now to say, "I [did something commonplace], ask me anything." 
  • Reddit was about to become the preeminent place for "real 'expert'" AMAs that were extremely useful and enlightening.
  • AMAs among common folk focus on dishing on what sex, disease, or jobs are really like. The celebrity versions borrow the same idea, but they serve up inside information on celebrity itself (generally speaking) or politics itself. 
  • The AMA is supposed to expose the mechanism. The AMA is about exposing the "inside conversations." The AMA is like the crowdsourced version of those moments when Kevin Spacey turns to the camera in House of Cards and breaks things down. 
Emily Freilich

BBC News - Apostrophe now: Bad grammar and the people who hate it - 0 views

  • The research arm of dating site OKCupid looked at 500,000 first contacts and concluded that "netspeak, bad grammar and bad spelling are huge turn-offs"
  • correct use of apostrophes was appealing. Using "don't" and "won't" caused better than average response rates - 36% and 37% respectively, according to the research.
  • But grammar can be a linguistic minefield. Grammarians argue it ensures clarity and elegance. For others, it is a series of archaic rules beloved of pedants, bearing little relation to how people really communicate.
  • It isn't always obvious what constitutes good and bad grammar.
  • Grammarians push Standard English at the expense of other forms, he asserts. It's an elitist view that ignores, for example, Americanisms and all the different ways of communicating online. Context and appropriateness are what really matter, Crystal believes.
  • Grammar is connected to values in people's minds. "Grammar peevers" in projects such as the Apostrophe Protection Society see "a connection between secure syntax and moral excellence",
  • Or is it just a handy shortcut? People make judgements about each other all the time for superficial reasons. Basing such assessments on use of language is fairer than the alternative,
Javier E

Joshua Foer: John Quijada and Ithkuil, the Language He Invented : The New Yorker - 2 views

  • Languages are something of a mess. They evolve over centuries through an unplanned, democratic process that leaves them teeming with irregularities, quirks, and words like “knight.” No one who set out to design a form of communication would ever end up with anything like English, Mandarin, or any of the more than six thousand languages spoken today.“Natural languages are adequate, but that doesn’t mean they’re optimal,” John Quijada, a fifty-four-year-old former employee of the California State Department of Motor Vehicles, told me. In 2004, he published a monograph on the Internet that was titled “Ithkuil: A Philosophical Design for a Hypothetical Language.” Written like a linguistics textbook, the fourteen-page Web site ran to almost a hundred and sixty thousand words. It documented the grammar, syntax, and lexicon of a language that Quijada had spent three decades inventing in his spare time. Ithkuil had never been spoken by anyone other than Quijada, and he assumed that it never would be.
  • his “greater goal” was “to attempt the creation of what human beings, left to their own devices, would never create naturally, but rather only by conscious intellectual effort: an idealized language whose aim is the highest possible degree of logic, efficiency, detail, and accuracy in cognitive expression via spoken human language, while minimizing the ambiguity, vagueness, illogic, redundancy, polysemy (multiple meanings) and overall arbitrariness that is seemingly ubiquitous in natural human language.”
  • Ithkuil, one Web site declared, “is a monument to human ingenuity and design.” It may be the most complete realization of a quixotic dream that has entranced philosophers for centuries: the creation of a more perfect language.
  • Since at least the Middle Ages, philosophers and philologists have dreamed of curing natural languages of their flaws by constructing entirely new idioms according to orderly, logical principles.
  • Inventing new forms of speech is an almost cosmic urge that stems from what the linguist Marina Yaguello, the author of “Lunatic Lovers of Language,” calls “an ambivalent love-hate relationship.” Language creation is pursued by people who are so in love with what language can do that they hate what it doesn’t. “I don’t believe any other fantasy has ever been pursued with so much ardor by the human spirit, apart perhaps from the philosopher’s stone or the proof of the existence of God; or that any other utopia has caused so much ink to flow, apart perhaps from socialism,”
  • What if, they wondered, you could create a universal written language that could be understood by anyone, a set of “real characters,” just as the creation of Arabic numerals had done for counting? “This writing will be a kind of general algebra and calculus of reason, so that, instead of disputing, we can say that ‘we calculate,’ ” Leibniz wrote, in 1679.
  • In his “Essay Towards a Real Character, and a Philosophical Language,” from 1668, Wilkins laid out a sprawling taxonomic tree that was intended to represent a rational classification of every concept, thing, and action in the universe. Each branch along the tree corresponded to a letter or a syllable, so that assembling a word was simply a matter of tracing a set of forking limbs
  • Solresol, the creation of a French musician named Jean-François Sudre, was among the first of these universal languages to gain popular attention. It had only seven syllables: Do, Re, Mi, Fa, So, La, and Si. Words could be sung, or performed on a violin. Or, since the language could also be translated into the seven colors of the rainbow, sentences could be woven into a textile as a stream of colors.
  • “I had this realization that every individual language does at least one thing better than every other language,” he said. For example, the Australian Aboriginal language Guugu Yimithirr doesn’t use egocentric coördinates like “left,” “right,” “in front of,” or “behind.” Instead, speakers use only the cardinal directions. They don’t have left and right legs but north and south legs, which become east and west legs upon turning ninety degrees
  • Among the Wakashan Indians of the Pacific Northwest, a grammatically correct sentence can’t be formed without providing what linguists refer to as “evidentiality,” inflecting the verb to indicate whether you are speaking from direct experience, inference, conjecture, or hearsay.
  • Quijada began wondering, “What if there were one single language that combined the coolest features from all the world’s languages?”
  • he started scribbling notes on an entirely new grammar that would eventually incorporate not only Wakashan evidentiality and Guugu Yimithirr coördinates but also Niger-Kordofanian aspectual systems, the nominal cases of Basque, the fourth-person referent found in several nearly extinct Native American languages, and a dozen other wild ways of forming sentences.
  • he discovered “Metaphors We Live By,” a seminal book, published in 1980, by the cognitive linguists George Lakoff and Mark Johnson, which argues that the way we think is structured by conceptual systems that are largely metaphorical in nature. Life is a journey. Time is money. Argument is war. For better or worse, these figures of speech are profoundly embedded in how we think.
  • I asked him if he could come up with an entirely new concept on the spot, one for which there was no word in any existing language. He thought about it for a moment. “Well, no language, as far as I know, has a single word for that chin-stroking moment you get, often accompanied by a frown on your face, when someone expresses an idea that you’ve never thought of and you have a moment of suddenly seeing possibilities you never saw before.” He paused, as if leafing through a mental dictionary. “In Ithkuil, it’s ašţal.”
  • Many conlanging projects begin with a simple premise that violates the inherited conventions of linguistics in some new way. Aeo uses only vowels. Kēlen has no verbs. Toki Pona, a language inspired by Taoist ideals, was designed to test how simple a language could be. It has just a hundred and twenty-three words and fourteen basic sound units. Brithenig is an answer to the question of what English might have sounded like as a Romance language, if vulgar Latin had taken root on the British Isles. Láadan, a feminist language developed in the early nineteen-eighties, includes words like radíidin, defined as a “non-holiday, a time allegedly a holiday but actually so much a burden because of work and preparations that it is a dreaded occasion; especially when there are too many guests and none of them help.”
  • most conlangers come to their craft by way of fantasy and science fiction. J. R. R. Tolkien, who called conlanging his “secret vice,” maintained that he created the “Lord of the Rings” trilogy for the primary purpose of giving his invented languages, Quenya, Sindarin, and Khuzdul, a universe in which they could be spoken. And arguably the most commercially successful invented language of all time is Klingon, which has its own translation of “Hamlet” and a dictionary that has sold more than three hundred thousand copies.
  • He imagined that Ithkuil might be able to do what Lakoff and Johnson said natural languages could not: force its speakers to precisely identify what they mean to say. No hemming, no hawing, no hiding true meaning behind jargon and metaphor. By requiring speakers to carefully consider the meaning of their words, he hoped that his analytical language would force many of the subterranean quirks of human cognition to the surface, and free people from the bugs that infect their thinking.
  • Brown based the grammar for his ten-thousand-word language, called Loglan, on the rules of formal predicate logic used by analytical philosophers. He hoped that, by training research subjects to speak Loglan, he might turn them into more logical thinkers. If we could change how we think by changing how we speak, then the radical possibility existed of creating a new human condition.
  • today the stronger versions of the Sapir-Whorf hypothesis have “sunk into . . . disrepute among respectable linguists,” as Guy Deutscher writes, in “Through the Looking Glass: Why the World Looks Different in Other Languages.” But, as Deutscher points out, there is evidence to support the less radical assertion that the particular language we speak influences how we perceive the world. For example, speakers of gendered languages, like Spanish, in which all nouns are either masculine or feminine, actually seem to think about objects differently depending on whether the language treats them as masculine or feminine
  • The final version of Ithkuil, which Quijada published in 2011, has twenty-two grammatical categories for verbs, compared with the six—tense, aspect, person, number, mood, and voice—that exist in English. Eighteen hundred distinct suffixes further refine a speaker’s intent. Through a process of laborious conjugation that would befuddle even the most competent Latin grammarian, Ithkuil requires a speaker to home in on the exact idea he means to express, and attempts to remove any possibility for vagueness.
  • Every language has its own phonemic inventory, or library of sounds, from which a speaker can string together words. Consonant-poor Hawaiian has just thirteen phonemes. English has around forty-two, depending on dialect. In order to pack as much meaning as possible into each word, Ithkuil has fifty-eight phonemes. The original version of the language included a repertoire of grunts, wheezes, and hacks that are borrowed from some of the world’s most obscure tongues. One particular hard-to-make clicklike sound, a voiceless uvular ejective affricate, has been found in only a few other languages, including the Caucasian language Ubykh, whose last native speaker died in 1992.
  • Human interactions are governed by a set of implicit codes that can sometimes seem frustratingly opaque, and whose misreading can quickly put you on the outside looking in. Irony, metaphor, ambiguity: these are the ingenious instruments that allow us to mean more than we say. But in Ithkuil ambiguity is quashed in the interest of making all that is implicit explicit. An ironic statement is tagged with the verbal affix ’kçç. Hyperbolic statements are inflected by the letter ’m.
  • “I wanted to use Ithkuil to show how you would discuss philosophy and emotional states transparently,” Quijada said. To attempt to translate a thought into Ithkuil requires investigating a spectrum of subtle variations in meaning that are not recorded in any natural language. You cannot express a thought without first considering all the neighboring thoughts that it is not. Though words in Ithkuil may sound like a hacking cough, they have an inherent and unavoidable depth. “It’s the ideal language for political and philosophical debate—any forum where people hide their intent or obfuscate behind language,” Quijada continued.
  • In Ithkuil, the difference between glimpsing, glancing, and gawking is the mere flick of a vowel. Each of these distinctions is expressed simply as a conjugation of the root word for vision. Hunched over the dining-room table, Quijada showed me how he would translate “gawk” into Ithkuil. First, though, since words in Ithkuil are assembled from individual atoms of meaning, he had to engage in some introspection about what exactly he meant to say. For fifteen minutes, he flipped backward and forward through his thick spiral-bound manuscript, scratching his head, pondering each of the word’s aspects, as he packed the verb with all of gawking’s many connotations. As he assembled the evolving word from its constituent meanings, he scribbled its pieces on a notepad. He added the “second degree of the affix for expectation of outcome” to suggest an element of surprise that is more than mere unpreparedness but less than outright shock, and the “third degree of the affix for contextual appropriateness” to suggest an element of impropriety that is less than scandalous but more than simply eyebrow-raising. As he rapped his pen against the notepad, he paged through his manuscript in search of the third pattern of the first stem of the root for “shock” to suggest a “non-volitional physiological response,” and then, after several moments of contemplation, he decided that gawking required the use of the “resultative format” to suggest “an event which occurs in conjunction with the conflated sense but is also caused by it.” He eventually emerged with a tiny word that hardly rolled off the tongue: apq’uxasiu. He spoke the first clacking syllable aloud a couple of times before deciding that he had the pronunciation right, and then wrote it down in the script he had invented for printed Ithkuil.
  • “You can make up words by the millions to describe concepts that have never existed in any language before,” he said.
  • Neither Sapir nor Whorf formulated a definitive version of the hypothesis that bears their names, but in general the theory argues that the language we speak actually shapes our experience of reality. Speakers of different languages think differently. Stronger versions of the hypothesis go even further than this, to suggest that language constrains the set of possible thoughts that we can have. In 1955, a sociologist and science-fiction writer named James Cooke Brown decided he would test the Sapir-Whorf hypothesis by creating a “culturally neutral” “model language” that might recondition its speakers’ brains.
  • “We think that when a person learns Ithkuil his brain works faster,” Vishneva told him, in Russian. She spoke through a translator, as neither she nor Quijada was yet fluent in their shared language. “With Ithkuil, you always have to be reflecting on yourself. Using Ithkuil, we can see things that exist but don’t have names, in the same way that Mendeleyev’s periodic table showed gaps where we knew elements should be that had yet to be discovered.”
  • Lakoff, who is seventy-one, bearded, and, like Quijada, broadly built, seemed to have read a fair portion of the Ithkuil manuscript and familiarized himself with the language’s nuances.“There are a whole lot of questions I have about this,” he told Quijada, and then explained how he felt Quijada had misread his work on metaphor. “Metaphors don’t just show up in language,” he said. “The metaphor isn’t in the word, it’s in the idea,” and it can’t be wished away with grammar.“For me, as a linguist looking at this, I have to say, ‘O.K., this isn’t going to be used.’ It has an assumption of efficiency that really isn’t efficient, given how the brain works. It misses the metaphor stuff. But the parts that are successful are really nontrivial. This may be an impossible language,” he said. “But if you think of it as a conceptual-art project I think it’s fascinating.”
kushnerha

BBC - Future - Will emoji become a new language? - 2 views

  • Emoji are now used in around half of all sentences on sites like Instagram, and Facebook looks set to introduce them alongside the famous “like” button as a way of expressing your reaction to a post.
  • If you were to believe the headlines, this is just the tipping point: some outlets have claimed that emoji are an emerging language that could soon compete with English in global usage. To many, this would be an exciting evolution of the way we communicate; to others, it is linguistic Armageddon.
  • Do emoji show the same characteristics of other communicative systems and actual languages? And what do they help us to express that words alone can’t say?When emoji appear with text, they often supplement or enhance the writing. This is similar to gestures that appear along with speech. Over the past three decades, research has shown that our hands provide important information that often transcends and clarifies the message in speech. Emoji serve this function too – for instance, adding a kissy or winking face can disambiguate whether a statement is flirtatiously teasing or just plain mean.
  • This is a key point about language use: rarely is natural language ever limited to speech alone. When we are speaking, we constantly use gestures to illustrate what we mean. For this reason, linguists say that language is “multi-modal”. Writing takes away that extra non-verbal information, but emoji may allow us to re-incorporate it into our text.
  • Emoji are not always used as embellishments, however – sometimes, strings of the characters can themselves convey meaning in a longer sequence on their own. But to constitute their own language, they would need a key component: grammar.
  • A grammatical system is a set of constraints that governs how the meaning of an utterance is packaged in a coherent way. Natural language grammars have certain traits that distinguish them. For one, they have individual units that play different roles in the sequence – like nouns and verbs in a sentence. Also, grammar is different from meaning
  • When emoji are isolated, they are primarily governed by simple rules related to meaning alone, without these more complex rules. For instance, according to research by Tyler Schnoebelen, people often create strings of emoji that share a common meaning
  • This sequence has little internal structure; even when it is rearranged, it still conveys the same message. These images are connected solely by their broader meaning. We might consider them to be a visual list: “here are all things related to celebrations and birthdays.” Lists are certainly a conventionalised way of communicating, but they don’t have grammar the way that sentences do.
  • What if the order did matter though? What if they conveyed a temporal sequence of events? Consider this example, which means something like “a woman had a party where they drank, and then opened presents and then had cake”.
  • These rules may seem like the seeds of grammar, but psycholinguist Susan Goldin-Meadow and colleagues have found this order appears in many other systems that would not be considered a language. For example, this order appears when people arrange pictures to describe events from an animated cartoon, or when speaking adults communicate using only gestures. It also appears in the gesture systems created by deaf children who cannot hear spoken languages and are not exposed to sign languages.
  • In all cases, the doer of the action (the agent) precedes the action. In fact, this pattern is commonly found in both full languages and simple communication systems. For example, the majority of the world’s languages place the subject before the verb of a sentence.
  • Goldin-Meadow describes the children as lacking exposure to a language and thus inventing their own manual systems to communicate, called “homesigns”. These systems are limited in the size of their vocabularies and the types of sequences they can create. For this reason, the agent-act order seems not to be due to a grammar, but to arise from basic heuristics – practical workarounds – based on meaning alone. Emoji seem to tap into this same system.
  • Nevertheless, some may argue that despite emoji’s current simplicity, this may be the groundwork for emerging complexity – that although emoji do not constitute a language at the present time, they could develop into one over time.
  • Could an emerging “emoji visual language” be developing in a similar way, with actual grammatical structure? To answer that question, you need to consider the intrinsic constraints on the technology itself.Emoji are created by typing into a computer like text. But, unlike text, most emoji are provided as whole units, except for the limited set of emoticons which convert to emoji, like :) or ;). When writing text, we use the building blocks (letters) to create the units (words), not by searching through a list of every whole word in the language.
  • emoji force us to convey information in a linear unit-unit string, which limits how complex expressions can be made. These constraints may mean that they will never be able to achieve even the most basic complexity that we can create with normal and natural drawings.
  • What’s more, these limits also prevent users from creating novel signs – a requisite for all languages, especially emerging ones. Users have no control over the development of the vocabulary. As the “vocab list” for emoji grows, it will become increasingly unwieldy: using them will require a conscious search process through an external list, not an easy generation from our own mental vocabulary, like the way we naturally speak or draw. This is a key point – it means that emoji lack the flexibility needed to create a new language.
  • we already have very robust visual languages, as can be seen in comics and graphic novels. As I argue in my book, The Visual Language of Comics, the drawings found in comics use a systematic visual vocabulary (such as stink lines to represent smell, or stars to represent dizziness). Importantly, the available vocabulary is not constrained by technology and has developed naturally over time, like spoken and written languages.
  • The grammar of sequential images is more of a narrative structure – not one of nouns and verbs. Yet these sequences use principles of combination like any other grammar, including roles played by images, groupings of images, and hierarchic embedding.
  • We measured participants’ brainwaves while they viewed sequences one image at a time where a disruption appeared either within the groupings of panels or at the natural break between groupings. The particular brainwave responses that we observed were similar to those that experimenters find when violating the syntax of sentences. That is, the brain responds the same way to violations of “grammar”, whether in sentences or sequential narrative images.
  • I would hypothesise that emoji can use a basic narrative structure to organise short stories (likely made up of agent-action sequences), but I highly doubt that they would be able to create embedded clauses like these. I would also doubt that you would see the same kinds of brain responses that we saw with the comic strip sequences.
mshilling1

The Importance of Logic and Critical Thinking | WIRED - 0 views

  • The rationality of the world is what is at risk. Too many people are taken advantage of because of their lack of critical thinking, logic and deductive reasoning
  • These same people are raising children without these same skills, creating a whole new generation of clueless people.
  • However, valid logic does not always guarantee truth or a sound argument.
  • Valid logic means that the structure of an argument is correct in its syntax and semantics, independent of whether its premises or its conclusion are true.
  • The basic lesson here is that, while the logic above might seem valid because of the structure of the statement, it takes a further understanding to figure out why it's not necessarily true. (A worked example follows these annotations.)
  • The underlying lesson here is not to immediately assume everything you read or are told is true, something all children need to and should learn.
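
To make the validity-versus-truth distinction concrete, here is a small worked example (my sketch, not the article's) that checks the validity of an argument form by brute-forcing its truth table in Python:

    from itertools import product

    def is_valid(premises, conclusion, n_vars):
        """An argument form is valid iff every truth assignment that makes
        all the premises true also makes the conclusion true."""
        return all(
            conclusion(*vals)
            for vals in product([True, False], repeat=n_vars)
            if all(p(*vals) for p in premises)
        )

    implies = lambda p, q: (not p) or q

    # Modus ponens: from "P implies Q" and "P", infer "Q". A valid form.
    print(is_valid([implies, lambda p, q: p], lambda p, q: q, 2))  # True

    # Affirming the consequent: from "P implies Q" and "Q", infer "P". Invalid.
    print(is_valid([implies, lambda p, q: q], lambda p, q: p, 2))  # False

Validity is a property of form alone: "All fish fly; sharks are fish; therefore sharks fly" passes the structural test yet fails as an argument because a premise is false, which is the valid-but-unsound case the excerpt warns about.
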
johnsonel7

Computers Are Learning to Read-But They're Still Not So Smart | WIRED - 0 views

  • computers still weren’t very good at understanding the written word. Sure, they had become decent at simulating that understanding in certain narrow domains, like automatic translation or sentiment analysis (for example, determining if a sentence sounds “mean or nice,” he said). But Bowman wanted measurable evidence of the genuine article: bona fide, human-style reading comprehension in English. So he came up with a test
  • The machines bombed. Even state-of-the-art neural networks scored no higher than 69 out of 100 across all nine tasks: a D-plus, in letter-grade terms. Bowman and his coauthors weren't surprised. Neural networks are layers of computational connections built in a crude approximation of how neurons communicate within mammalian brains.
  • A BERT-based system produced a GLUE score of 80.5. On this brand-new benchmark designed to measure machines’ real understanding of natural language — or to expose their lack thereof — the machines had jumped from a D-plus to a B-minus in just six months.
  • The only problem is that perfect rulebooks don’t exist, because natural language is far too complex and haphazard to be reduced to a rigid set of specifications.
  • Researchers simply fed their neural networks massive amounts of written text copied from freely available sources like Wikipedia — billions of words, preformatted into grammatically correct sentences — and let the networks derive next-word predictions on their own. In essence, it was like asking the person inside a Chinese room to write all his own rules, using only the incoming Chinese messages for reference. “The great thing about this approach is it turns out that the model learns a ton of stuff about syntax.” (A toy version of this self-supervised setup is sketched after these annotations.)
  • The nonsequential nature of the transformer represented sentences in a more expressive form, which Uszkoreit calls treelike. Each layer of the neural network makes multiple, parallel connections between certain words while ignoring others — akin to a student diagramming a sentence in elementary school. These connections are often drawn between words that may not actually sit next to each other in the sentence. “Those structures effectively look like a number of trees that are overlaid,” Uszkoreit explained.
  • But instead of concluding that BERT could apparently imbue neural networks with near-Aristotelian reasoning skills, they suspected a simpler explanation: that BERT was picking up on superficial patterns in the way the warrants were phrased.
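
The self-supervised recipe described above, deriving next-word predictions from raw text, can be sketched in a few lines. The toy below is my illustration, not the GLUE or BERT code (real systems use transformer networks rather than word counts), but it shows how the training labels come for free from the text itself:

    from collections import Counter, defaultdict

    # Raw text is its own supervision: every adjacent pair of words is a
    # free (context -> next word) training example.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(word):
        """Most frequent continuation of `word` in the corpus."""
        return counts[word].most_common(1)[0][0]

    print(predict_next("sat"))  # -> on
    print(predict_next("on"))   # -> the

Swap the counter for a transformer and the toy corpus for billions of words, and this objective is essentially the pretraining step behind the jump from a D-plus to a B-minus described above.
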
anniina03

When Did Ancient Humans Start to Speak? - The Atlantic - 0 views

  • The larynx, also called the voice box, is where the trouble begins: Its location is, or was, supposed to be the key to language.
  • Scientists have agreed for a while that the organ is lower down the throat in humans than it is in any other primate, or was in our ancestors. And for decades, they thought that low-down larynx was a sort of secret ingredient to speech because it enabled its bearers to produce a variety of distinctive vowels, like the ones that make beet, bat, and boot sound like different words. That would mean that speech—and, therefore, language—couldn’t have evolved until the arrival of anatomically modern Homo sapiens about 200,000 years ago
  • Those speech abilities could include distinct vowels and consonants, syllables, or even syntax—all of which, according to LDT, should be impossible for any animal without a human vocal tract.
  • In fact, they propose that the necessary equipment—specifically, the throat shape and motor control that produce distinguishable vowels—has been around as long as 27 million years, when humans and Old World monkeys (baboons, mandrills, and the like) last shared a common ancestor.
  • As John Locke, a linguistics professor at Lehman College, put it, “Motor control rots when you die.” Soft tissues like tongues and nerves and brains generally don’t fossilize; DNA sequencing is impossible past a few hundred thousand years; no one has yet found a diary or rap track recorded by a teenage Australopithecus.
  • One of the quantitative models the new study relies on, he says, doesn’t properly represent the shape of the larynx, tongue, and other parts we use to talk: “It would convert a mailing tube into a human vocal tract.” And according to Lieberman, laryngeal descent theory “never claimed language was not possible” prior to the critical changes in our ancestors’ throat anatomy. “They’re trying to set up a straw man,” he said.
  • Rather than 27 million years, Hickok proposes that the earliest bound on any sort of speech ability would be nearer to human ancestors’ split with the Pan genus, which includes chimpanzees and bonobos, our closest living relatives. That split happened about 5 million to 7 million years ago—certainly longer than 200,000 years, but a far cry from 27 million. Lieberman argues that the precursors of speech might have emerged a little more than 3 million years ago, when artifacts like jewelry appear in the archaeological record. The idea is that both language and jewelry are intimately related to the evolution of symbolic thinking.
blythewallick

The Life Changing Linguistics of... Nigerian Scam Emails | JSTOR Daily - 0 views

  • How do scammers use language to trick their victims?
  • Researchers often have pondered the psychology of how and why certain people fall for scams, especially ones that appear to be so obvious to others. Scammers target certain human traits, including vulnerabilities that can be easily exploited, such as naiveté, overconfidence in one’s intelligence or abilities, an overly optimistic expectation of immediate success in life, or a desire to feel good about helping others. As communication on the internet has gotten more sophisticated, so have the scams. Many still fall for it, as con artists continue to target businesses as well as individuals.
  • Nigerian 419 cons (the number refers to the fraud section of the Nigerian Criminal Code) are practically the oldest scams on the internet. They can actually be traced back to the 1920s, in the form of a confidence trick most gothically known as the Spanish Prisoner. The victims are asked to pay out more and more money to release the wealthy relative of their respectable foreign correspondent (who naturally doesn't exist), cruelly imprisoned in Spain, in return for a large reward (that never comes).
  • It’s well-known that most of the emails are composed by nonnative speakers of English, as is clear from the many grammatical and punctuation mistakes, broken syntax, missing words, and malapropisms (“the will to personify the façade to its practical conclusion” for “pursue the charade”). With so many mistakes, how can this language really fool anyone? Often, victims know the message comes from a nonnative speaker, or they may be nonnative English speakers themselves and may not always recognize grammatical errors.
  • The scammers make extraordinary efforts to use technical, military, financial or otherwise professional language in a clumsy caricature of what formal, educated English sounds like (“government officials . . . awarded themselves contracts that were grossly over-invoiced in various ministries and parastatals”) to keep up the pretence of being a barrister, a brigadier-general, or a bank official.
  • It’s a simple scam, but appears to be effective when it reaches credulous types who are willing to ignore the warning signs in search of a treasure hunting adventure that can make a life extraordinary for a while, before it all ends in grief.
blythewallick

Having an elder brother is associated with slower language development -- ScienceDaily - 0 views

  • Intuitively, it is tempting to think that a child who has an elder brother or sister will grow up in a stimulating linguistic environment and will develop their language skills faster than the family's firstborn. However, several studies have shown the contrary: the acquisition of language in a child with an elder sibling is reported to be slower than a child who has none.
  • More than 1000 children have been followed from birth to five and a half years old in the mother-child cohort EDEN. Their language skills were evaluated at 2, 3 and 5.5 years old by tests measuring several aspects of language, such as vocabulary, syntax and verbal reasoning.
  • Children who have an elder brother had on average a two-month delay in language development compared with children with an elder sister.
  • The first is that elder sisters, in being more willing to talk to their younger siblings than brothers, may compensate for their parents being less available. Another hypothesis would be that elder sisters compete less than elder brothers for parental attention.
  • Though this study cannot separate these two hypotheses, it does show that early language development in a younger sibling tends to be slower when the elder is a boy.
Javier E

Psychological nativism - Wikipedia - 0 views

  • In the field of psychology, nativism is the view that certain skills or abilities are "native" or hard-wired into the brain at birth. This is in contrast to the "blank slate" or tabula rasa view, which states that the brain has inborn capabilities for learning from the environment but does not contain content such as innate beliefs.
  • Some nativists believe that specific beliefs or preferences are "hard-wired". For example, one might argue that some moral intuitions are innate or that color preferences are innate. A less established argument is that nature supplies the human mind with specialized learning devices. This latter view differs from empiricism only to the extent that the algorithms that translate experience into information may be more complex and specialized in nativist theories than in empiricist theories. However, empiricists largely remain open to the nature of learning algorithms and are by no means restricted to the historical associationist mechanisms of behaviorism.
  • Nativism has a history in philosophy, particularly as a reaction to the straightforward empiricist views of John Locke and David Hume. Hume had given persuasive logical arguments that people cannot infer causality from perceptual input. The most one could hope to infer is that two events happen in succession or simultaneously. One response to this argument involves positing that concepts not supplied by experience, such as causality, must exist prior to any experience and hence must be innate.
  • The philosopher Immanuel Kant (1724–1804) argued in his Critique of Pure Reason that the human mind knows objects in innate, a priori ways. Kant claimed that humans, from birth, must experience all objects as being successive (time) and juxtaposed (space). His list of inborn categories describes predicates that the mind can attribute to any object in general. Arthur Schopenhauer (1788–1860) agreed with Kant, but reduced the number of innate categories to one—causality—which presupposes the others.
  • Modern nativism is most associated with the work of Jerry Fodor (1935–2017), Noam Chomsky (b. 1928), and Steven Pinker (b. 1954), who argue that humans from birth have certain cognitive modules (specialised genetically inherited psychological abilities) that allow them to learn and acquire certain skills, such as language.
  • For example, children demonstrate a facility for acquiring spoken language but require intensive training to learn to read and write. This poverty of the stimulus observation became a principal component of Chomsky's argument for a "language organ"—a genetically inherited neurological module that confers a somewhat universal understanding of syntax that all neurologically healthy humans are born with, which is fine-tuned by an individual's experience with their native language
  • In The Blank Slate (2002), Pinker similarly cites the linguistic capabilities of children, relative to the amount of direct instruction they receive, as evidence that humans have an inborn facility for speech acquisition (but not for literacy acquisition).
  • A number of other theorists[1][2][3] have disagreed with these claims. Instead, they have outlined alternative theories of how modularization might emerge over the course of development, as a result of a system gradually refining and fine-tuning its responses to environmental stimuli.[4]
  • Many empiricists are now also trying to apply modern learning models and techniques to the question of language acquisition, with marked success.[20] Similarity-based generalization marks another avenue of recent research, which suggests that children may be able to rapidly learn how to use new words by generalizing about the usage of similar words that they already know (see also the distributional hypothesis; a toy sketch follows this list).[14][21][22][23]
  • The term universal grammar (or UG) is used for the purported innate biological properties of the human brain, whatever exactly they turn out to be, that are responsible for children's successful acquisition of a native language during the first few years of life. The person most strongly associated with the hypothesising of UG is Noam Chomsky, although the idea of Universal Grammar has clear historical antecedents at least as far back as the 1300s, in the form of the Speculative Grammar of Thomas of Erfurt.
  • This evidence is all the more impressive when one considers that most children do not receive reliable corrections for grammatical errors.[9] Indeed, even children who for medical reasons cannot produce speech, and therefore have no possibility of producing an error in the first place, have been found to master both the lexicon and the grammar of their community's language perfectly.[10] The fact that children succeed at language acquisition even when their linguistic input is severely impoverished, as it is when no corrective feedback is available, is related to the argument from the poverty of the stimulus, and is another claim for a central role of UG in child language acquisition.
  • Researchers at Blue Brain discovered a network of about fifty neurons which they believed were building blocks of more complex knowledge, containing basic innate knowledge that could be combined in different, more complex ways to give rise to acquired knowledge, like memory.[11]
  • In testing this theory, the researchers found that if the neurons' network formations had depended only on an individual's experience, the tests would bring about very different characteristics for each rat. However, the rats all displayed similar characteristics, which suggests that their neuronal circuits must have been established prior to their experiences. The Blue Brain Project research suggests that some of the "building blocks" of knowledge are genetic and present at birth.[11]
  • modern nativist theory makes little in the way of specific falsifiable and testable predictions, and has been compared by some empiricists to a pseudoscience or nefarious brand of "psychological creationism". As influential psychologist Henry L. Roediger III remarked, "Chomsky was and is a rationalist; he had no use for experimental analyses or data of any sort that pertained to language, and even experimental psycholinguistics was and is of little interest to him".[13]
  • Chomsky's poverty of the stimulus argument is controversial within linguistics.[14][15][16][17][18][19]
  • Neither the five-year-old nor the adults in the community can easily articulate the principles of the grammar they are following. Experimental evidence shows that infants come equipped with presuppositions that allow them to acquire the rules of their language.[6]
  • Paul Griffiths, in "What is Innateness?", argues that innateness is too confusing a concept to be fruitfully employed as it confuses "empirically dissociated" concepts. In a previous paper, Griffiths argued that innateness specifically confuses these three distinct biological concepts: developmental fixity, species nature, and intended outcome. Developmental fixity refers to how insensitive a trait is to environmental input, species nature reflects what it is to be an organism of a certain kind, and the intended outcome is how an organism is meant to develop.[24]
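To make the distributional-hypothesis annotation above concrete, here is a minimal Python sketch of similarity-based generalization: the learner compares the contexts in which a novel word appears against the contexts of words it already knows. The toy corpus, the window size, and the nonsense word "blick" are illustrative assumptions, not material from the cited article.

```python
# Toy sketch of similarity-based generalization under the distributional
# hypothesis: a novel word is related to known words by comparing the
# contexts in which they appear. Corpus and window size are illustrative.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the blick sat on the mat",   # "blick" is a novel word to the learner
]

window = 2
cooc = defaultdict(Counter)  # word -> Counter of nearby context words

for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[w][tokens[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A learner hearing "blick" in cat/dog-like contexts can infer that it
# patterns like the nouns it already knows.
for known in ("cat", "dog", "mat"):
    print(known, round(cosine(cooc["blick"], cooc[known]), 3))
```

On this toy corpus, "blick" comes out identical in context to "cat" and "dog" (similarity 1.0) and less similar to "mat", which is exactly the kind of generalization that lets a learner treat a new word like familiar ones.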
Javier E

Opinion | Chatbots Are a Danger to Democracy - The New York Times - 0 views

  • longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process
  • Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance (a toy illustration of this idea follows this list).
  • In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.
  • ...21 more annotations...
  • In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.
  • around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots.
  • a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side.
  • It’s irrelevant that current bots are not “smart” like we are, or that they have not achieved the consciousness and creativity hoped for by A.I. purists. What matters is their impact
  • In the past, despite our differences, we could at least take for granted that all participants in the political process were human beings. This is no longer true.
  • Increasingly we share the online debate chamber with nonhuman entities that are rapidly growing more advanced
  • a bot developed by the British firm Babylon reportedly achieved a score of 81 percent in the clinical examination for admission to the Royal College of General Practitioners. The average score for human doctors? 72 percent.
  • If chatbots are approaching the stage where they can answer diagnostic questions as well or better than human doctors, then it’s possible they might eventually reach or surpass our levels of political sophistication
  • chatbots could seriously endanger our democracy, and not just when they go haywire.
  • They’ll likely have faces and voices, names and personalities — all engineered for maximum persuasion. So-called “deep fake” videos can already convincingly synthesize the speech and appearance of real politicians.
  • The most obvious risk is that we are crowded out of our own deliberative processes by systems that are too fast and too ubiquitous for us to keep up with.
  • A related risk is that wealthy people will be able to afford the best chatbots.
  • in a world where, increasingly, the only feasible way of engaging in debate with chatbots is through the deployment of other chatbots also possessed of the same speed and facility, the worry is that in the long run we’ll become effectively excluded from our own party.
  • the wholesale automation of deliberation would be an unfortunate development in democratic history.
  • A blunt approach — call it disqualification — would be an all-out prohibition of bots on forums where important political speech takes place, and punishment for the humans responsible
  • The Bot Disclosure and Accountability Bill would amend the Federal Election Campaign Act of 1971 to prohibit candidates and political parties from using any bots intended to impersonate or replicate human activity for public communication. It would also stop PACs, corporations and labor organizations from using bots to disseminate messages advocating candidates, which would be considered “electioneering communications.”
  • A subtler method would involve mandatory identification: requiring all chatbots to be publicly registered and to state at all times the fact that they are chatbots, and the identity of their human owners and controllers.
  • We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make only up to a specific number of online contributions per day, or a specific number of responses to a particular human? (A sketch of such a rule follows this list.)
  • We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate
  • the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake.
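One annotation above notes that modern bots are not hand-coded with grammar but “learn” to respond using probabilistic inference over large data sets. A bigram Markov chain is about the simplest runnable illustration of that principle; the training lines below are invented for illustration, and real chatbots use vastly larger corpora and neural models.

```python
# Minimal sketch of "learning" text from data rather than from hand-coded
# grammar: a bigram Markov chain samples each next word in proportion to
# how often it followed the current word in the training data.
import random
from collections import defaultdict

training = [
    "we all have trust in the leader",
    "we have to stand by our leader",
    "we all have to stand together",
]

follows = defaultdict(list)  # word -> list of observed next words
for line in training:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start="we", max_len=8, seed=0):
    """Sample a short message by repeatedly choosing a likely next word."""
    random.seed(seed)
    out = [start]
    while len(out) < max_len and follows[out[-1]]:
        out.append(random.choice(follows[out[-1]]))  # probabilistic inference
    return " ".join(out)

print(generate())
```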
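As for the rate-limit rule proposed near the end of the list, a platform could enforce it with a small amount of per-account state. The class, caps, and method names below are assumptions sketched for illustration, not any real platform's API.

```python
# Illustrative sketch of the platform-side rule proposed above: a bot
# account may make at most N contributions per day, and at most M
# replies to any particular human. Limits and names are assumptions.
from collections import Counter
from datetime import date

class BotRateLimiter:
    def __init__(self, daily_cap=50, per_human_cap=3):
        self.daily_cap = daily_cap
        self.per_human_cap = per_human_cap
        self.day = date.today()
        self.posts_today = 0
        self.replies_to = Counter()  # human user id -> replies today

    def _roll_day(self):
        """Reset all counters when the calendar day changes."""
        if date.today() != self.day:
            self.day = date.today()
            self.posts_today = 0
            self.replies_to.clear()

    def allow(self, reply_to_human=None):
        """Return True if the bot may post; record the post if allowed."""
        self._roll_day()
        if self.posts_today >= self.daily_cap:
            return False
        if (reply_to_human is not None
                and self.replies_to[reply_to_human] >= self.per_human_cap):
            return False
        self.posts_today += 1
        if reply_to_human is not None:
            self.replies_to[reply_to_human] += 1
        return True

limiter = BotRateLimiter(daily_cap=2, per_human_cap=1)
print(limiter.allow())                     # True  (post 1 of 2)
print(limiter.allow(reply_to_human="u1"))  # True  (post 2, first reply to u1)
print(limiter.allow())                     # False (daily cap reached)
```

Tuning the caps is a policy question rather than a technical one; the point of the sketch is only that such a rule is cheap to implement once bot accounts are identifiable.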