Words R Us: Group items tagged "machines"

Lara Cowell

Teaching a Machine to Have a Conversation Isn't Easy | Adobe Blog - 0 views

  •  
    Voice-based interactions require sophisticated programming and AI to enable a machine to understand and talk to you. In other words, it's a really big deal to program a computer to have a conversation. Walter says the challenge now is to process massive amounts of dialogue-related data to facilitate human-like intellectual and speech functionality in machines. "We are developing the technology that will give machines the cognitive ability to understand the semantics and context of a conversation, respond to topic changes and, in essence, carry on everything from a complex conversation to small talk."
Lara Cowell

AI's Language Problem - 0 views

  •  
    This MIT Technology Review article notes that while artificial intelligence has seen many sophisticated advances, one fundamental capability remains elusive: language. Systems like Siri and IBM's Watson can follow simple spoken or typed commands and answer basic questions, but they can't hold a conversation and have no real understanding of the words they use. In addition, humans, unlike machines, can learn very quickly from a relatively small amount of data and have a built-in ability to model the world in 3-D very efficiently. Programming machines to comprehend and generate language is a complex task, because the machines would need to mimic human learning, mental model building, and psychology. As MIT cognitive scientist Josh Tenenbaum states, "Language builds on other abilities that are probably more basic, that are present in young infants before they have language: perceiving the world visually, acting on our motor systems, understanding the physics of the world or other agents' goals." If he is right, it will be difficult to re-create language understanding in machines and AI systems without mimicking those same human capacities.
Lara Cowell

What's Going On In Your Child's Brain When You Read Them A Story? : NPR Ed : NPR - 0 views

  •  
    For the study, conducted by Dr. John Hutton, a researcher and pediatrician at Cincinnati Children's Hospital with an interest in emergent literacy, 27 children around age 4 went into an fMRI machine. They were presented with the same story in three conditions: audio only; the illustrated pages of a storybook with an audio voiceover; and an animated cartoon. While the children paid attention to the stories, the machine scanned for activation within certain brain networks and for connectivity between them. Here's what researchers found: In the audio-only condition (too cold), language networks were activated, but there was less connectivity overall. "There was more evidence the children were straining to understand." In the animation condition (too hot), there was a lot of activity in the audio and visual perception networks, but not a lot of connectivity among the various brain networks. "The language network was working to keep up with the story," says Hutton. "Our interpretation was that the animation was doing all the work for the child. They were expending the most energy just figuring out what it means." The children's comprehension of the story was the worst in this condition. The illustration condition was what Hutton called "just right." When children could see illustrations, language-network activity dropped a bit compared to the audio condition. Instead of only paying attention to the words, Hutton says, the children's understanding of the story was "scaffolded" by having the images as clues. Most importantly, in the illustrated book condition, researchers saw increased connectivity between - and among - all the networks they were looking at: visual perception, imagery, default mode and language. One interesting note: because of the constraints of an MRI machine, which encloses and immobilizes your body, the story-with-illustrations condition wasn't actually as good as reading on Mom or Dad's lap.
Lisa Stewart

Language 'time machine' a Rosetta stone for lost tongues | Crave - CNET - 3 views

  •  
    The most complete description I've seen (along with a graphic) of how ancient-language reconstruction algorithms work.
Lara Cowell

DeepDrumpf 2016 - 0 views

  •  
    Bradley Hayes, a postdoctoral researcher at the Massachusetts Institute of Technology, invented @DeepDrumpf, an amusing bit of artificial intelligence. DeepDrumpf is a bot trained on the publicly available speech transcripts, tweets, and debate remarks of Donald Trump. Using a machine learning model known as a Recurrent Neural Network, the bot generates sequences of words based on priming text and the statistical structure found within its training data. Created to highlight the absurdity of this election cycle, it has amassed over 20,000 followers and has been viewed over 12 million times -- showcasing the consequences of training a machine learning model on a dataset that embodies fearmongering, bigotry, xenophobia, and hypernationalism. Here's a sample tweet: "We have to end education. What they do is unbelievable, how bad. Nobody can do that like me. Believe me."
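The article says the bot uses a Recurrent Neural Network; as a hedged illustration of the underlying idea only (generating word sequences from priming text plus the statistical structure of a training corpus), here is a deliberately simpler word-level Markov chain in Python. It is not Hayes's model, and the tiny corpus and priming words below are invented.

```python
# A much simpler stand-in for the RNN described above: a word-level Markov
# chain that generates sequences from the statistical structure of its
# training text. The corpus and priming words are invented for illustration.
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words seen to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, priming, order=2, length=20):
    """Extend the priming words by repeatedly sampling an observed continuation."""
    out = list(priming)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:           # no observed continuation: stop early
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Toy stand-in for the transcripts and tweets the real bot was trained on.
corpus = ("we have the best words believe me nobody has better words than "
          "we have we have to end this believe me nobody can do that like me")

chain = build_chain(corpus)
print(generate(chain, priming=("believe", "me")))
```

An RNN differs from this lookup table in that it learns a continuous representation of context, so it can produce continuations for word sequences it never saw verbatim, which is part of why the real bot's output is both more fluent and stranger.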
Lara Cowell

For Those Unable To Talk, A Machine That Speaks Their Voice - 0 views

  •  
    It's hard to imagine a more devastating diagnosis than ALS, also called Lou Gehrig's disease. For most people, it means their nervous system is going to deteriorate until their body is completely immobile. That also means they'll lose their ability to speak. Voice banking, a digital technology, allows ALS patients to record their voice and key messages in preparation for that time.
allstonpleus19

Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future - 0 views

  •  
    Facebook shut down an artificial intelligence experiment shortly after developers discovered that the machines were talking to each other in a made-up language that humans couldn't understand. The developers had originally programmed the chatbots to talk to each other in English while trading items back and forth. When the chatbots got into a long negotiation, they started talking to each other in their own language. Even though these chatbots were not highly intelligent, it is concerning that they went off on their own so quickly. In 2014, physicist Stephen Hawking warned that artificial intelligence could be the end of the human race. Hopefully the Matrix is science fiction, not science prediction.
Lara Cowell

Finding A Pedicure In China, Using Cutting-Edge Translation Apps - 0 views

  •  
    A traveling journalist in Beijing uses both Baidu (China's version of Google) and Google voice-translation apps, with mixed results. You speak into the apps; they listen and then translate into the language you choose. They do it in writing, by displaying text on the screen as you talk, and out loud, by using your phone's speaker to narrate what you've said once you're done talking. Typically exchanges are brief: 3-4 turns on average for Google, 7-8 for Baidu's translate app. Both Google and Baidu use machine learning to power their translation technology. While a human linguist could dictate all the rules for going from one language to another, that would be tedious and yield poor results, because many languages aren't structured in parallel form. So instead, both companies have moved to pattern recognition through "neural machine translation." They take a mountain of data - really good translations - and load it into their computers. Algorithms then mine the data looking for patterns. The end product is translation that's not just phrase-by-phrase, but entire thoughts and sentences at a time. Not surprisingly, sometimes the translations are successes, and other times, epic fails. Why? As Macduff Hughes, a Google executive, notes, "there's a lot more to translation than mapping one word to another. The cultural understanding is something that's hard to fully capture just in translation."
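The article names no code or libraries, so the following is only a sketch of calling a pretrained neural machine translation model. It assumes the Hugging Face transformers library and the publicly available Helsinki-NLP/opus-mt-en-zh English-to-Chinese model, neither of which is mentioned in the piece; the pretrained weights stand in for the "mountain of data" already mined for patterns.

```python
# Sketch only: assumes the `transformers` library and the public
# Helsinki-NLP/opus-mt-en-zh model; neither is named in the article.
from transformers import pipeline

# The model's weights encode patterns learned from large parallel
# English-Chinese corpora -- the "pattern recognition" described above.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")

# Whole sentences go in and whole sentences come out, rather than
# word-by-word or phrase-by-phrase mappings.
result = translator("Where can I find a good pedicure near here?")
print(result[0]["translation_text"])
```

Whether the output is a success or an epic fail still depends on how well the training data covered the kind of sentence you feed it, which is consistent with Hughes's point about cultural understanding.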
ellisalang17

How Machines Learned to Speak the Human Language - 0 views

  •  
    This article explains how machines such as "Siri" and "Echo" are able to speak the human language. "Language technologies teach themselves, via a form of pattern-matching. For speech recognition, computers are fed sound files on the one hand, and human-written transcriptions on the other. The system learns to predict which sounds should result in what transcriptions."
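The quoted recipe (sound files paired with human-written transcriptions, with the system learning to predict which sounds yield which transcriptions) is ordinary supervised learning. As a hedged, toy illustration of that idea only, the sketch below pairs made-up numeric "sound features" with labels and fits a scikit-learn classifier; real recognizers extract acoustic features from audio and use far larger models, and every name and number here is invented.

```python
# Toy illustration of "sounds in, transcriptions out" supervised learning.
# The feature vectors and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row stands in for a feature summary of one recorded utterance...
X = np.array([
    [0.9, 0.1, 0.2],
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.7],
    [0.2, 0.8, 0.9],
])
# ...and each label is the human-written transcription paired with it.
y = ["yes", "yes", "no", "no"]

# The model learns which sound patterns should result in which transcriptions.
model = LogisticRegression().fit(X, y)

# A new, unseen utterance is transcribed by matching it to learned patterns.
print(model.predict([[0.85, 0.15, 0.2]]))   # expected: ['yes']
```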
ansonlee2017

Linguistics Breakthrough Heralds Machine Translation for Thousands of Rare Languages - 0 views

  •  
    Online translation services work for fewer than 100 of the world's 7,000 languages. A new machine translation technique, however, could change that by providing translations for thousands of other languages.
Lara Cowell

When an Adult Adds a Language, It's One Brain, Two Systems - The New York Times - 1 views

  •  
    Dr. Joy Hirsch, head of Memorial Sloan-Kettering Hospital's functional M.R.I. Laboratory, and her graduate student, Karl Kim, found that second languages are stored differently in the human brain, depending on when they are learned. Babies who learn two languages simultaneously, and apparently effortlessly, have a single brain region for generating complex speech, researchers say. But people who learn a second language in adolescence or adulthood possess two such brain regions, one for each language. To explore where languages lie in the brain, Dr. Hirsch recruited 12 healthy bilingual people from New York City. Ten different languages were represented in the group. Half had learned two languages in infancy. The other half began learning a second language around age 11 and had acquired fluency by 19, after living in the country where the language was spoken. With their heads inside the M.R.I. machine, subjects thought silently about what they had done the day before, using complex sentences, first in one language, then in the other. The machine detected increases in blood flow, indicating where in the brain this thinking took place. Activity was noted in Wernicke's area, a region devoted to understanding the meaning of words and the subject matter of spoken language, or semantics, and in Broca's area, a region dedicated to the execution of speech as well as some deep grammatical aspects of language. None of the 12 bilinguals had two separate Wernicke's areas, Dr. Hirsch said. But there were dramatic differences in Broca's area. In people who had learned both languages in infancy, there was only one uniform Broca's region for both languages, a dot of tissue containing about 30,000 neurons. Among those who had learned a second language in adolescence, however, Broca's area seemed to be divided into two distinct regions, each activated by only one of the two languages. These two regions lay close to each other but were always separate, Dr. Hirsch said.
Lara Cowell

9 Tips to Design Conversational Style for Your Bot - 0 views

  •  
    Interesting article re: designing an efficient and responsive bot: leveraging key human linguistic features so the machine can "converse" with humans, comprehend their needs, and respond appropriately. Programming a machine to exhibit the conversational nuances and sophisticated comprehension of a normal human is hard.
Lara Cowell

Imagine A Flying Pig: How Words Take Shape In The Brain : NPR - 3 views

  •  
    Just a few decades ago, many linguists thought the human brain had evolved a special module for language. It seemed plausible that our brains have some unique structure or system; after all, no animal can use language the way people can. However, in the 1990s, scientists began testing the language-module theory using "functional" MRI technology that let them watch the brain respond to words. And what they saw didn't look like a module, says Benjamin Bergen, a researcher at the University of California, San Diego, and author of the book _Louder Than Words_. "They found something totally surprising," Bergen says. "It's not just certain specific little regions in the brain, regions dedicated to language, that were lighting up. It was kind of a whole-brain type of process." The brain appears to be taking words, which are just arbitrary symbols, and translating them into things we can see or hear or do; language processing, rather than being a singular module, is "a highly distributed system" encompassing many areas of the brain. Our sensory experiences can also be applied to imagining novel concepts like "flying pigs." Our sensory capacities, ancestral features shared with our primate relatives, have been co-opted for more recent purposes, namely words and language. "What evolution has done is to build a new machine, a capacity for language, something that nothing else in the known universe can do," Bergen comments. "And it's done so using the spare parts that it had lying around in the old primate brain."
deborahwen17

Do dolphins have a spoken language? - CNN.com - 0 views

  •  
    New research suggests that dolphins may have a spoken language of their own. In a recent study by Russian researchers, two dolphins communicated using a series of whistles and clicks (called pulses) and never interrupted each other. The researchers also noted that the pulses sounded like sentences. With new recording technologies, they were able to separate potential words from filler clicks, and they hope to one day build a machine that will allow humans and dolphins to communicate.
joellehiga17

Suicide prevention app could save teenagers' lives - 0 views

  •  
    Researchers are developing an app that could help to prevent suicides by flagging those most at risk of taking their own lives. A machine learning algorithm analyses verbal and non-verbal cues in recorded conversations, such as pauses and sighs, and could correctly identify whether someone is suicidal with 93% accuracy. The researchers have incorporated the algorithm into an app being trialed in schools.
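The summary describes the cues the algorithm analyses (pauses, sighs, and other verbal and non-verbal features) but not the implementation, so the following is a purely hypothetical sketch of flagging risk from hand-counted conversation cues with a scikit-learn random forest. The cue names, data, threshold, and output are all invented and say nothing about the real clinical model or its reported 93% accuracy.

```python
# Hypothetical sketch: the cues, data, and threshold below are invented and
# do not reflect the researchers' actual model or its reported accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row summarises one recorded conversation:
# [pauses per minute, sighs per minute, words per minute]
X = np.array([
    [12.0, 4.0,  80.0],
    [10.0, 5.0,  70.0],
    [ 2.0, 0.0, 150.0],
    [ 3.0, 1.0, 140.0],
])
# Labels supplied by clinicians: 1 = at risk, 0 = not at risk.
y = np.array([1, 1, 0, 0])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# For a new conversation, flag the speaker when the estimated risk is high.
new_conversation = np.array([[11.0, 3.0, 85.0]])
risk = model.predict_proba(new_conversation)[0, 1]
if risk > 0.5:                      # invented threshold
    print(f"Flag for follow-up (estimated risk {risk:.2f})")
```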
Michael Pang

"Is Technology 'Dumbing Down Our Kids?'" - 0 views

shared by Michael Pang on 31 May 12
  •  
    Experts debate whether today's kids are helped or hindered by their immersion in technology. Panelists include Mike Dover, lead researcher for the book Growing Up Digital; Alison Armstrong, author of The Child and the Machine; and Julie Mueller, a Faculty of Education professor at Wilfrid Laurier University.
Ryan Catalani

Lie-Detection Software Is a Research Quest - NYTimes.com - 7 views

  •  
    "A small band of linguists, engineers and computer scientists, among others, are busy training computers to recognize hallmarks of what they call emotional speech - talk that reflects deception, anger, friendliness and even flirtation. ... Algorithms developed by Dr. Hirschberg and colleagues have been able to spot a liar 70 percent of the time in test situations, while people confronted with the same evidence had only 57 percent accuracy ... His lab has also found ways to use vocal cues to spot inebriation, though it hasn't yet had luck in making its computers detect humor - a hard task for the machines, he said."
Ryan Catalani

Brain doesn't need vision at all in order to 'read' material | Machines Like Us - 3 views

  •  
    "The portion of the brain responsible for visual reading doesn't require vision at all, according to a new study... Brain imaging studies of blind people as they read words in Braille show activity in precisely the same part of the brain that lights up when sighted readers read."
Lisa Stewart

Welcome to The Internet Archive Wayback Machine - 0 views

  •  
    See how websites looked in the past
haleycrabtree17

Linguistic Society of America - 0 views

  •  
    English, like every other human language, is always changing, evolving, and adapting to the needs of its users. This isn't a bad thing; if English hadn't changed since, say, 1950, we wouldn't have words to refer to modems, fax machines, or cable TV.