Words R Us: Group items tagged machine-learning

Lara Cowell

AI's Language Problem - 0 views

  •  
    This MIT Technology Review article notes that while artificial intelligence has made many sophisticated advances, one fundamental capability remains elusive: language. Systems like Siri and IBM's Watson can follow simple spoken or typed commands and answer basic questions, but they can't hold a conversation and have no real understanding of the words they use. Humans, unlike machines, can also learn very quickly from a relatively small amount of data and have a built-in ability to model the world in 3-D very efficiently. Programming machines to comprehend and generate language is a complex task, because the machines would need to mimic human learning, mental model building, and psychology. As MIT cognitive scientist Josh Tenenbaum states, "Language builds on other abilities that are probably more basic, that are present in young infants before they have language: perceiving the world visually, acting on our motor systems, understanding the physics of the world or other agents' goals." If he is right, it will be difficult to re-create language understanding in machines and AI systems without also reproducing those more basic abilities.
Lara Cowell

When an Adult Adds a Language, It's One Brain, Two Systems - The New York Times - 1 views

  •  
    Dr. Joy Hirsch, head of Memorial Sloan-Kettering Hospital's functional M.R.I. Laboratory, and her graduate student, Karl Kim, found that second languages are stored differently in the human brain, depending on when they are learned. Babies who learn two languages simultaneously, and apparently effortlessly, have a single brain region for generating complex speech, researchers say. But people who learn a second language in adolescence or adulthood possess two such brain regions, one for each language. To explore where languages lie in the brain, Dr. Hirsch recruited 12 healthy bilingual people from New York City. Ten different languages were represented in the group. Half had learned two languages in infancy. The other half began learning a second language around age 11 and had acquired fluency by 19 after living in the country where the language was spoken. With their heads inside the M.R.I. machine, subjects thought silently about what they had done the day before using complex sentences, first in one language, then in the other. The machine detected increases in blood flow, indicating where in the brain this thinking took place. Activity was noted in Wernicke's area, a region devoted to understanding the meaning of words and the subject matter of spoken language, or semantics, as well as in Broca's area, a region dedicated to the execution of speech, along with some deep grammatical aspects of language. None of the 12 bilinguals had two separate Wernicke's areas, Dr. Hirsch said. But there were dramatic differences in Broca's areas. In people who had learned both languages in infancy, there was only one uniform Broca's region for both languages, a dot of tissue containing about 30,000 neurons. Among those who had learned a second language in adolescence, however, Broca's area seemed to be divided into two distinct areas, each activated by only one of the two languages. These two areas lay close to each other but were always separate, Dr. Hirsch said.
Lara Cowell

DeepDrumpf 2016 - 0 views

  •  
    Bradley Hayes, a postdoctoral researcher at the Massachusetts Institute of Technology, created @DeepDrumpf, an amusing bit of artificial intelligence. DeepDrumpf is a bot trained on the publicly available speech transcripts, tweets, and debate remarks of Donald Trump. Using a machine learning model known as a Recurrent Neural Network, the bot generates sequences of words based on priming text and the statistical structure found within its training data. Created to highlight the absurdity of the election cycle, it has amassed over 20,000 followers and been viewed over 12 million times -- showcasing the consequences of training a machine learning model on a dataset that embodies fearmongering, bigotry, xenophobia, and hypernationalism. Here's a sample tweet: "We have to end education. What they do is unbelievable, how bad. Nobody can do that like me. Believe me."
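    For readers curious what "generating sequences of words based on priming text" looks like in practice, here is a minimal sketch of a character-level recurrent language model and its sampling loop, written in PyTorch. The architecture, sizes, and priming text are illustrative assumptions; this is not Hayes's actual DeepDrumpf code.

```python
# Minimal sketch of recurrent text generation: consume a priming string,
# then sample one character at a time. Illustrative only; untrained here.
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

def sample(model, stoi, itos, prime="America ", length=140):
    """Build hidden state from the prime, then feed each sample back in."""
    model.eval()
    out = prime
    idx = torch.tensor([[stoi[c] for c in prime]])
    with torch.no_grad():
        logits, h = model(idx)                    # consume the priming text
        for _ in range(length):
            probs = torch.softmax(logits[0, -1], dim=-1)
            nxt = torch.multinomial(probs, 1)     # sample the next character
            out += itos[nxt.item()]
            logits, h = model(nxt.view(1, 1), h)  # feed it back in
    return out

text = "sample training text America "   # in practice: transcripts and tweets
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
model = CharRNN(len(chars))               # untrained, so output is gibberish
print(sample(model, stoi, itos, prime="America "))
```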
ellisalang17

How Machines Learned to Speak the Human Language - 0 views

  •  
    This article explains how machines such as "Siri" and "Echo" are able to speak the human language. "Language technologies teach themselves, via a form of pattern-matching. For speech recognition, computers are fed sound files on the one hand, and human-written transcriptions on the other. The system learns to predict which sounds should result in what transcriptions."
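    As a concrete (and heavily simplified) illustration of that pattern-matching setup, the sketch below pairs audio feature frames with a human-written transcription and trains a toy model with CTC loss, one standard way to learn which sounds map to which characters. Everything here - the feature size, the model, the character set - is an assumption for illustration, not the pipeline Siri or Echo actually uses.

```python
# Sketch: supervised speech recognition as the article describes it - sound
# on one side, human transcriptions on the other. Toy model, not a real one.
import torch
import torch.nn as nn

vocab = "abcdefghijklmnopqrstuvwxyz '"        # index 0 reserved for CTC blank
stoi = {c: i + 1 for i, c in enumerate(vocab)}

model = nn.Sequential(                        # toy acoustic model
    nn.Linear(80, 256), nn.ReLU(),            # 80 log-mel features per frame
    nn.Linear(256, len(vocab) + 1),
)
ctc = nn.CTCLoss(blank=0)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features, transcript):
    """features: (frames, 80) tensor; transcript: string over `vocab`."""
    log_probs = model(features).log_softmax(-1).unsqueeze(1)  # (T, batch=1, C)
    target = torch.tensor([stoi[c] for c in transcript])
    loss = ctc(log_probs, target.unsqueeze(0),
               torch.tensor([features.shape[0]]), torch.tensor([len(transcript)]))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy call with random "audio" features and a fake transcript
print(train_step(torch.randn(200, 80), "hello world"))
```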
Lara Cowell

Finding A Pedicure In China, Using Cutting-Edge Translation Apps - 0 views

  •  
    A traveling journalist in Beijing utilizes both Baidu (China's version of Google) and Google voice-translation apps with mixed results. You speak into the apps, they listen and then translate into the language you choose. They do it in writing, by displaying text on the screen as you talk; and out loud, by using your phone's speaker to narrate what you've said once you're done talking. Typically exchanges are brief: 3-4 turns on average for Google, 7-8 for Baidu's translate app. Both Google and Baidu use machine learning to power their translation technology. While a human linguist could dictate all the rules for going from one language to another, that would be tedious, and yield poor results because a lot of languages aren't structured in parallel form. So instead, both companies have moved to pattern recognition through "neural machine translation." They take a mountain of data - really good translations - and load it into their computers. Algorithms then mine through the data to look for patterns. The end product is translation that's not just phrase-by-phrase, but entire thoughts and sentences at a time. Not surprisingly, sometimes translations are successes, and other times, epic fails. Why? As Macduff Hughes, a Google executive, notes, "there's a lot more to translation than mapping one word to another. The cultural understanding is something that's hard to fully capture just in translation."
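    A tiny example of what "neural machine translation" looks like from the outside, using a freely available pretrained model from the Hugging Face transformers library (an assumption for illustration; this is neither Google's nor Baidu's production system). Note that the model translates whole sentences at a time, exactly the shift the article describes.

```python
# Sketch: whole-sentence neural machine translation with a public pretrained
# Chinese-to-English Marian model. Illustrative only.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-zh-en"
tok = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tok(["我想找一家修脚店。"], return_tensors="pt", padding=True)
out = model.generate(**batch)
print(tok.batch_decode(out, skip_special_tokens=True))
# expect something like: ["I want to find a pedicure shop."]
```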
joellehiga17

Suicide prevention app could save teenagers' lives - 0 views

  •  
    Researchers are developing an app that could help prevent suicides by flagging those most at risk. By recording conversations and analysing verbal and non-verbal cues such as pauses and sighs, a machine learning algorithm could correctly identify whether someone is suicidal with 93% accuracy. The researchers incorporated the algorithm into an app trialed in schools.
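    A heavily hedged sketch of the general shape of such a system - hand-crafted verbal and non-verbal features feeding a binary classifier - follows. The features, data, and model are invented for illustration; this is not the researchers' algorithm, and the 93% figure comes from the article, not from this toy.

```python
# Sketch: acoustic/verbal features -> binary at-risk classifier.
# All data here is synthetic; real systems use clinically labeled speech.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# invented feature columns: mean pause length (s), sighs/min, words/min
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)          # 1 = flagged at-risk (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```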
Lara Cowell

Alexa vs. Siri vs. Google: Which Can Carry on a Conversation Best? - 1 views

  •  
    Just in case you were under the misimpression that artificial intelligence will be taking over the world shortly, this article suggests that digital assistants really can't even handle the sort of everyday linguistic interaction that humans take for granted. Still, it is interesting to find out how product engineers are designing the assistants to become "smarter" at comprehending your words and requests. Machine learning algorithms can help devices deal with turn-by-turn exchanges. But each verbal exchange is limited to a simple, three- or four-turn conversation.
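    The "three- or four-turn" ceiling the article mentions is, in part, a context problem: the assistant carries only a short window of prior turns, so anything said earlier is forgotten. A toy sketch of such a window (the size is illustrative, not any vendor's actual limit):

```python
# Sketch: a fixed-size conversation window; older turns silently fall away.
from collections import deque

class DialogueContext:
    def __init__(self, max_turns=4):
        self.turns = deque(maxlen=max_turns)   # older turns are dropped

    def add(self, user, assistant):
        self.turns.append((user, assistant))

    def history(self):
        return list(self.turns)

ctx = DialogueContext()
for i in range(6):
    ctx.add(f"user turn {i}", f"reply {i}")
print(ctx.history())   # only turns 2-5 survive; earlier references are lost
```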
kianakomeiji22

How exactly does Google Translate produce results? - 0 views

  •  
    This article discusses how Google Translate functions. Google Translate is a relatively accurate and easy-to-use translator. At first, the system required millions of human-generated translations of texts to identify patterns, in order to provide reasonably accurate translations. During this early period, the translator also used English as an intermediary language: texts were translated to English and then from English to the target language. The translator was decent at translating short excerpts, but as texts got longer, the quality of the translations declined. In 2016, Google announced it was shifting to a neural network machine learning process, which attempts to look at the full context of a text to eliminate discrepancies in translations. This way, instead of going through an intermediary language, the system can translate directly from one language to another.
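    The "intermediary language" design is easy to demonstrate with two public pretrained models (an illustration using Hugging Face Marian models, not Google's historical system): translate into English, then out of it. Errors compound at each hop, one reason quality fell off on longer texts.

```python
# Sketch: pivot translation (French -> English -> German) with two public
# Marian models, mimicking the old intermediary-language design in spirit.
from transformers import MarianMTModel, MarianTokenizer

def load(name):
    return MarianTokenizer.from_pretrained(name), MarianMTModel.from_pretrained(name)

def step(text, tok, model):
    batch = tok([text], return_tensors="pt")
    out = model.generate(**batch)
    return tok.batch_decode(out, skip_special_tokens=True)[0]

fr_en = load("Helsinki-NLP/opus-mt-fr-en")   # French -> English
en_de = load("Helsinki-NLP/opus-mt-en-de")   # English -> German

english = step("Le chat dort sur le canapé.", *fr_en)
german = step(english, *en_de)               # meaning can drift at each hop
print(english, "->", german)
```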
Lara Cowell

Computing for deaf people - The race to teach sign language to computers | Science &... - 3 views

  •  
    The World Health Organisation counts 430m people as deaf or hard of hearing. Many use sign languages to communicate. If they cannot also use those languages to talk to computers, they risk being excluded from the digitisation that is taking over everyday life. Sign language poses particular challenges for translation to either text or speech, among them improving the machine-learning algorithms that recognise signs and their meaning, and developing the best methods to interpret sign languages' distinctive grammars. Such technology could improve the lives of the deaf, for example by allowing them to use their cell phones to search for directions or look up the meanings of unknown signs, without resorting to the written form of a spoken language.
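    One common approach to the recognition half of the problem is sketched below: extract hand keypoints from each video frame (here with MediaPipe, an assumption; the article doesn't name tools), then hand a sequence of those keypoint vectors to a classifier. The grammar problem the article raises - interpreting how signs combine - would sit on top of this, in the sequence model.

```python
# Sketch: hand-keypoint extraction per video frame, the usual front end of a
# sign-recognition pipeline. Classifier and sign labels are not shown.
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)

def frame_keypoints(frame_bgr):
    """Return a flat array of (x, y, z) hand landmarks for one frame, or None."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    pts = [(lm.x, lm.y, lm.z)
           for hand in result.multi_hand_landmarks
           for lm in hand.landmark]
    return np.array(pts).flatten()

# A sequence of these vectors per video would then feed a sequence model
# that maps keypoint trajectories to signs and, eventually, to grammar.
```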
Lara Cowell

Is language the ultimate frontier of AI research? | Stanford School of Engineering - 0 views

  •  
    Learning the intricacies of human languages is hard even for human children and non-native speakers - but it's particularly difficult for AI. Scientists have already taught computers how to do simple tasks, like translating one language to another or searching for keywords. Artificial intelligence has gotten better at solving these narrow problems. But now scientists are tackling harder problems, like how to build AI algorithms that can piece together bits of information to give a coherent answer for more complicated, nuanced questions. "Language is the ultimate frontier of AI research because you can express any thought or idea in language," states Stanford computer science professor Yoav Shoham. "It's as rich as human thinking." For Shoham, the excitement about artificial intelligence lies not only in what it can do - but also in what it can't. "It's not just mimicking the human brain in silicon, but asking what traits are so innately human that we don't think we can emulate them on a computer," Shoham said. "Our creativity, fairness, emotions, all the stuff we take for granted - machines can't even come close."
Lara Cowell

Natural Language Processing ft. Siri - 0 views

  •  
    Siri uses a variety of advanced machine learning technologies to understand your command and return a response - primarily natural language processing (NLP) and speech recognition. NLP focuses on allowing computers to understand and communicate in human language. For processing purposes, language is analyzed at three levels - syntax, semantics, and pragmatics. Whereas syntax describes the structure and composition of phrases, semantics provides meaning for the syntactic elements. Pragmatics, in turn, refers to the context in which a phrase is used.
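    The syntax/semantics split is easy to see with an off-the-shelf NLP library. The sketch below uses spaCy (an assumption for illustration; Apple doesn't publish Siri's internals): the parse exposes the structure of the phrase, the entity labels tag its meaning-bearing pieces, while pragmatics - the surrounding context - is exactly what neither layer captures.

```python
# Sketch: syntax vs. semantics on a Siri-style command, using spaCy.
# Assumes the small English model is installed: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Set an alarm for 7 tomorrow morning")

for token in doc:                      # syntax: each word's role in the phrase
    print(token.text, token.pos_, token.dep_)

for ent in doc.ents:                   # semantics: meaning-bearing pieces
    print(ent.text, ent.label_)        # e.g. the time expression tagged TIME/DATE
```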
keonsagara23

The Chinese Room Argument (Stanford Encyclopedia of Philosophy) - 0 views

  •  
    This post discusses the Chinese Room thought experiment, which examines whether or not computers are actually able to understand us and our words, or if they're just stringing together certain words by association.
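    The thought experiment is easy to make concrete in code: a program that maps incoming symbols to outgoing symbols purely by rule, with no understanding of either side. A toy sketch (the rulebook is invented):

```python
# Toy Chinese Room: symbols in, symbols out, by lookup alone. The program
# "answers" Chinese while understanding none of it - the point of the argument.
rulebook = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你是谁？": "我是一个程序。",    # "who are you?" -> "I am a program."
}

def room(symbols: str) -> str:
    return rulebook.get(symbols, "请再说一遍。")   # "please say that again"

print(room("你好"))   # the room replies fluently, understanding nothing
```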
Lara Cowell

Did My Cat Just Hit On Me? An Adventure in Pet Translation - 0 views

  •  
    The urge to converse with animals is age-old, long predating the time when smartphones became our best friends. A new app, MeowTalk, is the product of a growing interest in enlisting additional intelligences - machine-learning algorithms - to decode animal communication. The app detects and analyzes cat utterances in real time, assigning each one a broadly defined "intent," such as happy, resting, hunting or "mating call." It then displays a conversational, plain-English "translation" of whatever intent it detects. MeowTalk uses the sounds it collects to refine its algorithms and improve its performance, the founders said, and pet owners can provide in-the-moment feedback if the app gets it wrong. In 2021, MeowTalk researchers reported that the software could distinguish among nine intents with 90 percent accuracy overall. But the app was better at identifying some than others, not infrequently confusing "happy" and "pain," according to the results. Dogs could soon have their day. Zoolingua, a start-up based in Arizona, is hoping to create an A.I.-powered dog translator that will analyze canine vocalizations and body language. Still, even sophisticated algorithms may miss critical real-world context and cues, said Alexandra Horowitz, an expert on dog cognition at Barnard College. For instance, much of canine behavior is driven by scent. "How is that going to be translated, when we don't know the extent of it ourselves?" Dr. Horowitz said in an email.
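    To make the accuracy claim concrete: a nine-intent classifier like the one described is typically scored against labeled clips, and the reported "happy"/"pain" mix-ups would show up off the diagonal of a confusion matrix. A sketch with invented labels (not MeowTalk's data or code):

```python
# Sketch: scoring an intent classifier and spotting a happy/pain confusion.
# Labels and predictions are invented for illustration.
from sklearn.metrics import accuracy_score, confusion_matrix

intents = ["happy", "pain", "resting", "hunting", "mating call"]
y_true = ["happy", "pain",  "pain", "resting", "hunting", "mating call"]
y_pred = ["happy", "happy", "pain", "resting", "hunting", "mating call"]

print(accuracy_score(y_true, y_pred))                    # ~0.83 on this toy set
print(confusion_matrix(y_true, y_pred, labels=intents))  # off-diagonal = errors
```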