
Words R Us: Group items tagged "recognition"

Ryan Catalani

PLoS ONE: Why Um Helps Auditory Word Recognition: The Temporal Delay Hypothesis - 2 views

  • "Our main conclusion is that delays in word onset facilitate word recognition, and that such facilitation is independent of the type of delay. ... Our findings support the perhaps counterintuitive conclusion that fillers like um can sometimes help (rather than hinder) listeners to identify spoken words. But critically, the data show that the same is true for silent pauses and pauses filled with artificially generated tones."
Lara Cowell

Mapping language in the brain - 1 views

  • 'By studying language in people with aphasia, we can try to accomplish two goals at once: we can improve our clinical understanding of aphasia and get new insights into how language is organized in the mind and brain,' said Daniel Mirman, Professor of Psychology at Drexel University. Mirman is lead author of a new study which examined data from 99 people who had persistent language impairments after a left-hemisphere stroke. In the first part of the study, the researchers collected 17 measures of cognitive and language performance and used a statistical technique to find the common elements that underlie performance on multiple measures. Researchers found that spoken language impairments vary along four dimensions or factors:
    1. Semantic Recognition: difficulty recognizing the meaning or relationship of concepts, such as matching related pictures or matching words to associated pictures.
    2. Speech Recognition: difficulty with fine-grained speech perception, such as telling "ba" and "da" apart or determining whether two words rhyme.
    3. Speech Production: difficulty planning and executing speech actions, such as repeating real and made-up words or the tendency to make speech errors like saying "girappe" for "giraffe."
    4. Semantic Errors: making semantic speech errors, such as saying "zebra" instead of "giraffe," regardless of performance on other tasks that involved processing meaning.
    In the second part of the study, researchers mapped the areas of the brain associated with each of the four dimensions identified above.
Ryan Catalani

Lie-Detection Software Is a Research Quest - NYTimes.com - 7 views

  • "A small band of linguists, engineers and computer scientists, among others, are busy training computers to recognize hallmarks of what they call emotional speech - talk that reflects deception, anger, friendliness and even flirtation. ... Algorithms developed by Dr. Hirschberg and colleagues have been able to spot a liar 70 percent of the time in test situations, while people confronted with the same evidence had only 57 percent accuracy ... His lab has also found ways to use vocal cues to spot inebriation, though it hasn't yet had luck in making its computers detect humor - a hard task for the machines, he said."
Lara Cowell

Brain structure of infants predicts language skills at one year - 2 views

  • Using a brain-imaging technique that examines the entire infant brain, University of Washington researchers have found that the anatomy of certain brain areas - the hippocampus and cerebellum - can predict children's language abilities at one year of age. Infants with a greater concentration of gray and white matter in the cerebellum and the hippocampus showed greater language ability at age 1, as measured by babbling, recognition of familiar names and words, and ability to produce different types of sounds. This is the first study to identify a relationship between language and the cerebellum and hippocampus in infants. Neither brain area is well-known for its role in language: the cerebellum is typically linked to motor learning, while the hippocampus is commonly recognized as a memory processor. "Looking at the whole brain produced a surprising result and scientists live for surprises. It wasn't the language areas of the infant brain that predicted their future linguistic skills, but instead brain areas linked to motor abilities and memory processing," Kuhl said. "Infants have to listen and memorize the sound patterns used by the people in their culture, and then coax their own mouths and tongues to make these sounds in order to join the social conversation and get a response from their parents." The findings could reflect infants' abilities to master the motor planning for speech and to develop the memory requirements for keeping the sound patterns in mind. "The brain uses many general skills to learn language," Kuhl said. "Knowing which brain regions are linked to this early learning could help identify children with developmental disabilities and provide them with early interventions that will steer them back toward a typical developmental path."
Lara Cowell

Why We Remember Song Lyrics So Well - 1 views

  • Oral forms like ballads and epics exist in every culture, originating long before the advent of written language. In preliterate eras, tales had to be appealing to the ear and memorable to the mind or else they would simply disappear. After all, most messages we hear are forgotten, or if they're passed on, they're changed beyond recognition - as psychologists' investigations of how rumors evolve have shown. In his classic book Memory in Oral Traditions, cognitive scientist David Rubin notes, "Oral traditions depend on human memory for their preservation. If a tradition is to survive, it must be stored in one person's memory and be passed on to another person who is also capable of storing and retelling it. All this must occur over many generations… Oral traditions must, therefore, have developed forms of organization and strategies to decrease the changes that human memory imposes on the more casual transmission of verbal material." What are these strategies? Tales that last for many generations tend to describe concrete actions rather than abstract concepts. They use powerful visual images. They are sung or chanted. And they employ patterns of sound: alliteration, assonance, repetition and, most of all, rhyme. Such universal characteristics of oral narratives are, in effect, mnemonics - memory aids that people developed over time "to make use of the strengths and avoid the weaknesses of human memory," as Rubin puts it.
Lara Cowell

Why Toy 'Minion' Curse Words Might Just All Be in Your Head - 1 views

  • McDonald's swears up and down that the little yellow "Minions" Happy Meal toy is speaking only nonsense words and not something a little more adult. Experts say the company may be right, and the curse words many hear may be tied to how our brains are primed to find words even when they're not really there. "The brain tries to find a pattern match, even when just receiving noise, and it is good at pattern recognition," says Dr. Steven Novella, a neurologist at the Yale School of Medicine. "Once the brain feels it has found a best match, then that is what you hear. The clarity of the speech actually increases with multiple exposures, or if you are primed by being told what to listen for" - as most people who heard the toy online already had been. The technical name for the phenomenon is "pareidolia," hearing sounds or seeing images that seem meaningful but are actually random. It leads people to see shapes in clouds, a man in the moon or the face of Jesus on a grilled cheese sandwich.
rtakaki16

Sipster, Dready, Bitemize - One Writer Says These Should All Be Words - 2 views

  • Dready, sipster, boudwar, bitemize - none of these are words that you'll find in any official dictionary. But in Lizzie Skurnick's mind, they deserve some recognition in their own right.
Lara Cowell

A life without music - 3 views

  • Amusia is a deficit in musical memory, recognition, and pitch processing that people can be born with or acquire through brain damage. Some people may think of themselves as "tone-deaf," but most of these "bad" singers are just that - bad singers, not truly amusic. People with amusia are so unable to hear tones that they even struggle to differentiate between questions and statements when spoken. Language, like music, uses sound to convey meaning, be it a story or simply an emotion. In fact, music and spoken language use many of the same structural elements: pitch, duration, intensity, and melodic contour, to name a few. Melodic contour is the pattern in which pitch changes from high to low over time. This contouring of pitch is often used to express emotion in music, and its emotional effect is appreciated across many cultures and age groups. In speech, melodic contour is created by intonation, which allows us to place emphasis upon certain words and distinguish the purpose of a sentence, e.g. whether it is a question, statement, or command. These comparisons provide evidence for the overlap of brain areas and mechanisms that underlie speech and music processing. In addition, the short-term memory storage of sound patterns overlaps for language and music.
Lara Cowell

One Reason Teens Respond Differently To The World - 0 views

  • Recognition of subtle emotional cues may be developmental, according to neurological research. At McLean Hospital in Belmont, Mass., Deborah Yurgelun-Todd and a group of researchers have studied how adolescents perceive emotion as compared to adults. The scientists looked at the brains of 18 children between the ages of 10 and 18 and compared them to 16 adults using functional magnetic resonance imaging (fMRI). Both groups were shown pictures of adult faces and asked to identify the emotion on the faces. Using fMRI, the researchers could trace what part of the brain responded as subjects were asked to identify the expression depicted in the picture. The results surprised the researchers: the adults correctly identified the expression as fear, yet the teens answered "shocked, surprised, angry." Moreover, teens and adults used different parts of their brains to process what they were feeling. The teens mostly used the amygdala, a small almond-shaped region that guides instinctual or "gut" reactions, while the adults relied on the frontal cortex, which governs reason and planning. As the teens got older, however, the center of activity shifted more toward the frontal cortex and away from the cruder response of the amygdala. Yurgelun-Todd, director of neuropsychology and cognitive neuroimaging at McLean Hospital, believes the study goes partway to explaining why the teenage years seem so emotionally turbulent. The teens seemed not only to be misreading the feelings on the adult's face, but they reacted strongly from an area deep inside the brain. The frontal cortex helped the adults distinguish fear from shock or surprise. Often called the executive or CEO of the brain, the frontal cortex gives adults the ability to distinguish subtleties of expression: "Was this really fear or was it surprise or shock?" For the teens, this area wasn't fully operating.
jessicawilson18

Singing and music as aids to language development and its relevance for children with d... - 1 views

  • Music is such a powerful tool for children, whether it be singing, playing an instrument, or just grooving along to the beat. There are so many types of songs (repetition, recognition, action, imaginative, etc.) that music reaches all types of learners and helps develop their language abilities as well! This article explores how music can really help people with Down syndrome.
sarahyip17

Computer linguists are developing an intelligent system aid for air traffic controllers - 0 views

  • This article describes a new system created for air traffic controllers and pilots. AcListant listens to air traffic controllers' radio conversations and suggests commands that fit the situation. The system can filter out basic greetings like "Hello" and "Good morning" and focus on the commands instead. AcListant can support better communication, especially with pilots who speak very fast or with an accent.
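    A toy Python sketch of that filtering step - invented for illustration, not AcListant's actual logic: drop bare greetings from transcribed radio messages so downstream suggestion logic sees only likely commands.

        # Invented toy filter: skip small talk, keep likely command phrases.
        GREETINGS = {"hello", "good morning", "good day"}

        def extract_commands(utterances):
            # Anything that is not a bare greeting goes on to the suggestion step.
            return [u for u in utterances if u.strip().lower() not in GREETINGS]

        messages = ["Good morning", "Lufthansa four five two, descend flight level eight zero"]
        print(extract_commands(messages))
        # -> ['Lufthansa four five two, descend flight level eight zero']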
Lara Cowell

Finding A Pedicure In China, Using Cutting-Edge Translation Apps - 0 views

  • A traveling journalist in Beijing tries both Baidu (China's version of Google) and Google voice-translation apps, with mixed results. You speak into the apps; they listen and then translate into the language you choose. They do it in writing, by displaying text on the screen as you talk, and out loud, by using your phone's speaker to narrate what you've said once you're done talking. Typically, exchanges are brief: 3-4 turns on average for Google, 7-8 for Baidu's translate app. Both Google and Baidu use machine learning to power their translation technology. While a human linguist could dictate all the rules for going from one language to another, that would be tedious and would yield poor results, because many languages aren't structured in parallel form. So instead, both companies have moved to pattern recognition through "neural machine translation." They take a mountain of data - really good translations - and load it into their computers. Algorithms then mine the data for patterns. The end product is translation that's not just phrase-by-phrase, but entire thoughts and sentences at a time. Not surprisingly, sometimes translations are successes, and other times, epic fails. Why? As Macduff Hughes, a Google executive, notes, "there's a lot more to translation than mapping one word to another. The cultural understanding is something that's hard to fully capture just in translation."
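    A toy Python sketch of the contrast the article draws - an invented example, not Google's or Baidu's actual system: mapping one word to another, which is exactly what neural machine translation moves beyond. The mini French-to-English lexicon is made up for illustration.

        # Word-by-word mapping - the rule-based approach the article says both
        # companies moved beyond. The tiny French-to-English lexicon is invented.
        word_map = {"comment": "how", "allez": "go", "vous": "you"}

        sentence = "comment allez vous"
        literal = " ".join(word_map[w] for w in sentence.split())

        # Prints "how go you": every word is mapped, but the sentence-level,
        # idiomatic meaning ("how are you?") is lost - the gap NMT closes by
        # learning patterns over whole sentences rather than single words.
        print(literal)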
ellisalang17

How Machines Learned to Speak the Human Language - 0 views

  • This article explains how machines such as "Siri" and "Echo" are able to speak the human language. "Language technologies teach themselves, via a form of pattern-matching. For speech recognition, computers are fed sound files on the one hand, and human-written transcriptions on the other. The system learns to predict which sounds should result in what transcriptions."
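    As a rough illustration of that quoted pattern-matching idea - a minimal sketch, not Siri's or Echo's actual pipeline, with invented feature vectors standing in for sound files - pair "sounds" with human-written transcriptions, then predict a new sound's transcription by nearest match:

        import math

        # Invented stand-ins for audio: each feature vector is paired with a
        # human-written transcription, mirroring the sound-file/transcription
        # pairs the article describes.
        training = [
            ([0.9, 0.1], "yes"),
            ([0.8, 0.2], "yes"),
            ([0.1, 0.9], "no"),
            ([0.2, 0.8], "no"),
        ]

        def transcribe(features):
            # Predict by finding the closest stored "sound" - the crudest
            # possible form of learned pattern-matching.
            closest = min(training, key=lambda pair: math.dist(pair[0], features))
            return closest[1]

        print(transcribe([0.85, 0.15]))  # -> "yes"

    Real systems replace the nearest-neighbor lookup with models trained on vast corpora, but the supervised pairing of sounds with transcriptions is the same underlying idea.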
laureltamayo17

Shakespeare play helps children with autism communicate - 0 views

  • Fourteen children with autism spectrum disorder participated in the "Hunter Heartbeat Method," a drama-based social skills intervention. The children play games that work on skills like facial emotion recognition, personal space, social improvisation, and the pragmatics of dialogue exchange. The games are based on the plot of The Tempest and are taught in a relaxed, playful environment. At the end of the ten-week program, the children showed better language skills and were better able to recognize facial expressions.
Lara Cowell

De-Stigmatizing Hawaii's Creole Language - 1 views

  • The Atlantic's Alia Wong writes about the U.S. Census's recognition of Hawaiʻi Creole English (sometimes termed "pidgin" in local Hawaiʻi parlance). Wong sees it as a symbolic gesture acknowledging the "legitimacy of a tongue widely stigmatized, even among locals who dabble in it, as a crass dialect reserved for the uneducated lower classes and informal settings. It reinforces a long, grassroots effort by linguists and cultural practitioners to institutionalize and celebrate the language - to encourage educators to integrate it into their teaching, potentially elevating the achievement of Pidgin-speaking students. And it indicates that, elsewhere in the country, the speakers of comparable linguistic systems - from African American Vernacular English, or ebonics, to Chicano English - may even see similar changes one day, too."
Lisa Stewart

Pidgin and Education - 12 views

  • When asked what it would be like if he couldn't speak Pidgin, one Oahu man said "Would take me long time fo' say stuff." Another Oahu man compared speaking Standard English and Pidgin in this way: "When I speak Standard English I gotta tink what I going say... Pidgin, I jus' open my mout' and da ting come out."
  • Two programs in Hawaiʻi in the 1980s to early 1990s (Project Holopono and Project Akamai) included some activities to help Pidgin-speaking students recognize differences between their language and Standard English. This recognition of the children's home language was further supported with the use of some local literature written in Pidgin. Both projects reported success in helping the students develop Standard English proficiency.
  • When the home language is acknowledged and made use of rather than denigrated at school, it has been found to have these positive consequences: it helps students make the transition into primary school with greater ease; it increases appreciation for the students' own culture and identity and improves self-esteem; it creates positive attitudes towards school; it promotes academic achievement; and it helps to clarify differences between the languages of home and school.
  • causal aswai.
    • Lisa Stewart: or the "swa swa"
marisaiha21

China's language input system in the digital age affects children's reading development - 0 views

  • This study looks at the advancement of technology in China and how it has impacted children and their language skills. Chinese children have learned to use pinyin input on electronic devices - typing what a character sounds like without having to actually write it out. Use of pinyin input and other e-tools negatively impacts character recognition skills and Chinese reading acquisition, whereas handwriting has the opposite effect.
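    A minimal sketch of the pinyin-input mechanic the study describes (the candidate lists are an invented toy sample): the user types what the character sounds like and picks it from a menu, so the character's strokes are never produced by hand.

        # Toy pinyin input method, invented for illustration: type the sound,
        # choose the character from software-supplied candidates.
        pinyin_to_chars = {
            "ma": ["妈", "马", "吗"],   # mother, horse, question particle
            "shu": ["书", "树", "数"],  # book, tree, number
        }

        def candidates(pinyin):
            # The software, not the writer's hand, supplies the character forms.
            return pinyin_to_chars.get(pinyin, [])

        print(candidates("ma"))  # -> ['妈', '马', '吗']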
luralooper21

The power of priming - part one | The Marketing Society - 0 views

  • While there is an infinite number of stimuli in our daily lives, priming by words has been heavily researched. Priming words have been shown to cause faster recognition or identification of something, and also to prompt later actions similar to the ones read about. Because one cannot control the priming that occurs in "System 1" of the subconscious mind, people often incorrectly attribute the resulting thoughts or actions to their own emotions, thoughts, and viewpoints. Since overlapping brain regions process both social warmth and physical warmth, or both rough experiences and rough textures, priming works without our realizing it, through neural linkages whose connections are only subtly apparent.
kaiadunford20

What does research show about the benefits of language learning? - 2 views

This study aimed to validate the effects of second language learning on children's linguistic awareness. More particularly, it examined whether bilingual background improves the ability to manipula...

language brain

Lara Cowell

Natural Language Processing ft. Siri - 0 views

  • Siri uses a variety of advanced machine learning technologies to understand your command and return a response - primarily natural language processing (NLP) and speech recognition. NLP focuses on allowing computers to understand and communicate in human language. Computationally, language is analyzed on three levels - syntax, semantics, and pragmatics. Whereas syntax describes the structure and composition of phrases, semantics provides meaning for the syntactic elements. Pragmatics, in turn, refers to the context in which the phrase is used.
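    A hedged sketch of those three levels applied to a toy voice command; the rules, intent names, and context fields are all invented for illustration and are not Siri's actual NLP stack.

        import re

        def parse_syntax(utterance):
            # Syntax: recover structure - here, just a verb plus its object phrase.
            match = re.match(r"(set|play|call)\s+(.+)", utterance.lower())
            return {"verb": match.group(1), "object": match.group(2)} if match else None

        def interpret_semantics(parse):
            # Semantics: give the syntactic elements meaning - an intent with a slot.
            intents = {"set": "CREATE_ALARM", "play": "PLAY_MEDIA", "call": "PLACE_CALL"}
            return {"intent": intents[parse["verb"]], "slot": parse["object"]}

        def apply_pragmatics(meaning, context):
            # Pragmatics: interpret the intent in its context of use - whose
            # device is speaking, in what time zone.
            return {**meaning, "user": context["user"], "timezone": context["timezone"]}

        parse = parse_syntax("Set an alarm for 7 am")
        meaning = interpret_semantics(parse)
        print(apply_pragmatics(meaning, {"user": "alice", "timezone": "HST"}))
        # -> {'intent': 'CREATE_ALARM', 'slot': 'an alarm for 7 am',
        #     'user': 'alice', 'timezone': 'HST'}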