
Words R Us: Group items tagged "speeches"


jessicali19

Polite vs. Informal Speech in Korean - 3 views

  • This post covers the two main categories of speech styles in Korean and explains when and how to use each. The two styles, 존댓말 (polite speech) and 반말 (informal speech), are chosen based on hierarchy, since Korean culture carries a strong Confucian influence from the country's history. The hierarchy rests mainly on age and social status. For example, when speaking to a teacher at school, you would use polite speech because they are older and more knowledgeable than you. Phrases and sentences can be said differently depending on the style of speech used, yet still carry the same meaning.
Lara Cowell

Speech Means Using Both Sides of Brain - 3 views

  • We use both sides of our brain for speech, New York University researchers have discovered: a finding that overturns previous conceptions about neurological activity. Many in the scientific community have posited that both speech and language are lateralized -- that is, we use only one side of the brain for speech (listening and speaking) and for language (constructing and understanding sentences). However, the conclusions about speech generally stem from studies that rely on indirect measurements of brain activity, raising questions about characterizing speech as lateralized. The researchers tested which parts of the brain were used during speech by asking subjects to repeat two "non-words" -- "kig" and "pob." Using non-words to gauge neurological activity let the researchers isolate speech from language. An analysis of brain activity as patients engaged in speech tasks showed that both sides of the brain were used -- that is, speech is, in fact, bilateral. The results also offer insights into addressing speech impairments caused by stroke or injury and lay the groundwork for better rehabilitation methods.
Ryan Catalani

Are mirror neurons the basis of speech perception? « Replicated Typo - 1 views

  • According to a new study: "The human mirror system/motor speech system is not critical for speech perception. Temporal lobe structures, rather than motor structures, are the primary substrate for speech perception."
Lisa Stewart

The effect of gesture on speech production and comprehension. | Goliath Business News - 7 views

  • one primary objective of this study was to examine the relationships among gesture, speech production, and listener comprehension. In doing so, we address two questions: First, do gestures enhance listener comprehension? Second, if gesture enhances comprehension, how does it do so? Does gesture have a direct effect on listener comprehension, or does gesture enhance listener comprehension only because it aids the speaker in producing more effective speech? Thus our first objective in this study was to examine the extent to which gesture enhances listener comprehension and the extent to which this relationship is mediated by the effect of gesture on speech production.
  • they gesture more on certain types of words or phrases. For example, Rauscher et al. (1996) found that gesturing was nearly five times more frequent on "spatial content phrases" (phrases containing spatial prepositions such as "under" and "on") than on nonspatial phrases. Moreover, they found that not being able to gesture was more damaging when the speaker attempted to convey spatial content. Therefore, a second objective of this study was to examine whether gesture (or not being able to gesture) is more important for some types of speech than for others.
  • Can't access the full article, but the abstract describes how the researchers set up experiments to answer their questions about the relationship between speech and gesture.
Ryan Catalani

Research Gives Insight into Brain Function of Adults Who Stutter - 3 views

  • "[New research] is suggesting that atypical brain function is a fundamental aspect of speech production tasks for adults who stutter. ... Now because many speech areas are interconnected across the two hemispheres through the corpus callosum, it might suggest that hemispheric dominance for speech and language has not been established to the same degree as it has been for normally fluent adults. ... Loucks then used functional magnetic resonance imaging of brain activity to study participants who stutter and found that even brief, simple speech tasks - such as producing a single word to name a picture - are associated with altered functional activity."
Ryan Catalani

Futurity.org - How we hear ourselves speak - 2 views

  • "This shows that our brain has a complex sensitivity to our own speech that helps us distinguish between our vocalizations and those of others, and makes sure that what we say is actually what we meant to say," Flinker says. ... While the study doesn't specifically address why humans need to track their own speech so closely, Flinker theorizes that, among other things, tracking our own speech is important for language development, monitoring what we say, and adjusting to various noise environments.
Lara Cowell

Letting a baby play on an iPad might lead to speech delays, study says - 0 views

  • A new study led by Dr. Catherine Birken, a pediatrician and scientist at the Hospital for Sick Children in Toronto, Ontario, found that the more time children between six months and two years of age spent using handheld screens such as smartphones, tablets, and electronic games, the more likely they were to experience speech delays. In the study, which involved nearly 900 children, parents reported the amount of time their children spent using screens, in minutes per day, at age 18 months. Researchers then used an infant-toddler checklist, a validated screening tool, to assess the children's language development at the same age. They looked at a range of things, including whether the child uses sounds or words to get attention or help, whether the child puts words together, and how many words the child uses. Twenty percent of the children spent an average of 28 minutes a day using screens, the study found, and every 30-minute increase in daily screen time was linked to a 49% increase in the risk of what the researchers call expressive speech delay: delay in using sounds and words. Commenting on the study, Michelle MacRoy-Higgins and Carlyn Kolker, both speech pathologists/therapists and co-authors of "Time to Talk: What You Need to Know About Your Child's Speech and Language Development," offered this advice: interact with your child. The best way to teach children language is by interacting with them, talking with them, playing with them, using varied vocabulary, pointing things out, and telling them stories.
Lara Cowell

The Center for Advanced Research on Language Acquisition (CARLA): Pragmatics and Speech... - 1 views

  • An important area of second/foreign language teaching and learning is pragmatics: the appropriate use of language in performing speech acts such as apologizing, requesting, complimenting, refusing, and thanking. Meaning is not encoded in word semantics alone; it is also shaped by the situation, the speaker, and the listener. A speech act is, according to linguist Kent Bach, "the performance of several acts at once, distinguished by different aspects of the speaker's intention: there is the act of saying something, what one does in saying it, such as requesting or promising, and how one is trying to affect one's audience". Speech acts can be broken down into three levels:
    1. Locutionary: the act of saying something.
    2. Illocutionary: the speaker's intent in performing the act. For example, if the locutionary act in an interaction is the question "Is there any salt?", the implied illocutionary request is "Can someone pass the salt to me?"
    3. Perlocutionary (present in some instances): the act's effect on the feelings, thoughts, or actions of the speaker or the listener, e.g., inspiring, persuading, or deterring.
    The Center for Advanced Research on Language Acquisition (CARLA) at the University of Minnesota provides a collection of descriptions of speech acts, as revealed through empirical research. The material is designed to help language teachers and advanced learners become more aware of the sociocultural use of the language they are teaching or learning. The speech acts covered include apologies, complaints, compliments/responses, greetings, invitations, refusals, requests, and thanks.
Lara Cowell

Pittsburgh and the Dilemma of Anti-Semitic Speech Online - The Atlantic - 0 views

  • Robert Bowers, the alleged Pittsburgh synagogue killer, had an online life like many thousands of anti-Semitic Americans. He had Twitter and Facebook accounts and was an active user of Gab, a right-wing Twitter knockoff with a hands-off approach to policing speech. The Times of Israel reported that among anti-Semitic conspiracy theories and slurs, Bowers had recently posted a picture of "a fiery oven like those used in Nazi concentration camps used to cremate Jews, writing the caption 'Make Ovens 1488F Again,'" a white-supremacist reference. Then he made one last post, saying, "I'm going in," and allegedly went to kill 11 people at the Tree of Life synagogue in Pittsburgh. Only then did his accounts come down, just like those of Cesar Sayoc, the mail-bomb suspect. This is how it goes now. Both of these men made nasty, violent, prejudiced posts. Yet, as reporter after reporter has noted, their online lives were, to the human eye at least, indistinguishable from the legions of other trolls who say despicable things. There is just no telling who will stay in the comments section and who will try to kill people in the real world. It was not long ago that free-speech absolutism was the order of the day in Silicon Valley. But that was before anti-Semitic attacks spiked; before the Charlottesville, Virginia, killing; before the kind of open racism that had lost purchase in American culture made its ugly resurgence. Each new incident ratchets up the pressure on technology companies to rid themselves of their trolls. But the culture they've created will not prove easy to stamp out.
Lara Cowell

The Music-Speech-Rehab Connection - 3 views

  • Author Sena Moore writes about how music can re-wire our brains for speech. Singing and speaking activate similar areas on both sides of the brain, primarily in the motor production and sensory feedback areas. Singing, however, also activates some areas of the right hemisphere more strongly than the left, whereas speech is a left-hemisphere-dominant function. In other words, similar brain networks associated with vocal production are activated when a person is singing and when s/he is speaking. The stronger right-hemisphere activation supports the clinical observation that people who cannot speak because of damage to the left-hemisphere speech region known as Broca's area can still produce words by singing them.
Lara Cowell

Speech Accent Archive (George Mason University) - 1 views

  • This speech accent archive, headed by Steven Weinberger, a linguistics professor at George Mason University, is a project of the linguistics program in the Department of English, the College of Arts and Science's Technology Across the Curriculum program, and the Center for History and New Media at George Mason University. The archive uniformly presents a large set of speech samples from a variety of language backgrounds: native and non-native speakers of English read the same paragraph, and their readings are carefully transcribed. The archive is used by people who wish to compare and analyze the accents of different English speakers. The website allows users to compare the demographic and linguistic backgrounds of the speakers in order to determine which variables are key predictors of each accent, and it demonstrates that accents are systematic rather than merely mistaken speech. Each individual sample page contains a sound control bar, the speaker's answers to 7 demographic questions, a phonetic transcription of the sample, a set of the speaker's phonological generalizations, a link to a map showing the speaker's place of birth, and a link to the Ethnologue language database. The archive also contains a set of native-language phonetic inventories so that users can perform contrastive analyses.
Lara Cowell

Mapping language in the brain - 1 views

  • 'By studying language in people with aphasia, we can try to accomplish two goals at once: we can improve our clinical understanding of aphasia and get new insights into how language is organized in the mind and brain,' said Daniel Mirman, Professor of Psychology at Drexel University. Mirman is lead author of a new study that examined data from 99 people who had persistent language impairments after a left-hemisphere stroke. In the first part of the study, the researchers collected 17 measures of cognitive and language performance and used a statistical technique to find the common elements that underlie performance on multiple measures (see the sketch below). They found that spoken language impairments vary along four dimensions, or factors:
    1. Semantic Recognition: difficulty recognizing the meaning or relationship of concepts, such as matching related pictures or matching words to associated pictures.
    2. Speech Recognition: difficulty with fine-grained speech perception, such as telling "ba" and "da" apart or determining whether two words rhyme.
    3. Speech Production: difficulty planning and executing speech actions, such as repeating real and made-up words, or a tendency to make speech errors like saying "girappe" for "giraffe."
    4. Semantic Errors: making semantic speech errors, such as saying "zebra" instead of "giraffe," regardless of performance on other tasks that involved processing meaning.
    In the second part of the study, researchers mapped the areas of the brain associated with each of the four dimensions identified above.
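    The summary above only says "a statistical technique," so the following is a minimal, hedged sketch rather than the study's actual pipeline: exploratory factor analysis is a standard way to reduce many test scores to a few latent dimensions, and the data here are random placeholders standing in for the real 99 x 17 score matrix.

        # Minimal sketch: reduce 17 behavioral measures to 4 latent factors.
        # Assumptions: the article does not name the method, so scikit-learn's
        # FactorAnalysis stands in for it, and the scores below are random
        # placeholders, not the study's actual patient data.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)
        scores = rng.normal(size=(99, 17))  # 99 patients x 17 measures (placeholder)

        fa = FactorAnalysis(n_components=4, random_state=0)
        factors = fa.fit_transform(scores)   # each patient's position on 4 latent dimensions
        print(factors.shape)                 # (99, 4)
        print(fa.components_.shape)          # (4, 17): how each measure loads on each factor

    Inspecting the loadings (which measures weight heavily on which factor) is what lets researchers label the latent dimensions with names like "Semantic Recognition" or "Speech Production."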
Lara Cowell

Meta to break language barriers with AI, builds universal speech translator - 1 views

  • Mark Zuckerberg, the CEO of Meta, which owns Facebook, wants to break language barriers across the globe using artificial intelligence (AI). Meta announced an ambitious AI-driven project that will be key to building its Metaverse: the company is building a universal speech translator, along with an AI-powered virtual assistant. Zuckerberg, in an online presentation, stated, "The ability to communicate with anyone in any language - that's a superpower people have dreamed of forever, and AI is going to deliver that within our lifetime." For people who understand languages like English, Mandarin, or Spanish, it may seem as if today's apps and web tools already provide the translation technology we need, yet nearly half the world's population can't access online content in their preferred language. No Language Left Behind is a single system capable of translating between all written languages. "We're also working on Universal Speech Translator, an AI system that provides instantaneous speech-to-speech translation across all languages, even those that are mostly spoken," said the company in a blog post.
Lara Cowell

Sophomoric? Members Of Congress Talk Like 10th Graders, Analysis Shows : NPR - 5 views

  • Members of Congress are often criticized for what they do - or rather, what they don't do. But what about what they say and, more specifically, how they say it? It turns out that the sophistication of congressional speech-making is on the decline, according to the open-government group the Sunlight Foundation.
  • Here's a follow-up on the same study, examining the speech of Hawaii's senators and representatives: http://www.staradvertiser.com/news/breaking/157017545.html?id=157017545. U.S. Sen. Daniel Akaka speaks at a college-sophomore level, according to an analysis of his speeches by the Sunlight Foundation, a Washington group that pushes for government transparency. The analysis ranks Akaka in the top five among members of Congress for his use of longer sentences and more complex words. U.S. Sen. Daniel Inouye isn't far behind: his speeches use words and sentences at the level of a college freshman. U.S. Rep. Mazie Hirono speaks at the level of a high school senior, while U.S. Rep. Colleen Hanabusa's speeches are at the high school freshman level, according to the study. Of course, longer sentences and more complex vocabulary don't necessarily make for better communication, nor do they indicate effectiveness in serving one's constituents. A sketch of the grade-level formula behind analyses like this appears below.
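    Grade-level rankings like these come from readability scoring; the Sunlight Foundation's analysis was reported as using the Flesch-Kincaid grade-level formula. The sketch below is a rough reimplementation under that assumption: the vowel-group syllable counter is a crude heuristic of our own, not the Foundation's actual tooling.

        # Rough sketch of the Flesch-Kincaid grade-level formula:
        #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
        # The syllable counter is a simple heuristic (count vowel groups),
        # not the exact counter used in the Sunlight Foundation analysis.
        import re

        def count_syllables(word: str) -> int:
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def fk_grade(text: str) -> float:
            sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
            words = re.findall(r"[A-Za-z']+", text)
            syllables = sum(count_syllables(w) for w in words)
            return (0.39 * (len(words) / len(sentences))
                    + 11.8 * (syllables / len(words)) - 15.59)

        sample = ("Ask not what your country can do for you. "
                  "Ask what you can do for your country.")
        print(round(fk_grade(sample), 1))  # short, simple sentences score low

    Longer sentences and more multisyllabic words push the score up, which is why the formula rewards the speaking style the analysis attributes to Akaka.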
Jake Van Meter

The 35 Greatest Speeches in History - 1 views

  • Some great excerpts from some great speeches.
Kisa Matlin

Between Speech and Song - Association for Psychological Science - 1 views

  • Research about the association between music and speech. Tonal languages, such as Mandarin, support theories that language developed out of a "protolanguage" composed of sounds that were more like tones than words.
Lara Cowell

Language acquisition: From sounds to the meaning: Do young infants know that words in l... - 0 views

  • Without understanding the 'referential function' of language (words as 'verbal labels' symbolizing other things), it is impossible to learn a language. Is this implicit knowledge already present early in infancy? Marno, Nespor, and Mehler of the International School for Advanced Studies conducted experiments with 4-month-old infants. Babies watched a series of videos in which a person might (or might not) utter an invented name for an object while directing (or not directing) their gaze toward the position on the screen where a picture of the object would appear. By monitoring the infants' gaze, Marno and colleagues observed that, in response to speech cues, an infant's gaze found the visual object faster, indicating readiness to find a potential referent of the speech. This effect did not occur if the person in the video remained silent or if the sound was a non-speech sound. "The mere fact of hearing verbal stimuli placed the infants in a condition to expect the appearance, somewhere, of an object to be associated with the word, whereas this didn't happen when there was no speech, even when the person in the video directed the infant's gaze to where the object would appear," concludes Marno. "This suggests that infants at this early age already have some knowledge that language implies a relation between words and the surrounding physical world. Moreover, they are ready to find out these relations, even if they don't know anything about the meanings of the words yet. Thus, good advice to mothers is to speak to their infants, because infants might understand much more than they show, and in this way their attention can be efficiently guided by their caregivers."
Lara Cowell

Controversial Speeches on Campus Are Not Violence - The Atlantic - 0 views

  • Free speech, properly understood, is not violence; it is a cure for violence. Freedom of speech is the eternally radical idea that individuals will try to settle their differences through debate and discussion, through evidence and attempts at persuasion, rather than through the coercive power of administrative authorities, or through violence. The authors of this article assert that while grappling with ideas and perspectives that run counter to one's own may feel unpleasant, it creates positive stress that strengthens one's resilience and allows one to reap the longer-term benefits of learning.
alishiraishi21

View of A Collaboration Between Music Therapy and Speech Pathology in a Paediatric Reha... - 0 views

  • This article shows the value of music therapy practice focused on communication skills alongside a speech pathologist within a pediatric rehabilitation setting. It presents the case of Sam, an eleven-year-old boy who sustained a severe garrotting injury. The article describes how an individual music therapy program helped him maximize his potential and motivation in achieving his communication goals, while speech pathology provided therapeutic intervention as he re-learned his speech skills.
alishiraishi21

What Happens When You Have A Speech Disorder? · Frontiers for Young Minds - 0 views

  • This article discusses how speech and language disorders can arise in a variety of ways. Sometimes people's brains have trouble figuring out how to move their mouths and tongues to make the sounds they intend, and the article notes that these children might have trouble learning other things as well, such as reading. In other cases, children have speech-language disorders because of cerebral palsy, meaning the muscles in their bodies do not work as well as they should and it is harder for the mouth to create the right sounds. Or children might be deaf and unable to hear that they are making the wrong sounds. The article lays out many different reasons why people might have speech and language disorders.