Multimodal Affect Recognition in Intelligent Tutoring Systems
-
leventmetu on 10 Nov 13
In human interaction, 55% of affective information is carried by the body, 38% by voice tone and volume, and only 7% by the words spoken [1]. Ekman [2] further suggests that non-verbal behaviours are the primary vehicles for expressing emotion. With the availability of computational power, and great advances in the fields of computer vision and speech recognition, it is now possible to create systems that detect facial expressions, gestures and body postures from video and audio feeds. Furthermore, systems that integrate different modalities can offer more powerful and much more pleasant computing experiences, as they embrace users' natural behaviour.
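One common way such systems integrate modalities is decision-level (late) fusion: each modality's classifier outputs a probability distribution over emotions, and the distributions are combined with per-modality weights. The sketch below is purely illustrative and not from the paper; the emotion labels, classifier outputs, and weights (loosely mirroring the 55/38/7 split above) are all made-up assumptions.

```python
# Hypothetical sketch of decision-level (late) fusion for multimodal
# affect recognition. Each modality classifier is assumed to output a
# probability distribution over the same set of emotion labels.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(predictions, weights):
    """Weighted average of per-modality probability distributions."""
    fused = [0.0] * len(EMOTIONS)
    total = sum(weights.values())
    for modality, probs in predictions.items():
        w = weights[modality] / total  # normalise weights to sum to 1
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused

# Illustrative classifier outputs for each modality (made-up numbers).
preds = {
    "body":  [0.6, 0.1, 0.1, 0.2],   # e.g. posture/gesture model
    "voice": [0.3, 0.3, 0.2, 0.2],   # e.g. prosody model
    "words": [0.1, 0.4, 0.3, 0.2],   # e.g. text sentiment model
}
# Weights loosely echoing the 55/38/7 figures cited above.
weights = {"body": 0.55, "voice": 0.38, "words": 0.07}

fused = fuse(preds, weights)
label = EMOTIONS[fused.index(max(fused))]  # most probable fused emotion
```

In practice the weights would be learned from data rather than fixed, and feature-level (early) fusion is a common alternative, but late fusion keeps each modality's recogniser independent, which matches the modular pipeline described above.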
-
leventmetu on 10 Nov 13
In the paper it says, "According to Wolcott, teachers rely on nonverbal means such as eye contact, facial expressions and body language to determine the cognitive states of students, which indicate the degree of success in the instructional transaction". I really wonder what your opinion is about this, and whether it would be successful to implement affect recognition (after voice recognition) in intelligent tutoring systems.