Automatic Recognition of Emotions in Speech: Models and Methods

Methods from acoustic speech recognition and natural language processing can, in modified form, be used to detect and analyse additional affective information conveyed by the acoustic signal: emotional content, intentions, and involvement in a situation. We describe the technical steps for software-supported affect annotation and automatic emotion recognition, report on the data used to evaluate these methods, and outline possible applications in companion systems and in dialogue control.
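To make the kind of pipeline meant here concrete, the following is a minimal sketch of acoustic emotion recognition: extract utterance-level acoustic features and train a standard classifier on annotated data. The libraries (librosa, scikit-learn), the MFCC-based feature set, and the file names are illustrative assumptions, not the setup described in this work.

```python
# Illustrative sketch only: feature choice, classifier, and corpus are assumptions.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def acoustic_features(path):
    """Summarise an utterance by the mean and standard deviation of its MFCCs."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical annotated corpus: audio files with categorical emotion labels.
train_files = ["utt01.wav", "utt02.wav", "utt03.wav"]
train_labels = ["angry", "neutral", "happy"]

X = np.vstack([acoustic_features(f) for f in train_files])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, train_labels)

# Predict the emotion category of an unseen utterance.
print(clf.predict([acoustic_features("utt04.wav")]))
```

In practice the annotation step, richer feature sets, and dialogue-level context would replace this toy corpus and feature summary; the sketch only fixes the overall structure of feature extraction followed by supervised classification.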