Authors
Thurid Vogt, Elisabeth André
Publication date
2006
Description
Feature extraction is still a disputed issue for the recognition of emotions from speech. Differences in features between male and female speakers are a well-known problem, and it is established that gender-dependent emotion recognizers perform better than gender-independent ones. We propose a way to improve the discriminative quality of gender-dependent features: the emotion recognition system is preceded by an automatic gender detector that selects which of two gender-dependent emotion classifiers is used to classify an utterance. This framework was tested on two different databases, one with emotional speech produced by actors and one with spontaneous emotional speech from a Wizard-of-Oz setting. Gender detection achieved an accuracy of about 90%, and the combined gender and emotion recognition system improved the overall recognition rate of a gender-independent emotion recognition system by 2–4%.
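The two-stage architecture described in the abstract (gender detection gating two gender-dependent emotion classifiers) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the feature set, the classifier choice (scikit-learn SVMs here), and the class names are all assumptions.

```python
# Sketch of a gender-gated emotion recognition cascade, assuming precomputed
# per-utterance feature vectors. Classifier types and labels are illustrative.
import numpy as np
from sklearn.svm import SVC


class GenderGatedEmotionRecognizer:
    """Detect the speaker's gender first, then route the utterance to the
    emotion classifier trained for that gender."""

    def __init__(self):
        self.gender_clf = SVC()  # assumed: binary male/female classifier
        self.emotion_clf = {
            "male": SVC(),    # assumed: emotion classifier trained on male speech
            "female": SVC(),  # assumed: emotion classifier trained on female speech
        }

    def fit(self, features, genders, emotions):
        features = np.asarray(features)
        genders = np.asarray(genders)
        emotions = np.asarray(emotions)
        # Train the gender detector on all utterances.
        self.gender_clf.fit(features, genders)
        # Train each emotion classifier only on utterances of its gender.
        for gender, clf in self.emotion_clf.items():
            mask = genders == gender
            clf.fit(features[mask], emotions[mask])
        return self

    def predict(self, features):
        features = np.asarray(features)
        # The *detected* gender (not ground truth) decides which classifier is used.
        detected = self.gender_clf.predict(features)
        return np.array([
            self.emotion_clf[gender].predict(x.reshape(1, -1))[0]
            for gender, x in zip(detected, features)
        ])
```

In this arrangement a gender detection error also sends the utterance to the mismatched emotion classifier, which is why the reported ~90% gender detection accuracy bounds how much the cascade can gain over a gender-independent recognizer.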
Total citations
[Citations per year, 2006–2024]