Please use this identifier to cite or link to this item:
metadata.conference.dc.title: Improved Speaker-Independent Emotion Recognition from Speech Using Two-Stage Feature Reduction
metadata.conference.dc.contributor.*: Hasrul Mohd Nazid
Hariharan Muthusamy
Vikneswaran Vijean
Sazali Yaacob
metadata.conference.dc.subject: Emotional speech
cepstral features
feature reduction
emotion recognition
2015
metadata.conference.dc.publisher: Universiti Utara Malaysia Press
metadata.conference.dc.identifier.citation: Hasrul Mohd Nazid, Hariharan Muthusamy, Vikneswaran Vijean, Sazali Yaacob. 2015. “Improved Speaker-Independent Emotion Recognition from Speech Using Two-Stage Feature Reduction.” Journal of ICT 14: 57–76.
metadata.conference.dc.description.abstract: In recent years, researchers have been focusing on improving the accuracy of speech emotion recognition. Generally, high recognition accuracies have been obtained for two-class emotion recognition, but multi-class emotion recognition remains a challenging task. The main aim of this work is to propose a two-stage feature reduction using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) for improving the accuracy of the speech emotion recognition (ER) system. Short-term speech features were extracted from the emotional speech signals. Experiments were carried out using four different supervised classifiers with two different emotional speech databases. From the experimental results, it can be inferred that the proposed method provides better accuracies of 87.48% for the speaker-dependent (SD) and gender-dependent (GD) ER experiment, 85.15% for the speaker-independent (SI) ER experiment, and 87.09% for the gender-independent (GI) experiment.
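The two-stage feature reduction described in the abstract (PCA followed by LDA, feeding a supervised classifier) can be sketched as below. This is a minimal illustration only: the dataset, component counts, and classifier here are assumptions for demonstration, not the authors' actual speech features or experimental setup.

```python
# Sketch of two-stage feature reduction: unsupervised PCA, then supervised LDA,
# then classification. A built-in toy dataset stands in for speech features.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

pipeline = Pipeline([
    ("pca", PCA(n_components=3)),                          # stage 1: decorrelate, drop low-variance dims
    ("lda", LinearDiscriminantAnalysis(n_components=2)),   # stage 2: maximize class separability
    ("clf", KNeighborsClassifier(n_neighbors=5)),          # one of several possible supervised classifiers
])

scores = cross_val_score(pipeline, X, y, cv=5)
print("mean CV accuracy:", round(scores.mean(), 3))
```

Applying PCA before LDA is a common way to avoid LDA's singular within-class scatter matrix when the feature dimension is high relative to the number of samples.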
metadata.conference.dc.description: This article is indexed by SCOPUS. Sazali Yaacob (UniKL MSI)
metadata.conference.dc.identifier.issn: 1675-414X
Appears in Collections:Journal Articles

Items in UniKL IR are protected by copyright, with all rights reserved, unless otherwise indicated.