Indexing Metadata

1 Title of the Article Gender-Driven Emotion Recognition System Using Speech Signals For Human Computer Intelligent Interaction
2 Author's name Mekhala Sridevi Sameera, Computer Science and Engineering, Dhanekula Institute of Engineering & Technology, Vijayawada, India
3 Author's name A Satish Kumar, Kotte Sandeep
4 Subject Computer Science and Engineering
5 Keyword(s) Classification, Emotion Recognition, Feature Extraction, Human Computer Interaction
6 Abstract

This paper addresses a particular and very important developing area: the remote monitoring of elderly or ill people. Due to the growing aged population, Human-Computer Intelligent Interaction (HCII) systems that help people live independently are regarded as useful tools. In this context, recognizing a person's emotional state and giving suitable feedback may play a crucial role. The purpose of the speech emotion recognition system is to automatically classify a speaker's utterances into seven emotional states: anger, boredom, disgust, fear, happiness, sadness, and the neutral state. Emotions are classified separately for male and female speakers, since male and female voices have altogether different ranges. This improves interaction between humans and computers, thus allowing human-computer intelligent interaction. The system is composed of two subsystems: 1) gender recognition (GR) and 2) emotion recognition (ER). As the reported numerical results show, it distinguishes a single emotion from all other possible ones. A speech-based emotion recognition system consists of four principal parts: feature extraction, feature selection, database, and classification. Current research focuses on finding powerful combinations of classifiers that increase classification efficiency in real-life speech emotion recognition applications. From these acoustic signals, this project calculates pitch, short-time energy, zero-crossing rate, and Mel-frequency cepstral coefficients, and correlates them with the speaker's emotions. We also define these features and the feature extraction methods. The paper demonstrates how one can distinguish emotions based on these features (or combinations of features) by testing them on the Berlin emotion database.
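The abstract names four frame-level acoustic features: pitch, short-time energy, zero-crossing rate, and Mel-frequency cepstral coefficients. The sketch below illustrates how such features could be extracted per utterance; it is a minimal illustration, not the authors' implementation, and it assumes Python with librosa and NumPy, 16 kHz WAV utterances (e.g. from the Berlin emotion database), 25 ms frames with a 10 ms hop, and simple summary statistics, none of which are prescribed by the paper.

# Minimal sketch, assuming librosa/NumPy; frame sizes, pitch bounds and
# the summary statistics are illustrative assumptions, not the paper's settings.
import numpy as np
import librosa

def extract_features(wav_path, sr=16000, frame_length=400, hop_length=160):
    """Return one fixed-length feature vector per utterance."""
    y, _ = librosa.load(wav_path, sr=sr)

    # Pitch (fundamental frequency) per frame via the YIN estimator.
    # Typical male and female pitch ranges differ substantially, which is
    # the cue a gender-recognition stage can exploit.
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr,
                     frame_length=4 * frame_length, hop_length=hop_length)

    # Short-time energy: sum of squared samples within each frame.
    frames = librosa.util.frame(y, frame_length=frame_length,
                                hop_length=hop_length)
    energy = np.sum(frames ** 2, axis=0)

    # Zero-crossing rate per frame.
    zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame_length,
                                             hop_length=hop_length)[0]

    # 13 Mel-frequency cepstral coefficients per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=frame_length, hop_length=hop_length)

    # Collapse the frame-level trajectories into simple statistics so each
    # utterance becomes a single vector a classifier can consume.
    return np.concatenate([
        [f0.mean(), f0.std()],
        [energy.mean(), energy.std()],
        [zcr.mean(), zcr.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])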

7 Publisher Innovative Research Publication
8 Journal Name; vol., no. International Journal of Innovative Research in Computer Science & Technology (IJIRCST); Volume-3 Issue-3
9 Publication Date May 2015
10 Type Peer-reviewed Article
11 Format PDF
12 Uniform Resource Identifier https://ijircst.org/view_abstract.php?title=Gender-Driven-Emotion-Recognition-System-Using-Speech-Signals-For-Human-Computer-Intelligent-Interaction&year=2015&vol=3&primary=QVJULTE5NA==
13 Digital Object Identifier (DOI)
14 Language English
15 Page No 37-39

Indexed by

Crossref