Emotion Recognition in Human–Robot Interaction
Multimodal Fusion, Deep Learning, and Ethical Considerations
Publication Info
- Author: Yuan Gao
- Venue: DAML / SCITEPRESS
- Year: 2024
- Link: https://www.scitepress.org/Link.aspx?doi=10.5220/0013526100004619
Research Motivation
As robots increasingly interact with humans in social, service, and healthcare contexts, the ability to recognize and respond to human emotions has become essential. This project investigates how emotion recognition can enhance the naturalness, effectiveness, and empathy of human–robot interaction.
Research Scope
This work surveys and analyzes emotion recognition techniques in HRI, including perception-based methods, EEG-based physiological approaches, and multimodal fusion systems. The study focuses on real-time applicability and interaction robustness.
Methods Overview
- Review of facial-expression-based and speech-based emotion recognition methods.
- Analysis of EEG-based emotion detection using deep learning models.
- Examination of multimodal fusion techniques combining visual, auditory, and physiological signals.
- Discussion of CNN-, LSTM-, and Transformer-based architectures (a minimal fusion sketch combining all three follows this list).
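To make the fusion idea concrete, below is a minimal, illustrative PyTorch sketch of a late-fusion architecture in the spirit of the surveyed systems, not an implementation of any particular one. The class name `MultimodalEmotionNet`, the feature dimensions (2048-d visual embeddings, 40-d MFCC speech frames, 32-channel EEG windows), and the seven-class emotion set are all assumptions chosen for the example.

```python
# Illustrative sketch only: a CNN-style EEG branch, an LSTM speech branch,
# and a Transformer fusing the per-modality embeddings. All sizes are
# hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class MultimodalEmotionNet(nn.Module):
    def __init__(self, num_emotions: int = 7):
        super().__init__()
        # Visual branch: project precomputed CNN features (e.g., 2048-d).
        self.visual = nn.Sequential(nn.Linear(2048, 128), nn.ReLU())
        # Speech branch: LSTM over a sequence of 40-d MFCC frames.
        self.speech = nn.LSTM(input_size=40, hidden_size=128, batch_first=True)
        # EEG branch: 1-D convolutions over (channels x time) windows.
        self.eeg = nn.Sequential(
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Fusion: Transformer encoder over the three modality tokens.
        layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(128, num_emotions)

    def forward(self, vis, mfcc, eeg):
        v = self.visual(vis)                    # (B, 128)
        _, (h, _) = self.speech(mfcc)           # h: (num_layers, B, 128)
        s = h[-1]                               # last layer's hidden state
        e = self.eeg(eeg)                       # (B, 128)
        tokens = torch.stack([v, s, e], dim=1)  # (B, 3 modalities, 128)
        fused = self.fusion(tokens).mean(dim=1) # pool the modality tokens
        return self.classifier(fused)           # emotion logits

# Toy usage with random tensors (batch of 2, 50 speech frames, 256 EEG samples).
model = MultimodalEmotionNet()
logits = model(torch.randn(2, 2048), torch.randn(2, 50, 40), torch.randn(2, 32, 256))
print(logits.shape)  # torch.Size([2, 7])
```

Late fusion of this kind keeps each modality's encoder independent, which helps when one signal (e.g., EEG) is missing or noisy at interaction time; early fusion, by contrast, concatenates raw or low-level features before encoding and can capture finer cross-modal correlations at the cost of robustness.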
Key Contributions
- Synthesized recent advances in multimodal emotion recognition for HRI.
- Compared strengths and limitations of perception-based, EEG-based, and fusion approaches.
- Highlighted the role of deep learning in improving real-time emotion recognition performance.
Ethical Considerations
The study addresses ethical challenges including data privacy, user consent, emotional manipulation risks, and bias in emotion-aware AI systems. It emphasizes the importance of responsible deployment and regulatory frameworks in social robotics.
Publication
Emotion Recognition in Human–Robot Interaction: Multimodal Fusion, Deep Learning, and Ethical Considerations.
In Proceedings of DAML 2024, SCITEPRESS.