Approximately 12 hours of scripted and improvised dyadic dialogue annotated with emotion labels.
IEMOCAP (Interactive Emotional Dyadic Motion Capture) is a multimodal dataset of scripted and improvised dyadic sessions performed by actors and designed to capture a wide range of human emotions. Developed by the Signal Analysis and Interpretation Laboratory (SAIL) at the University of Southern California, the dataset includes audio recordings, transcriptions, motion capture data, and per-utterance emotion annotations. It focuses on expressive speech and interpersonal interaction, providing rich multimodal signals for emotion analysis.
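A minimal sketch of how those per-utterance annotations might be read, assuming the tab-separated line layout commonly found in the release's EmoEvaluation files (each labelled line holds a time span, an utterance ID, a categorical emotion code, and valence/activation/dominance scores). The file path and regex here are illustrative, not part of any official loader.

```python
import re
from pathlib import Path

# Assumed EmoEvaluation line format, e.g.:
# [6.2901 - 8.2357]	Ses01F_impro01_F000	neu	[2.5000, 2.5000, 2.5000]
LINE_RE = re.compile(
    r"\[(?P<start>[\d.]+) - (?P<end>[\d.]+)\]\t"
    r"(?P<utt_id>\S+)\t(?P<emotion>\w+)\t"
    r"\[(?P<val>[\d.]+), (?P<act>[\d.]+), (?P<dom>[\d.]+)\]"
)

def parse_emo_evaluation(path):
    """Yield one dict per labelled utterance in an EmoEvaluation file."""
    for line in Path(path).read_text().splitlines():
        m = LINE_RE.match(line)
        if m:
            yield {
                "utt_id": m["utt_id"],
                "start": float(m["start"]),
                "end": float(m["end"]),
                "emotion": m["emotion"],  # categorical code, e.g. 'neu', 'ang', 'sad'
                "vad": (float(m["val"]), float(m["act"]), float(m["dom"])),
            }

# Hypothetical path; the release nests label files under Session1..Session5.
labels = list(parse_emo_evaluation("Session1/dialog/EmoEvaluation/Ses01F_impro01.txt"))
```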
IEMOCAP is widely used for research in speech emotion recognition, affective computing, and multimodal interaction modeling. It supports the development of systems that detect emotional states from speech, text, and non-verbal cues. The dataset is valuable for studying human–computer interaction, conversational agents, and empathetic AI systems.
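To make the speech-emotion-recognition use case concrete, here is a small baseline sketch: each utterance is summarized as a fixed-length MFCC statistics vector, which can then be paired with the labels parsed above and fed to any off-the-shelf classifier. The wav path is hypothetical and the pooling choice is one common baseline, not a prescribed recipe.

```python
import librosa
import numpy as np

def utterance_features(wav_path, sr=16000, n_mfcc=40):
    """Load one utterance and summarize it as a fixed-length MFCC vector,
    a common baseline input for speech emotion classifiers."""
    audio, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    # Mean/std pooling over time yields one fixed-size vector per utterance.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical path; pair each feature vector with its emotion label
# to train a simple classifier (SVM, MLP, etc.).
x = utterance_features(
    "Session1/sentences/wav/Ses01F_impro01/Ses01F_impro01_F000.wav"
)
```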