RAVDESS: The Emotional Speech Database Behind a Generation of Affective Computing

How 24 actors performing scripted emotions in a Toronto studio became the backbone of affective computing research worldwide.

Livingstone, Steven R. & Russo, Frank A. · 2018 · 1,189,323 downloads · View on Zenodo →
Emotion categories: neutral, calm, happy, sad, angry, fearful, disgust, surprised (with song variants).

What happens when you try to teach a machine to recognize sadness?

In 2018, two researchers at Ryerson University solved a problem that had been quietly blocking an entire field: there was no reliable, standardized way to train AI models to recognize human emotion in voice. So they built one. Twenty-four professional actors. Eight emotions. Hundreds of scripted recordings each. The result was the Ryerson Audio-Visual Database of Emotional Speech and Song, RAVDESS, and it changed everything.

The dataset's genius is in its standardization. Every actor performed every emotion at two intensity levels, in both speech and song. This systematic structure meant researchers could finally make apples-to-apples comparisons between models, institutions, and approaches. Before RAVDESS, emotion AI research was fragmented. After it, a common benchmark existed.
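That systematic structure is encoded directly in each filename, which consists of seven two-digit codes identifying modality, vocal channel, emotion, intensity, statement, repetition, and actor, per the dataset's published naming convention. A minimal sketch of a parser (the helper name and dict layout are ours, not part of the dataset):

```python
# Decode a RAVDESS filename, e.g. "03-01-06-01-02-01-12.wav", into labeled fields.
# Field order (per the dataset's naming convention):
# modality - vocal channel - emotion - intensity - statement - repetition - actor.

EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised",
}

def parse_ravdess_filename(filename: str) -> dict:
    """Split a RAVDESS filename into its seven identifier fields."""
    stem = filename.rsplit(".", 1)[0]            # drop the extension
    modality, channel, emotion, intensity, statement, repetition, actor = stem.split("-")
    return {
        "modality": {"01": "full-AV", "02": "video-only", "03": "audio-only"}[modality],
        "channel": {"01": "speech", "02": "song"}[channel],
        "emotion": EMOTIONS[emotion],
        "intensity": {"01": "normal", "02": "strong"}[intensity],
        "statement": statement,                  # one of two fixed spoken sentences
        "repetition": int(repetition),
        "actor": int(actor),                     # 01-24; odd = male, even = female
    }

info = parse_ravdess_filename("03-01-06-01-02-01-12.wav")
print(info["emotion"], info["channel"], info["actor"])  # fearful speech 12
```

Because every label lives in the filename, a full train/test split by emotion, intensity, or actor requires no separate annotation files, which is part of why cross-study comparisons became so easy.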

Over 1.2 million downloads later, RAVDESS is cited in thousands of papers spanning healthcare, human-computer interaction, automotive safety, and mental health monitoring. The 24 actors who performed in a Toronto studio have, unknowingly, shaped how machines across the world are learning to understand the texture of human feeling.

[Chart: Recordings per emotion category — distribution of audio files across the 8 primary emotion classes]

[Chart: Annual downloads growth — RAVDESS download trajectory showing explosive adoption in AI research]

- 7,356 total recordings
- 24 professional actors
- 8 emotion categories
- Published 2018, still dominant — +340% downloads since 2020
Twenty-four actors performing scripted sadness in Toronto are now the reason your phone might one day notice you sound stressed.
1. Used in thousands of peer-reviewed papers across medicine, HCI, and automotive safety
2. Equal gender split — 12 female, 12 male actors — making it one of the most balanced emotion datasets
3. Available in audio-only and audio-visual formats, supporting both speech and multimodal AI research
🔬 Scientific Impact

RAVDESS established the benchmark for affective computing research, enabling reproducible comparison across thousands of emotion recognition studies worldwide.

🏛️ Policy Relevance

As emotion AI enters healthcare and automotive systems, standardized training data like RAVDESS becomes central to regulatory conversations about AI reliability and bias.

🌍 Broader Context

The dataset's CC BY-NC-SA license has kept it in academic hands, but commercial applications of emotion AI trained on similar data are already embedded in mental health apps, call centers, and driver monitoring systems.
