
We research and optimize audio datasets for other frontier voice AI labs

Who we are

We are a frontier voice AI lab

As a frontier research lab, we know what it takes to curate data at the scale required to build successful, interactive, and empathic multimodal models. Data scaling for voice is at an inflection point, and we're excited to help labs scale audio pre-training and post-training for speech-language models.

We provide the data and research tooling to help scale model capabilities.

EVERYTHING YOUR MODEL NEEDS

Capabilities

Teach your model to speak 50+ languages, generate voices from prompts, code-switch, adopt specific emotions, and more.

Request samples

Explore samples that align with your intended languages, use cases, and model goals.

License access

Scale up access to our large-scale off-the-shelf data, evaluation pipelines, and voice gym.

Iterate

Collaborate with our researchers to diagnose remaining areas of improvement for your model.

Recent Publications

Peer-reviewed insights

Frontiers in Psychology·May 2024

How emotion is experienced and expressed in multiple cultures: a large-scale experiment across North America, Europe, and Japan

Alan Cowen, Jeffrey Brooks, Gautam Prasad and 13 more

Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression.

iScience·Feb 2024

Deep learning reveals what facial expressions mean to people in different cultures

Jeffrey Brooks, Lauren Kim, Michael Opara and 10 more

Cross-cultural studies of the meaning of facial expressions have largely focused on judgments of small sets of stereotypical images by small numbers of people. Here, we used large-scale data collection and machine learning to map what facial expressions convey in six countries.

Current Directions in Psychological Science·Feb 2023

Semantic Space Theory: Data-Driven Insights Into Basic Emotions

Dacher Keltner, Jeffrey Brooks, and Alan Cowen

Here we present semantic space theory and the data-driven methods it entails. Across the largest studies to date of emotion-related experience, expression, and physiology, we find that emotion is high dimensional, defined by blends of upward of 20 distinct kinds of emotions, and not reducible to low-dimensional structures and conceptual processes as assumed by constructivist accounts. Specific emotions are not separated by sharp boundaries, contrary to basic emotion theory, and include states that often blend. Emotion concepts such as “anger” are primary in the unfolding of emotional experience and emotion recognition, more so than core affect processes of valence and arousal. We conclude by outlining studies showing how these data-driven discoveries are a basis of machine-learning models that are serving larger-scale, more diverse studies of naturalistic emotional behavior.


Why Our Datasets

World-class data for pre-training and fine-tuning your emotion AI models, backed by years of scientific research.


Ethically Sourced

All data collected with informed consent and rigorous privacy protections.

Globally Diverse

Representative samples across cultures, ages, genders, and demographics.

Expert Annotated

Labeled by trained researchers using validated scientific frameworks.

Research Ready

Clean, structured formats optimized for modern ML pipelines.

Research Areas

Where Hume enables research

From fundamental affective computing to applied behavioral research, our tools power studies across the full spectrum of emotion science.

Affective Computing

Study how AI systems can recognize, interpret, and respond to human emotions across modalities.

Human-AI Interaction

Research the dynamics of emotional exchange between humans and AI systems.

Psychology & Behavior

Use emotion recognition to study human behavior, mental health, and psychological phenomena.

Speech & Language

Analyze prosodic features, sentiment, and emotional expression in human communication.

Multimodal Learning

Explore how emotion manifests simultaneously across face, voice, and language.

Ethics & AI Safety

Study the ethical implications of emotionally aware AI systems and develop guidelines for their responsible use.

From the Blog

Latest research updates

