

Using speech acoustics to drive facial motion

Yehia et al., 1999

Document ID
3776162231970022101
Author
Yehia H
Kuratate T
Vatikiotis-Bateson E
Publication year
1999
Publication venue
Proc. of the 14th International Congress of Phonetic Sciences


Snippet

This paper describes and evaluates a method to estimate facial motion during speech from the speech acoustics. It is a statistical method based on simultaneous measurements of facial motion and speech acoustics. Experiments were carried out for one American English …
Continue reading at www.internationalphoneticassociation.org (PDF)
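
As context for the snippet above: the described approach learns a statistical mapping from simultaneously recorded speech acoustics and facial motion, then uses it to estimate facial motion from new acoustics. The sketch below is a minimal illustration of that idea, not the authors' implementation; the feature dimensions, marker count, synthetic data, and the purely linear least-squares estimator are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for simultaneous measurements: T frames of acoustic
# features (e.g., cepstral or line-spectrum-pair coefficients) paired with
# 3-D positions of M facial markers. Dimensions are illustrative only.
T, n_feat, n_markers = 500, 16, 12
X = rng.normal(size=(T, n_feat))                    # acoustic features
W_true = rng.normal(size=(n_feat, n_markers * 3))   # hidden generating map
Y = X @ W_true + 0.1 * rng.normal(size=(T, n_markers * 3))  # facial motion

# Fit W minimizing ||Y - X W||^2 (ordinary least squares).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Drive facial motion from acoustics: predicted marker trajectories.
Y_hat = X @ W
rmse = float(np.sqrt(np.mean((Y_hat - Y) ** 2)))
print(f"per-coordinate RMSE: {rmse:.3f}")

On real data, the map would be trained on measured acoustic and facial recordings and evaluated on held-out utterances; the synthetic arrays here only keep the example self-contained.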

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/66: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for extracting parameters related to health condition
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification
    • G10L17/26: Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices

Similar Documents

Yehia et al. Linking facial animation, head motion and speech acoustics
Yehia et al. Using speech acoustics to drive facial motion
Yehia et al. Quantitative association of vocal-tract and facial behavior
Kuratate et al. Audio-visual synthesis of talking faces from speech production correlates.
CN101887728B Method for multi-sensory speech enhancement
Kangas On the analysis of pattern sequences by self-organizing maps
JP2003255993A Speech recognition system, speech recognition method, speech recognition program, speech synthesis system, speech synthesis method, speech synthesis program
DE4317372A1 Acoustic and visual input speech recognition system - monitors lip and mouth movements by video camera to provide motion vector input to neural network based speech identification unit.
CN118398033B A speech-based emotion recognition method, system, device and storage medium
CN118800277B Digital human interaction system and method based on big data information
Yehia et al. Facial animation and head motion driven by speech acoustics
CN120086807A Adaptive teaching strategy adjustment method based on sentiment analysis, computer device
Pitermann et al. An inverse dynamics approach to face animation
Monaci et al. Learning bimodal structure in audio–visual data
Rani et al. Speech recognition using neural network
Lee et al. Articulatory Feature Prediction from Surface EMG during Speech Production
Kagalkar et al. Mobile Application Based Translation of Sign Language to Text Description in Kannada Language.
Brooke Talking heads and speech recognisers that can see: The computer processing of visual speech signals
Sharma et al. Gesture recognition system
Csapó Extending text-to-speech synthesis with articulatory movement prediction using ultrasound tongue imaging
US20070154033A1 Audio source separation based on flexible pre-trained probabilistic source models
Vatikiotis-Bateson et al. Speaking mode variability in multimodal speech production
JPH02232783A (en) Syllable recognizing device by brain wave topography
Barbosa et al. Temporal characterization of auditory-visual coupling in speech
Bergsland et al. Examining the correlation between dance and electroacoustic music phrases: a pilot study