Chu et al., 2025 - Google Patents

DCPTalk: Speech-Driven 3D Face Animation With Personalized Facial Dynamic Coupling Properties

Document ID
3417299908631409385
Author
Chu Z
Guo K
Xing X
Liu P
Cai B
Xu X
Publication year
2025
Publication venue
IEEE Transactions on Multimedia

Snippet

Speech-driven 3D facial animation has emerged as a hot topic. During this process, movements in different facial regions are interdependent, influenced by the intricate interactions among facial muscles, and manifest personalized differences. The existing …
Continue reading at ieeexplore.ieee.org

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00221 Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K 9/00268 Feature extraction; Face representation
    • G06K 9/00281 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00221 Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K 9/00268 Feature extraction; Face representation
    • G06K 9/00275 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 23/00 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B 23/28 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes, for medicine
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/62 Methods or arrangements for recognition using electronic means

Similar Documents

Li et al. Write-a-speaker: Text-based emotional and rhythmic talking-head generation
Busso et al. Rigid head motion in expressive speech animation: Analysis and synthesis
Le et al. Live speech driven head-and-eye motion generators
Sadoughi et al. Speech-driven expressive talking lips with conditional sequential generative adversarial networks
CN112581569B (en) Adaptive emotion expression speaker facial animation generation method and electronic device
Ding et al. Laughter animation synthesis
Rebol et al. Passing a non-verbal turing test: Evaluating gesture animations generated from speech
Ding et al. Modeling multimodal behaviors from speech prosody
Fan et al. Joint audio-text model for expressive speech-driven 3d facial animation
CN119378647B (en) Training method, system and medium for generating 5D digital human based on AIGC
CN119516063B (en) A digital human driving and presentation system and method for enhanced emotion
Yi et al. Predicting personalized head movement from short video and speech signal
Chu et al. CorrTalk: Correlation between hierarchical speech and facial activity variances for 3D animation
Liu et al. Data-driven 3D neck modeling and animation
Li et al. A survey of computer facial animation techniques
Xu et al. Kmtalk: Speech-driven 3d facial animation with key motion embedding
Wu et al. ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE
Park et al. Df-3dface: One-to-many speech synchronized 3d face animation with diffusion
Čereković et al. Multimodal behavior realization for embodied conversational agents
Feng et al. Emospeaker: One-shot fine-grained emotion-controlled talking face generation
Ding et al. Audio-driven laughter behavior controller
Mascaró et al. Laughter and smiling facial expression modelling for the generation of virtual affective behavior
Medina et al. Phisanet: Phonetically informed speech animation network
Fares et al. TranSTYLer: Multimodal Behavioral Style Transfer for Facial and Body Gestures Generation