Xue et al., 2024 - Google Patents
Emotional Experience Design Strategy for In-Vehicle Intelligent Voice Assistant
- Document ID
- 9610733019738411612
- Authors
- Xue L
- Zicen L
- et al.
- Publication year
- 2024
- Publication venue
- The Frontiers of Society, Science and Technology
Snippet
In the context of the rapid development of the intelligent automotive industry, the continuous exploration of new technologies and experiences in automotive intelligent cockpit products, and the increasing expectations of users for automotive intelligence, this paper analyzes the …
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1822—Parsing for meaning understanding
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Taking into account non-speech characteristics
- G10L2015/228—Taking into account non-speech characteristics of application context
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
- G10L15/265—Speech recognisers specially adapted for particular applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/033—Voice editing, e.g. manipulating the voice of the synthesiser
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/065—Adaptation
- G10L15/07—Adaptation to the speaker
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/20—Handling natural language data
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/027—Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/30—Information retrieval; Database structures therefor; File system structures therefor
- G06F17/30943—Information retrieval; Database structures therefor; File system structures therefor; Details of database functions independent of the retrieved data type
- G06F17/30964—Querying
- G06F17/30967—Query formulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for programme control, e.g. control unit
- G06F9/06—Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
- G06F9/44—Arrangements for executing specific programmes
- G06F9/4443—Execution mechanisms for user interfaces
Similar Documents
Publication | Title
---|---
Marge et al. | Spoken language interaction with robots: Recommendations for future research
JP7691523B2 (en) | Using large language models in generating automated assistant responses
US10839583B2 | Emotive advisory system and method
Weng et al. | Conversational in-vehicle dialog systems: The past, present, and future
Wandke | Assistance in human–machine interaction: a conceptual framework and a proposal for a taxonomy
Berg | Modelling of natural dialogues in the context of speech-based information and control systems
CN111145721A (en) | Personalized prompt language generation method, device and equipment
Alvarez et al. | Designing driver-centric natural voice user interfaces
Park et al. | Effects of autonomous driving context and anthropomorphism of in-vehicle voice agents on intimacy, trust, and intention to use
Huang et al. | A study on the application of voice interaction in automotive human machine interface experience design
Zhou et al. | Research on personality traits of in-vehicle intelligent voice assistants to enhance driving experience
CN119600999A (en) | Method and apparatus for classifying utterance intent taking into account context surrounding the vehicle and driver
Xue et al. | Emotional Experience Design Strategy for In-Vehicle Intelligent Voice Assistant
Molina-Markham et al. | “You can do it baby”: Non-task talk with an in-car speech enabled system
Neßelrath et al. | SiAM-dp: A platform for the model-based development of context-aware multimodal dialogue applications
Araki et al. | Spoken dialogue system for learning Braille
Gunarto | Applications of AI-empowered electric vehicles for voice recognition in Asian and Austronesian languages
Lin | Towards Inclusive Voice User Interfaces: A Systematic Review of Voice Technology Usability for Users with Communication Disabilities
Wang et al. | Multimodal Interaction Design in Intelligent Vehicles
Chen et al. | User Interface Design
Van Over et al. | Communication in vehicles: Cultural variability in speech systems
Marriott et al. | gUI: Specifying complete user interaction
Henkens et al. | The sound of progress: AI voice agents in service
CN119180335A (en) | Intelligent network connection automobile digital virtual man dialogue method device, vehicle and medium
Zeng et al. | Robot-like In-vehicle Agent for a Level 3 Automated Vehicle with Emotions