
CN1701287A - Interactive device - Google Patents

Interactive device

Info

Publication number
CN1701287A
Authority
CN
China
Prior art keywords
mentioned, user, unit, action model, suggestion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA038252929A
Other languages
Chinese (zh)
Inventor
山本真一
山本浩司
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd
Publication of CN1701287A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/60: ICT specially adapted for the handling or processing of patient-specific data, e.g. for electronic patient records
    • G16H50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/30: ICT specially adapted for calculating health indices; for individual health risk assessment
    • G16H70/00: ICT specially adapted for the handling or processing of medical references
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Toys (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

Provided is an interactive apparatus 1 that can decide on an action pattern according to the health state of a user without requiring a biometric sensor to be worn on the body. The interactive apparatus 1 comprises: detection means 50b for detecting the health state of a user; deciding means 50c for deciding on an action pattern according to the user's health state; execution instructing means 50g for instructing execution of the action pattern; offering means 50e for offering the action pattern to the user by voice before execution is instructed; and determination means 50f for determining whether the user's answer to the offered action pattern accepts it. When the user's answer is determined to accept the offered action pattern, the execution instructing means 50g instructs execution of that action pattern.

Description

Interactive device

Technical Field

The present invention relates to an interactive device capable of conversing with a user.

Background Art

A known audio apparatus monitors an occupant's living-state information and gradually adjusts the occupant's preferred audio signal to a level suited to the occupant's current living state and health condition before playing it (see, for example, Patent Document 1). This apparatus uses sensors installed in the room to grasp the occupant's living state, and monitors an identification signal and living-state information transmitted by a portable transceiver (including a biometric sensor) worn by the occupant; it then gradually adjusts the occupant's preferred audio signal to a level suited to the occupant's current living state and health condition and plays it.

Patent Document 1: Japanese Unexamined Patent Application Publication No. H11-221196

However, the prior art described in Patent Document 1 has the following problems: to acquire biometric information, the occupant must wear a portable transceiver, which is cumbersome and highly inconvenient for the occupant; moreover, the occupant's daily life is constantly monitored by the sensors installed in the room, which is unpleasant.

An object of the present invention is to provide an interactive device that can decide on an action pattern according to the user's health state without requiring a biometric sensor to be attached to the body.

Summary of the Invention

The interactive device of the present invention comprises: a grasping unit for grasping the health state of a user; a deciding unit for deciding on an action pattern suited to the health state of the user grasped by the grasping unit; an execution instructing unit for instructing execution of the action pattern decided on by the deciding unit; a suggesting unit for suggesting the action pattern to the user by voice before execution of the action pattern decided on by the deciding unit is instructed; and a determining unit for determining, from the user's answer to the suggested action pattern, whether the user has accepted the suggested action pattern. When the user's answer is determined to be an answer accepting the suggested action pattern, the execution instructing unit instructs execution of the suggested action pattern, whereby the above object is achieved.

The grasping unit may grasp the user's health state from the user's verbal expressions.

The grasping unit may grasp the user's health state from keywords spoken by the user.

The device may further comprise a suggestion-necessity determining unit for determining, before execution of the action pattern decided on by the deciding unit is instructed, whether the action pattern needs to be suggested to the user. When it is determined that the action pattern needs to be suggested before execution is instructed, the suggesting unit suggests the action pattern to the user by voice.

The suggestion-necessity determining unit may determine whether a suggestion is necessary from a flag value, assigned to the action pattern in advance, that indicates whether a suggestion is necessary.

The suggestion-necessity determining unit may determine whether a suggestion is necessary from the time distribution of the number of times the action pattern has been executed.

The deciding unit may decide on one of a plurality of action patterns, each assigned a priority, as the action pattern suited to the user's health state, and adjust the priority assigned to that action pattern according to whether the user accepts it.

The device may further comprise a storage unit for storing action patterns suited to the user's health state; the deciding unit decides on the action pattern using the action patterns stored in the storage unit.

The action pattern suggested to the user by the suggesting unit may include selection of content to be played on a playback device.

The content may include audio data, video data, and lighting control data, and the playback device changes at least one of the light quantity and light color of a lighting apparatus according to the lighting control data.

The interactive device may have at least one of an agent function and a locomotion function.

The user's health state may represent at least one of the user's emotion and the user's physical condition.

The interactive device of the present invention comprises: a voice input unit for converting a voice uttered by a user into a voice signal; a voice recognition unit for recognizing the words spoken by the user based on the voice signal output by the voice input unit; a dialogue database in which words the user may speak are registered in advance, together with the correspondence between the registered words and the user's health states; a grasping unit that collates the words recognized by the voice recognition unit with the words registered in the dialogue database and identifies the user's health state from the collation result, thereby grasping the user's health state; a deciding unit that decides on an action pattern suited to the user's health state grasped by the grasping unit, based on an action pattern table storing the correspondence between the user's health states and action patterns of the interactive device; an execution instructing unit for instructing execution of the action pattern decided on by the deciding unit; a suggesting unit that, before execution of the action pattern decided on by the deciding unit is instructed, synthesizes a suggestion sentence based on the outputs of the grasping unit and the deciding unit and suggests the action pattern to the user by voice; and a determining unit that determines, from the user's answer to the suggested action pattern, whether the user has accepted it. When the user's answer is determined to be an answer accepting the suggested action pattern, the execution instructing unit instructs execution of the suggested action pattern, whereby the above object is achieved.

The device may further comprise: an accepting unit for accepting an action pattern counter-proposed by the user in response to the suggested action pattern; a unit for determining whether the interactive device can execute the counter-proposed action pattern; and a unit for updating the correspondence, stored in the action pattern table, between the user's health states and the action patterns of the interactive device when it is determined that the interactive device can execute the counter-proposed action pattern.

Brief Description of the Drawings

FIG. 1 shows the appearance of a robot 1 as an example of the interactive device of the present invention.

FIG. 2 shows an example of the internal structure of the robot 1.

FIG. 3 shows an example of the relationship, stored in the dialogue database 140, between keywords spoken by the user and the user's health states.

FIG. 4 shows an example of the relationship, stored in the information database 160, between the user's health states and the action patterns of the robot 1.

FIG. 5 is a flowchart showing an example of the procedure by which the robot 1 grasps the user's health state and instructs execution of an action pattern suited to that state.

FIG. 6 shows an example of the structure of a playback device 2100 capable of playing audio data and/or video data in synchronization with lighting control data.

FIG. 7 shows an example of the internal structure of the voice recognition unit 40.

FIG. 8a shows an example of the internal structure of the processing unit 50 shown in FIG. 2.

FIG. 8b shows another example of the internal structure of the processing unit 50 shown in FIG. 2.

FIG. 8c shows yet another example of the internal structure of the processing unit 50 shown in FIG. 2.

FIG. 9 is a diagram explaining an example of how the suggesting unit 50e constructs a suggestion sentence.

FIG. 10 shows an example of the internal structure of the suggestion-necessity determining unit 50d.

FIG. 11 shows an example of the structure of the action suggestion necessity table 162.

Best Mode for Carrying Out the Invention

Embodiments of the present invention are described below with reference to the drawings.

In this specification, the "user's health state" means at least one of the user's emotion and the user's physical condition. The "user" is the owner of the interactive device.

FIG. 1 shows the appearance of a robot 1 as an example of the interactive device of the present invention. The robot 1 is structured so that it can converse with a user.

The robot 1 shown in FIG. 1 has a camera 10 corresponding to "eyes", a speaker 110 and an antenna 62 corresponding to a "mouth", a microphone 30 and the antenna 62 corresponding to "ears", and a movable part 180 corresponding to a "head" or "arms".

The robot 1 may be an autonomously walking robot (mobile robot) having a moving part 160 that enables it to move by itself, or a type of robot that cannot move by itself.

Any mechanical arrangement may be used to make the robot 1 mobile. For example, the robot 1 may move forward and backward by controlling the rotation of rollers provided on its hands and feet, or it may be a wheeled or legged mobile robot. The robot 1 may be a humanoid robot modeled on a bipedal animal such as a human, or a pet robot modeled on a quadruped.

Although an interactive robot has been described as one example of the interactive device, the interactive device is not limited to this. It may be any device structured to converse with a user: for example, an interactive toy, an interactive mobile device (including a mobile phone), or an interactive agent.

An interactive agent preferably operates in an information space such as the Internet and performs information processing on a person's behalf, such as information retrieval, filtering, and schedule adjustment (software agent functions). Because an interactive agent converses with a person as another person would, it is also called an anthropomorphic agent.

The interactive device may have at least one of an agent function and a locomotion function.

FIG. 2 shows an example of the internal structure of the robot 1.

The image recognition unit 20 acquires an image from the camera 10 (image input unit), recognizes the acquired image, and outputs the recognition result to the processing unit 50.

The voice recognition unit 40 acquires a voice from the microphone 30 (voice input unit), recognizes the acquired voice, and outputs the recognition result to the processing unit 50.

FIG. 7 shows an example of the internal structure of the voice recognition unit 40.

A voice is converted into a voice signal waveform by the voice input unit 30 (microphone). The voice signal waveform is output to the voice recognition unit 40, which includes a voice detection unit 71, a comparison operation unit 72, a recognition unit 73, and a registered voice database 74.

The voice detection unit 71 extracts, from the voice signal waveform supplied by the voice input unit 30, the portions that satisfy a certain criterion as the intervals in which the user actually spoke, and outputs the voice signal waveform of each such interval to the comparison operation unit 72 as a voice waveform. The criterion for extracting a voice interval may be, for example, that the power of the signal waveform in the frequency range below 1 kHz, the band of typical human speech, is at or above a certain level.
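The power-threshold criterion attributed to the voice detection unit 71 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the frame length, sample rate, and threshold value are assumptions chosen for the example.

```python
import numpy as np

def detect_voice_frames(signal, sample_rate=16000, frame_len=512,
                        cutoff_hz=1000.0, power_threshold=1e-3):
    """Flag frames whose sub-1 kHz power meets a threshold.

    Sketch of the criterion described for voice detection unit 71:
    speech is assumed present when the power of the waveform in the
    typical human-voice band (below 1 kHz) is at or above a certain
    level. Frame length and threshold here are illustrative.
    """
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    band = freqs < cutoff_hz  # FFT bins in the sub-1 kHz band
    flags = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        band_power = spectrum[band].sum() / frame_len
        flags.append(bool(band_power >= power_threshold))
    return flags
```

Contiguous runs of flagged frames would then be passed on as the extracted voice intervals.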

Voice waveforms of words the user may speak are registered in advance in the registered voice database 74, each associated with the corresponding word.

The comparison operation unit 72 sequentially compares the voice waveform supplied by the voice detection unit 71 with the voice waveforms registered in the registered voice database 74, computes the similarity to each registered waveform, and outputs the results to the recognition unit 73. Two waveforms may be compared, for example, by performing a frequency analysis such as a Fourier transform on each and summing the differences of the power components at each frequency, or by DP matching, which allows for stretching and compression in time, on cepstrum or mel-cepstrum parameters obtained after the frequency analysis. To make the comparison more efficient, the registered waveforms may be stored in the registered voice database 74 in the form of the comparison features used by the comparison operation unit 72 (for example, the power components at each frequency). In addition, waveforms of unintentional utterances such as the user's coughs and moans are also registered in the registered voice database 74, with the corresponding word registered as "unintentional utterance". This makes it possible to distinguish the user's intentional utterances from unintentional ones.

The recognition unit 73 finds the waveform with the highest similarity among the similarities supplied by the comparison operation unit 72, determines the word that the registered voice database 74 associates with that waveform, thereby converting the voice waveform into text, and outputs the text to the processing unit 50. When no large difference is found among the similarities, the input sound is regarded as noise and no conversion to text is performed; alternatively, it may be converted to the text "noise".
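The simplest comparison method the text mentions (summed differences of per-frequency power components) together with the best-match-or-noise decision can be sketched as below. The margin-based noise rejection is an illustrative stand-in for "no large difference among the similarities"; the function and parameter names are assumptions, not from the patent.

```python
import numpy as np

def spectral_features(waveform, frame_len=512):
    """Per-frequency power components, normalized so overall loudness
    does not dominate the comparison."""
    spectrum = np.abs(np.fft.rfft(waveform, n=frame_len)) ** 2
    return spectrum / (spectrum.sum() + 1e-12)

def recognize(waveform, registered, margin=0.05):
    """Return the word whose registered waveform is most similar.

    Sketch of comparison unit 72 + recognition unit 73: distance is
    the summed absolute difference of power components (smaller =
    more similar). `registered` maps words to example waveforms.
    """
    query = spectral_features(waveform)
    dists = {word: float(np.abs(query - spectral_features(w)).sum())
             for word, w in registered.items()}
    ranked = sorted(dists.items(), key=lambda kv: kv[1])
    if len(ranked) > 1 and ranked[1][1] - ranked[0][1] < margin:
        return "noise"  # similarities too close to call
    return ranked[0][0]
```

A real system would use cepstral features and DP matching as the text notes; the spectral-difference version above is only the baseline method.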

FIG. 8a shows an example of the internal structure of the processing unit 50 shown in FIG. 2.

The processing unit 50 (processing unit 50a) searches the dialogue database 140 based on the voice recognition result from the voice recognition unit 40 and generates a response sentence. The response sentence is output to the voice synthesis unit 100, which synthesizes it into speech. The synthesized speech is output from a voice output unit 110 such as a speaker.

The dialogue database 140 stores dialogue patterns and rules for generating response sentences. It also stores the relationship between words (keywords) spoken by the user and the user's health states.

FIG. 3 shows an example of the relationship, stored in the dialogue database 140, between keywords spoken by the user and the user's health states.

In the example shown in FIG. 3, the relationship between the keywords spoken by the user and the user's health states is expressed as a table. For example, row 31 of the table indicates that the keywords "sleepy", "tired", and "no appetite" correspond to the user health state (physical condition) "fatigue". Row 32 indicates that the keywords "Good job!" and "Great!" correspond to the user health state (emotion) "joy".

The representation of the relationship between the keywords spoken by the user and the user's health states is not limited to the method shown in FIG. 3; any representation may be used.

The processing unit 50 (grasping unit 50b) extracts a keyword from the voice recognition result of the voice recognition unit 40 and searches the dialogue database 140 with that keyword. In this way the processing unit 50 (grasping unit 50b) grasps the user's health state from the keyword. For example, when the keyword extracted from the voice recognition result is one of "sleepy", "tired", or "no appetite", the processing unit 50 (grasping unit 50b) refers to the table shown in FIG. 3 and judges the user's health state to be "fatigue".
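The keyword lookup performed by grasping unit 50b can be sketched as a simple table scan. The table entries mirror the FIG. 3 example; the helper name and matching rule (substring search over the recognized text) are illustrative assumptions.

```python
# Keyword-to-health-state table mirroring the FIG. 3 example.
KEYWORD_TO_STATE = {
    "sleepy": "fatigue",
    "tired": "fatigue",
    "no appetite": "fatigue",
    "good job": "joy",
    "great": "joy",
}

def grasp_health_state(recognized_text):
    """Return the first health state whose keyword occurs in the
    recognized text, or None when no registered keyword is found."""
    text = recognized_text.lower()
    for keyword, state in KEYWORD_TO_STATE.items():
        if keyword in text:
            return state
    return None
```

A returned state (e.g. "fatigue") is what the deciding unit would then use to look up an action pattern.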

Instead of, or in addition to, the keyword-based method described above, the user's health state may be grasped from the strength or tension of the user's voice obtained from the voice recognition result. For example, when the processing unit 50 (grasping unit 50b) detects that the strength or tension of the user's voice is at or below a prescribed level, it judges the user's health state to be "fatigue".

The user's health state may also be grasped using the image recognition result of the image recognition unit 20 in addition to the voice recognition result of the voice recognition unit 40, or using the image recognition result alone. For example, when the processing unit 50 (grasping unit 50b) detects that the user is blinking frequently (or yawning), it judges the user's health state to be "fatigue".

In this way, the processing unit 50 (grasping unit 50b) grasps the user's health state from the words the user speaks or from the image recognition result, thereby functioning as the grasping unit.

The information database 160 stores information such as the day's weather and news, knowledge such as general common sense, information about the user (owner) of the robot 1 (for example, sex, age, name, occupation, personality, interests, and date of birth), and information about the robot 1 itself (for example, its model and internal structure). Information such as the day's weather and news is obtained from outside by the robot 1 through the transmitting/receiving unit 60 (communication unit) and the processing unit 50 and stored in the information database 160. The information database 160 also stores the relationship between the user's health states and action patterns as an action pattern table 161.

FIG. 4 shows an example of the action pattern table 161 stored in the information database 160. The action pattern table 161 defines the relationship between the user's health states and the action patterns of the robot 1.

In the example shown in FIG. 4, the relationship between the user's health states and the action patterns of the robot 1 is expressed as a table. For example, row 41 of the table indicates that the user health state "fatigue" corresponds to three action patterns of the robot 1, as follows.

1) Content selection and playback: select content (software) intended to have a "healing" or "sleep-inducing" effect, and play the selected content on a playback device.

2) Preparing a bath: draw a bath so that the user can be advised to bathe.

3) Recipe selection and preparation: select recipes for food and drink that are "appetite-stimulating" or "nutritious", and prepare food and drink according to the selected recipes.

Row 42 of the table indicates that the user health state "joy" corresponds to the robot 1 action pattern "cheering pose".

The representation of the relationship between the user's health states and the action patterns of the robot 1 is not limited to the method shown in FIG. 4; any representation may be used.

Examples of action patterns of the robot 1 include: selecting content (software) suited to the user's health state and playing it on a playback device; selecting a food and drink recipe suited to the user's health state and preparing food and drink according to it; drawing a bath; and telling jokes to amuse the user.
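Assuming a simple key-value layout (the patent does not specify one), the action pattern table 161 of FIG. 4 could be sketched as a mapping from health state to candidate action patterns:

```python
# Sketch of action pattern table 161; entries mirror the FIG. 4
# example, the data structure itself is an illustrative assumption.
ACTION_PATTERN_TABLE = {
    "fatigue": ["select and play content",
                "prepare bath",
                "select and prepare recipe"],
    "joy": ["cheering pose"],
}

def candidate_action_patterns(health_state):
    """Look up the candidate action patterns for a grasped health
    state; an unknown state yields no candidates."""
    return ACTION_PATTERN_TABLE.get(health_state, [])
```

The deciding unit would then pick one candidate from this list, for example by priority as described below.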

The processing unit 50 (action pattern determination unit 50c) searches the dialogue database 140 in response to the timing signal t1 output by the grasping unit 50b, and thereby searches the information database 160 (action pattern table 161) using the user's health state grasped through that search. As a result, the processing unit 50 (action pattern determination unit 50c) determines an action pattern of the robot 1 suited to the user's health state. For example, when the user's health state is "fatigue", the processing unit 50 (action pattern determination unit 50c) refers to the table shown in FIG. 4 (action pattern table 161) and determines one of the three action patterns defined for the "fatigue" state as the action pattern of the robot 1.

Here, the processing unit 50 (action pattern determination unit 50c) can determine one of the three action patterns as the action pattern of the robot 1 in various ways. For example, when priorities are assigned to the three action patterns in advance, the action pattern with the highest priority may be chosen first. The priorities may also be varied by time of day. For example, the priority of "preparing bath water" may be set highest between 18:00 and 22:00; the priority of "selection and preparation of food and drink recipes" may be set highest between 6:00 and 8:00, 11:00 and 13:00, and 17:00 and 19:00; and the priority of "content selection and playback" may be set highest at other times.
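The time-dependent priority rule described above can be sketched as follows. This is a minimal illustration only; the action pattern names, the helper function, and the rule table are assumptions for the example, not the patent's implementation.

```python
from datetime import time

# Time windows and the action pattern given highest priority in each window,
# following the example time bands in the text. Earlier entries win ties.
TIME_PRIORITY = [
    ((time(18, 0), time(22, 0)), "prepare bath water"),
    ((time(6, 0), time(8, 0)), "food and drink recipe"),
    ((time(11, 0), time(13, 0)), "food and drink recipe"),
    ((time(17, 0), time(19, 0)), "food and drink recipe"),
]
DEFAULT = "content selection and playback"  # highest priority at other times

def pick_action(now: time, candidates: list[str]) -> str:
    """Return the candidate action pattern with highest priority at `now`."""
    for (start, end), action in TIME_PRIORITY:
        if start <= now <= end and action in candidates:
            return action
    return DEFAULT if DEFAULT in candidates else candidates[0]

candidates = ["prepare bath water", "food and drink recipe",
              "content selection and playback"]
print(pick_action(time(19, 30), candidates))  # prepare bath water
```

At 19:30 both the bath window (18:00–22:00) and the recipe window (17:00–19:00) could apply; listing the bath rule first encodes "bath water has the highest priority in 18:00–22:00".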

In this way, the processing unit 50 (action pattern determination unit 50c) determines an action pattern suited to the user's health state grasped by the grasping unit 50b, and thus functions as a determination unit.

The processing unit 50 (execution instruction unit 50g) generates a control signal corresponding to the determined action pattern in response to the timing signal t2 output by the action pattern determination unit 50c, and outputs the control signal to the motion control unit 120.

The motion control unit 120 drives the various actuators 130 based on the control signal output by the processing unit 50 (execution instruction unit 50g). The robot 1 can thereby be made to move in a desired manner.

For example, when the determined action pattern is "cheering pose", the motion control unit 120 drives the actuator that moves the "arms" of the robot 1 up and down (part of the actuators 130) based on the control signal output by the processing unit 50 (execution instruction unit 50g). Alternatively, when the determined action pattern is "content selection and playback", the motion control unit 120, based on the control signal output by the processing unit 50 (execution instruction unit 50g), drives the actuator controlling the "fingers" (gripping unit) of the robot 1 (part of the actuators 130) so that the fingers grip a disc and insert it into the playback device. Here it is assumed, for example, that a plurality of discs are stored in a tray in a prescribed order.

In this way, the processing unit 50 (execution instruction unit 50g) instructs the motion control unit 120 to execute the action pattern determined by the action pattern determination unit 50c, and thus functions as an execution instruction unit.

Alternatively, when the determined action pattern is "preparing bath water", the processing unit 50 (execution instruction unit 50g) controls the remote control unit 70 to send a remote control signal to a hot water supply control device. According to the remote control signal, the hot water supply control device supplies an appropriate amount of hot water at an appropriate temperature into the bathtub (or, after supplying an appropriate amount of water into the bathtub, heats the water to an appropriate temperature). In this case, the processing unit 50 (execution instruction unit 50g) instructs the remote control unit 70 to execute the action pattern determined by the action pattern determination unit 50c, and thus functions as an execution instruction unit.

Alternatively, when the determined action pattern is "content selection and playback", the processing unit 50 (execution instruction unit 50g) controls the remote control unit 70 to send a remote control signal to the playback device. According to the remote control signal, the playback device selects content from a disc inserted into it and plays the content. When the playback device is connected to a disc autochanger that can hold a plurality of discs, the playback device selects content from the plurality of discs according to the remote control signal and plays it. A track list covering all tracks on the plurality of discs may be stored in the memory of the processing unit 50, or the playback device may read each disc's track list from the head of the disc and store it in the memory of the processing unit 50 via the transmitting/receiving unit 60. In this case, the processing unit 50 (execution instruction unit 50g) instructs the remote control unit 70 to execute the action pattern determined by the action pattern determination unit 50c, and thus functions as an execution instruction unit.

FIG. 8b shows another example of the internal structure of the processing unit 50 shown in FIG. 2. In the example shown in FIG. 8b, the processing unit 50 (suggestion unit 50e) suggests the determined action pattern to the user by voice before instructing its execution. For example, when the determined action pattern is "preparing bath water", the processing unit 50 (suggestion unit 50e), in response to the timing signal t2 output by the action pattern determination unit 50c, refers to the dialogue database 140, generates an interrogative sentence (suggestion sentence) such as "You look tired. Shall I prepare bath water?", and outputs it to the voice synthesis unit 100. The voice synthesis unit 100 synthesizes the interrogative sentence into voice, and the synthesized voice is output from the voice output unit 110.

Next, the method by which the suggestion unit 50e generates a suggestion sentence will be described with reference to FIG. 9. The suggestion unit 50e internally includes a suggestion sentence synthesis unit. The dialogue database 140 internally includes a suggestion sentence format database, in which a plurality of suggestion sentence formats corresponding to a plurality of suggestion expressions are recorded. A "suggestion expression" here means an expression pairing the reason motivating the suggestion (A) with the corresponding response (B), such as "You are A. Shall I B?" or "You look A. Shall I B?" in the example of FIG. 9.

First, the suggestion unit (suggestion sentence synthesis unit) 50e takes the "grasped health state" input from the grasping unit 50b and the "determined action pattern" input from the action pattern determination unit 50c, and selects from the suggestion sentence format database a suggestion sentence format matching the "grasped health state". Next, the suggestion unit (suggestion sentence synthesis unit) 50e inserts the "grasped health state" at position A of the suggestion sentence format and the "determined action pattern" at position B, thereby synthesizing a suggestion sentence. For example, when the "grasped health state" is "fatigue" and the "determined action pattern" is "preparing bath water", the suggestion unit (suggestion sentence synthesis unit) 50e synthesizes a suggestion sentence such as "You look tired. Shall I prepare bath water?". The suggestion sentence is output to the voice synthesis unit 100, which synthesizes it into voice; the synthesized voice is output from the voice output unit 110.
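The A/B template filling described above can be sketched roughly as follows. The template strings and the state/action names are illustrative assumptions, not entries from the patent's actual suggestion sentence format database; a real system would also inflect the inserted words (e.g. "fatigue" → "tired").

```python
# One suggestion sentence format per health state; A is the grasped health
# state (the reason), B is the determined action pattern (the response).
TEMPLATES = {
    "fatigue": "You look {A}. Shall I {B}?",
    "joy": "You seem {A}. Shall I {B}?",
}

def synthesize_suggestion(health_state: str, action: str) -> str:
    """Select the format matching the grasped health state, then fill A and B."""
    fmt = TEMPLATES[health_state]
    return fmt.format(A=health_state, B=action)

print(synthesize_suggestion("fatigue", "prepare bath water"))
# You look fatigue. Shall I prepare bath water?
```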

In this way, the processing unit 50 (suggestion unit 50e) uses the dialogue database (suggestion sentence format database) 140, the voice synthesis unit 100, and the voice output unit 110 to suggest the action pattern determined by the action pattern determination unit 50c to the user by voice before instructing its execution, and thus functions as a suggestion unit.

The user replies to the suggestion of the robot 1, either accepting it or not. For example, replies indicating that the user accepts (Yes) the suggestion may be "OK", "yes", "please do", and the like; replies indicating that the user rejects (No) the suggestion may be "no", "no need", "not necessary", and the like. Patterns of such replies are stored in the dialogue database 140 in advance.

The processing unit 50 (suggestion acceptance determination unit 50f), in response to the timing signal t5 output by the suggestion unit 50e, analyzes the voice recognition result of the voice recognition unit 40 with reference to the dialogue database 140, and thereby determines whether the user's reply accepts (Yes) or rejects (No) the suggestion.

In this way, the processing unit 50 (suggestion acceptance determination unit 50f) uses the voice recognition unit 40 and the dialogue database 140 to determine whether the user's reply to the suggested action pattern is a reply accepting it, and thus functions as a suggestion acceptance determination unit.

FIG. 8c shows yet another example of the internal structure of the processing unit 50 shown in FIG. 2. Before instructing execution of the determined action pattern, it may also be determined whether the action pattern needs to be suggested to the user. For example, the action suggestion necessity table 162 shown in FIG. 11 is set in advance, in which each action pattern of the table shown in FIG. 4 is assigned a flag indicating whether a suggestion is necessary; the processing unit 50 (suggestion necessity determination unit 50d) can then judge from the value of the flag whether a suggestion is necessary. For example, when the flag assigned to an action pattern has the value "1", the processing unit 50 (suggestion necessity determination unit 50d) suggests the action pattern to the user before instructing its execution; when the flag has the value "0", it does not suggest the action pattern to the user before instructing its execution.

For example, for the action pattern "preparing bath water", it is preferable to require a prior suggestion to the user. Whether the user takes a bath depends largely on his or her mood at the time, and executing the pattern without a prior suggestion could feel imposing. Conversely, for the action pattern "cheering pose", it is preferable not to require a prior suggestion, because asking the user for permission before each cheer would make the cheering feel like a mere formality.

In this way, the processing unit 50 (suggestion necessity determination unit 50d) uses the information database 160 (action suggestion necessity table 162) to determine whether it is necessary to suggest the determined action pattern to the user before instructing its execution, and thus functions as a suggestion necessity determination unit.

When the time period for executing an action pattern is fixed, or when the action pattern is executed frequently, it is usually desirable not to suggest the action pattern to the user each time. Conversely, for an action pattern that is not usually executed, it is preferable to suggest the action pattern to the user before instructing its execution, thereby confirming whether the user wishes it to be executed.

The suggestion necessity determination unit 50d that realizes the above function will be described with reference to FIG. 10. The time distribution record storage unit 90 includes a time measurement unit 91, an accumulation unit 92, and a time distribution database 93. The suggestion necessity determination unit 50d internally includes a comparison determination unit. The time measurement unit 91 receives input from the execution instruction unit 50g, measures the time at which an action pattern is executed, and outputs it to the accumulation unit 92. The time distribution database 93 records the number of executions of each action pattern at each time; each time the accumulation unit 92 receives input from the time measurement unit 91, it increments by one the execution count recorded in the time distribution database 93 for the measured time. In this way, the time distribution record storage unit 90 accumulates history information on the action patterns executed at each time. The suggestion necessity determination unit (comparison determination unit) 50d holds a preset value; upon receiving input from the action pattern determination unit 50c, it looks up in the time distribution record storage unit 90 the past execution count of that action pattern at that time (or time period) and compares it with the preset value. When the past execution count of the action pattern is smaller than the preset value, the comparison determination unit determines that it is necessary to suggest the action pattern; when the past execution count is larger than the preset value, it determines that the suggestion is unnecessary. This determination result is output from the suggestion necessity determination unit 50d as its determination result.
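The per-hour execution counting and threshold comparison described above might be sketched like this. The class, the hour granularity, and the threshold value are all illustrative assumptions, not details from the patent.

```python
from collections import defaultdict

class TimeDistributionStore:
    """Counts how often each action pattern was executed in each hour
    (a stand-in for the time distribution database 93)."""
    def __init__(self):
        self.counts = defaultdict(int)

    def record_execution(self, action: str, hour: int) -> None:
        # Accumulation unit: increment the count for (action, time).
        self.counts[(action, hour)] += 1

    def execution_count(self, action: str, hour: int) -> int:
        return self.counts[(action, hour)]

def suggestion_needed(store: TimeDistributionStore, action: str,
                      hour: int, threshold: int = 5) -> bool:
    """Comparison determination unit: suggest only when the pattern is
    unusual at this hour (past count below the preset value)."""
    return store.execution_count(action, hour) < threshold

store = TimeDistributionStore()
for _ in range(10):                    # bath water prepared often around 19:00
    store.record_execution("prepare bath water", 19)

print(suggestion_needed(store, "prepare bath water", 19))  # False: routine
print(suggestion_needed(store, "prepare bath water", 9))   # True: unusual
```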

In this way, the suggestion necessity determination unit 50d judges the necessity of a suggestion based on the time distribution of the number of times the action pattern has been executed.

FIG. 5 shows a processing procedure by which the robot 1 grasps the user's health state and instructs execution of an action pattern suited to it.

Step ST1: The user's health state is grasped.

For example, the processing unit 50 (grasping unit 50b) extracts a keyword from the voice recognition result of the voice recognition unit 40 and searches the dialogue database 140 with the keyword. As a result, the processing unit 50 (grasping unit 50b) can grasp the user's health state from the keyword.

An example of a conversation between the user and the robot 1 is shown below. Here, U denotes words spoken by the user, and S denotes words spoken by the robot 1.

U: I'm really tired today.

S: You do look tired.

Thus, when the user utters a keyword such as "sleepy", "tired", or "no appetite", the processing unit 50 (grasping unit 50b) regards the user's health state as the "fatigue" state.

Step ST2: An action pattern suited to the user's health state grasped in step ST1 is determined.

For example, the processing unit 50 (action pattern determination unit 50c) searches the information database 160 (action pattern table 161) using the user's health state. As a result, the processing unit 50 (action pattern determination unit 50c) can determine an action pattern suited to the user's health state. The action patterns are preferably set in advance as behaviors the user is inferred to desire.

Step ST3: Before instructing execution of the action pattern determined in step ST2, the suggestion necessity determination unit 50d judges whether the action pattern needs to be suggested to the user.

When the judgment result of step ST3 is "Yes", the process proceeds to step ST4; when it is "No", the process proceeds to step ST6.

Step ST4: Before instructing execution of the action pattern determined in step ST2, the suggestion unit 50e suggests the action pattern to the user.

An example of a conversation between the user and the robot 1 is shown below. Here, U denotes words spoken by the user, and S denotes words spoken by the robot 1.

S: You look tired. Shall I play some content (software) with a healing effect?

U: Please do.

Step ST5: The suggestion acceptance determination unit 50f judges whether the user has replied accepting the action pattern suggested by the robot 1 in step ST4.

When the judgment result of step ST5 is "Yes", the process proceeds to step ST6; when it is "No", the process proceeds to step ST7.

Step ST6: The execution instruction unit 50g instructs execution of the action pattern determined in step ST2.

Step ST7: The suggested action pattern, together with the fact that the user did not accept (rejected) the suggestion, is stored in the information database 160 as history information.

This history information is referred to when determining the content of future action patterns in step ST2. An action pattern the user has not accepted may be assigned a lower priority.

Instead of or in addition to step ST7, when the user accepts the suggestion in step ST5, the suggested action pattern, together with the fact that the user accepted the suggestion, may also be stored in the information database 160 as history information. This history information is referred to when determining the content of future action patterns in step ST2. An action pattern the user has accepted may be assigned a higher priority.

It is preferable in this way to change the priority assigned to an action pattern depending on whether the user accepted the suggested action pattern. The user's preferences and the like can thereby be reflected in the determination of action patterns. As a result, the proportion of action patterns determined by the robot 1 that actually fit the user's health state can be increased.
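The history-based priority adjustment described above can be sketched as follows: a rejection nudges a pattern's priority down, an acceptance nudges it up. The numeric scoring scheme and step size are assumptions for illustration; the patent only specifies the direction of the adjustment.

```python
# Learned priority per action pattern; all start equal.
priorities = {
    "content selection and playback": 1.0,
    "prepare bath water": 1.0,
    "food and drink recipe": 1.0,
}

def record_reply(action: str, accepted: bool, step: float = 0.1) -> None:
    """After the user's reply (step ST5/ST7), nudge the pattern's priority."""
    priorities[action] += step if accepted else -step

def best_action(candidates: list) -> str:
    """Step ST2: pick the candidate with the highest learned priority."""
    return max(candidates, key=lambda a: priorities[a])

record_reply("prepare bath water", accepted=False)          # rejected once
record_reply("content selection and playback", accepted=True)
print(best_action(list(priorities)))  # content selection and playback
```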

When the user does not accept the suggestion in step ST5, the user may instead make a counter-suggestion. In this case, the robot 1 receives the counter-suggestion and judges whether it can be executed. When it judges that the counter-suggestion can be executed, the robot 1 updates the relationship between the user's health state and the action patterns of the robot 1 stored in the information database 160 (for example, changing the priority of an action pattern in the table shown in FIG. 4, or adding a new action pattern to the table shown in FIG. 4), and then instructs execution of the counter-suggestion. When it judges that the counter-suggestion cannot be executed, the robot 1 notifies the user that the counter-suggestion cannot be executed. By letting the user make counter-suggestions in this way, the user's preferences and the like can be reflected in the determination of action patterns. As a result, the proportion of action patterns determined by the robot 1 that actually fit the user's health state can be increased.

Step ST3 in FIG. 5 may also be omitted. In this case, every action pattern determined according to the user's health state is suggested to the user before its execution is instructed.

Alternatively, steps ST3, ST4, ST5, and ST7 in FIG. 5 may be omitted. In this case, execution of every action pattern determined according to the user's health state is instructed immediately, without waiting for the user's reply.
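The overall procedure of FIG. 5 (steps ST1 through ST7) can be condensed into a single sketch. The keyword table, action table, flag table, and function names below stand in for the units 50b–50g and are hypothetical; they only mirror the control flow of the figure.

```python
from typing import Optional

KEYWORDS = {"tired": "fatigue", "sleepy": "fatigue", "no appetite": "fatigue"}
ACTIONS = {"fatigue": "prepare bath water"}
NEEDS_SUGGESTION = {"prepare bath water": True, "cheering pose": False}
history = []  # ST7 history information

def run_interaction(utterance: str, reply: str) -> Optional[str]:
    # ST1: grasp the health state from keywords in the utterance (unit 50b).
    state = next((s for k, s in KEYWORDS.items() if k in utterance), None)
    if state is None:
        return None
    action = ACTIONS[state]                       # ST2: decide pattern (50c)
    if NEEDS_SUGGESTION.get(action, True):        # ST3: suggestion needed? (50d)
        print(f"You look {state}. Shall I {action}?")  # ST4: suggest (50e)
        if reply not in ("yes", "ok", "please"):  # ST5: accepted? (50f)
            history.append((action, "rejected"))  # ST7: store for future ST2
            return None
    return action                                 # ST6: instruct execution (50g)

print(run_interaction("I'm really tired today", "please"))  # prepare bath water
```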

As described above, according to the present embodiment, the user's health state can be grasped and an action pattern suited to it can be determined. This frees the user from the trouble of being fitted with various sensors. Moreover, the user will feel that the robot is a good friend who cares about his or her health.

Furthermore, a form may be adopted in which the action pattern is suggested to the user before its execution is instructed. In this case, since the user holds the final say over whether to accept the suggestion, the robot does not force the suggestion on the user, and the user retains wide discretion. This both prevents the robot from acting rashly and lets the user feel the robot's friendliness first-hand.

According to a questionnaire survey conducted by the research institute of the Japan Efficiency Association, the most popular among the robots consumers consider ideal is a "pet robot closer to the real thing". What is expected is a life-oriented companion or entertainment robot that shares living space with humans.

It will be understood that the robot, as an example of the interactive device of the present invention, is a friendly and useful life-oriented robot. Such a robot can help in the user's daily life and become the user's good friend.

The content (software) played by the playback device may include at least one of video data, audio data, and lighting control data. Audio data recorded on a recording medium (DVD or the like) may be played in synchronization with the video data recorded on the medium. Furthermore, lighting control data recorded on the medium may be reproduced in synchronization with the audio data and/or video data recorded on it. Such synchronized playback makes it possible to realize content (software) with a highly effective "healing" or "hypnotic" effect.

FIG. 6 shows an example of the configuration of a playback device 2100 that enables synchronized playback of audio data and/or video data together with lighting control data. By connecting the playback device 2100 to a sound output device (e.g., a speaker), a video output device (e.g., a TV), and a lighting device, the playback device 2100 can change the lighting mode of the lighting device (e.g., at least one of its light quantity and light color) in concert with the music and/or video provided by the recording medium.

The playback device 2100 includes a controller 2220, an interface controller (I/F controller) 2230, and a reading unit 2120.

The controller 2220 controls the overall operation of the playback device 2100 according to user operation commands input from the I/F controller 2230 or control signals supplied by the decoding unit 2140.

The I/F controller 2230 detects a user operation (for example, a remote control signal from the remote control unit 70 (FIG. 2)) and outputs an operation command corresponding to the operation (for example, a playback command) to the controller 2220.

The reading unit 2120 reads the information recorded on the recording medium 2110.

A typical recording medium 2110 is a DVD (Digital Versatile Disc). However, the recording medium 2110 is not limited to a DVD and may be any type of recording medium. In the following description, the case where the recording medium 2110 is a DVD is taken as an example. In this case, the reading unit 2120 is, for example, an optical reading unit.

The format of the data recorded on the recording medium 2110 is a modification of the standard format of the DVD-Video specification. That is, a format is used in which a lighting pack (L_PCK: Lighting Pack) is newly added within each VOBU. The data of the L_PCK is provided in order to output lighting control data in synchronization with the audiovisual data.

To suit a wide range of applications, MPEG-2 (Moving Picture Experts Group 2) specifies two methods for multiplexing an arbitrary number of coded streams and playing the individual streams back in synchronization: the program stream (PS: Program Stream) method and the transport stream (TS: Transport Stream) method. Digital storage media such as DVDs use the program stream method. In the following description, the program stream method specified in MPEG-2 is abbreviated as the "MPEG-PS method", and the transport stream method specified in MPEG-2 as the "MPEG-TS method".

NV_PCK, A_PCK, V_PCK, and SP_PCK all use formats conforming to the MPEG-PS method. Accordingly, the L_PCK also uses a format conforming to the MPEG-PS method.

The playback device 2100 further includes a stream data generation unit 2130 and a decoding unit 2140.

The stream data generation unit 2130 generates, from the output of the reading unit 2120, stream data containing encoded AV data and encoded lighting control data. "Encoded AV data" here means data containing at least one of encoded audio data and encoded video data.

The stream data generated by the stream data generation unit 2130 has a format conforming to the MPEG-PS method. Such stream data is obtained, for example, by receiving the information recorded on the DVD as an RF signal, digitizing and amplifying the RF signal, and then applying EFM demodulation processing. The structure of the stream data generation unit 2130 may be the same as well-known structures, so a detailed description is omitted here.

The decoding unit 2140 includes a demultiplexing unit 2150, an AV data decoding unit 2160, a lighting control data decoding unit 2170, an STC generation unit 2180, and a synchronization controller (control unit) 2190.

The demultiplexing unit 2150 receives the stream data in the MPEG-PS format from the stream data generation unit 2130 and separates it into encoded AV data and encoded lighting control data. This separation is performed by referring to the identification code (stream_id) in the header of each PES packet. The demultiplexing unit 2150 is, for example, a demultiplexer.
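A toy sketch of this stream_id-based separation follows. The video and audio stream_id ranges follow MPEG-2 Systems conventions; the stream_id chosen here for the lighting control data is an assumption for illustration only, since the patent does not specify which identifier the L_PCK payload uses.

```python
VIDEO_ID = 0xE0   # MPEG-2 video stream_ids occupy 0xE0-0xEF
AUDIO_ID = 0xC0   # MPEG audio stream_ids occupy 0xC0-0xDF
LIGHT_ID = 0xBD   # private_stream_1, assumed here to carry lighting data

def demultiplex(pes_packets):
    """Route each (stream_id, payload) pair to its decoder buffer,
    mimicking how unit 2150 feeds buffers 2161, 2163, and 2171."""
    buffers = {"video": [], "audio": [], "lighting": []}
    for stream_id, payload in pes_packets:
        if 0xE0 <= stream_id <= 0xEF:
            buffers["video"].append(payload)
        elif 0xC0 <= stream_id <= 0xDF:
            buffers["audio"].append(payload)
        elif stream_id == LIGHT_ID:
            buffers["lighting"].append(payload)
    return buffers

packets = [(VIDEO_ID, b"v0"), (AUDIO_ID, b"a0"), (LIGHT_ID, b"l0")]
print(demultiplex(packets)["lighting"])  # [b'l0']
```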

The AV data decoding unit 2160 decodes the encoded AV data and outputs AV data. "AV data" here means data containing at least one of audio data and video data.

The AV data decoding unit 2160 includes: a video buffer 2161 for temporarily holding the encoded video data output by the decomposition unit 2150; a video decoder 2162 for decoding the encoded video data and outputting video data; an audio buffer 2163 for temporarily holding the encoded audio data output by the decomposition unit 2150; and an audio decoder 2164 for decoding the encoded audio data and outputting audio data.

The lighting control data decoding unit 2170 decodes the encoded lighting control data and outputs lighting control data. "Lighting control data" here means data for controlling a plurality of pixels included in a lighting device.

The lighting control data decoding unit 2170 includes: a lighting buffer 2171 for temporarily holding the encoded lighting control data output by the decomposition unit 2150; and a lighting decoder 2172 for decoding the encoded lighting control data and outputting lighting control data.

The STC generation unit 2180 generates an STC (System Time Clock). The STC is obtained by adjusting (i.e., increasing or decreasing) the frequency of a 27 MHz reference clock based on the SCR. The STC is used, when decoding the encoded data, to reproduce the reference time that was used when the data was encoded.
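The idea of regenerating the STC from incoming SCR values can be sketched as a 27 MHz counter whose rate is nudged up or down to track the sender's clock. This is a crude illustrative model, not the circuit of the STC generation unit 2180; the proportional gain and the immediate resynchronization are assumptions made to keep the sketch short (a real implementation slews gradually, PLL-style).

```python
class STCGenerator:
    BASE_HZ = 27_000_000  # nominal 27 MHz reference clock

    def __init__(self):
        self.stc = 0
        self.rate = self.BASE_HZ  # current (adjusted) ticks per second

    def on_scr(self, scr):
        """Adjust the clock rate whenever an SCR arrives in the stream."""
        error = scr - self.stc
        # Run faster if we are behind the sender's clock, slower if ahead
        # (illustrative proportional adjustment).
        self.rate = self.BASE_HZ + error // 10
        self.stc = scr  # simplified: snap to the received SCR

    def tick(self, seconds):
        """Advance the local STC by the given wall-clock interval."""
        self.stc += int(self.rate * seconds)
```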

The synchronization controller 2190 controls the AV data decoding unit 2160 and the lighting control data decoding unit 2170 so that the timing at which the AV data decoding unit 2160 outputs AV data is synchronized with the timing at which the lighting control data decoding unit 2170 outputs lighting control data.

Such synchronized playback control is achieved, for example, by controlling the video decoder 2162 to output an access unit of video data when the STC matches its PTS, controlling the audio decoder 2164 to output an access unit of audio data when the STC matches its PTS, and controlling the lighting decoder 2172 to output an access unit of lighting control data when the STC matches its PTS.
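The PTS-gated output rule can be sketched with per-stream queues of (PTS, access unit) pairs: each stream releases its next unit only once the STC has reached that unit's PTS, so video, audio, and lighting stay locked to the same clock. The function and data names are illustrative, not taken from the patent.

```python
def release_due_units(stc, queues):
    """Pop and return every access unit whose PTS the STC has reached.

    `queues` maps a stream name to a list of (pts, unit) pairs sorted by PTS.
    """
    released = []
    for name, queue in queues.items():
        # Units at the head of each queue are due when their PTS <= STC.
        while queue and queue[0][0] <= stc:
            pts, unit = queue.pop(0)
            released.append((name, unit))
    return released
```

Calling this once per STC tick yields, at any instant, exactly the video, audio, and lighting units that share that presentation time, which is the synchronization the controller 2190 enforces.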

The synchronization controller 2190 may also control the AV data decoding unit 2160 and the lighting control data decoding unit 2170 so that the timing at which the AV data decoding unit 2160 decodes the AV data is synchronized with the timing at which the lighting control data decoding unit 2170 decodes the lighting control data.

Such synchronized playback control is achieved, for example, by controlling the video decoder 2162 to decode an access unit of video data when the STC matches its DTS, controlling the audio decoder 2164 to decode an access unit of audio data when the STC matches its DTS, and controlling the lighting decoder 2172 to decode an access unit of lighting control data when the STC matches its DTS.

In this way, the decode timing of the access units of the video data, audio data, and lighting control data may be controlled in addition to, or instead of, the output timing of those access units. This is because the order in which access units are output may differ from the order in which they are decoded. Through such control, the video data, audio data, and lighting control data can be played back in synchronization.
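The remark that decode order can differ from output order is easiest to see with bidirectionally predicted video: a B frame must be decoded after the later reference frame it depends on, yet presented before it. The small group of frames below is an illustrative example, not data from the patent; each frame carries its decode timestamp (DTS) and presentation timestamp (PTS).

```python
# (frame, dts, pts): the decoder consumes frames in DTS order,
# but the display receives them in PTS order.
frames = [("I0", 0, 1), ("P3", 1, 4), ("B1", 2, 2), ("B2", 3, 3)]

decode_order = [f for f, _, _ in sorted(frames, key=lambda x: x[1])]
present_order = [f for f, _, _ in sorted(frames, key=lambda x: x[2])]
```

Here P3 is decoded second (the B frames reference it) but displayed last, which is exactly why gating both timestamps against the STC, as described above, can be useful.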

The video data output by the video decoder 2162 is output to an external device (e.g., a TV) through the NTSC encoder 2200. The video decoder 2162 and the TV may be connected directly through the output terminal 2240 of the playback device 2100, or indirectly through a home LAN.

The audio data output by the audio decoder 2164 is output to an external device (e.g., a speaker) through a digital-to-analog converter (DAC) 2210. The audio decoder 2164 and the speaker may be connected directly through the output terminal 2250 of the playback device 2100, or indirectly through a home LAN.

The lighting control data output by the lighting decoder 2172 is output to an external device (e.g., a lighting device). The lighting decoder 2172 and the lighting device may be connected directly through the output terminal 2260 of the playback device 2100, or indirectly through a home LAN.

The stream data generated by the stream data generation unit 2130 may also contain encoded sub-video data and navigation data. For example, when the stream data contains encoded sub-video data and navigation data, the decomposition unit 2150 also separates the stream data into the encoded sub-video data and the navigation data. Although not shown in FIG. 6, the decoding unit 2140 may further include a navipack circuit, a sub-picture decoder, and a closed caption data decoder. The navipack circuit processes the navigation data to generate a control signal and outputs the control signal to the controller 2220. The sub-picture decoder decodes the encoded sub-video data and outputs sub-video data to the NTSC encoder 2200. The closed caption data decoder decodes the encoded closed caption data contained in the encoded video data and outputs closed caption data to the NTSC encoder 2200. The functions of these circuits are well known and unrelated to the subject of the present invention, so a detailed description is omitted. In this way, the decoding unit 2140 may include well-known components not shown in FIG. 6.

As described above, the playback device 2100 shown in FIG. 6 provides a playback device capable of playing back the lighting control data recorded on a recording medium in synchronization with the audio data and/or video data recorded on the same medium. By connecting an audio output device (e.g., a speaker), a video output device (e.g., a TV), and a lighting device to this playback device, the lighting pattern can be changed in conjunction with the music and/or video provided by the recording medium. A lighting pattern with a "healing" effect is, for example, one that reproduces the effect of sunlight filtering through foliage.

Industrial Applicability

As described above, the interactive device of the present invention can grasp the user's health status and determine an action pattern suited to that status. This frees the user from the trouble of being fitted with various sensors. Furthermore, the user comes to feel that the interactive device is a good friend who cares about his or her health. As a result, the value of the interactive device increases, and the user's satisfaction with, and desire for, the interactive device increase as well.

Claims (14)

1. An interactive device comprising:
a grasping unit for grasping a user's health status;
a determination unit for determining an action pattern suited to the user's health status grasped by the grasping unit;
an execution instruction unit for instructing execution of the action pattern determined by the determination unit;
a suggestion unit for suggesting, by voice, the action pattern to the user before execution of the action pattern determined by the determination unit is instructed; and
a judgment unit for judging, based on the user's answer to the suggested action pattern, whether the user has accepted the suggested action pattern,
wherein, when the judgment unit judges that the user's answer accepts the suggested action pattern, the execution instruction unit instructs execution of the suggested action pattern.
2. The interactive device according to claim 1, wherein the grasping unit grasps the user's health status based on the user's verbal expressions.
3. The interactive device according to claim 2, wherein the grasping unit grasps the user's health status based on keywords spoken by the user.
4. The interactive device according to claim 1, further comprising a suggestion-necessity judgment unit for judging whether it is necessary to suggest the action pattern to the user before execution of the action pattern determined by the determination unit is instructed,
wherein, when it is judged necessary to suggest the action pattern to the user before instructing its execution, the suggestion unit suggests the action pattern to the user by voice.
5. The interactive device according to claim 4, wherein the suggestion-necessity judgment unit judges whether a suggestion is necessary based on a flag value, assigned to the action pattern in advance, that indicates whether a suggestion is necessary.
6. The interactive device according to claim 4, wherein the suggestion-necessity judgment unit judges whether a suggestion is necessary based on the temporal distribution of the number of times the action pattern has been executed.
7. The interactive device according to claim 1, wherein the determination unit determines, as the action pattern suited to the user's health status, one of a plurality of action patterns to each of which a priority has been assigned, and adjusts the priority assigned to that action pattern according to whether the user accepts it.
8. The interactive device according to claim 1, further comprising a storage unit for storing action patterns suited to the user's health status,
wherein the determination unit determines the action pattern using the action patterns stored in the storage unit.
9. The interactive device according to claim 1, wherein the action pattern suggested to the user by the suggestion unit includes selection of content to be played by a playback device.
10. The interactive device according to claim 9, wherein the content includes audio data, video data, and lighting control data, and the playback device changes at least one of the light quantity and the light color of a lighting device according to the lighting control data.
11. The interactive device according to claim 1, wherein the interactive device has at least one of an agent function and a locomotion function.
12. The interactive device according to claim 1, wherein the user's health status represents at least one of the user's emotional state and the user's physical condition.
13. An interactive device comprising:
a voice input unit for converting a voice uttered by a user into a voice signal;
a voice recognition unit for recognizing the language spoken by the user based on the voice signal output by the voice input unit;
a dialogue database in which language that the user may speak is registered in advance and which holds the correspondence between the registered language and the user's health status;
a grasping unit for grasping the user's health status by comparing the language recognized by the voice recognition unit with the language registered in the dialogue database and determining the user's health status from the comparison result;
a determination unit for determining, based on an action pattern table holding the correspondence between the user's health status and action patterns of the interactive device, an action pattern suited to the user's health status grasped by the grasping unit;
an execution instruction unit for instructing execution of the action pattern determined by the determination unit;
a suggestion unit for suggesting, by voice, the action pattern to the user before execution of the action pattern determined by the determination unit is instructed, by synthesizing a suggestion sentence based on the output of the grasping unit and the output of the determination unit; and
a judgment unit for judging, based on the user's answer to the suggested action pattern, whether the user has accepted the suggested action pattern,
wherein, when the judgment unit judges that the user's answer accepts the suggested action pattern, the execution instruction unit instructs execution of the suggested action pattern.
14. The interactive device according to claim 13, further comprising:
an accepting unit for accepting an action pattern counter-suggested by the user in response to the suggested action pattern;
a unit for judging whether the interactive device can execute the counter-suggested action pattern; and
a unit for updating, when it is judged that the interactive device can execute the counter-suggested action pattern, the correspondence, held in the action pattern table, between the user's health status and the action patterns of the interactive device.
CNA038252929A 2002-09-20 2003-09-19 Interactive device Pending CN1701287A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2002276121 2002-09-20
JP276121/2002 2002-09-20

Publications (1)

Publication Number Publication Date
CN1701287A true CN1701287A (en) 2005-11-23

Family

ID=32025058

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA038252929A Pending CN1701287A (en) 2002-09-20 2003-09-19 Interactive device

Country Status (5)

Country Link
US (1) US20060100880A1 (en)
EP (1) EP1542101A1 (en)
JP (1) JPWO2004027527A1 (en)
CN (1) CN1701287A (en)
WO (1) WO2004027527A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104981188A (en) * 2013-05-14 2015-10-14 夏普株式会社 Electronic machine
CN108305640A (en) * 2017-01-13 2018-07-20 深圳大森智能科技有限公司 Intelligent robot active service method and device
CN111133469A (en) * 2017-09-22 2020-05-08 元多满有限公司 Chatbot-based user care system
CN111492425A (en) * 2017-12-19 2020-08-04 三星电子株式会社 Speech recognition apparatus and method
US12118991B2 (en) 2018-07-20 2024-10-15 Sony Corporation Information processing device, information processing system, and information processing method

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7359979B2 (en) * 2002-09-30 2008-04-15 Avaya Technology Corp. Packet prioritization and associated bandwidth and buffer management techniques for audio over IP
US20040073690A1 (en) 2002-09-30 2004-04-15 Neil Hepworth Voice over IP endpoint call admission
US7978827B1 (en) 2004-06-30 2011-07-12 Avaya Inc. Automatic configuration of call handling based on end-user needs and characteristics
US20070233285A1 (en) * 2004-09-14 2007-10-04 Kakuya Yamamoto Apparatus Control System and Apparatus Control Method
JP4677543B2 (en) * 2005-05-24 2011-04-27 株式会社国際電気通信基礎技術研究所 Facial expression voice generator
JP2007094544A (en) * 2005-09-27 2007-04-12 Fuji Xerox Co Ltd Information retrieval system
US20090197504A1 (en) * 2008-02-06 2009-08-06 Weistech Technology Co., Ltd. Doll with communication function
JP5255888B2 (en) * 2008-04-08 2013-08-07 日本電信電話株式会社 Pollen symptom diagnosis device, pollen symptom diagnosis support method, and pollen symptom diagnosis system
US8218751B2 (en) 2008-09-29 2012-07-10 Avaya Inc. Method and apparatus for identifying and eliminating the source of background noise in multi-party teleconferences
JP5201050B2 (en) * 2009-03-27 2013-06-05 ブラザー工業株式会社 Conference support device, conference support method, conference system, conference support program
EP2572302B1 (en) * 2010-05-19 2021-02-17 Sanofi-Aventis Deutschland GmbH Modification of operational data of an interaction and/or instruction determination process
KR101759190B1 (en) * 2011-01-04 2017-07-19 삼성전자주식회사 Method for reporting emergency stateduring call service in portable wireless terminal and apparatus thereof
US20130147395A1 (en) * 2011-12-07 2013-06-13 Comcast Cable Communications, Llc Dynamic Ambient Lighting
JP5776544B2 (en) * 2011-12-28 2015-09-09 トヨタ自動車株式会社 Robot control method, robot control device, and robot
JP5904021B2 (en) * 2012-06-07 2016-04-13 ソニー株式会社 Information processing apparatus, electronic device, information processing method, and program
JP2014059764A (en) * 2012-09-18 2014-04-03 Sharp Corp Self-propelled control device, method for controlling self-propelled control device, external device control system, self-propelled control device control program and computer-readable recording medium with the same recorded therein
US9380443B2 (en) 2013-03-12 2016-06-28 Comcast Cable Communications, Llc Immersive positioning and paring
JP2015184563A (en) * 2014-03-25 2015-10-22 シャープ株式会社 Interactive home appliance system, server device, interactive home appliance, method for home appliance system to perform dialogue, and program for realizing the method on a computer
JP6530906B2 (en) * 2014-11-28 2019-06-12 マッスル株式会社 Partner robot and its remote control system
DE112017002589T5 (en) 2016-05-20 2019-04-25 Groove X, Inc. Autonomous trading robot and computer program
JP2018049358A (en) * 2016-09-20 2018-03-29 株式会社イシダ Health management system
CN109117233A (en) * 2018-08-22 2019-01-01 百度在线网络技术(北京)有限公司 Method and apparatus for handling information
JP2020185618A (en) * 2019-05-10 2020-11-19 株式会社スター精機 Machine operation method, machine operation setting method, and machine operation confirmation method
JP6842514B2 (en) * 2019-08-22 2021-03-17 東芝ライフスタイル株式会社 Safety confirmation system using a refrigerator
US12295081B2 (en) 2022-01-06 2025-05-06 Comcast Cable Communications, Llc Video display environmental lighting
US12417828B2 (en) * 2023-05-30 2025-09-16 International Business Machines Corporation Expert crowdsourcing for health assessment learning from speech in the digital healthcare era

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6249720B1 (en) * 1997-07-22 2001-06-19 Kabushikikaisha Equos Research Device mounted in vehicle
US6405170B1 (en) * 1998-09-22 2002-06-11 Speechworks International, Inc. Method and system of reviewing the behavior of an interactive speech recognition application
US6606598B1 (en) * 1998-09-22 2003-08-12 Speechworks International, Inc. Statistical computing and reporting for interactive speech applications
US6697457B2 (en) * 1999-08-31 2004-02-24 Accenture Llp Voice messaging system that organizes voice messages based on detected emotion
JP2001148889A (en) * 1999-11-19 2001-05-29 Daiwa House Ind Co Ltd Integral operation system for in-house device
US6526382B1 (en) * 1999-12-07 2003-02-25 Comverse, Inc. Language-oriented user interfaces for voice activated services
JP2001188784A (en) * 1999-12-28 2001-07-10 Sony Corp Conversation processing apparatus and method, and recording medium
JP2001249945A (en) * 2000-03-07 2001-09-14 Nec Corp Feeling generation method and feeling generator
JP2002123289A (en) * 2000-10-13 2002-04-26 Matsushita Electric Ind Co Ltd Voice interaction device
US6975988B1 (en) * 2000-11-10 2005-12-13 Adam Roth Electronic mail method and system using associated audio and visual techniques

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104981188A (en) * 2013-05-14 2015-10-14 夏普株式会社 Electronic machine
CN104981188B (en) * 2013-05-14 2017-10-27 夏普株式会社 Electronic equipment
CN108305640A (en) * 2017-01-13 2018-07-20 深圳大森智能科技有限公司 Intelligent robot active service method and device
CN111133469A (en) * 2017-09-22 2020-05-08 元多满有限公司 Chatbot-based user care system
CN111133469B (en) * 2017-09-22 2023-08-15 元多满有限公司 User care system based on chatbot
CN111492425A (en) * 2017-12-19 2020-08-04 三星电子株式会社 Speech recognition apparatus and method
US12154559B2 (en) 2017-12-19 2024-11-26 Samsung Electronics Co., Ltd. Speech recognition device and method
US12118991B2 (en) 2018-07-20 2024-10-15 Sony Corporation Information processing device, information processing system, and information processing method

Also Published As

Publication number Publication date
US20060100880A1 (en) 2006-05-11
WO2004027527A1 (en) 2004-04-01
JPWO2004027527A1 (en) 2006-01-19
EP1542101A1 (en) 2005-06-15

Similar Documents

Publication Publication Date Title
CN1701287A (en) Interactive device
CN1187734C (en) Robot control apparatus
CN1270289C (en) Action teaching apparatus and action teaching method for robot system, and storage medium
CN1309535C (en) Robot device, method for controlling motion of robot device, and system for controlling motion of robot device
CN1248193C (en) Session device, session host device, session slave device, session control method, and session control program
CN1171650C (en) Voice recognition device, voice recognition method, and game machine using the same
CN1199149C (en) Dialogue processing equipment, method and recording medium
CN1237505C (en) User interface/entertainment equipment of imitating human interaction and loading relative external database using relative data
CN1461463A (en) speech synthesis device
CN1290034C (en) Robotic device and its behavior control method
CN1213401C (en) Program, speech interaction apparatus, and method
CN1908965A (en) Information processing apparatus and method, and program
CN1806755A (en) Method and apparatus for rendition of content data
CN1633690A (en) Digital recorder for selectively storing only a music section out of radio broadcasting contents and method thereof
CN1591569A (en) Speech communication system and method, and robot device
CN1220174C (en) Speech output apparatus
CN1298160C (en) Broadcast receiving method and system
CN1781140A (en) Audio conversation device, method, and robot device
CN1142647A (en) Voice Recognition Dialogue Device
CN1339997A (en) Edit device, edit method and recorded medium
CN1817311A (en) Judgment ability evaluation apparatus, robot, judgment ability evaluation method, program, and medium
CN1224368A (en) Game device, game processing method, and recording medium
CN1808566A (en) Playback apparatus and method
JP2005342862A (en) robot
JP2007034664A (en) Emotion estimation apparatus and method, recording medium, and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20051123

C20 Patent right or utility model deemed to be abandoned or is abandoned