CN105404161A - Intelligent voice interaction method and device - Google Patents
Intelligent voice interaction method and device
- Publication number
- CN105404161A CN105404161A CN201510735961.1A CN201510735961A CN105404161A CN 105404161 A CN105404161 A CN 105404161A CN 201510735961 A CN201510735961 A CN 201510735961A CN 105404161 A CN105404161 A CN 105404161A
- Authority
- CN
- China
- Prior art keywords
- user
- voice
- operation instruction
- instruction
- inherent operation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Telephonic Communication Services (AREA)
Abstract
The invention provides an intelligent voice interaction method and device. The intelligent voice interaction method comprises the following steps: starting voice interaction according to an operation of a user; receiving a voice instruction, input by the user, that belongs to user-defined custom content; obtaining the inherent operation instruction corresponding to the voice instruction, the inherent operation instruction being determined according to a pre-established corresponding relationship between the custom content and inherent operation instructions; and executing the corresponding operation according to the inherent operation instruction. The method can improve user experience and the accuracy of voice recognition.
Description
Technical Field
The invention relates to the technical field of voice processing, in particular to an intelligent voice interaction method and device.
Background
The user can control devices such as a smart phone or a smart household appliance through voice interaction. Current schemes usually require the user to speak in a fixed format; for example, a user who wants to set the air conditioner to 26 degrees Celsius can only say "set the air conditioner temperature to 26 degrees Celsius".
However, such fixed-format input is inconvenient for the user, and the relatively long utterances it requires also reduce the system's recognition accuracy.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, an object of the present invention is to provide an intelligent voice interaction method, which can improve user experience and improve voice recognition accuracy.
Another objective of the present invention is to provide an intelligent voice interaction device.
In order to achieve the above object, an intelligent voice interaction method provided in an embodiment of a first aspect of the present invention includes: starting voice interaction according to the operation of a user; receiving a voice instruction which belongs to the user-defined content and is input by a user, and acquiring an inherent operation instruction corresponding to the voice instruction, wherein the inherent operation instruction is determined according to a corresponding relation between the pre-established user-defined content and the inherent operation instruction; and executing corresponding operation according to the inherent operation instruction.
According to the intelligent voice interaction method provided by the embodiment of the first aspect of the invention, the user can trigger operations by speaking custom content set according to the user's own needs, which improves user experience and voice recognition accuracy.
In order to achieve the above object, an embodiment of a second aspect of the present invention provides an intelligent voice interaction apparatus, including: the starting module is used for starting voice interaction according to the operation of a user; the acquisition module is used for receiving a voice instruction which belongs to the user-defined content and is input by a user and acquiring an inherent operation instruction corresponding to the voice instruction, wherein the inherent operation instruction is determined according to a corresponding relation between the pre-established user-defined content and the inherent operation instruction; and the execution module is used for executing corresponding operation according to the inherent operation instruction.
According to the intelligent voice interaction device provided by the embodiment of the second aspect of the invention, the user can trigger operations by speaking custom content set according to the user's own needs, which improves user experience and voice recognition accuracy.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of an intelligent voice interaction method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for intelligent voice interaction according to another embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for intelligent voice interaction according to another embodiment of the present invention;
FIG. 4 is a schematic diagram comparing a voice interaction process according to an embodiment of the present invention with the current scheme;
FIG. 5 is a schematic structural diagram of an intelligent voice interaction apparatus according to another embodiment of the present invention;
fig. 6 is a schematic structural diagram of an intelligent voice interaction apparatus according to another embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar modules or modules having the same or similar functionality throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. On the contrary, the embodiments of the invention include all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
Fig. 1 is a schematic flow chart of an intelligent voice interaction method according to an embodiment of the present invention, where the method includes:
s11: and starting voice interaction according to the operation of the user.
The user can start voice interaction through a key on a mobile device, by opening an application program (APP) for voice interaction and pressing a function key in the APP, or, on other hardware devices such as smart household appliances, through voice wake-up or a key press.
S12: receiving a voice instruction which belongs to the self-defined content and is input by a user, and acquiring an inherent operation instruction corresponding to the voice instruction, wherein the inherent operation instruction is determined according to a corresponding relation between the pre-established self-defined content and the inherent operation instruction.
The inherent operation instruction is a default instruction of the system and corresponds to content in a fixed format, such as "call Xiao Ming's mobile phone number"; the custom content is content set by the user, such as "Amitabha".
The client can interact with the server, sends the collected voice instruction to the server, and obtains the corresponding inherent operation instruction from the server.
Referring to fig. 2, the implementation flow of S12 may include:
s21: the client receives a voice command which belongs to the user-defined content and is input by a user.
For example, after a user initiates a voice interaction on a client, the client may capture a voice command, such as "Amitabha," input by the user.
S22: and the client sends the voice instruction to the cloud end so that the cloud end performs voice recognition on the voice instruction and determines the inherent operation instruction corresponding to the voice instruction according to the corresponding relation between the pre-established user-defined content and the inherent operation instruction.
The cloud can pre-establish the corresponding relationship between the custom content and the inherent operation instruction. After receiving the voice instruction sent by the client, the cloud performs voice recognition on it and then determines the corresponding inherent operation instruction according to the pre-established corresponding relationship.
For example, if, in the corresponding relationship pre-established by the cloud, the inherent operation instruction for "Amitabha" is "call Xiao Ming's mobile phone number", the cloud determines "call Xiao Ming's mobile phone number" as the corresponding inherent operation instruction.
S23: and the client receives the inherent operation instruction sent by the cloud.
For example, the client receives the inherent operation instruction "call Xiao Ming's mobile phone number" sent by the cloud.
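The cloud-side resolution described in S22 amounts to looking up the recognized phrase in the pre-established correspondence table. The sketch below is a minimal illustration only; the mapping contents and function name are hypothetical and not part of the patent:

```python
# Minimal sketch of the cloud-side lookup in S22.
# The cloud keeps a mapping from custom phrases to inherent
# operation instructions, established in advance (see S31-S33).
CUSTOM_TO_INHERENT = {
    "Amitabha": "call Xiao Ming's mobile phone number",
    "open sesame": "start playing music",
}

def resolve_instruction(recognized_text):
    """Return the inherent operation instruction for a recognized
    custom phrase, or None if no mapping was registered."""
    return CUSTOM_TO_INHERENT.get(recognized_text)
```

If the lookup succeeds, the cloud returns the inherent operation instruction to the client (S23); otherwise the utterance could fall back to ordinary fixed-format recognition.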
In some embodiments, referring to fig. 3, the process of setting the corresponding relationship may include:
s31: a setup interface is presented to the user.
Taking voice interaction on a mobile phone as an example, the user can open the setting interface through a function key in the APP: the user opens the APP, which contains a setting button, and taps the button to open the setting interface. Alternatively, the setting interface can be opened through a system-level function of the phone, for example when voice interaction is integrated with the phone system so that the phone has a built-in voice interaction function.
S32: and receiving user-defined content input by a user in the setting interface in a voice or text mode, and acquiring an inherent operation instruction which is selected by the user and corresponds to the user-defined content.
After the setting interface is opened, the user can input custom content such as "Amitabha" by voice or text and select the operation it maps to, such as calling Xiao Ming's mobile phone number from the phone book. Similarly, "open sesame" can be mapped to an instruction to start playing music.
To ease user operation and the subsequent voice recognition, the custom content is usually shorter than the fixed-format content of the corresponding inherent operation instruction; for example, "Amitabha" is 4 words, while the corresponding "call Xiao Ming's mobile phone number" is 10 words.
S33: and sending the user-defined content and the inherent operation instruction to a cloud end so that the cloud end establishes a corresponding relation between the user-defined content and the inherent operation instruction.
For example, "Amitabha" together with "call Xiao Ming's mobile phone number", and "open sesame" together with "start playing music", are sent to the cloud; after receiving these contents, the cloud can establish the corresponding relationship between "Amitabha" and "call Xiao Ming's mobile phone number" and between "open sesame" and "start playing music".
After the corresponding relationship is established at the cloud end, the inherent operation instruction corresponding to the user-defined content can be determined according to the corresponding relationship and sent to the client.
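The S31-S33 setup flow can be sketched as a small store on the cloud side that records each custom phrase together with the inherent operation instruction the user selected. The class and method names below are hypothetical illustrations, not part of the patent:

```python
# Hypothetical sketch of the S31-S33 registration flow: the client sends a
# custom phrase plus the inherent operation instruction the user selected,
# and the cloud stores the pair for later lookups.
class CloudMappingStore:
    def __init__(self):
        self._mapping = {}

    def register(self, custom_phrase, inherent_instruction):
        # S33: the cloud establishes the corresponding relationship.
        self._mapping[custom_phrase] = inherent_instruction

    def lookup(self, custom_phrase):
        # Used later when resolving a recognized voice instruction.
        return self._mapping.get(custom_phrase)

store = CloudMappingStore()
store.register("Amitabha", "call Xiao Ming's mobile phone number")
store.register("open sesame", "start playing music")
```

A real deployment would key the store per user account, since different users define different custom content.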
S13: and executing corresponding operation according to the inherent operation instruction.
For example, after the client obtains the inherent operation instruction "call Xiao Ming's mobile phone number", it can look up Xiao Ming's mobile phone number in the phone book and dial it.
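Executing the instruction in S13 is then a dispatch from the fixed-format inherent instruction to a concrete handler on the client. A minimal sketch, with illustrative handler names and return strings that are assumptions rather than anything specified by the patent:

```python
# Hypothetical sketch of S13: the client maps the inherent operation
# instruction received from the cloud to a concrete device action.
def dial_xiao_ming():
    # A real client would look up the number in the phone book and dial.
    return "dialing Xiao Ming's mobile phone number"

def play_music():
    return "starting music playback"

HANDLERS = {
    "call Xiao Ming's mobile phone number": dial_xiao_ming,
    "start playing music": play_music,
}

def execute(inherent_instruction):
    handler = HANDLERS.get(inherent_instruction)
    if handler is None:
        return "unrecognized instruction"
    return handler()
```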
Referring to fig. 4, which compares the voice interaction flow 41 of the current scheme with the voice interaction flow 42 of this embodiment: for the user, the current scheme requires speaking 10 words in a fixed format, whereas this embodiment requires only 4 words, and those 4 words are user-defined. Because the input amount is reduced and the content better matches the user's habits, user experience is improved. For system recognition, the current scheme requires the system to accurately recognize 10 words, while this embodiment requires only 4, so recognition accuracy is higher.
In this embodiment, the user can trigger operations by speaking custom content set according to the user's own needs. This improves user experience, reduces input cost when the custom content is short, and improves the accuracy of the system's voice recognition. Custom content can also keep the input private and add entertainment value and intelligence to the interaction.
Fig. 5 is a schematic structural diagram of an intelligent voice interaction apparatus according to another embodiment of the present invention, where the apparatus 50 includes: a starting module 51, an obtaining module 52 and an executing module 53.
A starting module 51, configured to start voice interaction according to a user operation;
optionally, the starting module 51 is specifically configured to:
starting voice interaction according to the operation of a user on a key on the mobile equipment; or,
starting voice interaction according to a function key in an application program for voice interaction opened by a user; or,
voice interaction is initiated through voice wake-up or key pressing of other hardware devices.
The user can start voice interaction through a key on a mobile device, by opening an application program (APP) for voice interaction and pressing a function key in the APP, or, on other hardware devices such as smart household appliances, through voice wake-up or a key press.
An obtaining module 52, configured to receive a voice instruction belonging to a user-defined content and input by a user, and obtain an inherent operation instruction corresponding to the voice instruction, where the inherent operation instruction is determined according to a correspondence relationship between the pre-established user-defined content and the inherent operation instruction;
in some embodiments, referring to fig. 6, the obtaining module 52 includes:
the first unit 521 is configured to receive a voice instruction belonging to the customized content input by the user.
For example, after a user initiates a voice interaction on a client, the client may capture a voice command, such as "Amitabha," input by the user.
The second unit 522 is configured to send the voice instruction to the cloud, so that the cloud performs voice recognition on the voice instruction and determines an inherent operation instruction corresponding to the voice instruction according to a correspondence between pre-established user-defined content and the inherent operation instruction.
The cloud can pre-establish the corresponding relationship between the custom content and the inherent operation instruction. After receiving the voice instruction sent by the client, the cloud performs voice recognition on it and then determines the corresponding inherent operation instruction according to the pre-established corresponding relationship.
For example, if, in the corresponding relationship pre-established by the cloud, the inherent operation instruction for "Amitabha" is "call Xiao Ming's mobile phone number", the cloud determines "call Xiao Ming's mobile phone number" as the corresponding inherent operation instruction.
A third unit 523, configured to receive the inherent operation instruction sent by the cloud.
For example, the client receives the inherent operation instruction "call Xiao Ming's mobile phone number" sent by the cloud.
In some embodiments, referring to fig. 6, the apparatus 50 further comprises:
a presentation module 54 for presenting the setting interface to a user;
optionally, the display module is specifically configured to:
starting a setting interface according to a function key arranged in an application program; or,
and starting a setting interface through a system level function.
Taking voice interaction on a mobile phone as an example, the user can open the setting interface through a function key in the APP: the user opens the APP, which contains a setting button, and taps the button to open the setting interface. Alternatively, the setting interface can be opened through a system-level function of the phone, for example when voice interaction is integrated with the phone system so that the phone has a built-in voice interaction function.
The setting module 55 is configured to receive a user-defined content input by a user in the setting interface through voice or text, and obtain an inherent operation instruction selected by the user and corresponding to the user-defined content;
after the setting interface is started, a user can input custom content such as Amitabha through voice or characters, the mapped content such as Amitabha is selected correspondingly, and then the mobile phone number with the smallest middle and the smallest middle in a phone book of the mobile phone is called. Or, setting "open the door by the sesame" — starting to play the music instruction.
To ease user operation and the subsequent voice recognition, the custom content is usually shorter than the fixed-format content of the corresponding inherent operation instruction; for example, "Amitabha" is 4 words, while the corresponding "call Xiao Ming's mobile phone number" is 10 words.
A sending module 56, configured to send the custom content and the inherent operation instruction to a cloud end, so that the cloud end establishes a correspondence between the custom content and the inherent operation instruction.
For example, "Amitabha" together with "call Xiao Ming's mobile phone number", and "open sesame" together with "start playing music", are sent to the cloud; after receiving these contents, the cloud can establish the corresponding relationship between "Amitabha" and "call Xiao Ming's mobile phone number" and between "open sesame" and "start playing music".
After the corresponding relationship is established at the cloud end, the inherent operation instruction corresponding to the user-defined content can be determined according to the corresponding relationship and sent to the client.
And the execution module 53 is configured to execute a corresponding operation according to the inherent operation instruction.
For example, after the client obtains the inherent operation instruction "call Xiao Ming's mobile phone number", it can look up Xiao Ming's mobile phone number in the phone book and dial it.
Referring to fig. 4, which compares the voice interaction flow 41 of the current scheme with the voice interaction flow 42 of this embodiment: for the user, the current scheme requires speaking 10 words in a fixed format, whereas this embodiment requires only 4 words, and those 4 words are user-defined. Because the input amount is reduced and the content better matches the user's habits, user experience is improved. For system recognition, the current scheme requires the system to accurately recognize 10 words, while this embodiment requires only 4, so recognition accuracy is higher.
In this embodiment, the user can trigger operations by speaking custom content set according to the user's own needs. This improves user experience, reduces input cost when the custom content is short, and improves the accuracy of the system's voice recognition. Custom content can also keep the input private and add entertainment value and intelligence to the interaction.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (10)
1. An intelligent voice interaction method, comprising:
starting voice interaction according to the operation of a user;
receiving a voice instruction which belongs to the user-defined content and is input by a user, and acquiring an inherent operation instruction corresponding to the voice instruction, wherein the inherent operation instruction is determined according to a corresponding relation between the pre-established user-defined content and the inherent operation instruction;
and executing corresponding operation according to the inherent operation instruction.
2. The method of claim 1, wherein the obtaining the inherent operation instruction corresponding to the voice instruction comprises:
sending the voice instruction to a cloud end so that the cloud end performs voice recognition on the voice instruction and determines an inherent operation instruction corresponding to the voice instruction according to a corresponding relation between pre-established self-defined content and the inherent operation instruction;
and receiving the inherent operation instruction sent by the cloud.
3. The method of claim 2, further comprising:
displaying a setting interface to a user;
receiving user-defined content input by a user in the setting interface in a voice or text mode, and acquiring an inherent operation instruction which is selected by the user and corresponds to the user-defined content;
and sending the user-defined content and the inherent operation instruction to a cloud end so that the cloud end establishes a corresponding relation between the user-defined content and the inherent operation instruction.
4. The method of claim 3, wherein the presenting a settings interface to a user comprises:
starting a setting interface according to a function key arranged in an application program; or,
and starting a setting interface through a system level function.
5. The method of claim 1, wherein initiating a voice interaction based on a user action comprises:
starting voice interaction according to the operation of a user on a key on the mobile equipment; or,
starting voice interaction according to a function key in an application program for voice interaction opened by a user; or,
voice interaction is initiated through voice wake-up or key pressing of other hardware devices.
6. The method according to any one of claims 1-5, wherein the length of the custom content is smaller than the length of the fixed format content corresponding to the corresponding inherent operation instruction.
7. An intelligent voice interaction device, comprising:
the starting module is used for starting voice interaction according to the operation of a user;
the acquisition module is used for receiving a voice instruction which belongs to the user-defined content and is input by a user and acquiring an inherent operation instruction corresponding to the voice instruction, wherein the inherent operation instruction is determined according to a corresponding relation between the pre-established user-defined content and the inherent operation instruction;
and the execution module is used for executing corresponding operation according to the inherent operation instruction.
8. The apparatus of claim 7, wherein the obtaining module comprises:
the first unit is used for receiving a voice instruction which belongs to the user-defined content and is input by a user;
the second unit is used for sending the voice instruction to a cloud end so that the cloud end performs voice recognition on the voice instruction and determines an inherent operation instruction corresponding to the voice instruction according to a corresponding relation between pre-established custom content and the inherent operation instruction;
a third unit, configured to receive the inherent operation instruction sent by the cloud.
9. The apparatus of claim 8, further comprising:
the display module is used for displaying a setting interface to a user;
the setting module is used for receiving the user-defined content input by the user in the setting interface in a voice or text mode and acquiring the inherent operation instruction which is selected by the user and corresponds to the user-defined content;
and the sending module is used for sending the custom content and the inherent operation instruction to a cloud end so that the cloud end establishes a corresponding relation between the custom content and the inherent operation instruction.
10. The apparatus according to any one of claims 7-9, wherein the length of the custom content is smaller than the length of the fixed format content corresponding to the corresponding inherent operation instruction.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510735961.1A CN105404161A (en) | 2015-11-02 | 2015-11-02 | Intelligent voice interaction method and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510735961.1A CN105404161A (en) | 2015-11-02 | 2015-11-02 | Intelligent voice interaction method and device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN105404161A true CN105404161A (en) | 2016-03-16 |
Family
ID=55469711
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201510735961.1A Pending CN105404161A (en) | 2015-11-02 | 2015-11-02 | Intelligent voice interaction method and device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN105404161A (en) |
Cited By (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105895096A (en) * | 2016-03-30 | 2016-08-24 | 乐视控股(北京)有限公司 | Identity identification and voice interaction operating method and device |
| CN105955698A (en) * | 2016-05-04 | 2016-09-21 | 深圳市凯立德科技股份有限公司 | Voice control method and apparatus |
| CN107545895A (en) * | 2017-09-26 | 2018-01-05 | 联想(北京)有限公司 | Information processing method and electronic equipment |
| CN108632653A (en) * | 2018-05-30 | 2018-10-09 | 腾讯科技(深圳)有限公司 | Voice management-control method, smart television and computer readable storage medium |
| CN108962235A (en) * | 2017-12-27 | 2018-12-07 | 北京猎户星空科技有限公司 | Voice interactive method and device |
| CN109243450A (en) * | 2018-10-18 | 2019-01-18 | 深圳供电局有限公司 | Interactive voice recognition method and system |
| CN110347784A (en) * | 2019-05-23 | 2019-10-18 | 深圳壹账通智能科技有限公司 | Report form inquiring method, device, storage medium and electronic equipment |
| CN111128136A (en) * | 2019-11-28 | 2020-05-08 | 星络智能科技有限公司 | User-defined voice control method, computer equipment and readable storage medium |
| CN111785265A (en) * | 2019-11-26 | 2020-10-16 | 北京沃东天骏信息技术有限公司 | Smart speaker setting method and device, control method and device, smart speaker |
| CN113359501A (en) * | 2021-06-29 | 2021-09-07 | 前海沃乐家(深圳)智能生活科技有限公司 | Remote control system and method based on intelligent switch |
| CN120215388A (en) * | 2025-04-24 | 2025-06-27 | 深圳市致尚科技股份有限公司 | Control method based on multi-directional input device, multi-directional input device, control handle and information processing system |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080165937A1 (en) * | 2007-01-04 | 2008-07-10 | Darryl Moore | Call re-directed based on voice command |
| CN103226432A (en) * | 2013-05-22 | 2013-07-31 | 青岛旲天下智能科技有限公司 | Intelligent human-machine interaction operating system |
| CN104239371A (en) * | 2013-06-24 | 2014-12-24 | 腾讯科技(深圳)有限公司 | Instruction information processing method and device |
| CN104346127A (en) * | 2013-08-02 | 2015-02-11 | 腾讯科技(深圳)有限公司 | Realization method, realization device and terminal for voice input |
| CN104505093A (en) * | 2014-12-16 | 2015-04-08 | 佛山市顺德区美的电热电器制造有限公司 | Household appliance and voice interaction method thereof |
| CN104579873A (en) * | 2015-01-27 | 2015-04-29 | 三星电子(中国)研发中心 | Method and system for controlling intelligent home equipment |
| CN104599669A (en) * | 2014-12-31 | 2015-05-06 | 乐视致新电子科技(天津)有限公司 | Voice control method and device |
| CN204631465U (en) * | 2015-03-21 | 2015-09-09 | 中国石油大学(华东) | A Humanized Smart Home Control System with Remote Voice Control |
- 2015-11-02: Application CN201510735961.1A filed in China (CN); published as CN105404161A; status: Pending
Non-Patent Citations (1)
| Title |
|---|
| 抖斗书屋 et al.: "WPS2000 Techniques and Examples" (《WPS2000技巧与实例》), 31 May 2000 * |
Cited By (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105895096A (en) * | 2016-03-30 | 2016-08-24 | 乐视控股(北京)有限公司 | Identity identification and voice interaction operating method and device |
| CN105955698A (en) * | 2016-05-04 | 2016-09-21 | 深圳市凯立德科技股份有限公司 | Voice control method and apparatus |
| CN107545895B (en) * | 2017-09-26 | 2021-10-22 | 联想(北京)有限公司 | Information processing method and electronic device |
| CN107545895A (en) * | 2017-09-26 | 2018-01-05 | 联想(北京)有限公司 | Information processing method and electronic equipment |
| CN108962235A (en) * | 2017-12-27 | 2018-12-07 | 北京猎户星空科技有限公司 | Voice interactive method and device |
| CN108632653A (en) * | 2018-05-30 | 2018-10-09 | 腾讯科技(深圳)有限公司 | Voice management-control method, smart television and computer readable storage medium |
| CN108632653B (en) * | 2018-05-30 | 2022-04-19 | 腾讯科技(深圳)有限公司 | Voice control method, smart television and computer readable storage medium |
| CN109243450A (en) * | 2018-10-18 | 2019-01-18 | 深圳供电局有限公司 | Interactive voice recognition method and system |
| CN110347784A (en) * | 2019-05-23 | 2019-10-18 | 深圳壹账通智能科技有限公司 | Report form inquiring method, device, storage medium and electronic equipment |
| WO2021103788A1 (en) * | 2019-11-26 | 2021-06-03 | 北京沃东天骏信息技术有限公司 | Smart sound box setting method and apparatus, smart sound box control method and apparatus, and smart sound box |
| CN111785265A (en) * | 2019-11-26 | 2020-10-16 | 北京沃东天骏信息技术有限公司 | Smart speaker setting method and device, control method and device, smart speaker |
| CN111128136A (en) * | 2019-11-28 | 2020-05-08 | 星络智能科技有限公司 | User-defined voice control method, computer equipment and readable storage medium |
| CN113359501A (en) * | 2021-06-29 | 2021-09-07 | 前海沃乐家(深圳)智能生活科技有限公司 | Remote control system and method based on intelligent switch |
| CN120215388A (en) * | 2025-04-24 | 2025-06-27 | 深圳市致尚科技股份有限公司 | Control method based on multi-directional input device, multi-directional input device, control handle and information processing system |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN105404161A (en) | Intelligent voice interaction method and device | |
| CN105389099B (en) | Method and apparatus for voice recording and playback | |
| CN105634881B (en) | Application scene recommendation method and device | |
| CN104951335B (en) | The processing method and processing device of application program installation kit | |
| US10069818B2 (en) | Method, system, device, and terminal for network initialization of multimedia playback device | |
| CN104486451B (en) | Application program recommends method and device | |
| CN106293221B (en) | Touch pressure control method and equipment | |
| CN110945467B (en) | Disturbance-free method and terminal | |
| CN104951329A (en) | Configuration and start method of application template and mobile terminal | |
| CN111490927A (en) | Method, device and equipment for displaying message | |
| CN106775232B (en) | Method and device for setting key function through target application | |
| CN105550035A (en) | Background process control method and device | |
| CN108536415B (en) | Application volume control method, device, mobile terminal and computer readable medium | |
| CN105389113A (en) | Gesture-based application control method, device and terminal | |
| WO2016029351A1 (en) | Method and terminal for processing media file | |
| CN103916468A (en) | System upgrading method, terminal, server and upgrading system | |
| CN108345422A (en) | Application control method, apparatus, mobile terminal and computer-readable medium | |
| CN104767857A (en) | Telephone calling method and device based on cloud name cards | |
| CN104580705A (en) | Terminal | |
| CN108270661B (en) | Information reply method, device and equipment | |
| CN106789556B (en) | Expression generation method and device | |
| CN103648001A (en) | Switching method and apparatus | |
| CN105162979A (en) | Incoming call mute control method and smartwatch | |
| CN114415530A (en) | Control method, control device, electronic equipment and storage medium | |
| EP3035313A1 (en) | Method and apparatus for remote control |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20160316 |