CN105139850A - Speech interaction device, speech interaction method and speech interaction type LED asynchronous control system terminal

Info

Publication number: CN105139850A
Application number: CN201510493833.0A
Authority: CN (China)
Prior art keywords: voice, module, control system, command, information
Priority date / Filing date: 2015-08-12
Publication date: 2015-12-09
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 刘延, 何琳琳
Current Assignee / Original Assignee: Xian Novastar Electronic Technology Co Ltd
Application filed by: Xian Novastar Electronic Technology Co Ltd

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a speech interaction device, a speech interaction method, and a speech interaction type LED asynchronous control system terminal with the speech interaction device. The speech interaction device comprises a network transceiver module, an interaction mode identification module, a language type determination module, a speech information analyzing and processing module, a command executing and processing module, and an executing result processing module. According to speech commands arriving via different transmission paths, the speech interaction device can identify whether the speech interaction type LED asynchronous control system terminal is in a short-distance speech interaction mode or a remote speech interaction mode. It then determines the language type used in the speech command, and parses out and executes the control command corresponding to the speech command. Finally, a corresponding prompting speech is output according to the command execution result. Therefore, the speech interaction device realizes intelligent speech interaction between a user and the LED asynchronous control system terminal, provides a good user experience, and satisfies the user's demand for intelligence.

Description

Voice interaction device and method and voice interaction type LED asynchronous control system terminal
Technical Field
The invention belongs to the technical field of LED display control, and particularly relates to a voice interaction device, a voice interaction method and a voice interaction type LED asynchronous control system terminal adopting the voice interaction device.
Background
At present, systems based on an embedded system platform that use cloud services to realize voice interactive control are relatively mature and popular with a large number of users, such as Microsoft's mobile phone applications "XiaoIce" (小冰) and "Cortana" (小娜).
In the existing LED display control system industry, some voice-controlled LED control systems have been proposed. They mainly receive and transmit a few specific voice commands to realize simple functions, and can only achieve one-to-one control rather than intelligent, one-to-many control. The problems of these prior-art solutions can be summarized as follows: a) insufficient intelligence, since voice commands spoken with different tones or phrasings cannot be intelligently recognized; b) control commands cannot be sent to a plurality of terminal systems simultaneously; c) reply information from a plurality of terminal systems cannot be handled; and d) poor ease of use.
Disclosure of Invention
Therefore, aiming at the defects and shortcomings in the prior art, the invention provides a voice interaction device, a voice interaction type LED asynchronous control system terminal and a voice interaction method.
Specifically, the voice interaction device provided by the embodiment of the invention is applied to a voice interactive LED asynchronous control system terminal. The voice interaction device comprises a network transceiving module, an interactive mode identification module, a language type judgment module, a voice information analysis and processing module, a command execution and processing module, and an execution result processing module. The network transceiver module is used for receiving an input audio information data packet representing a voice command; the interactive mode identification module is used for analyzing the input audio information data packet to judge whether the voice interactive LED asynchronous control system terminal is currently in a close-range voice interactive mode or a remote voice interactive mode; the language type judgment module is used for analyzing the voice information from the input audio information data packet and judging the language type of the voice information; the voice information analysis and processing module is used for analyzing the voice information after the language type is judged, so as to obtain a corresponding control command; the command execution and processing module is used for executing the control command and generating a command execution result; and the execution result processing module is used for acquiring a corresponding prompt voice according to the command execution result and providing the prompt voice to the network transceiver module so that it is output as response information of the voice command.
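Purely as an illustration of this data flow, the following C sketch models the six modules as stages of a single handling function. The type names, the stub implementations, and the handle_voice_command entry point are assumptions made for the sketch, not elements taken from the patent.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical types -- illustrative only, not the patented implementation. */
typedef enum { MODE_NEAR, MODE_REMOTE } interaction_mode_t;
typedef enum { LANG_MANDARIN, LANG_CANTONESE, LANG_ENGLISH } language_t;

typedef struct {
    const unsigned char *data;   /* raw audio information data packet */
    size_t               len;
} audio_packet_t;

typedef struct {
    int  id;                     /* numeric control command id */
    char description[64];
} control_command_t;

/* Each stage stands in for one module of the voice interaction device. */
static interaction_mode_t identify_mode(const audio_packet_t *pkt)    { (void)pkt; return MODE_NEAR; }
static language_t         judge_language(const audio_packet_t *pkt)   { (void)pkt; return LANG_MANDARIN; }
static control_command_t  parse_command(const audio_packet_t *pkt, language_t lang)
{
    (void)pkt; (void)lang;
    control_command_t cmd = { 1, "set display brightness" };   /* stubbed result */
    return cmd;
}
static int  execute_command(const control_command_t *cmd)             { (void)cmd; return 0; /* 0 = success */ }
static void respond_with_prompt(int exec_result, interaction_mode_t mode)
{
    printf("mode=%s, result=%s -> send prompt voice\n",
           mode == MODE_NEAR ? "near" : "remote",
           exec_result == 0 ? "success" : "failure");
}

/* End-to-end handling of one voice command, in module order. */
static void handle_voice_command(const audio_packet_t *pkt)
{
    interaction_mode_t mode = identify_mode(pkt);        /* interactive mode identification module  */
    language_t         lang = judge_language(pkt);       /* language type judgment module           */
    control_command_t  cmd  = parse_command(pkt, lang);  /* voice information analysis module       */
    int                rc   = execute_command(&cmd);     /* command execution and processing module */
    respond_with_prompt(rc, mode);                       /* execution result processing module      */
}

int main(void)
{
    unsigned char raw[] = { 0x01, 0x02, 0x03 };
    audio_packet_t pkt = { raw, sizeof raw };
    handle_voice_command(&pkt);
    return 0;
}
```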
In an embodiment of the present invention, the voice interaction apparatus further includes a cloud docking processing module adapted to be connected with a cloud voice library. The language type judgment module is specifically used for comparing and analyzing the parsed voice information against the voice information categories acquired from the cloud voice library through the cloud docking processing module, so as to judge the language type; the voice information analysis and processing module is specifically used for obtaining, based on the voice information acquired from the cloud voice library by the cloud docking processing module, the control command corresponding to the parsed voice information as well as the prompt voices corresponding to successful and failed execution of the control command, respectively; and the execution result processing module is specifically used for obtaining, from the voice information analysis and processing module, the prompt voice corresponding to the command execution result.
In an embodiment of the present invention, the interactive mode recognition module is further adapted to connect to a local voice module and determine that the voice interactive LED asynchronous control system terminal is currently in a short-distance voice interactive mode when receiving audio information from the local voice module.
In an embodiment of the present invention, the voice interaction apparatus further includes: the image acquisition and processing module is suitable for being connected with a camera. When the voice interactive LED asynchronous control system terminal is currently in a remote voice interactive mode, the command execution and processing module further transmits the command execution result to the image acquisition and processing module, so that the image acquisition and processing module selectively acquires and processes an image obtained by shooting information currently displayed by an LED display screen of the voice interactive LED asynchronous control system terminal by the camera, and provides the image to the network transceiver module for output.
In addition, the voice interactive LED asynchronous control system terminal provided by the embodiment of the invention comprises an asynchronous control card and an LED display screen which are connected; the asynchronous control card comprises a network module and a programmable logic device. The asynchronous control card further comprises the above voice interaction device and a local voice module, the voice interaction device is connected between the network module and the programmable logic device, and the local voice module is connected with a loudspeaker, a microphone, and the interaction mode recognition module in the voice interaction device.
In an embodiment of the present invention, the voice interactive LED asynchronous control system terminal further includes a camera, where the camera is connected to the voice interaction device and is used for providing an image obtained by shooting information currently displayed on the LED display screen to the voice interaction device.
In addition, the voice interaction method provided by the embodiment of the invention is suitable for being executed in cooperation with a voice interactive LED asynchronous control system terminal. The voice interaction method comprises the following steps: (S1) receiving an input audio information packet representing a voice command; (S2) parsing the input audio information packet to determine whether the voice interactive LED asynchronous control system terminal is currently in a near-distance voice interactive mode or a remote voice interactive mode; (S3) parsing the voice information from the input audio information packet and determining a language type of the voice information; (S4) after the language type is judged, parsing the voice information to obtain a corresponding control command; (S5) executing the control command and generating a command execution result; and (S6) acquiring a corresponding prompt voice as response information of the voice command according to the command execution result.
In an embodiment of the present invention, the step (S3) includes comparing and analyzing the parsed voice information against the voice information categories obtained from the cloud voice library to determine the language type; and the step (S4) includes obtaining, based on the voice information obtained from the cloud voice library, a control command corresponding to the parsed voice information and obtaining prompt voices corresponding to successful and failed execution of the control command, respectively, for use in the step (S6).
In an embodiment of the present invention, when the voice interactive LED asynchronous control system terminal is currently in the remote voice interaction mode, after the step (S5), the voice interaction method further includes the steps of: and selectively acquiring and processing an image obtained by shooting information currently displayed on an LED display screen of the voice interactive LED asynchronous control system terminal by a camera according to the command execution result, so that a provider of the voice command can remotely view the image.
In one embodiment of the present invention, when the user transmits the voice command to the network module through a management terminal via a network, the step (S2) determines that the voice interactive LED asynchronous control system terminal is currently in a short-distance voice interactive mode; and when the user transmits the voice command to the network module through the management terminal via a server terminal, the step (S2) determines that the voice interactive LED asynchronous control system terminal is currently in a remote voice interactive mode.
Therefore, the embodiment of the invention can realize intelligent voice interaction between a user and the LED asynchronous control system terminal, is more convenient, more flexible, and simpler to use, offers better user experience, and can meet the user's demand for intelligence; it spares the user from operating various complicated software when configuring the LED display screen, since system control can be achieved through simple voice interaction alone.
Other aspects and features of the present invention will become apparent from the following detailed description, which proceeds with reference to the accompanying drawings. It is to be understood, however, that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the invention, for which reference should be made to the appended claims. It should be further understood that the drawings are not necessarily drawn to scale and that, unless otherwise indicated, they are merely intended to conceptually illustrate the structures and procedures described herein.
Drawings
The following detailed description of embodiments of the invention will be made with reference to the accompanying drawings.
Fig. 1 is an application block diagram of a voice interactive LED asynchronous control system terminal according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a hardware architecture of a voice interactive LED asynchronous control system terminal according to an embodiment of the present invention.
Fig. 3 is a voice interaction software implementation scheme in the voice interaction type LED asynchronous control system terminal according to the embodiment of the present invention.
Fig. 4 is a flowchart illustrating a voice interaction performed by a user through a management end via a network direct connection or a server end according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Please refer to Fig. 1, which is an application block diagram of a voice interactive LED asynchronous control system terminal according to an embodiment of the present invention. The voice interactive LED asynchronous control system terminal 20 of this embodiment can be applied in two ways. The first is short-distance interaction between the user and the voice interactive LED asynchronous control system terminal 20, which itself takes two forms: the user interacts directly with the terminal 20, or the user interacts with the terminal 20 through the management terminal 10 via a network (e.g., a local area network). The second is remote interaction, in which the user interacts with the voice interactive LED asynchronous control system terminal 20 through the management terminal 10 via the server terminal 30; in remote interaction, one or more voice interactive LED asynchronous control system terminals 20 may be provided, so that one-to-many voice interactive control can be realized.
Please refer to fig. 2, which is a schematic diagram of a hardware architecture of a voice interactive LED asynchronous control system terminal 20 according to an embodiment of the present invention. As shown in fig. 2, the present embodiment adds the functions of intelligent voice recognition and voice interaction on the basis of a general LED asynchronous control system terminal.
Specifically, the asynchronous control card 21 includes an embedded processor 211, a network module 212, a local voice module 213, a programmable logic device 215, an MCU module 216, and an LED display driver module 217. The network module 212 and the local voice module 213 are connected with the embedded processor 211, while the MCU module 216 and the LED display driving module 217 are connected with the programmable logic device 215.
The embedded processor 211 is, for example, an ARM microprocessor. It provides the platform on which the embedded operating system runs, serves as the core main control part of the asynchronous control card 21, and hosts the related software that runs on that operating system.
The network module 212 is used for information interaction with external devices, and may be a wired network module such as an Ethernet module, a wireless network module such as a 3G module or a WiFi module, or a combination thereof.
The local voice module 213 includes a recording processing part and a playback processing part. The recording processing part mainly converts the audio signal input by the user through the microphone 24 into a digital signal and provides it to the embedded processor 211, while the playback processing part performs digital-to-analog conversion on the audio signal fed back by the embedded processor 211 and sends it to the loudspeaker 23 for playback, thereby realizing a dialogue with the user. This module is mainly used when a user directly interacts with the voice interactive LED asynchronous control system terminal at short range.
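As a sketch only of the two audio paths just described (microphone to A/D conversion to processor, and processor to D/A conversion to loudspeaker), the following C fragment models the recording and playback halves around hypothetical adc_read_sample and dac_write_sample hooks; the actual codec interface of the local voice module is not specified in the patent.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical hardware hooks -- stand-ins for the codec the patent does not detail. */
static int16_t adc_read_sample(void)       { return 0; /* would read one PCM sample from the microphone ADC */ }
static void    dac_write_sample(int16_t s) { (void)s;  /* would push one PCM sample to the loudspeaker DAC  */ }

/* Recording path: microphone -> A/D conversion -> digital buffer for the embedded processor. */
static size_t record_block(int16_t *buf, size_t max_samples)
{
    for (size_t i = 0; i < max_samples; ++i)
        buf[i] = adc_read_sample();
    return max_samples;               /* number of samples handed to the processor */
}

/* Playback path: digital prompt voice from the processor -> D/A conversion -> loudspeaker. */
static void play_block(const int16_t *buf, size_t n_samples)
{
    for (size_t i = 0; i < n_samples; ++i)
        dac_write_sample(buf[i]);
}

int main(void)
{
    int16_t pcm[160];                    /* e.g. 10 ms of 16 kHz mono audio */
    size_t n = record_block(pcm, 160);   /* capture user speech             */
    play_block(pcm, n);                  /* loop it back as a trivial demo  */
    printf("processed %zu samples\n", n);
    return 0;
}
```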
The camera 22 connected to the embedded processor 211 is mainly used for transmitting the information displayed on the LED display screen 25 back to the management terminal 10 during remote interaction. In typical use, the user needs to perform parameter configuration and debugging of the LED display screen 25; while this is being done, the camera 22 acquires the current display content in real time and uploads it to the server 30 through the network module 212 connected to the embedded processor 211, and the server 30 processes the received data and then presents it to the management terminal 10.
The programmable logic device 215 is, for example, an FPGA (field programmable gate array) device. It mainly processes the information exchanged with the embedded processor 211, processes the display signal, and sends the processed display signal to the LED display screen 25. The display signal is output to the LED display screen 25 in two ways: one way goes through the LED display driving module 217 connected to the programmable logic device 215 and then, for example via a flat cable interface, to the corresponding LED lamp panel in the LED display screen 25; the other way is packed according to a network protocol, output through a network interface (connected to the programmable logic device 215 through, for example, a network PHY chip), and delivered to the corresponding LED lamp panel via the receiving card 26. In practical applications, the two modes can be used independently or simultaneously.
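To make the two output paths concrete, the sketch below routes a prepared frame either to the flat-cable LED display driver or packs it into fixed-size network packets for the receiving card. The frame size, the packet header bytes, and the 1400-byte payload are illustrative assumptions only; the patent does not define the actual protocol.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FRAME_BYTES (64 * 32 * 3)    /* example: one 64x32 RGB panel frame */
#define NET_PAYLOAD 1400             /* assumed per-packet payload size    */

typedef enum { PATH_FLAT_CABLE, PATH_NETWORK } output_path_t;

/* Stand-ins for the two hardware sinks described in the text. */
static void led_driver_write(const uint8_t *data, size_t len)
{
    (void)data;
    printf("flat cable: wrote %zu bytes to LED display driver module\n", len);
}
static void net_phy_send(const uint8_t *pkt, size_t len)
{
    (void)pkt;
    printf("network: sent %zu-byte packet toward receiving card\n", len);
}

/* Pack a frame into fixed-size packets with a tiny illustrative header. */
static void send_frame_over_network(const uint8_t *frame, size_t len)
{
    uint8_t pkt[4 + NET_PAYLOAD];
    uint16_t seq = 0;
    for (size_t off = 0; off < len; off += NET_PAYLOAD, ++seq) {
        size_t chunk = (len - off < NET_PAYLOAD) ? (len - off) : NET_PAYLOAD;
        pkt[0] = 0xA5;                    /* assumed magic byte      */
        pkt[1] = 0x01;                    /* assumed "frame data" id */
        pkt[2] = (uint8_t)(seq >> 8);
        pkt[3] = (uint8_t)(seq & 0xFF);
        memcpy(pkt + 4, frame + off, chunk);
        net_phy_send(pkt, 4 + chunk);
    }
}

/* Either path (or both) can be used, as described in the embodiment. */
static void output_display_signal(const uint8_t *frame, size_t len, output_path_t path)
{
    if (path == PATH_FLAT_CABLE)
        led_driver_write(frame, len);
    else
        send_frame_over_network(frame, len);
}

int main(void)
{
    static uint8_t frame[FRAME_BYTES] = {0};
    output_display_signal(frame, sizeof frame, PATH_FLAT_CABLE);
    output_display_signal(frame, sizeof frame, PATH_NETWORK);
    return 0;
}
```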
The MCU module (also referred to as a "single-chip microcomputer module") 216 is used, for example, to control a power module (not shown in Fig. 2) of the embedded processor 211, so as to prevent the embedded processor 211 from being damaged by a power-on surge, and also implements management functions such as resetting and powering off the embedded processor 211, thereby improving the stability of the system. In addition, the MCU module 216 may receive and process sensed signals such as temperature, humidity, voltage, and alarm signals.
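The following C sketch illustrates, under assumed hardware hooks (core_rail_enable, io_rail_enable, processor_reset_assert, delay_ms) that are not part of the disclosure, how such an MCU might sequence power and reset for the embedded processor and watch a few sensed values.

```c
#include <stdio.h>

/* Hypothetical MCU hardware-abstraction hooks; the patent does not specify them. */
static void core_rail_enable(int on)             { printf("core power rail %s\n", on ? "on" : "off"); }
static void io_rail_enable(int on)               { printf("I/O power rail %s\n",  on ? "on" : "off"); }
static void processor_reset_assert(int asserted) { printf("reset %s\n", asserted ? "asserted" : "released"); }
static void delay_ms(unsigned ms)                { (void)ms; /* busy-wait or timer on a real MCU */ }

/* Staged power-up: bring rails up in order while holding reset, so the embedded
   processor is not hit by an uncontrolled power-on transient. */
static void power_up_embedded_processor(void)
{
    processor_reset_assert(1);   /* keep the ARM processor in reset */
    core_rail_enable(1);         /* core rail first                 */
    delay_ms(10);
    io_rail_enable(1);           /* then the I/O rail               */
    delay_ms(50);                /* allow supplies to settle        */
    processor_reset_assert(0);   /* finally release reset           */
}

static void power_down_embedded_processor(void)
{
    processor_reset_assert(1);
    io_rail_enable(0);
    core_rail_enable(0);
}

/* The MCU can also watch simple sensed values and raise an alarm. */
static void check_sensors(int temperature_c, int humidity_pct, int voltage_mv)
{
    if (temperature_c > 70 || humidity_pct > 90 || voltage_mv < 4500)
        printf("alarm: T=%dC H=%d%% V=%dmV\n", temperature_c, humidity_pct, voltage_mv);
}

int main(void)
{
    power_up_embedded_processor();
    check_sensors(65, 40, 5000);   /* within the example thresholds: no alarm */
    power_down_embedded_processor();
    return 0;
}
```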
From the foregoing hardware architecture of the voice interactive LED asynchronous control system terminal 20, it can be seen that this embodiment allows the terminal 20 to be operated by voice and allows the complicated display screen configuration of the LED display screen 25 to be completed by the terminal 20 itself. This removes the need for the step-by-step configuration of the LED display screen 25 that previously could only be performed through operating software, and truly realizes intelligent LED screen configuration, display, and playback. The intelligence is also reflected in the fact that, in this embodiment, certain data can be acquired from the cloud voice library 40 (see Fig. 3) so that human language is analyzed and the real meaning the user wants to express is worked out, rather than performing simple voice recognition only. For example, recognition of Mandarin, Cantonese, and English can be realized, and this recognition is mainly the result of comparison and analysis against the data in the cloud voice library 40. In addition, the voice interactive LED asynchronous control system terminal 20 of this embodiment may choose whether to answer the user's questions in Mandarin, English, or Cantonese during interaction: as long as the management terminal 10 is set to an English system, the terminal 20 will interact in English throughout the interaction process, and if a standard Mandarin system is set, the terminal 20 will interact with the user in Mandarin.
Moreover, the camera 22 is needed when configuration and debugging of the LED display screen are performed remotely. On the one hand, the camera 22 provides correction data to the management terminal 10 for the correction service of the LED display screen 25; on the other hand, during configuration and debugging, the camera 22 transmits the information displayed on the LED display screen 25 to the management terminal 10 over the network, acting like a pair of human eyes on site, and after seeing the transmitted image the management terminal 10 can continue to control the voice interactive LED asynchronous control system terminal 20 to perform the next configuration step.
Referring to Fig. 1, Fig. 2 and Fig. 3 together, Fig. 3 shows a voice interaction software implementation scheme of the voice interactive LED asynchronous control system terminal 20 according to the embodiment of the present invention. It typically runs on the embedded processor 211 and corresponds to the role of the embedded processor 211 as a voice interaction device. The voice interactive LED asynchronous control system terminal 20 of this embodiment may receive audio information sent directly by a user, or may receive audio information sent by the user through the server 30.
The network transceiver module 2111, on the one hand, mainly receives from the network module 212 the audio information data packet sent by the management terminal 10 through the network or through the server terminal 30, and transmits the received audio information data packet to the interactive mode identification module 2113; on the other hand, it sends outgoing audio information to the management terminal 10 or the server terminal 30 through the network module 212, and sends image information captured by the camera 22 to the server terminal 30.
The interactive mode recognition module 2113 is configured to determine whether the audio information data packet received through the network module 212 comes from the management terminal 10 or from the server terminal 30. This can be done by agreeing on different protocols and identifiers with the management terminal 10 and the server terminal 30, so that by analyzing the audio information data packet the interactive mode recognition module 2113 can learn whether the user and the voice interactive LED asynchronous control system terminal 20 are currently in the short-distance voice interaction mode or the remote voice interaction mode; audio information received from the local voice module 213 defaults to the short-distance voice interaction mode. It is also worth mentioning that only one voice interaction mode is applied at a time: when the terminal 20 is judged to be currently in the remote voice interaction mode, audio inputs belonging to the short-distance voice interaction mode may be temporarily disabled or ignored, and vice versa.
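By way of illustration, the custom identifiers mentioned here could be as simple as a source byte in an agreed packet header. The C sketch below assumes a hypothetical two-byte header (magic byte plus source identifier); the actual protocol agreed with the management terminal and server is not disclosed.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Assumed packet sources; SRC_LOCAL covers audio arriving from the local voice module. */
typedef enum { SRC_MANAGEMENT = 0x01, SRC_SERVER = 0x02, SRC_LOCAL = 0x03 } packet_source_t;
typedef enum { MODE_NEAR, MODE_REMOTE } interaction_mode_t;

/* Hypothetical custom header agreed with the management terminal and server:
   byte 0 = magic, byte 1 = source identifier, bytes 2.. = audio payload. */
static int identify_interaction_mode(const uint8_t *pkt, size_t len, interaction_mode_t *mode)
{
    if (len < 2 || pkt[0] != 0x5A)       /* reject packets without the agreed magic */
        return -1;

    switch (pkt[1]) {
    case SRC_MANAGEMENT:                 /* came over the LAN from the management terminal */
    case SRC_LOCAL:                      /* or from the local microphone path (default)    */
        *mode = MODE_NEAR;
        return 0;
    case SRC_SERVER:                     /* relayed by the server terminal                 */
        *mode = MODE_REMOTE;
        return 0;
    default:
        return -1;
    }
}

int main(void)
{
    const uint8_t from_lan[]    = { 0x5A, SRC_MANAGEMENT, 0x00 /* audio payload... */ };
    const uint8_t from_server[] = { 0x5A, SRC_SERVER,     0x00 };
    interaction_mode_t m;

    if (identify_interaction_mode(from_lan, sizeof from_lan, &m) == 0)
        printf("LAN packet -> %s mode\n", m == MODE_NEAR ? "near" : "remote");
    if (identify_interaction_mode(from_server, sizeof from_server, &m) == 0)
        printf("server packet -> %s mode\n", m == MODE_NEAR ? "near" : "remote");
    return 0;
}
```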
The cloud docking processing module 2114 is mainly responsible for acquiring data from the cloud voice library 40 and processing the data. The cloud voice library 40 is a voice library that can be searched over the network.
The language type determination module 2115 is configured to process the audio information passed on by the interaction mode recognition module 2113 to obtain voice information, and to compare and analyze this voice information against the voice information categories obtained from the cloud voice library 40 through the cloud docking processing module 2114, so as to determine which language the management terminal 10 wants to use for the interaction.
The voice information analysis and processing module 2116 is used for recognizing and analyzing the content of the voice information after the language type has been judged. At this stage the cloud docking processing module 2114 is again used to search the cloud voice library 40 so that the voice command issued by the user can be analyzed and compared, identifying what the user wants the voice interactive LED asynchronous control system terminal 20 to do. A large amount of common dialog information corresponding to different scenes may be stored in the cloud voice library 40; by comparing the received voice information with the information stored there, the terminal 20 finally resolves the meaning of the user's voice command and converts it into a control command that is issued to the command execution and processing module 2117 for execution.
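As a sketch of the final conversion step, recognized text could be mapped to control commands through a keyword table, loosely analogous to the comparison against common dialog information described above. The command identifiers and keywords below are invented for the example; the patent does not enumerate concrete commands.

```c
#include <stdio.h>
#include <string.h>

/* Assumed command identifiers; the patent does not list concrete commands. */
typedef enum {
    CMD_NONE = 0,
    CMD_SET_BRIGHTNESS,
    CMD_SCREEN_TEST,
    CMD_POWER_OFF_TIMED
} command_id_t;

typedef struct {
    const char  *keyword;    /* phrase expected in the recognized text */
    command_id_t cmd;
} phrase_rule_t;

/* Stands in for the "common dialog information" compared against the cloud voice library. */
static const phrase_rule_t rules[] = {
    { "brightness",  CMD_SET_BRIGHTNESS  },
    { "screen test", CMD_SCREEN_TEST     },
    { "power off",   CMD_POWER_OFF_TIMED },
};

/* Map recognized text to a control command by keyword matching. */
static command_id_t text_to_command(const char *recognized_text)
{
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; ++i)
        if (strstr(recognized_text, rules[i].keyword) != NULL)
            return rules[i].cmd;
    return CMD_NONE;
}

int main(void)
{
    const char *utterance = "please run a screen test on panel one";
    printf("command id = %d\n", (int)text_to_command(utterance));   /* prints 2 */
    return 0;
}
```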
The command executing and processing module 2117 is mainly used for completing the execution of the control command and transmitting the execution result to the execution result processing module 2118. Furthermore, when the user and the voice interactive LED asynchronous control system terminal 20 are in the remote voice interactive mode, the execution result needs to be transmitted to the image collecting and processing module 2119, so that the image collecting and processing module 2119 collects and processes the information currently displayed on the LED display screen 25.
After receiving the execution result from the command execution and processing module 2117, the execution result processing module 2118 needs to determine whether the command execution succeeded or failed. Once the result is known, it obtains the prompt voice corresponding to success or failure from the voice information parsing and processing module 2116, packages it into a corresponding audio information data packet through the network transceiving module 2111, and then responds to the user through the network module 212. After receiving the prompt voice sent back, the management terminal 10 knows whether the voice interactive LED asynchronous control system terminal 20 has completed the voice command it issued. In addition to responding to the user through the network transceiver module 2111 via the network module 212, the execution result processing module 2118 also outputs the prompt voice to the local voice module 213 so that it is played locally at the voice interactive LED asynchronous control system terminal 20 by the speaker 23.
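A minimal C sketch of this step is given below: the prompt clip matching the execution result is packaged into a small response packet and also handed to the local playback path. The packet layout, the placeholder audio bytes, and the helper names are assumptions made for the sketch.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical prompt-voice clips: one for success, one for failure (placeholder bytes). */
static const uint8_t PROMPT_OK[]   = { 0x10, 0x11, 0x12 };
static const uint8_t PROMPT_FAIL[] = { 0x20, 0x21, 0x22 };

/* Stand-ins for the network transceiver path and the local voice module / speaker. */
static void network_send(const uint8_t *pkt, size_t len)   { (void)pkt;   printf("sent %zu-byte response packet\n", len); }
static void speaker_play(const uint8_t *audio, size_t len) { (void)audio; printf("played %zu-byte prompt locally\n", len); }

/* Package the prompt voice matching the execution result and answer the user both
   over the network and through the local loudspeaker. */
static void process_execution_result(int exec_result)
{
    const uint8_t *prompt = (exec_result == 0) ? PROMPT_OK : PROMPT_FAIL;
    size_t plen           = (exec_result == 0) ? sizeof PROMPT_OK : sizeof PROMPT_FAIL;

    uint8_t pkt[2 + 16];                  /* assumed small packet: 2-byte header + prompt clip */
    pkt[0] = 0x5A;                        /* assumed magic byte           */
    pkt[1] = (exec_result == 0) ? 1 : 0;  /* assumed success/failure flag */
    memcpy(pkt + 2, prompt, plen);

    network_send(pkt, 2 + plen);          /* reply toward management terminal or server */
    speaker_play(prompt, plen);           /* and play at the terminal itself            */
}

int main(void)
{
    process_execution_result(0);    /* success path */
    process_execution_result(-1);   /* failure path */
    return 0;
}
```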
The image collecting and processing module 2119 is mainly used when the voice interactive LED asynchronous control system terminal 20 is currently in the remote voice interaction mode. After receiving the execution result output by the command execution and processing module 2117, it controls the camera 22 to shoot an image of the information currently displayed on the LED display screen 25, temporarily stores the image in a local storage space, and then uploads the stored image information to the server 30 through the network module 212 by means of the network transceiving module 2111. The server 30 thus holds the current display information of the LED display screen 25, and the management terminal 10 can learn what the terminal 20 is currently displaying by acquiring the image information from the server 30; voice commands can then be issued continuously according to the displayed result, instructing the terminal 20 what to do next. It can be understood that the image acquisition and processing module 2119 is only used for configuring and screen-testing the LED display screen 25 in the remote voice interaction mode; for other operations such as timed reminders or timed power on/off, the terminal 20 only needs to execute the command sent by the user and return the execution result afterwards, and no image acquisition is needed. In other words, the image capturing and processing module 2119 is used selectively according to the voice command in the remote voice interaction mode.
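The selective use of the image path can be expressed as a simple gate on the interaction mode and the class of command, as in the C sketch below; the command classes, the temporary file path, and the camera and upload helpers are illustrative assumptions.

```c
#include <stdio.h>

typedef enum { MODE_NEAR, MODE_REMOTE } interaction_mode_t;

/* Assumed command classes; only screen configuration / screen test need a picture back. */
typedef enum { CMD_SCREEN_CONFIG, CMD_SCREEN_TEST, CMD_TIMED_POWER, CMD_TIMED_REMINDER } command_class_t;

/* Stand-ins for the camera, local storage and upload path. */
static int camera_capture(const char *path)   { printf("captured frame to %s\n", path); return 0; }
static int upload_to_server(const char *path) { printf("uploaded %s to server\n", path); return 0; }

/* The image path is used selectively: only in remote mode and only for commands
   whose effect must be seen on the LED display screen. */
static int needs_image_feedback(interaction_mode_t mode, command_class_t cmd)
{
    return mode == MODE_REMOTE &&
           (cmd == CMD_SCREEN_CONFIG || cmd == CMD_SCREEN_TEST);
}

static void after_command_executed(interaction_mode_t mode, command_class_t cmd)
{
    if (!needs_image_feedback(mode, cmd)) {
        printf("no image capture needed; prompt voice alone is returned\n");
        return;
    }
    const char *tmp = "/tmp/led_display_snapshot.jpg";   /* assumed temporary path */
    if (camera_capture(tmp) == 0)
        upload_to_server(tmp);
}

int main(void)
{
    after_command_executed(MODE_REMOTE, CMD_SCREEN_CONFIG);   /* captures and uploads */
    after_command_executed(MODE_REMOTE, CMD_TIMED_POWER);     /* skipped              */
    after_command_executed(MODE_NEAR,   CMD_SCREEN_TEST);     /* skipped              */
    return 0;
}
```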
Referring to Fig. 4, a flowchart of a user performing voice interaction through a management terminal 10 via a network (e.g., a local area network) or via a server terminal 30 according to an embodiment of the present invention is shown. First, the voice interactive LED asynchronous control system terminal 20 is powered on and started. After the connection is established over the network, the user of the management terminal 10 issues a command to select the language type as needed, and once the setting succeeds, the terminal 20 responds according to the voice commands issued by the user of the management terminal 10. The terminal 20 will typically provide at least two selectable language categories, Chinese and English. After the language category is set, the user of the management terminal 10 can carry out voice interaction with the terminal 20. In addition, in the remote voice interaction mode, the management terminal 10 may search for and connect to a plurality of voice interactive LED asynchronous control system terminals 20 and may send a voice command to all of them simultaneously, while receiving the response information of each terminal 20 individually as it replies. Each voice interactive LED asynchronous control system terminal 20 has its own unique identification code, and when responding to a voice command from the user of the management terminal 10 it first reports its unique identification code and then reports the response content.
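For illustration, the reply framing described here (unique identification code first, then response content) could look like the following C sketch; the identifier format and the reply strings are invented for the example.

```c
#include <stdio.h>
#include <string.h>

#define MAX_RESPONSE 128

/* Each terminal reports its unique identification code before the response content,
   so the management terminal can tell the replies of several terminals apart. */
static void build_response(const char *terminal_id, const char *content,
                           char *out, size_t out_size)
{
    snprintf(out, out_size, "[%s] %s", terminal_id, content);
}

int main(void)
{
    /* One management terminal, several terminals answering the same broadcast command. */
    const char *ids[]     = { "NOVA-0001", "NOVA-0002", "NOVA-0003" };   /* assumed ID format */
    const char *replies[] = { "brightness set to 60%",
                              "brightness set to 60%",
                              "command failed: screen offline" };

    char msg[MAX_RESPONSE];
    for (size_t i = 0; i < sizeof ids / sizeof ids[0]; ++i) {
        build_response(ids[i], replies[i], msg, sizeof msg);
        printf("management terminal received: %s\n", msg);
    }
    return 0;
}
```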
Consider the case where the user of the management terminal 10 interacts with the voice interactive LED asynchronous control system terminal 20 through the network in a short-distance, line-of-sight situation (corresponding to the short-distance voice interaction mode). When the user of the management terminal 10 issues an LED display screen configuration command, the terminal 20 receives the voice command, recognizes its content, and compares it with commonly used human utterances in the cloud voice library 40 so as to finally recognize the real intention of the user, i.e. to analyze the voice command. The terminal 20 then asks the user of the management terminal 10 whether to execute the "XXX" command (corresponding to the content of the voice command), and upon receiving the user's confirmation it executes the command. After execution is finished, the terminal 20 responds to the management terminal 10 with the corresponding information according to the execution result. After receiving the response message, the management terminal 10 can instruct the terminal 20 to perform the next step as required.
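This ask-then-execute exchange can be pictured as a two-state dialog, sketched in C below. The state names, the yes/no check, and the reply strings are assumptions standing in for the real recognition and dialog handling.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical dialog states for the "ask before executing" exchange. */
typedef enum { ST_IDLE, ST_AWAIT_CONFIRMATION } dialog_state_t;

typedef struct {
    dialog_state_t state;
    char           pending_command[64];   /* the "XXX" read back to the user */
} dialog_t;

static void execute(const char *cmd) { printf("executing: %s\n", cmd); }

/* Feed one recognized utterance into the dialog; returns the spoken reply. */
static const char *dialog_step(dialog_t *d, const char *utterance)
{
    static char reply[96];

    if (d->state == ST_IDLE) {
        /* A new command was parsed; read it back and wait for confirmation. */
        snprintf(d->pending_command, sizeof d->pending_command, "%s", utterance);
        d->state = ST_AWAIT_CONFIRMATION;
        snprintf(reply, sizeof reply, "Execute \"%s\"?", d->pending_command);
        return reply;
    }

    /* ST_AWAIT_CONFIRMATION: a simple yes/no check stands in for real recognition. */
    if (strstr(utterance, "yes") != NULL) {
        execute(d->pending_command);
        d->state = ST_IDLE;
        return "Command executed.";
    }
    d->state = ST_IDLE;
    return "Command cancelled.";
}

int main(void)
{
    dialog_t d = { ST_IDLE, "" };
    printf("%s\n", dialog_step(&d, "set LED display scan configuration"));
    printf("%s\n", dialog_step(&d, "yes"));
    return 0;
}
```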
When the management terminal 10 interacts with the voice interactive LED asynchronous control system terminal 20 via the server terminal 30 in a remote, out-of-sight situation (corresponding to the remote voice interaction mode), the voice command sent by the user of the management terminal 10 and the information responded by the terminal 20 (including the prompt voice information, the captured image information, and the like) are relayed through the server terminal 30; as a result, the overall response speed of the terminal 20 is slightly slower than in short-distance interaction.
In conclusion, the embodiment of the invention can realize intelligent voice interaction between the user and the LED asynchronous control system terminal, is more convenient, more flexible, and simpler to use, offers better user experience, and can meet the user's demand for intelligence; it spares the user from operating various complicated software when configuring the LED display screen, since system control can be achieved through simple voice interaction alone.
It should be noted that the voice interactive LED asynchronous control system terminal according to the above embodiment of the present invention is not limited to using the cloud voice library for language type determination and voice information analysis; a local voice library may also be configured, and accordingly either the cloud voice library or the local voice library may be used when the language type determination and the voice information analysis are performed.
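As a sketch of how the two libraries might be combined, the fragment below tries the cloud voice library when the network is available and falls back to the local library otherwise; both look-up functions are stubs, since neither interface is specified in the patent.

```c
#include <stdio.h>
#include <string.h>

/* Stand-ins for the two look-up back ends; neither interface is specified in the patent. */
static int cloud_library_lookup(const char *audio_tag, char *text, size_t n)
{
    (void)audio_tag;
    snprintf(text, n, "set brightness to 60 percent");   /* pretend the cloud recognized it */
    return 0;                                            /* 0 = success                     */
}
static int local_library_lookup(const char *audio_tag, char *text, size_t n)
{
    (void)audio_tag; (void)text; (void)n;
    return -1;                                           /* pretend the local library misses */
}

/* Try the cloud voice library when the network is available, otherwise (or on failure)
   fall back to the locally configured voice library. */
static int recognize(const char *audio_tag, int network_available, char *text, size_t n)
{
    if (network_available && cloud_library_lookup(audio_tag, text, n) == 0)
        return 0;
    return local_library_lookup(audio_tag, text, n);
}

int main(void)
{
    char text[64];
    if (recognize("utterance-001", 1, text, sizeof text) == 0)
        printf("recognized via cloud: %s\n", text);
    if (recognize("utterance-002", 0, text, sizeof text) != 0)
        printf("local library had no match\n");
    return 0;
}
```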
So far, the principle and the implementation of the voice interaction device and the method and the voice interaction type LED asynchronous control system terminal of the present invention have been explained in the present document by applying specific examples, and the description of the above embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention, and the scope of the present invention should be subject to the appended claims.

Claims (10)

1. A voice interaction device is applied to a voice interaction type LED asynchronous control system terminal; characterized in that the voice interaction device comprises:
the network transceiving module is used for receiving an input audio information data packet representing a voice command;
the interactive mode identification module is used for analyzing the input audio information data packet to judge whether the voice interactive LED asynchronous control system terminal is currently in a short-distance voice interactive mode or a remote voice interactive mode;
the language type judging module is used for analyzing the voice information from the input audio information data packet and judging the language type of the voice information;
the voice information analyzing and processing module is used for analyzing the voice information after judging the language type to obtain a corresponding control command;
the command execution and processing module is used for executing the control command and generating a command execution result; and
the execution result processing module is used for acquiring a corresponding prompt voice according to the command execution result and providing the prompt voice to the network transceiver module so as to output the prompt voice as response information of the voice command.
2. The voice interaction apparatus of claim 1, further comprising: a cloud docking processing module adapted to be connected with a cloud voice library; wherein,
the language type judgment module is specifically used for comparing and analyzing the parsed voice information against the voice information categories acquired from the cloud voice library through the cloud docking processing module, so as to judge the language type;
the voice information analyzing and processing module is specifically used for obtaining, based on the voice information acquired from the cloud voice library by the cloud docking processing module, the control command corresponding to the parsed voice information and the prompt voices corresponding to successful and failed execution of the control command, respectively; and
the execution result processing module is specifically used for obtaining, from the voice information analyzing and processing module, the prompt voice corresponding to the command execution result.
3. The voice interaction device of claim 1, wherein the interaction mode recognition module is further adapted to connect to a local voice module and determine that the voice-interactive LED asynchronous control system terminal is currently in the short-range voice interaction mode when receiving audio information from the local voice module.
4. The voice interaction apparatus of claim 1, further comprising: the image acquisition and processing module is suitable for being connected with a camera; when the voice interactive LED asynchronous control system terminal is currently in a remote voice interactive mode, the command execution and processing module further transmits the command execution result to the image acquisition and processing module, so that the image acquisition and processing module selectively acquires and processes an image obtained by shooting information currently displayed by an LED display screen of the voice interactive LED asynchronous control system terminal by the camera, and provides the image to the network transceiver module for output.
5. A voice interactive LED asynchronous control system terminal, comprising an asynchronous control card and an LED display screen which are connected; the asynchronous control card comprises a network module and a programmable logic device; the asynchronous control card further comprises a voice interaction device according to claim 1 and a local voice module, wherein the voice interaction device is connected between the network module and the programmable logic device, and the local voice module is connected with a speaker, a microphone, and the interaction mode recognition module in the voice interaction device.
6. The voice interactive LED asynchronous control system terminal according to claim 5, further comprising a camera connected to the voice interaction device and used for providing the voice interaction device with an image obtained by photographing the information currently displayed on the LED display screen.
7. A voice interaction method is suitable for being executed in cooperation with a voice interactive LED asynchronous control system terminal; the voice interaction method is characterized by comprising the following steps:
(S1) receiving an input audio information packet representing a voice command;
(S2) parsing the input audio information packet to determine whether the voice interactive LED asynchronous control system terminal is currently in a near-distance voice interactive mode or a remote voice interactive mode;
(S3) parsing the voice information from the input audio information packet and determining a language type of the voice information;
(S4) after the language type is judged, parsing the voice information to obtain a corresponding control command;
(S5) executing the control command and generating a command execution result; and
(S6) acquiring a corresponding prompt voice as response information of the voice command according to the command execution result.
8. The voice interaction method of claim 7,
the step (S3) comprises comparing and analyzing the parsed voice information against the voice information categories obtained from the cloud voice library to determine the language type; and
the step (S4) comprises obtaining, based on the voice information obtained from the cloud voice library, a control command corresponding to the parsed voice information, and obtaining prompt voices corresponding to successful and failed execution of the control command, respectively, for use in the step (S6).
9. The voice interaction method of claim 7, wherein when the voice interactive LED asynchronous control system terminal is currently in the remote voice interaction mode, further comprising, after the step (S5), the steps of:
selectively acquiring and processing, according to the command execution result, an image obtained by a camera shooting the information currently displayed on the LED display screen of the voice interactive LED asynchronous control system terminal, so that a provider of the voice command can remotely view the image.
10. The voice interaction method of claim 7, wherein when the user transmits the voice command to the network module through a management terminal via a network, the step (S2) determines that the voice interactive LED asynchronous control system terminal is currently in the short-distance voice interaction mode; and when the user transmits the voice command to the network module through the management terminal via a server terminal, the step (S2) determines that the voice interactive LED asynchronous control system terminal is currently in the remote voice interaction mode.
Priority Applications (1)

CN201510493833.0A: Speech interaction device, speech interaction method and speech interaction type LED asynchronous control system terminal (priority date 2015-08-12; filing date 2015-08-12; status: Pending).

Publications (1)

CN105139850A, published 2015-12-09.

Family

ID=54725172

Country Status (1)

CN: CN105139850A (en)



Legal Events

C06 / PB01: Publication (application publication date: 2015-12-09)
C10 / SE01: Entry into substantive examination; entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication