
HK1181888B - Speech recognition for context switching - Google Patents


Info

Publication number
HK1181888B
HK1181888B (application HK13109201.1A)
Authority
HK
Hong Kong
Prior art keywords
context
application
game
user interface
computing device
Prior art date
Application number
HK13109201.1A
Other languages
Chinese (zh)
Other versions
HK1181888A (en)
Inventor
M.J.蒙森
W.P.基斯
D.J.格里纳沃尔特
Original Assignee
Microsoft Technology Licensing, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1181888A publication Critical patent/HK1181888A/en
Publication of HK1181888B publication Critical patent/HK1181888B/en

Description

Speech recognition for context switching
Technical Field
The present application relates to a context switching technique, and more particularly, to a technique for speech recognition for context switching.
Background
Many computer applications provide a variety of different contexts and graphical user interfaces through which a user can interact with the application. For example, video games typically include different user interfaces that allow a user to access various functions provided by the video game. Some user interfaces may allow a user to customize certain portions of a game, such as a game arena for playing the game, a vehicle for playing the game, and so forth. Other user interfaces may allow a user to participate in various types of game play, such as single-player game play, multi-player game play, and so forth. While these different user interfaces may provide a more varied and entertaining gaming experience, current ways of navigating between the various user interfaces are cumbersome.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Various embodiments provide techniques for implementing speech recognition for context switching. In at least some embodiments, the techniques can allow a user to switch between different contexts and/or user interfaces of an application through voice commands. For example, a gaming application may include a variety of different user interfaces that provide different interaction contexts and functionality. Some user interfaces may provide game play functionality, while other user interfaces may provide game customization functionality. The techniques described herein may allow a user to navigate among the various user interfaces by issuing voice commands.
In at least some embodiments, a context menu is provided that lists the available contexts of an application that can be navigated to by voice command. For example, a user may speak a trigger word while a context-specific user interface of an application is being displayed. Recognition of the trigger word may cause a context menu to be displayed as part of the user interface. The context menu may include other contexts that may be navigated to by voice commands. In an implementation, the other contexts presented in the context menu include a subset of a larger set of contexts that are filtered based on a variety of context filtering criteria. A user may speak one of the contexts presented in the context menu to cause navigation to a user interface associated with a different context.
Drawings
The detailed description is described with reference to the accompanying drawings. In the drawings, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears. The use of the same reference numbers in different instances in the description and the figures may indicate similar or identical items.
FIG. 1 is an illustration of an example operating environment in which techniques described herein can be employed in accordance with one or more embodiments.
FIG. 2 is an illustration of an example context switch scenario in accordance with one or more embodiments.
FIG. 3 is an illustration of an example context switch scenario in accordance with one or more embodiments.
FIG. 4 is a flow diagram that describes steps in a method in accordance with one or more embodiments.
FIG. 5 is a flow diagram that describes steps in a method in accordance with one or more embodiments.
FIG. 6 illustrates an example system including the computing device described with reference to FIGS. 1 and 7 in accordance with one or more embodiments.
FIG. 7 illustrates an example computing device that can be used to implement embodiments described herein.
Detailed Description
Overview
Various embodiments provide techniques for implementing speech recognition for context switching. In at least some embodiments, the techniques can allow a user to switch between different contexts and/or user interfaces of an application through voice commands. For example, a gaming application may include a variety of different user interfaces that provide different interaction contexts and functionality. Some user interfaces may provide game play functionality, while other user interfaces may provide game customization functionality. The techniques described herein may allow a user to navigate among the various user interfaces by issuing voice commands.
In at least some embodiments, a context menu is provided that lists the available contexts of an application that can be navigated to by voice command. For example, a user may speak a trigger word while a context-specific user interface of an application is being displayed. Recognition of the trigger word may cause a context menu to be displayed as part of the user interface. The context menu may include other contexts that may be navigated to by voice commands. In an implementation, the other contexts presented in the context menu include a subset of a larger set of contexts that are filtered based on a variety of context filtering criteria. A user may speak one of the contexts presented in the context menu to cause navigation to a user interface associated with a different context.
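The trigger-word/context-menu flow described above can be sketched in code. This is an illustrative sketch only: the class, the `"xbox"` trigger word, and the example context names are assumptions for the example, not details taken from the patent.

```python
# Hypothetical sketch of the trigger-word flow: speaking the trigger word
# shows a menu of other contexts; speaking one of them selects it.
ALL_CONTEXTS = ["garage", "race", "multiplayer", "replays"]  # example names


class ContextMenu:
    """Presents a filtered list of contexts that can be spoken to navigate."""

    def __init__(self, trigger_word="xbox", contexts=None):
        self.trigger_word = trigger_word
        self.contexts = contexts if contexts is not None else ALL_CONTEXTS
        self.visible = False
        self.options = []

    def on_speech(self, utterance, current_context):
        """Handle one recognized utterance.

        Returns the context to navigate to, or None if no navigation occurs.
        """
        word = utterance.strip().lower()
        if word == self.trigger_word:
            # Trigger word recognized: show the menu, filtered here simply
            # to exclude the context the user is already in.
            self.visible = True
            self.options = [c for c in self.contexts if c != current_context]
            return None
        if self.visible and word in self.options:
            self.visible = False
            return word  # navigate to the spoken context
        return None
```

A typical exchange: `on_speech("xbox", "garage")` shows the menu and returns `None`; a following `on_speech("race", "garage")` returns `"race"`, the context to navigate to.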
In the discussion that follows, a section entitled "Operating environment" is provided and describes one environment in which one or more embodiments can be employed. Following this, a section entitled "Example context switching scenarios" describes example context switch scenarios in accordance with one or more embodiments. Next, a section entitled "Example methods" describes example methods in accordance with one or more embodiments. Finally, a section entitled "Example systems and devices" describes an example system and an example device that can be used to implement one or more embodiments.
Operating environment
FIG. 1 illustrates an operating environment in accordance with one or more embodiments, generally at 100. Operating environment 100 includes a computing device 102, which may be configured in a variety of ways. For example, computing device 102 may be embodied as any suitable computing device, such as, by way of example and not limitation, a gaming console, a desktop computer, a portable computer, a handheld computer such as a Personal Digital Assistant (PDA), a cellular telephone, and so forth. One example configuration of computing device 102 is shown and described below in fig. 7.
Included as part of computing device 102 are one or more applications 104, which are representations of functionality that allows a wide variety of tasks to be performed via computing device 102. For example, the application 104 may be executed by the computing device 102 to provide functionality such as video games, word processing, email, spreadsheets, media content consumption, and so forth.
An input/output module 106 is also included as part of computing device 102, representing functionality for sending and receiving information. For example, the input/output module 106 may be configured to receive input generated by an input device such as a keyboard, mouse, touch pad, game controller, optical scanner, or the like. The input/output module 106 may also be configured to receive and/or interpret input received through non-contact mechanisms such as voice recognition, gesture-based input, object scanning, and the like. In these embodiments, the computing device 102 also includes a Natural User Interface (NUI) device 108 configured to receive various non-contact inputs, for example, through visual recognition of human gestures, object scanning, voice input, color input, and so forth.
A speech recognition module 110 is included as part of the input/output module 106, which is a representation of functionality that recognizes and converts speech input (e.g., from the NUI device 108) into a form that other entities can use to perform tasks.
Further to the techniques discussed herein, the application 104 includes one or more context modules 112 that are representations of functionality that allow the application to switch between various contexts and/or user interfaces associated with the application. In at least some embodiments, the context module 112 is configured to receive input from the input/output module 106 and/or the speech recognition module 110 to implement the techniques discussed herein.
Operating environment 100 also includes a display device 114 coupled with computing device 102. In at least some embodiments, display device 114 is configured to receive and display output from computing device 102, such as a user interface generated by application 104 and provided to display device 114 through input/output module 106. In an implementation, the input/output module 106 may receive input (e.g., voice input) from the NUI device 108 and may utilize the input to allow a user to interact with the context module 112 to navigate among various contexts and/or user interfaces provided by the application 104. Further implementations of operating environment 100 are described below.
Generally, any of the functions described herein can be implemented using software, firmware, hardware (e.g., fixed logic circuitry), or a combination of these implementations. The terms "module," "functionality," and "logic" as used herein generally represent software, firmware, hardware, or a combination thereof. In the case of a software implementation, the module, functionality, or logic represents program code that performs specified tasks when executed on a processor (e.g., CPU or CPUs). The program code can be stored in one or more computer readable memory devices. The features of the techniques described below are platform-independent, meaning that the techniques may be implemented on a variety of commercial computing platforms having a variety of processors.
For example, the computing device 102 may also include an entity (e.g., software) that causes hardware of the computing device 102 to perform operations, such as processors, functional blocks, and so on. For example, the computing device 102 may include a computer-readable medium configured to maintain instructions that cause the computing device, and in particular hardware of the computing device 102, to perform operations. Thus, the instructions are used to configure hardware to perform operations and in this way cause hardware transformations to perform functions. The instructions may be provided by the computer-readable medium to the computing device 102 through a variety of different configurations.
One such computer-readable medium configuration is a signal bearing medium and thus is configured to transmit instructions (e.g., as a carrier wave) to the hardware of the computing device, e.g., over a network. The computer-readable medium may also be configured as a computer-readable storage medium and thus is not a signal bearing medium. Examples of computer readable storage media include Random Access Memory (RAM), Read Only Memory (ROM), optical disks, flash memory, hard disk memory, and other storage devices that may use magnetic, optical, and other technologies for storing instructions and other data.
Example context switching scenarios
This section describes example context switch scenarios that may be enabled by the techniques discussed herein. In at least some embodiments, an example context switch scenario can be implemented by aspects of the operating environment 100 as discussed above and/or the example system 600 as discussed below. Accordingly, certain aspects of the example context switch scenario will be discussed with reference to features of the operating environment 100 and/or the example system 600. This is for purposes of example only, and aspects of the example context switch scenario may be implemented in a variety of different operating environments and systems without departing from the spirit and scope of the claimed embodiments.
Fig. 2 illustrates an example context switch scenario generally at 200. In the upper half of the context switch scenario 200, the display device 114 displays a customization interface 202 associated with the gaming application. In implementations, the customization interface 202 allows a user to customize various aspects of a gaming application, for example, by switching components of a gaming vehicle, changing the color of the vehicle, and so forth. As such, the customization interface 202 is associated with a particular set of functionality that enables various tasks associated with the gaming application to be performed.
Also shown in the upper half of the context switch scenario is a speech input 204 to the NUI device 108. The speech input 204 represents words and/or other utterances that may be spoken by a user and sensed by one or more audio sensing tools of the NUI device 108. Trigger words 206 are included as part of the speech input 204, which represent words that can be spoken to activate the speech recognition functionality discussed herein.
Continuing with the lower half of the context switch scenario 200, recognition of the speech input 204 (e.g., trigger word 206) causes a context menu 208 to be presented in the customization interface 202. The context menu 208 includes context options that can be selected to navigate to other contexts associated with the gaming application. For example, a context option may be spoken to select a particular context option and cause navigation to a graphical user interface associated with the particular context option. The context options presented in the context menu 208 may include filtered context options that are filtered based on one or more filtering criteria. Example ways to filter context options are discussed below.
Fig. 3 illustrates an example context switch scenario generally at 300. In an implementation, the context switch scenario 300 represents a continuation of the context switch scenario 200 as discussed above. In the upper half of the context switch scenario 300, the customization interface 202 is displayed along with the context menu 208. A speech input 302 comprising context words 304 is received at the NUI device 108. In this example, context word 304 represents the selection of a context option from context menu 208.
Continuing to the lower half of the context switch scenario 300, recognition of the speech input 302 causes a tournament interface 306 to be displayed on the display device 114. The tournament interface 306 may allow a user to participate in one or more tournaments associated with the gaming application. As such, the tournament interface 306 may be associated with a particular set of functions that enable game play-related actions to be performed. In an implementation, the functionality represented by the tournament interface 306 is different than the functionality represented by the customization interface 202 as discussed above. In this way, the techniques described herein may allow switching between different sets of functions via speech input.
Although the context switch scenario is discussed above with reference to the context menu being presented, at least some embodiments may allow context switches without requiring context menu presentation. For example, a user may speak a trigger word after a contextual word, which may cause a switch from one context to another independently of the presentation of a context menu. As such, context words may represent words that may be spoken to invoke a particular context, user interface, and/or functionality.
Having described an example context switch scenario, consider now a discussion of an example method in accordance with one or more embodiments.
Example method
Many methods that may be implemented for performing the techniques described herein are discussed below. Aspects of the methods may be implemented using hardware, firmware, software, or a combination thereof. The methodologies are shown as a set of blocks that specify operations performed by one or more devices and are not necessarily limited to the orders shown for performing the operations by the respective blocks. Moreover, operations illustrated with respect to a particular method may be combined and/or interchanged with operations of different methods in accordance with one or more implementations. Aspects of the method may be implemented through interaction between various entities by reference to the environment 100 as discussed above and by reference to the system 600 discussed below.
FIG. 4 is a flow diagram that describes steps in a method in accordance with one or more embodiments. Step 400 displays a graphical user interface associated with a first context. For example, a game graphical user interface associated with a particular set of functions may be displayed. Step 402 identifies spoken trigger words indicating potential navigation to different contexts. A wide variety of different trigger words may be implemented to indicate potential navigation.
Step 404 presents a context menu that includes one or more different contexts that may be navigated to. The one or more different contexts may be determined by filtering a set of contexts based on various different filtering criteria. Examples of such filtering criteria are discussed below. In an implementation, the context menu may be displayed as part of the graphical user interface associated with the first context.
Step 406 determines whether a speech input of a context word is recognized within a particular time interval after the trigger word is recognized. For example, a timer may start when the spoken trigger word is detected and/or the context menu is presented. If no speech input of a context word is recognized within the particular time interval ("NO"), the process returns to step 400. For example, the context menu may be removed from display and the graphical user interface associated with the first context brought into focus.
If a speech input of a contextual word is received in a particular time interval ("YES"), step 408 navigates to a graphical user interface associated with a second context. The graphical user interface associated with the second context may be associated with a set of functions that is different from the user interface associated with the first context. In an implementation, a graphical user interface associated with the second context may be navigated to and displayed in response to a voice command (e.g., a trigger word and/or a context word) and independent of additional input from the user.
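The timer-gated flow of FIG. 4 can be sketched as follows. This is a minimal illustration under stated assumptions: the class name, the injectable clock, and the 5-second listen window are all invented for the example; the patent does not specify a window length.

```python
import time

# Illustrative sketch of the FIG. 4 flow: after the trigger word is
# recognized, a context word is only accepted within a fixed time window.
LISTEN_WINDOW_S = 5.0  # assumed window length, not from the patent


class ContextSwitcher:
    def __init__(self, contexts, clock=time.monotonic):
        self.contexts = set(contexts)
        self.clock = clock        # injectable clock, so the logic is testable
        self.deadline = None      # time at which the listen window closes

    def on_trigger(self):
        """Steps 402/404: trigger word recognized, open the listen window."""
        self.deadline = self.clock() + LISTEN_WINDOW_S

    def on_word(self, word):
        """Steps 406/408: accept a context word only inside the window.

        Returns the second context to navigate to, or None (the "NO" branch,
        which leaves the first context's interface in focus).
        """
        if self.deadline is None or self.clock() > self.deadline:
            self.deadline = None
            return None
        if word in self.contexts:
            self.deadline = None
            return word
        return None
```

Injecting the clock keeps the sketch deterministic: a test can advance a fake clock past the deadline instead of sleeping.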
Although implementations are described herein with respect to combinations of trigger words and context words, this is not intended to be limiting. For example, some implementations may use speech recognition of a single word and/or phrase to navigate from a user interface associated with a first context to a user interface associated with a second context.
FIG. 5 is a flow diagram that describes steps in a method in accordance with one or more embodiments. In an implementation, the method may represent a more detailed implementation of step 404 as discussed above. In at least some embodiments, the method can be implemented at least in part by one or more context modules 112.
Step 500 filters a set of context options for an application. For example, the set of context options may be filtered based on one or more context-specific criteria (e.g., attributes of the application, the device on which the application is executing, or a user of the device). In an implementation, different sets of context options are available for different application versions. For example, a high-level version of an application may have more context options than a standard version of the application. For example, the advanced version may have access to more types of game play, more customization options, more multiplayer options, etc. than the standard version of the application.
Also, the state of the application may be used to filter context options. For example, if a gaming application does not have a saved game, context options associated with the saved game (e.g., viewing a replay of a previous game) may not be available.
The properties of the device may also affect the available context options. For example, if a device is not connected to a network (e.g., the internet) or the device's network connection is below a certain threshold bandwidth, certain network-related context options may not be available. Such network-related contextual options may include multiplayer network game play, content available from network resources (e.g., vehicles, game characters, arenas, etc.), messaging services that utilize network resources, and so forth.
Moreover, the particular capabilities of the device may also affect the available context options. For example, certain game play options that require threshold data and/or graphics processing capabilities may not be available on devices that do not meet the threshold processing capabilities.
The user's attributes may also affect the available context options. For example, applications may be associated with different account membership grades to which users may subscribe to access different resources and/or functions. The advanced membership grade may give the user extended access, such as extended multiplayer gaming time, more arena options, more vehicle options, more game character options, and the like, as compared to the standard membership grade.
Filtering on the attributes of the user may also take into account security controls associated with the user account. For example, younger users may be prevented from accessing certain game content and/or functionality available to older users. In this way, context options may be filtered based on the age of the user and/or permissions associated with the user. Various other considerations may also be taken into account when filtering context options.
Step 502 generates a set of available context options for the application. For example, the available context options may correspond to a subset of context options that are not filtered out from the set of context options as discussed above. Step 504 allows available context options to be selected via voice commands to navigate to different contexts. For example, one or more of the available context options may be displayed as part of a context menu, as described above. Additionally or alternatively, one or more of the available context options may be selectable by voice command independent of being displayed.
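The filtering of steps 500-502 can be sketched concretely. The patent describes the criteria only in prose, so every field and threshold below (edition names, bandwidth figure, age field) is an assumption made for illustration:

```python
from dataclasses import dataclass


@dataclass
class Environment:
    """Assumed filtering inputs: application, device, and user attributes."""
    app_edition: str = "standard"   # "standard" or "advanced" version
    has_saved_game: bool = False    # application state
    bandwidth_kbps: int = 0         # device network connection
    user_age: int = 18              # user attribute / security controls


@dataclass
class ContextOption:
    name: str
    requires_edition: str = "standard"
    requires_saved_game: bool = False
    min_bandwidth_kbps: int = 0
    min_age: int = 0


def available_contexts(options, env):
    """Step 500/502: return the subset of options that survive filtering."""
    def allowed(o):
        if o.requires_edition == "advanced" and env.app_edition != "advanced":
            return False
        if o.requires_saved_game and not env.has_saved_game:
            return False
        if env.bandwidth_kbps < o.min_bandwidth_kbps:
            return False
        if env.user_age < o.min_age:
            return False
        return True

    return [o.name for o in options if allowed(o)]
```

The surviving names would then populate the context menu (or be made speakable without display), per step 504.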
Having described a method in accordance with one or more embodiments, consider now an example system and an example device that can be used to implement one or more embodiments.
Example systems and devices
FIG. 6 illustrates an example system 600 showing the computing device 102 implemented in an environment in which multiple devices are interconnected through a central computing device. The central computing device may be local to the multiple devices or may be located remotely from them. In one embodiment, the central computing device is a "cloud" server farm that includes one or more server computers connected to the multiple devices through a network, the Internet, or another data communication link.
In one embodiment, the interconnect architecture enables functionality to be provided across multiple devices to provide a common and seamless experience to users of the multiple devices. Each of the multiple devices may have different physical requirements and capabilities, and the central computing device uses a platform to enable an experience to be delivered to the device that is both tailored to the device and yet common to all devices. In one embodiment, a target device "class" is created and the experience is adapted to the generic device class. A class of devices may be defined by physical characteristics or uses or other common characteristics of the devices. For example, as described above, the computing device 102 is configured in a variety of different ways, such as for mobile 602, computer 604, and television 606 uses. Each of these configurations has a generally corresponding screen size, and thus the computing device 102 may be configured as one of these device classes in this example system 600. For example, the computing device 102 may assume the mobile device 602 class of device, which includes mobile phones, music players, gaming devices, and so forth.
The computing device 102 may also assume a computer 604 device class that includes personal computers, laptops, netbooks, and so on. Television 606 configurations include configurations of devices that involve display in a casual environment, such as televisions, set-top boxes, game consoles, and so forth. Thus, the techniques described herein may be supported by these various configurations of the computing device 102 and are not limited to the specific examples described in the following sections.
Cloud 608 is shown to include a platform 610 for web services 612. Platform 610 abstracts the underlying functionality of the hardware (e.g., servers) and software resources of cloud 608, and thus may serve as a "cloud operating system." For example, the platform 610 may abstract resources to connect the computing device 102 with other computing devices. The platform 610 may also be used to abstract the scaling of resources to provide a corresponding level of scaling to meet encountered demand for the web services 612 implemented via the platform 610. Various other examples are also contemplated, such as load balancing of servers in a server farm, protection against malicious parties (e.g., spam, viruses, and other malware), and so forth.
Thus, cloud 608 is included as part of a policy regarding software and hardware resources available to computing device 102 via the internet or other network. For example, the techniques for speech recognition for context switching described herein may be implemented as part of the computing device 102 and through the platform 610 supporting the web service 612.
In implementations, input to the computing device 102 may be detected using touchscreen functionality in the mobile configuration 602, trackpad functionality in the computer 604 configuration, or by a camera as part of support for a Natural User Interface (NUI) that does not involve contact with a particular input device, and so forth. Moreover, execution of operations to implement the techniques discussed herein may be distributed over the system 600, such as by being executed by the computing device 102 and/or by a web service 612 supported by the platform 610 of the cloud 608.
Fig. 7 illustrates various components of an example device 700 that can be implemented as any type of portable and/or computer device as described with reference to fig. 1 and 6 to implement embodiments of the techniques for context switched speech recognition described herein. Device 700 includes a communication device 702 that enables wired and/or wireless communication of device data 704 (e.g., received data, data being received, data scheduled for broadcast, data packets of the data, etc.). The device data 704 or other device content can include configuration settings of the device, media content stored on the device, and/or information associated with a user of the device. Media content stored on device 700 can include any type of audio, video, and/or image data. Device 700 includes one or more data inputs 706 via which any type of data, media content, and/or inputs can be received, such as user-selectable inputs, messages, music, television media content, recorded video content, and any other type of audio, video, and/or image data received from any content source and/or data source.
Device 700 also includes communication interfaces 708 that can be implemented as any one or more of a serial and/or parallel interface, a wireless interface, any type of network interface, a modem, and as any other type of communication interface. The communication interfaces 708 provide a connection and/or communication links between device 700 and a communication network by which other electronic, computing, and communication devices communicate data with device 700.
Device 700 includes one or more processors 710 (e.g., any of microprocessors, controllers, and the like) which process various computer-executable or readable instructions to control the operation of device 700 and to implement embodiments of speech recognition for context switching as described above. Additionally or alternatively, device 700 can be implemented with any one or combination of hardware, firmware, or fixed logic circuitry that is implemented in connection with processing and control circuits which are generally identified at 712. Although not shown, device 700 can include a system bus or data transfer system that couples the various components within the device. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures.
Device 700 also includes computer-readable media 714, such as one or more memory components, examples of which include Random Access Memory (RAM), non-volatile memory (e.g., any one or more of a read-only memory (ROM), flash memory, EPROM, EEPROM, etc.), and a disk storage device. A disk storage device may be implemented as any type of magnetic or optical storage device, such as a hard disk drive, a recordable and/or rewriteable Compact Disc (CD), any type of a Digital Versatile Disc (DVD), and the like. The device 700 may also include a mass storage media device 716.
Computer-readable media 714 provides data storage mechanisms to store the device data 704, as well as various device applications 718 and any other types of information and/or data related to operational aspects of device 700. For example, an operating system 720 can be maintained as a computer application with the computer-readable media 714 and executed on processors 710. The device applications 718 can include a device manager (e.g., a control application, software application, signal processing and control module, code that pertains to a particular device, a hardware abstraction layer for a particular device, etc.), as well as other applications, which can include a web browser, an image processing application, a communication application (such as an instant messaging application), a word processing application, and various other different applications. The device applications 718 also include system components or modules for implementing embodiments of the context switched speech recognition techniques described herein.
In this example, the device applications 718 include an interface application 722 and a gesture capture driver 724 that are shown as software modules and/or computer applications. Gesture capture driver 724 represents software for providing an interface with a device configured to capture gestures (e.g., a touch screen, track pad, camera, etc.). Additionally or alternatively, the interface application 722 and the gesture-capture driver 724 may be implemented as hardware, software, firmware, or any combination thereof.
The device 700 also includes an audio and/or video input-output system 726 that provides audio data to an audio system 728 and/or video data to a display system 730. The audio system 728 and/or the display system 730 can include any devices that process, display, and/or otherwise render audio, video, and image data. Video signals and audio signals may be communicated from device 700 to an audio device and/or a display device via an RF (radio frequency) link, S-video link, composite video link, component video link, DVI (digital video interface), analog audio connection, or other similar communication link. In an embodiment, the audio system 728 and/or the display system 730 are implemented as external components to device 700. Alternatively, the audio system 728 and/or the display system 730 are implemented as integrated components of the example device 700.
Conclusion
Various embodiments provide speech recognition techniques for context switching. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
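The flow the embodiments describe (a spoken trigger word causes a context menu to be presented, the menu lists only contexts the device can support, and a context word spoken within a set interval causes navigation) can be sketched as follows. This is an illustrative sketch only: the trigger word, the five-second window, and the device-attribute thresholds are assumptions, not values taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class Device:
    # Hypothetical device attributes used as context-specific filter criteria.
    processing_score: int
    network_bandwidth_kbps: int

@dataclass
class AppContext:
    name: str
    min_processing: int = 0        # illustrative capability requirements
    min_bandwidth_kbps: int = 0

def filter_contexts(contexts, device):
    """Keep only the contexts the current device can support."""
    return [c for c in contexts
            if device.processing_score >= c.min_processing
            and device.network_bandwidth_kbps >= c.min_bandwidth_kbps]

class ContextSwitcher:
    TRIGGER_WORD = "magic"         # hypothetical spoken trigger word
    LISTEN_WINDOW = 5.0            # the "particular time interval", in seconds

    def __init__(self, contexts, device):
        self.available = filter_contexts(contexts, device)
        self.current = None
        self._armed_at = None      # time at which the context menu was presented

    def on_word(self, word, now):
        """Feed one recognized word; return the context navigated to, if any."""
        if word == self.TRIGGER_WORD:
            self._armed_at = now   # trigger word recognized: present the menu
            return None
        if self._armed_at is not None and now - self._armed_at <= self.LISTEN_WINDOW:
            for ctx in self.available:
                if word == ctx.name:   # context word matches a menu entry
                    self.current = ctx
                    self._armed_at = None
                    return ctx
        return None
```

A recognizer would call `on_word` once per recognized token; because unsupported contexts are filtered out before the menu is built, they can never be selected, and a context word arriving after the window expires is ignored.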

Claims (12)

1. A computer-implemented method, comprising:
presenting a context menu in a user interface associated with an application context of the application in response to recognition of a spoken trigger word;
filtering a set of application contexts of the application to identify at least one other application context based on one or more attributes of a device on which the application is executing, the one or more attributes including processing capabilities of the device;
presenting the at least one other application context as part of the context menu, thereby enabling navigation to the at least one other application context via voice input commands;
identifying a voice input of a contextual word associated with the at least one other application context within a particular time interval after the recognition of the spoken trigger word; and
navigating from the user interface associated with the application context to a user interface associated with the at least one other application context in response to the recognition of the voice input of the contextual word within the particular time interval.
2. The method of claim 1, wherein the application comprises a gaming application, and wherein the user interface associated with the application context comprises a different set of gaming functions than the user interface associated with the at least one other application context.
3. The method of claim 1, wherein the one or more attributes of the device are part of a set of context-specific criteria used to determine the at least one other application context.
4. The method of claim 3, wherein the context-specific criteria further comprises one or more attributes of the application or one or more attributes of a user of the device.
5. The method of claim 1, wherein the navigating occurs in response to the identifying and independently of additional input from a user.
6. A computer-implemented method, comprising:
while displaying, on a computing device, a user interface associated with a first game context of a game application, receiving an indication of a voice input of a trigger word;
filtering a set of game contexts using one or more filtering criteria for the game application to generate a set of one or more available game contexts, wherein the filtering criteria comprise one or more attributes of the computing device, the one or more attributes comprising processing capabilities of the computing device;
causing the set of one or more available game contexts to be displayed as part of the user interface associated with the first game context; and
navigating to a user interface associated with a second game context of the game application in response to an indication of a voice selection of one of the one or more available game contexts within a particular time interval after the voice input of the trigger word.
7. The method of claim 6, wherein one of the first game context or the second game context is associated with a game customization function, and wherein the other of the first game context or the second game context is associated with a game play function.
8. The method of claim 6, wherein the filtering criteria are based on one or more of attributes of the computing device or attributes of a user of the computing device.
9. The method of claim 6, wherein the filtering criteria comprise a network connection status of the computing device.
10. The method of claim 6, wherein the filtering criteria include one or more of an account membership rating associated with a user of the gaming application, an access permission associated with the user, or an age of the user.
11. A computer-implemented system, comprising:
means for receiving, while a user interface associated with a first game context of a game application is displayed on a computing device, an indication of a voice input of a trigger word;
means for filtering a set of game contexts using one or more filtering criteria for the game application to generate a set of one or more available game contexts, wherein the filtering criteria comprise one or more attributes of the computing device, the one or more attributes comprising processing capabilities of the computing device;
means for causing the set of one or more available game contexts to be displayed as part of the user interface associated with the first game context; and
means for navigating to a user interface associated with a second game context of the game application in response to an indication of a voice selection of one of the one or more available game contexts within a particular time interval after the voice input of the trigger word.
12. A computer-implemented method, comprising:
filtering a set of context options for an application based on a bandwidth of an existing network connection of a device on which the application is executing;
generating a set of available context options for the application based on the filtering, the set of available context options including a subset of the set of context options; and
making one or more of the set of available context options selectable by voice command to navigate from a user interface associated with a first context of the application to a user interface associated with a second context of the application.
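Taken alone, the bandwidth-based filtering step of claim 12 reduces to selecting the subset of context options whose network requirements fit the bandwidth of the existing connection. The option names and minimum-bandwidth thresholds below are purely illustrative assumptions, not part of the claims:

```python
def filter_by_bandwidth(context_options, bandwidth_kbps):
    """Return the subset of context options usable at the given bandwidth.

    Each option carries a hypothetical minimum-bandwidth requirement;
    offline options (requirement 0) always pass the filter.
    """
    return [name for name, min_kbps in context_options
            if bandwidth_kbps >= min_kbps]

# Illustrative option set: (name, minimum bandwidth in kbps).
OPTIONS = [
    ("single_player", 0),     # offline play: no connection needed
    ("garage", 0),            # local vehicle customization
    ("multiplayer", 1500),    # online races need some headroom
    ("spectate_hd", 4000),    # streaming other players' races
]
```

On a 2000 kbps connection this yields the offline options plus multiplayer, while the high-bandwidth spectator option is withheld from the voice-selectable menu.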
HK13109201.1A 2011-10-10 2013-08-06 Speech recognition for context switching HK1181888B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/270,018 2011-10-10

Publications (2)

Publication Number Publication Date
HK1181888A HK1181888A (en) 2013-11-15
HK1181888B true HK1181888B (en) 2017-11-24
