US20150326985A1 - Hand-worn device for surface gesture input - Google Patents
- Publication number
- US20150326985A1 (U.S. application Ser. No. 14/273,238)
- Authority
- US
- United States
- Prior art keywords
- hand
- audio
- accelerometer
- audio signal
- worn device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/014—Hand-worn input/output arrangements, e.g. data gloves
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/325—Power saving in peripheral device
- G06F1/3259—Power saving in cursor control device, e.g. mouse, joystick, trackball
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/3287—Power saving characterised by the action undertaken by switching off individual functional units in the computer system
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0354—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/142—Image acquisition using hand-held instruments; Constructional details of the instruments
- G06V30/1423—Image acquisition using hand-held instruments; Constructional details of the instruments the instrument generating sequences of position coordinates corresponding to handwriting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/18—Extraction of features or characteristics of the image
- G06V30/1801—Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes or intersections
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/22—Character recognition characterised by the type of writing
- G06V30/228—Character recognition characterised by the type of writing of three-dimensional handwriting, e.g. writing in the air
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J50/00—Circuit arrangements or systems for wireless supply or distribution of electric power
- H02J50/001—Energy harvesting or scavenging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R29/00—Monitoring arrangements; Testing arrangements
- H04R29/004—Monitoring arrangements; Testing arrangements for microphones
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/033—Indexing scheme relating to G06F3/033
- G06F2203/0331—Finger worn pointing device
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J50/00—Circuit arrangements or systems for wireless supply or distribution of electric power
- H02J50/10—Circuit arrangements or systems for wireless supply or distribution of electric power using inductive coupling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2420/00—Details of connection covered by H04R, not provided for in its groups
- H04R2420/07—Applications of wireless loudspeakers or wireless microphones
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- Gesture-based user interaction allows a user to control an electronic device by making gestures such as writing letters to spell words, swatting a hand to navigate a selector, or directing a remote controller to direct a character in a video game.
- One way to provide for such interaction is to use a device such as a mobile phone or tablet computing device equipped with a touch screen for two-dimensional (2-D) touch input on the touch screen. However, this approach has the disadvantage that the screen is typically occluded while it is being touched, and devices that include touch screens are also comparatively expensive and somewhat large in their form factors.
- Another way is to use depth cameras to track a user's movements and enable three-dimensional (3-D) gesture input to a system having an associated display, and such functionality has been provided in certain smart televisions and game consoles.
- One drawback of such three-dimensional gesture tracking devices is that they have high power requirements, which presents challenges for implementation in portable computing devices; another drawback is that they typically require a fixed camera to observe the scene, which also hinders portability. For these reasons, there are challenges to adopting touch screens and 3-D gesture tracking technologies as input devices for computing devices with ultra-portable form factors, including wearable computing devices.
- Various embodiments are disclosed herein that relate to energy efficient gesture input on a surface. For example, one disclosed embodiment provides a hand-worn device that may include a microphone configured to capture an audio input and generate an audio signal, an accelerometer configured to capture a motion input and generate an accelerometer signal, and a controller comprising a processor and memory.
- The controller may be configured to detect a wake-up motion input based on the accelerometer signal.
- In response, the controller may wake from a low-power sleep mode in which the accelerometer is turned on and the microphone is turned off and enter a user interaction interpretation mode in which the microphone is turned on.
- Then, the controller may contemporaneously receive the audio signal and the accelerometer signal and decode strokes based on the audio signal and the accelerometer signal.
- Finally, the controller may detect a period of inactivity based on the audio signal and return to the low-power sleep mode.
- FIG. 1 is a schematic view of a hand-worn device for energy efficient gesture input on a surface, according to one embodiment.
- FIG. 2 illustrates an example use of the hand-worn device of FIG. 1 to input a gesture on the surface.
- FIG. 3 is a flowchart illustrating an energy efficient method for capturing gesture input on a surface, according to one embodiment.
- FIG. 4 is a flowchart illustrating substeps of a step of the method of FIG. 3 for decoding strokes.
- FIG. 5 illustrates an example use of embodiments of the hand-worn device as a ring or wristband.
- FIG. 6 is a simplified schematic illustration of an embodiment of a computing system within which the hand worn device of FIG. 1 may be utilized.
- FIG. 7 illustrates a hierarchical gesture classification strategy for disambiguating different gestures.
- FIG. 1 shows a schematic view of a hand-worn device 10 for energy efficient gesture input on a surface.
- the hand-worn device 10 may include sensors 12 which may include a microphone 14 configured to capture an audio input and generate an audio signal based thereon, and an accelerometer 16 configured to capture a motion input and generate an accelerometer signal based thereon.
- the hand-worn device may also include a controller 18 comprising a processor 20 and memory 22 , and the controller 18 may be configured to switch the hand-worn device 10 between various operating modes to maintain energy efficiency.
- the hand-worn device 10 may operate in a low-power sleep mode in which the accelerometer 16 is turned on and the microphone 14 is turned off.
- the accelerometer 16 may itself operate in a low-power motion-detection mode, which may include only detecting motion input above a predetermined threshold.
- the controller 18 may then detect a wake-up motion input of a user based on the accelerometer signal from the accelerometer 16 .
- the wake-up motion input may be from a wake-up gesture of the user such as a tap that exceeds a predetermined threshold in the accelerometer signal. Multiple taps or other suitable gestures may be used to prevent accidental waking by incidental user motions.
- the controller 18 may wake from the low-power sleep mode and enter a user interaction interpretation mode in which the microphone 14 is turned on and the accelerometer 16 is fully active.
- the controller 18 may contemporaneously receive the audio signal from the microphone 14 and the accelerometer signal from the accelerometer 16 . The controller 18 may then execute a stroke decoder 24 to decode strokes based on the audio signal and the accelerometer signal. Once the user has finished gesturing, the controller 18 may detect a period of inactivity based on the audio signal from the microphone 14 and return to the low-power sleep mode.
- the period of inactivity may be preset, such as 30 seconds, 1 minute, or 5 minutes, may be a user input period of time, or may be a period set through machine learning techniques that analyze patterns of accelerometer and audio signals and the periods of inactivity that are likely to follow.
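- As a concrete illustration of this duty cycling, the sketch below shows one way the sleep/wake logic described above could be structured in software. It is not the patented implementation; the class, the sensor interfaces, the wake threshold, and the timeout are illustrative assumptions.

```python
import time

WAKE_TAP_THRESHOLD_G = 2.5   # accelerometer magnitude treated as a wake-up tap (assumed value)
INACTIVITY_TIMEOUT_S = 30.0  # return to sleep after this long without surface audio (assumed value)

class HandWornController:
    """Duty-cycles between a low-power sleep mode and a user interaction interpretation mode."""

    def __init__(self, accelerometer, microphone, stroke_decoder):
        self.accel = accelerometer     # assumed interface: read_g() -> (x, y, z) in g
        self.mic = microphone          # assumed interface: set_enabled(bool), read_envelope() -> float
        self.decoder = stroke_decoder  # assumed interface: feed(envelope_value, accel_sample)
        self.mode = "sleep"
        self.mic.set_enabled(False)    # sleep mode: accelerometer on, microphone off

    def run_once(self):
        if self.mode == "sleep":
            x, y, z = self.accel.read_g()
            if (x * x + y * y + z * z) ** 0.5 > WAKE_TAP_THRESHOLD_G:
                self.mic.set_enabled(True)         # enter user interaction interpretation mode
                self.mode = "interact"
                self._last_activity = time.monotonic()
        else:
            envelope = self.mic.read_envelope()
            self.decoder.feed(envelope, self.accel.read_g())   # decode strokes contemporaneously
            if envelope > 0.0:                                 # audio energy: skin still dragging
                self._last_activity = time.monotonic()
            elif time.monotonic() - self._last_activity > INACTIVITY_TIMEOUT_S:
                self.mic.set_enabled(False)                    # back to low-power sleep mode
                self.mode = "sleep"
```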
- Decoding strokes on the hand-worn device 10 may involve breaking gestures down into simple geometric patterns such as orthogonal or diagonal line segments and half circles.
- the strokes may make up letters or context-dependent symbols, etc.
- the stroke decoder 24 may comprise a stroke classifier which may be, for example, a support vector machine (SVM) classifier, and the SVM classifier may save energy by only looking for a predetermined set of strokes.
- the stroke decoder 24 may be programmed to recognize taps and swipes based on a threshold of the accelerometer signal and a length of the audio signal. Further, orthogonal and diagonal scrolls are detectable, depending on the context of the gesture input, as explained below.
- Device 10 may be configured to recognize more complicated gestures as well, although recognition of more complicated gestures may require a concomitant increase in power consumed during disambiguation and/or degrade performance.
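- To make the stroke classification step concrete, the sketch below trains a small linear SVM over a handful of cheap features, in the spirit of the predetermined stroke set described above. It is not the device's classifier: the feature set, stroke labels, and use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

STROKE_LABELS = ["line_up", "line_right", "diag_up_right", "half_circle_cw"]  # illustrative stroke set

def stroke_features(accel_xy, envelope):
    """Cheap per-stroke features: net direction, straightness, and audio-envelope length/energy."""
    accel_xy = np.asarray(accel_xy, dtype=float)   # (N, 2) in-plane accelerometer samples
    envelope = np.asarray(envelope, dtype=float)   # (M,) audio envelope samples
    net = accel_xy.sum(axis=0)                     # crude net direction of the stroke
    angle = np.arctan2(net[1], net[0])
    straightness = np.linalg.norm(net) / (np.linalg.norm(accel_xy, axis=1).sum() + 1e-9)
    return np.array([np.cos(angle), np.sin(angle), straightness,
                     len(envelope), envelope.mean(), envelope.max()])

def train_stroke_classifier(feature_rows, label_indices):
    clf = SVC(kernel="linear")                     # a small linear SVM keeps evaluation cheap
    clf.fit(np.vstack(feature_rows), np.asarray(label_indices))
    return clf

def decode_stroke(clf, accel_xy, envelope):
    return STROKE_LABELS[int(clf.predict(stroke_features(accel_xy, envelope)[None, :])[0])]
```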
- the hand-worn device 10 may include an audio processing subsystem 26 with a band-pass filter 28 configured to filter the audio signal, an amplifier 30 configured to amplify the audio signal, and an envelope detector 32 such as, for example, a threshold based envelope detector, configured to generate an audio envelope from the audio signal.
- the audio processing subsystem 26 may transform the audio signal into an audio envelope by filtering the audio signal, amplifying the audio signal, and generating an audio envelope from the audio signal.
- the controller 18 may then decode strokes with the stroke decoder 24 based on the audio envelope of the audio signal, rather than the audio signal itself, and the accelerometer signal.
- the audio processing subsystem 26 may be formed separately from the microphone 14 , or one or more parts within the audio processing subsystem 26 may be incorporated into the microphone 14 , for example. Additionally, more than one band-pass filter 28 and more than one amplifier 30 may be included.
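- The patent describes the band-pass filter, amplifier, and threshold-based envelope detector as stages of the audio processing subsystem; the sketch below shows an equivalent software pipeline purely for intuition. The sample rate, pass band, gain, and threshold are assumed values, not figures from the disclosure.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS_HZ = 8000           # assumed microphone sample rate
BAND_HZ = (800, 3000)  # assumed pass band for skin-on-surface friction sound
GAIN = 20.0            # stands in for the analog amplifier stage
THRESHOLD = 0.05       # threshold-based envelope detection: values below this count as silence

def audio_envelope(audio, fs=FS_HZ, window_ms=20):
    """Band-pass, amplify, rectify, smooth, and threshold the raw audio into an envelope."""
    b, a = butter(4, BAND_HZ, btype="bandpass", fs=fs)
    filtered = lfilter(b, a, np.asarray(audio, dtype=float))
    rectified = np.abs(GAIN * filtered)
    win = max(1, int(fs * window_ms / 1000))
    smoothed = np.convolve(rectified, np.ones(win) / win, mode="same")
    return np.where(smoothed > THRESHOLD, smoothed, 0.0)  # keeps length and amplitude, drops the noise floor
```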
- Gesture input may take place in many different situations with different surroundings, as well as on a variety of different types of surfaces.
- The audio input detected by the microphone 14 may be the sound of skin dragging across a surface, as one example. Sound may be produced in the same frequency band regardless of the composition of the surface; thus the surface may be composed of wood, plastic, paper, glass, cloth, skin, etc. As long as the surface generates enough friction when rubbed with skin to produce an audio input detectable by the microphone 14 , virtually any sturdy surface material may be used. Additionally, any suitable surface that is close at hand may be used, such that it may not be necessary to gesture on only one specific surface, increasing the utility of the hand-worn device 10 in a variety of environments.
- The audio input, being thus produced by skin dragging across the surface, may be used to determine when the user is gesturing. However, the audio input may not always be easily distinguished from ambient noise.
- the audio processing subsystem 26 may filter the audio signal with at least one band-pass filter 28 to remove ambient noise and leave only the audio signal due to skin dragging across the surface. Generating an audio envelope of the audio signal may keep the length and amplitude of the audio signal for decoding strokes while discarding data that may not be used, both simplifying computation and saving the hand-worn device 10 energy.
- the hand-worn device 10 may further comprise a battery 34 configured to store energy and energy harvesting circuitry 36 including an energy harvesting coil 38 .
- the energy harvesting circuitry 36 may include a capacitor.
- the energy harvesting circuitry 36 may be configured to siphon energy from a device other than the hand-worn device via a wireless energy transfer technique such as near-field communication (NFC) or an inductive charging standard, and charge the battery with the siphoned energy.
- the energy may be siphoned from a mobile phone with NFC capabilities, for example. Simply holding the mobile phone may put the hand-worn device 10 in close proximity to an NFC chip in the mobile phone, allowing the hand-worn device 10 to charge the battery 34 throughout the day through natural actions of the user and without requiring removal of the hand-worn device 10 .
- the hand-worn device 10 may utilize a charging pad or other such charging device to charge the battery 34 .
- If the user does not wish to wear the hand-worn device 10 at night, such a charging pad may be used by placing the hand-worn device 10 on it while the user sleeps, for example. However, removal may not be necessary.
- the charging pad may be placed under a mouse or other such input device while the user operates a personal computer, allowing the hand-worn device 10 to be charged while the user works.
- the hand-worn device 10 may further comprise a radio 40 and the controller 18 may be configured to send a gesture packet 42 to a computing device 44 via the radio 40 .
- Typically, the radio includes a wireless transceiver configured for two-way communication, which enables acknowledgments of transmissions to be sent from the computing device back to the hand-worn device.
- a radio 40 including a one way transmitter may be used.
- the computing device may be the device from which the hand-worn device siphons energy, but energy may also be siphoned from a separate device.
- the gesture packet 42 may comprise the decoded strokes and inter-stroke information.
- The inter-stroke information may comprise inter-stroke duration, which is the time between decoded strokes, and data indicating whether a user remains in contact with the surface or does not remain in contact with the surface between decoded strokes. These two factors may be taken into account when assembling the decoded strokes into different letters, for example. One letter may be gestured with two consecutive strokes without lifting, while another may be gestured with the same two strokes, but with the user lifting off the surface and repositioning for the second stroke.
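- To see why a gesture packet is much cheaper to transmit than raw signals, consider a hypothetical byte layout carrying the decoded strokes together with inter-stroke duration and a lift flag. The layout below is an assumption for illustration, not the format used by the device.

```python
import struct

# Assumed layout: 1-byte stroke count, then per stroke:
#   1 byte stroke id, 2 bytes inter-stroke duration in ms, 1 byte lift flag (1 = finger left the surface).
def pack_gesture(strokes):
    payload = struct.pack("B", len(strokes))
    for stroke_id, inter_stroke_ms, lifted in strokes:
        payload += struct.pack("<BHB", stroke_id, inter_stroke_ms, 1 if lifted else 0)
    return payload

def unpack_gesture(payload):
    (count,), offset, strokes = struct.unpack_from("B", payload), 1, []
    for _ in range(count):
        stroke_id, inter_stroke_ms, lifted = struct.unpack_from("<BHB", payload, offset)
        strokes.append((stroke_id, inter_stroke_ms, bool(lifted)))
        offset += 4
    return strokes

# A two-stroke letter packed this way occupies 9 bytes, versus kilobytes of raw audio and accelerometer samples.
```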
- the computing device may comprise an application programming interface (API) 46 configured to receive the gesture packet 42 and decode an application input corresponding to the gesture packet 42 .
- Sending a gesture packet 42 rather than raw signals may greatly reduce the amount of energy the hand-worn device 10 expends, since the gesture packet 42 may be much smaller than the corresponding audio signal and accelerometer signal.
- the application input may be letters, symbols, or commands, for example. Commands may include scrolling, changing pages, zooming in or out, cycling through displayed media, selecting, changing channels, and adjusting volume, among others.
- the API 46 may provide context to the stroke decoder 24 such that the stroke decoder 24 may only recognize, for example, strokes of letters for text entry or scrolls for scrolling through displayed pages. Such gestures may be difficult to disambiguate without context from the API 46 .
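- On the receiving side, the API's context-gated decoding can be pictured as a lookup keyed by the stroke sequence and the lift pattern between strokes. The tables and stroke-to-letter mappings below are invented for illustration; only the idea of switching interpretation by context comes from the text above.

```python
# Hypothetical API-side tables: the same strokes can mean different things in different contexts.
TEXT_ENTRY_TABLE = {
    # (stroke ids, lifted-between-strokes flags) -> letter
    (("diag_up", "diag_down", "line_right"), (False, True)): "A",
    (("line_down", "half_circle_cw"), (False,)): "P",
}
NAVIGATION_TABLE = {
    ("scroll_down",): "page_down",
    ("swipe_left",): "previous_slide",
    ("tap",): "select",
}

def decode_application_input(gesture, context):
    """gesture: (stroke_ids, lift_flags); context: 'text' or 'navigation', supplied by the foreground app."""
    stroke_ids, lift_flags = gesture
    if context == "text":
        return TEXT_ENTRY_TABLE.get((tuple(stroke_ids), tuple(lift_flags)))
    return NAVIGATION_TABLE.get(tuple(stroke_ids))

print(decode_application_input((["diag_up", "diag_down", "line_right"], [False, True]), "text"))  # -> A
```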
- the computing device 44 may be any of a wide variety of devices for different uses.
- the computing device 44 may be a device that controls a television.
- the hand-worn device 10 may receive gesture input that corresponds to application input to change the channel on the television or adjust the volume.
- the surface may be a couch arm or the user's own leg.
- the computing device 44 may control a television and allow a user to stream movies.
- the hand-worn device 10 may receive a swipe or scroll application input to browse through movies, or it may allow the user to input letters to search by title, etc.
- the computing device 44 may control display of a presentation. The user may control slides without holding onto a remote which is easily dropped.
- the computing device 44 may allow a user to access a plurality of devices. In such a situation, the user may be able to, for example, turn on various appliances in a home, by using one hand-worn device 10 . Alternatively, the user may be able to switch between devices that share a common display, for example.
- The computing device 44 may control a head-mounted display (HMD) or be a watch or mobile phone, where space for input on a built-in surface is limited. For instance, if the computing device 44 is a mobile phone, it may ring at an inopportune time for the user. The user may frantically search through pockets and bags to find the mobile phone and silence the ringer. However, by using the hand-worn device 10 , the user may easily interact with the mobile phone from a distance. In such instances, the hand-worn device 10 may be constantly available due to being worn by the user.
- FIG. 2 illustrates an example use of the hand-worn device 10 to input a gesture on the surface, utilizing the hardware and software components of FIG. 1 .
- the hand-worn device 10 is a ring, which may be the size and shape of a typical ring worn as jewelry. However, other manifestations may be possible, such as a watch, wristband, fingerless glove, or other hand-worn device.
- the user is gesturing the letter “A” with his finger on a table, providing a gesture input 48 .
- In an instance where the user does not have a finger or otherwise cannot gesture with a finger, another digit or an appendage, for example, may serve to enact the gesture.
- In order for the microphone 14 to capture an audio input 50 , skin is typically dragged across a surface, and in order for the accelerometer 16 to capture a motion input 52 , the accelerometer 16 is typically placed near enough to where the user touches the surface to provide a useable accelerometer signal 54 .
- the accelerometer 16 may be further configured to determine a tilt of the hand-worn device 10 after detecting the wake-up motion input.
- a given surface may not be perfectly horizontal, or the user may slightly tilt her finger, for example.
- Tilt determination may be used to convert X-, Y-, and Z-components of the accelerometer signal 54 to X-, Y-, and Z-components with respect to an interacting plane of the surface.
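- One common way to realize such a tilt correction, assumed here purely for illustration, is to estimate the device's orientation from the gravity direction measured while the finger rests on the surface and then rotate subsequent accelerometer samples into the interacting plane.

```python
import numpy as np

def tilt_rotation(gravity_sample):
    """Build a rotation that maps the measured gravity direction onto the device Z axis."""
    g = np.asarray(gravity_sample, dtype=float)
    g = g / np.linalg.norm(g)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)
    c = float(np.dot(g, z))
    if np.allclose(v, 0.0):                        # already level, or flat upside down
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)    # Rodrigues' formula for the aligning rotation

def to_surface_plane(accel_samples, gravity_sample):
    """Re-express accelerometer samples so X/Y lie in the interacting plane and Z is normal to it."""
    R = tilt_rotation(gravity_sample)
    return np.asarray(accel_samples, dtype=float) @ R.T
```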
- the microphone 14 may generate the audio signal 56 , which may then be received by the audio processing subsystem 26 to generate the audio envelope 58 .
- the audio envelope 58 may be received by the stroke decoder 24 of the controller 18 , along with the accelerometer signal 54 .
- the stroke decoder 24 may decode strokes based on the audio envelope 58 and the accelerometer signal 54 and generate a gesture packet 42 .
- the gesture packet 42 may be sent to the computing device 44 , in this case a personal computer, where the API 46 may decode an application input 60 corresponding to the gesture packet 42 .
- the application input 60 includes displaying the letter “A.”
- the controller 18 may be further configured to receive feedback 62 from the user indicating that the application input 60 is correct or incorrect.
- the feedback 62 is received by selecting or not selecting the cancel option X displayed by the computing device 44 .
- the feedback 62 may be received by the hand-worn device 10 by shaking the hand-worn device 10 , etc., to cancel the recognition phase and start gesture input again or to select a different recognition candidate.
- the controller 18 may apply a machine learning algorithm to accelerometer samples of the accelerometer signal 54 to statistically identify accelerometer samples 54 A that are likely to be included in the decoded strokes, and eliminate other accelerometer samples that are unlikely to be included. More generally, based on the feedback 62 , the controller 18 may adjust parameters 64 of the stroke decoder 24 .
- the stroke decoder 24 may use only the most relevant accelerometer samples 54 A along with the audio envelope 58 when decoding strokes. This may allow the stroke decoder 24 to use simple arithmetic operations for low-power stroke classification and avoid using techniques such as dynamic time warping and cross correlations that may use complex mathematical operations and/or a greater number of accelerometer samples, which may lead to a higher energy consumption.
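- The feedback-driven selection of relevant accelerometer samples could, for example, score each sample index by how well it separates the stroke classes across labeled examples and keep only the top-scoring indices. The Fisher-style ratio below is an assumption for illustration, not the method claimed in the patent.

```python
import numpy as np

def select_informative_samples(examples, labels, keep=8):
    """Keep the accelerometer sample indices that best separate the stroke classes."""
    X = np.asarray(examples, dtype=float)  # (num_strokes, num_samples_per_stroke)
    y = np.asarray(labels)
    classes = np.unique(y)
    class_means = np.stack([X[y == c].mean(axis=0) for c in classes])
    between = class_means.var(axis=0)                                         # spread between classes
    within = np.stack([X[y == c].var(axis=0) for c in classes]).mean(axis=0) + 1e-9
    scores = between / within
    return np.argsort(scores)[::-1][:keep]   # indices the stroke decoder should keep using
```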
- the hand-worn device 10 may be further configured to consume no more than 1.5 mA and preferably no more than 1.2 mA in the user interaction interpretation mode and no more than 1.0 μA and preferably no more than 0.8 μA in the low-power sleep mode.
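- A back-of-the-envelope calculation shows what those current figures imply for battery life. Only the current draws come from the text above; the battery capacity and daily usage are assumptions.

```python
ACTIVE_MA = 1.2          # user interaction interpretation mode (upper bound from the text)
SLEEP_MA = 0.0008        # 0.8 microamps in low-power sleep mode
BATTERY_MAH = 10.0       # assumed ring-sized battery capacity
ACTIVE_MIN_PER_DAY = 30  # assumed daily gesturing time

active_h = ACTIVE_MIN_PER_DAY / 60.0
avg_ma = (ACTIVE_MA * active_h + SLEEP_MA * (24.0 - active_h)) / 24.0
print(f"average draw: {avg_ma:.4f} mA, battery life: {BATTERY_MAH / avg_ma / 24.0:.1f} days")
# Under these assumptions the average draw is about 0.026 mA, i.e. roughly 16 days per 10 mAh charge.
```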
- FIG. 3 illustrates a flowchart of an energy efficient method, method 300 , for capturing gesture input on a surface with a hand-worn device.
- the hand-worn device may be a ring, watch, wristband, glove, or other hand-worn device, for example.
- the following description of method 300 is provided with reference to the software and hardware components of the hand-worn device 10 and computing device 44 described above and shown in FIGS. 1 and 2 . It will be appreciated that method 300 may also be performed in other contexts using other suitable hardware and software components.
- the method 300 may include detecting a wake-up motion input based on an accelerometer signal from an accelerometer.
- the method 300 may include waking from a low-power sleep mode in which the accelerometer is turned on and a microphone is turned off and entering a user interaction interpretation mode in which the microphone is turned on.
- the hand worn device may be configured to begin detecting a tilt of the hand worn device at the accelerometer.
- the method 300 may include contemporaneously receiving an audio signal from the microphone and the accelerometer signal.
- the method 300 may include decoding strokes based on the audio signal and the accelerometer signal.
- the method 300 may include detecting a period of inactivity based on the audio signal, which may be of the length described above, input by a user, or learned over time by the hand worn device.
- the method 300 may include returning to the low-power sleep mode. After 312 the method 300 may include ending or continuing to operate in a sleep-wake cycle by returning to 302 .
- the hand-worn device may further comprise a battery and energy harvesting circuitry including an energy harvesting coil, and thus at any point throughout method 300 , the method may include siphoning energy from a device other than the hand-worn device via a wireless energy transfer technique such as near-field communication (NFC) at the energy harvesting circuitry and charging the battery with the siphoned energy.
- the energy may be siphoned from a device such as an NFC capable smartphone or charging pad, for example.
- Combining the low power consumption of the device with energy siphoning abilities may allow the user to wear the hand-worn device at all times without removing it for charging. This may reduce the likelihood of dropping, losing, or forgetting the hand-worn device, incorporating the use and presence of the hand-worn device into daily life.
- FIG. 4 is a flowchart illustrating detailed substeps of step 308 , decoding strokes, of method 300 of FIG. 3 .
- the method 300 may include filtering the audio signal with a band-pass filter of an audio processing subsystem of the hand-worn device.
- the audio processing subsystem may further comprise at least one amplifier and an envelope detector.
- the method 300 may include amplifying the audio signal with the amplifier.
- the method 300 may include generating an audio envelope from the audio signal with an envelope detector.
- the method 300 may include decoding strokes based on the audio envelope of the audio signal and the accelerometer signal.
- the method 300 may include sending a gesture packet to a computing device via a radio, the gesture packet comprising the decoded strokes and inter-stroke information.
- the inter-stroke information may comprise inter-stroke duration and data indicating whether a user remains in contact with the surface or does not remain in contact with the surface between decoded strokes.
- the method 300 may include receiving the gesture packet at an application programming interface (API) of the computing device and decoding an application input corresponding to the gesture packet at the API.
- the step 308 of method 300 may end. However, it may also proceed to 332 to begin a feedback process.
- the method 300 may include receiving feedback from the user indicating that the application input is correct or incorrect.
- the method 300 may include, based on the feedback, adjusting parameters of a stroke decoder.
- the method 300 may include returning to 326 to decode strokes more efficiently than before receiving feedback.
- method 300 is provided by way of example and is not meant to be limiting. Therefore, it is to be understood that method 300 may include additional and/or alternative steps than those illustrated in FIGS. 3 and 4 . Further, it is to be understood that method 300 may be performed in any suitable order. Further still, it is to be understood that one or more steps may be omitted from method 300 without departing from the scope of this disclosure.
- FIG. 5 illustrates example embodiments of the hand-worn device as a ring or wristband, though it may also be another hand-worn device such as a fingerless glove, for example.
- the user may wear the ring on a finger or the wristband on a wrist.
- the surface upon which the user is gesturing is a countertop in this example.
- the wide arrow indicates the movement of the user dragging her finger along the countertop to provide a gesture input, and her entire hand, including the hand-worn device, may move in nearly or exactly the same manner as her finger, such that the accelerometer in the hand-worn device may generate an accelerometer signal with accuracy.
- the friction generated between the countertop and the user's finger may produce sound waves as visually represented in FIG. 5 .
- the sound waves may serve as an audio input and the thin arrows may demonstrate the microphone in the hand-worn device capturing the audio input.
- FIG. 7 illustrates a hierarchical gesture classification strategy for disambiguating different gestures.
- Disambiguating gestures in tiers in this manner may allow for higher accuracy in detecting and interpreting gestures as well as reduced energy consumption.
- At each tier, disambiguation is performed to eliminate gesture candidates and narrow the field of possible matching gestures.
- In this way, the total processing power consumed for matching a gesture may be reduced, since possible candidates are eliminated at each fork in the hierarchy.
- Not only may gestures be broken down into strokes and reassembled as letters, characters, shapes, symbols, etc., but gestures such as scrolls, swipes, and taps may also be decoded with different classifiers.
- the start of a gesture may be detected by the audio envelope indicating that skin is moving across a surface. From here, the magnitude of the Z-component of the accelerometer signal may be compared to a threshold value to classify the gesture as either a hard landing or a soft landing.
- a soft landing may be determined if the Z-component is under the threshold.
- a hard landing may be determined if the Z-component is equal to or over the threshold. The types of landing may be classified by a landing classifier of the stroke decoder.
- Context from the API may be used to further classify the gesture with a soft landing into either a stroke or series of strokes at X08 or a scroll at X10.
- the context may be, for example, that the API will accept text input (stroke), invoking the stroke classifier of the stroke decoder, or page navigation (scroll), invoking a scroll classifier of the stroke decoder. Any or all of the landing classifier, the stroke classifier, and the scroll classifier may be an SVM classifier, for example. If the gesture is determined to be a scroll, the beginning of the gesture may be a short nudge. After the nudge is detected, the remainder of the gesture may be interpreted in real-time such that different directions of scrolling are determined based upon the accelerometer signal.
- a gesture with a hard landing may be further disambiguated by a swipe-tap classifier using the length of the audio envelope.
- a tap may be determined by a very short audio envelope, i.e., one that is under a threshold.
- a swipe may be determined by a longer audio envelope, i.e., one that is greater than or equal to the threshold.
- a swipe may be further disambiguated by direction according to the accelerometer signal. In this manner, a variety of gesture inputs may be disambiguated by traversing a tiered classifier as shown in FIG. 7 , thus conserving processor time and power consumption as compared to attempting to disambiguate a wide class of gestures in a single step.
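- The tiered disambiguation of FIG. 7 amounts to a small decision tree; the sketch below mirrors that structure with assumed thresholds and stub lower-tier classifiers standing in for the landing, stroke, scroll, and swipe-tap classifiers named above.

```python
def classify_gesture(z_peak, envelope, accel_xy, context,
                     hard_landing_threshold=1.8, tap_envelope_len=5):
    """Tiered disambiguation in the spirit of FIG. 7: landing type first, then context or envelope length."""
    if z_peak >= hard_landing_threshold:             # hard landing: tap or swipe
        if len(envelope) < tap_envelope_len:         # very short audio envelope -> tap
            return ("tap",)
        return ("swipe", direction(accel_xy))        # longer envelope -> swipe, then direction
    if context == "text":                            # soft landing: API context picks the classifier
        return ("stroke", stroke_classifier(accel_xy, envelope))
    return ("scroll", direction(accel_xy))

def direction(accel_xy):
    dx = sum(s[0] for s in accel_xy)
    dy = sum(s[1] for s in accel_xy)
    return ("right" if dx > 0 else "left") if abs(dx) >= abs(dy) else ("up" if dy > 0 else "down")

def stroke_classifier(accel_xy, envelope):
    return "line_right"   # placeholder for the trained SVM stroke classifier
```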
- the above described systems and methods may be used to provide energy efficient gesture input on a surface using a hand-worn device.
- the hand-worn device may be adapted in different embodiments to serve a variety of purposes. This approach has the potential advantages of constant availability, low power consumption, battery charging with or without removing the hand-worn device, accurate capture of user intent, and versatility.
- the methods and processes described herein may be tied to a computing system of one or more computing devices or hand-worn devices.
- such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product.
- FIG. 6 schematically shows a non-limiting embodiment of a computing system 600 that can enact one or more of the methods and processes described above.
- Hand-worn device 10 and computing device 44 may take the form of computing system 600 .
- Computing system 600 is shown in simplified form.
- Computing system 600 may take the form of one or more personal computers, server computers, tablet computers, home-entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices (e.g., smartphone), hand-worn devices, and/or other computing devices.
- Computing system 600 includes a logic subsystem 604 and a storage subsystem 608 .
- Computing system 600 may optionally include a display subsystem 612 , sensor subsystem 620 , input subsystem 622 , communication subsystem 616 , and/or other components not shown in FIG. 6 .
- Logic subsystem 604 includes one or more physical devices configured to execute instructions.
- the logic subsystem may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs.
- Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
- the logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic subsystems configured to execute hardware or firmware instructions. Processors of the logic subsystem may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic subsystem optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
- Storage subsystem 608 includes one or more physical devices configured to hold instructions executable by the logic subsystem to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage subsystem 608 may be transformed—e.g., to hold different data.
- Storage subsystem 608 may include removable devices 624 and/or built-in devices.
- Storage subsystem 608 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others.
- Storage subsystem 608 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
- storage subsystem 608 includes one or more physical devices.
- aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration.
- logic subsystem 604 and storage subsystem 608 may be integrated together into one or more hardware-logic components.
- Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
- The terms “module” and “program” may be used to describe an aspect of computing system 600 implemented to perform a particular function.
- a module or program may be instantiated via logic subsystem 604 executing instructions held by storage subsystem 608 . It will be understood that different modules, programs, and/or subsystems may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or subsystem may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc.
- The terms “module” and “program” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
- display subsystem 612 may be used to present a visual representation of data held by storage subsystem 608 .
- This visual representation may take the form of a graphical user interface (GUI).
- the state of display subsystem 612 may likewise be transformed to visually represent changes in the underlying data.
- Display subsystem 612 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 604 and/or storage subsystem 608 in a shared enclosure, or such display devices may be peripheral display devices.
- communication subsystem 616 may be configured to communicatively couple computing system 600 with one or more other computing devices.
- Communication subsystem 616 may include wired and/or wireless communication devices compatible with one or more different communication protocols.
- the communication subsystem may be configured for communication via a radio, a wireless telephone network, or a wired or wireless local- or wide-area network.
- the communication subsystem may allow computing system 600 to send and/or receive messages to and/or from other devices via a network such as the Internet.
- sensor subsystem 620 may include one or more sensors configured to sense different physical phenomena (e.g., visible light, infrared light, sound, acceleration, orientation, position, etc.). Sensor subsystem 620 may be configured to provide sensor data to logic subsystem 604 , for example.
- input subsystem 622 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller.
- the input subsystem may comprise or interface with selected natural user input (NUI) componentry.
- Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board.
- NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
- Computing system 600 may function as computing device 44 described above and shown in FIGS. 1 and 2 , and the hand-worn device 10 may be an input device of input subsystem 622 .
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Hardware Design (AREA)
- Computing Systems (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Acoustics & Sound (AREA)
- Otolaryngology (AREA)
- Signal Processing (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Power Engineering (AREA)
- Computer Networks & Wireless Communication (AREA)
- User Interface Of Digital Computer (AREA)
- Telephone Function (AREA)
Abstract
Description
- Gesture-based user interaction allows a user to control an electronic device by making gestures such as writing letters to spell words, swatting a hand to navigate a selector, or directing a remote controller to direct a character in a video game. One way to provide for such interaction is to use a device such as a mobile phone or tablet computing device equipped with a touch screen for two-dimensional (2-D) touch input on the touch screen. But, this can have the disadvantage that the screen is typically occluded while it is being touched, and such devices that include touch screens are also comparatively expensive and somewhat large in their form factors. Another way is to use depth cameras to track a user's movements and enable three-dimensional (3-D) gesture input to a system having an associated display, and such functionality has been provided in certain smart televisions and game consoles. One drawback with such three-dimensional gesture tracking devices is that they have high power requirements which presents challenges for implementation in portable computing devices, and another drawback is that they typically require a fixed camera to observe the scene, also a challenge to portability. For these reasons, there are challenges to adopting touch screens and 3-D gesture tracking technologies as input devices for computing devices with ultra-portable form factors, including wearable computing devices.
- Various embodiments are disclosed herein that relate to energy efficient gesture input on a surface. For example, one disclosed embodiment provides a hand-worn device that may include a microphone configured to capture an audio input and generate an audio signal, an accelerometer configured to capture a motion input and generate an accelerometer signal, and a controller comprising a processor and memory. The controller may be configured to detect a wake-up motion input based on the accelerometer signal. In response, the controller may wake from a low-power sleep mode in which the accelerometer is turned on and the microphone is turned off and enter a user interaction interpretation mode in which the microphone is turned on. Then, the controller may contemporaneously receive the audio signal and the accelerometer signal and decode strokes based on the audio signal and the accelerometer signal. Finally, the controller may detect a period of inactivity based on the audio signal and return to the low-power sleep mode.
- This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
-
FIG. 1 is a schematic view of a hand-worn device for energy efficient gesture input on a surface, according to one embodiment. -
FIG. 2 illustrates an example use of the hand-worn device ofFIG. 1 to input a gesture on the surface. -
FIG. 3 is a flowchart illustrating an energy efficient method for capturing gesture input on a surface, according to one embodiment. -
FIG. 4 is a flowchart illustrating substeps of a step of the method ofFIG. 3 for decoding strokes. -
FIG. 5 illustrates an example use of embodiments of the hand-worn device as a ring or wristband. -
FIG. 6 is a simplified schematic illustration of an embodiment of a computing system within which the hand worn device ofFIG. 1 may be utilized. -
FIG. 7 illustrates a hierarchical gesture classification strategy for disambiguating different gestures. -
FIG. 1 shows a schematic view of a hand-worn device 10 for energy efficient gesture input on a surface. The hand-worn device 10 may includesensors 12 which may include amicrophone 14 configured to capture an audio input and generate an audio signal based thereon, and anaccelerometer 16 configured to capture a motion input and generate an accelerometer signal based thereon. The hand-worn device may also include acontroller 18 comprising aprocessor 20 andmemory 22, and thecontroller 18 may be configured to switch the hand-worn device 10 between various operating modes to maintain energy efficiency. - When not in use, the hand-
worn device 10 may operate in a low-power sleep mode in which theaccelerometer 16 is turned on and themicrophone 14 is turned off. Theaccelerometer 16 may itself operate in a low-power motion-detection mode, which may include only detecting motion input above a predetermined threshold. Thecontroller 18 may then detect a wake-up motion input of a user based on the accelerometer signal from theaccelerometer 16. The wake-up motion input may be from a wake-up gesture of the user such as a tap that exceeds a predetermined threshold in the accelerometer signal. Multiple taps or other suitable gestures may be used to prevent accidental waking by incidental user motions. Upon detecting the wake-up motion input, thecontroller 18 may wake from the low-power sleep mode and enter a user interaction interpretation mode in which themicrophone 14 is turned on and theaccelerometer 16 is fully active. - During the user interaction interpretation mode, the
controller 18 may contemporaneously receive the audio signal from themicrophone 14 and the accelerometer signal from theaccelerometer 16. Thecontroller 18 may then execute astroke decoder 24 to decode strokes based on the audio signal and the accelerometer signal. Once the user has finished gesturing, thecontroller 18 may detect a period of inactivity based on the audio signal from themicrophone 14 and return to the low-power sleep mode. The period of inactivity may be preset, such as 30 seconds, 1 minute, or 5 minutes, may be a user input period of time, or may be a period set through machine learning techniques that analyze patterns of accelerometer and audio signals and the periods of inactivity that are likely to follow. - Decoding strokes on the hand-
worn device 10 may involve breaking gestures down into simple geometric patterns such as orthogonal or diagonal line segments and half circles. The strokes may make up letters or context-dependent symbols, etc. Thestroke decoder 24 may comprise a stroke classifier which may be, for example, a support vector machine (SVM) classifier, and the SVM classifier may save energy by only looking for a predetermined set of strokes. Additionally, thestroke decoder 24 may be programmed to recognize taps and swipes based on a threshold of the accelerometer signal and a length of the audio signal. Further, orthogonal and diagonal scrolls are detectable, depending on the context of the gesture input, as explained below.Device 10 may be configured to recognize more complicated gestures as well, although recognition of more complicated gestures may require a concomitant increase in power consumed during disambiguation and/or degrade performance. - The hand-
worn device 10 may include anaudio processing subsystem 26 with a band-pass filter 28 configured to filter the audio signal, anamplifier 30 configured to amplify the audio signal, and anenvelope detector 32 such as, for example, a threshold based envelope detector, configured to generate an audio envelope from the audio signal. Using these components, theaudio processing subsystem 26 may transform the audio signal into an audio envelope by filtering the audio signal, amplifying the audio signal, and generating an audio envelope from the audio signal. Thecontroller 18 may then decode strokes with thestroke decoder 24 based on the audio envelope of the audio signal, rather than the audio signal itself, and the accelerometer signal. Theaudio processing subsystem 26 may be formed separately from themicrophone 14, or one or more parts within theaudio processing subsystem 26 may be incorporated into themicrophone 14, for example. Additionally, more than one band-pass filter 28 and more than oneamplifier 30 may be included. - Gesture input may take place in many different situations with different surroundings, as well as on a variety of different types of surfaces. The audio input detected by the
microphone 14 may be the sound of skin dragging across a surface, as one example. Sound may be produced in the same frequency band regardless of the composition of the surface, thus the surface may be composed of wood, plastic, paper, glass, cloth, skin, etc. As long as the surface generates enough friction when rubbed with skin to produce an audio input detectable by themicrophone 14, virtually any sturdy surface material may be used. Additionally, any suitable surface that is close at hand may be used, such that it may not be necessary to gesture on only one specific surface, increasing the utility of the handworn device 10 in a variety of environments. - The audio input, being thus produced by skin dragging across the surface, may be used to determine when the user is gesturing. However, the audio input may not always be easily distinguished from ambient noise. The
audio processing subsystem 26 may filter the audio signal with at least one band-pass filter 28 to remove ambient noise and leave only the audio signal due to skin dragging across the surface. Generating an audio envelope of the audio signal may keep the length and amplitude of the audio signal for decoding strokes while discarding data that may not be used, both simplifying computation and saving the hand-worn device 10 energy. - The hand-
worn device 10 may further comprise abattery 34 configured to store energy andenergy harvesting circuitry 36 including anenergy harvesting coil 38. Theenergy harvesting circuitry 36 may include a capacitor. Theenergy harvesting circuitry 36 may be configured to siphon energy from a device other than the hand-worn device via a wireless energy transfer technique such as near-field communication (NFC) or an inductive charging standard, and charge the battery with the siphoned energy. The energy may be siphoned from a mobile phone with NFC capabilities, for example. Simply holding the mobile phone may put the hand-worn device 10 in close proximity to an NFC chip in the mobile phone, allowing the hand-worn device 10 to charge thebattery 34 throughout the day through natural actions of the user and without requiring removal of the hand-worn device 10. - In another example, the hand-
worn device 10 may utilize a charging pad or other such charging device to charge thebattery 34. If the user does not wish to wear the hand-worndevice 10 at night, such a charging pad may be used by placing the hand-worndevice 10 on it while the user sleeps, for example. However, removal may not be necessary. For instance, the charging pad may be placed under a mouse or other such input device while the user operates a personal computer, allowing the hand-worndevice 10 to be charged while the user works. - The hand-worn
device 10 may further comprise aradio 40 and thecontroller 18 may be configured to send agesture packet 42 to acomputing device 44 via theradio 40. Typically the radio includes a wireless transceiver configured for two way communication, which enables acknowledgments of transmissions to be sent from the computing device back to the hand worn device. In other embodiments, aradio 40 including a one way transmitter may be used. The computing device may be the device from which the hand-worn device siphons energy, but energy may also be siphoned from a separate device. Thegesture packet 42 may comprise the decoded strokes and inter-stroke information. The inter-stroke information may comprise inter-stroke duration, which is the time between decoded strokes, and data indicating whether a user remains in contact with the surface or does not remain in contact with the surface between decoded strokes. These two factors may be taken into account when assembling the decoded strokes into different letters, for example. One letter may be gestured with two consecutive strokes without lifting, and one may be gestured with the same two stokes, but the user may lift off the surface and reposition for the second stroke. - The computing device may comprise an application programming interface (API) 46 configured to receive the
- The computing device may comprise an application programming interface (API) 46 configured to receive the gesture packet 42 and decode an application input corresponding to the gesture packet 42. Sending a gesture packet 42 rather than raw signals may greatly reduce the amount of energy the hand-worn device 10 spends, since the gesture packet 42 may be much smaller than the corresponding audio signal and accelerometer signal. - The application input may be letters, symbols, or commands, for example. Commands may include scrolling, changing pages, zooming in or out, cycling through displayed media, selecting, changing channels, and adjusting volume, among others. The
API 46 may provide context to the stroke decoder 24 such that the stroke decoder 24 may recognize only, for example, strokes of letters for text entry or scrolls for scrolling through displayed pages. Such gestures may be difficult to disambiguate without context from the API 46.
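As a non-limiting sketch of the receiving side, the API might map a gesture packet to an application input using context supplied by the active application. The context strings and the tiny letter table below are illustrative assumptions (building on the hypothetical GesturePacket sketch above) and are not part of the disclosure.

```python
# Illustrative only: context strings, stroke-to-letter table, and return values
# are assumptions, not values defined by the patent.
LETTER_TABLE = {
    # (tuple of stroke names, lifted between strokes?) -> letter; tiny example subset
    (("DOWN", "RIGHT"), False): "L",
    (("DOWN", "RIGHT"), True): "T",
}

def decode_application_input(packet, context):
    """Map a gesture packet to an application input given API-provided context."""
    if context == "text_entry":
        strokes = tuple(s.name for s in packet.strokes)
        lifted = any(not gap.stayed_on_surface for gap in packet.inter_strokes)
        return LETTER_TABLE.get((strokes, lifted), "?")
    if context == "page_navigation":
        # With scroll context, the same motion is treated as scrolling instead.
        first = packet.strokes[0].name if packet.strokes else "DOWN"
        return "SCROLL_DOWN" if first == "DOWN" else "SCROLL_UP"
    return None
```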
- The computing device 44 may be any of a wide variety of devices for different uses. For example, the computing device 44 may be a device that controls a television. The hand-worn device 10 may receive gesture input that corresponds to application input to change the channel on the television or adjust the volume. In this case, the surface may be a couch arm or the user's own leg. In another example, the computing device 44 may control a television and allow a user to stream movies. In this case, the hand-worn device 10 may receive a swipe or scroll application input to browse through movies, or it may allow the user to input letters to search by title, etc. In another example, the computing device 44 may control display of a presentation. The user may control slides without holding onto a remote, which is easily dropped. - In another example, the
computing device 44 may allow a user to access a plurality of devices. In such a situation, the user may be able to, for example, turn on various appliances in a home, by using one hand-worn device 10. Alternatively, the user may be able to switch between devices that share a common display, for example. In yet another example, the computing device 44 may control a head-mounted display (HMD) or be a watch or mobile phone, where space for input on a built-in surface is limited. For instance, if the computing device 44 is a mobile phone, it may ring at an inopportune time for the user. The user may frantically search through pockets and bags to find the mobile phone and silence the ringer. However, by using the hand-worn device 10, the user may easily interact with the mobile phone from a distance. In such instances, the hand-worn device 10 may be constantly available due to being worn by the user. -
FIG. 2 illustrates an example use of the hand-worn device 10 to input a gesture on the surface, utilizing the hardware and software components of FIG. 1. In this example, the hand-worn device 10 is a ring, which may be the size and shape of a typical ring worn as jewelry. However, other manifestations may be possible, such as a watch, wristband, fingerless glove, or other hand-worn device. In this instance, the user is gesturing the letter "A" with his finger on a table, providing a gesture input 48. However, in an instance where the user does not have a finger or otherwise cannot gesture with a finger, another digit or an appendage, for example, may serve to enact the gesture. In order for the microphone 14 to capture an audio input 50, skin is typically dragged across a surface, and in order for the accelerometer 16 to capture a motion input 52, the accelerometer 16 is typically placed near enough to where the user touches the surface to provide a usable accelerometer signal 54. - To account for different users, surfaces, and situations, the
accelerometer 16 may be further configured to determine a tilt of the hand-worn device 10 after detecting the wake-up motion input. A given surface may not be perfectly horizontal, or the user may slightly tilt her finger, for example. Tilt determination may be used to convert X-, Y-, and Z-components of the accelerometer signal 54 to X-, Y-, and Z-components with respect to an interacting plane of the surface.
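By way of a non-limiting illustration, one common way to perform such a conversion is to measure the gravity direction while the hand is briefly at rest after wake-up and rotate subsequent samples so that the Z-axis is normal to the interacting plane. The axis-alignment construction below is an assumption for the sketch, not the specific procedure claimed.

```python
# Illustrative tilt compensation; assumes a brief at-rest gravity measurement.
import numpy as np

def tilt_rotation(gravity_sample):
    """Rotation matrix that maps the measured gravity direction onto the +Z axis."""
    g = np.asarray(gravity_sample, dtype=float)
    g /= np.linalg.norm(g)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)
    c = float(np.dot(g, z))
    if np.isclose(c, 1.0):        # already level
        return np.eye(3)
    if np.isclose(c, -1.0):       # upside down: 180-degree rotation about X
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))  # Rodrigues' formula

def to_surface_frame(samples, gravity_sample):
    """Re-express raw X/Y/Z accelerometer samples relative to the interacting plane."""
    R = tilt_rotation(gravity_sample)
    return np.asarray(samples) @ R.T
```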
- As mentioned above, the microphone 14 may generate the audio signal 56, which may then be received by the audio processing subsystem 26 to generate the audio envelope 58. The audio envelope 58 may be received by the stroke decoder 24 of the controller 18, along with the accelerometer signal 54. The stroke decoder 24 may decode strokes based on the audio envelope 58 and the accelerometer signal 54 and generate a gesture packet 42. The gesture packet 42 may be sent to the computing device 44, in this case a personal computer, where the API 46 may decode an application input 60 corresponding to the gesture packet 42. In this example, the application input 60 includes displaying the letter "A." - The
controller 18 may be further configured to receive feedback 62 from the user indicating that the application input 60 is correct or incorrect. In this example, the feedback 62 is received by selecting or not selecting the cancel option X displayed by the computing device 44. In other examples, the feedback 62 may be received by the hand-worn device 10 by shaking the hand-worn device 10, etc., to cancel the recognition phase and start gesture input again or to select a different recognition candidate. Based on this feedback 62, the controller 18 may apply a machine learning algorithm to accelerometer samples of the accelerometer signal 54 to statistically identify accelerometer samples 54A that are likely to be included in the decoded strokes, and eliminate other accelerometer samples that are unlikely to be included. More generally, based on the feedback 62, the controller 18 may adjust parameters 64 of the stroke decoder 24. - In this way, the
stroke decoder 24 may use only the most relevant accelerometer samples 54A along with the audio envelope 58 when decoding strokes. This may allow the stroke decoder 24 to use simple arithmetic operations for low-power stroke classification and avoid techniques such as dynamic time warping and cross-correlation that may use complex mathematical operations and/or a greater number of accelerometer samples, which may lead to higher energy consumption. Instead, the hand-worn device 10 may be further configured to consume no more than 1.5 mA and preferably no more than 1.2 mA in the user interaction interpretation mode and no more than 1.0 μA and preferably no more than 0.8 μA in the low-power sleep mode.
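As a non-limiting sketch, the feedback loop above might be realized by maintaining a per-position relevance score over the accelerometer samples and classifying strokes with additions and comparisons only. The scoring scheme stands in for the unspecified machine learning algorithm and is purely an assumption for illustration.

```python
# Hypothetical sample selection and low-power stroke classification sketch.
import numpy as np

class SampleSelector:
    def __init__(self, samples_per_stroke, keep_fraction=0.25, lr=0.1):
        self.weights = np.ones(samples_per_stroke)
        self.keep = max(1, int(samples_per_stroke * keep_fraction))
        self.lr = lr

    def relevant_indices(self):
        """Indices of the accelerometer samples currently judged most relevant."""
        return np.argsort(self.weights)[-self.keep:]

    def update(self, used_indices, correct):
        """Reinforce or penalize the sample positions used, based on user feedback."""
        delta = self.lr if correct else -self.lr
        self.weights[used_indices] = np.clip(self.weights[used_indices] + delta, 0.0, 10.0)

def classify_stroke(xy_samples, indices):
    """Stroke direction estimate using only sums and comparisons (no DTW, no correlation)."""
    dx = float(np.sum(xy_samples[indices, 0]))
    dy = float(np.sum(xy_samples[indices, 1]))
    if abs(dx) >= abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "UP" if dy > 0 else "DOWN"
```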
- FIG. 3 illustrates a flowchart of an energy-efficient method, method 300, for capturing gesture input on a surface with a hand-worn device. The hand-worn device may be a ring, watch, wristband, glove, or other hand-worn device, for example. The following description of method 300 is provided with reference to the software and hardware components of the hand-worn device 10 and computing device 44 described above and shown in FIGS. 1 and 2. It will be appreciated that method 300 may also be performed in other contexts using other suitable hardware and software components. - With reference to
FIG. 3, at 302 the method 300 may include detecting a wake-up motion input based on an accelerometer signal from an accelerometer. At 304 the method 300 may include waking from a low-power sleep mode in which the accelerometer is turned on and a microphone is turned off and entering a user interaction interpretation mode in which the microphone is turned on. In addition, after detecting the wake-up motion input, the hand-worn device may be configured to begin detecting a tilt of the hand-worn device at the accelerometer. - At 306 the
method 300 may include contemporaneously receiving an audio signal from the microphone and the accelerometer signal. At 308 the method 300 may include decoding strokes based on the audio signal and the accelerometer signal. At 310 the method 300 may include detecting a period of inactivity based on the audio signal, which may be of the length described above, input by a user, or learned over time by the hand-worn device. At 312 the method 300 may include returning to the low-power sleep mode. After 312 the method 300 may include ending or continuing to operate in a sleep-wake cycle by returning to 302.
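The sleep-wake cycle of steps 302-312 might be sketched as a simple control loop such as the one below. The peripheral interface (read_magnitude, read_frame, has_friction_sound, power_on/power_off), the threshold, and the timeout are hypothetical names and values introduced only for illustration.

```python
# Illustrative control loop for method 300; the hardware interface is hypothetical.
import time

WAKE_ACCEL_THRESHOLD = 1.5      # assumed acceleration magnitude counted as a wake-up motion
INACTIVITY_TIMEOUT_S = 5.0      # assumed period of audio inactivity before sleeping

def run_device(accelerometer, microphone, decode_strokes):
    while True:
        # Low-power sleep mode: accelerometer on, microphone off.
        microphone.power_off()
        while accelerometer.read_magnitude() < WAKE_ACCEL_THRESHOLD:
            time.sleep(0.05)                      # 302: wait for a wake-up motion input

        # 304: user interaction interpretation mode, microphone on.
        microphone.power_on()
        last_activity = time.monotonic()
        while time.monotonic() - last_activity < INACTIVITY_TIMEOUT_S:
            audio = microphone.read_frame()       # 306: receive the audio signal
            accel = accelerometer.read_frame()    #      ... and the accelerometer signal
            if audio.has_friction_sound():
                decode_strokes(audio, accel)      # 308: decode strokes
                last_activity = time.monotonic()
        # 310/312: period of inactivity detected; return to the low-power sleep mode.
```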
- It will be appreciated as described above that the hand-worn device may further comprise a battery and energy harvesting circuitry including an energy harvesting coil, and thus at any point throughout method 300, the method may include siphoning energy from a device other than the hand-worn device via a wireless energy transfer technique such as near-field communication (NFC) at the energy harvesting circuitry and charging the battery with the siphoned energy. The energy may be siphoned from a device such as an NFC-capable smartphone or charging pad, for example. Combining the low power consumption of the device with energy siphoning abilities may allow the user to wear the hand-worn device at all times without removing it for charging. This may reduce the likelihood of dropping, losing, or forgetting the hand-worn device, incorporating the use and presence of the hand-worn device into daily life. -
FIG. 4 is a flowchart illustrating detailed substeps of step 308, decoding strokes, of method 300 of FIG. 3. At 320 the method 300 may include filtering the audio signal with a band-pass filter of an audio processing subsystem of the hand-worn device. The audio processing subsystem may further comprise at least one amplifier and an envelope detector. At 322 the method 300 may include amplifying the audio signal with the amplifier. At 324 the method 300 may include generating an audio envelope from the audio signal with the envelope detector. At 326 the method 300 may include decoding strokes based on the audio envelope of the audio signal and the accelerometer signal. - At 328 the
method 300 may include sending a gesture packet to a computing device via a radio, the gesture packet comprising the decoded strokes and inter-stroke information. The inter-stroke information may comprise inter-stroke duration and data indicating whether a user remains in contact with the surface or does not remain in contact with the surface between decoded strokes. At 330 the method 300 may include receiving the gesture packet at an application programming interface (API) of the computing device and decoding an application input corresponding to the gesture packet at the API. After 330, the step 308 of method 300 may end. However, it may also proceed to 332 to begin a feedback process. - At 332 the
method 300 may include receiving feedback from the user indicating that the application input is correct or incorrect. At 334 the method 300 may include, based on the feedback, adjusting parameters of a stroke decoder. After 334, the method 300 may include returning to 326 to decode strokes more efficiently than before receiving feedback. - It will be appreciated that
method 300 is provided by way of example and is not meant to be limiting. Therefore, it is to be understood that method 300 may include additional and/or alternative steps than those illustrated in FIGS. 3 and 4. Further, it is to be understood that method 300 may be performed in any suitable order. Further still, it is to be understood that one or more steps may be omitted from method 300 without departing from the scope of this disclosure. -
FIG. 5 illustrates example embodiments of the hand-worn device as a ring or wristband, though it may also be another hand-worn device such as a fingerless glove, for example. The user may wear the ring on a finger or the wristband on a wrist. The surface upon which the user is gesturing is a countertop in this example. The wide arrow indicates the movement of the user dragging her finger along the countertop to provide a gesture input, and her entire hand, including the hand-worn device, may move in nearly or exactly the same manner as her finger, such that the accelerometer in the hand-worn device may generate an accelerometer signal with accuracy. The friction generated between the countertop and the user's finger may produce sound waves as visually represented in FIG. 5. The sound waves may serve as an audio input and the thin arrows may demonstrate the microphone in the hand-worn device capturing the audio input. -
FIG. 7 illustrates a hierarchical gesture classification strategy for disambiguating different gestures. Disambiguating gestures in tiers in this manner may allow for higher accuracy in detecting and interpreting gestures as well as reduced energy consumption. At each tier, disambiguation is performed to eliminate gesture candidates, and narrow the field of possible matching gestures. By utilizing a gesture recognition algorithm that traverses a disambiguation tree in this manner, the total processing power consumed for matching a gesture may be reduced, since possible candidates are eliminated at each fork in the hierarchy. As described above, not only may gestures be broken down into strokes and reassembled as letters, characters, shapes, symbols, etc., but gestures such as scrolls, swipes, and taps may also be decoded with different classifiers. - With reference to
FIG. 7, at 702 the start of a gesture may be detected by the audio envelope indicating that skin is moving across a surface. From here, the magnitude of the Z-component of the accelerometer signal may be compared to a threshold value to classify the gesture as either a hard landing or a soft landing. At 704 a soft landing may be determined if the Z-component is under the threshold. Alternatively, at 706 a hard landing may be determined if the Z-component is equal to or over the threshold. The types of landing may be classified by a landing classifier of the stroke decoder. - Context from the API may be used to further classify the gesture with a soft landing into either a stroke or series of strokes at 708 or a scroll at 710. The context may be, for example, that the API will accept text input (stroke), invoking the stroke classifier of the stroke decoder, or page navigation (scroll), invoking a scroll classifier of the stroke decoder. Any or all of the landing classifier, the stroke classifier, and the scroll classifier may be an SVM classifier, for example. If the gesture is determined to be a scroll, the beginning of the gesture may be a short nudge. After the nudge is detected, the remainder of the gesture may be interpreted in real-time such that different directions of scrolling are determined based upon the accelerometer signal.
- A gesture with a hard landing may be further disambiguated by a swipe-tap classifier using the length of the audio envelope. At 712 a tap may be determined by a very short audio envelope, i.e., one that is under a threshold. At 714 a swipe may be determined by a longer audio envelope, i.e., one that is greater than or equal to the threshold. A swipe may be further disambiguated by direction according to the accelerometer signal. In this manner, a variety of gesture inputs may be disambiguated by traversing a tiered classifier as shown in
FIG. 7, thus conserving processor time and reducing power consumption as compared to attempting to disambiguate a wide class of gestures in a single step. - The above-described systems and methods may be used to provide energy-efficient gesture input on a surface using a hand-worn device. The hand-worn device may be adapted in different embodiments to serve a variety of purposes. This approach has the potential advantages of constant availability, low power consumption, battery charging with or without removing the hand-worn device, accurate capture of user intent, and versatility.
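By way of a non-limiting illustration, the tiered classification of FIG. 7 might be traversed as in the sketch below. The threshold values and simple comparisons are assumptions for the sketch; as noted above, each tier may instead use an SVM or other classifier.

```python
# Illustrative tiered gesture disambiguation; thresholds are assumed values.
HARD_LANDING_Z = 1.8       # assumed |Z| threshold separating hard from soft landings
TAP_MAX_ENVELOPE_MS = 120  # assumed envelope length below which a gesture is a tap

def classify_gesture(z_peak, envelope_ms, accel_direction, api_context):
    """Walk the disambiguation tree: landing type first, then context- or length-based tiers."""
    if abs(z_peak) < HARD_LANDING_Z:
        # Soft landing (704): API context chooses between strokes and scrolls.
        if api_context == "text_entry":
            return ("stroke", accel_direction)        # stroke classifier tier
        return ("scroll", accel_direction)            # scroll classifier tier
    # Hard landing (706): swipe-tap classifier uses the audio envelope length.
    if envelope_ms < TAP_MAX_ENVELOPE_MS:
        return ("tap", None)                          # 712: very short envelope
    return ("swipe", accel_direction)                 # 714: longer envelope
```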
- In some embodiments, the methods and processes described herein may be tied to a computing system of one or more computing devices or hand-worn devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product.
-
FIG. 6 schematically shows a non-limiting embodiment of a computing system 600 that can enact one or more of the methods and processes described above. Hand-worn device 10 and computing device 44 may take the form of computing system 600. Computing system 600 is shown in simplified form. Computing system 600 may take the form of one or more personal computers, server computers, tablet computers, home-entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices (e.g., smartphone), hand-worn devices, and/or other computing devices.
Computing system 600 includes a logic subsystem 604 and a storage subsystem 608. Computing system 600 may optionally include a display subsystem 612, sensor subsystem 620, input subsystem 622, communication subsystem 616, and/or other components not shown in FIG. 6.
Logic subsystem 604 includes one or more physical devices configured to execute instructions. For example, the logic subsystem may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result. - The logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic subsystems configured to execute hardware or firmware instructions. Processors of the logic subsystem may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic subsystem optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
-
Storage subsystem 608 includes one or more physical devices configured to hold instructions executable by the logic subsystem to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage subsystem 608 may be transformed (e.g., to hold different data).
Storage subsystem 608 may include removable devices 624 and/or built-in devices. Storage subsystem 608 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage subsystem 608 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. - It will be appreciated that
storage subsystem 608 includes one or more physical devices. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration. - Aspects of
logic subsystem 604 and storage subsystem 608 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example. - The terms "module" and "program" may be used to describe an aspect of
computing system 600 implemented to perform a particular function. In some cases, a module or program may be instantiated via logic subsystem 604 executing instructions held by storage subsystem 608. It will be understood that different modules, programs, and/or subsystems may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or subsystem may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms "module" and "program" may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc. - When included,
display subsystem 612 may be used to present a visual representation of data held by storage subsystem 608. This visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by the storage subsystem, and thus transform the state of the storage subsystem, the state of display subsystem 612 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 612 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 604 and/or storage subsystem 608 in a shared enclosure, or such display devices may be peripheral display devices. - When included,
communication subsystem 616 may be configured to communicatively couple computing system 600 with one or more other computing devices. Communication subsystem 616 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a radio, a wireless telephone network, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow computing system 600 to send and/or receive messages to and/or from other devices via a network such as the Internet. - When included,
sensor subsystem 620 may include one or more sensors configured to sense different physical phenomena (e.g., visible light, infrared light, sound, acceleration, orientation, position, etc.). Sensor subsystem 620 may be configured to provide sensor data to logic subsystem 604, for example. - When included,
input subsystem 622 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity. It will be appreciated that computing system 600 may function as computing device 44 described above and shown in FIGS. 1 and 2, and the hand-worn device 10 may be an input device of input subsystem 622. - It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
- The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
Claims (20)
Priority Applications (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/273,238 US9232331B2 (en) | 2014-05-08 | 2014-05-08 | Hand-worn device for surface gesture input |
| PCT/US2015/028682 WO2015171442A1 (en) | 2014-05-08 | 2015-05-01 | Hand-worn device for surface gesture input |
| EP15722634.1A EP3140713B1 (en) | 2014-05-08 | 2015-05-01 | Hand-worn device for surface gesture input |
| CN201580023949.6A CN106462216B (en) | 2014-05-08 | 2015-05-01 | Hand-worn device for surface gesture input |
| KR1020167034527A KR102489212B1 (en) | 2014-05-08 | 2015-05-01 | Hand-worn device for surface gesture input |
| US14/987,526 US9360946B2 (en) | 2014-05-08 | 2016-01-04 | Hand-worn device for surface gesture input |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/273,238 US9232331B2 (en) | 2014-05-08 | 2014-05-08 | Hand-worn device for surface gesture input |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/987,526 Continuation US9360946B2 (en) | 2014-05-08 | 2016-01-04 | Hand-worn device for surface gesture input |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20150326985A1 true US20150326985A1 (en) | 2015-11-12 |
| US9232331B2 US9232331B2 (en) | 2016-01-05 |
Family
ID=53177897
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/273,238 Active 2034-05-28 US9232331B2 (en) | 2014-05-08 | 2014-05-08 | Hand-worn device for surface gesture input |
| US14/987,526 Active US9360946B2 (en) | 2014-05-08 | 2016-01-04 | Hand-worn device for surface gesture input |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/987,526 Active US9360946B2 (en) | 2014-05-08 | 2016-01-04 | Hand-worn device for surface gesture input |
Country Status (5)
| Country | Link |
|---|---|
| US (2) | US9232331B2 (en) |
| EP (1) | EP3140713B1 (en) |
| KR (1) | KR102489212B1 (en) |
| CN (1) | CN106462216B (en) |
| WO (1) | WO2015171442A1 (en) |
Cited By (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160093199A1 (en) * | 2014-09-26 | 2016-03-31 | Intel Corporation | Shoe-based wearable interaction system |
| US20160246372A1 (en) * | 2013-10-28 | 2016-08-25 | Kyocera Corporation | Tactile sensation providing apparatus and control method of tactile sensation providing apparatus |
| US9594427B2 (en) | 2014-05-23 | 2017-03-14 | Microsoft Technology Licensing, Llc | Finger tracking |
| US20170271922A1 (en) * | 2016-03-17 | 2017-09-21 | Industry-Academic Cooperation Foundation, Chosun University | Apparatus and method of charging mobile terminal using energy harvesting device |
| US20180088761A1 (en) * | 2016-09-23 | 2018-03-29 | Apple Inc. | Dynamically adjusting touch hysteresis based on contextual data |
| US20190018642A1 (en) * | 2017-07-12 | 2019-01-17 | Lenovo (Singapore) Pte. Ltd. | Portable computing devices and command input methods for the portable computing devices |
| CN109313510A (en) * | 2016-06-24 | 2019-02-05 | 微软技术许可有限责任公司 | Integrated free space and surface input devices |
| US10438205B2 (en) | 2014-05-29 | 2019-10-08 | Apple Inc. | User interface for payments |
| CN111524513A (en) * | 2020-04-16 | 2020-08-11 | 歌尔科技有限公司 | Wearable device and voice transmission control method, device and medium thereof |
| US10914606B2 (en) | 2014-09-02 | 2021-02-09 | Apple Inc. | User interactions for a mapping application |
| US10972600B2 (en) | 2013-10-30 | 2021-04-06 | Apple Inc. | Displaying relevant user interface objects |
| US20210183380A1 (en) * | 2019-12-12 | 2021-06-17 | Silicon Laboratories Inc. | Keyword Spotting Using Machine Learning |
| US11048293B2 (en) | 2017-07-19 | 2021-06-29 | Samsung Electronics Co., Ltd. | Electronic device and system for deciding duration of receiving voice input based on context information |
| US11250844B2 (en) * | 2017-04-12 | 2022-02-15 | Soundhound, Inc. | Managing agent engagement in a man-machine dialog |
| US11321731B2 (en) | 2015-06-05 | 2022-05-03 | Apple Inc. | User interface for loyalty accounts and private label accounts |
| US11403606B2 (en) * | 2018-01-05 | 2022-08-02 | Advanced New Technologies Co., Ltd. | Executing application without unlocking mobile device |
| US11528565B2 (en) * | 2015-12-18 | 2022-12-13 | Cochlear Limited | Power management features |
| CN116722629A (en) * | 2023-08-07 | 2023-09-08 | 深圳市首诺信电子有限公司 | A vehicle-mounted wireless charging device and its charging method |
| US11762474B2 (en) * | 2017-09-06 | 2023-09-19 | Georgia Tech Research Corporation | Systems, methods and devices for gesture recognition |
| US11783305B2 (en) | 2015-06-05 | 2023-10-10 | Apple Inc. | User interface for loyalty accounts and private label accounts for a wearable device |
| US20230367366A1 (en) * | 2014-05-15 | 2023-11-16 | Federal Express Corporation | Wearable devices for courier processing and methods of use thereof |
| US20250251792A1 (en) * | 2024-02-02 | 2025-08-07 | Htc Corporation | Head-mounted display, tap input signal generating method and non-transitory computer readable storage medium thereof |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9582076B2 (en) | 2014-09-17 | 2017-02-28 | Microsoft Technology Licensing, Llc | Smart ring |
| US20170236318A1 (en) * | 2016-02-15 | 2017-08-17 | Microsoft Technology Licensing, Llc | Animated Digital Ink |
| EP3519892B1 (en) | 2016-09-27 | 2020-12-16 | Snap Inc. | Eyewear device mode indication |
| CN108052195B (en) * | 2017-12-05 | 2021-11-26 | 广东小天才科技有限公司 | Control method of microphone equipment and terminal equipment |
| US11006043B1 (en) | 2018-04-03 | 2021-05-11 | Snap Inc. | Image-capture control |
| CN109991537A (en) * | 2019-05-13 | 2019-07-09 | 广东电网有限责任公司 | A kind of high-tension switch gear connecting lever divide-shut brake monitors system and method in place |
| KR102168185B1 (en) * | 2019-09-05 | 2020-10-20 | 성균관대학교 산학협력단 | An electronic device including a harvest circuit being input aperiodic signal |
| US20240402982A1 (en) * | 2023-06-02 | 2024-12-05 | Algoriddim Gmbh | Artificial reality based system, method and computer program for pre-cueing music audio data |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1785808B1 (en) | 2005-11-10 | 2014-10-01 | BlackBerry Limited | System and method for activating an electronic device |
| TWI374391B (en) | 2008-05-27 | 2012-10-11 | Ind Tech Res Inst | Method for recognizing writing motion and trajectory and apparatus for writing and recognizing system |
| CN101604212B (en) * | 2008-06-11 | 2013-01-09 | 财团法人工业技术研究院 | Writing action and trajectory recognition method, writing device and recognition system thereof |
| EP2302882A1 (en) | 2009-09-24 | 2011-03-30 | Research In Motion Limited | Communication device and method for initiating NFC communication |
| US9174123B2 (en) * | 2009-11-09 | 2015-11-03 | Invensense, Inc. | Handheld computer systems and techniques for character and command recognition related to human movements |
| US20130135223A1 (en) | 2009-12-13 | 2013-05-30 | Ringbow Ltd. | Finger-worn input devices and methods of use |
| US20120038652A1 (en) | 2010-08-12 | 2012-02-16 | Palm, Inc. | Accepting motion-based character input on mobile computing devices |
| US9042571B2 (en) | 2011-07-19 | 2015-05-26 | Dolby Laboratories Licensing Corporation | Method and system for touch gesture detection in response to microphone output |
| US9811255B2 (en) | 2011-09-30 | 2017-11-07 | Intel Corporation | Detection of gesture data segmentation in mobile devices |
| US20130100044A1 (en) | 2011-10-24 | 2013-04-25 | Motorola Mobility, Inc. | Method for Detecting Wake Conditions of a Portable Electronic Device |
| US9389690B2 (en) | 2012-03-01 | 2016-07-12 | Qualcomm Incorporated | Gesture detection based on information from multiple types of sensors |
| US9696802B2 (en) | 2013-03-20 | 2017-07-04 | Microsoft Technology Licensing, Llc | Short range wireless powered ring for user interaction and sensing |
| US20150078586A1 (en) * | 2013-09-16 | 2015-03-19 | Amazon Technologies, Inc. | User input with fingerprint sensor |
- 2014
  - 2014-05-08 US US14/273,238 patent/US9232331B2/en active Active
- 2015
  - 2015-05-01 WO PCT/US2015/028682 patent/WO2015171442A1/en not_active Ceased
  - 2015-05-01 KR KR1020167034527A patent/KR102489212B1/en active Active
  - 2015-05-01 CN CN201580023949.6A patent/CN106462216B/en active Active
  - 2015-05-01 EP EP15722634.1A patent/EP3140713B1/en active Active
- 2016
  - 2016-01-04 US US14/987,526 patent/US9360946B2/en active Active
Cited By (46)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160246372A1 (en) * | 2013-10-28 | 2016-08-25 | Kyocera Corporation | Tactile sensation providing apparatus and control method of tactile sensation providing apparatus |
| US10031584B2 (en) * | 2013-10-28 | 2018-07-24 | Kyocera Corporation | Tactile sensation providing apparatus and control method of tactile sensation providing apparatus |
| US11316968B2 (en) | 2013-10-30 | 2022-04-26 | Apple Inc. | Displaying relevant user interface objects |
| US10972600B2 (en) | 2013-10-30 | 2021-04-06 | Apple Inc. | Displaying relevant user interface objects |
| US12088755B2 (en) | 2013-10-30 | 2024-09-10 | Apple Inc. | Displaying relevant user interface objects |
| US12026013B2 (en) * | 2014-05-15 | 2024-07-02 | Federal Express Corporation | Wearable devices for courier processing and methods of use thereof |
| US20230367366A1 (en) * | 2014-05-15 | 2023-11-16 | Federal Express Corporation | Wearable devices for courier processing and methods of use thereof |
| US10191543B2 (en) | 2014-05-23 | 2019-01-29 | Microsoft Technology Licensing, Llc | Wearable device touch detection |
| US9594427B2 (en) | 2014-05-23 | 2017-03-14 | Microsoft Technology Licensing, Llc | Finger tracking |
| US11836725B2 (en) | 2014-05-29 | 2023-12-05 | Apple Inc. | User interface for payments |
| US10438205B2 (en) | 2014-05-29 | 2019-10-08 | Apple Inc. | User interface for payments |
| US10748153B2 (en) | 2014-05-29 | 2020-08-18 | Apple Inc. | User interface for payments |
| US10796309B2 (en) | 2014-05-29 | 2020-10-06 | Apple Inc. | User interface for payments |
| US10977651B2 (en) | 2014-05-29 | 2021-04-13 | Apple Inc. | User interface for payments |
| US10902424B2 (en) | 2014-05-29 | 2021-01-26 | Apple Inc. | User interface for payments |
| US11733055B2 (en) | 2014-09-02 | 2023-08-22 | Apple Inc. | User interactions for a mapping application |
| US10914606B2 (en) | 2014-09-02 | 2021-02-09 | Apple Inc. | User interactions for a mapping application |
| US9747781B2 (en) * | 2014-09-26 | 2017-08-29 | Intel Corporation | Shoe-based wearable interaction system |
| US20160093199A1 (en) * | 2014-09-26 | 2016-03-31 | Intel Corporation | Shoe-based wearable interaction system |
| US11783305B2 (en) | 2015-06-05 | 2023-10-10 | Apple Inc. | User interface for loyalty accounts and private label accounts for a wearable device |
| US12333509B2 (en) | 2015-06-05 | 2025-06-17 | Apple Inc. | User interface for loyalty accounts and private label accounts for a wearable device |
| US12456129B2 (en) | 2015-06-05 | 2025-10-28 | Apple Inc. | User interface for loyalty accounts and private label accounts |
| US11734708B2 (en) | 2015-06-05 | 2023-08-22 | Apple Inc. | User interface for loyalty accounts and private label accounts |
| US11321731B2 (en) | 2015-06-05 | 2022-05-03 | Apple Inc. | User interface for loyalty accounts and private label accounts |
| US12028681B2 (en) | 2015-12-18 | 2024-07-02 | Cochlear Limited | Power management features |
| US11528565B2 (en) * | 2015-12-18 | 2022-12-13 | Cochlear Limited | Power management features |
| US10326312B2 (en) * | 2016-03-17 | 2019-06-18 | Industry-Academic Cooperation Foundation, Chosun University | Apparatus and method of charging mobile terminal using energy harvesting device |
| US20170271922A1 (en) * | 2016-03-17 | 2017-09-21 | Industry-Academic Cooperation Foundation, Chosun University | Apparatus and method of charging mobile terminal using energy harvesting device |
| CN109313510A (en) * | 2016-06-24 | 2019-02-05 | 微软技术许可有限责任公司 | Integrated free space and surface input devices |
| US10860199B2 (en) * | 2016-09-23 | 2020-12-08 | Apple Inc. | Dynamically adjusting touch hysteresis based on contextual data |
| US20180088761A1 (en) * | 2016-09-23 | 2018-03-29 | Apple Inc. | Dynamically adjusting touch hysteresis based on contextual data |
| US12125484B2 (en) * | 2017-04-12 | 2024-10-22 | Soundhound Ai Ip, Llc | Controlling an engagement state of an agent during a human-machine dialog |
| US20220122607A1 (en) * | 2017-04-12 | 2022-04-21 | Soundhound, Inc. | Controlling an engagement state of an agent during a human-machine dialog |
| US11250844B2 (en) * | 2017-04-12 | 2022-02-15 | Soundhound, Inc. | Managing agent engagement in a man-machine dialog |
| US10802790B2 (en) * | 2017-07-12 | 2020-10-13 | Lenovo (Singapore) Pte. Ltd. | Portable computing devices and command input methods for the portable computing devices |
| US20190018642A1 (en) * | 2017-07-12 | 2019-01-17 | Lenovo (Singapore) Pte. Ltd. | Portable computing devices and command input methods for the portable computing devices |
| US11048293B2 (en) | 2017-07-19 | 2021-06-29 | Samsung Electronics Co., Ltd. | Electronic device and system for deciding duration of receiving voice input based on context information |
| US11762474B2 (en) * | 2017-09-06 | 2023-09-19 | Georgia Tech Research Corporation | Systems, methods and devices for gesture recognition |
| US11842295B2 (en) | 2018-01-05 | 2023-12-12 | Advanced New Technologies Co., Ltd. | Executing application without unlocking mobile device |
| US11403606B2 (en) * | 2018-01-05 | 2022-08-02 | Advanced New Technologies Co., Ltd. | Executing application without unlocking mobile device |
| US11735177B2 (en) * | 2019-12-12 | 2023-08-22 | Silicon Laboratories Inc. | Keyword spotting using machine learning |
| US20210183380A1 (en) * | 2019-12-12 | 2021-06-17 | Silicon Laboratories Inc. | Keyword Spotting Using Machine Learning |
| CN111524513A (en) * | 2020-04-16 | 2020-08-11 | 歌尔科技有限公司 | Wearable device and voice transmission control method, device and medium thereof |
| CN116722629A (en) * | 2023-08-07 | 2023-09-08 | 深圳市首诺信电子有限公司 | A vehicle-mounted wireless charging device and its charging method |
| US20250251792A1 (en) * | 2024-02-02 | 2025-08-07 | Htc Corporation | Head-mounted display, tap input signal generating method and non-transitory computer readable storage medium thereof |
| US12474778B2 (en) * | 2024-02-02 | 2025-11-18 | Htc Corporation | Head-mounted display, tap input signal generating method and non-transitory computer readable storage medium thereof |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106462216A (en) | 2017-02-22 |
| US9360946B2 (en) | 2016-06-07 |
| CN106462216B (en) | 2019-08-02 |
| KR20170003662A (en) | 2017-01-09 |
| EP3140713A1 (en) | 2017-03-15 |
| US9232331B2 (en) | 2016-01-05 |
| KR102489212B1 (en) | 2023-01-16 |
| WO2015171442A1 (en) | 2015-11-12 |
| EP3140713B1 (en) | 2020-01-15 |
| US20160116988A1 (en) | 2016-04-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9360946B2 (en) | Hand-worn device for surface gesture input | |
| US10416789B2 (en) | Automatic selection of a wireless connectivity protocol for an input device | |
| US10139898B2 (en) | Distracted browsing modes | |
| KR102194272B1 (en) | Enhancing touch inputs with gestures | |
| US9798443B1 (en) | Approaches for seamlessly launching applications | |
| TWI582641B (en) | Button functionality | |
| AU2013360585B2 (en) | Information search method and device and computer readable recording medium thereof | |
| CN103927113B (en) | Portable terminal and the method that haptic effect is provided in portable terminal | |
| US9377860B1 (en) | Enabling gesture input for controlling a presentation of content | |
| CN105683893B (en) | Rendering control interfaces on touch-enabled devices based on motion or lack of motion | |
| Serrano et al. | Bezel-Tap gestures: quick activation of commands from sleep mode on tablets | |
| AU2014200924B2 (en) | Context awareness-based screen scroll method, machine-readable storage medium and terminal therefor | |
| US9268407B1 (en) | Interface elements for managing gesture control | |
| US20160088060A1 (en) | Gesture navigation for secondary user interface | |
| US10474324B2 (en) | Uninterruptable overlay on a display | |
| KR20170008854A (en) | Finger tracking | |
| WO2014039201A1 (en) | Augmented reality surface displaying | |
| CN108885525A (en) | Menu display method and terminal | |
| US20180267624A1 (en) | Systems and methods for spotlight effect manipulation | |
| KR20150098424A (en) | Method and apparatus for processing input of electronic device | |
| CN105683892A (en) | User Interface Elements for Hover Controls | |
| EP3204843A1 (en) | Multiple stage user interface | |
| US9817566B1 (en) | Approaches to managing device functionality | |
| US9507429B1 (en) | Obscure cameras as input |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PRIYANTHA, NISSANKA ARACHCHIGE BODHI;LIU, JIE;GUMMESON, JEREMY;SIGNING DATES FROM 20140430 TO 20140615;REEL/FRAME:033180/0949 |
|
| AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034747/0417 Effective date: 20141014 Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:039025/0454 Effective date: 20141014 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |