HK1179719A - Virtual touch interface
Description
Background
In a conventional computing environment, a user typically uses a keyboard and mouse to control and interact with a computer. For example, a user typically moves a mouse to navigate a cursor displayed by a computer on a monitor. The user may also use the mouse to issue a limited number of simple commands to the computer (e.g., click and drag to highlight an item, double click to open the item, right click to access a command menu).
Today, computing users seek more intuitive, efficient, and powerful ways to issue commands to computers. Some devices, such as touch pads, touch panels (e.g., touch-enabled monitors, etc.), and wearable devices (e.g., motion sensor gloves), expand the way users interact with computers. Generally, a touch pad is a navigation sensor located near a keyboard. Rather than using a conventional mouse to control the cursor, the user can physically touch the touch pad and slide a finger around on it to control the cursor. While a touch pad may be used to control a computer in place of a mouse, a touch pad may undesirably occupy a large amount of space on a keyboard, as is the case when implemented in a laptop setting.
Touch panels also expand the way users issue commands to a computer. Generally, a touch panel combines a display with a built-in touch interface so that a user can issue commands to a computer by physically touching a screen. Although touch panels generally respond to a greater range of operations (e.g., zooming, scrolling, etc.) than touch pads, touch panels are susceptible to smudges, which can undesirably degrade the display quality of the screen. In addition, a touch panel may be uncomfortable and tedious for the user to operate for extended periods of time, since the user may have to lift an arm up to the screen.
Wearable devices are another example of devices that extend the way users issue commands to a computer. In general, motion sensor gloves enable a user to use their hand as a natural interface device. Sensors located on the glove detect hand movement, and this motion is then translated into an input command to the computer. Because motion sensor gloves require multiple optimally placed sensors, such a device can be undesirably expensive and cumbersome for the user.
SUMMARY
The virtual touch interface enhances the user's interactive experience in an intelligent environment, such as a computing environment. A user may issue commands to a computing device (e.g., move a cursor, select an object, zoom, scroll, drag, insert an object, select a desired input element from a displayed list of input elements, etc.) by naturally moving a pointer (e.g., a finger, a pen, etc.) within a light field projected by a light source proximate to the user. The light reflected from the pointer is captured by various sensors as a sequence of images. The reflected light captured in the sequence of images may be analyzed to track the movement of the pointer. The tracked movement is then analyzed to issue commands to the computing device.
The virtual touch interface may be implemented in various environments. For example, the virtual touch interface may be used for a desktop computer, a laptop computer, or a mobile device that allows a user to issue commands to the computing device.
Brief Description of Drawings
The "detailed description" is described with reference to the drawings. In the drawings, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears. The use of the same reference symbols in different drawings indicates similar or identical items.
FIG. 1 is an illustrative virtual touch interface that allows a user to issue commands to a computing device through a light field.
FIG. 2 is a schematic diagram of an illustrative environment including a virtual touch engine issuing commands.
FIG. 3 is a flow diagram of an illustrative process for issuing commands using a virtual touch interface.
FIG. 4a is an exemplary virtual interface environment illustrating capturing a moving pointer.
FIG. 4b is an exemplary virtual interface environment illustrating positioning of a moving pointer in a captured image.
FIG. 5 is a flow chart of an illustrative process of analyzing an input that captures reflections from a moving pointer in a light field.
FIG. 6 shows illustrative multi-touch commands issued by a user to a computing device through a virtual touch interface.
Detailed Description
Overview
Computing users today seek intuitive, efficient, and powerful ways to interact with computing devices in order to enhance their overall computing experience. A virtual touch interface may enhance the overall computing experience of a user. When interacting with the virtual touch interface, a user may move a pointer within the light field to issue commands to the computing device. As used herein, a "pointer" is any object capable of reflecting light, such as one or more fingers, pens, pencils, reflectors, and the like. As light reflects from the pointer, various sensors capture the light as a sequence of images. The images are then analyzed to issue commands. As used herein, a "command" is any command that may be issued to a computing device, such as moving a cursor, selecting an object, zooming, scrolling, rotating, dragging, inserting an object, selecting a desired input element from a displayed list of input elements, and the like.
The processes and systems described herein may be implemented in a variety of ways. Example implementations are provided below with reference to the accompanying drawings.
Illustrative Environment
FIG. 1 is an illustrative virtual touch interface environment 100. The environment 100 may include a computing device 102 connectable to a network. The computing device may include a display device 104, such as a monitor, to present video images to a user 106. One or more light field generators 108 may each generate a light field 110 to serve as a substitute for a mouse or other computing device interface (touchpad, touch panel, wearable device, etc.). The light field 110 may be planar in shape and positioned parallel to a work surface 112, such as a desktop. In some implementations, the aspect ratio of the light field 110 is substantially equal to the aspect ratio of the display device 104.
FIG. 1 shows that the light field 110 generated by two light field generators covers a portion of the work surface 112. In some embodiments, the light field may cover the entire work surface 112. The light field generator 108 may be implemented as any number of devices operable to emit visible light or any form of non-visible electromagnetic radiation, such as an infrared light source, an infrared laser diode, and/or a photodiode. For example, the light field generator 108 may be implemented as two separate infrared Light Emitting Diodes (LEDs), each equipped with a cylindrical lens, that emit electromagnetic radiation in a non-visible form, such as infrared light, as two light fields. In some cases, the two light fields are parallel to each other. For example, the light field may include a first light field located at a first height relative to the work surface and a second light field located at a second height relative to the work surface. Where the light field is generated by two light field generators 108, each light field generator may operate independently. For example, each light field generator may be independently turned on or off.
The user 106 may move a pointer 114 (e.g., a finger, multiple fingers, a pen, a pencil, etc.) within the light field 110 to issue commands to the computing device 102. As the user 106 moves the pointer 114 within the light field 110, light may reflect from the pointer 114 toward the one or more sensors 116, as indicated by arrows 118. When light reflects from the pointer 114, the sensor 116 may capture the reflected light 118. In some embodiments, the sensor 116 is tilted with respect to the light field 110 and positioned near the light field generator 108 to maximize the captured intensity of the reflected light 118. For example, the sensor 116 may be implemented such that its focal point is centered at a desired location, such as two-thirds of the longest distance to be sensed. In some embodiments, the sensor 116 has a field of view that covers the entire work surface 112.
The sensor 116 may capture the reflected light 118 as a sequence of images. In some embodiments, sensor 116 is implemented as an infrared camera that can be used to capture infrared light. Alternatively, sensor 116 may be any device that may be used to capture reflected light (visible or invisible), such as any combination of cameras, scanning laser diodes, and/or ultrasonic transducers.
After the sensor 116 captures the reflected light 118 as a sequence of images, the sequence of images may be analyzed to track the movement of the pointer 114. In some implementations, two cameras can capture the reflected light 118 to determine the vertical position, lateral position, and/or proximity position of the pointer. As shown in FIG. 1, the vertical position may be defined along the Z-axis of the coordinate system 120 (i.e., the distance toward or away from the work surface 112, perpendicular to the plane of the light field 110); the lateral position may be defined along the Y-axis of the coordinate system 120 (i.e., the distance within the plane of the light field 110 parallel to the end side 122 of the keyboard 124); and the proximity position may be defined along the X-axis of the coordinate system 120 (i.e., the distance within the plane of the light field 110 toward or away from the light field generator 108).
For example, if the user 106 moves their finger along the X-axis within the light field 110 toward the sensor 116, the sensor 116 may capture the light reflected from the finger as a sequence of images. The reflected light captured in each image in the sequence of images can then be analyzed to track the movement of the finger, i.e., a decreasing proximity position while the vertical and lateral positions remain unchanged. This tracked movement (i.e., a proximity movement) may be translated into a move cursor command. Accordingly, a cursor 126 displayed on the display device 104 may move to the left on the display device, along a corresponding track 128, toward the folder 130. Similarly, if the user 106 moves their finger in the Y direction toward the display device 104, the lateral movement may translate into a move cursor command that moves the cursor 126 upward on the display device 104.
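By way of illustration only (the patent contains no code), the following Python sketch shows one way such a position-to-cursor mapping could be implemented, assuming the light field's aspect ratio matches the display's as described above. The field dimensions, screen resolution, and function name are assumptions introduced here, not elements of the patent.

```python
# Illustrative sketch only: map a tracked pointer position in the light field
# plane to a cursor position on the display. All dimensions are placeholders.

FIELD_X_MM = 320.0   # extent along the X-axis (toward/away from the light field generator)
FIELD_Y_MM = 180.0   # extent along the Y-axis (parallel to the end side of the keyboard)
SCREEN_W, SCREEN_H = 1920, 1080   # 16:9, matching the assumed field aspect ratio


def field_to_cursor(proximity_mm: float, lateral_mm: float) -> tuple[int, int]:
    """Map a (proximity, lateral) position within the light field to screen pixels.

    A decreasing proximity position (pointer moving toward the sensors) moves
    the cursor left; an increasing lateral position (pointer moving toward the
    display) moves the cursor up, mirroring the examples in the description.
    """
    cursor_x = int(proximity_mm / FIELD_X_MM * SCREEN_W)
    cursor_y = int((1.0 - lateral_mm / FIELD_Y_MM) * SCREEN_H)
    # Clamp to the visible screen area.
    return (min(max(cursor_x, 0), SCREEN_W - 1),
            min(max(cursor_y, 0), SCREEN_H - 1))
```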
The virtual touch interface environment 100 can be used to issue various commands (single-touch or multi-touch) to the computing device 102. Some examples of commands that may be issued include moving the pointer 114 to issue a move cursor command, moving the pointer up and down in the light field 110 to issue a click/press event, moving the pointer in a clockwise circle to issue a browse down command (scroll down, "forward" navigation command, etc.), moving the pointer 114 in a counterclockwise circle to issue a browse up command (scroll up, "back" navigation command, etc.), moving two fingers together or apart to issue a zoom in/out command, rotating two fingers to issue a rotate object command, pinching two fingers together to issue a select (grab) and drag command, drawing a character with the pointer 114 to issue an object input (e.g., typing) command, and so on. These examples are merely illustrative commands that may be issued through the virtual touch interface environment 100 of FIG. 1. In other implementations, the virtual touch interface environment 100 can be used to issue any single-touch or multi-touch event to the computing device 102.
Although FIG. 1 shows the sensor 116 connected to the keyboard 124, it should be understood that the sensor may be implemented and/or configured in any manner. For example, the light field generator 108 and/or the sensor 116 may be implemented as an integrated component of the keyboard 124 or they may be mounted to the keyboard as a peripheral accessory. Alternatively, sensor 116 may be implemented as a stand-alone device located in a desired vicinity of light field 110 and in communication (wirelessly or by wire) with computing device 102. For example, the sensor 116 may be located anywhere as long as the path (e.g., line of sight) between the sensor 116 and the light field 110 is not obstructed by other objects.
FIG. 2 illustrates an example of a computing system 200 in which a virtual touch interface can be implemented to issue commands to the computing device 102. Computing system 200 is but one example of a computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the virtual touch interface.
Computing system 200 includes computing devices 202 capable of generating a light field, capturing a reflection from a pointer as the pointer moves within the light field, analyzing the captured reflection, and issuing commands based on the analysis. Computing devices 202 may include, but are not limited to, personal computers 202(1), mobile phones 202(2) (including smart phones), and Personal Digital Assistants (PDAs) 202(M). Other computing devices are also contemplated, such as televisions, set-top boxes, game consoles, and other electronic devices that issue commands in response to sensing movement within the light field 110. Each of the computing devices 202 may include one or more processors 204 and memory 206.
As described above, the computing system 200 issues commands based on analyzing reflections captured from moving pointers. For example, the computing system 200 may be used to control a cursor displayed on the display device 104 in response to a pointer moving within the light field 110.
In some implementations, the computing system 200 can include a light field generator 108 (e.g., an infrared light source, an infrared laser diode, and/or a photodiode), which can emit light under the control of a field generator interface 208 to generate the light field 110. As described above with reference to FIG. 1, the light field generator 108 may be an infrared Light Emitting Diode (LED) grid that emits electromagnetic radiation in a non-visible form, such as infrared light. Where the light field generator 108 is a grid of LEDs, the field generator interface 208 may control the LED grid to generate an infrared light field that is not visible to the user 106.
In some implementations, the computing system 200 can include one or more sensors 116 (e.g., scanning laser diodes, ultrasonic transducers, and/or cameras) that capture light reflected from the pointer through the sensor interface 210. In some implementations, the sensor 116 can capture the reflected light as the image sequence 212.
In some implementations, the computing system 200 can include an output interface 214 that controls a display connected to the computing system 200. For example, the output interface 214 may control a cursor displayed on the display device 104. The display device 104 may be a stand-alone unit or may be incorporated into the computing device 202, as is the case with laptop computers, mobile phones, tablet computers, and the like.
Memory 206 may include applications, modules, and/or data. In some implementations, the memory is one or more of system memory (i.e., Read Only Memory (ROM), Random Access Memory (RAM)), non-removable memory (i.e., hard disk drive), and/or removable memory (i.e., magnetic disk drive, optical disk drive). Various computer storage media may be included in memory 206 that store computer readable instructions, data structures, program modules, and other data for computing device 202. In some implementations, the memory 206 can include a virtual touch engine 216 that analyzes the sequence of images 212 that capture the light reflected from the moving pointer 114.
Virtual touch engine 216 may include an interface module 218, a tracking module 220, and a command module 222. Collectively, these modules may perform various operations to issue commands based on analyzing reflected light captured by the sensor 116. Generally, the interface module 218 generates the light field 110, the tracking module 220 captures and analyzes the sequence of images 212, and the command module 222 issues commands based on the analysis. Additional references are made to these modules in the following sections.
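As a rough illustration of this division of labor, the following sketch outlines how the three modules might be organized in code. All class and method names are hypothetical and are not part of the patent.

```python
# Hypothetical skeleton of the virtual touch engine's three modules; names and
# signatures are illustrative only.

class InterfaceModule:
    """Counterpart of the interface module 218: generates the light field."""

    def generate_light_field(self) -> None:
        ...  # direct the field generator interface to project the light field


class TrackingModule:
    """Counterpart of the tracking module 220: captures and analyzes images."""

    def capture_images(self) -> list:
        ...  # read the latest frames through the sensor interface

    def track_pointer(self, images: list) -> list:
        ...  # locate reflected-light portions and return pointer positions over time


class CommandModule:
    """Counterpart of the command module 222: issues commands to the device."""

    def issue(self, tracked_movement: list) -> None:
        ...  # translate tracked movement into a command (move cursor, zoom, etc.)


class VirtualTouchEngine:
    """Ties the three modules together, mirroring the virtual touch engine 216."""

    def __init__(self) -> None:
        self.interface = InterfaceModule()
        self.tracking = TrackingModule()
        self.command = CommandModule()

    def run_once(self) -> None:
        images = self.tracking.capture_images()
        movement = self.tracking.track_pointer(images)
        self.command.issue(movement)
```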
Illustrative operations
FIG. 3 is a flow diagram of an illustrative process 300 for issuing a command based on analyzing reflections captured from a pointer moving within a light field. Process 300 may be performed by the virtual touch engine 216 and is discussed with reference to FIGS. 1 and 2.
The process 300 is illustrated as a set of blocks in a logic flow diagram, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and so forth that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the process. In addition to process 300, other processes described in this disclosure should be construed accordingly.
At 302, the field generator interface 208 controls the light field generator 108 (e.g., an LED grid) to generate the light field 110. In some implementations, the field generator interface 208 directs the light field generator 108 to project the light field parallel to a work surface, such as a desktop. The light field 110 may be close to the keyboard so that a user may issue commands through the keyboard and/or the light field without having to change position.
At 304, the sensor interface 210 controls one or more sensors 116 (e.g., infrared cameras) to capture light reflected from a pointer within the light field 110 as a sequence of images 212. For example, if the user 106 moves the pointer 114 in the light field 110 to control a cursor, the sensor 116 may capture light reflected from the pointer 114 moving within the light field. In some implementations, the pointer 114 can be a finger of the user, such that the sensor interface 210 relies on the natural reflectivity of the user's skin to capture reflected light. In some cases, the user 106 may increase the reflectivity of their finger by attaching a reflective device to the finger. Alternatively, the pointer 114 may be any physical object that includes a reflective item, such as a bar code. It is understood that the sensor may capture visible or non-visible light.
At 306, the tracking module 220 analyzes the reflected light captured in the sequence of images 212 to track the movement of the pointer 114. At 306, the tracking module may track the movement of the pointer by determining the position of the reflected light captured in each image of the sequence of images 212. In some embodiments, at 306, each image in the sequence of images is analyzed as a two-dimensional image to determine the location of the reflected light. The position of the reflected light may be determined in terms of a vertical position (i.e., a position along the Z-axis), a lateral position (i.e., a position along the Y-axis), and/or a proximity position (i.e., a position along the X-axis).
At 308, the command module 222 issues a command based on the analysis performed at 306. For example, if the tracking module 220 tracks a moving pointer as a proximity movement based on analyzing the reflected light captured in the image sequence 212 at 306, the command module may issue a move cursor command to move a cursor displayed on the display device at 308.
FIG. 4a illustrates an exemplary environment 400 for tracking the movement of a pointer by analyzing a sequence of images capturing light reflected from the pointer. Exemplary environment 400 illustrates a virtual touch device integrated into a side 402 of a laptop 404 and operable to track movement of a pointer by analyzing a sequence of images capturing light reflected from the pointer. While FIG. 4a shows the virtual touch device integrated into the side 402 of the laptop 404, the virtual touch device may be integrated into any computing device, such as a desktop computer, mobile phone, and/or PDA, that is capable of generating a light field, capturing reflections from a pointer as the pointer moves in the light field, analyzing the captured reflections, and issuing commands based on the analysis. In some implementations, the virtual touch device is built into the computing device. Alternatively, the virtual touch device can be mounted to the computing device as a peripheral accessory. For example, the virtual touch device may be communicatively coupled to the computing device through a Universal Serial Bus (USB) port.
As the user 106 moves the pointer 114 within the light field 110, light in the light field may reflect from the pointer 114 toward the first sensor 406 and the second sensor 408, as indicated by arrows 118. Both the first sensor 406 and the second sensor 408 may capture the reflected light 118 as an image sequence 410.
FIG. 4b shows illustrative images from the image sequence 410 of FIG. 4a. Image one 412a and image two 412b represent two images in the image sequence 410 captured by the first sensor 406 and the second sensor 408, respectively. Light portions 414 and 416 represent areas where the reflected light 118 (i.e., light reflected from the pointer 114) is captured by the sensors 406 and 408, respectively. Darker areas 418, 420 in the images 412 represent areas where the sensors 406, 408 capture ambient light or reflections from less reflective and/or more distant objects.
Each image 412 includes a plurality of pixels 422 that can be used to determine the position of the pointer 114. For example, the vertical position (i.e., the position along the Z-axis) of the pointer 114 may be determined based on the vertical pixel distances 424, 426 of the light portions 414, 416. The vertical position may be calculated from the vertical pixel distance 424 of image one 412a or the vertical pixel distance 426 of image two 412b. Alternatively, the vertical position of the pointer may be calculated using both vertical pixel distances 424, 426 of image one 412a and image two 412b. It should be understood that other techniques may be used to determine the vertical position. For example, a virtual touch interface may include more than one parallel light field, located one above the other and separated by a predetermined distance. In such a case, the vertical position may be determined based on the number of light fields penetrated by the pointer 114: the more light fields the pointer penetrates, the greater the vertical position.
The lateral position (i.e., the position along the Y-axis) of the pointer 114 may be determined based on the lateral pixel distances 428, 430 of the light portions. The lateral position may be calculated from the lateral pixel distance 428 of image one 412a or the lateral pixel distance 430 of image two 412b. Alternatively, the lateral position may be calculated using both lateral pixel distances 428, 430 of image one 412a and image two 412b.
Since the two images are captured by two different cameras (e.g., sensors 406, 408), the proximity position (i.e., the position along the X-axis) can be triangulated based on the vertical pixel distances 424, 426 and the lateral pixel distances 428, 430 of the images 412.
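The patent does not specify the triangulation itself; the sketch below uses the standard two-camera stereo-disparity relation as one plausible realization, with an assumed focal length and sensor baseline rather than values taken from the patent.

```python
# Hedged sketch of the proximity (X-axis) triangulation from the two images.

FOCAL_LENGTH_PX = 700.0   # camera focal length expressed in pixels (assumed)
BASELINE_MM = 60.0        # spacing between the first and second sensors (assumed)


def triangulate_proximity(lateral_px_image1: float, lateral_px_image2: float) -> float:
    """Estimate the pointer's distance from the sensors, in millimeters.

    The arguments are the lateral pixel distances of the same light portion in
    the two images (e.g., 428 in image one and 430 in image two). Their
    difference (the disparity) shrinks as the pointer moves farther away.
    """
    disparity = abs(lateral_px_image1 - lateral_px_image2)
    if disparity < 1e-6:
        return float("inf")  # beyond the usable sensing range
    return FOCAL_LENGTH_PX * BASELINE_MM / disparity


# Example: lateral pixel distances of 310 and 478 give a disparity of 168
# pixels and an estimated proximity of 700 * 60 / 168 = 250 mm.
```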
Although FIG. 4b shows each image 412 with a single light portion (i.e., the sensors 406, 408 capture the light portions 414, 416 as light reflected from a single pointer), the images 412 may contain multiple light portions (i.e., the sensors 406, 408 capture light reflected from multiple pointers within the light field). For example, if the user 106 issues a multi-touch event (e.g., zoom, rotate, etc.), the images 412 may contain multiple light portions representing reflected light captured from multiple pointers within the light field 110.
FIG. 5 is a flow diagram of an illustrative process 500 for analyzing image input for one or more computing events. The process 500 elaborates on the pointer-movement tracking described above (i.e., block 306 of FIG. 3). The order of the operations of process 500 is not intended to be construed as a limitation.
At 502, the tracking module 220 receives input. The input may be a sequence of images capturing light reflected from the moving pointer 114, as shown in FIG. 1. As shown in FIG. 4b, the reflected light may take the form of one or more light portions in the input. The light portions captured in the input may represent a command issued to the computing device.
At 504, the tracking module 220 processes the input. In some embodiments, a Gaussian filter is applied to smooth the input image. At 504, the tracking module 220 may additionally convert the input to a binary format.
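A minimal sketch of this processing step is shown below, assuming OpenCV as the image-processing library (the patent names no library) and a placeholder intensity threshold.

```python
# Illustrative preprocessing sketch: smooth the raw frame with a Gaussian
# filter, then binarize it so reflected-light portions become white (255)
# and the background becomes black (0).

import cv2
import numpy as np


def preprocess(frame: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Return a binary image in which candidate light portions are white."""
    gray = frame if frame.ndim == 2 else cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)           # suppress sensor noise
    _, binary = cv2.threshold(smoothed, threshold, 255, cv2.THRESH_BINARY)
    return binary
```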
At 506, the tracking module 220 analyzes the input. Operations 508 through 512 provide various sub-operations for the tracking module 220 to analyze the input. For example, analyzing the input may include locating one or more light portions in the input at 508, determining the size of the light portions at 510, and/or determining the location of the light portions at 512.
At 508, the tracking module 220 analyzes the input for light portions. Since the input may contain a single light portion (e.g., a user issuing a single-touch event) or multiple light portions (e.g., a user issuing a multi-touch event), the tracking module 220 may analyze the input to find one or more light portions at 508.
At 508, the tracking module 220 may look for light portions using edge-based detection techniques. For example, an edge-based detection technique may analyze the color intensity gradient of the input to locate the edges of light portions, because the difference in color intensity between light portions and darker portions is significant. Where one or more of the light portions are partially occluded, the edge-based detection technique may use extrapolation to find the light portions at 508.
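Continuing the preprocessing sketch above, the following illustrates one way the light portions could be located and summarized. Contour detection over the binarized image stands in for the edge-based technique, which the patent describes only in general terms, so this is an illustrative choice rather than the patented method.

```python
# Sketch of locating candidate light portions in a preprocessed binary frame.

import cv2
import numpy as np


def find_light_portions(binary: np.ndarray) -> list[dict]:
    """Return each candidate light portion with its pixel area and centroid."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    portions = []
    for contour in contours:
        moments = cv2.moments(contour)
        if moments["m00"] == 0:
            continue  # degenerate contour with no area
        portions.append({
            "area": cv2.contourArea(contour),
            "lateral_px": moments["m10"] / moments["m00"],   # centroid column
            "vertical_px": moments["m01"] / moments["m00"],  # centroid row
        })
    return portions
```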
At 510, the tracking module 220 may determine a size of the light portion. The size of the light portion helps determine whether the user 106 intends to issue a command. For example, in some cases, the sensor may capture light reflected by an object other than the pointer 114. In such a case, if the size of the light portion is outside of a predetermined range of pointer sizes, the tracking module 220 may exclude one or more of the light portions at 506. The size of the light portion may additionally be used to determine the type of command issued to the computing device. For example, if the size of the light portion is large (e.g., twice the normal size of the light portion), this may indicate that the user 106 is holding two fingers together, such as pinching a thumb and forefinger together, to issue a grab command.
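A hedged sketch of this size check, with placeholder area thresholds, might look as follows: portions outside a plausible pointer range are discarded, and portions roughly twice the nominal fingertip area are flagged as a possible pinch.

```python
# Sketch of the size check at 510 under assumed, illustrative thresholds.

MIN_AREA_PX = 40                 # smaller blobs are treated as stray reflections
NOMINAL_FINGER_AREA_PX = 150     # assumed area of a single fingertip reflection
MAX_AREA_PX = 4 * NOMINAL_FINGER_AREA_PX


def filter_by_size(portions: list[dict]) -> list[dict]:
    kept = []
    for portion in portions:
        if not (MIN_AREA_PX <= portion["area"] <= MAX_AREA_PX):
            continue  # outside the predetermined range of pointer sizes
        # An unusually large portion may indicate two fingers held together.
        portion["pinched"] = portion["area"] >= 2 * NOMINAL_FINGER_AREA_PX
        kept.append(portion)
    return kept
```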
At 512, the tracking module 220 determines the location of each light portion in the input. Determining the location of the light portions may include calculating vertical pixel distances 424, 426 (i.e., distances perpendicular to the plane of the light field 110, toward or away from the work surface), calculating lateral pixel distances 428, 430 (i.e., distances within the plane of the light field 110 parallel to the end side 122 of the keyboard 124), and/or triangulating the proximity distance (i.e., the distance within the plane of the light field 110 toward or away from the light field generator) based on the vertical pixel distance and the lateral pixel distance of each light portion.
At 514, the tracking module 220 may track the movement of the pointer 114 based on the input analysis performed at 506. Once the location (e.g., vertical pixel distance, lateral pixel distance, and proximity distance) at each time instance is determined based on the input, the tracking module 220 collects the location of each pointer chronologically to track the movement of the pointer as a function of time at 514.
At 516, the tracking module 220 translates the tracked movement into a command to be issued to the computing device. For example, if the proximity distance of the pointer 114 decreases across the time-sequential inputs while the vertical pixel distance and the lateral pixel distance remain unchanged, the tracking module 220 may translate the tracked movement, at 516, into a command to move the cursor to the left on the display device. Where the input contains multiple light portions, the tracking module 220 may convert the tracked movement, at 516, into a multi-touch event (e.g., a zoom, a rotation, etc.). For example, if two light portions are found at 508 and the two light portions move closer together, the tracking module 220 may translate the tracked movement into a zoom-out command.
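The sketch below illustrates this translation step for two of the gestures described above; the trajectory format (per-pointer lists of position samples over time) and the thresholds are assumptions rather than elements of the patent.

```python
# Illustrative translation of tracked movement into commands. Each trajectory
# is a chronologically ordered list of (lateral_px, proximity_mm) samples.

def translate(trajectories: list[list[tuple[float, float]]]) -> str | None:
    if len(trajectories) == 1:
        # Single pointer: decreasing proximity with an essentially unchanged
        # lateral position is treated as "move cursor left".
        (lat0, prox0), (lat1, prox1) = trajectories[0][0], trajectories[0][-1]
        if prox1 < prox0 and abs(lat1 - lat0) < 5:
            return "move_cursor_left"
    elif len(trajectories) == 2:
        # Two pointers: compare their separation at the start and end of the
        # tracked window to detect pinch gestures.
        def gap(i: int) -> float:
            (ax, ay), (bx, by) = trajectories[0][i], trajectories[1][i]
            return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

        if gap(-1) < 0.8 * gap(0):
            return "zoom_out"   # the two light portions moved closer together
        if gap(-1) > 1.2 * gap(0):
            return "zoom_in"    # the two light portions moved apart
    return None
```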
At 518, the command module 222 issues a command to the computing device. In some implementations, the command module 222 can also provide feedback to the user at 518 to enhance the user's interaction experience. For example, the feedback may include one or more of changing the appearance of the object, displaying a temporary window describing the issued command, and/or outputting a voice command describing the issued command.
FIG. 6 illustrates some exemplary multi-touch commands 600 that may be issued to a computing device. According to embodiments, the virtual touch interface may be used to issue single touch commands (e.g., move a cursor, select an object, browse up/down, navigate forward/backward, etc.) or multi-touch commands (e.g., zoom, grab, drag, etc.). For example, the user may issue the grab command 602 by touching the thumb 604 and index finger 606 together. In response to the grab command 602, the selected item, such as folder 608, may respond as if grabbed by the user 106. The user may then issue a drag command 610 by moving the thumb and forefinger within the light field to drag the folder 608 to a desired location 612. When the folder is in the desired position 612, the user may separate the thumb and forefinger to simulate a drop command 614. In response to the drop command 614, the folder can be placed in the desired location 612.
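As an illustration only, the grab/drag/drop sequence above could be driven by a small state machine keyed to the pinch state reported by the tracking analysis; the state names and interfaces below are hypothetical and do not appear in the patent.

```python
# Hypothetical state machine for the grab/drag/drop sequence of FIG. 6.
# `pinched` would come from the tracking analysis (e.g., two light portions
# merging into one roughly double-sized portion); positions are screen pixels.

class GrabDragDrop:
    def __init__(self) -> None:
        self.holding = False          # True between a grab and the matching drop
        self.grabbed_position = None  # where the object was picked up

    def update(self, pinched: bool, cursor_xy: tuple[int, int]) -> str | None:
        if pinched and not self.holding:
            self.holding = True
            self.grabbed_position = cursor_xy
            return "grab"             # e.g., outline the folder with dashed lines
        if pinched and self.holding:
            return "drag"             # move the grabbed object with the cursor
        if not pinched and self.holding:
            self.holding = False
            return "drop"             # place the object at cursor_xy
        return None
```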
Multi-touch command 600 also illustrates some examples that may provide feedback to the user in order to enhance the user's interaction experience. For example, feedback provided in response to the grab command 602 may include one or more of framing the "grabbed" folder 608 with dashed lines 616, displaying a temporary window 618 describing the command, and/or outputting voice instructions 620 describing the command. Additionally, feedback provided in response to drag command 610 may include one or more of displaying a temporary window 622 describing the command and/or outputting a voice command 624 describing the command.
Conclusion
Although the technology has been described in language specific to structural features and/or methodological acts, it is to be understood that the appended claims are not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing such techniques.
Claims (10)
1. A computer-implemented method, comprising:
emitting light from a source to generate a light field parallel to a work surface;
capturing light as a sequence of images at one or more sensors located outside the light field, the light being reflected from a pointer located within the light field;
analyzing reflected light captured in the sequence of images to track movement of the pointer; and
the tracked movement is analyzed to issue a command to the computing device.
2. The computer-implemented method of claim 1, wherein generating the light field comprises generating an infrared light field via one or more infrared Light Emitting Diode (LED) grids, the infrared light field having an aspect ratio substantially equal to an aspect ratio of a display device of the computing device.
3. The computer-implemented method of claim 1, wherein generating the light field comprises generating an infrared light field via one or more infrared laser diodes.
4. The computer-implemented method of claim 1, wherein the command is one of the following commands for manipulating a user interface via the computing device: zoom commands, navigation commands, and rotate commands.
5. The computer-implemented method of claim 1, wherein analyzing the reflected light comprises analyzing a color intensity gradient of each image of the sequence of images to locate edges of one or more light portions within each image of the sequence of images.
6. The computer-implemented method of claim 1, wherein the one or more sensors comprise one or more infrared cameras.
7. The computer-implemented method of claim 1, wherein analyzing reflected light in the sequence of images comprises:
for each image of the sequence of images, determining a vertical pixel position of the pointer;
for each image of the sequence of images, determining a lateral pixel position of the pointer; and
triangulating a proximity location based on the vertical pixel location and the lateral pixel location for each image of the sequence of images.
8. A virtual touch interface system, comprising:
one or more processors; and
memory storing modules executable by the one or more processors, the modules comprising:
an interface module for generating an infrared light field;
a tracking module for analyzing a moving pointer within a light field captured in a sequence of images by a sensor located outside the light field, the tracking module analyzing the moving pointer to: (1) locate one or more light portions in each image of the sequence of images; (2) track movement of the moving pointer based on the located one or more light portions; and (3) convert the tracked movement into a command; and
a command module to issue the command to the computing device.
9. The virtual touch interface system of claim 8, wherein the tracking module further determines a size of each of the one or more light portions.
10. The virtual touch interface system of claim 8, wherein the command module further provides feedback based on the issued command, the feedback being one or more of: changing the appearance of the object, displaying a temporary window describing the issued command, and outputting a voice command describing the issued command.
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/795,024 | 2010-06-07 | | |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| HK1179719A | 2013-10-04 |