WO2010142732A1 - A touch method of inputting instructions for controlling a computer program, and a system for implementing the method - Google Patents
- Publication number
- WO2010142732A1 (PCT/EP2010/058097)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- contact
- zone
- touch
- hand
- zones
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04808—Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
Abstract
A method of inputting instructions to control a program executed by a computer provided with a touch-sensitive device arranged to detect zones of contact with at least a portion of a user's hand, the method comprising the steps of: • determining surfaces of the contact zones; and • selecting at least one instruction corresponding to the determined surfaces from a table that associates predetermined instructions with at least one surface or combination of surfaces.
Description
A TOUCH METHOD OF INPUTTING INSTRUCTIONS FOR CONTROLLING A COMPUTER PROGRAM, AND A SYSTEM FOR IMPLEMENTING THE METHOD
The present invention relates to a method of inputting instructions for controlling a program executed by a computer, and to a system for implementing the method.
For many years, it has been sought to make inputting data and instructions to a computer more natural and to limit, at least in part, any recourse to the keyboard and mouse that are usually used.
For this purpose, there exist computers that are equipped with touch systems. Such touch systems comprise touch-sensitive devices associated with a software driver enabling the operating system to interpret the signals that come from the touch-sensitive device. A touch-sensitive device may be a display screen that is directly provided with contact detectors that may be of the resistive, capacitive, acoustic, or optical type and that are arranged to make acquisitions at regular intervals of the order of about fifteen milliseconds. Such a touch-sensitive device may equally well be a transparent slab or a frame fitted with such contact detectors and fastened to a conventional computer screen. Under certain circumstances, the touch-sensitive device may also be offset from the display screen.
The touch-sensitive device enables one or more contact zones to be detected and it periodically sends signals to the computer that contain data graphically describing each contact zone in a frame of reference associated with the device. The computer operating system makes use of these signals to determine the zones of the device on which the user is making contact and to transmit corresponding information to the computer application that is being executed.
A drawback with present touch systems is that they are limited to pointing to zones on the screen.
Nevertheless, it has recently become possible to move the image by dragging a finger pressed against the screen, and to magnify the image by an amount determined by pressing two fingers against the screen and moving them towards or away from each other.
Present touch systems are nevertheless not suitable for inputting a plurality of instructions, possibly more complex instructions, such as displaying a menu or a window relating to the portion of the screen being pointed to, opening or closing a context menu, selecting a portion of the screen and performing a copy/paste operation, writing on the screen, selecting a plurality of portions of the displayed image, and so on. These limitations result in particular from the fact that touch systems make use only of contacts with the ends of the fingers, without distinguishing between the fingers used for making said contacts.
Proposals have been made to display menus and/or a keyboard permanently on the screen so as to enable the operator to enter certain instructions by touch.
Nevertheless, in certain applications, and particularly in graphics applications and more particularly still in mapping applications, the graphics content that is displayed on the screen ought not to be masked, and it is inappropriate to reduce the display surface that is dedicated to graphics content.
An object of the invention is to provide means for avoiding at least some of the limits of present touch systems.
To this end, the invention provides a method of inputting instructions to control a program executed by a computer provided with a touch-sensitive device arranged to detect zones of contact with at least a portion of a user's hand, the method comprising the steps of:
• determining surfaces of the contact zones; and
• selecting at least one instruction corresponding to the determined surfaces from a table that associates predetermined instructions with at least one surface or combination of surfaces.
Thus, detecting surfaces enables the user to use one hand to provide as many inputs as there are surfaces between which the method is capable of distinguishing. The use of two hands enables the number of inputs to be increased considerably, thus achieving a quantity of different inputs that is large enough to be compatible with using computer applications that are complex or rich in functions.
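By way of illustration only (the patent does not prescribe any particular data structures), the association between detected surfaces and instructions can be pictured as a simple lookup table. The minimal Python sketch below uses hypothetical class names and instruction labels that are not taken from the patent.

```python
from enum import Enum, auto

class Surface(Enum):
    """Hypothetical labels for the four basic contact surfaces."""
    FINGER_TIP = auto()   # distal end of a finger
    CLOSED_HAND = auto()  # closed hand (fist)
    OPEN_HAND = auto()    # hand laid open and flat
    HAND_EDGE = auto()    # edge of the hand

# Each entry associates a combination of surfaces (order-independent here)
# with an instruction label understood by the controlled application.
INSTRUCTION_TABLE = {
    frozenset({Surface.FINGER_TIP}): "point",
    frozenset({Surface.OPEN_HAND}): "open_context_menu",
    frozenset({Surface.FINGER_TIP, Surface.OPEN_HAND}): "graphic_information_mode",
    frozenset({Surface.FINGER_TIP, Surface.CLOSED_HAND}): "scroll_wheel_effect",
}

def select_instruction(detected: set[Surface]) -> str | None:
    """Return the instruction associated with the detected surfaces, if any."""
    return INSTRUCTION_TABLE.get(frozenset(detected))

# Example: a finger plus a flat hand triggers the "graphic information" mode.
assert select_instruction({Surface.OPEN_HAND, Surface.FINGER_TIP}) == "graphic_information_mode"
```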
Preferably, the method includes the step of determining a modification to at least one of the contact zones and of selecting the instruction also as a function of said modification. This makes it possible to further increase the number of possible inputs.
As examples of such modifications:
• the modification is a movement of the contact zone, and preferably the determination of the modification includes a step of measuring the speed or the acceleration of the movement, and/or the step of determining an outline of the contact zone and a major axis of the contact zone, and the step of detecting a rotation of the major axis; and
• the modification is a variation in the surface of the contact zone (the variation may be a variation of the shape or of the size of the surface).
According to a particular characteristic, when two contact zones are detected with one of them moving, the method includes a step of identifying temporary masking of the moving zone by the other zone, and preferably, the identification of the temporary masking includes a stage of extrapolating a movement trajectory of the moving zone during the temporary masking. The masked portion of the trajectory of the moving contact zone can thus be reconstructed in order to determine the nature of the corresponding instruction.
This is particularly advantageous when contact zones of relatively large surface are present, since the probability of masking occurring is then correspondingly higher.
Advantageously, the table contains gestures made up from four basic surfaces: a first surface corresponding substantially to the distal end of a finger, a second surface corresponding substantially to a closed hand, a third surface corresponding to a hand open flat, and a fourth surface corresponding to the edge of a hand. Obtaining these four surfaces does not require significant gymnastics to be performed by the user's hand, and these four surfaces are sufficiently different from one another to avoid error in classifying them prior to identifying determined gestures in the instruction selection table. The number of instructions that can then be input is thus relatively large.
Preferably, the method includes a calibration step for calibrating the surfaces of the contact zones as a function of the size of the user's hands.
The user's morphology is thus taken into account so that it does not falsify the classification of surfaces.
According to another particular characteristic, the method is implemented by means of a touch-sensitive device having two contact detectors disposed in the vicinity of corners of the screen, and the method includes the step of determining the surface of the contact zone as a function of the distances of the contact zone from the detectors. Distance from the detectors has an influence on the shape and the surface of the contact zone as detected. Taking this distance into account thus serves to avoid bias in determining the surface of the contact zone.
According to an additional particular characteristic, a buffer zone is defined around a contact zone, and any contact that is detected in the buffer zone is ignored or incorporated into the contact zone, and preferably, when the surface as determined for the contact zone corresponds to an open hand, the buffer zone is dimensioned to be close to the contact zone.
This makes it possible to limit taking account of interfering contacts that would disturb the operation of the method. When the user touches the screen, a finger or hand approaches the screen progressively, possibly wavering at the moment of contact, before being pressed against the screen, so a plurality of very small contacts might be detected before the entire contact zone is actually formed. These are said to be transitional interfering contacts (i.e. at the transition between the absence and the presence of a contact zone of predetermined surface, and vice versa). This is particularly important when the touch system is capable of handling only a limited number of simultaneous contacts: it is necessary to ensure that the contacts that are taken into consideration are useful contacts.
According to yet another particular characteristic, contacts having a duration shorter than a predetermined threshold, and/or contacts that appear after a time lapse that is shorter than a threshold measured from an earlier contact are not taken into account.
This also makes it possible to limit the extent to which involuntary interfering contacts are taken into account, e.g. contacts of the kind that arise by accidentally brushing against the touch screen, which contacts would otherwise disturb the operation of the method. In order to implement this characteristic, it is possible to make provision for inhibiting the detection of contact zones during a predetermined duration or a predetermined time lapse after a first contact zone has been detected.
According to another particular characteristic, contacts of a duration longer than a predetermined threshold and appearing in a buffer zone around a surface corresponding substantially to a closed hand are incorporated into the surface of the closed hand.
This makes it possible to take account continuously of the opening of a closed hand that is applied to the touch-sensitive device, even if a gap appears between the surface corresponding to the palm and the surface(s) corresponding to contact with the fingers. This also makes it possible to take account continuously of the closing of an open hand that is placed flat against the touch-sensitive device, even if a gap appears between the surface corresponding to the palm and the surface(s) corresponding to contacts with the fingers.
Preferably, the instructions are grouped together, with each instruction being associated with a predetermined positioning parameter comprising a surface, a relative position, and/or a variation of the contact zones, and with one of the positioning parameters being common to all of the instructions within a given group. Such grouping associated with a predetermined positioning parameter makes it easier for the user to learn the gesture grammar.
Preferably, the final result of an instruction sent to the operating system after the operator has performed a gesture is similar to the result that would have been obtained physically by the operator acting on real physical elements, as opposed to computer representations thereof.
This similarity between virtual results and real results increases the capacity of an operator to absorb the gestures and makes it easier to perform them, particularly in contexts where the operator is subjected to stress reducing the operator's capacity for concentration.
Advantageously, the method of the invention includes the step of displaying an indicator of the determined surface on the screen, and preferably, the indicator is a color halo surrounding the contact zone, with the color depending on the surface that has been determined.
The risk of error is thus limited, the surface indicator enabling the user to ensure that the surface taken into account by the system is indeed the intended surface.
In a preferred implementation, the method includes the steps of:
• determining the surfaces of the contact zones; and
• selecting at least one instruction corresponding to the determined surfaces from a table associating predetermined instructions with at least one static or dynamic surface, or a combination of static or dynamic surfaces, or indeed a sequence of one or more static or dynamic surfaces chained in succession.
A dynamic surface is a surface having graphics characteristics of shape and area that change over time.
A gesture corresponds to a single static or dynamic contact, or to a combination of static or dynamic surfaces, or to a sequence of static or dynamic surfaces. The operator interacts with the operating system of the computer via one or more gestures performed via the touch-sensitive device. Each gesture corresponds to an instruction that is forwarded to the operating system of the computer.
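As a purely illustrative sketch of how a gesture might be represented as a chained sequence of static or dynamic surfaces and matched against such a table (the event descriptor and the example entries below are assumptions, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SurfaceEvent:
    """One element of a gesture: a surface class plus whether it is static or dynamic."""
    surface: str      # e.g. "finger_tip", "open_hand", "closed_hand", "hand_edge"
    dynamic: bool     # True if the zone's shape, area, or position changes over time

# Hypothetical gesture table: a chained sequence of surface events -> instruction label.
GESTURE_TABLE = {
    (SurfaceEvent("open_hand", dynamic=True),): "open_context_menu",
    (SurfaceEvent("finger_tip", dynamic=False),
     SurfaceEvent("open_hand", dynamic=True)): "graphic_information_mode",
    (SurfaceEvent("hand_edge", dynamic=True),): "erase_swept_objects",
}

def match_gesture(sequence: list[SurfaceEvent]) -> str | None:
    """Match the observed sequence of surface events against the gesture table."""
    return GESTURE_TABLE.get(tuple(sequence))

# Example: a stationary finger followed by a moving flat hand.
events = [SurfaceEvent("finger_tip", False), SurfaceEvent("open_hand", True)]
print(match_gesture(events))  # -> "graphic_information_mode"
```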
The invention also provides a system for implementing the method, the system comprising a touch-sensitive device provided with means for transmitting contact data in real time to a computer, the contact data comprising coordinates of contact zones detected on the screen, and a software demultiplexing module executable by the computer to:
• transform the contact data into identified data packets, each comprising an identifier, a position, an instant, a height, and a width;
• calculate a surface for the contact zone and determine a corresponding surface class, which surface and class are added to each packet; and
• determine an instruction, in particular on the basis of the surface class.
Preferably, the software demultiplexing module is arranged to:
• determine an instruction, in particular on the basis of the dynamic variation in the surface class;
• determine an instruction, in particular on the basis of a combination of static or dynamic surface classes; and/or
• determine an instruction, in particular on the basis of a sequence of static or dynamic surface classes chained in succession.
Other characteristics and advantages of the invention appear on reading the following description of particular, non-limiting embodiments of the invention.
Reference is made to the accompanying drawings, in which:
• Figure 1 is a diagrammatic view of a system for implementing the method in accordance with the invention;
• Figure 2 is a flow chart showing how the method in accordance with the invention proceeds;
• Figures 3a to 3c are successive portions of a table showing the various elements that make up a grammar usable with the method of the invention;
• Figure 4 is a view of a flat screen having hands placed thereon in order to show the influence of the position of the hand and the size of the hand on the surface of the detected contact zone;
• Figure 5 is a view analogous to Figure 4 of a screen having superposed thereon a grid for determining weighting coefficients of the calculated surfaces;
• Figures 6 and 7 are views analogous to Figure 4 showing buffer zones;
• Figures 8a and 8b show a way of determining the orientation of a hand that is flat on the screen;
• Figure 9 is a view analogous to Figure 4 showing how a movement trajectory of one contact zone is masked by another contact zone; and
• Figure 10 is a view analogous to Figure 4 showing the detection of the edge of a hand.
With reference to the figures, the method in accordance with the invention is implemented by means of a computer system comprising a computer 1 connected to a touch-sensitive device given overall reference 2 and having a display surface 3 surrounded by a detector frame 4 with two optical detectors 5 mounted in two top corners thereof, each optical detector 5 having a field of view that covers the side opposite thereto and the bottom of the frame 4. Infrared light-emitting diodes (LEDs) 6 are mounted on the sides 7 and on the bottom 8 so that the masking of one or more of these LEDs can be detected by the detectors to reveal the presence of an element in the vicinity of the display surface 3. The detector frame is thus arranged to detect the hands of a user or an operator that, on coming close to the display surface 3 or touching it, will mask the LEDs and create contact points or zones 21, 22, 23, 24 that are interpreted by the system in the form of quadrilaterals 11, 12, 13, 14 having the contact zones inscribed therein (see Figures 4, 6, 7, and 8). The detector frame 4 is arranged to deliver contact data to the computer 1 continuously and in real time, said data specifying for each contact zone the coordinates of its center and of its four vertices, and also the distances between opposite vertices. The detector frame 4 may, in known manner, identify two contact zones that are present simultaneously, provided that the contact zones appear in succession. This type of detector frame 4 is itself known.
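Since the frame describes each contact zone by a quadrilateral (center, four vertices, and diagonals), the raw surface of a zone could, for example, be estimated with the shoelace formula. The sketch below assumes the vertices are supplied in order around the quadrilateral; it is an illustration, not the patent's prescribed computation.

```python
def quadrilateral_area(vertices: list[tuple[float, float]]) -> float:
    """Shoelace formula: area of a simple polygon whose vertices are listed in order."""
    n = len(vertices)
    acc = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

# Example: quadrilateral enclosing a flat-hand contact zone (coordinates in screen units).
quad = [(100.0, 200.0), (180.0, 205.0), (175.0, 320.0), (95.0, 310.0)]
print(quadrilateral_area(quad))
```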
The computer 1 is provided with an operating system that handles inputs/outputs and the hardware and software resources of the computer, which operating system is associated with a software demultiplexing module that operates in a manner described in detail below.
The computer 1 also runs software for preparing an airborne mission using a map displayed on the touch screen 2. The control instructions of the software are input by means of the touch-sensitive device 2. The method of inputting instructions that is implemented by the above system comprises the main steps of:
• determining surfaces of the contact zones (step 130); and
• selecting at least one instruction corresponding to the determined surfaces from a table that associates predetermined instructions with at least one static or dynamic surface, or a combination of static or dynamic surfaces, or a sequence of static or dynamic surfaces (step 170).
The demultiplexing software module is arranged to implement this method and more particularly:
• to receive contact data in real time (step 110);
• to transform the contact data into packets of identified data comprising an identifier, a position, an instant, a height, and a width (step 120);
• to calculate a surface for the contact zone and determine a corresponding surface class (step 130), which classes are added to each packet (step 140);
• to determine an instruction, in particular on the basis of the static or dynamic surface class, or of the combination of static or dynamic surfaces, or of the chained sequence of successive static or dynamic surfaces, and on the basis of the above-mentioned table (steps 160 and 170); and
• to transmit the instruction to the application (step 180).
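A minimal sketch of the identified data packets handled by the demultiplexing module and of their enrichment at steps 130 and 140 follows; the field names and the crude height-by-width surface estimate are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ContactPacket:
    """Identified data packet built from the raw contact data (step 120)."""
    identifier: int                       # identity of the tracked contact zone
    x: float                              # position of the zone's center
    y: float
    instant: float                        # acquisition time, in seconds
    height: float                         # extent of the bounding quadrilateral
    width: float
    surface: Optional[float] = None       # calculated surface, added at step 130
    surface_class: Optional[str] = None   # e.g. "finger_tip", added at step 140

def enrich(packet: ContactPacket, classify: Callable[[float], str]) -> ContactPacket:
    """Steps 130-140: compute the surface and attach its class to the packet."""
    packet.surface = packet.height * packet.width  # crude estimate, for the sketch only
    packet.surface_class = classify(packet.surface)
    return packet
```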
The table comprises four surfaces: a first surface corresponds substantially to the distal end of a finger; a second surface corresponds substantially to a closed hand; a third surface corresponds to a flat open hand; and a fourth surface corresponds to the edge of a hand. The table also comprises certain contact zone modifications, it being possible for the selection of an instruction also to be a function of such a modification. The modification may be: a movement of the contact zone, such as a circular or straight-line movement, possibly taking account of the speed and/or the acceleration of the movement; or else a change of shape and/or of area of the contact zone (e.g. resulting from closing or opening a hand that is pressing against the touch-sensitive device).
The table thus associates gesture grammar elements with instructions. With reference to Figure 3, there follows a list of the grammar elements that appear in the table together with the corresponding instructions (note: for certain instructions, a distinction is drawn between a display that is in two dimensions (2D) and a display that is in three dimensions (3D)):
Gesture: the operator points at an object with a finger and applies the other hand flat against the surface (anywhere where it does not mask the finger), and then moves the flat hand further away.
Result: the display takes on a "graphic information" mode: an enlarged representation of the pointed-to object is opened. The pointed-to object appears on its own in a window to the left of the finger. The magnification is progressive (with the object being shown in ever-increasing detail) as the hand moves progressively away from the finger. If the operator points a finger without selecting an object, then a "magnifying glass" zone is displayed: magnification takes place around the pointed-to zone. The operator may deselect an object by pressing a second time with a finger in order to establish the magnifying-glass effect before putting down the other hand and moving it. The operator closes the magnifying glass by moving the flat hand in the opposite direction, so as to close down the magnifying glass.
Gesture: the operator applies a hand flat and moves it over the surface.
Result: open a context menu.
Gesture: the operator applies a hand flat on the previously-opened menu and moves the hand over the surface.
Result: close a context menu.
Gesture: the operator applies a finger and releases it.
Result: select operation, e.g. in a menu on the screen.
Gesture: the operator applies the edge of the hand and moves it obliquely upwards.
Result: an "erasure" effect under two circumstances: with no selected object, all objects situated under the surface swept by the operator are erased from the screen;
Variant with account being taken of speed: the last movement imparted to the representation (however complicated it might be) is conserved as being continuous if the movement was terminated quickly with the fist being raised. To stop the continuous movement, it suffices to make contact with the surface. In 3D, this gesture can be used to cause the representation of the screen to move up or down.
Gesture: the operator applies two fingers and makes free movements on the surface.
Result: in a representation where several views are overlapping, the movements are applied to the "top" view and to the views associated therewith, independently of the others. In 2D: zoom in, zoom out, rotate, and pan, both dynamically and continuously. Variant with account being taken of speed: the last movement imparted to the representation is conserved as being continuous if the movement was terminated quickly with the fist being raised. To stop the continuous movement, it suffices to make contact with the surface. In 3D: zoom in, zoom out, rotate, and pan, both dynamically and continuously. Variant with account being taken of speed: the last movement imparted to the representation is conserved as being continuous if the movement was terminated quickly with the fist being raised. To stop the continuous movement, it suffices to make contact with the surface. In 3D, this gesture can be used to cause the representation of the screen to move up or down.
Gesture: the operator applies a finger to the surface, makes a fist with the other hand, and moves the fist upwards or downwards.
Result: scroll-wheel effect, e.g. used with a representation of a plurality of views that occupy the entire display surface, and also used when a plurality of objects are represented by a single representation. When a plurality of views overlie one another, moving the fist modifies the order of the views in real time, on the principle that the view on top of the others is the active view for panning, rotating, or zooming when a manipulation is to be performed on the active view (and those associated therewith, if any) without propagating to the other views. When there is only one representation for a plurality of objects, e.g. trajectories that coincide, the result is to cause the objects to scroll. This gesture is also used to scroll drop-down lists, to vary numerical values, etc.
Gesture: the operator places both hands flat on the surface, far enough apart to avoid creating artifacts, and then moves both hands upwards or downwards while creating a vertical offset.
Result: move two overlapping views. One of the hands causes the top view to move, the other the bottom view.
Preferably, the instructions are grouped together, and each group is associated with a common grammatical element. The grammatical elements (or predetermined positioning parameters), one of which is common to all of the instructions of a group, comprise a surface, a relative position, and/or a variation in the contact zones. One of the positioning parameters is common to at least one of the contact zones associated with all of the instructions in a given group. For example, in the embodiment described:
• displaying semantic or graphics information: pointing with a stationary finger (51, 52, 53, 66);
• inputting: one hand placed flat and not moving (59, 60);
• selecting in 2D and in 3D: one hand closed and stationary (61, 62);
• moving in the representation: the end of the finger is applied and moved (63, 65);
• manipulating the representation: one hand closed and moving (66, 64, 68, 70); and
• switching between modes: both hands applied with at least one hand being on edge and moving (75, 76, 77, 78).
This may make it easier for the operator to learn the grammar.
When the combinations incorporate modifying at least one of the surfaces, as in the implementation described, the method includes the step of determining the modification. With a movement, the movement is identified by the fact that the coordinates of the contact zone change, and the speed and/or acceleration of the movement is optionally measured on the basis of these changing coordinates. When the movement is linear (rectilinear or circular), two contact zones are detected and the method includes the step of identifying the moving zone being masked temporarily by the other zone (Figure 9). Identifying temporary masking includes a stage of extrapolating a trajectory for the movement of the moving zone while it is temporarily masked. When a contact zone turns on the spot, the method includes the step of determining the outline of the contact zone and of determining a major axis of the contact zone (from the coordinates of the two points of the outline that are furthest apart from each other), and the step of detecting rotation of the major axis. By way of example, rotation of the major axis is detected from variation in the distance between the two points of the outline that are furthest apart. An analogous method consists in measuring deformations of the quadrilateral within which the contact zone is inscribed (compare Figure 8a with Figure 8b). These mathematical methods for determining rotation of the contact zone are themselves known and there is no need to describe them in greater detail herein.
In order to facilitate use of the touch-sensitive device 2, the method includes the step of displaying a marker of the determined surface on the display surface 3. The marker in this example is a colored halo surrounding the contact zone, with it being possible for its color to depend on which surface has been detected. This enables the user to verify that the system has indeed taken the intended surface into account.
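Returning to the movement-related determinations described above (speed from successive coordinates, extrapolation of the trajectory of a zone while it is temporarily masked, and rotation of the major axis taken between the two outline points furthest apart), a purely illustrative sketch follows; the linear extrapolation and the angular tolerance are assumptions.

```python
import math

def speed(p0: tuple[float, float], t0: float, p1: tuple[float, float], t1: float) -> float:
    """Speed of a contact zone from two successive center positions."""
    return math.dist(p0, p1) / (t1 - t0)

def extrapolate(p0, t0, p1, t1, t_masked):
    """Linearly extrapolate the center of a moving zone while it is masked by another zone."""
    dt = t1 - t0
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return (p1[0] + vx * (t_masked - t1), p1[1] + vy * (t_masked - t1))

def major_axis_angle(outline: list[tuple[float, float]]) -> float:
    """Angle of the major axis, taken between the two outline points furthest apart."""
    a, b = max(((p, q) for p in outline for q in outline), key=lambda pq: math.dist(*pq))
    return math.atan2(b[1] - a[1], b[0] - a[0])

def rotated(outline_before, outline_after, tolerance: float = 0.05) -> bool:
    """Detect rotation of the contact zone from the change in major-axis orientation."""
    return abs(major_axis_angle(outline_after) - major_axis_angle(outline_before)) > tolerance
```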
The method of the invention also implements means for avoiding taking account of interfering contacts that are not the result of deliberate user action. These means enable contact zones to be stabilized (step 150) and make it possible to eliminate interfering contacts by tracking the variation of contacts over time. This stage also makes it possible to identify contact zones that are in movement.
These means include a time component and a graphics component.
The time component consists in observing the contacts that have a duration shorter than a predetermined threshold and/or contacts that appear after an earlier contact, but after a time lapse that is shorter than a threshold measured from the earlier contact, and continuing to observe them in order to identify how they vary. In this example, the time component consists more precisely in ignoring contacts of a duration that is shorter than a first predetermined threshold (it is assumed that a contact that is too short is accidental), and in ignoring contacts that occur within a time lapse that is shorter than a second predetermined threshold after a first contact that is taken into account (such a contact may be the result of an involuntary movement of the hand before it stabilizes). The software demultiplexing module prevents packets corresponding to such contacts from being issued. The thresholds are the result of a compromise between having a system that is highly reactive and the drawbacks of having a system that is too sensitive to interfering contacts (processing time, or the consequences of a wrongly interpreted contact on the functioning of the application). This makes it possible to take a contact zone into account only once it has stabilized and corresponds to the will of the user.
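A minimal sketch of this time component, filtering out contacts that are too short or that follow an accepted contact too closely; the threshold values are illustrative only, the patent leaving them to be chosen as a compromise.

```python
MIN_DURATION = 0.10           # seconds: shorter contacts treated as accidental (assumed value)
MIN_GAP_AFTER_CONTACT = 0.15  # seconds: contacts too soon after an accepted one are ignored

def filter_contacts(contacts):
    """Keep only contacts that last long enough and do not follow another too closely.

    `contacts` is an iterable of (start_time, end_time) tuples, ordered by start_time.
    """
    accepted = []
    last_accepted_start = None
    for start, end in contacts:
        if end - start < MIN_DURATION:
            continue  # too short: assumed accidental
        if last_accepted_start is not None and start - last_accepted_start < MIN_GAP_AFTER_CONTACT:
            continue  # appeared too soon after an accepted contact: assumed involuntary
        accepted.append((start, end))
        last_accepted_start = start
    return accepted

print(filter_contacts([(0.00, 0.03), (0.20, 1.50), (0.25, 0.90), (2.00, 3.00)]))
# -> [(0.2, 1.5), (2.0, 3.0)]
```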
The graphics component consists in defining a buffer zone 33 around the contact zone 23 (Figure 6), and any contact detected in the buffer zone is either ignored or incorporated, as appropriate. The distance between the outline of the buffer zone and the outline of the contact zone is, as above, the result of a compromise. Contacts that appear in the buffer zone may be the result of hand movements for increasing the surface of the contact zone, e.g. moving from a hand that is closed as a fist to a hand that is opened out flat. When the determined surface of the contact zone 22 corresponds to an open hand, the buffer zone 32 is dimensioned to stay close to the contact zone (Figure 7). A hand that is flat is stable, and only a few residual movements of the hand might provoke interfering contacts in the immediate vicinity thereof. This nevertheless makes it possible to ignore interfering contacts that might result from the hand being closed.
Contacts that occur in the buffer zone are thus filtered depending on the respective variations of the contact in the buffer zone and of the contact that gave rise to the buffer zone.
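A minimal sketch of this graphics component; the margin values, the duration threshold, and the rectangular model of the zones are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    x: float       # center of the zone
    y: float
    half_w: float  # half-width of the bounding rectangle
    half_h: float  # half-height

def buffer_zone(zone: Zone, surface_class: str) -> Zone:
    """Buffer zone around a contact zone; kept tight around an open (flat) hand."""
    margin = 10.0 if surface_class == "open_hand" else 40.0  # screen units, illustrative
    return Zone(zone.x, zone.y, zone.half_w + margin, zone.half_h + margin)

def inside(zone: Zone, x: float, y: float) -> bool:
    return abs(x - zone.x) <= zone.half_w and abs(y - zone.y) <= zone.half_h

def handle_new_contact(existing: Zone, surface_class: str, contact: Zone, duration: float) -> str:
    """Ignore or incorporate a contact that appears in the buffer zone of an existing zone."""
    buf = buffer_zone(existing, surface_class)
    if not inside(buf, contact.x, contact.y):
        return "separate"      # a genuinely new contact zone
    if surface_class == "closed_hand" and duration > 0.2:  # hand opening: merge (threshold assumed)
        return "incorporate"
    return "ignore"            # transitional interference
```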
To take account of morphological differences between potential users of the invention, the method includes a prior calibration step (step 100) for calibrating the surfaces of the contact zones as a function of the size of the user's hands. During this step, the user is asked to place successively on the touch screen 2: a finger, a closed hand, an open hand, and the edge of a hand so as to determine the reference surfaces to be used for each of the corresponding contact zones in order to classify the surfaces of the contact zones, prior to looking up instructions in the table.
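By way of illustration, the calibration and the subsequent classification of a measured surface could be sketched as follows; the nearest-reference rule is an assumption, the patent stating only that reference surfaces are determined for each posture.

```python
REFERENCE_CLASSES = ("finger_tip", "closed_hand", "open_hand", "hand_edge")

def calibrate(measured_areas: dict[str, float]) -> dict[str, float]:
    """Store the reference area measured for each of the four postures during calibration."""
    return {name: measured_areas[name] for name in REFERENCE_CLASSES}

def classify(area: float, references: dict[str, float]) -> str:
    """Assign a contact surface to the class whose calibrated reference area is closest."""
    return min(references, key=lambda name: abs(references[name] - area))

# Example calibration for one user (areas in arbitrary screen units).
refs = calibrate({"finger_tip": 120.0, "closed_hand": 4500.0,
                  "open_hand": 14000.0, "hand_edge": 2000.0})
print(classify(5100.0, refs))  # -> "closed_hand"
```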
In Figure 4, it can be seen that, given the detection technology used in this example, the surface of a contact zone depends on how far away it is from the optical detectors 5. In this example, the method of the invention comprises, after calculating the surface of the contact zone, correcting that surface by a weighting factor whose value is determined using the grid 30 shown superposed on the display surface 3. The grid 30 is made from two bundles of lines extending from each of the top corners of the detector frame 4 towards the sides adjacent to the opposite bottom corner. The intersections between the bundles of lines form quadrilaterals whose area varies as a function of distance from the top corners, i.e. as a function of their coordinates in the frame of reference of the touch screen 2. The weighting factor is determined as the ratio between the area of the quadrilateral corresponding to the coordinates of the contact zone whose area is being calculated and the area of an arbitrarily-chosen reference quadrilateral. The weighting step thus makes it possible to determine the surface of the contact zone as a function of its distance from the detectors.
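A possible implementation of this correction is sketched below, computing each quadrilateral area with the shoelace formula; the way the grid cell is looked up and the way the factor is applied to the raw surface (multiplication is assumed here) are not specified in the description.

```python
# Hypothetical sketch of the distance correction based on the grid 30.

def quad_area(corners):
    """Area of a quadrilateral from its four (x, y) corners in order (shoelace formula)."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(corners, corners[1:] + corners[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def weighting_factor(contact_quad, reference_quad):
    """Ratio between the area of the grid quadrilateral containing the contact
    and the area of the arbitrarily-chosen reference quadrilateral."""
    return quad_area(contact_quad) / quad_area(reference_quad)

def corrected_surface(raw_surface, contact_quad, reference_quad):
    """Apply the weighting factor to the raw surface (assumed multiplicative)."""
    return raw_surface * weighting_factor(contact_quad, reference_quad)
```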
Naturally, the invention is not limited to the embodiments described but covers any variant coming within the ambit of the invention as defined by the claims.
In particular, the method of the invention may be implemented with other types of touch screen, e.g. touch screens that are resistive, capacitive, acoustic, or optical (a camera disposed looking at the screen).
The method of the invention can be used for controlling applications or software of types other than those associated with graphics or mapping.
The method of the invention may be implemented to take account only of surfaces, or of surfaces with only one type of modification (variation in surface or movement), or only of movement (rotary or linear).
The application controlled by the touch-sensitive device may also receive instructions via some other input interface such as a keyboard or a mouse, for example.
Claims
1. A method of inputting instructions to control a program executed by a computer provided with a touch-sensitive device arranged to detect zones of contact with at least a portion of a user's hand, the method comprising the steps of:
• determining surfaces of the contact zones; and
• selecting at least one instruction corresponding to at least one of the determined surfaces from a table that associates predetermined instructions with at least one surface or combination of surfaces.
2. A method according to claim 1, including the step of determining a modification to at least one of the contact zones and of selecting the instruction also as a function of said modification.
3. A method according to claim 2, wherein the modification is a movement of the contact zone.
4. A method according to claim 3, wherein the determination of the modification includes a step of measuring the speed and/or the acceleration of the movement.
5. A method according to claim 3, including the step of determining an outline of the contact zone and a major axis of the contact zone, and the step of detecting a rotation of the major axis.
6. A method according to claim 2, wherein the modification is a variation in the surface of the contact zone.
7. A method according to claim 1, wherein, when two contact zones are detected with one of them moving, the method includes a step of identifying temporary masking of the moving zone by the other zone.
8. A method according to claim 7, wherein the identification of the temporary masking includes a stage of extrapolating a movement trajectory of the moving zone during the temporary masking.
9. A method according to claim 1, wherein the table has four surfaces, a first surface corresponding substantially to the distal end of a finger, a second surface corresponding substantially to a closed hand, a third surface corresponding to a hand open flat, and a fourth surface corresponding to the edge of a hand.
10. A method according to claim 1, including a calibration step for calibrating the surfaces of the contact zones as a function of the size of the user's hands.
11. A method according to claim 1, implemented by means of a touch-sensitive device having two contact detectors disposed in the vicinity of corners of the screen, the method including the step of determining the surface of the contact zone as a function of the distances of the contact zone from the detectors.
12. A method according to claim 1, wherein a buffer zone is defined around a contact zone and any contact that is detected in the buffer zone is filtered depending on variation of the contact in the buffer zone and of the contact that gave rise to the buffer zone.
13. A method according to claim 12, wherein, when the surface as determined for the contact zone corresponds to an open hand, the buffer zone is dimensioned to be close to the contact zone.
14. A method according to claim 1, wherein contacts having a duration shorter than a predetermined threshold, and/or contacts that appear after a time lapse that is shorter than a threshold measured from an earlier contact are not taken into account directly but are observed in order to identify how they vary.
15. A method according to claim 1, wherein contacts within a buffer zone and of a duration that is longer than a predetermined threshold are incorporated into the contact that gave rise initially to the buffer zone.
16. A method according to claim 1, wherein the instructions are grouped together, with each instruction being associated with a predetermined positioning parameter comprising a surface, a relative position, and/or a variation of the contact zones, and with one of the positioning parameters being common to all of the instructions within a given group.
17. A method according to claim 1, including the step of displaying an indicator of the determined surface on the screen.
18. A method according to claim 17, wherein the indicator is a color halo surrounding the contact zone, with the color depending on the determined surface.
19. A method according to claim 1, wherein a predetermined gesture performed by an operator is defined by at least one static or dynamic contact surface or by a combination of static or dynamic contact surfaces, or by a sequence of static or dynamic contact surfaces.
20. A system for implementing the method according to any preceding claim, the system comprising a touch-sensitive device provided with means for transmitting contact data in real time to a computer, the contact data comprising coordinates of contact zones detected on the touch-sensitive device, and a software demultiplexing module executable by the computer to:
• transform the contact data into identified data packets, each comprising an identifier, a position, an instant, a height, and a width;
• calculate a surface of the contact zone and determine a corresponding surface class, which surface and class are added to each packet; and
• determine an instruction, in particular on the basis of the surface class.
21. A system according to claim 20, wherein the touch-sensitive device is a display screen directly provided with contact detectors that are of the resistive, capacitive, acoustic, or optical type and that are arranged to make acquisitions at regular intervals of the order of about fifteen milliseconds.
22. A system according to claim 20, wherein the touch-sensitive device is a transparent slab fitted with contact detectors and fastened to a computer screen.
23. A system according to claim 20, wherein the touch-sensitive device is offset from a display screen.
Applications Claiming Priority (4)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR0902832 | 2009-06-11 | | |
| FR0902832A FR2946768B1 (en) | 2009-06-11 | 2009-06-11 | METHOD OF TACTILE INPUTTING CONTROL INSTRUCTIONS OF A COMPUTER PROGRAM AND SYSTEM FOR IMPLEMENTING SAID METHOD |
| US21870809P | 2009-06-19 | 2009-06-19 | |
| US61/218,708 | 2009-06-19 | | |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2010142732A1 (en) | 2010-12-16 |
Family
ID=41603747
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2010/058097 WO2010142732A1 (en) | A touch method of inputting instructions for controlling a computer program, and a system for implementing the method | 2009-06-11 | 2010-06-09 |
Country Status (2)

| Country | Link |
|---|---|
| FR (1) | FR2946768B1 (en) |
| WO (1) | WO2010142732A1 (en) |
Families Citing this family (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| FR3013861B1 * | 2013-11-27 | 2017-06-02 | Airbus Operations Sas | METHOD FOR VALIDATING AN INTERACTION ON A TOUCH SURFACE BY OCCULOMETRY |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070262964A1 * | 2006-05-12 | 2007-11-15 | Microsoft Corporation | Multi-touch uses, gestures, and implementation |
| WO2008038883A1 * | 2006-09-29 | 2008-04-03 | Lg Electronics Inc. | Method of generating key code in coordinate recognition device and video device controller using the same |
| US20080165141A1 * | 2007-01-05 | 2008-07-10 | Apple Inc. | Gestures for controlling, manipulating, and editing of media files using touch sensitive devices |
| EP2000894A2 * | 2004-07-30 | 2008-12-10 | Apple Inc. | Mode-based graphical user interfaces for touch sensitive input devices |
| US20080309632A1 * | 2007-06-13 | 2008-12-18 | Apple Inc. | Pinch-throw and translation gestures |
Also Published As

| Publication number | Publication date |
|---|---|
| FR2946768A1 (en) | 2010-12-17 |
| FR2946768B1 (en) | 2012-02-10 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10725086; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 10725086; Country of ref document: EP; Kind code of ref document: A1 |