
HK1191705B - Sensing user input at display area edge - Google Patents


Info

Publication number
HK1191705B
Authority
HK
Hong Kong
Prior art keywords
active display
display area
input
user input
area
Prior art date
Application number
HK14104832.8A
Other languages
Chinese (zh)
Other versions
HK1191705A (en)
Inventor
Christopher A. Whitman
Rajesh Manohar Dighde
Original Assignee
Microsoft Technology Licensing, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1191705A
Publication of HK1191705B

Description

Sensing user input at an edge of a display area
RELATED APPLICATIONS
This application claims priority under 35 U.S.C. § 119(e) to the following U.S. provisional patent applications, the entire disclosure of each of which is hereby incorporated by reference herein in its entirety:
U.S. provisional patent application No. 61/606,321, entitled "Screen Edge", attorney docket No. 336082.01, filed March 2, 2012;
U.S. provisional patent application No. 61/606,301, entitled "Input Device Functionality", attorney docket No. 336083.01, filed March 2, 2012;
U.S. provisional patent application No. 61/606,313, entitled "Functional Hinge", attorney docket No. 336084.01, filed March 2, 2012;
U.S. provisional patent application No. 61/606,333, attorney docket No. 336086.01, filed March 2, 2012;
U.S. provisional patent application No. 61/613,745, entitled "Use and Authentication", attorney docket No. 336086.02, filed March 21, 2012;
U.S. provisional patent application No. 61/606,336, entitled "Kickstand and Camera", attorney docket No. 336087.01, filed March 2, 2012; and
U.S. provisional patent application No. 61/607,451, entitled "Spanaway professional", attorney docket No. 336143.01, filed March 6, 2012.
Background
Mobile computing devices have been developed to increase the functionality available to users in mobile settings. For example, a user may interact with a mobile phone, tablet, or other mobile computing device to check email, surf the web, compose text, interact with an application, and so forth. Conventional mobile computing devices oftentimes employ a display with touch screen functionality to allow a user to enter various data or requests into the computing device. However, it may be difficult to identify certain user inputs with such conventional mobile computing devices, which provides a frustrating and unfriendly experience for the user.
Disclosure of Invention
Techniques for sensing user input at an edge of a display area are described.
In one or more implementations, input data for user input is received. The input data includes: data for at least a portion of the user input in the active display area of the device and data for at least a portion of the user input in an area outside of the active display area of the device. A user input is determined based on the received input data.
In one or more implementations, a computing device includes a housing configured in a handheld form factor and a display device supported by the housing. The display device has an active display area and one or more sensors arranged to sense user input based at least in part on proximity of an object to the active display area and based at least in part on proximity of an object to an area outside the active display area.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Brief description of the drawings
The detailed description will be described with reference to the accompanying drawings. In the drawings, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different instances in the description and the figures may indicate similar or identical items. The entities illustrated in the figures may represent one or more entities, and thus, the singular or plural of these entities may be referred to interchangeably in this discussion.
FIG. 1 is an illustration of an environment in an example implementation that is operable to employ techniques described herein.
FIG. 2 is an illustration of an environment that is operable, in another example implementation, to employ techniques described herein.
FIG. 3 depicts an example implementation of the input device of FIG. 2 showing the flexible hinge in more detail.
FIG. 4 depicts an example implementation showing a perspective view of the connection portion of FIG. 3 including mechanical coupling protrusions and a plurality of communication contacts.
FIG. 5 illustrates an example display device that implements techniques to sense user input in a display area.
FIG. 6 illustrates a cross-sectional view of an example display device that implements techniques for sensing user input in a display area.
FIG. 7 illustrates a cross-sectional view of another example display device that implements techniques for sensing user input in a display area.
FIG. 8 is an illustration of a system that is operable, in an example implementation, to employ techniques described herein.
FIG. 9 illustrates the example display device of FIG. 5 with example user inputs.
FIG. 10 illustrates the example display device of FIG. 5 with another example user input.
FIG. 11 illustrates the example display device of FIG. 5 with another example user input.
FIG. 12 illustrates the example display device of FIG. 5 with another example user input.
FIG. 13 is a flow diagram illustrating an example process for implementing the techniques described herein in accordance with one or more embodiments.
FIG. 14 illustrates an example system including various components of an example device, which may be implemented as any type of computing device as described with reference to FIGS. 1-13 to implement embodiments of the techniques described herein.
Detailed Description
Overview
Techniques for sensing user input at an edge of a display area are described. One or more sensors are arranged to sense user input in the active display area and user input in an extended area outside the active display area. Data for user input such as gestures may include data from user input sensed in and outside of the active display area. Thus, user input may originate and/or terminate outside of the active display area.
In the following discussion, an example environment is first described in which the techniques described herein may be employed. Example procedures are then described as being performed in the example environment and in other environments. Thus, execution of the example processes is not limited to the example environment, and the example environment is not limited to execution of the example processes.
Example Environment and Processes
FIG. 1 is an illustration of an environment 100 that is operable, in an example implementation, to employ techniques described herein. The illustrated environment 100 includes an example of a computing device 102, which may be configured in a variety of ways. For example, the computing device 102 may be configured for mobile use, such as a mobile phone, a tablet computer, and so forth. However, the techniques discussed herein are also applicable to a variety of devices other than those for mobile use, and may be used with any of a variety of different devices that use input sensors on or in a display area. For example, the computing device 102 may be a desktop computer, a kiosk, an interactive display or monitor (e.g., located at a hospital, airport, mall, etc.), and the like. The computing device 102 may range from full-resource devices with substantial memory and processor resources to low-resource devices with limited memory and/or processing resources. Computing device 102 may also refer to software that causes computing device 102 to perform one or more operations.
Computing device 102 is shown, for example, as including input/output module 108. Input/output module 108 represents functionality related to the processing of inputs and the rendering of outputs by computing device 102. Input/output module 108 may process a variety of different inputs, such as inputs related to functions corresponding to keys of an input device coupled to computing device 102 or keys of a virtual keyboard displayed by display device 110, inputs that are gestures recognized through touch screen functionality of display device 110 and that cause operations corresponding to the gestures to be performed, and so forth. Thus, display device 110 is also referred to as an interactive display device due to the ability of the display device to receive user input via any of a variety of input sensing technologies. The input/output module 108 may support a variety of different input technologies by recognizing and utilizing a distinction between input types, including key presses, gestures, and so forth.
FIG. 2 is an illustration of an environment 200 that is operable, in another example implementation, to employ techniques described herein. The illustrated environment 200 includes an example of a computing device 202 that is physically and communicatively coupled to an input device 204 via a flexible hinge 206. Similar to the computing device 102 of FIG. 1, the computing device 202 may be configured in a variety of ways. Computing device 202 may also involve software that causes computing device 202 to perform one or more operations.
Computing device 202 is shown, for example, as including an input/output module 208. Input/output module 208 represents functionality related to the processing of inputs and the rendering of outputs by computing device 202. Input/output module 208 may process a variety of different inputs, such as inputs related to functions corresponding to keys of input device 204 or keys of a virtual keyboard displayed by display device 210, inputs that are gestures recognized through touch screen functionality of display device 210 and that cause operations corresponding to the gestures to be performed, and so on. Thus, display device 210 is also referred to as an interactive display device due to the ability of the display device to receive user input via any of a variety of input sensing technologies. The input/output module 208 may support a variety of different input technologies by recognizing and utilizing distinctions between input types including key presses, gestures, and the like.
In the illustrated example, the input device 204 is configured as a keyboard having a QWERTY key layout, although other key layouts are also contemplated. In addition, other non-conventional configurations are also contemplated, such as game controllers, configurations that mimic musical instruments, and the like. Thus, the input device 204 and the keys incorporated by the input device 204 may take on a variety of different configurations to support a variety of different functions.
As previously described, in this example, the input device 204 is physically and communicatively coupled to the computing device 202 using the flexible hinge 206. The flexible hinge 206 is flexible in that: the rotational movement supported by the hinge is accomplished by flexing (e.g., bending) of the material comprising the hinge, as opposed to mechanical rotation supported by a pin, although embodiments of mechanical rotation are also contemplated. Further, such flexible rotation may be configured to support movement in one direction (e.g., vertical in the figure), and further, to restrict movement in other directions, such as lateral movement of the input device 204 relative to the computing device 202. This may be used to support consistent alignment of the input device 204 with respect to the computing device 202, such as aligning sensors for changing power states, application states, and so forth.
The flexible hinge 206 may be formed, for example, by using one or more layers of fabric and include conductors formed as flexible traces to communicatively couple the input device 204 to the computing device 202 and vice versa. This communication may be used, for example, to transmit the results of the key press to the computing device 202, receive power from the computing device, authenticate, provide supplemental power to the computing device 202, and so forth. The flexible hinge 206 can be configured in a variety of ways, further discussion of which may be found in relation to the following figures.
FIG. 3 depicts an example implementation 300 of the input device 204 of FIG. 2, showing the flexible hinge 206 in greater detail. In this example, a connection 302 of the input device is shown, the connection 302 configured to provide communication and physical connection between the input device 204 and the computing device 202. In this example, the connection 302 has a height and cross-section configured to be received in a channel in a housing of the computing device 202, although this arrangement may be reversed without departing from the spirit and scope thereof.
The connection portion 302 is flexibly connected to a portion of the input device 204 that includes the keys by using the flexible hinge 206. Thus, when the connection portion 302 is physically connected to the computing device, the combination of the connection portion 302 and the flexible hinge 206 supports movement of the input device 204 relative to the computing device 202, similar to a hinge of a book.
For example, the flexible hinge 206 may support rotational movement such that the input device 204 may be placed facing the display device 210 of the computing device 202, thereby acting as a cover. The input device 204 may also be rotated so as to be disposed against the back of the computing device 202, e.g., against a rear housing of the computing device 202 that is disposed opposite the display device 210.
Naturally, a variety of other orientations are also supported. For example, the computing device 202 and the input device 204 may be arranged such that both are laid flat against a surface as shown in FIG. 2. In another example, a typing arrangement may be supported in which the input device 204 is laid flat against a surface and the computing device 202 is placed at an angle, such as by using a stand disposed on the back of the computing device 202, to permit viewing of the display device 210. Other examples are also contemplated, such as tripod arrangements, conference arrangements, presentation arrangements, and the like.
In this example, the connection portion 302 is shown to include magnetic coupling devices 304, 306, mechanical coupling protrusions 308, 310, and a plurality of communication contacts 312. The magnetic coupling devices 304, 306 are configured to magnetically couple to complementary magnetic coupling devices of the computing device 202 through the use of one or more magnets. In this manner, the input device 204 may be physically secured to the computing device 202 by using magnetic attraction forces.
The connection portion 302 also includes mechanical coupling protrusions 308, 310 for forming a mechanical-physical connection between the input device 204 and the computing device 202. The mechanical coupling protrusions 308, 310 will be shown in more detail in the following figures.
FIG. 4 depicts an example implementation 400 showing a perspective view of the connection portion 302 of FIG. 3, including the mechanical coupling protrusions 308, 310 and the plurality of communication contacts 312. As shown, the mechanical coupling protrusions 308, 310 are configured to extend away from the surface of the connection portion 302, which in this example is perpendicular, although other angles are also contemplated.
The mechanical coupling protrusions 308, 310 are configured to be received in complementary cavities within the channel of the computing device 202. When so received, the mechanical coupling protrusions 308, 310 facilitate mechanical binding between the devices when a force is applied that is not in-line with the axis (defined as corresponding to the height of the protrusions and the depth of the cavities).
For example, when a force is applied in line with the aforementioned axis along the height of the protrusions and the depth of the cavities, a user may separate the computing device 202 from the input device 204 against solely the force applied by the magnets. However, at other angles, the mechanical coupling protrusions 308, 310 are configured to mechanically bind within the cavities, thereby creating a force that resists removal of the input device 204 from the computing device 202 in addition to the magnetic force of the magnetic coupling devices 304, 306. In this manner, the mechanical coupling protrusions 308, 310 may bias the removal of the input device 204 from the computing device 202 toward a motion that mimics tearing a page from a book, and restrict other attempts to separate the devices.
The connection portion 302 is also shown to include a plurality of communication contacts 312. The plurality of communication contacts 312 are configured to contact corresponding communication contacts of the computing device 202 to form a communicative coupling between the devices. The communication contacts 312 may be configured in a variety of ways, such as by being formed using a plurality of spring-loaded pins configured to provide consistent communication contact between the input device 204 and the computing device 202. Thus, the communication contacts may be configured to remain in contact during minor movement or jostling of the devices. A variety of other examples are also contemplated, including placement of the pins on the computing device 202 and the contacts on the input device 204.
Techniques to sense user input at the edge of the display area use one or more sensors disposed in an extended sensor region to sense user input outside of the active display area. One or more sensors are also provided for sensing user input in the active display area. The extended sensor region is in close proximity to (e.g., within 5 millimeters of) the active display region, and is typically adjacent to the active display region.
FIG. 5 illustrates an example display device 500 that implements techniques for sensing user input in a display area. Display device 500 is an interactive display device that includes an active display area 502 in which a computing device may display various data and information. Display area 502 is referred to as an active display area because the displayed data and information may be changed by the computing device over time, optionally in response to user input received by the computing device. The display device 500 also includes an extended sensor region 504, shown with cross-hatching, around the active display region 502 and adjacent to the active display region 502. User input may be received when an object, such as a user's finger, stylus (stylus), pen, etc., touches and/or is proximate to a surface of active display area 502 and/or a surface of extended sensor area 504. The extended sensor region 504 facilitates sensing user input along an edge of the active display region 502. The edge of the active display area 502 refers to the outer perimeter of the active display area 502, which is the portion of the active display area 502 that is closest to the extended sensor area 504.
The extended sensor region 504 may extend, for example, 2 millimeters beyond the active display region 502, although other amounts of extension are also contemplated. The extended sensor region 504 may extend the same amount beyond the active display area 502 on all sides, or alternatively may extend by different amounts. For example, the extended sensor region 504 may extend 2 millimeters vertically and 4 millimeters horizontally beyond the active display region 502. The extended sensor region 504 may also vary across different types of devices and may be customized for a particular type of device. For example, interactive devices capable of receiving input from a greater distance (e.g., kiosks and interactive displays capable of sensing input 10 centimeters away) may have an extended sensor region that extends further beyond the display region (e.g., 10-15 centimeters rather than 2-4 millimeters) than devices that receive input from closer interactions (e.g., touch-sensing tablets).
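To make the geometry above concrete, the following is a minimal TypeScript sketch (not part of the patent disclosure) of how a sensed position might be classified as falling in the active display area, in the extended sensor area, or outside both. The coordinate convention, the type and function names, and the 2-millimeter and 4-millimeter margins are illustrative assumptions only.

```typescript
// Hypothetical coordinate model: the sensor layer reports positions in
// millimeters, with the origin at the top-left corner of the active display area.
interface Point {
  x: number; // mm
  y: number; // mm
}

interface DisplayGeometry {
  activeWidth: number;  // width of the active display area, mm
  activeHeight: number; // height of the active display area, mm
  marginX: number;      // horizontal extension of the sensor area, mm (e.g. 4)
  marginY: number;      // vertical extension of the sensor area, mm (e.g. 2)
}

type Region = "active" | "extended" | "outside";

// Classify a sensed position into the active display area, the extended
// sensor area surrounding it, or neither.
function classify(p: Point, g: DisplayGeometry): Region {
  const inActive =
    p.x >= 0 && p.x <= g.activeWidth && p.y >= 0 && p.y <= g.activeHeight;
  if (inActive) return "active";

  const inExtended =
    p.x >= -g.marginX && p.x <= g.activeWidth + g.marginX &&
    p.y >= -g.marginY && p.y <= g.activeHeight + g.marginY;
  return inExtended ? "extended" : "outside";
}

// Example: a touch 1 mm to the left of the active area lands in the extended area.
const geometry: DisplayGeometry = { activeWidth: 150, activeHeight: 90, marginX: 4, marginY: 2 };
console.log(classify({ x: -1, y: 45 }, geometry)); // "extended"
```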
Display devices implementing techniques for sensing user input at the edges of a display area may use a variety of active display technologies. These active display technologies may include, for example, flexible display technologies, electronic reader display technologies, liquid crystal display (LCD) technologies, light-emitting diode (LED) display technologies, organic light-emitting diode (OLED) display technologies, plasma display technologies, and so forth. Although various examples of display technologies are discussed herein, other display technologies are also contemplated.
Display devices implementing techniques for sensing user input at the edges of a display area may use a variety of different input sensing technologies. These input sensing technologies may include capacitive systems and/or resistive systems that sense touch. These input sensing technologies may also include inductive systems that sense input from a pen (or other object). These input sensing technologies may also include light-based systems, such as sensor-in-pixel (SIP) systems, infrared systems, optical imaging systems, and so forth, that sense the reflection or scattering of light from an object touching (or near) the surface of the display device. Other types of input sensing technologies may also be used, such as surface acoustic wave systems, acoustic pulse recognition systems, dispersive signal systems, and the like. Although various examples of input sensing techniques are discussed herein, other input sensing techniques are also contemplated. Furthermore, these input sensing technologies may be combined, such as combining piezoelectric sensors with extended capacitive sensors, to provide additional tactile inputs.
According to input sensing techniques for display devices, user input may be received when an object (e.g., a user's finger, a stylus, a pen, etc.) touches and/or approaches a surface of the display device. This close proximity may be, for example, 5 millimeters (although different proximities are also contemplated), and may vary with the manner in which the display device is implemented. The proximity of an object with respect to a display device refers to the distance of the object from the display device in a direction perpendicular to the plane of the display device.
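As a trivial illustration of the proximity test just described, here is a sketch under the assumption of a 5-millimeter sensing range; the constant and function names are hypothetical.

```typescript
// Hypothetical hover test: an object counts as user input when its distance
// from the display surface, measured perpendicular to the display plane,
// is within the sensing range (here assumed to be 5 mm).
const PROXIMITY_RANGE_MM = 5;

function isSensedAsInput(heightAboveSurfaceMm: number): boolean {
  return heightAboveSurfaceMm >= 0 && heightAboveSurfaceMm <= PROXIMITY_RANGE_MM;
}

console.log(isSensedAsInput(3));  // true: within 5 mm of the surface
console.log(isSensedAsInput(12)); // false: too far from the surface
```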
FIG. 6 illustrates a cross-sectional view of an example display device 600 that implements techniques for sensing user input to a display area. The display device 600 includes an active display layer 602 with an input sensing layer 604 disposed above the active display layer 602. Although layers 602 and 604 are shown as separate layers, it should be noted that each of layers 602 and 604 may itself be comprised of multiple layers. As discussed above, the input sensing layer 604 and the active display layer 602 may be implemented using a variety of different technologies. Although not shown in fig. 6, it should be noted that any number of additional layers may be included in display device 600. For example, an additional protective layer made of glass or plastic may be disposed over the input sensing layer 604.
A user's finger 606 (or other object) touching or in close proximity to the input sensing layer 604 is sensed by the input sensing layer 604. The position at which the layer 604 senses the user's finger 606 (or other object) is provided by the layer 604 as the position of the sensed object and is used to identify user input, as will be discussed in more detail below.
The input sensing layer 604 includes a plurality of sensors and extends beyond the active display layer 602 to the extended sensor areas 608, 610. The number of sensors and the manner in which the sensors are arranged may vary based on the implementation and the input sensing technology used for the input sensing layer 604. The input sensing layer 604 includes a portion 612 and portions 614 and 616.
One or more sensors may be disposed in the input sensing layer 604 (in portion 612) above the active display layer 602. These sensors disposed above the layer 602 sense a user's finger 606 (or other object) touching or in close proximity to the layer 604 above the active display layer 602, and thus are also referred to as sensing user input in and/or above the active display area and disposed in the active display area.
One or more sensors may also be disposed in the input sensing layer 604 (in portions 614, 616) over the extended sensor regions 608, 610, respectively. As shown in FIG. 6, the extended sensor regions 608, 610 are not located above the active display layer 602. These sensors disposed above the extended sensor regions 608, 610 sense a user's finger 606 (or other object) touching or in close proximity to the layer 604 above the extended sensor regions 608, 610, and are thus also referred to as sensing user input in and/or above the extended sensor regions 608, 610. Because the extended sensor regions 608, 610 are not located above the active display layer 602, these sensors disposed above the extended sensor regions 608, 610 are also referred to as sensing user input in an area outside of the active display region and are disposed in an area outside of the active display region.
Alternatively, the sensors may be arranged in the input sensing layer 604 in other ways, such as along an outer edge (perimeter) of the input sensing layer 604, at corners of the input sensing layer 604, and so forth. These sensors may still sense user input in and/or over the active display area as well as in areas outside the active display area.
FIG. 7 illustrates a cross-sectional view of another example display device 700 that implements techniques for sensing user input to a display area. The display device 700 includes an active display layer 702, with an input sensing layer 704 disposed over the active display layer 702. As discussed above, the input sensing layer 704 and the active display layer 702 may be implemented using a variety of different technologies. Layers 702 and 704 are disposed between a lower panel layer 706 and an upper panel layer 708. The panel layers 706, 708 may be made of various materials such as glass, plastic, and the like. Although layers 702, 704, 706, and 708 are shown as separate layers, it should be noted that each of layers 702, 704, 706, and 708 may themselves be comprised of multiple layers. These layers may also be flexible layers and may be suitable for use in three-dimensional (3D) interactive devices.
Optionally, additional support material 714, 716, shown with cross-hatching in FIG. 7, is included between the panel layers 706, 708. The support materials 714, 716 provide additional support in the areas between the panel layers to which the layers 702 and 704 do not extend. The support material 714, 716 may be various materials such as glass, plastic, bonding adhesive, and the like.
A user's finger 606 (or other object) touching or in close proximity to the input sensing layer 704 is sensed by the input sensing layer 704. The position at which the layer 704 senses the user's finger 606 (or other object) is provided by the layer 704 as the position of the sensed object and is used to identify user input, as will be discussed in more detail below.
The input sensing layer 704 includes a plurality of sensors and extends beyond the active display layer 702 to the extended sensor areas 710, 712. However, as shown, the input sensing layer 704 need not extend as far as the panel layers 706, 708. The number of sensors included in the input sensing layer 704 and the manner in which the sensors are arranged may vary based on the implementation and the input sensing technology used for the input sensing layer 704. The input sensing layer 704 includes a portion 718 and portions 720 and 722.
One or more sensors may be disposed in the input sensing layer 704 (in portion 718) above the active display layer 702. These sensors disposed above the layer 702 sense a user's finger 606 (or other object) touching or in close proximity to a panel layer 708 above the active display layer 702, and thus are also referred to as sensing user input in and/or above the active display area and disposed in the active display area.
One or more sensors are also disposed in the input sensing layer 704 (in portions 720, 722) over the extended sensor regions 710, 712, respectively. As shown in FIG. 7, the extended sensor regions 710, 712 are not located above the active display layer 702. These sensors disposed above the extended sensor regions 710, 712 sense a user's finger 606 (or other object) touching or in close proximity to the panel layer 708 above the extended sensor regions 710, 712, and are thus also referred to as sensing user input in and/or above the extended sensor regions 710, 712. Because the extended sensor regions 710, 712 are not located above the active display layer 702, these sensors disposed above the extended sensor regions 710, 712 are also referred to as sensing user input in regions outside of the active display region and are disposed in regions outside of the active display region.
Alternatively, the sensors may be arranged in the input sensing layer 704 in other ways, such as along an outer edge (perimeter) of the input sensing layer 704, at corners of the input sensing layer 704, and so forth. These sensors may still sense user input in and/or over the active display area as well as in areas outside the active display area.
It should be noted that although the input sensing layer in fig. 6 and 7 is shown as being disposed above the active display layer, other arrangements are also contemplated. For example, the input sensing layer may be within or below the active display layer. The input sensing layer may also have a variety of configurations. The input sensing layer may be on both sides of the plastic substrate and/or glass substrate, or on the same side of the plastic layer, glass layer, and/or other optically transparent layer.
Fig. 8 is an illustration of a system 800 that is operable, in an example implementation, to employ techniques described herein. The system 800 includes an input data collection module 802 and an input handler (handler) module 804. System 800 may be implemented, for example, in computing device 102 of fig. 1 or computing device 202 of fig. 2. Although modules 802 and 804 are shown in system 800, it should be noted that one or more additional modules may be included in system 800. It should also be noted that the functionality of the module 802 and/or the module 804 may be separated into multiple modules.
The input data collection module 802 receives indications 806 of the locations of sensed objects. The indications 806 of the locations of these sensed objects are indications of the locations of objects (e.g., a user's finger or a pen) sensed by an input sensing layer of the display device. Optionally, time information associated with the positions sensed by the input sensing layer may also be included as part of the indications 806 of the locations of sensed objects. This time information indicates when a particular location was sensed and may take different forms. This time information may, for example, be relative to a fixed time frame or clock, or may be an amount of time since the previous location was sensed. Alternatively, the input data collection module 802 may generate the time information based on the time at which a sensed object location indication 806 is received.
The input data collection module 802 generates input data 808 using the sensed object position indication 806. The input data 808 describes the position and movement of the user input. The input data 808 may be the sensed object location indication 806 and any associated temporal information for the location received and/or generated by the module 802.
Further, the user input may have an associated lifetime, which refers to a duration that begins when an object touching (or in close proximity to) the surface is sensed and ends when the object is no longer sensed as touching (or in close proximity to) the surface of the display device. This associated lifetime may be identified by the input data collection module 802 and included as part of the input data 808.
The user input may also have an associated velocity, which refers to the velocity at which the sensed object is moving. This speed is a particular distance divided by a particular amount of time, such as a particular number of inches per second, a particular number of millimeters per millisecond, and so forth. This associated speed may be identified by the input data collection module 802 and included as part of the input data 808, or otherwise used (e.g., to determine when to provide the input data 808 to the input handler module 804, as discussed in more detail below).
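The preceding paragraphs can be summarized in a small sketch of the kind of input data the collection module might assemble. The structure and names below are hypothetical, not taken from the patent, and only illustrate how per-sample time information, the lifetime, and the average speed of a user input could be derived.

```typescript
interface SensedSample {
  x: number;      // sensed position, mm
  y: number;      // sensed position, mm
  timeMs: number; // time the position was sensed, relative to a fixed clock
}

interface InputData {
  samples: SensedSample[];
  lifetimeMs: number;   // duration from the first sample to the last sample
  speedMmPerMs: number; // average speed over the whole input
}

// Build InputData from raw sensed-position indications. If the sensor layer
// reports only a delta since the previous sample, the caller can accumulate
// the deltas into absolute times before calling this.
function buildInputData(samples: SensedSample[]): InputData {
  if (samples.length === 0) {
    return { samples, lifetimeMs: 0, speedMmPerMs: 0 };
  }
  const lifetimeMs = samples[samples.length - 1].timeMs - samples[0].timeMs;

  // Total path length travelled by the sensed object.
  let distanceMm = 0;
  for (let i = 1; i < samples.length; i++) {
    const dx = samples[i].x - samples[i - 1].x;
    const dy = samples[i].y - samples[i - 1].y;
    distanceMm += Math.hypot(dx, dy);
  }
  const speedMmPerMs = lifetimeMs > 0 ? distanceMm / lifetimeMs : 0;
  return { samples, lifetimeMs, speedMmPerMs };
}
```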
The input data collection module 802 provides input data 808 to the input handler module 804, and the input handler module 804 determines what the user input is. The user input may take various forms, such as gestures or mouse movements. A gesture refers to a motion or path taken by an object (e.g., a user's finger) that initiates one or more functions of a computing device. For example, the gesture may be a sliding of a user's finger in a particular direction, the user's finger tracing a particular character or symbol (e.g., a circle, the letter "Z", etc.), and so forth. Gestures may also include multi-touch inputs in which multiple objects (e.g., multiple user fingers) take particular motions or paths to initiate one or more functions of a computing device. Mouse movement refers to the motion or path an object (e.g., a user's finger) takes to move something (e.g., a cursor or pointer, a dragged and dropped object, etc.) on a display device. Although gestures and mouse movements are discussed herein, various other types of user input are also contemplated.
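As one hypothetical example of such a determination, a straight-line swipe and its direction could be recognized from the first and last sensed positions of an input path; a real input handler would support a far richer set of gestures and mouse movements than this sketch.

```typescript
interface Sample { x: number; y: number; } // sensed position, mm

type Swipe = "left" | "right" | "up" | "down" | "none";

// Determine a swipe gesture from the first and last sensed positions of an
// input path. This is only a sketch: it ignores curvature, multi-touch, and
// timing, and assumes y increases toward the bottom of the display.
function determineSwipe(path: Sample[], minDistanceMm = 10): Swipe {
  if (path.length < 2) return "none";
  const first = path[0];
  const last = path[path.length - 1];
  const dx = last.x - first.x;
  const dy = last.y - first.y;
  if (Math.hypot(dx, dy) < minDistanceMm) return "none";
  if (Math.abs(dx) >= Math.abs(dy)) return dx > 0 ? "right" : "left";
  return dy > 0 ? "down" : "up";
}
```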
The input handler module 804 can use any of a variety of public and/or proprietary techniques to determine what the user input is based on the input data 808. For example, the input handler module 804 may determine that the user input is a particular gesture, a particular mouse movement, and/or the like. The input handler module 804 may also be configured to analyze characteristics of the input (e.g., the size of the input and/or the speed of the input) to configure the display or other output to achieve a customized user experience. For example, a small input from a child's small finger may be processed to adjust fonts, colors, applications, and so forth as appropriate for the child.
The input handler module 804 may also take various actions based on the determined user input. For example, the input handler module 804 may provide an indication of the determined user input to one or more other modules of the computing device to perform the requested function or movement. As another example, the input handler module 804 may itself perform the requested function or move.
The input data collection module 802 may provide input data 808 to the input handler module 804 at various times. For example, the input data collection module 802 may provide the input data 808 to the input handler module 804 as the input data 808 is generated. As another example, the input data collection module 802 may provide the input data 808 to the input handler module 804 after the user input is complete (e.g., after the lifetime associated with the user input has elapsed and the object is no longer sensed as touching (or in close proximity to) the surface of the display device).
Alternatively, the input data collection module 802 may retain the input data 808 for a user input, but not provide the input data 808 to the input handler module 804 until a particular event occurs. Various different events may cause module 802 to provide the input data 808 to module 804. One event that may cause module 802 to provide the input data 808 to module 804 is that the user input, as indicated by the location of the object, is located in the active display area. Accordingly, in response to the user input being in the active display area, module 802 provides the input data 808 to module 804.
Another event that may cause module 802 to provide input data 808 to module 804 is that the user input is outside the active display area but is expected to be in the active display area in the future (e.g., during the associated lifetime of the user input). The user input may be expected to be in the active display area based on various rules or criteria, such as based on the speed of the user input and/or the direction of the user input. For example, if the user input is outside of the active display area and the direction of the user input is toward the active display area, the user input is expected to be in the active display area in the future. As another example, if the user input is outside of the active display area, the direction of the user input is toward the active display area, and the speed of the user input is greater than a threshold amount, then the user input is expected to be in the active display area in the future. This threshold amount may be, for example, 4 inches per second, although other threshold amounts are also contemplated. Accordingly, in response to an anticipated future user input being in the active display area, module 802 provides input data 808 to module 804.
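A sketch of this forwarding decision might look like the following. The field names are assumptions, the direction check is a simplified one-axis test, and the threshold constant simply converts the illustrative 4-inches-per-second figure to millimeters per millisecond.

```typescript
interface TrackedInput {
  x: number;            // current sensed position, mm
  y: number;
  vx: number;           // current velocity, mm per ms
  vy: number;
  inActiveArea: boolean;
}

interface ActiveArea {
  width: number;  // mm
  height: number; // mm
}

// Roughly 4 inches per second, expressed in mm per ms (4 * 25.4 / 1000).
const SPEED_THRESHOLD_MM_PER_MS = 0.1016;

// Decide whether collected input data should be handed to the input handler
// module: either the input is already in the active display area, or it is
// outside the area but heading toward it quickly enough to be expected there.
function shouldForward(input: TrackedInput, area: ActiveArea): boolean {
  if (input.inActiveArea) return true;

  // Simplified check: the input is moving back toward the area along at
  // least one axis on which it currently lies outside the area.
  const headingTowardArea =
    (input.x < 0 && input.vx > 0) ||
    (input.x > area.width && input.vx < 0) ||
    (input.y < 0 && input.vy > 0) ||
    (input.y > area.height && input.vy < 0);

  const speed = Math.hypot(input.vx, input.vy);
  return headingTowardArea && speed > SPEED_THRESHOLD_MM_PER_MS;
}
```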
Fig. 9 illustrates the example display device 500 of fig. 5 with example user inputs. The display device 500 includes an active display area 502 surrounded by an extended sensor area 504 (shown with cross-hatching), as discussed above. User input is received via a user's finger 606.
The user input in FIG. 9 is shown moving from right to left, where the user input begins in the extended sensor region 504 and moves into the active display region 502. The end position of the user's finger is shown using a dashed outline of the hand. Thus, sensing of the user input begins in the extended sensor region 504 before the user's finger 606 moves into the active display region 502. The user input identified by the movement of the user's finger 606 in FIG. 9 may be recognized more quickly than if the extended sensor region 504 were not included in the display device 500. The user input can be recognized more quickly because, without the extended sensor region 504, sensing of the position of the user's finger 606 would not begin until the user's finger 606 reached the edge of the active display area 502.
The user input in FIG. 9 is shown starting in the extended sensor region 504. It should be noted, however, that the user input may begin from outside of both the active display area 502 and the extended sensor area 504 (e.g., along an edge of the display device 500). Such user input may also be recognized more quickly than if the extended sensor region 504 were not included in the display device 500, because movement begins to be sensed when the user's finger 606 reaches the extended sensor region 504 (rather than waiting until the user's finger 606 reaches the active display region 502).
FIG. 10 illustrates the example display device 500 of FIG. 5 with another example user input. The display device 500 includes an active display area 502 surrounded by an extended sensor area 504 (shown with cross-hatching), as discussed above. User input is received via a user's finger 606.
The user input in FIG. 10 is shown moving from left to right, where the user input begins in the active display area 502 and ends in the extended sensor area 504. The end position of the user's finger is shown using a dashed outline of the hand. Alternatively, it should be noted that the termination location of the movement may be outside of both the active display area 502 and the extended sensor area 504 (e.g., along an edge of the display device 500). Sensing of the user input begins in the active display area before the user's finger 606 moves into the extended sensor area 504. By terminating the movement of the user's finger 606 in (or passing it through) the extended sensor region 504, the location of the user input in the extended sensor region 504 can be used to identify the user input. For example, the input handler module 804 of FIG. 8 may determine that the user input is a swipe or gesture from left to right across the display device, rather than an input that the user intended to stop on a particular icon or object displayed near the edge of the display area.
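The distinction drawn in this example can be sketched as follows (hypothetical types and names): an input that was sensed in the active display area but whose last sensed position lies in the extended sensor area, or beyond it, is interpreted as a swipe off the edge rather than a stop on an on-screen element near the edge.

```typescript
type Region = "active" | "extended" | "outside";

interface PathPoint { region: Region; x: number; y: number; }

type Interpretation = "swipe-off-edge" | "stop-in-display" | "other";

// Interpret how an input that was sensed in the active display area ended.
// If the final sensed position lies in the extended sensor area (or beyond),
// the motion is treated as a swipe off the edge of the display rather than a
// deliberate stop on an element displayed near the edge.
function interpretTermination(path: PathPoint[]): Interpretation {
  if (path.length === 0) return "other";
  const enteredActive = path.some(p => p.region === "active");
  if (!enteredActive) return "other";
  const last = path[path.length - 1];
  return last.region === "active" ? "stop-in-display" : "swipe-off-edge";
}
```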
FIG. 11 illustrates the example display device 500 of FIG. 5 with another example user input. The display device 500 includes an active display area 502 surrounded by an extended sensor area 504 (shown with cross-hatching), as discussed above. User input is received via a user's finger 606.
The user input in FIG. 11 is shown moving from right to left and top to bottom in a "<" shape. The user input in FIG. 11 begins in the active display area 502 and ends in the active display area 502, but passes through the extended sensor area 504. The end position of the user's finger is shown using a dashed outline of the hand. Sensing of user input in the extended sensor region 504 allows the user input shown in FIG. 11 to be input along the edge of the active display area 502. The user input is sensed in the extended sensor region 504 even though the user input passes outside the edge of the active display area 502.
FIG. 12 illustrates the example display device 500 of FIG. 5 with another example user input. The display device 500 includes an active display area 502 surrounded by an extended sensor area 504 (shown with cross-hatching), as discussed above. User input is received via a user's finger 606.
The user input in FIG. 12 is shown moving from left to right, where the user input begins and ends in the extended sensor region 504 without moving into the active display region 502. The end position of the user's finger is shown using a dashed outline of the hand. Sensing of the user input begins in the extended sensor region 504. However, because the user's finger 606 does not move into the active display area 502 and the direction of movement of the user's finger 606 is not toward the active display area 502, there is no need to provide input data for the user input to the input handler module 804 of FIG. 8. Thus, as long as the user input remains in the extended sensor region 504, no action based on the user input need be taken.
FIG. 13 is a flow diagram illustrating an example process 1300 for implementing the techniques described herein in accordance with one or more embodiments. Process 1300 may be implemented by a computing device, such as computing device 102 of FIG. 1 or computing device 202 of FIG. 2, and process 1300 may be implemented in software, firmware, hardware, or a combination thereof. Process 1300 is shown as a set of acts, but is not limited to the order shown for performing the operations of the various acts. Process 1300 is an example process for implementing the techniques described herein; additional discussions of implementing the techniques described herein are also included herein with reference to different figures.
In process 1300, input data is received (act 1302). As discussed above, the input data includes: data relating to at least a portion of the user input in the active display area of the device and data relating to at least a portion of the user input in an area outside of the active display area of the device.
Based on the input data, the user input is determined (act 1304). As discussed above, any of a variety of public and/or proprietary techniques may be used to determine what the user input is.
The action indicated by the user input is performed (act 1306). As discussed above, this action may be the performance of various functions or movements.
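Pulling the three acts of process 1300 together, a minimal end-to-end sketch (with hypothetical names and a deliberately trivial gesture determination) might look like this:

```typescript
interface Sample { x: number; y: number; timeMs: number; }

// Act 1302: receive input data covering positions both inside and outside
// the active display area.
function receiveInputData(rawSamples: Sample[]): Sample[] {
  return rawSamples; // in a real device this would come from the sensor layer
}

// Act 1304: determine what the user input is (here: a trivial left/right swipe).
function determineUserInput(samples: Sample[]): "swipe-right" | "swipe-left" | "unknown" {
  if (samples.length < 2) return "unknown";
  const dx = samples[samples.length - 1].x - samples[0].x;
  return dx >= 0 ? "swipe-right" : "swipe-left";
}

// Act 1306: perform the action indicated by the user input.
function performAction(input: string): void {
  console.log(`performing action for ${input}`);
}

const samples = receiveInputData([
  { x: -2, y: 10, timeMs: 0 },  // starts in the extended sensor area
  { x: 30, y: 12, timeMs: 40 }, // ends in the active display area
]);
performAction(determineUserInput(samples));
```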
Example systems and devices
FIG. 14 illustrates an example system generally at 1400, which includes an example computing device 1402, the example computing device 1402 representing one or more computing systems and/or devices that can implement the various techniques described herein. The computing device 1402 may be configured, for example, to assume a mobile configuration using a housing shaped or sized to be grasped and carried by one or more hands of a user, illustrative examples of which include mobile phones, mobile game and music devices, and tablets, although other examples and configurations are also contemplated.
The example computing device 1402 as shown includes a processing system 1404, one or more computer-readable media 1406, and one or more I/O interfaces 1408 communicatively coupled to each other. Although not shown, the computing device 1402 may also include a system bus or other data and command transfer system that couples the various components to one another. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. Various other examples are also contemplated, such as control and data lines.
Processing system 1404 represents functionality to perform one or more operations using hardware. Accordingly, processing system 1404 is shown to include hardware components 1410, which may be configured as processors, functional blocks, and so forth. This may include hardware implementations, such as application specific integrated circuits or other logic devices fabricated using one or more semiconductors. Hardware component 1410 is not limited by the materials of construction or the processing mechanisms employed therein. For example, a processor may include semiconductors and/or transistors (e.g., electronic Integrated Circuits (ICs)). In such a context, processor-executable instructions may be electronically-executable instructions.
The computer-readable storage medium 1406 is shown including a memory/storage unit 1412. Memory/storage unit 1412 represents the capacity of a memory/storage unit associated with one or more computer-readable media. The memory/storage unit component 1412 may include volatile media (such as Random Access Memory (RAM)) and/or nonvolatile media (such as Read Only Memory (ROM), flash memory, optical disks, magnetic disks, and so forth). The memory/storage unit component 1412 may include fixed media (e.g., RAM, ROM, a fixed hard disk, etc.) as well as removable media (e.g., flash memory, a removable hard disk drive, an optical disk, and so forth). Computer-readable media 1406 may be configured in a variety of other ways, which will be discussed further below.
Input/output interface 1408 represents functionality that allows a user to enter commands and information to computing device 1402, and also allows information to be presented to the user and/or other components or devices using various input/output devices. Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone, a scanner, touch functionality (e.g., capacitive sensors or other sensors configured to detect physical touches), a camera (e.g., which may recognize movement in visible wavelengths or invisible wavelengths such as infrared frequencies as gestures that do not involve touch), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, a haptic-response device, and so forth. Thus, the computing device 1402 can be configured in a variety of ways to support user interaction.
Computing device 1402 is also shown to include one or more modules 1418, which can be configured to support various functions. The one or more modules 1418, for example, may be configured to generate input data based on the sensed object position, to determine what the user input is based on the input data, and so on. The modules 1418 may include, for example, the input data collection module 802 and/or the input handler module 804 of fig. 8.
Various techniques may be described herein in the general context of software, hardware components, or program modules. Generally, these modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The terms "module," "functionality," and "component" as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of commercial computing platforms having a variety of processors.
An implementation of the described modules and techniques may be stored on or transmitted across some form of computer readable media. Computer readable media can include a variety of media that can be accessed by computing device 1402. By way of example, and not limitation, computer-readable media may comprise "computer-readable storage media" and "computer-readable signal media".
"computer-readable storage media" refers to media and/or devices for the permanent and/or non-transitory storage of information, as opposed to mere signal transmission, carrier waves, or signals per se. Thus, computer-readable storage media refers to non-signal bearing media. Computer-readable storage media include hardware, such as volatile and nonvolatile, removable and non-removable media and/or storage devices implemented in a method or technology suitable for storage of information, such as computer-readable instructions, data structures, program modules, logic elements/circuits, or other data. Examples of computer readable storage media may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, hard disks, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage devices, tangible media, or articles of manufacture suitable for storing the desired information and accessible by a computer.
"computer-readable signal medium" may refer to a signal-bearing medium configured to transmit instructions to hardware of computing device 1402, such as via a network. Signal media may typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, data signal, or other transport medium. Signal media also include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
As previously mentioned, hardware component 1410 and computer-readable medium 1406 represent modules, programmable logic devices, and/or fixed device logic implemented in hardware, which in some embodiments may be employed to implement at least some aspects of the techniques described herein, such as executing one or more instructions. The hardware may include components in the form of: integrated circuits or systems on a chip, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), and other silicon implementations or other hardware. In this context, hardware may function as a processing device that performs program tasks defined by instructions and/or logic implemented by hardware, as well as hardware for storing executable instructions (e.g., a computer-readable memory medium as described previously).
Combinations of the above components may also be used to implement the various techniques described herein. Thus, software, hardware, or executable modules may be implemented as one or more instructions and/or logic implemented on some form of computer-readable storage medium and/or by one or more hardware components 1410. Computing device 1402 may be configured to implement particular instructions and/or functions corresponding to software and/or hardware modules. Thus, implementations as software modules executable by computing device 1402 may be accomplished, at least in part, in hardware, for example, through the use of computer-readable storage media and/or hardware components 1410 of processing system 1404. The instructions and/or functions may be executed/performed by one or more articles of manufacture (e.g., one or more computing devices 1402 and/or processing systems 1404) to implement the techniques, modules, and examples described herein.
Conclusion
Although example implementations have been described in language specific to structural features and/or methodological acts, it is to be understood that implementations defined in the appended claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed features.

Claims (10)

1. A method, comprising:
receiving (1302) input data for user input, the input data describing a position of an object in both an active display area of an interactive display device, in which data or information is displayed, and an area outside the active display area of the interactive display device, in which no display layer is present; and
determining (1304), based on the input data, the user input comprising a gesture indicating a function to be initiated by a computing device including the interactive display device, the gesture comprising a path taken by the object, the path located at least partially in the area outside of the active display area, the location of the object comprising:
a position of the object in the active display area before the path passes through the area outside the active display area,
a position of the object in the area outside the active display area when the path passes through the area outside the active display area, and
a position of the object in the active display area after the path passes through the area outside the active display area.
2. The method of claim 1, the receiving comprising: receiving the input data in response to an expectation that the user input will be in the active display area.
3. The method of claim 2, wherein the user input is anticipated to be in the active display area in response to the direction of the user input being toward the active display area.
4. The method of claim 2, wherein the user input is expected to be in the active display area at a future time in response to the direction of the user input being toward the active display area and the speed of the user input being greater than a threshold amount.
5. The method of claim 1, the receiving comprising: receiving the input data in response to the user input being in the active display area.
6. A computing device comprising a housing configured in a handheld form factor and a display device (110, 210) supported by the housing and having an active display area (502) in which data or information is displayed, the display device having one or more sensors (604, 704) arranged to sense the position of an object in both the active display area and an area outside the active display area, there being no display layer in the area outside of the active display area, the computing device further comprising an input handler configured to determine, based on the sensed location, a gesture indicating a function of the computing device to be performed, the gesture comprising a path taken by the object, the path being located at least partially in the area outside of the active display area, the location of the object including:
a position of the object in the active display area before the path passes through the area outside the active display area,
a position of the object in the area outside the active display area when the path passes through the area outside the active display area, and
a position of the object in the active display area after the path passes through the area outside the active display area.
7. The computing device of claim 6, at least one of the one or more sensors arranged in an extended sensor region around the active display region such that a location of the object is sensed by the computing device along an edge of the active display region before the location of the object is sensed in the active display region.
8. The computing device of claim 6, the one or more sensors included in an input sensor layer of the display device that extends beyond an active display layer of the display device.
9. The computing device of claim 6, wherein input data describing a location of the object is not provided to the input handler for determining the gesture as long as the object is sensed as being outside of the active display area and the gesture is not expected to be in the active display area in the future.
10. The computing device of claim 6, the area outside of the active display area comprising an extended sensor area around and adjacent to the active display area.
HK14104832.8A 2012-03-02 2014-05-23 Sensing user input at display area edge HK1191705B (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US61/606336 2012-03-02
US61/606333 2012-03-02
US61/606301 2012-03-02
US61/606313 2012-03-02
US61/606321 2012-03-02
US61/607451 2012-03-06
US61/613745 2012-03-21
US13/471376 2012-05-14

Publications (2)

Publication Number Publication Date
HK1191705A HK1191705A (en) 2014-08-01
HK1191705B true HK1191705B (en) 2018-05-11


Similar Documents

Publication Publication Date Title
KR102087456B1 (en) Sensing user input at display area edge
US9400581B2 (en) Touch-sensitive button with two levels
US9035883B2 (en) Systems and methods for modifying virtual keyboards on a user interface
US20100020036A1 (en) Portable electronic device and method of controlling same
US20150007025A1 (en) Apparatus
CN102099768A (en) Haptic feedback for key emulation in touch screens
TW201447741A (en) Feedback for gestures
US20180024638A1 (en) Drive controlling apparatus, electronic device, computer-readable recording medium, and drive controlling method
HK1191705B (en) Sensing user input at display area edge
HK1191705A (en) Sensing user input at display area edge
JP6399216B2 (en) Drive control apparatus, electronic device, drive control program, and drive control method
JP6512299B2 (en) Drive control device, electronic device, drive control program, and drive control method
TW201039203A (en) Electronic device, display device and method for identifying gestures of touch input