HK1177519B - Event recognition - Google Patents
Event recognition
- Publication number: HK1177519B
- Authority: HK (Hong Kong)
- Prior art keywords: event, touch, software application, gesture, sequence
Abstract
A method includes displaying one or more views of a view hierarchy, and executing software elements associated with a particular view. Each particular view includes event recognizers. Each event recognizer has one or more event definitions, and an event handler that specifies an action for a target and is configured to send the action to the target in response to event recognition. The method includes detecting a sequence of sub-events, and identifying one of the views of the view hierarchy as a hit view that establishes which views are actively involved views. The method includes delivering a respective sub-event to event recognizers for each actively involved view. A respective event recognizer has a plurality of event definitions, and one of the event definitions is selected based on an internal state of the device. The respective event recognizer processes the respective sub-event prior to processing a next sub-event in the sequence of sub-events.
Description
Technical Field
The present invention relates generally to user interface processing, including, but not limited to, apparatus and methods of recognizing touch input.
Background
Electronic devices typically include a user interface for interacting with the computing device. The user interface may include a display and/or input devices such as a keyboard, mouse, and touch-sensitive surface for interacting with various aspects of the user interface. In some devices having a touch-sensitive surface as an input device, a first set of touch-based gestures (e.g., two or more of: taps, double taps, horizontal swipes, vertical swipes, pinches, and two-finger swipes) are recognized as suitable inputs in a particular context (e.g., in a particular mode of a first application), and other, different sets of touch-based gestures are recognized as suitable inputs in other contexts (e.g., different applications and/or different modes or contexts within the first application). As a result, the software and logic required to recognize and respond to touch-based gestures may become complex and may require correction each time an application is updated or a new application is added to the computing device. These and similar problems may arise in user interfaces that use input sources other than touch-based gestures.
Accordingly, it is desirable to have an integrated framework or mechanism for recognizing touch-based gestures and events, as well as gestures and events from other input sources, that is easily adaptable to almost all contexts or modes of all applications on a computing device.
Disclosure of Invention
To address the foregoing disadvantages, some embodiments provide a method performed in an electronic device having a touch-sensitive display. The electronic device is configured to execute at least a first software application and a second software application. The first software application includes a first set of one or more gesture recognizers and the second software application includes one or more views and a second set of one or more gesture recognizers. The respective gesture recognizers have corresponding gesture processors. The method includes displaying at least a subset of the one or more views of the second software application and detecting a sequence of touch inputs on the touch-sensitive display while displaying at least the subset of the one or more views of the second software application. The touch input sequence includes a first portion of one or more touch inputs and a second portion of one or more touch inputs following the first portion. The method further comprises, during a first phase of detecting the sequence of touch inputs: delivering the first portion of one or more touch inputs to the first software application and the second software application; identifying, from the gesture recognizers in the first set, one or more matching gesture recognizers that recognize the first portion of one or more touch inputs; and processing the first portion of one or more touch inputs using one or more gesture processors corresponding to the one or more matched gesture recognizers.
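The first-phase behavior above can be sketched as follows. This is a minimal, illustrative Python sketch, not the patented implementation; the `Recognizer` class and its single-input `matches` test are hypothetical simplifications.

```python
class Recognizer:
    """Hypothetical recognizer that matches a fixed opening touch input."""
    def __init__(self, name, opening):
        self.name = name
        self.opening = opening
        self.last_seen = None

    def matches(self, portion):
        # Toy rule: the recognizer matches when the portion opens with
        # the input it is watching for.
        return bool(portion) and portion[0] == self.opening


def first_phase(first_app_recognizers, second_app_recognizers, first_portion):
    """Phase 1: deliver the first portion of the touch sequence to BOTH
    applications, then let only the first set's matching recognizers
    process it with their corresponding gesture processors."""
    # Both applications receive the first portion of the touch inputs.
    for r in first_app_recognizers + second_app_recognizers:
        r.last_seen = list(first_portion)
    # Identify matching gesture recognizers from the first set.
    matched = [r for r in first_app_recognizers if r.matches(first_portion)]
    # Process the first portion with the matched recognizers' processors.
    return [f"{r.name} processed {len(first_portion)} input(s)" for r in matched]
```

The key point this illustrates is that during the first phase both applications see the first portion of the inputs, while only the first set's matching recognizers act on it.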
According to some embodiments, a method performed in an electronic device with a touch-sensitive display is provided. The electronic device is configured to execute at least a first software application and a second software application. The first software application includes a first set of one or more gesture recognizers and the second software application includes one or more views and a second set of one or more gesture recognizers. The respective gesture recognizers have corresponding gesture processors. The method includes displaying a first set of one or more views. The first set of one or more views includes at least a subset of the one or more views of the second software application. The method further comprises the following steps: while displaying the first set of one or more views, a sequence of touch inputs on the touch-sensitive display is detected. The touch input sequence includes a first portion of one or more touch inputs and a second portion of one or more touch inputs following the first portion. The method comprises the following steps: determining whether at least one gesture recognizer of the first set of one or more gesture recognizers recognizes the first portion of one or more touch inputs. The method further comprises the following steps: in accordance with a determination that at least one gesture recognizer of the first set of one or more gesture recognizers recognized the first portion of one or more touch inputs, transmitting the sequence of touch inputs to the first software application without transmitting the sequence of touch inputs to the second software application, and determining whether at least one gesture recognizer of the first set of one or more gesture recognizers recognized the sequence of touch inputs. 
The method further comprises: in accordance with a determination that at least one gesture recognizer of the first set of one or more gesture recognizers recognized the sequence of touch inputs, processing the sequence of touch inputs using the at least one gesture recognizer of the first set of one or more gesture recognizers that recognized the sequence of touch inputs. The method further comprises: in accordance with a determination that no gesture recognizer of the first set of one or more gesture recognizers recognizes the first portion of one or more touch inputs, transmitting the sequence of touch inputs to the second software application and determining whether at least one gesture recognizer of the second set of one or more gesture recognizers recognizes the sequence of touch inputs. The method further comprises: in accordance with a determination that at least one gesture recognizer of the second set of one or more gesture recognizers recognized the sequence of touch inputs, processing the sequence of touch inputs using the at least one gesture recognizer of the second set of one or more gesture recognizers that recognized the sequence of touch inputs.
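The conditional routing in the two paragraphs above can be sketched roughly as follows. The toy `GestureRecognizer` and its mutual-prefix matching rule are illustrative assumptions, not the actual framework: if any recognizer of the first (e.g., hidden) application recognizes the first portion, the whole sequence goes to the first application only; otherwise it goes to the second (displayed) application.

```python
class GestureRecognizer:
    """Toy recognizer: matches while the inputs and its gesture agree
    on their common prefix (i.e., partial or full recognition)."""
    def __init__(self, name, gesture):
        self.name = name
        self.gesture = gesture

    def recognizes(self, inputs):
        return inputs[:len(self.gesture)] == self.gesture[:len(inputs)]

    def process(self, inputs):
        return f"{self.name} handled {len(inputs)} inputs"


def route_touch_sequence(first_set, second_set, sequence, first_portion_len=1):
    """Deliver the sequence to the first application's recognizers if any
    of them recognizes the first portion; otherwise deliver it only to
    the second application's recognizers (never to both)."""
    first_portion = sequence[:first_portion_len]
    target_set = (first_set
                  if any(r.recognizes(first_portion) for r in first_set)
                  else second_set)
    for r in target_set:
        if r.recognizes(sequence):
            return r.process(sequence)
    return None
```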
According to some embodiments, a method performed in an electronic device having an internal state is provided. The electronic device is configured to execute software including a view hierarchy having a plurality of views. The method comprises: displaying one or more views in the view hierarchy and executing one or more software elements. Each software element is associated with a particular view, and each particular view includes one or more event recognizers. Each event recognizer has one or more event definitions based on one or more sub-events, and an event handler that specifies an action for a target and is configured to send the action to the target in response to the event recognizer detecting an event corresponding to a particular event definition of the one or more event definitions. The method further comprises: detecting a sequence of one or more sub-events, and identifying one of the views of the view hierarchy as a hit view. The hit view establishes which views in the view hierarchy are actively involved views. The method further comprises: delivering a respective sub-event to an event recognizer for each actively involved view in the view hierarchy. At least one event recognizer for a view actively involved in the view hierarchy has a plurality of event definitions, and one of the plurality of event definitions is selected in accordance with an internal state of the electronic device. In accordance with the selected event definition, the at least one event recognizer processes the respective sub-event before processing a next sub-event in the sequence of sub-events.
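A rough sketch of this delivery loop, assuming hypothetical `View` and `EventRecognizer` classes (not the patented implementation): each sub-event is fully processed by the recognizers of every actively involved view before the next sub-event is delivered, and each recognizer selects one of its event definitions according to the device's internal state.

```python
class EventRecognizer:
    """Toy recognizer holding multiple event definitions; the active one
    is chosen according to the device's internal state."""
    def __init__(self, definitions):
        self.definitions = definitions  # e.g. {"default": ..., "accessibility": ...}
        self.received = []

    def process(self, sub_event, device_state):
        # Select an event definition based on the internal state,
        # falling back to a default definition.
        active = self.definitions.get(device_state, self.definitions["default"])
        self.received.append((sub_event, active))


class View:
    def __init__(self, recognizers):
        self.recognizers = recognizers


def deliver(sub_events, actively_involved_views, device_state):
    """Deliver each sub-event to the recognizers of every actively
    involved view, fully processing it before the next sub-event."""
    for sub_event in sub_events:                 # strictly in sequence order
        for view in actively_involved_views:
            for recognizer in view.recognizers:
                recognizer.process(sub_event, device_state)
```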
According to some embodiments, a non-transitory computer-readable storage medium stores one or more programs for execution by one or more processors of an electronic device. The one or more programs include instructions that, when executed by the electronic device, cause the electronic device to perform any of the methods described above.
In accordance with some embodiments, an electronic device with a touch-sensitive display includes one or more processors and memory storing one or more programs for execution by the one or more processors. The one or more programs include instructions for implementing any of the methods described above.
According to some embodiments, an electronic device with a touch-sensitive display includes means for implementing any of the methods described above.
According to some embodiments, an information processing apparatus in a multifunction device with a touch-sensitive display includes means for implementing any of the methods described above.
According to some embodiments, an electronic device includes a touch-sensitive display unit configured to receive touch input and a processing unit coupled to the touch-sensitive display unit. The processing unit is configured to execute at least a first software application and a second software application. The first software application includes a first set of one or more gesture recognizers and the second software application includes one or more views and a second set of one or more gesture recognizers. The respective gesture recognizers have corresponding gesture processors. The processing unit is configured to: enabling display of at least a subset of the one or more views of the second software application; detecting a sequence of touch inputs on the touch-sensitive display unit while displaying at least a subset of the one or more views of the second software application. The touch input sequence includes a first portion of one or more touch inputs and a second portion of one or more touch inputs following the first portion. The processing unit is configured to, during a first phase of detecting the sequence of touch inputs: transmitting the first portion of one or more touch inputs to the first software application and the second software application; identifying, from the gesture recognizers in the first set, one or more matched gesture recognizers that recognize the first portion of one or more touch inputs; and processing the first portion of one or more touch inputs with one or more gesture processors corresponding to the one or more matched gesture recognizers.
According to some embodiments, an electronic device includes a touch-sensitive display unit configured to receive touch input and a processing unit coupled to the touch-sensitive display unit. The processing unit is configured to execute at least a first software application and a second software application. The first software application includes a first set of one or more gesture recognizers and the second software application includes one or more views and a second set of one or more gesture recognizers. The respective gesture recognizers have corresponding gesture processors. The processing unit is configured to enable display of a first set of one or more views. The first set of one or more views includes at least a subset of the one or more views of the second software application. The processing unit is configured to, when displaying the first set of one or more views: detecting a sequence of touch inputs on the touch-sensitive display unit (the sequence of touch inputs including a first portion of one or more touch inputs and a second portion of one or more touch inputs subsequent to the first portion); and determining whether at least one gesture recognizer of the first set of one or more gesture recognizers recognizes the first portion of one or more touch inputs. The processing unit is configured to, in accordance with a determination that at least one gesture recognizer of the first set of one or more gesture recognizers recognized the first portion of one or more touch inputs: transmitting the sequence of touch inputs to the first software application without transmitting the sequence of touch inputs to the second software application; and determining whether at least one gesture recognizer of the first set of one or more gesture recognizers recognizes the sequence of touch inputs. 
The processing unit is configured to, in accordance with a determination that at least one gesture recognizer of the first set of one or more gesture recognizers recognized the touch input sequence, process the touch input sequence using the at least one gesture recognizer of the first set of one or more gesture recognizers that recognized the touch input sequence. The processing unit is configured to, in accordance with a determination that no gesture recognizer of the first set of one or more gesture recognizers recognizes the first portion of one or more touch inputs: transmit the sequence of touch inputs to the second software application; determine whether at least one gesture recognizer of the second set of one or more gesture recognizers recognizes the sequence of touch inputs; and, in accordance with a determination that at least one gesture recognizer of the second set of one or more gesture recognizers recognized the sequence of touch inputs, process the sequence of touch inputs using the at least one gesture recognizer of the second set of one or more gesture recognizers that recognized the sequence of touch inputs.
According to some embodiments, an electronic device comprises: a display unit configured to display one or more views; a memory unit configured to store an internal state; and a processing unit coupled to the display unit and the memory unit. The processing unit is configured to: execute software comprising a view hierarchy having a plurality of views; enable display of one or more views of the view hierarchy; and execute one or more software elements. Each software element is associated with a particular view, and each particular view includes one or more event recognizers. Each event recognizer has: one or more event definitions based on one or more sub-events, and an event handler. The event handler specifies an action for a target and is configured to send the action to the target in response to the event recognizer detecting an event corresponding to a particular event definition of the one or more event definitions. The processing unit is configured to: detect a sequence of one or more sub-events; and identify one of the views of the view hierarchy as a hit view. The hit view establishes which views in the view hierarchy are actively involved views. The processing unit is configured to deliver a respective sub-event to an event recognizer for each actively involved view in the view hierarchy. At least one event recognizer for a view actively involved in the view hierarchy has a plurality of event definitions, one of which is selected in accordance with an internal state of the electronic device, and, in accordance with the selected event definition, the at least one event recognizer processes the respective sub-event before processing a next sub-event in the sequence of sub-events.
Drawings
FIGS. 1A-1C are block diagrams illustrating electronic devices according to some embodiments.
Fig. 2 is a diagram of an input/output processing stack of an example electronic device, according to some embodiments.
FIG. 3A illustrates an example view hierarchy in accordance with some embodiments.
Fig. 3B and 3C are block diagrams illustrating example event recognizer methods and data structures, according to some embodiments.
FIG. 3D is a block diagram illustrating example components for event processing, according to some embodiments.
FIG. 3E is a block diagram illustrating example classes and instances of a gesture recognizer in accordance with some embodiments.
FIG. 3F is a block diagram illustrating event information flow according to some embodiments.
Fig. 4A and 4B are flowcharts illustrating example state machines in accordance with some embodiments.
FIG. 4C illustrates the example state machines of FIGS. 4A and 4B operating on an example set of sub-events, in accordance with some embodiments.
FIGS. 5A-5C illustrate example event sequences with an example event recognizer state machine, according to some embodiments.
Fig. 6A and 6B are flow diagrams of event recognition methods according to some embodiments.
FIGS. 7A-7S illustrate example user interfaces and user inputs recognized by an event recognizer for navigating through concurrently open applications, according to some embodiments.
Fig. 8A and 8B are flow diagrams illustrating an event recognition method according to some embodiments.
FIGS. 9A-9C are flow diagrams illustrating an event recognition method according to some embodiments.
FIGS. 10A and 10B are flow diagrams illustrating an event recognition method according to some embodiments.
FIG. 11 is a functional block diagram of an electronic device according to some embodiments.
Fig. 12 is a functional block diagram of an electronic device according to some embodiments.
FIG. 13 is a functional block diagram of an electronic device according to some embodiments.
Like reference numerals refer to corresponding parts throughout the drawings.
Detailed Description
Electronic devices with small screens (e.g., smart phones and tablet computers) typically display a single application at a time, even though multiple applications may be running on the device. Many of these devices have touch-sensitive displays that are configured to receive gestures as touch inputs. For such devices, a user may want to perform operations provided by a hidden application (e.g., an application that runs in the background and is not simultaneously displayed on a display of the electronic device, such as an application launcher software application running in the background). Existing methods for performing the operations provided by a hidden application typically require that the hidden application first be displayed and that touch input then be provided to the now-displayed application. Thus, existing methods require additional steps. Further, the user may not want to see the hidden application but may still want to perform the operations it provides. In the embodiments described below, an improved method for interacting with a hidden application is achieved by sending touch input to the hidden application and processing the touch input using the hidden application, without displaying the hidden application. Thus, these methods streamline interaction with hidden applications, eliminating the need for an additional, separate step of displaying the hidden application while providing the ability to interact with and control hidden applications based on gesture input.
Additionally, in some embodiments, the electronic devices have at least one gesture recognizer with multiple gesture definitions. This allows the gesture recognizer to work in distinct operating modes. For example, the device may have a normal operating mode and an accessibility operating mode (e.g., for people with impaired vision). In the normal operating mode, a next-application gesture is used to move between applications, and the next-application gesture is defined as a three-finger left-swipe gesture. In the accessibility operating mode, the three-finger left-swipe gesture is used to perform a different function. Thus, a gesture different from the three-finger left swipe is needed in the accessibility operating mode to correspond to the next-application gesture (e.g., a four-finger left-swipe gesture). By associating multiple gesture definitions with the next-application gesture, the device can select one of the gesture definitions for the next-application gesture depending on the current operating mode. This provides flexibility in using the gesture recognizer in different operating modes. In some embodiments, multiple gesture recognizers with multiple gesture definitions are adjusted depending on the operating mode (e.g., a gesture performed with three fingers in the normal operating mode is performed with four fingers in the accessibility operating mode).
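The mode-dependent selection described above can be expressed as a simple lookup. This is an illustrative sketch only; the dictionary keys and gesture fields are assumptions, not from the patent.

```python
# Hypothetical gesture definitions keyed by operating mode: the same
# "next application" gesture uses three fingers in the normal mode and
# four fingers in the accessibility mode (which reserves three-finger
# swipes for other functions).
NEXT_APP_GESTURE = {
    "normal": {"fingers": 3, "direction": "left"},
    "accessibility": {"fingers": 4, "direction": "left"},
}

def definition_for_mode(definitions, mode):
    """Select the active gesture definition for the current operating mode."""
    return definitions[mode]
```

A single recognizer thus keeps both definitions and simply consults the current mode, rather than requiring two separate recognizers.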
FIGS. 1A-1C and FIG. 2 provide a description of example devices. FIGS. 3A-3F depict components for event processing and the operation of such components (e.g., event information flow). FIGS. 4A-4C and 5A-5C describe the operation of event recognizers in more detail. FIGS. 6A and 6B are flow diagrams illustrating an event recognition method. FIGS. 7A-7S are example user interfaces illustrating operations using the event recognition methods of FIGS. 8A-8B, 9A-9C, and 10A-10B. FIGS. 8A and 8B are flow diagrams illustrating an event recognition method for processing event information using a gesture processor of a hidden open application. FIGS. 9A-9C are flow diagrams illustrating an event recognition method for conditionally processing event information using a gesture recognizer of a hidden open application or a displayed application. FIGS. 10A and 10B are flow diagrams illustrating an event recognition method for selecting an event definition from a plurality of event definitions for a single event recognizer.
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms first, second, etc. may be used herein to refer to various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact may be termed a second contact, and, similarly, a second contact may be termed a first contact, without departing from the scope of the present invention. The first contact and the second contact are both contacts, but they are not the same contact.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or," as used herein, refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" may be interpreted, depending on the context, to mean "when" or "upon" or "in response to determining" or "in response to detecting." Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]."
As used herein, the term "event" refers to an input detected by one or more sensors of a device. In particular, the term "event" includes a touch on a touch-sensitive surface. An event includes one or more sub-events. A sub-event generally refers to a change to an event (e.g., a touch down, a touch move, or a touch lift-off may each be a sub-event). Sub-events in the sequence of one or more sub-events may take many forms, including, but not limited to, pressing a key, holding a key, releasing a key, pressing a button, holding a button, releasing a button, moving a joystick, moving a mouse, pressing a mouse button, releasing a mouse button, touching a stylus, moving a stylus, releasing a stylus, verbal instructions, detected eye movement, biometric input, detected physiological changes of the user, and others. Since an event may include a single sub-event (e.g., a short lateral motion of the device), the term "sub-event" as used herein also refers to an event.
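As an illustration of these definitions, a touch event can be represented as a short sequence of sub-event records. This is a hypothetical Python representation, not the framework's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class SubEvent:
    phase: str        # "down", "move", or "up"
    touch_id: int     # which finger produced the sub-event
    position: tuple   # (x, y) on the touch-sensitive surface

# A single tap expressed as an event: a sequence of two sub-events,
# a touch down followed by a touch up by the same finger.
tap = [SubEvent("down", 1, (10, 20)), SubEvent("up", 1, (10, 20))]
```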
As used herein, the terms "event recognizer" and "gesture recognizer" are used interchangeably to refer to a recognizer that can recognize gestures or other events (e.g., motion of a device). As used herein, the terms "event handler" and "gesture handler" are used interchangeably to refer to a processor that performs a predetermined set of operations (e.g., updating data, updating objects, and/or updating a display) in response to recognition of an event/sub-event or gesture.
As described above, in some devices having a touch-sensitive surface as an input device, a first set of touch-based gestures (e.g., two or more: tap, double tap, horizontal swipe, vertical swipe) are recognized as suitable inputs in a particular context (e.g., in a particular mode of a first application), and a different set of touch-based gestures are recognized as suitable inputs in other contexts (e.g., different applications and/or different modes or contexts within the first application). As a result, the software and logic required for recognizing and responding to touch-based gestures may become complex and may require correction each time an application is updated or a new application is added to the computing device. Embodiments described herein address these issues by providing a comprehensive framework for processing event and/or gesture inputs.
In the embodiments described below, touch-based gestures are events. Upon recognition of a predefined event, such as an event corresponding to a suitable input in the current context of an application, information relating to the event is sent to the application. Further, each respective event is defined as a sequence of sub-events. In devices having a multi-touch display (generally referred to herein as a "screen") or other multi-touch-sensitive surface that accept multi-touch-based gestures, the sub-events that define a touch-based gesture event may include multi-touch sub-events (which require two or more fingers to be simultaneously in contact with the device's touch-sensitive surface). For example, in a device with a multi-touch-sensitive display, a respective multi-touch sequence of sub-events may begin when a user's finger first touches the screen. Additional sub-events may occur when one or more additional fingers subsequently or simultaneously touch the screen, and further sub-events may occur when all of the fingers touching the screen move across the screen. The sequence ends when the last of the user's fingers is lifted off the screen.
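A sketch of how such a multi-touch sequence might be delimited: an event begins when the first finger touches down and is complete once the last finger lifts off. This is illustrative only; the tuple representation of sub-events is an assumption.

```python
def split_multitouch_events(sub_events):
    """Group a flat stream of (phase, touch_id) sub-events into events.

    An event starts when the first finger touches down and ends when the
    last finger still on the screen is lifted (the active set empties)."""
    events, current, active = [], [], set()
    for phase, touch_id in sub_events:
        current.append((phase, touch_id))
        if phase == "down":
            active.add(touch_id)
        elif phase == "up":
            active.discard(touch_id)
            if not active:          # last finger lifted: event complete
                events.append(current)
                current = []
    return events
```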
When touch-based gestures are used to control an application running in a device having a touch-sensitive surface, touches have both temporal and spatial aspects. The temporal aspect, referred to as the phase, indicates when a touch begins, whether the touch is moving or stationary, and when the touch ends (i.e., when the finger lifts off the screen). The spatial aspect of a touch is the set of views or user interface windows in which the touch occurs. The view or window in which a touch is detected may correspond to a program level within a program or view hierarchy. For example, the lowest-level view in which a touch is detected may be referred to as the hit view, and the set of events recognized as suitable inputs may be determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture. Alternatively or additionally, events are recognized as suitable inputs based, at least in part, on one or more software programs (i.e., software applications) in a program hierarchy. For example, a five-finger pinch gesture is recognized as a suitable input in an application launcher having a five-finger pinch gesture recognizer, but not in a web browser application that does not have a five-finger pinch gesture recognizer.
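Hit-view determination can be sketched as a recursive hit-test over the view hierarchy, returning the lowest (deepest) view that contains the touch point. This is an illustrative sketch; the `View` class and its frame representation are assumptions.

```python
class View:
    def __init__(self, name, frame, children=()):
        self.name = name
        self.frame = frame          # (x, y, width, height)
        self.children = list(children)

    def contains(self, point):
        x, y, w, h = self.frame
        px, py = point
        return x <= px < x + w and y <= py < y + h


def hit_view(view, point):
    """Return the lowest view in the hierarchy containing the point:
    recurse into children first, falling back to the parent view."""
    if not view.contains(point):
        return None
    for child in view.children:
        found = hit_view(child, point)
        if found is not None:
            return found
    return view
```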
FIGS. 1A-1C are block diagrams illustrating different embodiments of an electronic device 102 according to some embodiments. The electronic device 102 may be any electronic device including, but not limited to, a desktop computer system, a laptop computer system, a mobile phone, a smart phone, a personal digital assistant, or a navigation system. The electronic device 102 may also be a portable electronic device having a touch screen display (e.g., touch-sensitive display 156, FIG. 1B) configured to present a user interface, a computer having a touch screen display configured to present a user interface, a computer having a touch-sensitive surface and a display configured to present a user interface, and any other form of computing device, including but not limited to, consumer electronic devices, mobile phones, video game systems, electronic music players, tablet PCs, electronic book reading systems, electronic books, PDAs, electronic organizers, email devices, laptop or other computers, kiosk computers, vending machines, smart appliances, and the like. The electronic device 102 includes a user interface 113.
In some embodiments, the electronic device 102 includes a touch-sensitive display 156 (FIG. 1B). In these embodiments, the user interface 113 may include an on-screen keyboard (not shown) for interaction with the electronic device 102 by a user. In some embodiments, the electronic device 102 also includes one or more input devices 128 (e.g., a keyboard, a mouse, a trackball, a microphone, physical buttons, a touchpad, and so forth). In some embodiments, touch-sensitive display 156 is capable of detecting two or more different, simultaneous (or partially simultaneous) touches, and in these embodiments, display 156 is sometimes referred to herein as a multi-touch display or a multi-touch sensitive display. In some embodiments, the keyboard of the one or more input devices 128 may be separate and distinct from the electronic device 102. For example, the keyboard may be a wired or wireless keyboard coupled to the electronic device 102.
In some embodiments, the electronic device 102 includes a display 126 and one or more input devices 128 (e.g., keyboard, mouse, trackball, microphone, physical buttons, touchpad, trackpad, and the like) coupled to the electronic device 102. In these embodiments, one or more of the input devices 128 may optionally be separate and distinct from the electronic device 102. For example, the one or more input devices may include one or more of the following: a keyboard, a mouse, a track pad, a track ball, and an electronic pen, any of which may be selectively separable from the electronic device. Optionally, device 102 may include one or more sensors 116, such as one or more accelerometers, gyroscopes, GPS systems, speakers, Infrared (IR) sensors, biometric sensors, cameras, and so forth. It should be noted that the above description of various example devices as input devices 128 or as sensors 116 has no material effect on the operation of the embodiments described herein. And any input or sensor device described herein as an input device may well be equivalently described as a sensor, and vice versa. In some embodiments, the signals generated by the one or more sensors 116 are used as an input source for detecting events.
In some embodiments, the electronic device 102 includes a touch-sensitive display 156 (i.e., a display having a touch-sensitive surface) and one or more input devices 128 (fig. 1B) coupled to the electronic device 102. In some embodiments, touch-sensitive display 156 is capable of detecting two or more different simultaneous (or partially simultaneous) touches, and in these embodiments, display 156 is sometimes referred to herein as a multi-touch display or a multi-touch-sensitive display.
In some embodiments of the electronic device 102 discussed herein, the input device 128 is disposed in the electronic device 102. In other embodiments, one or more of the input devices 128 are separate and distinct from the electronic device 102. For example, one or more of the input devices 128 may be coupled to the electronic device 102 via a cable (e.g., a USB cable) or a wireless connection (e.g., a bluetooth connection).
When using the input device 128, or when performing touch-based gestures on the touch-sensitive display 156 of the electronic device 102, the user generates a sequence of sub-events that are processed by one or more CPUs 110 of the electronic device 102. In some embodiments, one or more CPUs 110 of the electronic device 102 process the sequence of sub-events to identify an event.
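The reduction of a sub-event sequence to a recognized event can be illustrated with a minimal sketch. The class names and the distance threshold below are illustrative assumptions, not part of the described embodiments:

```python
from dataclasses import dataclass

@dataclass
class SubEvent:
    phase: str   # "begin", "move", or "end"
    x: float
    y: float

def recognize(sub_events):
    """Classify a completed sub-event sequence as a 'tap' or a 'swipe'."""
    if not sub_events or sub_events[0].phase != "begin" or sub_events[-1].phase != "end":
        return None  # not a complete sequence; no event identified
    dx = sub_events[-1].x - sub_events[0].x
    dy = sub_events[-1].y - sub_events[0].y
    # Small total displacement -> tap; otherwise treat as a swipe.
    return "tap" if (dx * dx + dy * dy) ** 0.5 < 10 else "swipe"
```

In this sketch, the same incoming sequence of sub-events yields different events depending only on the positions recorded over time, mirroring how one user interaction produces a stream of sub-events that the CPUs 110 later interpret.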
The electronic device 102 typically includes one or more single-core or multi-core processing units (CPUs) 110 and one or more network or other communication interfaces 112. The electronic device 102 includes a memory 111 and one or more communication buses 115 for interconnecting these components. The communication buses 115 may include circuitry (sometimes referred to as a chipset) that interconnects and controls communications between system components (not shown herein). As described above, the electronic device 102 includes the user interface 113, which includes a display (e.g., the display 126 or the touch-sensitive display 156). Further, the electronic device 102 typically includes input devices 128 (e.g., a keyboard, mouse, touch-sensitive surface, keypad, etc.). In some embodiments, the input devices 128 comprise an on-screen input device (e.g., a touch-sensitive surface of a display device). The memory 111 may include high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. The memory 111 may optionally include one or more storage devices located remotely from the CPUs 110. The memory 111, or alternatively the non-volatile memory devices within the memory 111, comprises a computer-readable storage medium. In some embodiments, the memory 111, or the non-volatile memory devices within the memory 111, comprises a non-transitory computer-readable storage medium. In some embodiments, the memory 111 (of the electronic device 102), or the computer-readable storage medium of the memory 111, stores the following programs, modules, and data structures, or a subset thereof:
an operating system 118, including procedures for handling various basic system services and for performing hardware-dependent tasks;
an accessibility module 127 (FIG. 1C) for modifying the behavior of one or more software applications in the application software 124, or modifying data from the touch-sensitive display 156 or input device 128, to improve the accessibility of one or more software applications in the application software 124, or of content (e.g., web pages) displayed therein (e.g., for visually impaired or mobility-limited persons);
a communication module 120 for connecting the electronic device 102 to other devices via one or more respective communication interfaces 112 (wired or wireless) and one or more communication networks such as the internet, other wide area networks, local area networks, metropolitan area networks, etc.;
a user interface module 123 (FIG. 1C) for displaying a user interface including user interface objects on the display 126 or touch-sensitive display 156;
a control application 132 (FIG. 1C) for controlling a process (e.g., hit view determination, thread management, and/or event monitoring, etc.); in some embodiments, the control application 132 comprises a running application; in other embodiments, the running applications include the control application 132;
an event delivery system 122, which may be implemented in various alternative embodiments within the operating system 118 or in the application software 124; in some embodiments, however, some aspects of the event delivery system 122 are implemented in the operating system 118 while other aspects are implemented in the application software 124;
application software 124, including one or more software applications (e.g., applications 133-1, 133-2, and 133-3 in FIG. 1C, each of which may be one of an email application, a web browser application, a notepad application, a text messaging application, etc.); the respective software application typically has, at least during execution, an application state that indicates a state of the respective software application and its components (e.g., gesture recognizer); see application internal state 321 (FIG. 3D) described below; and
device/global internal state 134 (FIG. 1C), including one or more of the following: an application state indicating the state of the software applications and their components (e.g., gesture recognizers and delegates); a display state indicating what applications, views, or other information occupy various regions of the touch-sensitive display 156 or display 126; a sensor state, including information obtained from the device's various sensors 116, input devices 128, and/or touch-sensitive display 156; location information concerning the location and/or attitude of the device; and other states.
As used in the specification and claims, the term "open application" refers to a software application that has retained state information (e.g., as part of device/global internal state 134 and/or application internal state 321 (FIG. 3D)). An open application is one of the following types of applications:
an active application that is currently displayed on the display 126 or touch-sensitive display 156 (or a corresponding application view is currently displayed on the display);
a background application (or background process) that is not currently displayed on the display 126 or the touch-sensitive display 156, but one or more application processes (e.g., instructions) for the corresponding application are being processed (e.g., run) by the one or more processors 110;
a suspended application that is not currently running and that is stored in volatile memory (e.g., DRAM, SRAM, DDR RAM, or other volatile random access solid state memory devices of the memory 111); and
a dormant application that is not currently running and that is stored in non-volatile memory (e.g., one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices of the memory 111).
As used herein, the term "closed application" refers to a software application that does not have state information retained (e.g., the state information of the closed application is not stored in the memory of the device). Accordingly, closing an application includes stopping and/or removing the application process of the application and removing the state information of the application from the memory of the device. Generally, opening the second application while in the first application does not close the first application. When the first application stops displaying while the second application is displaying, the first application, which was active at the time of displaying, may become a background application, a suspended application, or a dormant application, but when its state information is retained by the device, the first application is still an open application.
Each of the identified elements described above may be stored in one or more of the aforementioned memory devices. Each of the identified modules, applications, or system elements described above corresponds to a set of instructions for performing a function described herein. The set of instructions may be executed by one or more processors (e.g., one or more CPUs 110). The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, memory 111 may store a subset of the modules and data structures identified above. Further, memory 111 may store additional modules and data structures not described above.
Fig. 2 is a diagram of an input/output processing stack 200 of an example electronic device or apparatus (e.g., device 102) according to some embodiments of the invention. Hardware (e.g., electronic circuitry) 212 of the device is at the base layer of the input/output processing stack 200. The hardware 212 may include various hardware interface components, such as those described in fig. 1A and/or 1B. The hardware 212 may also include one or more of the sensors 116 described above. At least some of the other elements (202-210) of the input/output processing stack 200 are software processes or partial software processes that process inputs received from hardware 212 and generate various outputs presented through a hardware user interface (e.g., one or more of a display, a speaker, a device vibration actuator, etc.).
A driver or set of drivers 210 communicates with the hardware 212. The drivers 210 may receive and process input data received from the hardware 212. A core Operating System (OS) 208 may communicate with the drivers 210. The core OS 208 may process raw input data received from the drivers 210. In some embodiments, the drivers 210 may be considered part of the core OS 208.
A set of OS application programming interfaces ("OS APIs") 206 are software processes that communicate with the core OS 208. In some embodiments, the OS APIs 206 are included in the device's operating system, but at a level above the core OS 208. The OS APIs 206 are designed for use by applications running on the electronic devices or apparatuses discussed herein. User Interface (UI) APIs 204 may use the OS APIs 206. Application software ("applications") 202 running on the device may use the UI APIs 204 in order to communicate with the user. The UI APIs 204 may, in turn, communicate with lower-level elements, ultimately communicating with various user interface hardware (e.g., the multi-touch display 156). In some embodiments, the application software 202 comprises applications in the application software 124 (FIG. 1A).
Although each layer of the input/output processing stack 200 may use the layer beneath it, this is not always required. For example, in some embodiments, applications 202 may communicate directly with the OS APIs 206. Generally, layers at or above the OS API layer 206 may not directly access the core OS 208, drivers 210, or hardware 212, as these layers are considered private. Applications in layer 202 and the UI APIs 204 typically direct their calls to the OS APIs 206, which in turn access the core OS 208, drivers 210, and hardware 212 layers.
In other words, one or more hardware elements 212 of the electronic device 102 and the software that is active on the device detect input events (which may correspond to sub-events in gestures) at one or more input devices 128 and/or the touch-sensitive display 156 and generate or update various data structures (stored in the memory 111 of the device 102) that are used by the current active event recognizer set to determine whether and when the input events correspond to events to be sent to the application 124. Embodiments of event recognition methods, apparatus, and computer program products are described in more detail below.
FIG. 3A depicts an example view hierarchy 300, which view hierarchy 300 is in this example a search program displayed in an outermost view 302. The outermost view 302 generally includes the entire user interface with which a user can directly interact, and includes dependent views, e.g.,
search results panel 304, which groups search results and can scroll vertically;
a search field 306, which accepts text input; and
home row 310, which groups applications for quick access.
In this example, each dependent view includes lower-level dependent views. In other examples, the number of view levels in the hierarchy 300 may differ in different branches of the hierarchy, with one or more dependent views having lower-level dependent views and one or more other dependent views not having any such lower-level dependent views. Continuing with the example shown in FIG. 3A, the search results panel 304 contains a separate dependent view 305 (dependent on panel 304) for each search result. Here, the example shows one search result in a dependent view called the map view 305. The search field 306 includes a dependent view, referred to herein as the clear content icon view 307, which clears the contents of the search field when the user performs a particular action (e.g., a single touch or tap gesture) on the clear content icon in the view 307. The home row 310 includes the dependent views 310-1, 310-2, 310-3, and 310-4, which correspond to a contacts application, an email application, a web browser, and an iPod music interface, respectively.
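The nested relationships of FIG. 3A can be illustrated as a simple tree of views. The class and field names below are illustrative assumptions rather than structures from the described embodiments:

```python
# Hypothetical sketch of the view hierarchy 300 in FIG. 3A as nested nodes.
class View:
    def __init__(self, name, children=()):
        self.name = name
        self.parent = None  # overwritten when this view is attached to a parent
        self.children = list(children)
        for child in self.children:
            child.parent = self

outermost = View("outermost 302", [
    View("search results panel 304", [View("map view 305")]),
    View("search field 306", [View("clear content icon 307")]),
    View("home row 310", [View("contacts 310-1"), View("email 310-2"),
                          View("browser 310-3"), View("music 310-4")]),
])
```

Each dependent view keeps a reference to its parent, which is what later allows ancestor views (such as the search results panel 304 for the map view 305) to be located when determining actively involved views.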
Touch sub-event 301-1 is represented in the outermost view 302. Given that touch sub-event 301-1 is located over both the search results panel 304 and the map view 305, the touch sub-event is also represented as 301-2 and 301-3 on the search results panel 304 and the map view 305, respectively. The views actively involved in the touch sub-event include the search results panel 304, the map view 305, and the outermost view 302. Additional information regarding sub-event delivery and actively involved views is provided below with reference to FIGS. 3B and 3C.
Views (and corresponding program levels) may be nested. In other words, one view may include other views. Thus, the software element (e.g., event recognizer) associated with the first view may include or be linked to one or more software elements associated with the views in the first view. While some views may be associated with an application, other views may be associated with high-level OS elements (e.g., graphical user interfaces, window managers, etc.). In some embodiments, some views are associated with other OS elements. In some embodiments, the view hierarchy includes views from multiple software applications. For example, the view hierarchy may include a view from an application launcher (e.g., a home screen) and a view from a web browser application (e.g., a view that includes web page content).
The program hierarchy includes one or more software elements or software applications in the hierarchy. To simplify the discussion that follows, only views and view hierarchies will generally be referred to, but it must be understood that in some embodiments, the method may work with a program hierarchy and/or view hierarchy having multiple program layers.
FIGS. 3B and 3C depict example methods and structures associated with event recognizers. FIG. 3B depicts methods and data structures for event handling when an event recognizer is associated with a particular view within a hierarchy of views. FIG. 3C depicts methods and data structures for event handling when an event recognizer is associated with a particular level within a hierarchy of program levels. Event recognizer global methods 312 and 350 include hit view and hit level determination modules 314 and 352, active event recognizer determination modules 316 and 354, and sub-event delivery modules 318 and 356, respectively.
In some embodiments, the electronic device 102 includes one or more of the following: event recognizer global methods 312 and 350. In some embodiments, the electronic device 102 includes one or more of the following: the hit view determination module 314 and the hit level determination module 352. In some embodiments, the electronic device 102 includes one or more of the following: the active event recognizer determination modules 316 and 354. In some embodiments, the electronic device 102 includes one or more of the following: the sub-event delivery modules 318 and 356. In some embodiments, one or more of these methods or modules are included in fewer or more methods or modules. For example, in some embodiments, the electronic device 102 includes a hit view/level determination module that incorporates the functionality of the hit view determination module 314 and the hit level determination module 352. In some embodiments, an active event recognizer determination module included in the electronic device 102 incorporates the functionality of the active event recognizer determination modules 316 and 354.
The hit view and hit level determination modules 314 and 352, respectively, provide software procedures for determining where a sub-event has taken place within one or more views (e.g., the example view hierarchy 300 depicted in FIG. 3A, which has three main branches) and/or which software element(s) in a program hierarchy correspond to the sub-event (e.g., one or more of the applications 133 in FIG. 1C).
The hit view determination module 314 of FIG. 3B receives information related to a sub-event (e.g., a user touch on a search result (map view 305) in the search results panel 304, represented as 301-1 on the outermost view 302). The hit view determination module 314 identifies a hit view as the lowest view in the hierarchy that should handle the sub-event. In most circumstances, the hit view is the lowest-level view in which the initiating sub-event (i.e., the first sub-event in the sequence of sub-events that form an event or potential event) occurs. In some embodiments, once the hit view is identified, it will receive all sub-events related to the same touch or input source for which it was identified as the hit view. In some embodiments, one or more other views (e.g., a default or predefined view) receive at least some of the sub-events that the hit view receives.
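The hit-view determination above amounts to a depth-first search for the deepest view whose bounds contain the initial touch location. The following is an illustrative sketch only; the dictionary layout, frame representation, and view names are assumptions, not structures from the described embodiments:

```python
# Hypothetical hit-view determination: return the deepest view in the
# subtree whose frame (x, y, width, height) contains the point, or None.
def hit_view(view, x, y):
    vx, vy, w, h = view["frame"]
    if not (vx <= x < vx + w and vy <= y < vy + h):
        return None
    # Prefer a hit in a child over the parent: recurse depth-first.
    for child in view.get("children", []):
        found = hit_view(child, x, y)
        if found is not None:
            return found
    return view

root = {"name": "outermost", "frame": (0, 0, 320, 480), "children": [
    {"name": "results panel", "frame": (0, 60, 320, 300), "children": [
        {"name": "map view", "frame": (10, 80, 300, 200)}]}]}
```

A touch that lands inside the map view's frame yields the map view as the hit view, even though the same point also lies inside its ancestors, matching the "lowest view in the hierarchy" rule described above.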
In some embodiments, the hit level determination module 352 of FIG. 3C may utilize an analogous process. For example, in some embodiments, the hit level determination module 352 identifies a hit level as the lowest level in the program hierarchy (or a software application in the lowest program level in the program hierarchy) that should handle the sub-event. In some embodiments, once the hit level is identified, the hit level, or a software application in the hit level, will receive all sub-events related to the same touch or input source for which the hit level was identified. In some embodiments, one or more other levels or software applications (e.g., a default or predefined software application) receive at least some of the sub-events that the hit level receives.
The active event recognizer determination modules 316 and 354 of the event recognizer global methods 312 and 350, respectively, determine which view or views within a view hierarchy and/or program hierarchy should receive a particular sequence of sub-events. FIG. 3A depicts an example set of actively involved views, 302, 304, and 305, that receive the sub-event 301. In the example of FIG. 3A, the active event recognizer determination module 316 would determine that the outermost view 302, the search results panel 304, and the map view 305 are actively involved views, because these views include the physical location of the touch represented by sub-event 301. It is noted that even if the touch sub-event 301 were entirely confined to the area associated with the map view 305, the search results panel 304 and the outermost view 302 would remain actively involved views, because the search results panel 304 and the outermost view 302 are ancestors of the map view 305.
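Since the ancestors of the hit view remain involved, the actively involved views are simply the chain from the hit view up to the root. A minimal sketch, with illustrative class and view names that are assumptions rather than structures from the described embodiments:

```python
# Hypothetical sketch: the actively involved views are the hit view and
# all of its ancestors, per the FIG. 3A example.
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

outermost = Node("outermost 302")
panel = Node("search results panel 304", outermost)
map_view = Node("map view 305", panel)

def actively_involved(view):
    """Walk parent links from the hit view to the root."""
    chain = []
    while view is not None:
        chain.append(view.name)
        view = view.parent
    return chain
```

With the map view 305 as the hit view, the chain contains the map view, the search results panel 304, and the outermost view 302, exactly the set described above.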
In some embodiments, the active event recognizer determination modules 316 and 354 utilize analogous processes. In the example of FIG. 3A, the active event recognizer determination module 354 would determine that the map application is actively involved because the views of the map application are displayed and/or include the physical location of the touch represented by sub-event 301. It is noted that even if the touch sub-event 301 were entirely confined to the area associated with the map application, other applications in the program hierarchy might remain actively involved applications (or applications in actively involved program levels).
The sub-event delivery module 318 delivers sub-events to event recognizers for actively involved views. Using the example of FIG. 3A, a user's touch is represented in different views of the hierarchy by touch marks 301-1, 301-2, and 301-3. In some embodiments, sub-event data representing the user's touch is delivered by the sub-event delivery module 318 to the event recognizers at the actively involved views, i.e., the outermost view 302, the search results panel 304, and the map view 305. Further, the event recognizers of a view may receive the sequence of sub-events of an event that begins in that view (e.g., when an initial sub-event occurs within the view). Stated differently, a view may receive sub-events associated with a user interaction that begins in the view, even if the interaction continues outside of that view.
In some embodiments, the sub-event delivery module 356 delivers sub-events to event recognizers for actively involved program levels in a process analogous to that used by the sub-event delivery module 318. For example, the sub-event delivery module 356 delivers sub-events to event recognizers for actively involved applications. Using the example of FIG. 3A, the user's touch 301 is delivered by the sub-event delivery module 356 to the event recognizers at the actively involved applications (e.g., the map application and any other actively involved applications in the program hierarchy). In some embodiments, a default or predefined software application is included in the program hierarchy by default.
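Sub-event delivery can be sketched as fanning the same sub-event out to every recognizer attached to an actively involved view. The data layout and names below are illustrative assumptions, not structures prescribed by the described embodiments:

```python
# Hypothetical sketch of a sub-event delivery module: each sub-event is
# handed to the event recognizers of every actively involved view.
def deliver(sub_event, actively_involved_views):
    """Return (view_name, recognizer_name) pairs that received the sub-event."""
    delivered = []
    for view in actively_involved_views:
        for recognizer in view.get("recognizers", []):
            delivered.append((view["name"], recognizer))
    return delivered

views = [{"name": "map view 305", "recognizers": ["tap"]},
         {"name": "search results panel 304", "recognizers": ["scroll"]},
         {"name": "outermost 302", "recognizers": []}]
```

Note that a view with no attached recognizers (here the outermost view 302) is still actively involved; it simply contributes no recognizers to the delivery.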
In some embodiments, a separate event recognizer structure 320 or 360 is generated and stored in the memory of the device for each actively involved event recognizer. The event recognizer structures 320 and 360 typically include an event recognizer state 334, 374, respectively (discussed in greater detail below with reference to FIGS. 4A and 4B), and event recognizer-specific code 338, 378, respectively, having state machines 340, 380. The event recognizer structure 320 also includes a view hierarchy reference 336, while the event recognizer structure 360 includes a program hierarchy reference 376. Each instance of a particular event recognizer references exactly one view or program level. The view hierarchy reference 336 or the program hierarchy reference 376 (for a particular event recognizer) is used to establish which view or program level is logically coupled to the respective event recognizer.
The view metadata 341 and the level metadata 381 may include data regarding views or levels, respectively. The view or level metadata may include at least the following characteristics that may affect the transmission of sub-events to the event recognizer:
a stop feature 342, 382 that, when set for a view or program level, prevents a sub-event from being passed to an event recognizer associated with the view or program level and an ancestor of the view or program level in the view or program hierarchy.
a skip property 343, 383, which, when set for a view or program level, prevents sub-event delivery to event recognizers associated with that view or program level, but permits sub-event delivery to its ancestors in the view or program hierarchy.
a no-hit skip property 344, 384, which, when set for a view, prevents sub-event delivery to event recognizers associated with the view unless the view is the hit view. As described above, the hit view determination module 314 identifies a hit view (or a hit level, in the case of the hit level determination module 352) as the lowest view in the hierarchy that should handle the sub-event.
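The effect of these three properties on delivery along the hit view's ancestor chain might be sketched as follows. The field names and data layout are illustrative assumptions; the patent describes the properties abstractly:

```python
# Hypothetical sketch: `chain` runs from the hit view to the root; return
# the names of views whose event recognizers receive the sub-event.
def views_receiving_subevent(chain, hit_view_name):
    receivers = []
    for view in chain:
        if view.get("stop"):
            # Stop property: neither this view nor any ancestor receives it.
            break
        if view.get("skip"):
            # Skip property: this view is skipped, ancestors still receive.
            continue
        if view.get("no_hit_skip") and view["name"] != hit_view_name:
            # No-hit skip property: receives only if it is the hit view.
            continue
        receivers.append(view["name"])
    return receivers
```

For example, setting the skip property on an intermediate view removes only that view from delivery, while setting the stop property on it also blocks every ancestor above it.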
The event recognizer structures 320 and 360 may include metadata 322, 362, respectively. In some embodiments, the metadata 322, 362 includes configurable properties, flags, and lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, the metadata 322, 362 may include configurable properties, flags, and lists that indicate how event recognizers may interact with one another. In some embodiments, the metadata 322, 362 may include configurable properties, flags, and lists that indicate whether sub-events are delivered to varying levels in the view or program hierarchy. In some embodiments, the combination of the event recognizer metadata 322, 362 and the view or level metadata (341, 381, respectively) is used to configure the event delivery system to: a) perform sub-event delivery to actively involved event recognizers, b) indicate how event recognizers may interact with one another, and c) indicate whether and when sub-events are delivered to various levels in the view or program hierarchy.
It is noted that, in some embodiments, the respective event recognizers send event recognition actions 333, 373 to their respective targets 335, 375, as specified by fields of the event recognizer's structure 320, 360. Sending an action to a target is distinct from sending (and deferring the sending of) sub-events to a respective hit view or level.
The metadata properties stored in a respective event recognizer structure 320, 360 of a corresponding event recognizer include one or more of the following:
an exclusive flag 324, 364, which, when set for an event recognizer, indicates that, upon recognition of an event by the event recognizer, the event delivery system should cease delivering sub-events to any other event recognizers of the actively involved views or program levels (except any other event recognizers listed in an exception list 326, 366). When receipt of a sub-event causes a particular event recognizer to enter the exclusive state, as indicated by its corresponding exclusive flag 324 or 364, subsequent sub-events are delivered only to the event recognizer in the exclusive state (as well as any other event recognizers listed in an exception list 326, 366).
exclusive exception lists 326, 366, which some event recognizer structures 320, 360 may include. When included in the event recognizer structure 320, 360 for a respective event recognizer, this list 326, 366 indicates the set of event recognizers, if any, that are to continue receiving sub-events even after the respective event recognizer has entered the exclusive state. For example, if the event recognizer for a single tap event enters the exclusive state, and the currently involved views include an event recognizer for a double tap event, then the list 326, 366 would list the double tap event recognizer, so that a double tap event can be recognized even after a single tap event has been detected. Accordingly, the exclusive exception lists 326, 366 permit event recognizers to recognize different events that share a common sequence of sub-events, e.g., a single tap event recognition does not preclude subsequent recognition of a double or triple tap event by other event recognizers.
wait lists 327, 367, which some event recognizer structures 320, 360 may include. When included in the event recognizer structure 320, 360 for a respective event recognizer, this list 327, 367 indicates the set of event recognizers, if any, that must enter the event impossible or event cancelled state before the respective event recognizer can recognize its respective event. In effect, the listed event recognizers have higher priority for recognizing an event than the event recognizer with the wait list 327, 367.
a delay touch start flag 328, 368, which, when set for an event recognizer, causes the event recognizer to delay sending sub-events (including a touch start or finger-down sub-event, and subsequent events) to the event recognizer's respective hit view or level until it has been determined that the sequence of sub-events does not correspond to this event recognizer's event type. This flag can be used to ensure that, in the case where the gesture is recognized, the hit view or level never sees any of the sub-events. When the event recognizer fails to recognize an event, the touch start sub-event (and the subsequent touch end sub-event) can be delivered to the hit view or level. In one example, delivering such sub-events to the hit view or level causes the user interface to briefly highlight an object, without invoking the action associated with that object.
a delay touch end flag 330, 370, which, when set for an event recognizer, causes the event recognizer to delay sending a sub-event (e.g., a touch end sub-event) to the event recognizer's respective hit view or level until it has been determined that the sequence of sub-events does not correspond to this event recognizer's event type. This can be used to prevent the hit view or level from acting upon a touch end sub-event in the case where the gesture is later recognized. As long as the touch end sub-event is not sent, a touch cancelled can be sent to the hit view or level instead. If an event is recognized, the corresponding action is performed by the application, and the touch end sub-event is delivered to the hit view or level.
a touch cancellation flag 332, 372, which, when set for an event recognizer, causes the event recognizer to send a touch or input cancellation to the event recognizer's respective hit view or level when it has been determined that the sequence of sub-events does not correspond to this event recognizer's event type. The touch or input cancellation sent to the hit view or level indicates that a prior sub-event (e.g., a touch start sub-event) has been cancelled. The touch or input cancellation may cause the state of the input source handler (see FIG. 4B) to enter the input sequence cancelled state 460 (discussed below).
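The interaction of the delay touch end and touch cancellation flags might be sketched as follows. The function and flag names are illustrative assumptions; they simply encode the buffering behavior described above, not an implementation from the described embodiments:

```python
# Hypothetical sketch: decide what the hit view finally receives for a
# touch end sub-event that was buffered by the delay touch end flag.
def resolve_buffered_end(delay_touch_end, touch_cancel, recognized):
    if not delay_touch_end:
        return "touch_end"       # nothing was delayed; delivered immediately
    if recognized:
        return "touch_end"       # event recognized: buffered end is delivered
    # Recognition failed: send a cancellation if the cancel flag is set,
    # otherwise release the buffered touch end to the hit view or level.
    return "touch_cancel" if touch_cancel else "touch_end"
```

In this sketch, a touch cancellation is only ever sent in place of a touch end sub-event that was withheld, consistent with the rule that a cancellation may be sent only if the touch end sub-event has not been sent.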
In some embodiments, the exception lists 326, 366 can also be used by non-exclusive event recognizers. In particular, when a non-exclusive event recognizer recognizes an event, subsequent sub-events are not delivered to the exclusive event recognizers associated with the currently active views, except for those exclusive event recognizers listed in the exception list 326, 366 of the event recognizer that recognized the event.
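The exclusivity behavior with an exception list might be sketched as follows. The dictionary layout and recognizer names are illustrative assumptions, not structures from the described embodiments:

```python
# Hypothetical sketch: once a recognizer with the exclusive flag recognizes
# its event, further sub-events go only to it and to the recognizers on its
# exception list.
def receivers_after_recognition(recognizers, winner):
    if not winner.get("exclusive"):
        return [r["name"] for r in recognizers]   # no exclusivity claimed
    allowed = {winner["name"]} | set(winner.get("exceptions", []))
    return [r["name"] for r in recognizers if r["name"] in allowed]

single_tap = {"name": "single_tap", "exclusive": True, "exceptions": ["double_tap"]}
double_tap = {"name": "double_tap"}
long_press = {"name": "long_press"}
```

Here, once the single tap recognizer enters the exclusive state, the double tap recognizer keeps receiving sub-events (it is on the exception list) while the long press recognizer does not, matching the single tap/double tap example above.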
In some embodiments, event recognizers may be configured to use the touch cancellation flag in conjunction with the delay touch end flag to prevent unwanted sub-events from being delivered to the hit view. For example, the definition of a single tap gesture and the definition of the first half of a double tap gesture are identical. Once a single tap event recognizer successfully recognizes a single tap, an undesired action could take place. If the delay touch end flag is set, the single tap event recognizer is prevented from sending sub-events to the hit view until a single tap event is recognized. In addition, the wait list of the single tap event recognizer may identify the double tap event recognizer, thereby preventing the single tap event recognizer from recognizing a single tap until the double tap event recognizer has entered the event impossible state. The use of the wait list avoids the execution of actions associated with a single tap when a double tap gesture is performed. Instead, only actions associated with the double tap will be executed, in response to recognition of the double tap event.
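The wait-list condition described above reduces to a simple predicate over recognizer states. The names and state strings below are illustrative assumptions chosen to match the text, not an implementation from the described embodiments:

```python
# Hypothetical sketch of the wait-list check: the single tap recognizer may
# not recognize its event until every recognizer on its wait list has
# entered the event impossible (or event cancelled) state.
def can_recognize(recognizer, states):
    """`states` maps recognizer names to their current state strings."""
    return all(states[name] in ("event_impossible", "event_cancelled")
               for name in recognizer.get("wait_for", []))

single_tap = {"name": "single_tap", "wait_for": ["double_tap"]}
```

While the double tap recognizer is still in a state where a double tap remains possible, the single tap recognizer must hold off; only once the double tap becomes impossible (e.g., the second tap never arrives) may the single tap fire.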
Turning in particular to the forms of user touches on a touch-sensitive surface: as noted above, touches and user gestures may include acts that need not be instantaneous; e.g., a touch can include an act of moving or holding a finger against a display for a period of time. A touch data structure, however, defines the state of a touch (or, more generally, the state of any input source) at a particular time. Therefore, the values stored in a touch data structure may change over the course of a single touch, enabling the state of the single touch at different points in time to be conveyed to an application.
Each touch data structure may include different fields. In some embodiments, the touch data structure may include data corresponding to at least the touch-specific field 339 in fig. 3B or the input source-specific field 379 in fig. 3C.
For example, the "first touch for View" field 345 in FIG. 3B (the "first touch for level" field 385 in FIG. 3C) may indicate whether the touch data structure defines a first touch for a particular view (since the software element implementing the view is instantiated). The "timestamp" fields 346, 386 may indicate a specific time associated with the touch data structure.
Optionally, "info" fields 347, 387 may be used to indicate whether a touch is a rudimentary gesture. For example, the "info" fields 347, 387 may indicate whether the touch is a swipe and, if so, in which direction the swipe is oriented. A swipe is a quick drag of one or more fingers in a straight direction. The API implementations (discussed below) can determine whether a touch is a swipe and pass that information to the application through the "info" fields 347, 387, thereby alleviating some data processing that the application would otherwise have to perform if the touch were a swipe.
Optionally, the "tap count" field 348 in FIG. 3B (the "event count" field 388 in FIG. 3C) may indicate how many taps have been performed sequentially at the location of the initial touch. A tap may be defined as a quick press and lift of a finger against the touch-sensitive panel at a particular location. Multiple sequential taps may occur if the finger is pressed and released again in rapid succession at the same location of the panel. The event delivery system 122 may count taps and relay this information to the application through the "tap count" field 348. Multiple sequential taps at the same location are sometimes a useful and easy-to-remember command for touch-enabled interfaces. By counting taps, the event delivery system 122 again relieves the application of some data processing.
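One plausible way an event delivery system could maintain such a tap count is to increment it only when a new tap lands quickly enough, and close enough, to the previous tap. The function name and the threshold values below are illustrative assumptions, not taken from the text.

```python
# Hypothetical tap-count maintenance: a tap continues the sequence only if it
# is close to the previous tap in both time and space; otherwise the count
# resets to 1. Thresholds are illustrative.

TAP_INTERVAL = 0.3   # seconds allowed between successive taps
TAP_SLOP = 10        # maximum distance, in points, between tap locations

def update_tap_count(prev_count, prev_time, prev_loc, time, loc):
    close_in_time = (time - prev_time) <= TAP_INTERVAL
    close_in_space = (abs(loc[0] - prev_loc[0]) <= TAP_SLOP
                      and abs(loc[1] - prev_loc[1]) <= TAP_SLOP)
    return prev_count + 1 if (close_in_time and close_in_space) else 1
```

A second tap 0.2 seconds later and two points away would continue the sequence; a tap a full second later would start a new sequence.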
The "phase" fields 349, 389 may indicate the particular phase the touch-based gesture is currently in. The phase fields 349, 389 may have various values, such as "touch phase start", indicating that the touch data structure defines a new touch that has not been referenced by a previous touch data structure. The "touch phase move" value may indicate that the touch being defined has moved from a previous location. The "touch phase stationary" value may indicate that the touch has stayed at the same location. The "touch phase end" value may indicate that the touch has ended (e.g., the user has lifted his/her finger off the surface of the multi-touch display). The "touch phase cancel" value may indicate that the touch has been cancelled by the device. A cancelled touch may be a touch that was not necessarily ended by the user but that the device has decided to ignore. For example, the device may determine that the touch was generated inadvertently (e.g., as a result of placing a portable multi-touch enabled device in one's pocket) and therefore ignore the touch. Each value of the "phase" fields 349, 389 may be an integer.
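Since each phase value may be an integer, the set of phases described above could be modeled as an integer enumeration. The member names below are illustrative; only the phase semantics come from the text.

```python
# Hypothetical integer encoding of the "phase" field values described above.
from enum import IntEnum

class TouchPhase(IntEnum):
    START = 0        # new touch not referenced by a previous touch data structure
    MOVE = 1         # touch has moved from a previous location
    STATIONARY = 2   # touch has stayed at the same location
    END = 3          # the user lifted the finger off the surface
    CANCEL = 4       # device decided to ignore the touch (e.g., a pocket touch)

# Each phase value behaves as a plain integer, as the "phase" fields require.
phase = TouchPhase.END
```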
Thus, each touch data structure can define what is happening at a particular time for a respective touch (or other input source) (e.g., whether the touch is stationary, being moved, etc.) and other information associated with the touch (such as location). Accordingly, each touch data structure can define the state of a particular touch at a particular point in time. One or more touch data structures that reference the same time may be added to the touch event data structure that may define the state of all touches that a particular view receives at one time (as described above, some touch data structures may also reference touches that have ended and are no longer being received). Over time, to provide the software with continuous information describing touches that are occurring in the view, a multi-touch event data structure may be sent to the software implementing the view.
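The relationship between per-touch snapshots and the touch event structure that aggregates them might be sketched like this; the field names are illustrative, loosely following fields 345-349 above.

```python
# Sketch of the touch data structures described above: each structure is a
# snapshot of one touch at one instant, and a touch event groups all
# snapshots that share a timestamp. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TouchSnapshot:
    touch_id: int
    timestamp: float
    location: tuple      # (x, y)
    phase: str           # "start" | "move" | "stationary" | "end" | "cancel"

@dataclass
class TouchEvent:
    timestamp: float
    touches: list = field(default_factory=list)  # all touches at this instant

# The same physical touch yields different snapshots at different times.
t0 = TouchSnapshot(touch_id=1, timestamp=0.00, location=(10, 10), phase="start")
t1 = TouchSnapshot(touch_id=1, timestamp=0.05, location=(14, 10), phase="move")

# The touch event sent to the view's software at t = 0.05.
event = TouchEvent(timestamp=0.05, touches=[t1])
```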
Fig. 3D is a block diagram illustrating example components for event processing (e.g., event processing component 390), according to some embodiments. In some embodiments, memory 111 (FIG. 1A) includes event recognizer global method 312 and one or more applications (e.g., 133-1 to 133-3).
In some embodiments, the event recognizer global method 312 includes an event monitor 311, a hit view determination module 314, an active event recognizer determination module 316, and an event scheduler module 315. In some embodiments, the event recognizer global method 312 is located in the event delivery system 122 (FIG. 1A). In some embodiments, the event recognizer global method 312 is implemented in the operating system 118 (FIG. 1A). Alternatively, the event recognizer global method 312 is implemented in the respective application 133-1. In yet other embodiments, the event recognizer global method 312 is implemented as a standalone module or as part of another module stored in the memory 111, such as a contact/motion module (not shown).
Event monitor 311 receives event information from one or more sensors 116, touch-sensitive display 156, and/or one or more input devices 128. The event information includes information about an event (e.g., a user touch on the touch-sensitive display 156 as part of a multi-touch gesture, or a motion of the device 102) and/or a sub-event (e.g., a movement of a touch across the touch-sensitive display 156). For example, the event information for a touch event includes one or more of: the location of the touch and a timestamp. Similarly, the event information for a swipe event includes two or more of: the location, timestamp, direction, and speed of the swipe. The sensors 116, the touch-sensitive display 156, and the input devices 128 send event and sub-event information to event monitor 311, either directly or through a peripherals interface that retrieves and stores the event information. The sensors 116 include one or more of the following: proximity sensors, accelerometers, gyroscopes, microphones, and cameras. In some embodiments, the sensors 116 also include the input devices 128 and/or the touch-sensitive display 156.
In some embodiments, event monitor 311 sends requests to sensors 116 and/or peripheral interfaces at predetermined intervals. In response, the sensor 116 and/or the peripheral interface transmit event information. In other embodiments, the sensor 116 and/or the peripheral interface transmit event information only when there is a significant event (e.g., the received input exceeds a predetermined noise threshold and/or exceeds a predetermined duration).
The event monitor 311 receives the event information and forwards the event information to the event scheduler module 315. In some embodiments, event monitor 311 determines one or more respective applications (e.g., 133-1) to which the event information is to be communicated. In some embodiments, event monitor 311 also determines one or more respective application views 317 of one or more respective applications to which the event information is to be transferred.
In some embodiments, the event recognizer global method 312 further includes a hit view determination module 314 and/or an active event recognizer determination module 316.
The hit view determination module 314, if present, provides software procedures for determining where an event or sub-event has occurred within one or more views when the touch-sensitive display 156 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
Another aspect of the user interface associated with a respective application (e.g., 133-1) is a set of views 317, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of a respective application) in which a touch is detected may correspond to a particular view within the view hierarchy of the application. For example, the lowest level view in which a touch is detected may be called the hit view, and the set of events that are recognized as proper inputs may be determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
The hit view determination module 314 receives information related to events and/or sub-events. When an application has multiple views organized in a hierarchy, the hit view determination module 314 identifies the hit view as the lowest view in the hierarchy that should handle the event or sub-event. In most circumstances, the hit view is the lowest level view in which the initiating event or sub-event (i.e., the first event or sub-event in the sequence of events and/or sub-events that form a gesture) occurs. Once the hit view is identified by the hit view determination module, the hit view typically receives all events and/or sub-events related to the same touch or input source for which it was identified as the hit view. However, the hit view is not always the sole view that receives all events and/or sub-events related to the same touch or input source for which it was identified as the hit view. Stated differently, in some embodiments, another application (e.g., 133-2) or another view of the same application also receives at least a subset of the events and/or sub-events related to the same touch or input source, regardless of whether a hit view has been determined for the touch or input source.
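A minimal sketch of hit-view determination, under the assumption that each view has a rectangular frame and that the search returns the deepest view containing the initial touch point. The `View` class and `hit_view` function names are illustrative, not the actual API.

```python
# Hypothetical hit-view search: walk the view hierarchy depth-first and
# return the lowest (deepest) view whose frame contains the touch point.

class View:
    def __init__(self, name, frame, subviews=()):
        self.name = name
        self.frame = frame              # (x, y, width, height)
        self.subviews = list(subviews)

    def contains(self, point):
        x, y, w, h = self.frame
        px, py = point
        return x <= px < x + w and y <= py < y + h

def hit_view(view, point):
    if not view.contains(point):
        return None
    # Prefer the deepest subview that also contains the point.
    for sub in view.subviews:
        found = hit_view(sub, point)
        if found is not None:
            return found
    return view

# Illustrative hierarchy: a button inside a panel inside the root view.
button = View("button", (20, 20, 40, 20))
panel = View("panel", (10, 10, 100, 100), [button])
root = View("root", (0, 0, 320, 480), [panel])
```

A touch at (25, 25) lands inside all three frames, so the button, being the lowest view, is the hit view; a touch at (5, 5) lies outside the panel, so the root itself is the hit view.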
The active event recognizer determination module 316 determines which view or views within a view hierarchy should receive a particular sequence of events and/or sub-events. In some application contexts, the active event recognizer determination module 316 determines that only the hit view should receive a particular sequence of events and/or sub-events. In other application contexts, the active event recognizer determination module 316 determines that all views that include the physical location of an event or sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of events and/or sub-events. In other application contexts, even if touch events and/or sub-events are entirely confined to the area associated with one particular view, views higher in the hierarchy remain actively involved views, and thus the views higher in the hierarchy should receive a particular sequence of events and/or sub-events. Additionally or alternatively, the active event recognizer determination module 316 determines which application or applications in a program hierarchy should receive a particular sequence of events and/or sub-events. Thus, in some embodiments, the active event recognizer determination module 316 determines that only a respective application in the program hierarchy should receive a particular sequence of events and/or sub-events. In some embodiments, the active event recognizer determination module 316 determines that multiple applications in the program hierarchy should receive a particular sequence of events and/or sub-events.
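One of the policies described above (the hit view together with all of its ancestors are actively involved) can be sketched with a simple parent map; the dict-based representation is an assumption made for illustration.

```python
# Hypothetical sketch of one active-view policy: the actively involved views
# are the hit view plus every ancestor, so higher-level views keep receiving
# the sequence even when the touch stays inside the hit view's area.

def actively_involved_views(hit_view, parent_of):
    """Return the hit view followed by every ancestor, lowest first."""
    views = [hit_view]
    while views[-1] in parent_of:
        views.append(parent_of[views[-1]])
    return views

# Illustrative hierarchy: a button inside a panel inside a window.
parent_of = {"button": "panel", "panel": "window"}
```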
The event scheduler module 315 dispatches the event information to an event recognizer (also referred to herein as a "gesture recognizer") (e.g., event recognizer 325-1). In embodiments that include the active event recognizer determination module 316, the event scheduler module 315 delivers the event information to the event recognizers determined by the active event recognizer determination module 316. In some embodiments, the event scheduler module 315 stores, in an event queue, the event information, which is retrieved by a respective event recognizer 325 (or by the event receiver 3031 in a respective event recognizer 325).
In some embodiments, the respective application (e.g., 133-1) includes an application internal state 321, where the application internal state 321 indicates a current application view that is displayed on the touch-sensitive display 156 while the application is active or executing. In some embodiments, the device/global internal state 134 (FIG. 1C) is used by the event recognizer global method 312 to determine which application or applications are currently active, and the application internal state 321 is used by the event recognizer global method 312 to determine the application view 317 to which event information is to be transferred.
In some embodiments, the application internal state 321 includes additional information, such as one or more of the following: resume information to be used when the application 133-1 resumes execution; user interface state information indicating information being displayed or ready to be displayed by the application 133-1; a state queue for enabling a user to rollback to a previous state or view of the application 133-1; and a redo/undo queue of previous actions performed by the user. In some embodiments, the application internal state 321 further includes context information/text and metadata 323.
In some embodiments, the application 133-1 includes one or more application views 317, each of which has corresponding instructions for processing touch events that occur within a particular view of the application's user interface (e.g., a corresponding event handler 319). At least one application view 317 of the application 133-1 includes one or more event recognizers 325. Typically, a respective application view 317 includes a plurality of event recognizers 325. In other embodiments, one or more of the event recognizers 325 are part of a separate module, such as a user interface suite (not shown) or a higher level object, from which the application 133-1 inherits methods and other properties. In some embodiments, a respective application view 317 also includes one or more of: a data updater, an object updater, a GUI updater, and/or received event data.
The respective application (e.g., 133-1) also includes one or more event handlers 319. Typically, the respective application (e.g., 133-1) includes a plurality of event handlers 319.
The respective event recognizer 325-1 receives event information from the event scheduler module 315 (either directly or indirectly through the application 133-1) and recognizes events from the event information. The event recognizer 325-1 includes an event receiver 3031 and an event comparator 3033.
The event information includes information about an event (e.g., touch) or sub-event (e.g., touch movement). The event information also includes additional information, such as the location of the event or sub-event, depending on the event or sub-event. When an event or sub-event relates to motion of a touch, the event information may also include the speed and direction of the sub-event. In some embodiments, the event includes rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation of the device (also referred to as the device orientation).
Event comparator 3033 compares the event information with one or more predefined gesture definitions (also referred to herein as "event definitions"), and, based on the comparison, determines an event or sub-event, or determines or updates the state of the event or sub-event. In some embodiments, the event comparator 3033 includes one or more gesture definitions 3035 (also referred to herein as "event definitions", as described above). Gesture definitions 3035 contain definitions of gestures (e.g., predefined sequences of events and/or sub-events), such as gesture 1 (3037-1), gesture 2 (3037-2), and others. In some embodiments, sub-events in the gesture definitions 3035 include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, gesture 1 (3037-1) is defined as a double tap on a displayed object. The double tap, for example, comprises a first touch on the displayed object (touch start) for a predetermined phase of the gesture, a first lift-off (touch end) for a next predetermined phase of the gesture, a second touch on the displayed object (touch start) for a subsequent predetermined phase of the gesture, and a second lift-off (touch end) for a final predetermined phase of the gesture. In another example, the definition of gesture 2 (3037-2) includes a drag on a displayed object. The drag, for example, comprises a touch (or contact) on the displayed object, a movement of the touch across the touch-sensitive display 156, and a lift-off of the touch (touch end).
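The double tap and drag definitions above can be expressed as predefined sub-event sequences that an event comparator checks a received sequence against. The data representation and function name are illustrative assumptions; a real comparator would also enforce the timing and location constraints of each phase.

```python
# Hypothetical gesture definitions expressed as predefined sub-event
# sequences, mirroring the double-tap and drag examples in the text.

GESTURE_DEFINITIONS = {
    "double tap": ["touch start", "touch end", "touch start", "touch end"],
    "drag":       ["touch start", "touch move", "touch end"],
}

def match_gesture(sub_events):
    # Return the name of the first definition the full sequence matches.
    for name, definition in GESTURE_DEFINITIONS.items():
        if sub_events == definition:
            return name
    return None
```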
In some embodiments, event recognizer 325-1 also includes information for event delivery 3039. The information for the event delivery 3039 includes a reference to the corresponding event handler 319. Optionally, the information for the event delivery 3039 includes action-target pairs. In some embodiments, in response to recognizing a gesture (or a portion of a gesture), event information (e.g., an action message) is sent to one or more targets identified by the action-target pair. In other embodiments, in response to recognizing a gesture (or a portion of a gesture), an action-target pair is activated.
In some embodiments, the gesture definitions 3035 include definitions of gestures for respective user interface objects. In some embodiments, the event comparator 3033 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view in which three user interface objects are displayed on the touch-sensitive display 156, when a touch is detected on the touch-sensitive display 156, the event comparator 3033 performs a hit test to determine which of the three user interface objects, if any, is associated with the touch (event). If each displayed object is associated with a respective event handler 319, the event comparator 3033 uses the result of the hit test to determine which event handler 319 should be activated. For example, the event comparator 3033 selects the event handler 319 associated with the sub-event and the object triggering the hit test.
In some embodiments, a respective gesture definition 3037 for a respective gesture also includes delayed actions, which delay delivery of the event information until after it has been determined whether the sequence of events and/or sub-events does or does not correspond to the event recognizer's event type.
When a respective event recognizer 325-1 determines that the series of events and/or sub-events does not match any of the events in the gesture definitions 3035, the respective event recognizer 325-1 enters an event failed state, after which the respective event recognizer 325-1 disregards subsequent events and/or sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process events and/or sub-events of the ongoing touch-based gesture.
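The failure behavior can be sketched as a recognizer that steps through its definition one sub-event at a time and, on the first mismatch, enters the event failed state and ignores the remainder of the gesture. The class and method names are illustrative assumptions.

```python
# Hypothetical recognizer: steps through a sub-event definition; the first
# mismatch moves it to "event failed", after which it ignores all further
# sub-events of the touch-based gesture.

class SequenceRecognizer:
    def __init__(self, definition):
        self.definition = definition
        self.index = 0
        self.state = "event possible"

    def feed(self, sub_event):
        if self.state != "event possible":
            return self.state               # failed/recognized: ignore the rest
        if self.definition[self.index] != sub_event:
            self.state = "event failed"     # sequence diverged from definition
        else:
            self.index += 1
            if self.index == len(self.definition):
                self.state = "event recognized"
        return self.state

# A tap recognizer fed a drag-like sequence fails on the move sub-event
# and then disregards the touch end that follows.
tap = SequenceRecognizer(["touch start", "touch end"])
states = [tap.feed("touch start"), tap.feed("touch move"), tap.feed("touch end")]
```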
In some embodiments, when no event recognizer for the hit view remains, the event information is sent to one or more event recognizers in a higher view in the view hierarchy. Alternatively, when no event recognizer for the hit view remains, the event information is disregarded. In some embodiments, when no event recognizer for the views in the view hierarchy remains, the event information is sent to one or more event recognizers in a higher programmatic level in the program hierarchy. Alternatively, when no event recognizer for the views in the view hierarchy remains, the event information is disregarded.
In some embodiments, the respective event recognizer 325-1 includes an event recognizer state 334. Event recognizer state 334 includes the state of the respective event recognizer 325-1. Examples of event recognizer states are described in more detail below with reference to FIGS. 4A-4B and 5A-5C.
In some embodiments, event recognizer state 334 includes recognizer metadata and characteristics 3043. In some embodiments, the recognizer metadata and characteristics 3043 include one or more of the following: A) configurable properties, flags, and/or lists that indicate how the event delivery system should perform event and/or sub-event delivery to actively involved event recognizers; B) configurable properties, flags, and/or lists that indicate how event recognizers interact with one another; C) configurable properties, flags, and/or lists that indicate how event recognizers receive event information; D) configurable properties, flags, and/or lists that indicate how event recognizers may recognize a gesture; E) configurable properties, flags, and/or lists that indicate whether events and/or sub-events are delivered to varying levels in the view hierarchy; and F) references to corresponding event handlers 319.
In some embodiments, event recognizer state 334 includes event/touch metadata 3045. The event/touch metadata 3045 includes event/touch information regarding respective events/touches that have been detected and that correspond to respective gesture definitions 3037 of the gesture definitions 3035. The event/touch information includes one or more of: location, timestamp, speed, direction, distance, range (or range change), and angle (or angle change) of the respective event/touch.
In some embodiments, when one or more particular events and/or sub-events of a gesture are recognized, the respective event recognizer 325 activates an event handler 319 associated with the respective event recognizer 325. In some embodiments, the respective event recognizer 325 communicates event information associated with the event to the event handler 319.
The event handler 319, when activated, performs one or more of the following: create and/or update data, create and update objects, and prepare display information and send the display information for display on the display 126 or touch-sensitive display 156.
In some embodiments, a respective application view 317-2 includes view metadata 341. As described above with respect to FIG. 3B, the view metadata 341 includes data regarding a view. Optionally, the view metadata 341 includes one or more of: a stop property 342, a skip property 343, a no-hit skip property 344, and other view metadata 329.
In some embodiments, a first actively involved view within the view hierarchy may be configured to prevent delivery of a respective sub-event to event recognizers associated with that first actively involved view. This behavior can implement the skip property 343. When the skip property is set for an application view, delivery of the respective sub-event is still performed for event recognizers associated with other actively involved views in the view hierarchy.
Alternatively, a first actively involved view within the view hierarchy may be configured to prevent delivery of a respective sub-event to event recognizers associated with that first actively involved view unless the first actively involved view is the hit view. This behavior can implement the conditional no-hit skip property 344.
In some embodiments, a second actively involved view within the view hierarchy is configured to prevent delivery of a respective sub-event to event recognizers associated with the second actively involved view and to event recognizers associated with ancestors of the second actively involved view. This behavior can implement the stop property 342.
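Taken together, the skip, no-hit skip, and stop properties act as per-view filters on sub-event delivery. A sketch of that filtering follows, assuming the actively involved views are listed lowest (hit view) first so that a view's ancestors come after it; all names and the dict-based representation are illustrative.

```python
# Hypothetical delivery filter for the skip, no-hit skip, and stop
# properties described in the text.

def delivery_targets(views, hit_view_name):
    """views: actively involved views listed lowest (hit view) first, each a
    dict that may carry a 'skip', 'skip_unless_hit', or 'stop' flag."""
    targets = []
    stopped = False   # set once a view with the stop property is seen
    for view in views:
        if view.get("stop"):
            stopped = True   # blocks this view and every ancestor after it
            continue
        if stopped:
            continue         # ancestors of a stopped view receive nothing
        if view.get("skip"):
            continue         # skip: never deliver to this view
        if view.get("skip_unless_hit") and view["name"] != hit_view_name:
            continue         # no-hit skip: deliver only if this is the hit view
        targets.append(view["name"])
    return targets

views = [
    {"name": "button"},                # hit view
    {"name": "panel", "skip": True},   # skipped; its ancestors are unaffected
    {"name": "window"},
]
```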
FIG. 3E is a block diagram illustrating example classes and instances (e.g., event processing component 390) of a gesture recognizer in accordance with some embodiments.
A software application (e.g., application 133-1) has one or more event recognizers 3040. In some embodiments, the respective event recognizer (e.g., 3040-2) is an event recognizer class. The respective event recognizer (e.g., 3040-2) includes event recognizer specific code 338 (e.g., a set of instructions defining the operation of the event recognizer) and state machine 340.
In some embodiments, application state 321 of a software application (e.g., application 133-1) includes an instance of an event recognizer. Each instance of an event recognizer is an object having a state (e.g., event recognizer state 334). "execution" of the respective event recognizer instance is accomplished by executing the corresponding event recognizer specific code (e.g., 338) and updating or maintaining the state 334 of the event recognizer instance 3047. The state 334 of the event recognizer instance 3047 includes a state 3038 of the event recognizer instance's state machine 340.
In some embodiments, application state 321 includes a plurality of event recognizer instances 3047. The respective event recognizer instance 3047 generally corresponds to an event recognizer that has been bound to (also referred to as "affiliated with") a view of an application. In some embodiments, one or more event recognizer instances 3047 are bound to respective applications in a program hierarchy without reference to any particular view of the respective applications. In some embodiments, application state 321 includes multiple instances (e.g., 3047-1 through 3047-L) of respective event recognizers (e.g., 3040-2). In some embodiments, application state 321 includes instances 3047 of multiple event recognizers (e.g., 3040-1 through 3040-R).
In some embodiments, a respective instance 3047-2 of gesture recognizer 3040 includes event recognizer state 334. As discussed above, in some embodiments, event recognizer state 334 includes recognizer metadata and characteristics 3043 and event/touch metadata 3045. In some embodiments, event recognizer state 334 also includes a view hierarchy reference 336 to indicate to which view the respective instance 3047-2 of gesture recognizer 3040-2 pertains.
In some embodiments, recognizer metadata and characteristics 3043 includes the following, or a subset or superset thereof:
exclusive flag 324;
exclusive exception list 326;
waiting list 327;
delayed touch start flag 328;
delayed touch end flag 330; and
touch cancel flag 332.
In some embodiments, one or more event recognizers may be adapted to delay delivering one or more sub-events of the sequence of sub-events until after the event recognizer recognizes the event. This behavior reflects a delayed event. For example, consider a single tap gesture in a view for which multiple tap gestures are also possible. In that case, a tap event becomes a "tap + delay" recognizer. In essence, when an event recognizer implements this behavior, the event recognizer will delay event recognition until it is certain that the sequence of sub-events does in fact correspond to its event definition. This behavior may be appropriate when a recipient view is incapable of adequately responding to cancelled events. In some embodiments, an event recognizer will delay updating its event recognition status to its respective actively involved view until the event recognizer is certain that the sequence of sub-events does not correspond to its event definition. The delayed touch start flag 328, delayed touch end flag 330, and touch cancel flag 332 are provided to tailor sub-event delivery techniques, as well as event recognizer and view status information updates, to specific needs.
In some embodiments, recognizer metadata and characteristics 3043 includes the following, or a subset or superset thereof:
state machine state/phase 3038, which indicates the state of a state machine (e.g., 340) for the respective event recognizer instance (e.g., 3047-2); the state machine state/phase 3038 can have various state values, such as "event possible", "event recognized", "event failed", and others, as described below; alternatively or additionally, the state machine state/phase 3038 can have various phase values, such as "touch phase start", which can indicate that the touch data structure defines a new touch that has not been referenced by a previous touch data structure; the "touch phase move" value can indicate that the touch being defined has moved from a previous location; the "touch phase stationary" value can indicate that the touch has stayed at the same location; the "touch phase end" value can indicate that the touch has ended (e.g., the user has lifted his/her finger off the surface of the multi-touch display); the "touch phase cancel" value can indicate that the touch has been cancelled by the device; a cancelled touch can be a touch that was not necessarily ended by the user but that the device has determined to ignore; for example, the device can determine that the touch was generated inadvertently (e.g., as a result of placing a portable multi-touch enabled device in one's pocket) and therefore ignore the touch; each value of the state machine state/phase 3038 can be an integer (referred to herein as a "gesture recognizer state value");
action-target pairs 3051, where each pair identifies a target to which the respective event recognizer instance sends the identified action message in response to recognizing an event or touch as a gesture or a part of a gesture;
a delegate 3053, which is a reference to a corresponding delegate when a delegate is assigned to the respective event recognizer instance; when a delegate is not assigned to the respective event recognizer instance, the delegate 3053 contains a null value; and
an enabled property 3055, indicating whether the respective event recognizer instance is enabled; in some embodiments, when the respective event recognizer instance is not enabled (e.g., disabled), the respective event recognizer instance does not process events or touches.
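The action-target pairs and the enabled property can be sketched together: on recognition, an enabled recognizer instance sends each pair's action message to its target, while a disabled instance processes nothing. The class names and the `did_recognize` hook are illustrative assumptions.

```python
# Hypothetical sketch of action-target pairs (cf. 3051) and the enabled
# property (cf. 3055) on an event recognizer instance.

class Target:
    """Receives the action messages sent on gesture recognition."""
    def __init__(self):
        self.received = []
    def handle(self, action):
        self.received.append(action)

class RecognizerInstance:
    def __init__(self, action_target_pairs, enabled=True):
        self.action_target_pairs = action_target_pairs
        self.enabled = enabled
    def did_recognize(self):
        if not self.enabled:
            return              # a disabled instance processes nothing
        # Send each pair's action message to its target.
        for action, target in self.action_target_pairs:
            target.handle(action)

menu = Target()
recognizer = RecognizerInstance([("show-menu", menu)])
recognizer.did_recognize()

disabled_target = Target()
disabled = RecognizerInstance([("noop", disabled_target)], enabled=False)
disabled.did_recognize()
```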
In some embodiments, the exception list 326 can also be used by non-exclusive event recognizers. In particular, when a non-exclusive event recognizer recognizes an event or sub-event, subsequent events and/or sub-events are not delivered to the exclusive event recognizers associated with the currently active views, except for those exclusive event recognizers listed in the exception list 326 of the event recognizer that recognized the event or sub-event.
In some embodiments, an event recognizer may be configured to use the touch cancel flag 332 in conjunction with the delayed touch end flag 330 to prevent unwanted events and/or sub-events from being delivered to the hit view. For example, the definition of a single tap gesture is identical to the first half of the definition of a double tap gesture. Once a single tap event recognizer successfully recognizes a single tap, an unwanted action could take place. If the delayed touch end flag 330 is set, the single tap event recognizer is prevented from sending sub-events to the hit view until a single tap event is recognized. In addition, the waiting list of the single tap event recognizer may identify the double tap event recognizer, thereby preventing the single tap event recognizer from recognizing a single tap until the double tap event recognizer has entered the event impossible state. The use of a waiting list avoids the execution of actions associated with a single tap when a double tap gesture is performed. Instead, only actions associated with a double tap will be performed, in response to recognition of the double tap event.
Turning to the forms of user touches on a touch-sensitive surface: as described above, touches and user gestures may include actions that are not necessarily instantaneous; e.g., a touch may include an action of moving or holding a finger against a display for a period of time. A touch data structure, however, defines the state of a touch (or, more generally, the state of any input source) at a particular time. Thus, the values stored in a touch data structure may change over the course of a single touch, enabling the state of the single touch to be conveyed to an application at different points in time.
Each touch data structure may include various entries. In some embodiments, the touch data structure may include data corresponding to at least touch-specific entries in event/touch metadata 3045, such as the following or a subset or superset thereof:
"first touch for view" entry 345;
"per touch information" entry 3051, including "timestamp" information indicating a particular time (e.g., time of touch) to which the touch data structure relates; optionally, the "per touch information" entry 3051 includes other information such as the location of the corresponding touch; and
an optional "tap count" entry 348.
Thus, each touch data structure can define what is happening with a respective touch (or other input source) at a particular time (e.g., whether the touch is stationary, being moved, etc.), as well as other information associated with the touch (such as its location). Accordingly, each touch data structure may define the state of a particular touch at a particular time. One or more touch data structures referencing the same time can be added to a touch event data structure that defines the states of all touches a particular view is receiving at that instant (as described above, some touch data structures may also reference touches that have ended and are no longer being received). Multi-touch event data structures may be sent over time to the software implementing a view, so as to provide that software with continuous information describing the touches occurring in the view.
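As a rough model of the entries listed above, the per-touch and multi-touch data structures might look as follows. This is a Python sketch under stated assumptions: all field names beyond the quoted entries 345, 3051, and 348 (e.g., `phase`) are invented for illustration:

```python
# Illustrative sketch of the per-touch and multi-touch event data structures.
from dataclasses import dataclass, field

@dataclass
class Touch:
    first_touch_for_view: bool   # "first touch for view" entry 345
    timestamp: float             # "per touch information" entry 3051
    location: tuple              # optional per-touch location, same entry 3051
    tap_count: int = 1           # optional "tap count" entry 348
    phase: str = "stationary"    # hypothetical, e.g. "began", "moved", "ended"

@dataclass
class TouchEvent:
    # Snapshot of every touch a view is receiving at one instant; touches
    # that have just ended may still appear here with phase "ended".
    timestamp: float
    touches: list = field(default_factory=list)

event = TouchEvent(
    timestamp=0.250,
    touches=[Touch(True, 0.250, (10, 20), phase="moved"),
             Touch(False, 0.250, (200, 20), phase="ended")],
)
assert len(event.touches) == 2
```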
The ability to handle complex touch-based gestures, optionally including multi-touch gestures, can add complexity to various software applications. In some cases, this additional complexity may be necessary to implement advanced and desirable interface features. For example, a game may require the ability to handle multiple simultaneous touches occurring in different views, since games often require pressing multiple buttons at the same time, or combining accelerometer data with touches on a touch-sensitive surface. However, some simpler applications and/or views do not require advanced interface features. For example, a simple soft button (i.e., a button displayed on a touch-sensitive display) may operate satisfactorily with single touches rather than multi-touch functionality. In these cases, the underlying OS may send unnecessary or excessive touch data (e.g., multi-touch data) to a software component associated with a view that is intended to be operable only by single touches (e.g., a single touch or tap on a soft button). Because the software component may need to process this data, it may need to incorporate all the complexity of a software application that handles multiple touches, even though it is associated with a view for which only single touches are relevant. This can increase the cost of developing software for the device, because software components that are traditionally easy to program in a mouse interface environment (i.e., various buttons and the like) may be much more complex in a multi-touch environment.
To reduce the complexity of recognizing complex touch-based gestures, according to some embodiments, a delegate may be used to control the behavior of an event recognizer. As described below, the delegate can determine, for example: whether a corresponding event recognizer (or gesture recognizer) can receive event (e.g., touch) information; whether a corresponding event recognizer (or gesture recognizer) can transition from an initial state (e.g., the event possible state) to another state of the state machine; and/or whether a corresponding event recognizer (or gesture recognizer) can simultaneously recognize an event (e.g., a touch) as a corresponding gesture without blocking other event recognizers (or gesture recognizers) from recognizing the event, and without being blocked by other event recognizers (or gesture recognizers) that recognize the event.
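A minimal sketch of this delegate idea follows. The method names (`should_receive`, etc.) and the two-finger example are hypothetical, loosely modeled on the behavior described above rather than on any particular API:

```python
# Sketch: before acting, a recognizer consults its delegate.
class Delegate:
    def should_receive(self, recognizer, touch):       # gate event delivery
        return True
    def should_begin(self, recognizer):                # gate leaving event possible
        return True
    def can_recognize_simultaneously(self, recognizer, other):
        return False

class TwoFingerOnlyDelegate(Delegate):
    def should_receive(self, recognizer, touch):
        # Illustrative policy: ignore single-finger touches entirely.
        return touch["finger_count"] == 2

class GestureRecognizer:
    def __init__(self, delegate):
        self.delegate = delegate
        self.received = []
    def deliver(self, touch):
        if self.delegate.should_receive(self, touch):
            self.received.append(touch)

pinch = GestureRecognizer(TwoFingerOnlyDelegate())
pinch.deliver({"finger_count": 1})   # filtered out by the delegate
pinch.deliver({"finger_count": 2})   # accepted
assert len(pinch.received) == 1
```

Putting the policy in a delegate, rather than in the recognizer itself, is what lets application code change recognizer behavior per view without subclassing the recognizer.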
However, it should be understood that the foregoing discussion regarding the complexity of evaluating and processing user touches on touch-sensitive surfaces also applies to all forms of user input used to operate electronic device 102 with input device 128, not all of which are initiated on a touch screen. These include, for example: coordinating mouse movements and mouse button presses, with or without single or multiple keyboard presses or holds; device rotations or other movements; user movements on a touchpad such as taps, drags, scrolls, etc.; stylus inputs; verbal instructions; detected eye movements; biometric inputs; detected physiological changes in a user; and/or any combination thereof, any of which may be utilized as inputs corresponding to events and/or sub-events that define an event to be recognized.
Turning to event information flow, FIG. 3F is a block diagram illustrating the flow of event information according to some embodiments. Event scheduler module 315 (e.g., in operating system 118 or application software 124) receives event information and sends the event information to one or more applications (e.g., 133-1 and 133-2). In some embodiments, application 133-1 includes multiple views in view hierarchy 506 (e.g., 508, 510, and 512, corresponding to views 317 in FIG. 3D) and multiple gesture recognizers (516-1 through 516-3) in those views. Application 133-1 also includes one or more gesture processors 550 corresponding to the target values in target-action pairs (e.g., 552-1 and 552-2). In some embodiments, event scheduler module 315 receives hit view information from hit view determination module 314 and sends event information to the hit view (e.g., 512) or to the event recognizers attached to the hit view (e.g., 516-1 and 516-2). Additionally or alternatively, event scheduler module 315 receives hit level information from hit level determination module 352 and sends event information to the applications in the hit level (e.g., 133-1 and 133-2) or to one or more event recognizers (e.g., 516-4) in the hit level applications. In some embodiments, one of the applications receiving the event information is a default application (e.g., 133-2 may be a default application). In some embodiments, only a subset of the gesture recognizers in each receiving application is allowed to (or configured to) receive the event information. For instance, gesture recognizer 516-3 in application 133-1 does not receive the event information. The gesture recognizers that receive event information are referred to herein as receiving gesture recognizers. In FIG. 3F, receiving gesture recognizers 516-1, 516-2, and 516-4 receive the event information and compare the received event information with the respective gesture definitions 3037 (FIG. 3D) in the receiving gesture recognizers. In FIG. 3F, gesture recognizers 516-1 and 516-4 have respective gesture definitions that match the received event information, and they send respective action messages (e.g., 518-1 and 518-2) to the corresponding gesture processors (e.g., 552-1 and 552-3).
FIG. 4A depicts an event recognizer state machine 400 containing four states. By managing state transitions in event recognizer state machine 400 based on received sub-events, an event recognizer effectively expresses an event definition. For example, a tap gesture may be effectively defined by a sequence of two or, optionally, three sub-events. First, a touch should be detected; this will be sub-event 1. For example, the touch sub-event may be a user's finger touching a touch-sensitive surface in a view that includes an event recognizer having state machine 400. Second, an optional measure-delay step, in which the touch does not substantially move in any given direction (e.g., any movement of the touch position is less than a predefined threshold, which may be measured as a distance (e.g., 5 mm) or as a number of pixels (e.g., 5 pixels) on the display) and the delay is sufficiently short, will serve as sub-event 2. Finally, termination of the touch (e.g., liftoff of the user's finger from the touch-sensitive surface) will serve as sub-event 3. By coding event recognizer state machine 400 to transition between states based on receiving these sub-events, event recognizer state machine 400 effectively expresses a tap gesture event definition. It should be noted, however, that the states shown in FIG. 4A are exemplary states, and that event recognizer state machine 400 may contain more or fewer states, and/or that each state in event recognizer state machine 400 may correspond to one of the states shown or any other state.
In some embodiments, regardless of event type, event recognizer state machine 400 begins at event recognition start state 405 and may advance to any of the remaining states depending on which sub-events are received. To facilitate discussion of event recognizer state machine 400, the direct paths from event recognition start state 405 to event recognized state 415, event possible state 410, and event impossible state 420 will be discussed first, followed by a description of the paths leading from event possible state 410.
From event recognition start state 405, if a sub-event is received that by itself comprises the event definition for an event, event recognizer state machine 400 will transition to event recognized state 415.
From event recognition start state 405, if a sub-event is received that is not the first sub-event of an event definition, event recognizer state machine 400 will transition to event impossible state 420.
From event recognition start state 405, if a sub-event is received that is the first sub-event of a given event definition but not the last, event recognizer state machine 400 will transition to event possible state 410. If the next sub-event received is the second sub-event of the given event definition but not the last, event recognizer state machine 400 will remain in event possible state 410. Event recognizer state machine 400 may remain in event possible state 410 as long as the received sequence of sub-events continues to be part of the event definition. If, at any time while event recognizer state machine 400 is in event possible state 410, event recognizer state machine 400 receives a sub-event that is not part of the event definition, it will transition to event impossible state 420, thereby determining that the current event (if any) is not of the event type corresponding to this event recognizer (i.e., the event recognizer corresponding to state machine 400). On the other hand, if event recognizer state machine 400 is in event possible state 410 and receives the last sub-event of the event definition, it will transition to event recognized state 415, thereby completing a successful event recognition.
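The transitions just described can be sketched as a small state machine driven by an ordered sub-event sequence. This Python sketch is illustrative only; the class, state strings, and sub-event names are hypothetical, not part of the disclosed embodiments:

```python
# Sketch of the four-state machine of FIG. 4A. The event definition is
# modeled as the ordered list of sub-events the recognizer expects.
BEGIN, POSSIBLE, RECOGNIZED, IMPOSSIBLE = (
    "recognition begins", "event possible", "event recognized", "event impossible")

class EventRecognizer:
    def __init__(self, definition):
        self.definition = definition   # ordered list of expected sub-events
        self.progress = 0
        self.state = BEGIN

    def feed(self, sub_event):
        if self.state in (RECOGNIZED, IMPOSSIBLE):
            return self.state          # terminal for this sub-event sequence
        if sub_event == self.definition[self.progress]:
            self.progress += 1
            self.state = (RECOGNIZED if self.progress == len(self.definition)
                          else POSSIBLE)
        else:
            self.state = IMPOSSIBLE    # sub-event is not part of the definition
        return self.state

# Tap: finger down, short delay, finger liftoff.
tap = EventRecognizer(["finger down", "delay", "finger liftoff"])
assert tap.feed("finger down") == POSSIBLE
assert tap.feed("delay") == POSSIBLE
assert tap.feed("finger liftoff") == RECOGNIZED

# A sub-event that fails the first step goes straight to event impossible.
tap2 = EventRecognizer(["finger down", "delay", "finger liftoff"])
assert tap2.feed("delay") == IMPOSSIBLE
```

A single-element definition also exercises the direct begin-to-recognized path, since the first sub-event then comprises the entire event definition.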
FIG. 4B depicts an embodiment of an input source process 440 having a finite state machine that represents how views receive information about a respective input. It should be noted that when there are multiple touches on the touch-sensitive surface of a device, each touch is a separate input source having its own finite state machine. In this embodiment, input source process 440 includes four states: input sequence begin 445, input sequence continue 450, input sequence end 455, and input sequence cancel 460. Input source process 440 may be used by a respective event recognizer, for example, when input is to be delivered to an application, but only after the completion of the input sequence has been detected. Input source process 440 may be used with applications that are incapable of canceling or undoing changes made in response to input sequences delivered to the application. It should be noted that the states shown in FIG. 4B are exemplary states, that input source process 440 may contain more or fewer states, and/or that each state in input source process 440 may correspond to one of the states shown or any other state.
From the input sequence start 445, if the received input completes an input sequence by itself, the input source process 440 will transition to the input sequence end 455.
From the input sequence start 445, if the received input indicates that the input sequence is terminated, the input source processing 440 will transition to input sequence cancel 460.
From input sequence begin 445, if the input received is the first input of an input sequence but not the last, input source process 440 will transition to input sequence continue state 450. If the next input received is the second input of the input sequence, input source process 440 will remain in input sequence continue state 450. Input source process 440 may remain in input sequence continue state 450 as long as the sequence of sub-events being delivered continues to be part of a given input sequence. If, at any time while input source process 440 is in input sequence continue state 450, input source process 440 receives an input that is not part of the input sequence, it will transition to input sequence cancel state 460. On the other hand, if input source process 440 is in input sequence continue 450 and receives the last input of a given input definition, it will transition to input sequence end 455, thereby successfully receiving a group of sub-events.
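These per-input-source transitions can likewise be sketched as a small state machine. As before, the class and state names are illustrative stand-ins, not part of the disclosed embodiments:

```python
# Sketch of the input source process of FIG. 4B; one instance per input source.
BEGIN, CONTINUE, END, CANCEL = ("input sequence begin", "input sequence continue",
                                "input sequence end", "input sequence cancel")

class InputSourceProcess:
    def __init__(self, sequence):
        self.sequence = sequence       # the full expected input sequence
        self.progress = 0
        self.state = BEGIN             # input sequence begin 445

    def feed(self, inp):
        if self.state in (END, CANCEL):
            return self.state          # terminal for this input sequence
        if inp != self.sequence[self.progress]:
            self.state = CANCEL        # input is not part of the sequence (460)
        else:
            self.progress += 1
            self.state = END if self.progress == len(self.sequence) else CONTINUE
        return self.state

drag = InputSourceProcess(["down", "move", "up"])
assert drag.feed("down") == CONTINUE
assert drag.feed("move") == CONTINUE
assert drag.feed("up") == END

# An input outside the sequence cancels it (input sequence cancel 460).
stray = InputSourceProcess(["down", "move", "up"])
stray.feed("down")
assert stray.feed("keypress") == CANCEL
```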
In some embodiments, the input source processing 440 may be implemented for a particular view or program level. In this case, some of the sub-event sequences may result in a transition to the input canceled state 460.
By way of example, consider FIG. 4C, which assumes an actively involved view, represented here only by its view input source processor 480 (hereinafter "view 480"). View 480 includes a vertical swipe event recognizer, represented only by vertical swipe event recognizer 468 (hereinafter "recognizer 468"), as one of its event recognizers. In this case, recognizer 468 may require, as part of its definition, detection of: 1) finger down 465-1; 2) an optional short delay 465-2; 3) a vertical swipe of at least N pixels 465-3; and 4) finger liftoff 465-4.
For this example, recognizer 468 also has its delayed touch start flag 328 and touch cancel flag 332 set. Now consider the delivery of the following sequence of sub-events to recognizer 468, as well as to view 480:
sub-event 465-1: detect finger down, which corresponds to the event definition of recognizer 468
sub-event 465-2: measure delay, which corresponds to the event definition of recognizer 468
sub-event 465-3: the finger performs a vertical swipe movement compatible with vertical scrolling, but it is less than N pixels, and therefore does not correspond to the event definition of recognizer 468
sub-event 465-4: detect finger liftoff, which corresponds to the event definition of recognizer 468
Here, recognizer 468 would successfully recognize sub-events 1 and 2 as part of its event definition, and accordingly would be in event possible state 472 immediately prior to the delivery of sub-event 3. Since recognizer 468 has its delayed touch start flag 328 set, the initial touch sub-event is not sent to the hit view. Accordingly, the input source process 440 of view 480 would still be in the input sequence begin state immediately prior to the delivery of sub-event 3.
Once the delivery of sub-event 3 to recognizer 468 is complete, the state of recognizer 468 transitions to event impossible 476; importantly, recognizer 468 has now determined that the sequence of sub-events does not correspond to its specific vertical swipe gesture event type (i.e., it has decided the event is not a vertical swipe). The input source process 440 for view input source processor 480 also updates its state. In some embodiments, the state of view input source processor 480 proceeds from input sequence begin state 482 to input sequence continue state 484 when the event recognizer sends status information indicating that it has begun recognizing an event. When the touch or input ends without an event having been recognized, and the touch cancel flag 332 of the event recognizer has been set, view input source processor 480 proceeds to input sequence cancel state 488. Alternatively, if the touch cancel flag 332 of the event recognizer is not set, view input source processor 480 proceeds to input sequence end state 486 when the touch or input ends.
Since touch cancel flag 332 of event recognizer 468 is set, when event recognizer 468 transitions to event impossible state 476, the recognizer sends a touch cancellation sub-event or message to the hit view corresponding to the event recognizer. As a result, view input source processor 480 will transition to input sequence cancel state 488.
In some embodiments, the delivery of sub-event 465-4 is not germane to the event recognition decision already made by recognizer 468, although the other event recognizers of view input source processor 480, if any, may continue to analyze the sequence of sub-events.
The following table presents, in summarized tabular form, the processing of this exemplary sub-event sequence 465 in relation to the states of event recognizer 468 and view 480 described above. In this example, because touch cancel flag 332 of recognizer 468 is set, the state of view 480 proceeds from input sequence begin 445 to input sequence cancel 488:

Sub-event sequence 465             | State: recognizer 468        | State: view 480
before delivery begins             | event recognition begins 470 |
detect finger down 465-1           | event possible 472           | input sequence begin 482
measure delay 465-2                | event possible 472           | input sequence continue 484
detect finger vertical swipe 465-3 | event impossible 476         | input sequence continue 484
detect finger liftoff 465-4        | event impossible 476         | input sequence cancel 488
Turning to FIG. 5A, attention is directed to an example of a sub-event sequence 520 being received by a view that includes a plurality of event recognizers. For this example, two event recognizers are shown in FIG. 5A: scroll event recognizer 580 and tap event recognizer 590. For purposes of illustration, the receipt of sub-event sequence 520 will be related to view search results panel 304 in FIG. 3A, and to the state transitions in scroll event recognizer 580 and tap event recognizer 590. Note that in this example, sub-event sequence 520 defines a tap finger gesture on a touch-sensitive display or trackpad, but the same event recognition techniques could apply in embodiments using program hierarchies with program levels and/or in numerous other contexts (e.g., detecting a mouse button press).
Before the first sub-event is delivered to view search results panel 304, event recognizers 580 and 590 are in event recognition begins states 582 and 592, respectively. Following touch 301, which is delivered as detect finger down sub-event 521-1 to the actively involved event recognizers for view search results panel 304 as touch sub-event 301-2 (as well as to the actively involved event recognizers for map view 305 as touch sub-event 301-3), scroll event recognizer 580 transitions to event possible state 584, and similarly, tap event recognizer 590 transitions to event possible state 594. This is because the event definitions of both a tap and a scroll begin with a touch, such as detecting a finger down on the touch-sensitive surface.
Some definitions of tap and scroll gestures may optionally include a delay between the initial touch and any next step in the event definition. In all examples discussed herein, the example event definition for both the tap and scroll gestures defines a delay sub-event after the recognition of the first touch sub-event (detection of finger drop).
Accordingly, when the measure delay sub-event 521-2 is passed to the event recognizers 580 and 590, both remain in the event possible states 584 and 594, respectively.
Finally, detect finger liftoff sub-event 521-3 is delivered to event recognizers 580 and 590. In this case, the state transitions of event recognizers 580 and 590 differ, because the event definitions for tap and scroll are different. In the case of scroll event recognizer 580, the next sub-event that would keep it in the event possible state is detect movement. However, since the delivered sub-event is detect finger liftoff 521-3, scroll event recognizer 580 transitions to event impossible state 588. The tap event definition, by contrast, concludes with a finger liftoff sub-event. Thus, after detect finger liftoff sub-event 521-3 is delivered, tap event recognizer 590 transitions to event recognized state 596.
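The divergence between the two recognizers can be replayed with a small sequence matcher. This Python sketch is illustrative only; the sub-event names and definitions are simplified stand-ins for the event definitions discussed above:

```python
# Replay sub-event sequence 520 against simplified tap and scroll definitions.
def run(definition, sequence):
    progress, state = 0, "possible"
    for sub_event in sequence:
        if state != "possible":
            break                                   # terminal state reached
        if sub_event == definition[progress]:
            progress += 1
            state = "recognized" if progress == len(definition) else "possible"
        else:
            state = "impossible"                    # not part of the definition
    return state

tap_def    = ["finger down", "delay", "finger liftoff"]
scroll_def = ["finger down", "delay", "finger move", "finger liftoff"]
sequence_520 = ["finger down", "delay", "finger liftoff"]

assert run(tap_def, sequence_520) == "recognized"     # tap recognizer 590 -> 596
assert run(scroll_def, sequence_520) == "impossible"  # scroll recognizer 580 -> 588
```

Feeding the same matcher a sequence containing a move sub-event before liftoff reverses the outcome, which mirrors the scroll example of FIG. 5B.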
Note that in some embodiments, as discussed above with respect to FIGS. 4B and 4C, the input source process 440 discussed in FIG. 4B may be used at the view level for various purposes. The following table presents, in summarized tabular form, the delivery of sub-event sequence 520 in relation to event recognizers 580 and 590 and input source process 440:

Sub-event sequence 520      | State: scroll event recognizer 580 | State: tap event recognizer 590 | State: input source process 440
before delivery begins      | event recognition begins 582       | event recognition begins 592    |
detect finger down 521-1    | event possible 584                 | event possible 594              | input sequence begin 445
measure delay 521-2         | event possible 584                 | event possible 594              | input sequence continue 450
detect finger liftoff 521-3 | event impossible 588               | event recognized 596            | input sequence end 455
Turning to FIG. 5B, attention is directed to another exemplary sub-event sequence 530, received by a view that includes multiple event recognizers. For this example, two event recognizers are shown in FIG. 5B: scroll event recognizer 580 and tap event recognizer 590. For purposes of illustration, the receipt of sub-event sequence 530 will be related to view search results panel 304 in FIG. 3A, and to the state transitions of scroll event recognizer 580 and tap event recognizer 590. Note that in this example, sub-event sequence 530 defines a scroll finger gesture on a touch-sensitive display, but the same event recognition techniques could apply in embodiments using program hierarchies with program levels and/or in numerous other contexts (e.g., detecting a mouse button press, mouse movement, and mouse button release).
Event recognizers 580 and 590 are in event recognition start states 582 and 592, respectively, before the first sub-event passes to the actively involved event recognizer for view search results panel 304. Following the transmission of the sub-event corresponding to touch 301 (as discussed above), the scroll event recognizer 580 transitions to an event possible state 584, and similarly, the tap event recognizer 590 transitions to an event possible state 594.
When measure delay sub-event 531-2 is delivered to event recognizers 580 and 590, both remain in event possible states 584 and 594, respectively.
Next, detect finger movement sub-event 531-3 is delivered to event recognizers 580 and 590. In this case, the state transitions of event recognizers 580 and 590 differ, because the event definitions for tap and scroll are different. In the case of scroll event recognizer 580, the next sub-event that keeps it in the event possible state is a detected movement, so scroll event recognizer 580 remains in event possible state 584 when it receives detect finger movement sub-event 531-3. However, as discussed above, the tap definition concludes with a finger liftoff sub-event, so tap event recognizer 590 transitions to event impossible state 598.
Finally, detect finger liftoff sub-event 531-4 is delivered to event recognizers 580 and 590. The tap event recognizer is already in event impossible state 598, so no state transition occurs. The event definition of scroll event recognizer 580 concludes with detecting finger liftoff. Since the delivered sub-event is detect finger liftoff 531-4, scroll event recognizer 580 transitions to event recognized state 586. It is noted that a finger movement on the touch-sensitive surface may produce multiple movement sub-events, and therefore a scroll may be recognized before liftoff and may continue to be recognized until liftoff.
The following table presents, in summarized tabular form, the delivery of sub-event sequence 530 in relation to event recognizers 580 and 590 and input source process 440:

Sub-event sequence 530       | State: scroll event recognizer 580 | State: tap event recognizer 590 | State: input source process 440
before delivery begins       | event recognition begins 582       | event recognition begins 592    |
detect finger down 531-1     | event possible 584                 | event possible 594              | input sequence begin 445
measure delay 531-2          | event possible 584                 | event possible 594              | input sequence continue 450
detect finger movement 531-3 | event possible 584                 | event impossible 598            | input sequence continue 450
detect finger liftoff 531-4  | event recognized 586               | event impossible 598            | input sequence end 455
Turning to FIG. 5C, attention is directed to yet another exemplary sub-event sequence 540, received by a view that includes multiple event recognizers. For this example, two event recognizers are shown in FIG. 5C: double tap event recognizer 570 and tap event recognizer 590. For purposes of illustration, the receipt of sub-event sequence 540 will be related to map view 305 in FIG. 3A, and to the state transitions in double tap event recognizer 570 and tap event recognizer 590. Note that in this example, sub-event sequence 540 defines a double tap gesture on a touch-sensitive display, but the same event recognition techniques could apply in embodiments using program hierarchies with program levels and/or in numerous other contexts (e.g., detecting a mouse double click).
Event recognizers 570 and 590 are in event recognition begins states 572 and 592, respectively, before the first sub-event is delivered to the actively involved event recognizers for map view 305. Following the delivery of the sub-events associated with touch 301 to map view 305 (as described above), double tap event recognizer 570 and tap event recognizer 590 transition to event possible states 574 and 594, respectively. This is because the event definitions of both a tap and a double tap begin with a touch, such as detecting a finger down 541-1 on the touch-sensitive surface.
When measure delay sub-event 541-2 is delivered to event recognizers 570 and 590, both remain in event possible states 574 and 594, respectively.
Next, detect finger liftoff sub-event 541-3 is delivered to event recognizers 570 and 590. In this case, the state transitions of event recognizers 570 and 590 differ, because the exemplary event definitions for tap and double tap are different. In the case of tap event recognizer 590, the last sub-event in the event definition is detect finger liftoff, so tap event recognizer 590 transitions to event recognized state 596.
Double tap event recognizer 570, however, remains in event possible state 574, regardless of what the user may eventually do, because a complete event recognition definition for a double tap requires another delay followed by a complete tap sub-event sequence. This creates an ambiguity between tap event recognizer 590, which is already in event recognized state 596, and double tap event recognizer 570, which is still in event possible state 574.
Accordingly, in some embodiments, an event recognizer may implement an exclusive flag as well as an exclusive exception list, as discussed above with respect to FIGS. 3B and 3C. Here, exclusive flag 324 of tap event recognizer 590 would be set, and in addition, exclusive exception list 326 of tap event recognizer 590 would be configured to continue permitting the delivery of sub-events to some event recognizers (e.g., double tap event recognizer 570) after tap event recognizer 590 enters event recognized state 596.
While tap event recognizer 590 remains in event recognized state 596, sub-event sequence 540 continues to be delivered to double tap event recognizer 570, where measure delay sub-event 541-4, detect finger down sub-event 541-5, and measure delay sub-event 541-6 keep double tap event recognizer 570 in event possible state 574; delivery of the final sub-event of sequence 540, detect finger liftoff 541-7, transitions double tap event recognizer 570 to event recognized state 576.
At this point, map view 305 takes the double tap event recognized by event recognizer 570, rather than the single tap event recognized by tap event recognizer 590. This decision to take the double tap event is made in light of the combination of: exclusive flag 324 of tap event recognizer 590 being set; exclusive exception list 326 of tap event recognizer 590 including double tap events; and the fact that both tap event recognizer 590 and double tap event recognizer 570 successfully recognized their respective event types.
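The delivery-gating effect of the exclusive flag and its exception list can be sketched as follows. This is an illustrative Python sketch; the class, the `deliver` helper, and the `claimed_by` parameter are hypothetical, not part of the disclosed embodiments:

```python
# Sketch: once an exclusive recognizer claims an event, only that recognizer
# and the recognizers on its exception list keep receiving sub-events.
class Recognizer:
    def __init__(self, name, exclusive=False, exception_list=()):
        self.name = name
        self.exclusive = exclusive
        self.exception_list = set(exception_list)  # names allowed to keep receiving
        self.received = []

def deliver(sub_event, recognizers, claimed_by=None):
    for r in recognizers:
        blocked = (claimed_by is not None and claimed_by.exclusive
                   and r is not claimed_by
                   and r.name not in claimed_by.exception_list)
        if not blocked:
            r.received.append(sub_event)

tap = Recognizer("tap", exclusive=True, exception_list={"double tap"})
double_tap = Recognizer("double tap")
scroll = Recognizer("scroll")
recognizers = [tap, double_tap, scroll]

deliver("finger down", recognizers)                     # nothing claimed yet
deliver("finger down #2", recognizers, claimed_by=tap)  # after tap recognizes

assert "finger down" in scroll.received
assert "finger down #2" in double_tap.received          # on tap's exception list
assert "finger down #2" not in scroll.received          # blocked by exclusivity
```

This is why the double tap recognizer in FIG. 5C continues receiving sub-events 541-4 through 541-7 after the tap is recognized, while other recognizers would be cut off.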
The following table presents, in summarized tabular form, the delivery of sub-event sequence 540 in relation to event recognizers 570 and 590 and input source process 440:

Sub-event sequence 540      | State: double tap event recognizer 570 | State: tap event recognizer 590 | State: input source process 440
before delivery begins      | event recognition begins 572           | event recognition begins 592    |
detect finger down 541-1    | event possible 574                     | event possible 594              | input sequence begin 445
measure delay 541-2         | event possible 574                     | event possible 594              | input sequence continue 450
detect finger liftoff 541-3 | event possible 574                     | event recognized 596            | input sequence continue 450
measure delay 541-4         | event possible 574                     | event recognized 596            | input sequence continue 450
detect finger down 541-5    | event possible 574                     | event recognized 596            | input sequence continue 450
measure delay 541-6         | event possible 574                     | event recognized 596            | input sequence continue 450
detect finger liftoff 541-7 | event recognized 576                   | event recognized 596            | input sequence end 455
In another embodiment, in the event scenario of FIG. 5C, the single tap gesture is not recognized, because the single tap event recognizer has a wait list that identifies the double tap event recognizer. As a result, the single tap gesture is not recognized unless and until the double tap event recognizer enters the event impossible state. In this example, in which a double tap gesture is recognized, the single tap event recognizer remains in the event possible state until the double tap gesture is recognized, at which point the single tap event recognizer transitions to the event impossible state.
Turning now to FIGS. 6A and 6B, these figures are flow diagrams illustrating an event recognition method according to some embodiments. Method 600 is performed in an electronic device, which in some embodiments may be electronic device 102, as discussed above. In some embodiments, the electronic device may include a touch-sensitive surface configured to detect multi-touch gestures. Alternatively, the electronic device may include a touch screen configured to detect multi-touch gestures.
In method 600, the electronic device is configured to execute software that includes a view hierarchy having a plurality of views. Method 600 displays 608 one or more views of the view hierarchy, and executes 610 one or more software elements. Each software element is associated with a particular view, and each particular view includes one or more event recognizers, such as those described in FIGS. 3B and 3C as event recognizer structures 320 and 360, respectively.
Each event recognizer generally includes an event definition based on one or more sub-events, where the event definition may be implemented as a state machine, see, e.g., state machine 340 in FIG. 3B. The event recognizer also generally includes an event handler, where the event handler specifies an action for the target and is configured to send the action to the target in response to the event recognizer detecting an event corresponding to the event definition.
In some embodiments, at least one of the plurality of event recognizers is a gesture recognizer having a gesture definition and a gesture processor, as indicated at step 612 of FIG. 6A.
In some embodiments, the event definition defines a user gesture, as indicated by step 614 of FIG. 6A.
In addition, the event recognizer has a set of event recognition states 616. These event recognition states may include at least an event possible state, an event impossible state, and an event recognized state.
In some embodiments, if the event recognizer enters the event possible state, the event handler begins its preparation 618 for the corresponding action to be delivered to the target. As discussed above with respect to the examples in FIGS. 4A and 5A-5C, the state machine implemented for each event recognizer generally includes an initial state, e.g., event recognition start state 405. Receiving a sub-event forming an initial part of the event definition triggers a state change to the event possible state 410. Accordingly, in some embodiments, as the event recognizer transitions from the event recognition start state 405 to the event possible state 410, the event handler of the event recognizer may begin preparing its particular actions for delivery to the target of the event recognizer after the event is successfully recognized.
On the other hand, in some embodiments, if the event recognizer enters the event not possible state 420, the event handler may terminate preparation of its corresponding action 620. In some embodiments, terminating the corresponding action includes undoing any preparation of the corresponding action by the event handler.
The example of FIG. 5B illustrates this embodiment: the tap event recognizer 590 may have already begun preparation 618 of its action, but once the detect-finger-movement sub-event 531-3 is delivered to the tap event recognizer 590, the recognizer 590 will transition to the event impossible state 598, 578. At that point, the tap event recognizer 590 may terminate preparation 620 of the action for which it had begun preparation 618.
In some embodiments, if the event recognizer enters the event recognized state, the event handler completes its preparation 622 of the corresponding action to be delivered to the target. The example of FIG. 5C illustrates this embodiment: the double tap is recognized by the event recognizer actively involved for the map view 305, which in some embodiments would be the event bound to selecting and/or executing the search result shown by the map view 305. Here, after the double tap event recognizer 570 successfully recognizes the double tap event composed of sub-event sequence 540, the event handler of the map view 305 completes preparation 622 of its action, namely, indicating that it has received an activation command.
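The state transitions described above can be summarized in a brief, illustrative sketch. This is not the patented implementation; the class, method, and sub-event names below are invented solely for illustration of the event possible/impossible/recognized states and the handler's prepare, terminate, and complete steps (618, 620, 622).

```python
# Illustrative sketch of an event recognizer's state machine. All names
# are hypothetical; only the state semantics follow the description above.

class TapRecognizer:
    START, POSSIBLE, IMPOSSIBLE, RECOGNIZED = "start", "possible", "impossible", "recognized"

    def __init__(self, handler):
        self.state = self.START
        self.handler = handler  # object with prepare/terminate/complete methods

    def feed(self, sub_event):
        if self.state == self.START and sub_event == "touch-down":
            self.state = self.POSSIBLE
            self.handler.prepare()       # step 618: begin preparing the action
        elif self.state == self.POSSIBLE and sub_event == "touch-up":
            self.state = self.RECOGNIZED
            self.handler.complete()      # step 622: finish preparation for delivery
        elif self.state == self.POSSIBLE:
            self.state = self.IMPOSSIBLE # e.g., finger movement breaks a tap
            self.handler.terminate()     # step 620: undo any preparation


class RecordingHandler:
    """Records which handler steps ran, for demonstration."""
    def __init__(self):
        self.log = []
    def prepare(self):
        self.log.append("prepare")
    def terminate(self):
        self.log.append("terminate")
    def complete(self):
        self.log.append("complete")
```

Feeding a touch-down followed by finger movement reproduces the FIG. 5B scenario (preparation begun, then terminated), while a touch-down followed by a touch-up reproduces successful recognition.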
In some embodiments, the event handler delivers 624 its corresponding action to the target associated with the event recognizer. Continuing with the example of FIG. 5C, the prepared action, i.e., the activation command for the map view 305, would be delivered to the specific target associated with the map view 305, which may be any suitable programmatic method or object.
In some embodiments, multiple event recognizers may independently process 626 the sequence of one or more sub-events in parallel.
In some embodiments, one or more event recognizers may be configured as an exclusive event recognizer 628, as with the exclusivity flags 324 and 364 discussed above with respect to FIGS. 3B and 3C, respectively. When an event recognizer is configured as an exclusive event recognizer, the event delivery system prevents any other event recognizers for the actively involved views in the view hierarchy (except those listed in the exception list 326, 366 of the event recognizer that recognizes the event) from receiving subsequent sub-events (of the same sequence of sub-events) after the exclusive event recognizer recognizes an event. Further, when a non-exclusive event recognizer recognizes an event, the event delivery system prevents any exclusive event recognizers for the actively involved views in the view hierarchy from receiving subsequent sub-events, except for those (if any) listed in the exception list 326, 366 of the event recognizer that recognized the event.
In some embodiments, an exclusive event recognizer may include 630 an event exception list, such as the exclusivity exception lists 326 and 366 discussed above with respect to FIGS. 3B and 3C, respectively. As noted in the discussion of FIG. 5C above, an event recognizer's exclusivity exception list can be used to allow event recognizers to continue event recognition even when the sub-event sequences that make up their respective event definitions overlap. Accordingly, in some embodiments, the event exception list includes events 632 whose corresponding event definitions have repetitive sub-events, such as the single tap/double tap event example of FIG. 5C.
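As a hedged illustration of the exclusivity rules above, the following sketch computes which recognizers remain eligible to receive subsequent sub-events once a recognizer has recognized an event. The class and attribute names are assumptions for illustration, not part of the claimed system.

```python
# Illustrative sketch: after a recognition, an exclusive winner blocks all
# other recognizers except those on its exception list (cf. 326, 366).

class Rec:
    def __init__(self, name, exclusive=False, exception_list=()):
        self.name = name
        self.exclusive = exclusive
        self.exception_list = set(exception_list)

def receivers_after_recognition(all_recognizers, winner):
    """Return the recognizers still eligible for subsequent sub-events."""
    if not winner.exclusive:
        return list(all_recognizers)
    return [r for r in all_recognizers
            if r is winner or r.name in winner.exception_list]
```

For instance, a single-tap recognizer whose exception list names the double-tap recognizer (the overlapping-definition case of FIG. 5C) would leave the double-tap recognizer eligible while blocking an unrelated swipe recognizer.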
In some embodiments, the event definition may define a user input operation 634.
In some embodiments, one or more event recognizers may be adapted to delay the transmission of each sub-event in the sequence of sub-events until after the event is recognized.
The method 600 detects 636 a sequence of one or more sub-events, which in some embodiments may include primitive touch events 638. Primitive touch events may include, but are not limited to, basic components of a touch-based gesture on the touch-sensitive surface, such as data related to an initial finger or stylus touch-down, data related to the start of movement of a finger or stylus across the touch-sensitive surface, movement of two fingers in opposite directions, lift-off of a stylus from the touch-sensitive surface, and so forth.
Sub-events in the sequence of one or more sub-events may include a variety of forms including, but not limited to, a key press hold, a key release, a button press hold, a button press release, a joystick movement, a mouse button press, a mouse button release, a stylus touch, a stylus movement, a stylus release, a verbal indication, a detected eye motion, a biometric input, a detected physiological change of the user, and others.
The method 600 identifies 640 one of the views of the view hierarchy as a hit view. The hit view establishes which views in the view hierarchy are actively involved views. An example is shown in FIG. 3A, where the actively involved views 303 include the search results panel 304 and the map view 305, because the touch sub-event 301 contacts the area associated with the map view 305.
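A minimal sketch of hit-view identification, under the common assumption that the hit view is the deepest view whose bounds contain the touch and that the hit view plus its ancestors become the actively involved views. The view names and geometry below are invented for illustration.

```python
# Illustrative hit-testing sketch: walk the view hierarchy and return the
# path from the root to the deepest view containing the touch point.

class View:
    def __init__(self, name, bounds, children=()):
        self.name, self.bounds, self.children = name, bounds, list(children)

    def contains(self, x, y):
        x0, y0, x1, y1 = self.bounds
        return x0 <= x <= x1 and y0 <= y <= y1

def hit_view_path(root, x, y):
    """Views from root down to the hit view; [] if the touch misses the root.
    The returned path is the set of actively involved views, hit view last."""
    if not root.contains(x, y):
        return []
    for child in root.children:
        path = hit_view_path(child, x, y)
        if path:
            return [root] + path
    return [root]
```

By analogy with FIG. 3A, a touch landing inside a map view nested in a search results panel would yield a path ending at the map view, making both the panel and the map view actively involved.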
In some embodiments, a first actively involved view in the view hierarchy may be configured 642 to prevent the respective sub-event from being delivered to the event recognizers associated with the first actively involved view. This behavior may implement the skip property discussed above with respect to FIGS. 3B and 3C (330 and 370, respectively). When the skip property is set for an event recognizer, delivery of the respective sub-event is still performed for the event recognizers associated with the other actively involved views in the view hierarchy.
In some embodiments, the first actively involved view in the view hierarchy may be configured 644 to prevent the respective sub-event from being delivered to the event recognizers associated with the first actively involved view unless the first actively involved view is the hit view. This behavior may implement the conditional skip property discussed above with respect to FIGS. 3B and 3C (332 and 372, respectively).
In some embodiments, a second actively involved view in the view hierarchy is configured 646 to prevent the respective sub-event from being delivered to the event recognizers associated with the second actively involved view and to the event recognizers associated with ancestors of the second actively involved view. This behavior may implement the stop property (328 and 368, respectively) discussed above with respect to FIGS. 3B and 3C.
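The skip, conditional-skip, and stop properties above can be sketched as a filter over which (view, recognizer) pairs receive a sub-event. The semantics assumed here are those just described: skip excludes a view's own recognizers, conditional skip excludes them unless the view is the hit view, and stop shields the view and its ancestors; all class and attribute names are illustrative.

```python
# Illustrative sketch of sub-event delivery filtering by view properties.

class IVView:
    """An actively involved view with illustrative delivery flags."""
    def __init__(self, name, recognizers, ancestors=(), skip=False,
                 conditional_skip=False, stop=False):
        self.name, self.recognizers = name, list(recognizers)
        self.ancestors = set(ancestors)
        self.skip, self.conditional_skip, self.stop = skip, conditional_skip, stop

def eligible_recognizers(involved_views, hit_view):
    """Return (view name, recognizer) pairs that should receive a sub-event.
    involved_views is ordered innermost (hit view) to outermost."""
    blocked = set()
    for view in involved_views:
        if view.stop:                      # stop shields the view and its ancestors
            blocked.add(view.name)
            blocked.update(view.ancestors)
    out = []
    for view in involved_views:
        if view.name in blocked:
            continue
        if view.skip:                      # skip: never receives, others still do
            continue
        if view.conditional_skip and view is not hit_view:
            continue                       # conditional skip: only as hit view
        out.extend((view.name, r) for r in view.recognizers)
    return out
```

With a skipping middle view, delivery still reaches the hit view and the outermost view; with a stopping middle view, neither the middle view nor its ancestors receive the sub-event.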
The method 600 delivers 648 a respective sub-event to the event recognizers for each actively involved view in the view hierarchy. In some embodiments, the event recognizers for the actively involved views in the view hierarchy process the respective sub-event before processing the next sub-event in the sequence of sub-events. In some embodiments, event recognizers for the actively involved views in the view hierarchy make their sub-event recognition decisions while processing the respective sub-event.
In some embodiments, an event recognizer for a view actively involved in a view hierarchy may process a sequence 650 of one or more sub-events simultaneously; alternatively, an event recognizer for a view that is actively involved in the view hierarchy may process a sequence of one or more sub-events in parallel.
In some embodiments, one or more event recognizers may be adapted to delay delivering 652 one or more sub-events of the sequence of sub-events until after the event recognizer recognizes the event. This behavior reflects a delayed event. For example, consider a single tap gesture in a view for which multiple tap gestures are also possible. In this case, the tap event becomes a "tap + delay" recognizer. Essentially, when an event recognizer implements this behavior, the event recognizer will delay the event recognition until it confirms that the sequence of sub-events does in fact correspond to its event definition. This behavior may be appropriate when the receiving view cannot properly respond to cancellation of an event. In some embodiments, the event recognizer will delay updating its event recognition state to its respective actively involved view until the event recognizer confirms that the sequence of sub-events does not correspond to its event definition. As discussed above with respect to FIGS. 3B and 3C, providing delay touch began flags 328, 368, delay touch ended flags 330, 370, and touch cancellation flags 332, 372 adapts the sub-event delivery techniques and the event recognizer and view state information updates to specific needs.
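The "tap + delay" behavior can be sketched as a small decision function: a single-tap recognizer withholds its result until the double-tap window has elapsed, so the view never has to handle a cancellation for a second tap. The window duration and function name below are assumptions for illustration, not values from the patent.

```python
# Illustrative sketch of delayed recognition for overlapping tap gestures.

def resolve_taps(tap_times, double_tap_window=0.3):
    """Given touch-up timestamps (seconds), decide between 'double-tap' and
    'single-tap' once the double-tap window after the first tap has passed.
    A second tap inside the window wins; otherwise the single tap fires late."""
    if len(tap_times) >= 2 and tap_times[1] - tap_times[0] <= double_tap_window:
        return "double-tap"
    return "single-tap"
```

The cost of this design is latency: a lone tap is reported only after the window expires, which is why the delay flags are opt-in per recognizer rather than global.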
FIGS. 7A-7S illustrate example user interfaces and user inputs recognized by an event recognizer for navigating through concurrently open applications, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 8A-8B, 9A-9C, and 10A-10B.
Although many of the examples below will be given with reference to input on a touch screen display 156 (in which a touch sensitive surface and a display are combined), in some embodiments, the device detects input on a touch sensitive surface (e.g., a touchpad or trackpad) that is independent of the display. In some embodiments, a major axis of the touch-sensitive surface corresponds to a major axis on the display. According to these embodiments, the device detects contact with the touch sensitive surface at locations corresponding to respective locations on the display. In this way, user input detected by the device on the touch-sensitive surface is used by the device to manipulate a user interface on a display of the electronic device when the touch-sensitive surface and the display are separated. It should be understood that similar methods may be used for the other user interfaces described herein.
FIG. 7A illustrates an example user interface ("home screen" 708) on the electronic device 102, according to some embodiments. Similar user interfaces may be implemented on other electronic devices. In some embodiments, the home screen 708 is displayed by an application launcher software application, sometimes referred to as a springboard. In some embodiments, the user interface on the touch screen 156 includes the following elements, or a subset or superset thereof:
signal strength indicators 702 of wireless communication, such as cellular and Wi-Fi signals;
time 704; and
a battery status indicator 706.
The example user interface includes a plurality of application icons 5002 (e.g., 5002-25 through 5002-38). From the home screen 708, a finger gesture can be used to launch an application. For example, a tap finger gesture 701 at a location corresponding to the application icon 5002-36 initiates launching of the email application.
In FIG. 7B, in response to detecting the finger gesture 701 on the application icon 5002-36, the email application is launched and the email application view 712-1 is displayed on the touch screen 156. The user may launch other applications in a similar manner. For example, the user can press the home button 710 to return from any of the application views 712 to the home screen 708 (fig. 7A) and launch other applications using finger gestures on the respective application icons 5002 on the home screen 708.
FIGS. 7C-7G illustrate the sequential launching of respective applications and the sequential display of respective user interfaces (i.e., respective application views) in response to detection of respective finger gestures at locations corresponding to respective application icons 5002 on the home screen 708. In particular, FIG. 7C illustrates the display of the media library application view 712-2 in response to a finger gesture on the application icon 5002-32. In FIG. 7D, notepad application view 712-3 is displayed in response to a finger gesture on application icon 5002-30. FIG. 7E illustrates the display of the map application view 712-4 in response to a finger gesture on the application icon 5002-27. In FIG. 7F, the weather application view 712-5 is displayed in response to a finger gesture on the application icon 5002-28. FIG. 7G illustrates the display of a web browser application view 712-6 in response to a finger gesture on the application icon 5002-37. In some embodiments, the sequence of open applications corresponds to the launching of an email application, a media library application, a notepad application, a map application, a weather application, and a web browser application.
FIG. 7G also illustrates a finger gesture 703 (e.g., a tap gesture) on a user interface object (e.g., a bookmark icon). In some embodiments, in response to detecting the finger gesture 703 on the bookmark icon, the web browser application displays a list of bookmarks on the touch screen 156. Similarly, the user may interact with the displayed application (e.g., a web browser application) with other gestures (e.g., a tap gesture on an address user interface object that allows the user to enter a new address or modify a displayed address, typically using an on-screen keyboard, a tap gesture on any link in the displayed web page that begins navigating to the web page corresponding to the selected link, etc.).
In FIG. 7G, a first predetermined input (e.g., a double-click 705 on the home button 710) is detected. Alternatively, a multi-finger gesture (e.g., a three-finger swipe-up gesture, as illustrated by the movement of finger contacts 707, 709, and 711) is detected on the touch screen 156.
FIG. 7H illustrates a portion of the web browser application view 712-6 and an application icon area 716 being displayed simultaneously in response to detecting the first predetermined input (e.g., a double tap 705 or a multi-finger gesture including finger contacts 707, 709, and 711). In some embodiments, in response to detecting the first predetermined input, the device enters an application view selection mode for selecting one of the concurrently open applications, and the portion of the web browser application view 712-6 and the application icon area 716 are concurrently displayed as part of the application view selection mode. The application icon area 716 includes a set of open application icons corresponding to at least some of the plurality of concurrently open applications. In this example, the portable electronic device has multiple applications (e.g., an email application, a media library application, a notepad application, a map application, a weather application, and a web browser application) that are open at the same time, although they are not all displayed at the same time. As illustrated in FIG. 7H, the application icon area 716 includes application icons (e.g., 5004-2, 5004-4, 5004-6, and 5004-8) for the weather application, the map application, the notepad application, and the media library application (i.e., the four applications immediately adjacent, in the sequence of open applications, to the currently displayed application, the web browser application). In some embodiments, the sequence or order of the open application icons displayed in the application icon area 716 corresponds to the sequence of open applications in a predetermined sequence (e.g., weather, map, notepad, and media library applications).
FIG. 7H also illustrates that a gesture 713 (e.g., a tap gesture) is detected on the open application icon 5004-8. In some embodiments, in response to detecting gesture 713, a corresponding application view (e.g., media library application view 712-2, FIG. 7C) is displayed.
FIG. 7H illustrates that a left-swipe gesture 715 is detected at a location corresponding to application icon area 716. In FIG. 7I, in response to detecting the left-swipe gesture 715, the application icons (e.g., 5004-2, 5004-4, 5004-6, and 5004-8) in the application icon area 716 are scrolled. As a result of the scrolling, an application icon 5004-12 for the email application is displayed in the application icon area 716 in place of the previously displayed application icons (e.g., 5004-2, 5004-4, 5004-6, and 5004-8).
In FIG. 7J, a first type of gesture (e.g., a multi-finger left-swipe gesture including movement of finger contacts 717, 719, and 721) is detected on web browser application view 712-6. FIG. 7K illustrates that in response to detecting the first type of gesture, a weather application view 712-5 is displayed on the touch screen 156. It should be noted that the weather application immediately follows the web browser application in the sequence of open applications.
FIG. 7K also illustrates that a second gesture of the first type (e.g., a multi-finger left-swipe gesture including movement of finger contacts 723, 725, and 727) is detected on weather application view 712-5. FIG. 7L illustrates that in response to detecting the second gesture of the first type, a map application view 712-4 is displayed on the touch screen 156. It should be noted that the map application immediately follows the weather application in the sequence of open applications.
FIG. 7L also illustrates that a third gesture of the first type (e.g., a multi-finger left-swipe gesture that includes movement of finger contacts 729, 731, and 733) is detected on map application view 712-4. FIG. 7M illustrates that in response to detecting the third gesture of the first type, a notepad application view 712-3 is displayed on the touch screen 156. It should be noted that the notepad application immediately follows the map application in the sequence of open applications.
FIG. 7M also illustrates that a fourth gesture of the first type (e.g., a multi-finger left-swipe gesture that includes movement of finger contacts 735, 737, and 739) is detected on notepad application view 712-3. FIG. 7N illustrates that in response to detecting the fourth gesture of the first type, a media library application view 712-2 is displayed on the touch screen 156. It should be noted that the media library application immediately follows the notepad application in the sequence of open applications.
FIG. 7N also illustrates that a fifth gesture of the first type (e.g., a multi-finger left-swipe gesture that includes movement of finger contacts 741, 743, and 745) is detected on media library application view 712-2. FIG. 7O illustrates that in response to detecting the fifth gesture of the first type, an email application view 712-1 is displayed on the touch screen 156. It should be noted that the email application immediately follows the media library application in the sequence of open applications.
FIG. 7O also illustrates that a sixth gesture of the first type (e.g., a multi-finger left-swipe gesture that includes movement of finger contacts 747, 749, and 751) is detected on email application view 712-1. FIG. 7P depicts the web browser application view 712-6 being displayed on the touch screen 156 in response to detecting the sixth gesture of the first type. It should be noted that the web browser application is at one end of the sequence of open applications, while the email application is at the other end of the sequence of open applications.
FIG. 7P also illustrates that a second type of gesture (e.g., a multi-finger right swipe gesture including movement of finger contacts 753, 755, and 757) is detected on the web browser application view 712-6. FIG. 7Q illustrates that, in some embodiments, in response to detecting the second type of gesture, an email application view 712-1 is displayed on the touch screen 156.
Referring to FIG. 7R, a multi-finger gesture (e.g., a five-finger pinch gesture including movement of finger contacts 759, 761, 763, 765, and 767) is detected on web browser application view 712-6. FIG. 7S illustrates that the web browser application view 712-6 and at least a portion of the home screen 708 are displayed simultaneously while the multi-finger gesture is detected on the touch screen 156. As illustrated, web browser application view 712-6 is displayed at a reduced scale. The reduced scale is adjusted according to the multi-finger gesture while the multi-finger gesture is detected on the touch screen 156. For example, the reduced scale decreases as the finger contacts 759, 761, 763, 765, and 767 are pinched further (i.e., web browser application view 712-6 is displayed at a smaller scale). Conversely, the reduced scale increases as the finger contacts 759, 761, 763, 765, and 767 spread apart (i.e., web browser application view 712-6 is displayed at a larger scale than before).
In some embodiments, when the multi-finger gesture ceases to be detected, the web browser application view 712-6 ceases to be displayed and the entire home screen 708 is displayed. Alternatively, when the multi-finger gesture ceases to be detected, a determination is made as to whether the home screen 708 or the web browser application view 712-6 is to be displayed at full-screen scale. In some embodiments, the determination is made based on the reduced scale when the multi-finger gesture ceases to be detected (e.g., if the application view is displayed at a scale smaller than a predetermined threshold when the multi-finger gesture ceases to be detected, the entire home screen 708 is displayed; if the application view is displayed at a scale larger than the predetermined threshold when the multi-finger gesture ceases to be detected, the application view is displayed at full-screen scale without displaying the home screen 708). In some embodiments, the determination is also made based on the velocity of the multi-finger gesture.
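The end-of-pinch decision above reduces to a simple threshold comparison, sketched below. The threshold value and function name are assumptions for illustration; the patent leaves the predetermined threshold unspecified.

```python
# Illustrative sketch of the decision made when the pinch gesture ends:
# below the threshold, show the home screen; otherwise restore the view.

def view_after_pinch(final_scale, threshold=0.5):
    """final_scale is the application view's scale (1.0 = full screen)
    at the moment the multi-finger gesture ceases to be detected."""
    return "home screen" if final_scale < threshold else "application view"
```

A velocity-based refinement, as mentioned above, could bias the decision toward the home screen when the fingers are still pinching inward quickly at lift-off.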
FIGS. 8A and 8B are flow diagrams illustrating an event recognition method 800 according to some embodiments. Method 800 is performed 802 in an electronic device (e.g., device 102, FIG. 1B) having a touch-sensitive display. The electronic device is configured to execute at least a first software application and a second software application. The first software application includes a first set of one or more gesture recognizers and the second software application includes one or more views and a second set of one or more gesture recognizers (e.g., application 133-2 has gesture recognizer 516-4 and application 133-1 has gesture recognizers 516-1 through 516-3 and views 508, 510, and 512, FIG. 3F). The respective gesture recognizers have corresponding gesture processors (e.g., gesture processor 552-1 corresponds to gesture recognizer 516-1 and gesture processor 552-3 corresponds to gesture recognizer 516-4). The first set of one or more gesture recognizers is typically different from the second set of one or more gesture recognizers.
Method 800 allows a user to use gestures to control hidden open applications (e.g., a first software application) that are not currently displayed on a display of an electronic device, such as a background application, a suspended application, or a dormant application. Thus, the user may perform an operation that is not provided by the application currently displayed on the display of the electronic device (e.g., the second software application) but is provided by one of the currently open applications (e.g., using a gesture to display a home screen or switch to the next software application for a hidden application launcher software application).
In some embodiments, the first software application (804) is an application launcher (e.g., a start point). For example, as shown in fig. 7A, the application launcher displays a plurality of application icons 5002 corresponding to a plurality of applications. The application launcher receives a user selection of the application icon 5002 (e.g., based on a finger gesture on the touch screen 156) and, in response to receiving the user selection, launches an application corresponding to the selected application icon 5002.
The second software application is typically a software application that is launched by an application launcher. As illustrated in fig. 7A and 7B, the application launcher receives information about the tap gesture 701 on the email application icon 5002-36 and launches the email application. In response, the email application displays the email application view 712-1 on the touch screen 156. The second software application may be any application corresponding to application icon 5002 (fig. 7A) or any other application that may be launched by an application launcher (e.g., media library application, fig. 7C, notepad application, fig. 7D, map application, fig. 7E, weather application, fig. 7F, web browser application, fig. 7G, etc.). In the following description of method 800, an application launcher is used as an exemplary first software application and a web browser application is used as an exemplary second software application.
In some embodiments, the electronic device has only two software applications in the program hierarchy: an application launcher and one other software application (typically a software application corresponding to one or more views displayed on the touch screen 156 of the electronic device 102).
In some embodiments, the first software application (806) is an operating system application. An operating system application, as used herein, refers to an application that is integrated with the operating system 118 (FIGS. 1A-1C). Operating system applications typically reside in the core OS layer 208 or the operating system API software 206 of FIG. 2. Operating system applications typically cannot be removed by a user, while other applications typically can be installed or removed by a user. In some embodiments, the operating system application includes an application launcher. In some embodiments, the operating system applications include a settings application (e.g., an application for displaying/modifying one or more values of system settings or device/global internal state 134, FIG. 1C). In some embodiments, the operating system applications include auxiliary modules 127. In some embodiments, the electronic device has only three software applications in the program hierarchy: an application launcher, a settings application, and one other application (typically a software application corresponding to one or more views displayed on the touch screen 156 of the electronic device 102).
The electronic device displays (808) at least a subset of the one or more views of the second software application (e.g., web browser application view 712-6, fig. 7G).
In some embodiments, displaying comprises (810) displaying at least a subset of the one or more views of the second software application without displaying any of the views of the first software application. For example, in FIG. 7G, the view of the application launcher (e.g., the home screen 708) is not displayed.
According to some embodiments, displaying comprises (812) displaying at least a subset of the one or more views of the second software application without displaying views of any other applications. For example, in FIG. 7G, only one or more views of a web browser application are displayed.
While displaying at least a subset of the one or more views of the second software application, the electronic device detects (814) a sequence of touch inputs on the touch-sensitive display (e.g., gesture 703, which includes a touch-down event and a touch-up event, or another gesture, which includes touch-down of finger contacts 707, 709, and 711, movement of finger contacts 707, 709, and 711 across the touch screen 156, and lift-off of finger contacts 707, 709, and 711). The sequence of touch inputs includes a first portion of the one or more touch inputs and a second portion of the one or more touch inputs following the first portion. As used herein, the term "sequence" refers to the order in which one or more touch events occur. For example, in a touch input sequence that includes finger contacts 707, 709, and 711, a first portion may include touch-down of finger contacts 707, 709, and 711, while a second portion may include movement of finger contacts 707, 709, and 711 and lift-off of finger contacts 707, 709, and 711.
In some embodiments, the detecting occurs (816) when a touch input in the first portion of the one or more touch inputs at least partially overlaps at least one of the displayed views of the second software application. In some embodiments, the first software application receives a first portion of the one or more touch inputs despite the touch inputs at least partially overlapping at least one of the displayed views of the second software application. For example, the application launcher receives a first portion of a touch input on a displayed view of a web browser (fig. 7G), although the application launcher is not displayed.
During a first phase of detecting a touch input sequence (818), the electronic device transmits (820) a first portion of the one or more touch inputs to the first software application and the second software application (e.g., using the event scheduler module 315, fig. 3D), identifies (822) one or more matching gesture recognizers from the gesture recognizers in the first set that recognize the first portion of the one or more touch inputs (e.g., using the event comparator 3033, fig. 3D in each gesture recognizer (typically, each receiving gesture recognizer) in the first set), and processes (824) the first portion of the one or more touch inputs (e.g., activating the corresponding event processor 319, fig. 3D) with the one or more gesture processors corresponding to the one or more matching gesture recognizers.
In some embodiments, the first phase of detecting the sequence of touch inputs is a phase of detecting a first portion of the one or more touch inputs.
With respect to the transmitting operation (820), in some embodiments, the first software application transmits the first portion of the one or more touch inputs to at least a subset of the gesture recognizers in the first set after receiving the first portion of the one or more touch inputs, and the second software application transmits the first portion of the one or more touch inputs to at least a subset of the gesture recognizers in the second set after receiving the first portion of the one or more touch inputs. In some embodiments, the electronic device or an event scheduler module (e.g., 315, FIG. 3D) in the electronic device transmits a first portion of the one or more touch inputs to a subset of gesture recognizers in at least the first and second sets (e.g., event scheduler module 315 transmits a first portion of the one or more touch inputs to gesture recognizers 516-1, 516-2, and 516-4, FIG. 3F).
For example, when a finger gesture including finger contacts 707, 709, and 711 is detected on the touch screen 156 (FIG. 7G), a touch down event is passed to one or more gesture recognizers of the application launcher and one or more gesture recognizers of the web browser application. In another example, a touch down event of tap gesture 703 (FIG. 7G) is communicated to one or more gesture recognizers of the application launcher and one or more gesture recognizers of the web browser application.
In some embodiments, when none of the gesture recognizers in the first set recognizes the first portion of the one or more touch inputs (e.g., because of a mismatch between the detected events and the gesture definitions, or because the gesture is not complete), processing the first portion of the one or more touch inputs includes performing a null operation (e.g., the device does not update the displayed user interface).
In some embodiments, the electronic device identifies one or more matching gesture recognizers from the gesture recognizers in the second set that recognize the first portion of the one or more touch inputs. The electronic device processes a first portion of the one or more touch inputs using one or more gesture processors corresponding to the one or more matched gesture recognizers. For example, in response to a tap gesture 703 (fig. 7G) communicated to one or more gesture recognizers of the web browser application, a matching gesture recognizer in the web browser application (e.g., gesture recognizer that recognizes a tap gesture on a bookmark icon, fig. 7G) processes the tap gesture 703 by displaying a list of bookmarks on the touch screen 156.
In some embodiments, after the first phase, during a second phase of detecting the sequence of touch inputs, the electronic device transmits (826, fig. 8B) the second portion of the one or more touch inputs to the first software application without transmitting the second portion of the one or more touch inputs to the second software application (e.g., using the event scheduler module 315, fig. 3D); identifying a second matching gesture recognizer that recognizes the touch input sequence from the one or more matching gesture recognizers (e.g., using event comparator 3033 in each matching gesture recognizer, FIG. 3D); and processing the sequence of touch inputs using gesture processors corresponding to the respective matched gesture recognizers. In some embodiments, the second phase of detecting the sequence of touch inputs is the phase of detecting the second portion of the one or more touch inputs.
For example, when a finger gesture including finger contacts 707, 709, and 711 is detected on the touch screen 156 (FIG. 7G), the touch move and lift-off events are transmitted to one or more gesture recognizers of the application launcher, without transmitting the touch events to the web browser application. The electronic device identifies a matching gesture recognizer of the application launcher (e.g., a three-finger up-swipe gesture recognizer) and processes the sequence of touch inputs using a gesture processor corresponding to the three-finger up-swipe gesture recognizer.
During the second phase, the second software application does not receive the second portion of the one or more touch inputs, typically because the first software application has priority (e.g., in the program hierarchy) over the second software application. Thus, in some embodiments, when the gesture recognizer in the first software application recognizes the first portion of the one or more touch inputs, the one or more gesture recognizers in the first software application exclusively receive the second subsequent portion of the one or more touch inputs. Additionally, during the second stage, the second software application may not receive the second portion of the one or more touch inputs because no gesture recognizer in the second software application matches the first portion of the one or more touch inputs.
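The priority rule described above can be illustrated with a small routing function. This is a hedged sketch under assumed names, not the actual event scheduler: when any first-set recognizer matches the first portion, the subsequent portion is delivered exclusively to the first software application.

```python
# Sketch of two-phase routing (names are illustrative): each recognizer set
# is a list of (name, predicate) pairs, where the predicate inspects the
# first portion of the touch inputs.

def route(first_portion, second_portion, first_set, second_set):
    matched = [name for name, pred in first_set if pred(first_portion)]
    if matched:
        # The first application has priority: its matching recognizers
        # exclusively receive the second, subsequent portion.
        return {"app": "first", "recognizers": matched,
                "delivered": first_portion + second_portion}
    matched = [name for name, pred in second_set if pred(first_portion)]
    return {"app": "second", "recognizers": matched,
            "delivered": first_portion + second_portion}

launcher = [("3-finger-swipe-up", lambda ev: len(ev) == 3)]
browser = [("bookmark-tap", lambda ev: len(ev) == 1)]

# Three simultaneous touch-downs: the launcher matches, so the move and
# lift-off events go only to the first software application.
decision = route(["down-707", "down-709", "down-711"], ["move", "lift"],
                 launcher, browser)
```

A single touch-down in the same setup would fall through to the second set, mirroring the tap-gesture example above.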
In some embodiments, processing the sequence of touch inputs using gesture processors corresponding to respective matched gesture recognizers includes (834) displaying in a first predetermined area of the touch-sensitive display at least a set of open application icons corresponding to at least some of the plurality of concurrently open applications and concurrently displaying at least a subset of the one or more views of the second software application. For example, in fig. 7H, the application icon 5004 in the predetermined area 716 corresponds to an application that is simultaneously open for the electronic device. In some embodiments, the application icons 5004 in the predetermined area 716 are displayed according to the sequence of open applications. In FIG. 7H, the electronic device simultaneously displays the predefined area 716 and a subset of the web browser application view 712-6.
In some embodiments, processing the sequence of touch inputs using gesture processors corresponding to respective matched gesture recognizers includes (828) displaying one or more views of the first software application. For example, in response to the multi-finger pinch gesture (fig. 7R), the electronic device displays a home screen 708 (fig. 7A). In some embodiments, displaying one or more views of the first software application includes displaying one or more views of the first software application without simultaneously displaying views corresponding to any other software application (e.g., fig. 7A).
In some embodiments, processing the sequence of touch inputs using the gesture processors corresponding to the respective matched gesture recognizers includes (830) replacing the display of the one or more views of the second software application with the display of the one or more views of the first software application (e.g., displaying the home screen 708, fig. 7A). Thus, after displaying the one or more views of the first software application, the display of the one or more views of the second software application is stopped. In some embodiments, replacing the display of the one or more views of the second software application with the display of the one or more views of the first software application includes displaying the one or more views of the first software application without simultaneously displaying a view corresponding to any other software application (fig. 7A).
In some embodiments, the electronic device executes (832) the first software application, the second software application, and the third software application simultaneously. In some embodiments, processing the sequence of touch inputs using the gesture processors corresponding to the respective matched gesture recognizers includes replacing one or more displayed views of the second software application with one or more views of the third software application. For example, in response to the multi-tap gesture, the electronic device replaces the display of the web browser application view 712-6 with the display of the weather application view 712-5 (FIGS. 7J-7K). In some embodiments, replacing the one or more displayed views of the second software application with the one or more views of the third software application includes displaying the one or more views of the third software application without simultaneously displaying views corresponding to any other software applications. In some embodiments, the third software application immediately follows the second software application in the sequence of open applications.
In some embodiments, processing the touch input sequence using gesture processors corresponding to respective matched gesture recognizers includes launching a setup application. For example, in response to a ten-finger tap gesture, the electronic device launches a setup application.
Note that the details of the above-described process with respect to method 800 also apply in a similar manner to method 900 described below. For the sake of brevity, these details will not be repeated below.
FIGS. 9A-9C are flow diagrams illustrating an event recognition method 900 according to some embodiments. Method 900 is performed in an electronic device with a touch-sensitive display (902). The electronic device is configured to execute at least a first software application and a second software application. The first software application includes a first set of one or more gesture recognizers and the second software application includes one or more views and a second set of one or more gesture recognizers. Each gesture recognizer has a corresponding gesture processor. In some embodiments, the first set of one or more gesture recognizers is different from the second set of one or more gesture recognizers.
Method 900 allows a user to use gestures to control hidden open applications (e.g., a first software application) that are not currently displayed on a display of an electronic device, such as a background application, a suspended application, or a dormant application. Thus, the user may perform an operation that is not provided by the application currently displayed on the display of the electronic device (e.g., the second software application) but is provided by one of the currently open applications (e.g., using a gesture to display a home screen or switch to the next software application for a hidden application launcher software application).
In some embodiments, the first software application (904) is an application launcher (e.g., a start point). In some embodiments, the first software application is (906) an operating system application. In the following description of method 900, the application launcher serves as an exemplary first software application and the web browser application serves as an exemplary second software application.
The electronic device displays (908) a first set of one or more views (e.g., web browser application view 712-6, fig. 7G). The first set of one or more views includes at least a subset of the one or more views of the second software application. For example, the second software application may have a plurality of application views (e.g., application view 317 of application 133-1, FIG. 3D), and the electronic device displays at least one of the plurality of application views. In some embodiments, the subset includes all of the one or more views of the second software application.
In some embodiments, displaying the first set of one or more views includes (910) displaying the first set of one or more views without displaying any views of the first software application (e.g., web browser application view 712-6, fig. 7G).
According to some embodiments, displaying the first set of one or more views includes (912) displaying the first set of one or more views without displaying views of any other software applications. For example, in FIG. 7G, only one or more views of a web browser application are displayed.
While displaying the first set of one or more views, the electronic device detects (914) a sequence of touch inputs on the touch-sensitive display and determines (920) whether at least one gesture recognizer of the first set of one or more gesture recognizers recognizes a first portion of the one or more touch inputs. For example, while displaying web browser application view 712-6 (FIG. 7G), the device determines whether a gesture recognizer for the application launcher recognizes the first portion of the touch input. The sequence of touch inputs includes a first portion of the one or more touch inputs and a second portion of the one or more touch inputs following the first portion (i.e., the second portion follows the first portion).
In some embodiments, the sequence of touch inputs at least partially overlaps (916) at least one of the one or more displayed views of the second software application. For example, the application launcher receives a first portion of touch input on the web browser application view 712-6 (FIG. 7G), although the application launcher is not displayed.
In some embodiments, prior to determining that at least one gesture recognizer of the first set of one or more gesture recognizers recognizes the first portion of the one or more touch inputs, the electronic device simultaneously transmits (918) the first portion of the one or more touch inputs to the first software application and the second software application. For example, both the application launcher and the web browser application receive touch-down events of finger contacts 707, 709, and 711 (FIG. 7G) before determining that at least one gesture recognizer in the application launcher recognizes the touch-down event.
In accordance with a determination (922, fig. 9B) that at least one gesture recognizer of the first set of one or more gesture recognizers recognized a first portion of one or more touch inputs, the electronic device transmits (924) a touch input sequence to the first software application without transmitting the touch input sequence to the second software application, determines (926) whether the at least one gesture recognizer of the first set of one or more gesture recognizers recognized the touch input sequence, and processes (928) the touch input sequence using the at least one gesture recognizer of the first set of one or more gesture recognizers that recognized the touch input sequence in accordance with the determination that the at least one gesture recognizer of the first set of one or more gesture recognizers recognized the touch input sequence.
For example, when touch-down and touch-movement of the three finger contacts 707, 709, and 711 are detected on the touch screen 156 (FIG. 7G), the electronic device determines that at least the three-finger up-swipe gesture recognizer of the application launcher recognizes the touch inputs. Thereafter, the electronic device transmits subsequent touch events (e.g., lift-off of finger contacts 707, 709, and 711) to the application launcher without transmitting them to the web browser application. The electronic device further determines that the three-finger up-swipe gesture recognizer recognizes the sequence of touch inputs, and processes the sequence of touch inputs using a gesture processor corresponding to the three-finger up-swipe gesture recognizer.
In some embodiments, processing the sequence of touch inputs using at least one gesture recognizer of the first set of one or more gesture recognizers includes (930) displaying one or more views of the first software application. For example, in response to detecting the multi-finger pinch gesture (fig. 7R), the electronic device displays the home screen 708 (fig. 7A).
In some embodiments, processing the touch input sequence using at least one gesture recognizer of the first set of one or more gesture recognizers includes (932) replacing the display of the first set of one or more views with the display of the one or more views of the first software application (e.g., displaying the home screen 708, fig. 7A, the home screen 708 being part of the application launcher software application).
In some embodiments, the electronic device executes the first software application, the second software application, and the third software application simultaneously; and processing the sequence of touch inputs using at least one gesture recognizer of the first set of one or more gesture recognizers includes (934) replacing the first set of one or more views with one or more views of the third software application. In some embodiments, replacing the first set of one or more views with the one or more views of the third software application includes displaying the one or more views of the third software application without simultaneously displaying views corresponding to any other software applications. For example, in response to the multi-tap gesture, the electronic device replaces the display of the web browser application view 712-6 with the display of the weather application view 712-5 (FIGS. 7J-7K).
In some embodiments, processing the sequence of touch inputs using at least one gesture recognizer of the first set of one or more gesture recognizers includes (936), displaying a set of open application icons in a first predetermined area of the touch-sensitive display that corresponds to at least some of the plurality of concurrently open applications, and concurrently displaying at least a subset of the first set of one or more views. For example, in fig. 7H, the application icon 5004 in the predetermined area 716 corresponds to an application that is simultaneously open for the electronic device. In some embodiments, the application icons 5004 in the predetermined area 716 are displayed according to the sequence of open applications. In FIG. 7H, the electronic device simultaneously displays the predefined area 716 and a subset of the web browser application view 712-6.
In accordance with a determination (938, fig. 9C) that none of the first set of one or more gesture recognizers recognize the first portion of the one or more touch inputs, the electronic device transmits (940) a touch input sequence to the second software application, determines (942) whether at least one of the second set of one or more gesture recognizers recognizes the touch input sequence, and processes (944) the touch input sequence using at least one of the second set of one or more gesture recognizers that recognizes the touch input sequence in accordance with the determination that the at least one of the second set of one or more gesture recognizers recognizes the touch input sequence.
For example, when the first portion of the one or more touch inputs is a tap gesture (e.g., 703, FIG. 7G) and no gesture recognizer in the application launcher recognizes the tap gesture, the electronic device transmits the tap gesture to the web browser application and determines whether at least one gesture recognizer of the web browser application recognizes the tap gesture. When the web browser application (or a gesture recognizer of the web browser application) recognizes the tap gesture 703 on the bookmark icon, the electronic device processes the tap gesture 703 using a corresponding gesture processor.
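The branching of method 900 can be sketched end to end: if a first-set recognizer recognizes the first portion, the sequence goes only to the first application; otherwise it goes to the second application, and a null operation results when nothing matches. All names and the event encoding below are assumptions for illustration:

```python
# Hypothetical sketch of method 900's determination branches and handler
# dispatch; sub-events are encoded as (phase, finger_count) tuples.

class Recognizer:
    def __init__(self, name, first_portion_pred, sequence_pred):
        self.name = name
        self.first_portion_pred = first_portion_pred
        self.sequence_pred = sequence_pred

def handle(sequence, first_set, second_set, handlers):
    first_portion = sequence[0]
    if any(r.first_portion_pred(first_portion) for r in first_set):
        target = first_set      # deliver only to the first application
    else:
        target = second_set     # otherwise deliver to the second application
    for r in target:
        if r.sequence_pred(sequence):
            return handlers[r.name]()
    return None                 # no recognizer matched: null operation

swipe_up = Recognizer("3-finger-swipe-up",
                      lambda fp: fp == ("down", 3),
                      lambda seq: seq[-1] == ("up", 3))
tap = Recognizer("bookmark-tap",
                 lambda fp: fp == ("down", 1),
                 lambda seq: seq[0] == ("down", 1))
handlers = {"3-finger-swipe-up": lambda: "show-home-screen",
            "bookmark-tap": lambda: "show-bookmarks"}

# A three-finger sequence is handled by the launcher; a single tap falls
# through to the web browser application.
result_swipe = handle([("down", 3), ("move", 3), ("up", 3)],
                      [swipe_up], [tap], handlers)
result_tap = handle([("down", 1), ("up", 1)], [swipe_up], [tap], handlers)
```

The early branch on `first_portion` mirrors determination (920)/(922): routing is decided before the whole sequence is examined.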
FIGS. 10A-10B are flow diagrams illustrating an event recognition method according to some embodiments. Note that the details of the above-described processes with respect to methods 600, 800, and 900 also apply in a similar manner to method 1000 described below. For the sake of brevity, these details will not be repeated below.
Method 1000 is performed in an electronic device having an internal state (e.g., device/global internal state 134, fig. 1C) (1002). The electronic device is configured to execute software that includes a view hierarchy having a plurality of views.
In method 1000, at least one gesture recognizer has a plurality of gesture definitions. This enables the gesture recognizer to operate in distinct operating modes. For example, the device may have a normal operating mode and an auxiliary operating mode. In the normal operating mode, a next application gesture is used to move between applications, and the next application gesture is defined as a three-finger left-swipe gesture. In the auxiliary operating mode, the three-finger left-swipe gesture is used to perform a different function. Thus, a gesture other than the three-finger left swipe is needed in the auxiliary operating mode to correspond to the next application gesture (e.g., a four-finger left-swipe gesture in the auxiliary operating mode). By associating multiple gesture definitions with the next application gesture, the device is able to select one of the gesture definitions for the next application gesture based on the current operating mode. This provides flexibility in using the gesture recognizer in different operating modes. In some embodiments, a plurality of gesture recognizers with a plurality of gesture definitions are adjusted based on the operating mode (e.g., a gesture performed with three fingers in the normal operating mode is performed with four fingers in the auxiliary operating mode).
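A gesture recognizer holding two definitions and selecting one from the device's internal state can be sketched as follows. The class, mode names, and definition format are illustrative assumptions:

```python
# Sketch of a recognizer with mode-dependent gesture definitions; the active
# definition is chosen from a (hypothetical) device/global internal state.

NORMAL, SECONDARY = "normal", "secondary"

class NextAppGestureRecognizer:
    definitions = {
        NORMAL: {"fingers": 3, "direction": "left"},
        SECONDARY: {"fingers": 4, "direction": "left"},
    }

    def __init__(self, device_state):
        self.device_state = device_state  # stand-in for internal state 134

    def active_definition(self):
        return self.definitions[self.device_state.get("mode", NORMAL)]

    def recognizes(self, fingers, direction):
        d = self.active_definition()
        return fingers == d["fingers"] and direction == d["direction"]

state = {"mode": NORMAL}
recognizer = NextAppGestureRecognizer(state)
assert recognizer.recognizes(3, "left")   # three-finger left swipe matches
state["mode"] = SECONDARY
assert recognizer.recognizes(4, "left")   # now four fingers are required
```

Because the definition is looked up per event rather than fixed at construction, changing the internal state immediately changes which gesture the recognizer responds to.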
In some embodiments, the internal state includes (1016) one or more settings for the secondary mode of operation (e.g., the internal state indicates whether the device is operating in the secondary mode of operation).
In some embodiments, the software is (1018) or includes an application launcher (e.g., start point).
In some embodiments, the software is (1020) or includes an operating system application (e.g., an operating system-integrated application of the device).
The electronic device displays (1004) one or more views in a hierarchy of views.
The electronic device executes (1006) the one or more software elements. Each software element is associated with a particular view (e.g., application 133-1 has one or more application views 317, fig. 3D), and each particular view includes one or more event recognizers (e.g., event recognizer 325, fig. 3D). Each event recognizer has one or more event definitions based on one or more sub-events and an event handler (e.g., gesture definition 3035, and a reference to a corresponding event handler in event delivery information 3039, fig. 3D). The event handler specifies an action for the target and is configured to send the action to the target in response to the event recognizer detecting an event corresponding to a particular event definition of the one or more event definitions (e.g., an event definition selected from the one or more event definitions when the event recognizer has multiple event definitions or a unique event definition when the event recognizer has only one event definition).
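The structure just described, in which an event recognizer pairs event definitions with an event handler that sends an action to a target, can be sketched minimally. Class and method names here are invented for illustration:

```python
# Hypothetical sketch: an event recognizer holds event definitions (here,
# tuples of sub-event names) and, on a match, sends its action to its target.

class EventRecognizer:
    def __init__(self, definitions, action, target):
        self.definitions = definitions  # set of sub-event sequences
        self.action = action            # name of the target method to invoke
        self.target = target            # object that receives the action

    def process(self, sub_events):
        if tuple(sub_events) in self.definitions:
            getattr(self.target, self.action)()  # send action to target
            return True
        return False

class Target:
    def __init__(self):
        self.activated = False

    def on_tap(self):
        self.activated = True

target = Target()
recognizer = EventRecognizer({("touch-down", "touch-up")}, "on_tap", target)
recognizer.process(["touch-down", "touch-up"])
assert target.activated
```

The handler fires only when the detected sub-event sequence matches one of the recognizer's definitions, matching the condition stated above.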
The electronic device detects (1008) one or more sub-event sequences.
The electronic device identifies (1010) one of the views of the view hierarchy as a hit view. The hit view establishes which views in the view hierarchy are actively involved views.
The electronic device transmits (1012) the respective sub-event to an event recognizer for each actively involved view in the view hierarchy. In some embodiments, the one or more actively involved views in the view hierarchy include the hit view. In some embodiments, the one or more actively involved views in the view hierarchy include a default view (e.g., the application launcher's home screen 708).
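Hit-view identification and the set of actively involved views can be sketched over a toy view hierarchy. The `View` class, frame format, and hit rule below are assumptions; this is not the actual view system:

```python
# Sketch: find the deepest view containing a touch point (the hit view), and
# treat the hit view plus its ancestors as the actively involved views.

class View:
    def __init__(self, name, frame, parent=None):
        self.name, self.frame, self.parent = name, frame, parent
        self.children = []
        if parent:
            parent.children.append(self)

    def contains(self, point):
        x, y, w, h = self.frame  # frame = (x, y, width, height)
        return x <= point[0] < x + w and y <= point[1] < y + h

def hit_view(view, point):
    """Deepest view in the hierarchy that contains the touch point."""
    for child in view.children:
        if child.contains(point):
            return hit_view(child, point)
    return view

def actively_involved(view):
    """The hit view and all of its ancestors."""
    while view:
        yield view
        view = view.parent

root = View("root", (0, 0, 100, 100))
content = View("content", (0, 0, 100, 80), parent=root)
button = View("button", (10, 10, 20, 20), parent=content)

hv = hit_view(root, (15, 15))  # lands on the button
```

Sub-events are then delivered to the event recognizers of every view yielded by `actively_involved(hv)`, not only the hit view itself.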
At least one event recognizer for views actively involved in the view hierarchy has (1014) a plurality of event definitions and selects one of the plurality of event definitions based on an internal state of the electronic device. For example, event recognizer 325-1 has multiple gesture definitions (e.g., 3037-1 and 3037-2, FIG. 3D). In some embodiments, event recognizer 325-1 selects one of the plurality of gesture definitions in event recognizer 325-1 based on one or more values in device/global internal state 134 (FIG. 1C). Then, at least one event recognizer processes the respective sub-event before processing the next sub-event in the sequence of sub-events according to the selected event definition. In some embodiments, each of the two or more event recognizers for a view actively involved in the view hierarchy has a plurality of event definitions, and one of the plurality of event definitions is selected based on the internal state of the electronic device. In such embodiments, at least one of the two or more event recognizers processes the respective sub-event before processing the next sub-event in the sequence of sub-events according to the selected event definition.
FIGS. 7J-7K illustrate a next application gesture to begin displaying an application view of a next application. In some embodiments, the application launcher includes a next application gesture recognizer that includes a gesture definition that matches a three-finger left-swipe gesture. For purposes of this example, assume that the next application gesture recognizer also includes a gesture definition that lists gestures corresponding to a four-finger left-swipe gesture. When one or more values in the device/global internal state 134 are set to default values, the next application gesture recognizer uses a three-finger left-swipe gesture definition, rather than a four-finger left-swipe gesture definition. When one or more values in the device/global internal state 134 are modified (e.g., by using the auxiliary module 127, FIG. 1C), the next application gesture recognizer uses the four-finger left-swipe gesture definition, rather than the three-finger left-swipe gesture definition. Thus, in this example, a four-finger left-swipe gesture begins to display the application view of the next application when one or more values in the device/global internal state 134 are modified.
FIGS. 7R-7S illustrate a home screen gesture that, in response to detecting the pinch gesture, initiates displaying the web browser application view 712-6 at a reduced scale and displaying at least a portion of the home screen 708. Based on the device/global internal state 134 and the gesture definitions in the home screen gesture recognizer, a four-finger pinch gesture, a three-finger pinch gesture, or any other suitable gesture may be used to initiate the display of the web browser application view 712-6 at a reduced scale and the display of at least a portion of the home screen 708.
In some embodiments, the plurality of event definitions includes (1020) a first event definition corresponding to a first swipe gesture having a first number of fingers and a second event definition corresponding to a second swipe gesture having a second number of fingers different from the first number of fingers. For example, the plurality of event definitions for the respective gesture recognizer can include a three-finger swipe gesture and a four-finger swipe gesture.
In some embodiments, the plurality of event definitions includes a first event definition corresponding to a first gesture of a first type having a first number of fingers and a second event definition corresponding to a second gesture of the first type having a second number of fingers different from the first number of fingers (e.g., a one-finger tap gesture and a two-finger tap gesture, a two-finger pinch gesture and a three-finger pinch gesture, etc.).
In some embodiments, the plurality of event definitions includes a first event definition corresponding to a first gesture and a second event definition corresponding to a second gesture different from the first gesture (e.g., a swipe gesture and a pinch gesture, a swipe gesture and a tap gesture, etc.).
In some embodiments, respective ones of the plurality of event definitions are selected (1022) for respective event recognizers based on an internal state of the electronic device and a determination (made by the electronic device) that the respective event definitions do not correspond to event definitions of any event recognizers other than the respective event recognizer for the view actively involved.
For example, a respective gesture recognizer may have two event definitions: a first event definition corresponding to a three-finger left-swipe gesture typically used for the normal operating mode, and a second event definition corresponding to a four-finger left-swipe gesture typically used for the secondary operating mode. When the internal state of the electronic device is set in a manner that causes the electronic device to operate in the secondary operating mode, the electronic device determines whether the four-finger left-swipe gesture of the second event definition is used by any other event recognizer for the actively involved view. If no other event recognizer for the actively involved view uses the four-finger left-swipe gesture, then the four-finger left-swipe gesture is selected for the respective gesture recognizer in the secondary operating mode. On the other hand, if another event recognizer for the actively involved view uses the four-finger left-swipe gesture, then the three-finger left-swipe gesture is used for the respective gesture recognizer even in the secondary operating mode. This prevents two or more gesture recognizers from responding undesirably to the same gesture.
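The conflict check described above amounts to preferring the mode-specific definition unless another recognizer already uses it. A minimal sketch, with invented names and string-encoded definitions:

```python
# Sketch: select a recognizer's event definition from the operating mode,
# falling back to the normal-mode definition if the preferred one is already
# used by another event recognizer for the actively involved view.

def select_definition(recognizer, mode, other_definitions):
    """recognizer: dict mapping mode -> gesture definition;
    other_definitions: definitions used by the other recognizers."""
    preferred = recognizer[mode]
    if preferred in other_definitions:
        # Avoid two recognizers responding to the same gesture.
        return recognizer["normal"]
    return preferred

first = {"normal": "3-finger-left-swipe", "secondary": "4-finger-left-swipe"}

# No other recognizer uses the four-finger swipe: select it in secondary mode.
choice_free = select_definition(first, "secondary", {"tap"})
# Another recognizer already uses it: keep the three-finger definition.
choice_taken = select_definition(first, "secondary", {"4-finger-left-swipe"})
```

The fallback branch is what keeps the four-finger gesture unambiguous when several recognizers could otherwise claim it.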
In some embodiments, a respective event definition of the plurality of event definitions is selected for a respective event recognizer based on an internal state of the electronic device and a determination (made by the electronic device) that the respective event definition does not correspond to an event definition of any event recognizer other than the respective event recognizer, including event recognizers for the views actively involved and any other views.
In some embodiments, each of the two or more event recognizers for the views actively involved in the view hierarchy has (1024) a respective plurality of event definitions, one respective event definition of the respective plurality of event definitions being selected for one respective event recognizer in accordance with the internal state of the electronic device and a determination (made by the electronic device) that the respective event definition does not correspond to any event definition selected for any event recognizer having two or more event definitions other than the respective event recognizer.
For example, a view that is actively involved may have a first gesture recognizer and a second gesture recognizer. In this example, the first gesture recognizer has: a first event definition corresponding to a three-finger left-swipe gesture typically used for the normal operating mode, and a second event definition corresponding to a four-finger left-swipe gesture typically used for the secondary operating mode. The second gesture recognizer has: a third event definition corresponding to a two-finger left-swipe gesture typically used for the normal operating mode, and a fourth event definition corresponding to a four-finger left-swipe gesture typically used for the secondary operating mode. When the internal state of the electronic device is set in a manner that causes the electronic device to operate in the secondary operating mode, the electronic device determines whether the four-finger left-swipe gesture satisfying the second event definition is selected for any other event recognizer having two or more event definitions (e.g., the second gesture recognizer). If the four-finger left-swipe gesture is not selected for any other event recognizer having two or more event definitions, then the four-finger left-swipe gesture is selected for the first gesture recognizer in the secondary operating mode. As a result, the four-finger left-swipe gesture is not selected for the second gesture recognizer, because it has already been selected for the first gesture recognizer. Instead, the two-finger left-swipe gesture is selected for the second gesture recognizer, because the two-finger left-swipe gesture is not selected for any other gesture recognizer having two or more event definitions, including the first gesture recognizer. In another example, the actively involved view has the first gesture recognizer and the third gesture recognizer but not the second gesture recognizer.
The third gesture recognizer has a third event definition (corresponding to a two-finger left-swipe gesture) that is typically used for the normal operating mode, and a fifth event definition corresponding to a three-finger left-swipe gesture typically used for the secondary operating mode. In the secondary operating mode, the three-finger left-swipe gesture can be selected for the third gesture recognizer because the three-finger left-swipe gesture is not selected for any other gesture recognizer having two or more event definitions.
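The multi-recognizer assignment in these examples can be sketched as a greedy pass over the recognizers, each taking its mode-specific definition unless it is already taken. Names and the fallback rule are illustrative assumptions:

```python
# Sketch: assign mode-specific definitions across several recognizers so
# that no definition is selected for two recognizers; a recognizer whose
# preferred definition is taken falls back to its normal-mode definition.

def assign(recognizers, mode):
    taken, result = set(), {}
    for name, defs in recognizers:
        choice = defs.get(mode, defs["normal"])
        if choice in taken:
            choice = defs["normal"]  # avoid duplicating a selected definition
        taken.add(choice)
        result[name] = choice
    return result

recognizers = [
    ("first",  {"normal": "3-finger-left", "secondary": "4-finger-left"}),
    ("second", {"normal": "2-finger-left", "secondary": "4-finger-left"}),
]
assignment = assign(recognizers, "secondary")
# first claims 4-finger-left; second falls back to 2-finger-left
```

This reproduces the example above: because the first recognizer claims the four-finger swipe, the second recognizer keeps its two-finger definition even in the secondary operating mode.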
Although the above examples are described with respect to a multi-finger left-swipe gesture, the above methods are applicable to any direction of swipe gesture (e.g., a right-swipe gesture, an up-swipe gesture, a down-swipe gesture, and/or any oblique-swipe gesture) or any other kind of gesture (e.g., a tap gesture, a pinch gesture, a fan-out gesture, etc.).
In some embodiments, processing the respective sub-event according to the selected event definition includes (1026) displaying one or more views of a first software application that is different from the software that includes the view hierarchy (e.g., simultaneously displaying at least a portion of the user interface 712-6, which includes one or more views of the software, and a portion of the home screen 708, FIG. 7S).
In some embodiments, at least one event recognizer processes (1028) the respective sub-event by replacing the display of one or more views of the view hierarchy with the display of one or more views of a first software application (e.g., the home screen 708, FIG. 7A) that is different from the software comprising the view hierarchy.
In some embodiments, at least one event recognizer processes (1030) respective sub-events by: displaying a set of open application icons corresponding to at least some of the plurality of concurrently open applications in a first predetermined area of a display in the electronic device; and simultaneously displaying at least a subset of one or more views of the view hierarchy (e.g., open application icon 5004 and at least a portion of user interface 712-6, figure 7H). For example, in response to a three-finger up-swipe gesture in the normal operating mode and a four-finger up-swipe gesture in the auxiliary operating mode, the electronic device simultaneously displays the set of open application icons and at least a subset of one or more views of the view hierarchy.
Fig. 11 illustrates a functional block diagram of an electronic device 1100 configured in accordance with the inventive principles described above, in accordance with some embodiments. The functional blocks of the device can be implemented by hardware, software or a combination of software and hardware to carry out the principles of the invention. Those skilled in the art will appreciate that the functional blocks depicted in FIG. 11 may be combined or divided into sub-modules to implement the principles of the present invention as described above. Thus, the description herein may support any possible combination, division, or further definition of the functional blocks described herein.
As shown in Figure 11, the electronic device 1100 includes a touch-sensitive display unit 1102 configured to receive touch input, and a processing unit 1106 coupled to the touch-sensitive display unit 1102. In some embodiments, the processing unit 1106 includes an execution unit 1108, a display enabling unit 1110, a detection unit 1112, a transmission unit 1114, a determination unit 1116, and a touch input processing unit 1118.
The processing unit 1106 is configured to execute at least a first software application and a second software application (e.g., using the execution unit 1108). The first software application includes a first set of one or more gesture recognizers, and the second software application includes one or more views and a second set of one or more gesture recognizers. Each gesture recognizer has a corresponding gesture processor. The processing unit 1106 is configured to enable display of at least a subset of the one or more views of the second software application (e.g., on the touch-sensitive display unit 1102 using the display enabling unit 1110). The processing unit 1106 is configured to, while displaying at least a subset of the one or more views of the second software application, detect a sequence of touch inputs on the touch-sensitive display unit 1102 (e.g., using the detection unit 1112). The sequence of touch inputs includes a first portion of one or more touch inputs and a second portion of the one or more touch inputs following the first portion. The processing unit 1106 is configured to, during a first phase of detecting the sequence of touch inputs: transmit the first portion of the one or more touch inputs to the first software application and the second software application (e.g., using the transmission unit 1114); identify, from the gesture recognizers in the first set, one or more matching gesture recognizers that recognize the first portion of the one or more touch inputs (e.g., using the determination unit 1116); and process the first portion of the one or more touch inputs with one or more gesture processors corresponding to the one or more matching gesture recognizers (e.g., using the touch input processing unit 1118).
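The first-phase delivery just described can be sketched as follows: the first portion of the touch inputs reaches the gesture recognizers of both applications, but matching recognizers are identified only from the first application's set. This is an illustrative sketch under assumed names (`GestureRecognizer`, `deliver_first_portion`), not the claimed implementation.

```python
# Hypothetical sketch of the first phase: deliver to both recognizer sets,
# identify matches only from the first application's set.

class GestureRecognizer:
    def __init__(self, name, expected_prefix):
        self.name = name
        self.expected_prefix = expected_prefix
        self.received = []          # every sub-event delivered to this recognizer

    def feed(self, events):
        self.received.extend(events)

    def matches(self, events):
        return list(events) == self.expected_prefix

def deliver_first_portion(first_portion, first_set, second_set):
    # Both applications' recognizers receive the first portion...
    for recognizer in first_set + second_set:
        recognizer.feed(first_portion)
    # ...but only the first set is searched for matching recognizers.
    return [r for r in first_set if r.matches(first_portion)]

launcher = [GestureRecognizer("four-finger-swipe", ["touch-down-4"])]
app = [GestureRecognizer("tap", ["touch-down-1"])]

matched = deliver_first_portion(["touch-down-4"], launcher, app)
assert [m.name for m in matched] == ["four-finger-swipe"]
assert app[0].received == ["touch-down-4"]   # the second app also received phase 1
```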
In some embodiments, the processing unit 1106 is configured to detect a sequence of touch inputs (e.g., using the detection unit 1112) when a touch input in the first portion of the one or more touch inputs at least partially overlaps at least one of the displayed views of the second software application.
In some embodiments, the processing unit 1106 is configured to enable display of at least a subset of the one or more views of the second software application without displaying any view of the first software application (e.g., on the touch-sensitive display unit 1102 using the display enabling unit 1110).
In some embodiments, the processing unit 1106 is configured to enable display of at least a subset of the one or more views of the second software application without displaying views of any other applications (e.g., on the touch-sensitive display unit 1102 using the display enabling unit 1110).
In some embodiments, the processing unit 1106 is configured to, after the first phase, during a second phase of detecting the sequence of touch inputs: transmit the second portion of the one or more touch inputs to the first software application without transmitting the second portion of the one or more touch inputs to the second software application (e.g., using the transmission unit 1114); identify, from the one or more matching gesture recognizers, a second matching gesture recognizer that recognizes the sequence of touch inputs (e.g., using the determination unit 1116); and process the sequence of touch inputs using gesture processors corresponding to the respective matching gesture recognizers (e.g., using the touch input processing unit 1118).
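A sketch of this second phase, under hypothetical names: the second portion is delivered only to the first application's matching recognizers, and a recognizer whose accumulated events equal the full sequence is treated as recognizing it. This is an assumption-laden illustration, not the actual implementation.

```python
# Hypothetical second phase: deliver the remaining events only to the
# recognizers that matched in phase one, then pick the one that has now
# seen the complete sequence.

def second_phase(second_portion, matched_recognizers, full_sequence):
    delivered_to = []
    for r in matched_recognizers:          # only the first application's matches
        r["seen"] = r["seen"] + second_portion
        delivered_to.append(r["name"])
    winners = [r for r in matched_recognizers if r["seen"] == full_sequence]
    return delivered_to, winners

matched = [{"name": "swipe-up", "seen": ["down"]}]
delivered, winners = second_phase(["move", "up"], matched, ["down", "move", "up"])
assert delivered == ["swipe-up"]
assert [w["name"] for w in winners] == ["swipe-up"]
```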
In some embodiments, the processing unit 1106 is configured to process a sequence of touch inputs using gesture processors corresponding to respective matched gesture recognizers by enabling display of one or more views of the first software application (e.g., on the touch-sensitive display unit 1102 using the display enabling unit 1110).
In some embodiments, the processing unit 1106 is configured to process a sequence of touch inputs using gesture processors corresponding to respective matched gesture recognizers by replacing the display of the one or more views of the second software application with the display of the one or more views of the first software application (e.g., on the touch-sensitive display unit 1102 using the display enabling unit 1110).
In some embodiments, the processing unit 1106 is configured to: concurrently executing the first software application, the second software application, and the third software application (e.g., using execution unit 1108); and processing the touch input sequence using gesture processors corresponding to the respective matched gesture recognizers by replacing the one or more displayed views of the second software application with one or more views of the third software application (e.g., on the touch-sensitive display unit 1102 using the display enabling unit 1110).
In some embodiments, the processing unit 1106 is configured to: enable display, in a first predetermined area of the touch-sensitive display unit 1102, of a set of open application icons corresponding to at least some of a plurality of concurrently open applications (e.g., using the display enabling unit 1110); and enable concurrent display of at least a subset of the one or more views of the second software application (e.g., using the display enabling unit 1110).
In some embodiments, the first software application is an application launcher.
In some embodiments, the first software application is an operating system application.
Figure 12 illustrates a functional block diagram of an electronic device 1200 configured in accordance with the principles of the invention described above, according to some embodiments. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the invention. Those skilled in the art will appreciate that the functional blocks depicted in Figure 12 may be combined or divided into sub-modules to implement the principles of the invention as described above. Thus, the description herein may support any possible combination, division, or further definition of the functional blocks described herein.
As shown in Figure 12, the electronic device 1200 includes a touch-sensitive display unit 1202 configured to receive touch input, and a processing unit 1206 coupled to the touch-sensitive display unit 1202. In some embodiments, the processing unit 1206 includes an execution unit 1208, a display enabling unit 1210, a detection unit 1212, a determination unit 1214, a transmission unit 1216, and a touch input processing unit 1218.
The processing unit 1206 is configured to execute at least a first software application and a second software application (e.g., using the execution unit 1208). The first software application includes a first set of one or more gesture recognizers, and the second software application includes one or more views and a second set of one or more gesture recognizers. Each gesture recognizer has a corresponding gesture processor. The processing unit 1206 is configured to enable display of a first set of one or more views (e.g., using the display enabling unit 1210). The first set of one or more views includes at least a subset of the one or more views of the second software application. The processing unit 1206 is configured to detect a sequence of touch inputs on the touch-sensitive display unit (e.g., using the detection unit 1212) while displaying the first set of one or more views. The sequence of touch inputs includes a first portion of one or more touch inputs and a second portion of the one or more touch inputs following the first portion. The processing unit 1206 is configured to determine whether at least one gesture recognizer of the first set of one or more gesture recognizers recognizes the first portion of the one or more touch inputs (e.g., using the determination unit 1214). The processing unit 1206 is configured to, in accordance with a determination that at least one gesture recognizer of the first set of one or more gesture recognizers recognizes the first portion of the one or more touch inputs: transmit the sequence of touch inputs to the first software application without transmitting the sequence of touch inputs to the second software application (e.g., using the transmission unit 1216); and determine whether at least one gesture recognizer of the first set of one or more gesture recognizers recognizes the sequence of touch inputs (e.g., using the determination unit 1214).
The processing unit 1206 is configured to process the touch input sequence using at least one gesture recognizer of the first set of one or more gesture recognizers that recognizes the touch input sequence (e.g., using the touch input processing unit 1218) in accordance with the determination that the at least one gesture recognizer of the first set of one or more gesture recognizers recognized the touch input sequence. The processing unit 1206 is configured to, in accordance with a determination that none of the first set of one or more gesture recognizers recognizes the first portion of the one or more touch inputs: transmit the sequence of touch inputs to the second software application (e.g., using the transmit unit 1216); and determine whether at least one gesture recognizer of the second set of one or more gesture recognizers recognizes the touch input sequence (e.g., using determining unit 1214). Processing unit 1206 is configured to, in accordance with a determination that at least one gesture recognizer of the second set of one or more gesture recognizers recognized the touch input sequence, process the touch input sequence using the at least one gesture recognizer of the second set of one or more gesture recognizers that recognized the touch input sequence (e.g., using touch input processing unit 1218).
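The conditional routing described in the two paragraphs above can be sketched as a prefix test with a fallback: the sequence goes to the first application only if one of its recognizers recognizes the first portion; otherwise the whole sequence falls through to the second application. The dictionaries and event names below are illustrative assumptions, not the actual gesture definitions.

```python
# Hypothetical sketch: route a touch-input sequence to the first or second
# application's recognizer set based on whether the first set recognizes
# the first portion of the sequence.

def route(sequence, first_set, second_set):
    first_portion = sequence[:1]

    def recognizes_prefix(definition):
        return definition[:len(first_portion)] == first_portion

    if any(recognizes_prefix(d) for d in first_set.values()):
        candidates = first_set       # deliver only to the first application
    else:
        candidates = second_set      # fall through to the second application
    # Return the names of recognizers whose full definition matches the sequence.
    return [name for name, d in candidates.items() if d == sequence]

launcher = {"home-gesture": ["down-4", "move-up", "lift"]}
app_recs = {"tap": ["down-1", "lift"]}

# A four-finger sequence is claimed by the launcher's recognizer set.
assert route(["down-4", "move-up", "lift"], launcher, app_recs) == ["home-gesture"]
# A single-finger tap is not recognized by the launcher, so it falls through.
assert route(["down-1", "lift"], launcher, app_recs) == ["tap"]
```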
In some embodiments, the sequence of touch inputs at least partially overlaps at least one of the one or more displayed views of the second software application.
In some embodiments, the processing unit 1206 is configured to enable display of the first set of one or more views without displaying any view of the first software application (e.g., on the touch-sensitive display unit 1202 using the display enabling unit 1210).
In some embodiments, the processing unit 1206 is configured to enable display of the first set of one or more views without displaying views of any other software applications (e.g., on the touch-sensitive display unit 1202 using the display enabling unit 1210).
In some embodiments, prior to determining that at least one gesture recognizer of the first set of one or more gesture recognizers recognizes the first portion of the one or more touch inputs, the processing unit 1206 is configured to simultaneously transmit the first portion of the one or more touch inputs to the first software application and the second software application (e.g., using the transmitting unit 1216).
In some embodiments, the first software application is an application launcher.
In some embodiments, the first software application is an operating system application.
In some embodiments, the processing unit 1206 is configured to process the sequence of touch inputs using at least one gesture recognizer of the first set of one or more gesture recognizers by enabling display of one or more views of the first software application (e.g., on the touch-sensitive display unit 1202 using the display enabling unit 1210).
In some embodiments, the processing unit 1206 is configured to process the sequence of touch inputs using at least one gesture recognizer of the first set of one or more gesture recognizers by replacing the display of the first set of one or more views with the display of the one or more views of the first software application (e.g., on the touch-sensitive display unit 1202 using the display enabling unit 1210).
In some embodiments, the processing unit 1206 is configured to execute the first software application, the second software application, and the third software application simultaneously (e.g., using the execution unit 1208). The processing unit 1206 is configured to process the sequence of touch inputs using at least one gesture recognizer of the first set of one or more gesture recognizers by replacing the first set of one or more views with one or more views of a third software application (e.g., on the touch-sensitive display unit 1202 using the display enabling unit 1210).
In some embodiments, the processing unit 1206 is configured to: enable display, in a first predetermined area of the touch-sensitive display unit 1202, of a set of open application icons corresponding to at least some of the plurality of concurrently open applications (e.g., using the display enabling unit 1210); and enable concurrent display of at least a subset of the first set of one or more views (e.g., using the display enabling unit 1210).
Figure 13 illustrates a functional block diagram of an electronic device 1300 configured in accordance with the principles of the invention described above, according to some embodiments. The functional blocks of the device can be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the invention. Those skilled in the art will appreciate that the functional blocks depicted in Figure 13 may be combined or divided into sub-modules to implement the principles of the invention as described above. Thus, the description herein may support any possible combination, division, or further definition of the functional blocks described herein.
As shown in Figure 13, the electronic device 1300 includes a display unit 1302 configured to display one or more views; a memory unit 1304 configured to store an internal state; and a processing unit 1306 coupled to the display unit 1302 and the memory unit 1304. In some embodiments, the processing unit 1306 includes an execution unit 1308, a display enabling unit 1310, a detection unit 1312, an identification unit 1314, a transmission unit 1316, and an event/sub-event processing unit 1318. In some embodiments, the processing unit 1306 includes the memory unit 1304.
The processing unit 1306 is configured to: execute software that includes a view hierarchy with a plurality of views (e.g., using the execution unit 1308); enable display of one or more views of the view hierarchy (e.g., on the display unit 1302 using the display enabling unit 1310); and execute one or more software elements (e.g., using the execution unit 1308). Each software element is associated with a particular view, and each particular view includes one or more event recognizers. Each event recognizer has one or more event definitions based on one or more sub-events, and an event handler. The event handler specifies an action for a target and is configured to send the action to the target in response to the event recognizer detecting an event corresponding to a particular event definition of the one or more event definitions. The processing unit 1306 is configured to: detect a sequence of one or more sub-events (e.g., using the detection unit 1312); and identify one of the views of the view hierarchy as a hit view (e.g., using the identification unit 1314). The hit view establishes which views in the view hierarchy are actively involved views. The processing unit 1306 is configured to deliver a respective sub-event to event recognizers for each actively involved view in the view hierarchy (e.g., using the transmission unit 1316). At least one event recognizer for an actively involved view in the view hierarchy has a plurality of event definitions, one of which is selected in accordance with the internal state of the electronic device, and, in accordance with the selected event definition, the at least one event recognizer processes the respective sub-event before processing a next sub-event in the sequence of sub-events (e.g., using the event/sub-event processing unit 1318).
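Hit-view identification as described above can be sketched as a recursive search for the deepest view containing the touch point; the hit view and its ancestors form the actively involved views that receive each sub-event. The `View` class and frame layout below are assumptions for illustration, not the patented structure.

```python
# Hypothetical sketch: the deepest view containing the point is the hit
# view; the path from the root to it gives the actively involved views.

class View:
    def __init__(self, name, frame, children=()):
        self.name, self.frame, self.children = name, frame, list(children)

    def contains(self, point):
        x, y = point
        left, top, right, bottom = self.frame
        return left <= x < right and top <= y < bottom

def hit_view_path(root, point):
    """Return [root, ..., hit view]; the whole path is 'actively involved'."""
    if not root.contains(point):
        return []
    for child in root.children:
        path = hit_view_path(child, point)
        if path:
            return [root] + path
    return [root]   # no child contains the point, so the root is the hit view

button = View("button", (10, 10, 50, 30))
window = View("window", (0, 0, 100, 100), [button])
path = hit_view_path(window, (20, 20))
assert [v.name for v in path] == ["window", "button"]
```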
In some embodiments, the plurality of event definitions includes a first event definition corresponding to a first swipe gesture having a first number of fingers and a second event definition corresponding to a second swipe gesture having a second number of fingers different from the first number of fingers.
In some embodiments, the internal state includes one or more settings for an accessibility operating mode.
In some embodiments, a respective event definition of the plurality of event definitions is selected for a respective event recognizer in accordance with the internal state of the electronic device and a determination that the respective event definition does not correspond to an event definition of any event recognizer, other than the respective event recognizer, for the actively involved views.
In some embodiments, each of two or more event recognizers for the actively involved views in the view hierarchy has a respective plurality of event definitions, and a respective event definition of the respective plurality of event definitions is selected for the respective event recognizer in accordance with the internal state of the electronic device and a determination that the respective event definition does not correspond to any event definition selected for any other event recognizer having two or more event definitions.
In some embodiments, the processing unit 1306 is configured to process the respective sub-events according to the selected event definition by enabling display of one or more views of a first software application (e.g., on the display unit 1302 using the display enabling unit 1310) that is different from the software comprising the view hierarchy.
In some embodiments, the processing unit 1306 is configured to process the respective sub-events by replacing the display of the one or more views of the view hierarchy with the display of the one or more views of the first software application (e.g., on the display unit 1302 using the display enabling unit 1310) that is different from the software comprising the view hierarchy.
In some embodiments, the processing unit 1306 is configured to process the respective sub-events by: enabling display of a set of open application icons (e.g., using the display enabling unit 1310) corresponding to at least some of the plurality of concurrently open applications in a first predetermined area of the display unit 1302; and enabling simultaneous display of at least a subset of one or more views in the view hierarchy (e.g., using the display enabling unit 1310).
In some embodiments, the software is an application launcher.
In some embodiments, the software is an operating system application.
The foregoing description, for purpose of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.
Claims (40)
1. An electronic device, comprising:
a touch-sensitive display configured to receive touch input; and
a processor coupled to the touch-sensitive display and executing at least a first software application and a second software application, the first software application including a first set of one or more gesture recognizers and the second software application including one or more views and a second set of one or more gesture recognizers and wherein each gesture recognizer has a corresponding gesture processor, the processor further configured to:
enabling display of at least a subset of the one or more views of the second software application without displaying any views of the first software application; and
when at least a subset of the one or more views of the second software application are displayed without displaying any views of the first software application:
detecting a sequence of touch inputs on the touch-sensitive display, the sequence of touch inputs including a first portion of one or more touch inputs and a second portion of the one or more touch inputs following the first portion, wherein each of the first portion and the second portion defines one or more changes to the one or more touch inputs; and
during a first phase of detecting the sequence of touch inputs:
transmitting the first portion of one or more touch inputs to the first software application and the second software application;
identifying, from the first set of gesture recognizers, one or more matching gesture recognizers that recognize the first portion of one or more touch inputs; and
processing the first portion of one or more touch inputs using one or more gesture processors corresponding to the one or more matched gesture recognizers.
2. The electronic device of claim 1, wherein the processor is configured to detect the sequence of touch inputs when a touch input in the first portion of one or more touch inputs at least partially overlaps at least one of the displayed views of the second software application.
3. The electronic device of claim 1, wherein the processor is configured to enable display of at least a subset of the one or more views of the second software application without displaying views of any other applications.
4. The electronic device of claim 1, wherein the processor is further configured to:
after the first phase, during a second phase of detecting the sequence of touch inputs:
transmitting the second portion of one or more touch inputs to the first software application without transmitting the second portion of one or more touch inputs to the second software application;
identifying, from the one or more matched gesture recognizers, a second matched gesture recognizer that recognized the sequence of touch inputs; and
processing the sequence of touch inputs using gesture processors corresponding to respective matched gesture recognizers.
5. The electronic device of claim 4, wherein the processor is configured to process the sequence of touch inputs using gesture processors corresponding to respective matching gesture recognizers by enabling display of one or more views of the first software application.
6. The electronic device of claim 4, wherein the processor is configured to process the sequence of touch inputs using gesture processors corresponding to respective matched gesture recognizers by replacing the display of the one or more views of the second software application with the display of the one or more views of the first software application.
7. The electronic device of claim 4, wherein the processor is configured to execute the first software application, the second software application, and a third software application simultaneously; and processing the sequence of touch inputs using gesture processors corresponding to respective matched gesture recognizers by replacing one or more displayed views of the second software application with one or more views of the third software application.
8. The electronic device of claim 4, wherein the processor is configured to:
enabling display, in a first predetermined area of the touch-sensitive display, of a set of open application icons corresponding to at least some of a plurality of concurrently open applications; and
enabling simultaneous display of at least a subset of the one or more views of the second software application.
9. The electronic device of claim 1, wherein the first software application is an application launcher.
10. The electronic device of claim 1, wherein the first software application is an operating system application.
11. An electronic device, comprising:
A touch-sensitive display configured to receive touch input;
a display enabling unit for displaying at least a subset of the one or more views of a second software application without displaying any view of the first software application, the second software application comprising a second set of one or more gesture recognizers and wherein the respective gesture recognizers have corresponding gesture processors; and
a detection unit to detect a sequence of touch inputs on the touch-sensitive display when at least a subset of one or more views of the second software application are displayed without displaying any view of the first software application, the sequence of touch inputs including a first portion of one or more touch inputs and a second portion of the one or more touch inputs following the first portion, wherein each of the first portion and the second portion defines one or more changes to the one or more touch inputs; and
a transmitting unit to transmit the first portion of one or more touch inputs to the first software application and the second software application during a first phase of detecting the sequence of touch inputs, the first software application including a first set of one or more gesture recognizers;
an identifying unit to identify, from among the gesture recognizers of the first set, one or more matching gesture recognizers that recognize the first portion of one or more touch inputs during the first phase of detecting the sequence of touch inputs; and
a touch input processing unit to process the first portion of one or more touch inputs using one or more gesture processors corresponding to the one or more matched gesture recognizers during the first phase of detecting the sequence of touch inputs.
12. The electronic device of claim 11, wherein the detection unit comprises means for detecting the sequence of touch inputs when a touch input in the first portion of one or more touch inputs at least partially overlaps at least one of the displayed views of the second software application.
13. The electronic device of claim 11, comprising:
means for, after the first phase, during a second phase of detecting the sequence of touch inputs, transferring the second portion of one or more touch inputs to the first software application without transferring the second portion of one or more touch inputs to the second software application;
means for identifying, after the first phase, during the second phase of detecting the touch input sequence, from the one or more matching gesture recognizers, a second matching gesture recognizer that recognizes the touch input sequence; and
means for processing the sequence of touch inputs, after the first phase, during the second phase of detecting the sequence of touch inputs, using gesture processors corresponding to respective matched gesture recognizers.
14. The electronic device of claim 11, wherein the touch input processing unit comprises:
means for enabling display of a set of open application icons corresponding to at least some of the plurality of concurrently open applications in a first predetermined area of the touch-sensitive display; and
means for enabling simultaneous display of at least a subset of the one or more views of the second software application.
15. The electronic device of claim 11, wherein the display enabling unit comprises means for enabling display of at least a subset of the one or more views of the second software application without displaying views of any other applications.
16. The electronic device of claim 13, wherein the means for processing the sequence of touch inputs using gesture processors corresponding to respective matching gesture recognizers, after the first phase, during the second phase of detecting the sequence of touch inputs, comprises means for enabling display of one or more views of the first software application.
17. The electronic device of claim 13, wherein the means for processing the touch input sequence using gesture processors corresponding to respective matching gesture recognizers, after the first phase, during the second phase of detecting the touch input sequence, comprises means for replacing display of one or more views of the second software application with display of one or more views of the first software application.
18. The electronic device of claim 13, wherein the first software application, the second software application, and a third software application execute concurrently; and wherein the means for processing the sequence of touch inputs using gesture processors corresponding to respective matched gesture recognizers, after the first phase, during the second phase of detecting the sequence of touch inputs, comprises means for replacing one or more displayed views of the second software application with one or more views of the third software application.
19. The electronic device of claim 11, wherein the first software application is an application launcher.
20. The electronic device of claim 11, wherein the first software application is an operating system application.
21. An information processing apparatus for an electronic device with a touch-sensitive display, comprising:
a display enabling unit for displaying at least a subset of the one or more views of a second software application without displaying any view of the first software application, the second software application comprising a second set of one or more gesture recognizers and wherein the respective gesture recognizers have corresponding gesture processors;
a detection unit to detect a sequence of touch inputs on the touch-sensitive display when at least a subset of one or more views of the second software application are displayed without displaying any view of the first software application, the sequence of touch inputs including a first portion of one or more touch inputs and a second portion of the one or more touch inputs following the first portion, wherein each of the first portion and the second portion defines one or more changes to the one or more touch inputs;
a transmitting unit to transmit the first portion of one or more touch inputs to the first software application and the second software application during a first phase of detecting the sequence of touch inputs, the first software application including a first set of one or more gesture recognizers;
an identifying unit to identify, from among the gesture recognizers of the first set, one or more matching gesture recognizers that recognize the first portion of one or more touch inputs during the first phase of detecting the sequence of touch inputs; and
a touch input processing unit to process the first portion of one or more touch inputs using one or more gesture processors corresponding to the one or more matched gesture recognizers during the first phase of detecting the sequence of touch inputs.
22. The information processing apparatus of claim 21, wherein the detection unit comprises means for detecting the sequence of touch inputs when a touch input in the first portion of one or more touch inputs at least partially overlaps at least one of the displayed views of the second software application.
23. The information processing apparatus according to claim 21, comprising:
means for, after the first phase, during a second phase of detecting the sequence of touch inputs, transferring the second portion of one or more touch inputs to the first software application without transferring the second portion of one or more touch inputs to the second software application;
means for identifying, after the first phase, during the second phase of detecting the touch input sequence, from the one or more matching gesture recognizers, a second matching gesture recognizer that recognizes the touch input sequence; and
means for processing the sequence of touch inputs, after the first phase, during the second phase of detecting the sequence of touch inputs, using gesture processors corresponding to respective matched gesture recognizers.
24. The information processing apparatus of claim 21, wherein the touch input processing unit comprises:
means for enabling display of a set of open application icons corresponding to at least some of a plurality of concurrently open applications in a first predetermined area of the touch-sensitive display; and
means for enabling simultaneous display of at least a subset of the one or more views of the second software application.
25. The information processing apparatus of claim 21, wherein the display enabling unit comprises means for enabling display of at least a subset of the one or more views of the second software application without displaying views of any other applications.
26. The information processing apparatus of claim 23, wherein the means for processing the sequence of touch inputs, after the first phase, during the second phase of detecting the sequence of touch inputs, using a gesture processor corresponding to the respective matching gesture recognizer comprises means for enabling display of one or more views of the first software application.
27. The information processing apparatus of claim 23, wherein the means for processing the sequence of touch inputs, after the first phase, during the second phase of detecting the sequence of touch inputs, using a gesture processor corresponding to the respective matching gesture recognizer comprises means for replacing display of one or more views of the second software application with display of one or more views of the first software application.
28. The information processing apparatus of claim 23, wherein the first software application, the second software application, and a third software application are executed simultaneously; and wherein the means for processing the sequence of touch inputs, after the first phase, during the second phase of detecting the sequence of touch inputs, using a gesture processor corresponding to the respective matching gesture recognizer comprises means for replacing one or more displayed views of the second software application with one or more views of the third software application.
29. The information processing apparatus of claim 21, wherein the first software application is an application launcher.
30. The information processing apparatus of claim 21, wherein the first software application is an operating system application.
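Claims 21–30 above describe a two-phase dispatch in which the first portion of a touch sequence is delivered to both applications, while matching recognizers are identified only from the first application's set. As a rough illustration only (this is not the patented implementation; the recognizer class, the prefix-matching rule, and the event names below are all invented for the sketch), the first phase might be simulated as:

```python
class GestureRecognizer:
    """Toy recognizer: 'recognizes' a touch sequence when the events seen
    so far are a prefix of (or equal to) its expected pattern. A real
    gesture recognizer would be a richer state machine."""
    def __init__(self, name, pattern):
        self.name = name
        self.pattern = pattern

    def recognizes(self, touches):
        return touches == self.pattern[:len(touches)]

def first_phase(first_portion, first_set, second_set):
    # Per claim 21: the first portion is transmitted to BOTH applications...
    for recognizer in first_set + second_set:
        recognizer.recognizes(first_portion)  # both sets receive the events
    # ...but matching recognizers are identified from the first set only.
    return [r for r in first_set if r.recognizes(first_portion)]
```

For example, a two-event portion `["down", "move"]` would keep a swipe recognizer in contention while eliminating a tap recognizer, since the tap's expected second event is `"up"`.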
31. A method performed in an electronic device with a touch-sensitive display, the electronic device configured to execute at least a first software application and a second software application, the first software application comprising a first set of one or more gesture recognizers and the second software application comprising one or more views and a second set of one or more gesture recognizers, wherein each gesture recognizer has a corresponding gesture processor, the method comprising:
displaying at least a subset of the one or more views of the second software application without displaying any views of the first software application; and
when at least a subset of the one or more views of the second software application are displayed without displaying any views of the first software application:
detecting a sequence of touch inputs on the touch-sensitive display, the sequence of touch inputs including a first portion of one or more touch inputs and a second portion of the one or more touch inputs following the first portion, wherein each of the first portion and the second portion defines one or more changes to the one or more touch inputs; and
during a first phase of detecting the sequence of touch inputs:
transmitting the first portion of one or more touch inputs to the first software application and the second software application;
identifying, from the first set of gesture recognizers, one or more matching gesture recognizers that recognize the first portion of one or more touch inputs; and
processing the first portion of one or more touch inputs using one or more gesture processors corresponding to the one or more matching gesture recognizers.
32. The method of claim 31, wherein the detecting occurs when a touch input in the first portion of one or more touch inputs at least partially overlaps at least one of the displayed views of the second software application.
33. The method of claim 31, wherein the displaying comprises: displaying at least a subset of the one or more views of the second software application without displaying views of any other applications.
34. The method of claim 31, comprising:
after the first phase, during a second phase of detecting the sequence of touch inputs:
transmitting the second portion of one or more touch inputs to the first software application without transmitting the second portion of one or more touch inputs to the second software application;
identifying, from the one or more matching gesture recognizers, a second matching gesture recognizer that recognizes the sequence of touch inputs; and
processing the sequence of touch inputs using a gesture processor corresponding to the respective matching gesture recognizer.
35. The method of claim 34, wherein processing the sequence of touch inputs using the gesture processor corresponding to the respective matching gesture recognizer comprises: displaying one or more views of the first software application.
36. The method of claim 34, wherein processing the sequence of touch inputs using the gesture processor corresponding to the respective matching gesture recognizer comprises: replacing the display of the one or more views of the second software application with the display of the one or more views of the first software application.
37. The method of claim 34, wherein the electronic device executes the first software application, the second software application, and a third software application simultaneously; and processing the sequence of touch inputs using the gesture processor corresponding to the respective matching gesture recognizer comprises: replacing the one or more displayed views of the second software application with the one or more views of the third software application.
38. The method of claim 34, wherein processing the sequence of touch inputs using the gesture processor corresponding to the respective matching gesture recognizer comprises:
displaying a set of open application icons corresponding to at least some of a plurality of concurrently open applications in a first predetermined area of the touch-sensitive display; and
concurrently displaying at least a subset of the one or more views of the second software application.
39. The method of claim 31, wherein the first software application is an application launcher.
40. The method of claim 31, wherein the first software application is an operating system application.
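Method claims 31–40 add the second phase: after the first portion has been delivered to both applications, the remainder of the sequence goes to the first software application only (e.g. an application launcher per claims 39–40), and a recognizer that completes its gesture drives the view switch. Below is a self-contained sketch under invented assumptions (plain dicts standing in for recognizer objects, a caller-supplied split point, and hypothetical event names) — a simulation of the claimed flow, not the actual implementation:

```python
def prefix_match(touches, pattern):
    # A touch prefix "matches" when it could still grow into the pattern.
    return touches == pattern[:len(touches)]

def dispatch(sequence, split, first_app, second_app):
    """Two-phase dispatch in the spirit of claims 31 and 34.

    first_app / second_app map a recognizer name to the full touch pattern
    it expects; `split` divides the sequence into first and second portions.
    Returns the name of the recognizer that recognizes the whole sequence,
    or None if no recognizer in the first application completes.
    """
    first_portion = sequence[:split]
    # Phase 1: the first portion goes to both applications, but matching
    # recognizers are identified from the first application's set only.
    matching = {name: pat for name, pat in first_app.items()
                if prefix_match(first_portion, pat)}
    _ = {name: pat for name, pat in second_app.items()
         if prefix_match(first_portion, pat)}  # delivered, but not used here
    # Phase 2: the remainder goes to the first application only; pick the
    # recognizer whose full pattern is completed by the whole sequence.
    for name, pattern in matching.items():
        if sequence == pattern:
            return name  # its gesture processor would now run
    return None
```

A completed home-swipe here would hand control to the launcher's gesture processor, which claims 36–38 describe as replacing the second application's views (or showing a row of open-application icons) rather than letting the second application consume the remainder of the gesture.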
Applications Claiming Priority (8)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201061425222P | 2010-12-20 | 2010-12-20 | |
| US61/425,222 | 2010-12-20 | ||
| US13/077,931 | 2011-03-31 | ||
| US13/077,931 (US9311112B2) | 2009-03-16 | 2011-03-31 | Event recognition |
| US13/077,927 (US8566045B2) | 2009-03-16 | 2011-03-31 | Event recognition |
| US13/077,524 (US9244606B2) | 2010-12-20 | 2011-03-31 | Device, method, and graphical user interface for navigation of concurrently open software applications |
| US13/077,927 | 2011-03-31 | ||
| US13/077,524 | 2011-03-31 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1177519A1 | 2013-08-23 |
| HK1177519B | 2017-08-04 |