HK1195642A - Devices, methods, and graphical user interfaces for navigating and editing text - Google Patents
Devices, methods, and graphical user interfaces for navigating and editing text
- Publication number
- HK1195642A
- Authority
- HK
- Hong Kong
- Prior art keywords
- gesture
- predefined conditions
- text
- predefined
- location
- Prior art date
Description
Technical Field
The present invention relates generally to electronic devices with touch-sensitive surfaces, including but not limited to electronic devices with touch-sensitive surfaces that display and edit electronic documents.
Background
In recent years, the use of touch-sensitive surfaces as input devices for computers and other electronic computing devices has increased dramatically. Exemplary touch-sensitive surfaces include touch pads and touch screen displays. Such surfaces are widely used to interact with electronic documents on a display.
Exemplary interactions include navigating and editing an electronic document. For example, users often need to scroll or pan an electronic document to text that needs to be edited. The user also needs to locate or reposition the insertion marker in the text to be edited and then enter additional text (e.g., via a keyboard). These document navigation and editing operations are typically performed multiple times while processing an electronic document. These interactions may be performed in any application that includes text entry capabilities (e.g., drawing applications, presentation applications (e.g., Keynote from Apple Inc. of Cupertino, California), word processing applications (e.g., Pages from Apple Inc. of Cupertino, California), website creation applications (e.g., iWeb from Apple Inc. of Cupertino, California), or spreadsheet applications (e.g., Numbers from Apple Inc. of Cupertino, California)).
However, existing methods for navigating and editing documents via touch-sensitive surfaces are often cumbersome and inefficient. For example, dragging an insertion marker with a finger that is moving across the touch screen requires careful hand-eye coordination and a steady finger to properly position the insertion marker at the desired location. In addition, heuristics for disambiguating whether a finger gesture is attempting to reposition an insertion marker (rather than moving a document) or attempting to move an entire document (rather than repositioning an insertion marker) make repositioning an insertion marker a slow and cumbersome process, which frustrates users and wastes energy. This latter consideration is particularly important in battery-powered devices.
Disclosure of Invention
Accordingly, there is a need to provide electronic devices with faster, more efficient methods and interfaces for navigating and editing electronic documents via touch-sensitive surfaces. Such methods and interfaces may supplement or replace conventional methods for navigating and editing electronic documents via touch-sensitive surfaces. Such methods and interfaces may reduce the cognitive burden on the user and produce a more efficient human-machine interface. For battery-powered devices, such methods and interfaces conserve power and increase the time between battery charges.
The above-described deficiencies and other problems associated with user interfaces for electronic devices having touch-sensitive surfaces may be reduced or eliminated by the disclosed devices. In some embodiments, the device is a desktop computer. In some implementations, the device is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the device has a touch pad. In some embodiments, the device has a touch-sensitive display (also referred to as a "touch screen" or "touch screen display"). In some embodiments, the device has a Graphical User Interface (GUI), one or more processors, memory, and one or more modules, programs or sets of instructions stored in the memory for performing various functions. In some implementations, the user interacts with the GUI primarily through finger contacts and gestures on the touch-sensitive surface. In some embodiments, the functions may include image editing, drawing, presentation, word processing, website creation, disc authoring, spreadsheet making, game playing, telephone dialing, video conferencing, email transmission, instant messaging, exercise support (workout support), digital photography, digital video, web browsing, digital music playing, and/or digital video playing. Executable instructions for performing these functions may be included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
In accordance with some implementations, a method is performed on an electronic device with a display and a touch-sensitive surface. The method comprises the following steps: displaying text of an electronic document on the display; displaying an insertion marker at a first location in the text of the electronic document; detecting a first horizontal gesture on the touch-sensitive surface; in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions: translating the electronic document on the display according to the direction of the first horizontal gesture and maintaining the insertion marker at the first location in the text; and in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
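The two-branch response described above can be illustrated with a short sketch. This is a hypothetical illustration only, not the patent's implementation: the `Gesture` and `Editor` classes, the pan step of 10 points, and the boolean condition flags are all assumptions made for clarity.

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    direction: int          # +1 = rightward swipe, -1 = leftward swipe (assumed encoding)
    meets_first_set: bool   # first set of predefined conditions (pan the document)
    meets_second_set: bool  # second set of predefined conditions (move the marker)

@dataclass
class Editor:
    text: str
    marker: int        # insertion-marker index into `text`
    scroll_x: int = 0  # horizontal pan offset of the document, in points

    def handle_horizontal_gesture(self, g: Gesture) -> None:
        if g.meets_first_set:
            # Translate the document; the insertion marker keeps its first location.
            self.scroll_x += g.direction * 10
        elif g.meets_second_set:
            # Move the insertion marker by one character in the gesture's direction.
            self.marker = max(0, min(len(self.text), self.marker + g.direction))
```

Under this sketch, a gesture satisfying the first condition set changes only the document offset, while one satisfying the second condition set changes only the marker index, mirroring the method's two mutually exclusive responses.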
In accordance with certain implementations, an electronic device includes a display, a touch-sensitive surface, one or more processors, memory, and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for: displaying text of an electronic document on the display; displaying an insertion marker at a first location in the text of the electronic document; detecting a first horizontal gesture on the touch-sensitive surface; in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions: translating the electronic document on the display according to the direction of the first horizontal gesture and maintaining the insertion marker at the first location in the text; and in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
In accordance with certain implementations, a computer-readable storage medium has stored therein instructions that, when executed by an electronic device with a display and a touch-sensitive surface, cause the device to: displaying text of an electronic document on the display; displaying an insertion marker at a first location in the text of the electronic document; detecting a first horizontal gesture on the touch-sensitive surface; in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions: translating the electronic document on the display according to the direction of the first horizontal gesture and maintaining the insertion marker at the first location in the text; and in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
In accordance with some embodiments, a graphical user interface on an electronic device with a display, a touch-sensitive surface, a memory, and one or more processors to execute one or more programs stored in the memory comprises: text of an electronic document, and an insertion marker at a first location in the text of the electronic document. A first horizontal gesture is detected on the touch-sensitive surface. In response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions, the electronic document is panned on the display according to the direction of the first horizontal gesture and the insertion marker is maintained at the first location in the text. In response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, the insertion marker is moved from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
According to some embodiments, an electronic device comprises: a display; a touch-sensitive surface; means for displaying text of an electronic document on the display; means for displaying an insertion marker at a first location in the text of the electronic document; means for detecting a first horizontal gesture on the touch-sensitive surface; means, responsive to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions, for translating the electronic document on the display according to the direction of the first horizontal gesture and maintaining the insertion marker at the first location in the text; and means, responsive to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, for moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
In accordance with some implementations, an information processing apparatus for use in an electronic device with a display and a touch-sensitive surface includes: means for displaying text of an electronic document on the display; means for displaying an insertion marker at a first location in the text of the electronic document; means for detecting a first horizontal gesture on the touch-sensitive surface; means, responsive to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions, for translating the electronic document on the display according to the direction of the first horizontal gesture and maintaining the insertion marker at the first location in the text; and means, responsive to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, for moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
According to some embodiments, an electronic device includes a display unit configured to display text of an electronic document, and an insertion marker at a first location in the text of the electronic document; a touch-sensitive surface unit configured to receive a gesture; and a processing unit coupled to the display unit and the touch-sensitive surface unit. The processing unit is configured to: detect a first horizontal gesture on the touch-sensitive surface unit; in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions: translate the electronic document on the display unit according to the direction of the first horizontal gesture and maintain the insertion marker at the first location in the text; and in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, move the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
Accordingly, electronic devices having displays and touch-sensitive surfaces are provided with faster and more efficient methods and interfaces for navigating and editing text, thereby increasing the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may supplement or replace conventional methods for navigating and editing text via a touch-sensitive surface.
Drawings
For a better understanding of the various embodiments of the present invention mentioned above, as well as additional embodiments thereof, reference is made to the description of the various embodiments below, in conjunction with the accompanying drawings, wherein like reference numerals refer to corresponding parts in the figures.
FIG. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
FIG. 1B is a block diagram illustrating exemplary components for event processing, according to some embodiments.
FIG. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
FIG. 4A illustrates an exemplary user interface for an application menu on a portable multifunction device, in accordance with certain embodiments.
FIG. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display, in accordance with some embodiments.
Fig. 5A-5P illustrate exemplary user interfaces for navigating and editing text via a touch-sensitive surface, according to some embodiments.
FIGS. 6A-6F are flow diagrams illustrating methods for navigating and editing text via a touch-sensitive surface, according to some embodiments.
FIG. 7 is a functional block diagram of an electronic device according to some embodiments.
Detailed Description
Many electronic devices with touch-sensitive surfaces include applications with document text editing capabilities. Users often need to scroll or pan an electronic document to the text that needs to be edited. The user also needs to locate or reposition the insertion marker in the text to be edited and then enter additional text (e.g., via a keyboard). These document navigation and editing operations are typically performed multiple times while processing an electronic document. Existing methods typically locate the insertion marker by dragging it with a finger that is moving across the touch screen, which requires careful hand-eye coordination and a steady finger to correctly position the insertion marker at the desired location. In addition, the heuristics for disambiguating whether a finger gesture is attempting to reposition an insertion marker (rather than moving a document), or whether a finger gesture is attempting to move an entire document (rather than repositioning an insertion marker), make existing methods for repositioning insertion markers slow and cumbersome.
The devices and methods described below overcome these problems by moving an insertion marker a predefined amount (e.g., moving a character, a word, a sentence, a line, or a segment) using a quick finger swipe (swipe) gesture, and by improved heuristics for disambiguating whether the gesture represents repositioning of the insertion marker or panning of the electronic document.
For example, when the user performs a horizontal swipe gesture, if a fast swipe is detected (e.g., based on the initial velocity of the gesture), the device repositions the insertion marker; but if a slower, more intentional swipe gesture is detected, the document is translated. If the gesture is determined to be a gesture that moves the insertion marker, the insertion marker is typically moved in the direction of the gesture by an amount based on the number of fingers in the gesture. For example, a single-finger horizontal swipe gesture moves the insertion marker by one character, a two-finger horizontal swipe gesture moves the insertion marker by one word, and a three-finger horizontal swipe gesture moves the insertion marker to the beginning/end of the current line of text.
Thus, a fast, imprecise finger swipe gesture may be used to move the insertion marker by exactly a desired amount, whereas a slower, intentional gesture may be used to navigate (e.g., scroll or pan) the document. This makes navigating and editing text via the touch-sensitive surface faster and more efficient.
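The disambiguation heuristic described in the preceding paragraphs can be sketched as a small classifier. This is a minimal sketch under stated assumptions: the velocity threshold value, the function name, and the action labels are illustrative, not values or names from the patent.

```python
# Assumed threshold separating a "fast" swipe from a slow, deliberate one.
FAST_SWIPE_THRESHOLD = 300.0  # points/second (illustrative value)

def classify_swipe(initial_velocity: float, finger_count: int) -> str:
    """Classify a horizontal swipe: a slow swipe pans the document, while a
    fast swipe moves the insertion marker by a unit chosen by finger count."""
    if abs(initial_velocity) < FAST_SWIPE_THRESHOLD:
        return "pan-document"
    # Fast swipe: the number of fingers selects the movement unit.
    return {
        1: "move-one-character",
        2: "move-one-word",
        3: "move-to-line-boundary",
    }.get(finger_count, "pan-document")
```

The design point of such a heuristic is that gesture speed, not position accuracy, carries the user's intent: a quick flick anywhere on the text moves the marker by a discrete, predictable unit, so no precise targeting is required.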
Fig. 1A-1B, 2, 3, and 7 provide a description of exemplary devices, below. Fig. 4A-4B and 5A-5P illustrate exemplary user interfaces for navigating and editing text via a touch-sensitive surface. Fig. 6A-6F are flow diagrams illustrating a method for navigating and editing text via a touch-sensitive surface. The user interfaces in fig. 5A to 5P are used to illustrate the processes in fig. 6A to 6F.
Exemplary device
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These designations are only used to distinguish one element from another. For example, a first contact may be termed a second contact, and, similarly, a second contact may be termed a first contact, without departing from the scope of the present invention. The first contact and the second contact are both contacts, but they are not the same contact.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" may be interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
Embodiments of an electronic device, a user interface for such a device, and an associated process for using such a device are described. In some embodiments, the device is a portable communication device (such as a mobile phone) that also contains other functionality, such as PDA and/or music player functionality. Exemplary embodiments of the portable multifunction device include, but are not limited to, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptop computers or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touch pads), may also be used. It should also be understood that in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device may include one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device may be adjusted and/or changed from one application to the next and/or within the respective application. In this way, a common physical architecture of the device (such as a touch-sensitive surface) can support various applications through a user interface that is intuitive and transparent to the user.
Attention is now directed to embodiments of portable devices having touch sensitive displays. FIG. 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display 112 in accordance with some embodiments. Touch-sensitive display 112 is sometimes referred to as a "touch screen" for convenience, and may also be considered or referred to as a touch-sensitive display system. Device 100 may include memory 102 (which may include one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripheral interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, an input/output (I/O) subsystem 106, other input or control devices 116, and an external port 124. The device 100 may include one or more optical sensors 164. These components may communicate over one or more communication buses or signal lines 103.
It should be understood that device 100 is only one example of a portable multifunction device, and that device 100 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of components. The various components shown in fig. 1A may be implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
The memory 102 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to the memory 102 by other components of the device 100, such as the CPU120 and the peripheral interface 118, may be controlled by a memory controller 122.
Peripheral interface 118 may be used to couple the input and output peripherals of the device to CPU120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 for performing various functions for the device 100 and for processing data.
In some embodiments, peripherals interface 118, CPU120, and memory controller 122 may be implemented on a single chip, such as chip 104. In some other embodiments, peripheral interface 118, CPU120, and memory controller 122 may be implemented on separate chips.
RF (radio frequency) circuitry 108 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via electromagnetic signals. RF circuitry 108 may include known circuitry for performing these functions, including but not limited to: an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a Subscriber Identity Module (SIM) card, memory, etc. RF circuitry 108 may communicate via wireless communication with networks, such as the internet (also known as the World Wide Web (WWW)), intranets, and/or wireless networks, such as cellular telephone networks, wireless local area networks (LANs), and/or metropolitan area networks (MANs), and with other devices. The wireless communication may use any of a variety of communication standards, protocols, and technologies, including but not limited to: Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), and/or Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. The audio circuitry 110 receives audio data from the peripheral interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to the speaker 111. The speaker 111 converts the electrical signal into a sound wave audible to a human. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuit 110 converts the electrical signal to audio data and transmits the audio data to the peripheral interface 118 for processing. Audio data may be retrieved from memory 102 and/or RF circuitry 108 and/or transmitted to memory 102 and/or RF circuitry 108 through peripherals interface 118. In some implementations, the audio circuit 110 also includes a headphone jack (e.g., 212 in fig. 2). The headphone jack provides an interface between the audio circuitry 110 and a removable audio input/output peripheral such as output-only headphones or headphones that can both output (e.g., monaural or binaural headphones) and input (e.g., microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as the touch screen 112 and other input control devices 116, to a peripheral interface 118. The I/O subsystem 106 may include a display controller 156 and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive electrical signals from/transmit electrical signals to other input or control devices 116. Other input or control devices 116 may include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, and the like. In certain alternative embodiments, input controller(s) 160 may be coupled to any of the following (or none): a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. The one or more buttons (e.g., 208 in fig. 2) may include up/down buttons for volume control of speaker 111 and/or microphone 113. The one or more buttons may include a push button (e.g., 206 in fig. 2).
Touch-sensitive display 112 provides an input interface and an output interface between the device and the user. Display controller 156 receives electrical signals from touch screen 112 and/or transmits electrical signals to touch screen 112. Touch screen 112 displays visual output to a user. The visual output may include graphics, text, icons, video, and any combination thereof (collectively "graphics"). In some implementations, some or all of the visual output can correspond to user interface objects.
The touch screen 112 has a touch-sensitive surface, sensor or set of sensors that accept input from a user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 112. In an exemplary embodiment, the point of contact between touch screen 112 and the user corresponds to a finger of the user.
The touch screen 112 may use LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies may be used in other embodiments. Touch screen 112 and display controller 156 may detect contact and any movement or breaking of contact using any of a variety of touch sensing technologies now known or later developed, including but not limited to: capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In one exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod Touch®, and iPad® from Apple Inc. of Cupertino, California.
The touch screen 112 may have a video resolution in excess of 100 dpi. In some implementations, the touch screen has a video resolution of approximately 168 dpi. The user may make contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which are less accurate than stylus-based inputs due to the larger contact area of the finger on the touch screen. In some implementations, the device translates the coarse finger-based input into a precise pointer/cursor position or command for performing an action desired by the user.
In some embodiments, in addition to a touch screen, device 100 may include a touch pad (not shown) for activating or deactivating particular functions. In some implementations, the touchpad is a touch-sensitive area of the device that does not display visual output, unlike a touch screen. The touchpad may be a touch-sensitive surface separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
The device 100 also includes a power system 162 that powers the various components. Power system 162 may include a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED)), and any other components related to the generation, management, and distribution of power in a portable device.
The device 100 may also include one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The optical sensor 164 may include a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor 164 receives light from the environment projected through one or more lenses and converts the light into data representing an image. In conjunction with the imaging module 143 (also referred to as a camera module), the optical sensor 164 may capture still images or video. In some embodiments, the optical sensor is located on the back of the device 100, opposite the touch screen display 112 on the front of the device, so that the touch screen display can be used as a viewfinder for still and/or video image acquisition. In some implementations, another optical sensor is located on the front of the device so that user images can be acquired for the video conference while the user views other video conference participants on the touch screen display.
The device 100 may also include one or more proximity sensors 166. FIG. 1A shows a proximity sensor 166 coupled to peripheral interface 118. Alternatively, the proximity sensor 166 may be coupled to the input controller 160 in the I/O subsystem 106. In some implementations, the proximity sensor turns off and disables the touch screen 112 when the multifunction device is near the user's ear (e.g., when the user is making a phone call).
The device 100 may also include one or more accelerometers 168. FIG. 1A shows accelerometer 168 coupled to peripheral interface 118. Alternatively, accelerometer 168 may be coupled to input controller 160 in I/O subsystem 106. In some implementations, information is displayed on the touch screen display in a portrait view or a landscape view based on analysis of data received from one or more accelerometers. In addition to the accelerometer(s) 168, the device 100 may optionally include a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information regarding the position and orientation (e.g., portrait or landscape) of the device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or set of instructions) 128, a contact/motion module (or set of instructions) 130, a graphics module (or set of instructions) 132, a text input module (or set of instructions) 134, a Global Positioning System (GPS) module (or set of instructions) 135, and an application (or set of instructions) 136. Further, as shown in fig. 1A and 3, in some embodiments, memory 102 stores device/global internal state 157. Device/global internal state 157 includes one or more of the following: an active application state indicating the currently active application (if any); display status indicating applications, views, and other information occupying various areas of touch screen display 112; sensor status, including information obtained from the various sensors of the device and the input control device 116; and location information relating to the location and/or attitude of the device.
The operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between the various hardware and software components.
The communication module 128 facilitates communication with other devices through one or more external ports 124, and also includes various software components for processing data received through the RF circuitry 108 and/or the external ports 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted to couple to other devices either directly or indirectly through a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, similar to, and/or compatible with the 30-pin connectors used on iPod (trademark of Apple inc.) devices.
Contact/motion module 130 may detect contact with touch screen 112 (in conjunction with display controller 156) and other touch sensitive devices (e.g., a touchpad or a physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to the detection of contact, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining whether there is movement of the contact and tracking movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the contact (which is represented by a series of contact data) may include determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the contact. These operations may be applied to a single contact (e.g., one finger contact) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple finger contacts). In some implementations, the contact/motion module 130 and the display controller 156 detect contact on a touchpad.
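As a hedged illustration of the motion quantities listed above (speed, velocity, acceleration), the following sketch derives them from a series of timestamped contact data points. The sampling format and function name are assumptions for illustration, not part of the described implementation.

```python
# Illustrative sketch: derive speed, velocity, and acceleration from a
# series of contact data points. Sample format (t, x, y) is an assumption.

def motion_metrics(samples):
    """samples: list of (t, x, y) contact data points, in time order.

    Returns (speed, velocity, acceleration), where speed is a magnitude,
    velocity is a (vx, vy) vector, and acceleration is the change in
    speed over the last two segments (0.0 if only one segment exists).
    """
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    speed = (vx * vx + vy * vy) ** 0.5           # magnitude of velocity
    if len(samples) >= 3:
        tp, xp, yp = samples[-3]
        dtp = t0 - tp
        vxp, vyp = (x0 - xp) / dtp, (y0 - yp) / dtp
        prev_speed = (vxp * vxp + vyp * vyp) ** 0.5
        accel = (speed - prev_speed) / dt        # change in magnitude
    else:
        accel = 0.0
    return speed, (vx, vy), accel
```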
The contact/motion module 130 may detect gestures input by the user. Different gestures on the touch-sensitive surface have different contact patterns. Thus, gestures may be detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes: a finger-down event is detected followed by a finger-up (e.g., lift) event at the same location (or substantially the same location) as the finger-down event (e.g., at the icon location). As another example, detecting a finger swipe gesture on the touch surface includes: detecting a finger-down event, followed by detecting one or more finger drag events, followed by detecting a finger-up (e.g., lift) event.
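The contact-pattern matching described above can be sketched as follows; the event encoding, the distance threshold for "substantially the same location", and the function name are illustrative assumptions.

```python
# Sketch of gesture detection by contact pattern, per the description above.
# Each sub-event is a (kind, x, y) tuple with kind in {'down', 'drag', 'up'}.

def classify_gesture(events):
    """Classify a sub-event sequence as 'tap', 'swipe', or None."""
    if not events or events[0][0] != 'down' or events[-1][0] != 'up':
        return None
    down, up = events[0], events[-1]
    moved = abs(up[1] - down[1]) + abs(up[2] - down[2])
    drags = [e for e in events if e[0] == 'drag']
    # Tap: finger-down followed by finger-up at (substantially) the
    # same location; 10 units is a hypothetical tolerance.
    if not drags and moved <= 10:
        return 'tap'
    # Swipe: finger-down, one or more finger-drag events, finger-up.
    if drags:
        return 'swipe'
    return None
```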
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the brightness of displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including but not limited to: text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations, and the like.
In some implementations, the graphics module 132 stores data representing graphics to be used. Each graphic may be assigned a corresponding code. The graphics module 132 receives one or more codes specifying graphics to be displayed, along with coordinate data and other graphics attribute data, if needed, from an application or the like, and then generates screen image data for output to the display controller 156.
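A minimal sketch of the code-to-graphic lookup described above, assuming a simple mapping from assigned codes to stored graphics; the class and method names are hypothetical, not the module's actual interface.

```python
# Illustrative sketch: each graphic is assigned a code; the render step
# receives codes plus coordinate data and produces draw commands.

class GraphicsStore:
    def __init__(self):
        self._graphics = {}            # code -> stored graphic data

    def register(self, code, graphic):
        self._graphics[code] = graphic

    def render(self, requests):
        """requests: list of (code, x, y). Returns screen image data as
        a list of (graphic, x, y) draw commands for the display stage."""
        return [(self._graphics[code], x, y) for code, x, y in requests]

store = GraphicsStore()
store.register(1, 'soft-key')
store.register(2, 'icon')
commands = store.render([(2, 10, 20), (1, 0, 0)])
```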
Text input module 134 (which may be a component of graphics module 132) provides a soft keyboard for entering text into various applications (e.g., contacts 137, email 140, IM 141, browser 147, and any other application that requires text input).
The GPS module 135 determines the location of the device and provides this information for use by various applications (e.g., to the phone 138 for use in location-based dialing, to the camera 143 as picture/video metadata, and to applications that provide location-based services such as weather widgets, local yellow pages widgets, and map/navigation widgets).
The applications 136 may include the following modules (or sets of instructions), or a subset or superset thereof:
● contact module 137 (sometimes referred to as an address book or contact list);
● a telephone module 138;
● video conferencing module 139;
● E-mail client module 140;
● Instant Messaging (IM) module 141;
● exercise support module 142;
● camera module 143 for still and/or video images;
● an image management module 144;
● browser module 147;
● calendar module 148;
● widget module 149, which may include one or more of the following: a weather widget 149-1, a stock widget 149-2, a calculator widget 149-3, an alarm widget 149-4, a dictionary widget 149-5, and other widgets obtained by the user, and a user-created widget 149-6;
● a widget creator module 150 for making a user-created widget 149-6;
● search module 151;
● video and music player module 152, which may consist of a video player module and a music player module;
● memo module 153;
● map module 154; and/or
● online video module 155.
Examples of other applications 136 that may be stored in memory 102 include other word processing applications (e.g., word processing module 384), other image editing applications, drawing applications (e.g., drawing module 380), presentation applications (presentation module 382), spreadsheet applications (e.g., spreadsheet module 390), website creation applications (e.g., website creation module 386), JAVA enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contacts module 137 may be used to manage an address book or contact list (e.g., stored in memory 102 or in application internal state 192 of contacts module 137 in memory 370), including: adding name(s) to an address book; deleting name(s) from the address book; associating phone number(s), email address(es), physical address(es), or other information with a name; associating an image with a name; categorizing and sorting names; providing a telephone number or email address to initiate and/or facilitate communications by telephone 138, video conference 139, email 140, or IM 141; and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, telephone module 138 may be used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in address book 137, modify an already-entered telephone number, dial a corresponding telephone number, conduct a conversation, and disconnect or hang up when the conversation is complete. As described above, wireless communication may use any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephone module 138, video conference module 139 includes executable instructions for initiating, conducting, and terminating a video conference between a user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send emails with still or video images captured with the camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, instant messaging module 141 includes executable instructions for entering a sequence of characters corresponding to an instant message, for modifying previously entered characters, for transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephone-based instant messages, or using XMPP, SIMPLE, or IMPS for internet-based instant messages), for receiving instant messages, and for viewing received instant messages. In some embodiments, the transmitted and/or received instant messages may include graphics, photos, audio files, video files, and/or other attachments, as supported by MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module 146, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie-burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
In conjunction with touch screen 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for capturing still images or video (including video streams) and storing them in memory 102, modifying characteristics of the still images or video, or deleting the still images or video from memory 102.
In conjunction with touch screen 112, display controller 156, contact module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or manipulating, marking, deleting, presenting (e.g., in a digital slide show or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions for browsing the internet (including searching, linking, receiving and displaying web pages or portions of web pages, and attachments and other files linked to web pages) according to user instructions.
In conjunction with the RF circuitry 108, the touch screen 112, the display system controller 156, the contact module 130, the graphics module 132, the text input module 134, the email client module 140, and the browser module 147, the calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, widget module 149 is a small application (e.g., weather widget 149-1, stock widget 149-2, calculator widget 149-3, alarm widget 149-4, and dictionary widget 149-5) that may be downloaded and used by a user, or a small application created by a user (e.g., user-created widget 149-6). In some embodiments, the widgets include HTML (HyperText markup language) files, CSS (cascading Style sheets) files, and JavaScript files. In some embodiments, the widgets include XML (extensible markup language) files and JavaScript files (e.g., Yahoo! widgets).
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, widget creator module 150 may be used by a user to create a widget (e.g., to turn a user-specified portion of a web page into a widget).
In conjunction with touch screen 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching for text, music, sound, images, videos, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) based on user indications.
In conjunction with touch screen 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speakers 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch screen 112 or on a display connected externally via external port 124). In some embodiments, the device 100 may include the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, memo module 153 includes executable instructions to create and manage a memo, calendar, etc. according to the user's instructions.
In conjunction with the RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, the map module 154 may be used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data about stores and other points of interest at or near a particular location; and other location-based data) as directed by a user.
In conjunction with touch screen 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), and play back particular online videos (e.g., on the touch screen or on a display connected externally via external port 124), send emails with links to particular online videos, and manage online videos in one or more file formats, such as H.264. In some implementations, the instant messaging module 141, rather than the email client module 140, is used to send a link to a particular online video.
Each of the above modules and applications corresponds to a set of instructions for performing one or more of the functions described above as well as the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 may store a subset of the modules and data structures described above. In addition, memory 102 may store other modules and data structures not described above.
In some implementations, device 100 is a device on which the operation of a predefined set of functions is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or touchpad as the primary input control device for operating device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 may be reduced.
The predefined set of functions that may be performed exclusively through the touch screen and/or the touchpad include navigation between user interfaces. In some implementations, the touchpad, when touched by a user, navigates the device 100 from any user interface that may be displayed on the device 100 to a main menu, home screen, or root menu. In such embodiments, the touchpad may be referred to as a "menu button". In some other implementations, the menu button may be a physical push button or other physical input control device rather than a touchpad.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments. In some implementations, memory 102 (in FIGS. 1A and 1B) or 370 (FIG. 3) includes event classifier 170 (e.g., in operating system 126) and a corresponding application 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
Event classifier 170 receives the event information and determines application 136-1 and the application view 191 of application 136-1 to which the event information is delivered. The event classifier 170 includes an event monitor 171 and an event dispatcher module 174. In some implementations, the application 136-1 includes an application internal state 192 that indicates the current application view(s) displayed on the touch-sensitive display 112 when the application is active or executing. In some implementations, device/global internal state 157 is used by event classifier 170 to determine the application(s) currently active, and application internal state 192 is used by event classifier 170 to determine the application view 191 to which to deliver event information.
In some implementations, the application internal state 192 includes additional information, such as one or more of the following: resume information to be used when the application 136-1 resumes execution, user interface state information indicating information that the application 136-1 is displaying or is ready to display, a state queue that enables a user to return to a previous state or view of the application 136-1, and a redo/undo queue of previous actions performed by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about sub-events (e.g., user touches on touch-sensitive display 112 as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or sensors, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110). Information received by peripheral interface 118 from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, peripheral interface 118 sends event information. In other embodiments, peripheral interface 118 transmits event information only upon the occurrence of a significant event (e.g., receiving an input that exceeds a predetermined noise threshold and/or is longer than a predetermined duration).
In some embodiments, event classifier 170 further includes hit view determination module 172 and/or active event recognizer determination module 173.
Hit view determination module 172 provides software procedures for determining where a sub-event has occurred within one or more views when touch-sensitive display 112 displays more than one view. Views consist of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected may correspond to a program level in a program or view hierarchy of the application. For example, the lowest hierarchical view in which a touch is detected may be referred to as a hit view, and the set of events identified as correct inputs may be determined based at least in part on the hit view of the initial touch that began the touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies as a hit view the lowest view in the hierarchy that should handle the sub-event. In most cases, the hit view is the lowest level view in which the initiating sub-event (i.e., the first sub-event in the sequence of sub-events that forms an event or potential event) occurs. Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source that caused it to be identified as the hit view.
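The hit view determination described above, finding the lowest view in the hierarchy containing the initiating sub-event, can be sketched as a recursive search. The View class and its bounds format are assumptions for illustration, not the patent's implementation.

```python
# Sketch of hit-view determination: return the deepest view in the
# hierarchy whose bounds contain the initial touch point.

class View:
    def __init__(self, name, bounds, children=()):
        self.name = name
        self.bounds = bounds              # (x, y, width, height), assumed
        self.children = list(children)

    def contains(self, point):
        x, y, w, h = self.bounds
        px, py = point
        return x <= px < x + w and y <= py < y + h

def hit_view(view, point):
    """Return the lowest view containing the point, or None."""
    if not view.contains(point):
        return None
    for child in view.children:
        hit = hit_view(child, point)
        if hit is not None:
            return hit                    # a lower view handles the touch
    return view                           # no child contains the point
```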
Active event recognizer determination module 173 determines the view or views in the view hierarchy that should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and thus determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is fully confined to the area associated with one particular view, the higher views in the hierarchy still remain actively involved views.
Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments that include active event recognizer determination module 173, event dispatcher module 174 delivers the event information to the event recognizer determined by active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores the event information in an event queue for retrieval by the corresponding event receiver module 182.
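A minimal sketch of the queued delivery described above, in which the dispatcher stores event information for later retrieval by an event receiver; the class and identifiers are hypothetical, assuming one queue per recognizer.

```python
# Illustrative sketch: dispatcher stores event information in per-recognizer
# queues for retrieval by the corresponding event receiver.

from collections import deque

class EventDispatcher:
    def __init__(self):
        self._queues = {}      # recognizer id -> queue of event info

    def dispatch(self, recognizer_id, event_info):
        self._queues.setdefault(recognizer_id, deque()).append(event_info)

    def retrieve(self, recognizer_id):
        """Return the oldest queued event info, or None if the queue is empty."""
        q = self._queues.get(recognizer_id)
        return q.popleft() if q else None
```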
In some embodiments, the operating system 126 includes an event classifier 170. Alternatively, application 136-1 includes event classifier 170. In other embodiments, the event classifier 170 is a separate module or is part of another module stored in the memory 102, such as the contact/motion module 130.
In some implementations, the application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for processing touch events that occur within a respective view of the user interface of the application. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the corresponding application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more event recognizers 180 are part of a separate module, such as a user interface suite (not shown), or a higher level object from which application 136-1 inherits methods and other properties. In some embodiments, each event handler 190 includes one or more of the following: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 may utilize or call data updater 176, object updater 177, or GUI updater 178 to update application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
The corresponding event recognizer 180 receives event information (e.g., event data 179) from the event classifier 170 and identifies events based on the event information. The event recognizer 180 includes an event receiver 182 and an event comparator 184. In some embodiments, event recognizer 180 further includes at least a subset of: metadata 183, and event delivery instructions 188 (which may include sub-event delivery instructions).
The event receiver 182 receives event information from the event classifier 170. The event information includes information about a sub-event (e.g., touch or touch movement). Depending on the sub-event, the event information also includes additional information, such as the location of the sub-event. When the sub-event relates to motion of a touch, the event information may also include the speed and direction of the sub-event. In some implementations, the event includes rotation of the device from one orientation to another (e.g., rotation from portrait to landscape, or vice versa) and the event information includes corresponding information about the current orientation of the device (also referred to as the device pose).
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), e.g., event 1 (187-1), event 2 (187-2), and so on. In some implementations, sub-events in an event 187 include, for example, touch begin, touch end, touch move, touch cancel, and multi-touch. In one example, the definition of event 1 (187-1) is a double tap on a displayed object. The double tap includes, for example, a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase. In another example, the definition of event 2 (187-2) is a drag on a displayed object. The drag includes, for example, a touch (or contact) on the displayed object for a predetermined phase, movement of the touch across touch-sensitive display 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
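The double-tap definition above, a predefined sequence of sub-events in which each step completes within a predetermined phase, can be sketched as a simple sequence matcher. The event encoding and the timeout value are assumptions for illustration.

```python
# Sketch of matching a sub-event sequence against a double-tap definition:
# touch begin, touch end, touch begin, touch end, each step within a
# predetermined phase (a hypothetical per-step timeout in seconds).

def matches_double_tap(sub_events, max_step=0.3):
    """sub_events: list of (kind, timestamp), kind in {'begin', 'end'}.

    Returns True if the sequence is begin/end/begin/end on the same
    object with each consecutive step within max_step seconds."""
    expected = ['begin', 'end', 'begin', 'end']
    if [kind for kind, _ in sub_events] != expected:
        return False
    times = [t for _, t in sub_events]
    return all(b - a <= max_step for a, b in zip(times, times[1:]))
```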
In some embodiments, event definition 187 includes definitions of events for respective user interface objects. In some embodiments, event comparator 184 performs a hit test for determining a user interface object associated with a sub-event. For example, in an application view in which three user interface objects are displayed on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test for determining which, if any, of the three user interface objects is associated with the touch (sub-event). If each display object is associated with a respective event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects the event handler associated with the sub-event and object that triggered the hit test.
In some embodiments, the definition of each event 187 further includes a delay action that delays the delivery of event information until it has been determined whether the sequence of sub-events corresponds to an event type of the event identifier.
When a respective event recognizer 180 determines that a sequence of sub-events does not match any event in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this case, other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the touch-based gesture in progress.
In certain embodiments, each event recognizer 180 includes metadata 183 with configurable attributes, flags (flags), and/or lists that indicate how the event delivery system should perform sub-event delivery to the actively involved event recognizer. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers may interact with each other. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate whether a sub-event is delivered to a different level in the view or program hierarchy.
In some embodiments, each event recognizer 180 activates an event handler 190 associated with an event when one or more sub-events of the event are recognized. In some embodiments, each event recognizer 180 delivers event information associated with an event to event handler 190. Activating the event handler 190 is different from sending (or delaying the sending) of sub-events to the corresponding hit view. In some embodiments, the event recognizer 180 throws a flag associated with recognizing the event, and the event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with a series of sub-events or views that are actively involved. An event handler associated with a series of sub-events or views actively involved receives the event information and performs a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates a phone number used in contacts module 137 or stores a video file used in video player module 145. In some embodiments, object updater 177 creates and updates data used in application 136-1. For example, the object updater 177 creates a new user interface object or updates the location of a user interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler(s) 190 include or have access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, data updater 176, object updater 177, and GUI updater 178 are included in two or more software modules.
It should be understood that the foregoing discussion regarding event handling of user touches on a touch-sensitive display also applies to other forms of user inputs for operating multifunction device 100 with input devices, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, with or without single or multiple keyboard presses or holds; user movements, taps, drags, scrolls, etc. on a touch pad; stylus inputs; movements of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof may be utilized as inputs corresponding to sub-events which define an event to be recognized.
FIG. 2 illustrates a portable multifunction device 100 having a touch screen 112 in accordance with some embodiments. The touch screen may display one or more graphics within a user interface (UI) 200. In this embodiment, as well as others described below, a user may select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some implementations, the gesture may include one or more taps, one or more swipes (from left to right, right to left, upward, and/or downward), and/or a rolling of a finger (from right to left, left to right, upward, and/or downward) that has made contact with device 100. In some embodiments, inadvertent contact with a graphic may not select the graphic. For example, when the gesture corresponding to selection is a tap, a swipe gesture that sweeps over an application icon may not select the corresponding application.
Device 100 may also include one or more physical buttons, such as a "home" or menu button 204. As previously described, the menu button 204 may be used to navigate to any application 136 in a set of applications that may be executed on the device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 112.
In one embodiment, device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, Subscriber Identity Module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 may be used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 may also accept verbal input through microphone 113 for activation or deactivation of some functions.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some implementations, the device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child learning toy), a gaming device, or a control device (e.g., a home or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. The communication bus 320 may include circuitry (sometimes referred to as a chipset) that interconnects and controls communication between system components. Device 300 includes an input/output interface 330 including a display 340, which is typically a touch screen display. The input/output interface 330 may also include a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355. Memory 370 includes high speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 370 may optionally include one or more storage devices remote from CPU(s) 310. In some implementations, memory 370 stores programs, modules, and data structures similar to those stored in memory 102 of portable multifunction device 100 (fig. 1), or a subset thereof. In addition, memory 370 may store additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. 
For example, memory 370 of device 300 may store drawing module 380, presentation module 382, word processing module 384, website creation module 386, disc authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1) may or may not store these modules.
Each of the above-described elements in fig. 3 may be stored in one or more of the aforementioned memory devices. Each of the above modules corresponds to a set of instructions for performing a function as described above. The modules or programs (i.e., sets of instructions) described above need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or rearranged in various embodiments. In some embodiments, memory 370 may store a subset of the modules and data structures described above. In addition, memory 370 may store additional modules and data structures not described above.
Attention is now directed to embodiments of a user interface ("UI") that may be implemented on portable multifunction device 100.
FIG. 4A illustrates an exemplary user interface for an application menu on portable multifunction device 100 according to some embodiments. A similar user interface may be implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
● signal strength indicator(s) 402 for wireless communication(s), such as cellular signals and Wi-Fi signals;
● time 404;
● Bluetooth designator 405;
● battery status indicator 406;
● Tray 408 with icons for frequently used applications, such as:
○ Phone 138, which may include an indicator 414 of the number of missed calls or voicemail messages;
○ E-mail client 140, which may include an indicator 410 of the number of unread e-mails;
○ Browser 147; and
○ Video and music player 152, also referred to as iPod (trademark of Apple Inc.) module 152; and
● Icons for other applications, such as:
○ IM 141;
○ Image management 144;
○ Camera 143;
○ Weather 149-1;
○ Stocks 149-2;
○ Workout support 142;
○ Calendar 148;
○ Clock 149-4;
○ Map 154;
○ Notes 153;
○ Settings 412, which provides access to settings for device 100 and its various applications 136;
○ Online video module 155, also referred to as YouTube (trademark of Google Inc.) module 155;
○ Word processor 384;
○ Drawing 380;
○ Spreadsheet 390; and
○ Presentation 382.
Fig. 4B illustrates an exemplary user interface for a device (e.g., device 300) having a touch-sensitive surface 451 (e.g., tablet or touchpad 355 of fig. 3) separate from a display 450 (e.g., touchscreen display 112), according to some embodiments. Although many of the examples below will be given for input on touch screen display 112 (where the touch-sensitive surface is combined with the display), in some implementations, the device detects input on a touch-sensitive surface that is separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in FIG. 4B) has a primary axis (e.g., 452 in FIG. 4B) that corresponds to a primary axis (e.g., 453 in FIG. 4B) on the display (e.g., 450). According to these embodiments, the device detects contacts (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at locations corresponding to respective locations on the display (e.g., in fig. 4B, 460 corresponds to 468 and 462 corresponds to 470). In this manner, user inputs (e.g., contact 460 and contact 462, and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in FIG. 4B) are used by the device to manipulate the user interface on the display (e.g., 450 in FIG. 4B) of the multifunction device when the touch-sensitive surface is separated from the display. It should be understood that similar methods may be used for the other user interfaces described herein.
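The correspondence between locations on a separate touch-sensitive surface and locations on the display can be sketched as a proportional mapping along each primary axis. This is an illustrative simplification under the assumption of a direct linear mapping; actual devices may apply acceleration or other transforms:

```python
def surface_to_display(contact, surface_size, display_size):
    """Map a contact location on the touch-sensitive surface (e.g., 451)
    to the corresponding location on the display (e.g., 450) by scaling
    each coordinate along the matching primary axis."""
    sx, sy = contact
    sw, sh = surface_size
    dw, dh = display_size
    return (sx * dw / sw, sy * dh / sh)
```

For example, a contact at the center of the surface maps to the center of the display, regardless of the two devices' relative sizes.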
User interface and associated process
Attention is now directed to embodiments of a user interface ("UI") and associated processes that may be implemented on an electronic device having a display and a touch-sensitive surface, such as device 300 or portable multifunction device 100.
FIGS. 5A-5P illustrate exemplary user interfaces for navigating and editing text with a touch-sensitive surface, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes in FIGS. 6A-6F.
Fig. 5A illustrates an electronic document 500 displayed on the touch screen 112 of the device 100. Electronic document 500 includes text 502. In some implementations, the electronic document is a plain text document, a word processing document, a presentation document with text, a spreadsheet with text, or a drawing document with text.
In FIG. 5A, text 502-1 in a first page is displayed on touch screen 112. As shown in FIG. 5A, an insertion marker 504 is displayed in text 502-1, just before the word "battle-field" in the sentence "We are met on a great battle-field of that war." For ease of explanation, the position of insertion marker 504 relative to text 502-1 as shown in FIG. 5A (i.e., just before the word "battle-field" in the sentence "We are met on a great battle-field of that war.") is hereinafter referred to as "position 1".
In some implementations, the electronic document 500 is displayed in an editing mode, and the insertion marker 504 is displayed in the text 502 while the document 500 is displayed in the editing mode. When in edit mode, a virtual keyboard 501 for receiving user text entries may be displayed on touch screen 112.
Also illustrated in FIG. 5A is a gesture 506 detected on touch screen 112. Gesture 506 includes a finger contact 506-A on touch screen 112 and a movement 508 of finger contact 506-A. The movement 508 is horizontal or substantially horizontal (e.g., movement 508 within 10, 20, or 30 degrees of deviation from true horizontal) and to the right.
The action performed by device 100 in response to detecting gesture 506 depends on whether gesture 506 is determined to satisfy one or more particular predefined conditions. In some implementations, the one or more predefined conditions include a condition on the initial velocity of gesture 506. For example, if the initial velocity of gesture 506 is less than a predefined threshold velocity, document 500 (including text 502) is translated (e.g., panned or scrolled) on touch screen 112 in accordance with the direction of movement 508 of gesture 506. For example, as shown in FIG. 5B, document 500 (including text 502-1) is translated 512 to the right in accordance with the direction of movement 508 of gesture 506. Insertion marker 504 moves to the right along with text 502-1, maintaining its position at position 1. In some embodiments, the predefined threshold velocity is 250, 333, or 500 points per second.
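The velocity condition amounts to a simple threshold test. The following sketch is illustrative only; the threshold value is one of the example values given above, and the action names are assumptions of this sketch:

```python
PREDEFINED_THRESHOLD_VELOCITY = 333  # points per second (example value)

def action_for_gesture(initial_velocity):
    """Below the threshold the document is translated; at or above it
    the insertion marker is moved by one character."""
    if initial_velocity < PREDEFINED_THRESHOLD_VELOCITY:
        return "translate document"
    return "move insertion marker one character"
```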
As shown in FIG. 5C, if the initial velocity of gesture 506 is greater than the predefined threshold velocity, insertion marker 504 moves one character in text 502-1 from position 1, in accordance with the rightward direction of movement 508 of gesture 506. In FIG. 5C, one character to the right (in accordance with the direction of movement 508) from position 1 places insertion marker 504 at a position between "b" and "a" in the word "battle-field," in the same sentence as position 1. For ease of explanation, the position of insertion marker 504 relative to text 502-1 as shown in FIG. 5C (i.e., between "b" and "a" in the word "battle-field" in the sentence "We are met on a great battle-field of that war.") is hereinafter referred to as "position 2".
In some implementations, the predefined condition includes a distance moved by the contact 506-A during the movement 508 within a predefined initial time (e.g., 0.05, 0.10, 0.15, 0.20, or 0.25 seconds) from the detection of the contact 506-A on the touch screen 112. As described above with reference to fig. 5B, if the distance moved is less than a predefined initial movement threshold (e.g., a movement of 25, 50, or 75 points, where 1 point = 1/72 inches), document 500 is panned. As described above with reference to fig. 5C, if the distance moved is greater than the predefined initial movement threshold, the insertion marker 504 is moved one character to position 2.
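The alternative initial-movement condition can be sketched as measuring how far the contact travels within the initial time window. The sample format and the chosen values below are assumptions; the document gives the ranges they are drawn from:

```python
PREDEFINED_INITIAL_TIME = 0.15     # seconds (example: 0.05 to 0.25)
INITIAL_MOVEMENT_THRESHOLD = 50.0  # points (example: 25, 50, or 75; 1 point = 1/72 inch)

def initial_movement(samples):
    """samples: list of (t, x, y) tuples starting at contact detection.
    Return the straight-line distance moved within the initial window."""
    t0 = samples[0][0]
    window = [(x, y) for t, x, y in samples if t - t0 <= PREDEFINED_INITIAL_TIME]
    (x0, y0), (x1, y1) = window[0], window[-1]
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5

def action_for_contact(samples):
    """Pan the document for small initial movement; otherwise move the
    insertion marker one character."""
    if initial_movement(samples) < INITIAL_MOVEMENT_THRESHOLD:
        return "translate document"
    return "move insertion marker one character"
```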
In some implementations, the condition associated with moving insertion marker 504 to position 2 in one character further includes a condition that the detected gesture is a single-finger gesture (which is satisfied by gesture 506). That is, if the initial velocity or initial movement condition is satisfied and the gesture 506 is a single-finger gesture, the insertion marker 504 is moved by one character. In some implementations, the conditions associated with translating the document 500 vary with respect to whether the detected gesture is a single-finger or multi-finger gesture.
Returning to FIG. 5C, when insertion marker 504 is at position 2, gesture 514 is detected on touch screen 112. Gesture 514 includes finger contact 514-A and movement 516 of contact 514-A. Movement 516 is vertical or substantially vertical (e.g., movement 516 within 10, 20, or 30 degrees of true vertical) and downward.
The action performed by device 100 in response to detecting gesture 514 depends on whether gesture 514 is determined to satisfy one or more particular predefined conditions. In some implementations, the one or more predefined conditions include a condition on the initial velocity of gesture 514. For example, if the initial velocity of gesture 514 is less than a predefined threshold velocity, document 500 (including text 502) is translated (e.g., panned or scrolled) on touch screen 112 in accordance with the direction of movement 516 of gesture 514. For example, as shown in FIG. 5D, document 500 (including text 502-1) is scrolled 520 downward in accordance with the direction of movement 516 of gesture 514. Insertion marker 504 moves down along with text 502-1, maintaining its position at position 2. In some embodiments, the predefined threshold velocity is 250, 333, or 500 points per second.
As shown in FIG. 5E, if the initial velocity of gesture 514 is greater than the predefined threshold velocity, insertion marker 504 moves one vertically adjacent line in text 502-1 from position 2, in accordance with the direction of movement 516 of gesture 514. In FIG. 5E, insertion marker 504 is placed at a position between "a" and "t" in the word "that" in the partial sentence "come to dedicate a portion of that field, as a final resting place for," one adjacent line down (in accordance with the direction of movement 516) from the line of text containing position 2. For ease of explanation, the position of insertion marker 504 relative to text 502-1 as shown in FIG. 5E (i.e., between "a" and "t" in the word "that" in the partial sentence "come to dedicate a portion of that field, as a final resting place for") is hereinafter referred to as "position 3".
In some embodiments, the movement of insertion marker 504 by one adjacent line is vertical or substantially vertical. In some embodiments, the position to which insertion marker 504 is moved from position 2 is the position in the adjacent line that is closest to an imaginary vertical line intersecting position 2.
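Choosing the position in the adjacent line closest to a vertical line through the marker can be sketched as a nearest-x search over the candidate insertion positions of that line. The list-of-x-coordinates representation is an assumption of this sketch:

```python
def closest_position(marker_x, candidate_xs):
    """candidate_xs: x-coordinates of the insertion positions (character
    boundaries) in the adjacent line.  Return the index of the candidate
    closest to the vertical line through marker_x."""
    return min(range(len(candidate_xs)),
               key=lambda i: abs(candidate_xs[i] - marker_x))
```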
In some other implementations, the predefined condition includes a distance moved by the contact 514-A during the movement 516 within a predefined initial time (e.g., 0.05, 0.10, 0.15, 0.20, or 0.25 seconds) from the detection of the contact 514-A on the touch screen 112. As described above with reference to fig. 5D, if the distance moved is less than a predefined initial movement threshold (e.g., a movement of 25, 50, or 75 points, where 1 point = 1/72 inches), document 500 is panned. As described above with reference to fig. 5E, if the distance moved is greater than the predefined initial movement threshold, the insertion marker 504 is moved one adjacent row to position 3.
In some embodiments, the condition associated with moving insertion marker 504 to position 3 in one adjacent row further includes a condition that the detected gesture is a single-finger gesture (which is satisfied by gesture 514). That is, if the initial velocity or initial movement condition is met and the gesture 514 is a single-finger gesture, the insertion marker 504 is moved one adjacent row. In some implementations, the conditions associated with translating the document 500 vary with respect to whether the detected gesture is a single-finger or multi-finger gesture.
FIG. 5F illustrates a gesture 522 detected on touch screen 112. Gesture 522 is a two finger gesture; gesture 522 includes finger contacts 522-A and 522-B and movement 524 of finger contacts 522-A and 522-B. The movement 524 is horizontal or substantially horizontal and to the right.
As shown in FIG. 5G, in response to detecting gesture 522, insertion marker 504 moves one word to the right (i.e., in accordance with the direction of movement 524 of gesture 522) from position 1, to a position just before the word "of" in the sentence "We are met on a great battle-field of that war." For ease of explanation, this new position is hereinafter referred to as "position 4".
In some implementations, moving "one word" from a current position in the current word (e.g., a position where the current word begins after a space and before a first character in the current word; a position between middle characters of the current word; or a position where the current word ends after a last character in the current word and before a space) includes moving to the beginning of the next word after the current word (e.g., for a horizontal right gesture). In some implementations, moving "one word" from the current position in the current word includes (a) moving to the beginning of the current word if the current position is in the middle of the current word or at the end of the current word, and (b) moving to the beginning of the word immediately before the current word if the current position is at the beginning of the current word (e.g., for a left horizontal gesture). In some implementations, moving "one word" from the current position in the current word includes moving to the beginning of the word immediately before the current word (e.g., for a left horizontal gesture) if the current position is at the beginning, middle, or end of the current word.
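One reading of the word-movement rules above can be sketched with word-start offsets. This implements the move-to-next-word-start behavior for rightward movement and the beginning-of-current-word / beginning-of-previous-word behavior for leftward movement; it is an illustrative interpretation of those variants, not the claimed method:

```python
import re

def word_starts(text):
    """Offsets at which a word begins."""
    return [m.start() for m in re.finditer(r"\b\w", text)]

def move_one_word_right(text, pos):
    """Move to the beginning of the next word after the current position."""
    later = [s for s in word_starts(text) if s > pos]
    return later[0] if later else len(text)

def move_one_word_left(text, pos):
    """Move to the beginning of the current word, or to the beginning of
    the immediately preceding word if already at a word beginning."""
    earlier = [s for s in word_starts(text) if s < pos]
    return earlier[-1] if earlier else 0
```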
In some implementations, moving the insertion marker 504 one word in response to detecting the gesture 522 is in response to determining that the gesture 522 satisfies one or more particular predefined conditions. In some implementations, the one or more predefined conditions include a condition that the detected gesture is a multi-finger gesture (e.g., a two-finger or three-finger gesture). In some implementations, the multi-finger gesture condition requires that the detected gesture be a two-finger gesture (which is satisfied by gesture 522).
FIG. 5H illustrates a gesture 526 detected on touch screen 112. Gesture 526 is a three finger gesture; the gesture 526 includes finger contacts 526-A, 526-B, and 526-C, and a movement 528 of the finger contacts 526-A through 526-C. The movement 528 is horizontal or substantially horizontal and is to the right.
In some embodiments, as shown in FIG. 5I, in response to detecting the gesture 526, the insertion marker 504 moves from position 1 to the end of the line of text at position 1 according to the direction of movement 528 of the gesture 526. For ease of explanation, this new position is hereinafter referred to as "position 5".
In some embodiments, as shown in FIG. 5J, in response to detecting gesture 526, insertion marker 504 moves from position 1 to the beginning of the sentence "We have come to dedicate a portion of that field," which follows the sentence "We are met on a great battle-field of that war." containing position 1. The movement of marker 504 is in accordance with the direction of movement 528 of gesture 526. For ease of explanation, the new position of insertion marker 504 shown in FIG. 5J is hereinafter referred to as "position 6".
In some implementations, moving the insertion marker 504 to the end of the row (or, in some implementations, to the beginning of the next sentence) in response to detecting the gesture 526 is in response to determining that the gesture 526 satisfies one or more particular predefined conditions. In some implementations, the one or more predefined conditions include a condition that the detected gesture is a multi-finger gesture (e.g., a two-finger or three-finger gesture). In some implementations, the multi-finger gesture condition requires that the detected gesture be a three-finger gesture (which is satisfied by gesture 526).
FIG. 5K illustrates text 502-1 displayed on touch screen 112, with a selection range 530 covering a portion of text 502-1. Selection range 530 selects the sentence "We are met on a great battle-field of that war.", starting just before the letter "W" in the word "We" and ending just after the period that ends the sentence. As is the case in FIG. 5K, in some embodiments, insertion marker 504 is not displayed while selection range 530 is displayed.
FIG. 5K also illustrates a gesture 532 detected on touch screen 112. Gesture 532 includes contact 532-A and movement 534 of contact 532-A. The movement 534 is horizontal or substantially horizontal and is to the right.
As shown in FIG. 5L, in response to detecting gesture 532, insertion marker 504 is placed at a position corresponding to the end of selection range 530 (in accordance with the direction of movement 534 of gesture 532), just after the period at the end of the sentence "We are met on a great battle-field of that war." This position is hereinafter referred to as "position 7". Also in response to detecting gesture 532, the text selected by selection range 530 is deselected (and selection range 530 ceases to be displayed).
In some implementations, placing the insertion marker 504 at the end of the selection range 530 in response to detecting the gesture 532 is in response to determining that the gesture 532 satisfies one or more particular predefined conditions. In some implementations, the one or more predefined conditions include a condition that the detected gesture is a single-finger gesture (which is satisfied by gesture 532).
FIG. 5M illustrates a gesture 536 detected on the touch screen 112 when the insertion marker 504 is displayed at position 2 in the text 502-1. Gesture 536 includes contacts 536-A and 536-B and movement 538 of contacts 536-A and 536-B. The movement 538 is vertical or substantially vertical and is downward.
As shown in FIG. 5N, in response to detecting gesture 536, insertion marker 504 is placed at a position corresponding to the beginning of the paragraph following the paragraph containing position 2. The new position, hereinafter referred to as "position 8", is downward from position 2 in text 502-1; insertion marker 504 moves from position 2 to position 8 in accordance with the downward direction of movement 538.
In some implementations, moving the insertion marker 504 from position 2 to position 8 in response to detecting the gesture 536 is in response to determining that the gesture 536 satisfies one or more particular predefined conditions. In some implementations, the one or more predefined conditions include a condition that the detected gesture is a multi-finger gesture. In some implementations, the condition is that the detected gesture is a two-finger gesture (which is satisfied by gesture 536).
FIG. 5O illustrates a gesture 540 detected on touch screen 112 while insertion marker 504 is displayed at position 2 in text 502-1. Gesture 540 is a three-finger gesture; it includes contacts 540-A, 540-B, and 540-C, and movement 542 of contacts 540-A through 540-C. Movement 542 is vertical or substantially vertical and is downward.
As shown in FIG. 5P, in response to detecting gesture 540, insertion marker 504 is placed at a position corresponding to the beginning of text 502-2, in the page following the page that contains text 502-1 and position 2. The new position, hereinafter referred to as "position 9", is after position 2 in text 502; position 9 is downward from position 2 in text 502-1. Insertion marker 504 moves from position 2 to position 9 in accordance with the downward direction of movement 542.
In some implementations, moving the insertion marker 504 from position 2 to position 9 in response to detecting the gesture 540 is in response to determining that the gesture 540 satisfies one or more particular predefined conditions. In some implementations, the one or more predefined conditions include a condition that the detected gesture is a multi-finger gesture. In some implementations, the condition is that the detected gesture is a three-finger gesture (which is satisfied by gesture 540).
It should be appreciated that the gestures described above with reference to fig. 5A-5P may be performed in any suitable sequence and repeated for moving the insertion marker 504 or translating the document by any desired amount.
FIGS. 6A-6F are flow diagrams illustrating a method 600 of navigating and editing text with a touch-sensitive surface, in accordance with some embodiments. Method 600 is performed at an electronic device (e.g., device 300 of FIG. 3 or portable multifunction device 100 of FIG. 1) with a display and a touch-sensitive surface. In some implementations, the display is a touch screen display and the touch-sensitive surface is on the display. In some implementations, the display is separate from the touch-sensitive surface. Some operations in method 600 may be combined and/or the order of some operations may be changed.
As described below, method 600 provides an intuitive way to place an insertion marker within a document. A fast, imprecise finger swipe gesture may be used to move the insertion marker by exactly the desired amount, whereas a slower, deliberate gesture may be used to navigate (e.g., scroll or pan) the document. The method reduces the cognitive burden on a user when navigating and editing text with a touch-sensitive surface, thereby creating a more efficient human-machine interface. For battery-operated electronic devices, enabling a user to place an insertion marker within a document faster and more efficiently conserves power and increases the time between battery charges.
The device displays text of an electronic document on a display (602). For example, in FIG. 5A, text 502-1 of electronic document 500 is displayed on touch screen 112. In some implementations, the display is a touch screen display and the touch-sensitive surface is on the display (604).
The device displays an insertion marker (e.g., an I-beam, underline, rectangle, or other text cursor) at a first location in text of the electronic document (e.g., in an editing mode of the electronic document) (606). For example, in FIG. 5A, insertion marker 504 is shown at position 1 in text 502-1.
The device detects a first horizontal (or substantially horizontal, e.g., within 10, 20, or 30 degrees of horizontal) gesture on the touch-sensitive surface (608). For example, in FIG. 5A, gesture 506 is detected on touch screen 112. Gesture 506 has movement 508 that is horizontal.
In response to determining (612) that the first horizontal gesture satisfies a first set of one or more predefined conditions, the device translates (e.g., pans or scrolls) the electronic document on the display in accordance with the direction of the first horizontal gesture (614) and maintains (616) the first location of the insertion marker in the text. For example, if device 100 determines that gesture 506 satisfies the first set of one or more predefined conditions, then in response to that determination, document 500, including text 502-1, is translated in direction 512 (which is in accordance with the direction of movement 508), as shown in FIG. 5B. As shown in FIG. 5B, insertion marker 504 remains at position 1 in text 502-1 during the translation of document 500.
In response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions that is different from the first set of one or more predefined conditions, the device moves the insertion marker from the first location to a second location in the text in one character of the text according to the direction of the first horizontal gesture (622). For example, if device 100 determines that gesture 506 satisfies a second set of one or more predefined conditions that is different from the first set of predefined conditions, then in response to the determination, insertion marker 504 moves from position 1 in fig. 5A to position 2 in fig. 5C. Position 2 is one character in text 502-1 that is forward from position 1 according to the direction of movement 508 (e.g., the direction of movement 508 points in a forward direction for text 502-1).
In some implementations, the first set of one or more predefined conditions includes that the initial velocity of the first horizontal gesture is less than a predefined threshold velocity (e.g., 250, 333, or 500 points/second) (618) and the second set of one or more predefined conditions includes that the initial velocity of the first horizontal gesture is greater than the predefined threshold velocity (624). For example, device 100 detects an initial velocity of gesture 506. If the initial velocity of gesture 506 is less than the predefined threshold velocity, document 500 is translated and insertion marker 504 remains at position 1, as shown in FIG. 5B. If the initial velocity of gesture 506 is greater than the predefined threshold velocity, then insertion marker 504 is moved to position 2 as shown in FIG. 5C.
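The velocity-based discrimination above can be sketched as a small classifier (a minimal illustration with hypothetical names, not the device's actual implementation; 500 points/second is one of the example threshold values given in the text):

```python
# Hypothetical sketch of velocity-based gesture discrimination.
# 500 points/second is one of the example threshold values above.
THRESHOLD_VELOCITY = 500  # points/second

def classify_horizontal_gesture(initial_velocity):
    """Classify a single-finger horizontal gesture by its initial velocity."""
    if initial_velocity < THRESHOLD_VELOCITY:
        return "translate_document"         # first set of conditions (618)
    return "move_marker_one_character"      # second set of conditions (624)
```

A slow drag therefore scrolls the document, while a quick flick nudges the insertion marker by one character.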
In some implementations, the first horizontal gesture includes a finger contact and movement of the finger contact (610), the first set of one or more predefined conditions includes: within a predefined initial time (e.g., 0.05, 0.10, 0.15, 0.20, or 0.25 seconds) from detection of the finger contact, an initial movement of the finger contact (e.g., a movement of 25, 50, or 75 points, where 1 point = 1/72 inches) is less than a predefined initial movement threshold (620), and the second set of one or more predefined conditions includes: within a predefined initial time from detecting the finger contact, the initial movement of the finger contact is greater than a predefined initial movement threshold (626). For example, gesture 506 includes contact 506-A and movement 508 of contact 506-A. If the distance that the contact 506-A has moved within the predefined initial time since the contact 506-A was detected is less than the predefined initial movement threshold, the document 500 is translated and the insertion marker 504 remains at position 1, as shown in FIG. 5B. If the distance that the contact 506-A has moved within the predefined initial time since the contact 506-A was detected is greater than the predefined initial movement threshold, the insertion marker 504 is moved to position 2 as shown in FIG. 5C.
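The initial-movement test can be sketched similarly (hypothetical names; the 0.15-second window and 50-point threshold are example values from the text, and the samples are assumed to be cumulative distances reported by the touch subsystem):

```python
# Hypothetical sketch of the initial-movement condition. The window and
# threshold below are example values from the text (not fixed constants).
INITIAL_TIME = 0.15        # seconds after the contact is first detected
MOVEMENT_THRESHOLD = 50.0  # points (1 point = 1/72 inch)

def classify_by_initial_movement(samples):
    """samples: list of (t, distance) pairs, where t is seconds since the
    finger contact was detected and distance is cumulative movement in points."""
    moved = max((d for t, d in samples if t <= INITIAL_TIME), default=0.0)
    if moved < MOVEMENT_THRESHOLD:
        return "translate_document"         # first set of conditions (620)
    return "move_marker_one_character"      # second set of conditions (626)
```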
In some implementations, the second set of one or more predefined conditions includes that the first horizontal gesture is a single-finger gesture (628). For example, in FIG. 5C, insertion marker 504 moves from position 1 to position 2 in response to single-finger gesture 506 (i.e., a gesture with one finger contact 506-A). Thus, in some implementations, a fast horizontal swipe gesture with a single finger moves the insertion marker one character in the direction of the gesture, whereas a slower horizontal gesture with a single finger translates the document in the direction of the gesture. In some implementations, if a horizontally moving multi-finger gesture is detected instead of gesture 506, other operations may be performed, or gesture 506 may be ignored.
In some implementations, while the insertion marker is displayed at the second location in the text of the electronic document (630), the device detects a vertical (or substantially vertical, e.g., within 10, 20, or 30 degrees of vertical) gesture on the touch-sensitive surface (632). For example, in FIG. 5C, when insertion marker 504 is displayed at position 2, gesture 514 is detected on touch screen 112. Gesture 514 includes downward vertical movement 516.
In response to determining that the vertical gesture satisfies the third set of one or more predefined conditions (636), the device translates (e.g., scrolls) the electronic document on the display according to the direction of the vertical gesture (638) and maintains the second position of the insertion marker in the text (640). For example, if device 100 determines that gesture 514 satisfies the third set of one or more predefined conditions, then in response to the determination, as shown in FIG. 5D, document 500, which includes text 502-1, scrolls in direction 520 (in accordance with the direction of movement 516). During the scrolling of document 500, insertion marker 504 remains at position 2 in text 502-1, as shown in FIG. 5D.
In response to determining that the vertical gesture satisfies a fourth set of one or more predefined conditions that is different from the third set of one or more predefined conditions, the device moves the insertion marker from the line in the text containing the second location to a vertically adjacent line in the text according to the direction of the vertical gesture (646). For example, if device 100 determines that gesture 514 satisfies a fourth set of one or more predefined conditions that is different from the third set of predefined conditions, then insertion marker 504 moves from position 2 in FIG. 5C to position 3 in FIG. 5E. Position 3 is one line down in text 502-1 from position 2, in accordance with the downward direction of movement 516.
In some implementations, the third set of one or more predefined conditions includes that the initial velocity of the vertical gesture is less than a predefined threshold velocity (e.g., 250, 333, or 500 points/second) (642) and the fourth set of one or more predefined conditions includes that the initial velocity of the vertical gesture is greater than the predefined threshold velocity (648). For example, device 100 detects an initial velocity of gesture 514. If the initial velocity of gesture 514 is less than the predefined threshold velocity, document 500 is scrolled and insertion marker 504 remains at position 2, as shown in FIG. 5D. If the initial velocity of gesture 514 is greater than the predefined threshold velocity, insertion marker 504 is moved to position 3, as shown in FIG. 5E.
In some embodiments, the vertical gesture includes a finger contact and movement of the finger contact (634), the third set of one or more predefined conditions includes an initial movement of the finger contact (e.g., movement of 25, 50, or 75 points, where 1 point = 1/72 inch) within a predefined initial time (e.g., 0.05, 0.10, 0.15, 0.20, or 0.25 seconds) since the finger contact was detected being less than a predefined initial movement threshold (644), and the fourth set of one or more predefined conditions includes an initial movement of the finger contact within the predefined initial time since the finger contact was detected being greater than the predefined initial movement threshold (650). For example, gesture 514 (FIG. 5C) includes contact 514-A and movement 516 of contact 514-A. If the distance that contact 514-A has moved within the predefined initial time since contact 514-A was detected is less than the predefined initial movement threshold, document 500 is scrolled and insertion marker 504 remains at position 2, as shown in FIG. 5D. If the distance that contact 514-A has moved within the predefined initial time since contact 514-A was detected is greater than the predefined initial movement threshold, insertion marker 504 is moved to position 3, as shown in FIG. 5E.
In some implementations, the fourth set of one or more predefined conditions includes that the vertical gesture is a single-finger gesture (652). For example, in FIG. 5E, insertion marker 504 moves from position 2 to position 3 in response to single-finger gesture 514 (i.e., a gesture with one finger contact 514-A). Thus, in some implementations, a fast vertical swipe gesture with a single finger moves the insertion marker one line in the direction of the gesture, whereas a slower vertical gesture with a single finger translates the document in the direction of the gesture. In some implementations, if a vertically moving multi-finger gesture is detected instead of gesture 514, other operations may be performed, or gesture 514 may be ignored.
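The line-to-adjacent-line move of the single-finger vertical swipe can be sketched over a plain-text buffer (a simplified illustration with hypothetical function names; an actual device would operate on rendered layout lines rather than newline-delimited text):

```python
def move_marker_one_line(text, index, direction):
    """Move a character index one line up (direction=-1) or down (+1),
    preserving the column where possible. Hypothetical sketch: lines
    here are newline-delimited, not rendered layout lines."""
    lines = text.split("\n")
    # Locate the (row, column) of the current index.
    row, col, count = 0, 0, 0
    for r, line in enumerate(lines):
        if count + len(line) >= index:
            row, col = r, index - count
            break
        count += len(line) + 1  # +1 for the newline character
    # Clamp to a vertically adjacent line, then clamp the column.
    new_row = min(max(row + direction, 0), len(lines) - 1)
    new_col = min(col, len(lines[new_row]))
    return sum(len(l) + 1 for l in lines[:new_row]) + new_col
```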
In some implementations, in response to determining that the first horizontal gesture satisfies a fifth set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, the device moves the insertion marker by one word in the text from the first location, according to the direction of the first horizontal gesture (662). For example, in FIG. 5F, gesture 522 is detected instead of gesture 506 (FIG. 5A). Gesture 522 includes horizontal movement 524. If device 100 determines that gesture 522 satisfies a fifth set of one or more predefined conditions that is different from the first set or the second set of predefined conditions, then in response to the determination, insertion marker 504 moves from position 1 in FIG. 5F to position 4 in FIG. 5G according to the direction of movement 524 of gesture 522. Position 4 is at the beginning of the next word (in the direction of movement 524 of gesture 522) after the word containing position 1.
In some implementations, the fifth set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture (e.g., a two-finger or three-finger swipe gesture) (664). For example, in FIGS. 5F-5G, the insertion marker 504 moves from position 1 to position 4 (instead of position 2) in response to the multi-finger gesture 522. In some implementations, if the horizontal gesture is a two-finger gesture, the insertion marker 504 moves by one word, and if the horizontal gesture is a three-finger gesture, the insertion marker 504 moves by a different amount (other than one character or one word). In some implementations, in addition to the gesture being a multi-finger gesture, the fifth set of predefined conditions includes the initial velocity of the multi-finger gesture being greater than a predefined threshold velocity (e.g., 250, 333, or 500 points/second).
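A word-level move of the kind attributed to the multi-finger horizontal gesture might be sketched as follows (hypothetical names; word boundaries are approximated with a regular expression, whereas a real implementation would use the platform's text tokenizer):

```python
import re

def move_marker_one_word(text, index, direction):
    """Move index to the beginning of the next word (direction=+1) or the
    previous word (-1). Hypothetical sketch; word starts are approximated
    as word characters preceded by a word boundary."""
    starts = [m.start() for m in re.finditer(r"\b\w", text)]
    if direction > 0:
        later = [s for s in starts if s > index]
        return later[0] if later else len(text)
    earlier = [s for s in starts if s < index]
    return earlier[-1] if earlier else 0
```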
In some implementations, in response to determining that the first horizontal gesture satisfies a sixth set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, the device moves the insertion marker to the beginning or end of the line of text containing the first location according to the direction of the first horizontal gesture (666). For example, in FIG. 5H, gesture 526 is detected instead of gesture 506 (FIG. 5A). Gesture 526 includes horizontal movement 528. If device 100 determines that gesture 526 satisfies a sixth set of one or more predefined conditions that is different from the first set or the second set of predefined conditions, then in response to the determination, insertion marker 504 moves from position 1 in FIG. 5H to position 5 in FIG. 5I according to the direction of movement 528 of gesture 526. Position 5 is at the end of the line of text containing position 1.
In some embodiments, the insertion marker is moved to the beginning of the next line of text after the line of text containing the first position, rather than to the end of the line of text containing the first position. For example, the insertion marker may be moved to the beginning of the subsequent line after the line at position 1, rather than being moved to position 5.
In some implementations, the sixth set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture (e.g., a two-finger or three-finger swipe gesture) (668). For example, in FIGS. 5H-5I, the insertion marker 504 moves from position 1 to position 5 (instead of position 2) in response to the multi-finger gesture 526. In some implementations, the insertion marker 504 is moved to the beginning or end of the line of text containing the first location if the gesture is a three-finger gesture, and is moved by a different amount (other than one character or to the beginning/end of the line) if the gesture is a two-finger gesture. In some implementations, in addition to the gesture being a multi-finger gesture, the sixth set of predefined conditions includes the initial velocity of the multi-finger gesture being greater than a predefined threshold velocity (e.g., 250, 333, or 500 points/second).
In some implementations, in response to determining that the first horizontal gesture satisfies a seventh set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, the device moves the insertion marker to the beginning of the sentence containing the first location or the beginning of the next sentence after the sentence containing the first location, according to the direction of the first horizontal gesture (670). For example, in FIG. 5H, gesture 526 is detected instead of gesture 506 (FIG. 5A). Gesture 526 includes horizontal movement 528. If device 100 determines that gesture 526 satisfies a seventh set of one or more predefined conditions that is different from the first set or the second set of predefined conditions, then in response to the determination, insertion marker 504 moves from position 1 in FIG. 5H to position 6 in FIG. 5J according to the direction of movement 528 of gesture 526. Position 6 is located at the beginning of the next sentence after the sentence containing position 1.
In some embodiments, the insertion marker is moved to the end of the sentence containing the first location, rather than to the beginning of the next sentence after the sentence containing the first location. For example, the insertion marker may be moved to the end of the sentence containing position 1, rather than to position 6.
In some implementations, the seventh set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture (e.g., a two-finger or three-finger swipe gesture) (672). For example, in FIGS. 5H-5J, the insertion marker 504 moves from position 1 to position 6 (instead of position 2) in response to the multi-finger gesture 526. In some embodiments, if the gesture is a three-finger gesture, the insertion marker 504 is moved to the beginning of the sentence containing the first location or the beginning of the next sentence after that sentence, and if the gesture is a two-finger gesture, it is moved by a different amount (other than one character or to the beginning of the current/next sentence). In some implementations, in addition to the gesture being a multi-finger gesture, the seventh set of predefined conditions includes the initial velocity of the multi-finger gesture being greater than a predefined threshold velocity (e.g., 250, 333, or 500 points/second).
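Taken together, the condition sets above suggest a dispatch from gesture axis and finger count to a movement granularity. The mapping below is one possible assignment assumed purely for illustration; as the embodiments note, the two- and three-finger behaviors may be swapped or varied:

```python
# One possible (axis, finger count) -> granularity mapping, assumed for
# illustration only; the embodiments permit several different assignments.
GRANULARITY = {
    ("horizontal", 1): "character",
    ("horizontal", 2): "word",
    ("horizontal", 3): "sentence",
    ("vertical", 1): "line",
    ("vertical", 2): "paragraph",
    ("vertical", 3): "page",
}

def movement_granularity(axis, fingers):
    """Return the unit the insertion marker moves by, or "none" if the
    gesture does not match any predefined condition set."""
    return GRANULARITY.get((axis, fingers), "none")
```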
In some implementations, while a selected text range is displayed in the text of the electronic document (674), the device detects a second horizontal (or substantially horizontal, e.g., within 10, 20, or 30 degrees of horizontal) gesture on the touch-sensitive surface (676). For example, in FIG. 5K, while selection range 530, which covers selected text in text 502-1, is displayed on touch screen 112, gesture 532 is detected on touch screen 112. Gesture 532 includes horizontal movement 534.
In response to determining that the second horizontal gesture satisfies an eighth set of one or more predefined conditions (678), the device places an insertion marker at the beginning or end of the selected text range according to the direction of the second horizontal gesture (680), and deselects the selected text range (682). For example, as shown in FIG. 5L, in response to determining that gesture 532 satisfies the eighth set of one or more predefined conditions, insertion marker 504 is placed at position 7, which is at the end of selection range 530, and the text selected by selection range 530 is deselected (i.e., selection range 530 ceases to be displayed).
In some implementations, the eighth set of one or more predefined conditions includes that the second horizontal gesture is a single-finger gesture (684). For example, in response to detecting gesture 532, insertion marker 504 is placed at position 7 and the text selected by selection range 530 is deselected. In some implementations, if the second horizontal gesture is a multi-finger gesture, the device instead expands the selected text range according to the direction of the second horizontal gesture (not shown). For example, a rightward two-finger swipe expands the end of the range by one character (or one word), while a leftward two-finger swipe contracts the end of the range by one character (or one word).
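Collapsing the selection to one of its ends can be sketched as follows (hypothetical names; direction is the sign of the gesture's horizontal movement):

```python
def collapse_selection(sel_start, sel_end, direction):
    """Collapse a selected range to a single insertion point at its end
    (rightward swipe, direction=+1) or start (leftward swipe, -1).
    Hypothetical sketch of the behavior described above."""
    marker = sel_end if direction > 0 else sel_start
    selection = None  # the range ceases to be displayed (deselected)
    return marker, selection
```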
In some implementations, in response to determining that the vertical gesture satisfies a ninth set of one or more predefined conditions that is different from the third set of one or more predefined conditions and the fourth set of one or more predefined conditions, the device moves the insertion marker from the second location to the beginning of the paragraph containing the second location or the beginning of the next paragraph after the paragraph containing the second location according to the direction of the vertical gesture (654). For example, in FIG. 5M, gesture 536 is detected instead of gesture 514 (FIG. 5C). Gesture 536 includes vertical movement 538. If device 100 determines that gesture 536 satisfies a ninth set of one or more predefined conditions that is different from the third set or the fourth set of predefined conditions, then in response to the determination, insertion marker 504 moves from position 2 in FIG. 5M to position 8 in FIG. 5N. Position 8 is located at the beginning of the next paragraph after the paragraph containing position 2, which is in a downward direction according to movement 538 of gesture 536.
In some embodiments, the insertion marker is moved to the end of the paragraph containing the second location, rather than to the beginning of the next paragraph after the paragraph containing the second location.
In some implementations, the ninth set of one or more predefined conditions includes that the vertical gesture is a multi-finger gesture (e.g., a two-finger or three-finger swipe gesture) (656). For example, in FIGS. 5M-5N, insertion marker 504 moves from position 2 to position 8 (instead of position 3) in response to multi-finger gesture 536. In some implementations, the insertion marker 504 is moved to the beginning of the current or next paragraph if the gesture is a two-finger gesture, and is moved by a different amount if the gesture is a three-finger gesture. In some implementations, the ninth set of predefined conditions also includes that the initial velocity of the multi-finger gesture is greater than a predefined threshold velocity (e.g., 250, 333, or 500 points/second).
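The paragraph-level move can be sketched over plain text in which paragraphs are separated by blank lines (an assumption made for illustration; a real document model would expose paragraph boundaries directly):

```python
def next_paragraph_start(text, index):
    """Return the index of the first character of the paragraph after the
    one containing index, or the end of the text if there is none.
    Hypothetical sketch; paragraphs are assumed separated by blank lines."""
    sep = text.find("\n\n", index)
    if sep == -1:
        return len(text)
    start = sep + 2
    # Skip any additional blank lines between paragraphs.
    while start < len(text) and text[start] == "\n":
        start += 1
    return start
```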
In some embodiments, in response to determining that the vertical gesture satisfies a tenth set of one or more predefined conditions that is different from the third set of one or more predefined conditions and the fourth set of one or more predefined conditions, the insertion marker is moved from the second location to the beginning of the page containing the second location or the beginning of the next page after the page containing the second location according to the direction of the vertical gesture (658). For example, in FIG. 5O, gesture 540 is detected instead of gesture 514 (FIG. 5C). Gesture 540 includes vertical movement 542. If device 100 determines that gesture 540 satisfies a tenth set of one or more predefined conditions that is different from the third set or the fourth set of predefined conditions, then in response to the determination, insertion marker 504 moves from position 2 in FIG. 5O to position 9 in FIG. 5P. Position 9 is at the beginning of the next page (containing text 502-2) after the page containing position 2, which is in a downward direction (corresponding to a forward direction in text 502) according to movement 542 of gesture 540. In some embodiments, the insertion marker is moved to the end of the page containing the second position, rather than to the beginning of the next page after the page containing the second position.
In some implementations, the tenth set of one or more predefined conditions includes that the vertical gesture is a multi-finger gesture (e.g., a two-finger or three-finger swipe gesture) (660). For example, in FIGS. 5O-5P, insertion marker 504 moves from position 2 to position 9 (instead of position 3) in response to multi-finger gesture 540. In some implementations, the insertion marker 504 is moved to the beginning of the current page or the next page if the gesture is a three-finger gesture, and is moved by a different amount if the gesture is a two-finger gesture. In some implementations, the tenth set of predefined conditions includes that the initial velocity of the multi-finger gesture is greater than a predefined threshold velocity (e.g., 250, 333, or 500 points/second).
FIG. 7 is a functional block diagram of an electronic device 700 configured according to the principles of the invention described above, according to some embodiments. The functional blocks of the device may be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the invention. Those skilled in the art will appreciate that the functional blocks described in FIG. 7 may be combined or separated into sub-blocks to implement the principles of the invention described above. Accordingly, the description herein supports any possible combination, separation, or further definition of the functional blocks described herein.
As shown in FIG. 7, the electronic device 700 includes a display unit 702 configured to display text of an electronic document and to display an insertion marker at a first position in the text of the electronic document; a touch-sensitive surface unit 704 configured to receive a gesture; and a processing unit 706 coupled to the display unit 702 and the touch-sensitive surface unit 704. In some embodiments, the processing unit 706 includes: a detection unit 708, a translation unit 710, a holding unit 712, a moving unit 714, a placing unit 716, and a deselection unit 718.
The processing unit 706 is configured to detect a first horizontal gesture on the touch-sensitive surface unit 704 (e.g., by the detection unit 708); in response to determining that the first horizontal gesture satisfies the first set of one or more predefined conditions, translate the electronic document on the display unit 702 (e.g., by the translation unit 710) according to the direction of the first horizontal gesture and maintain the insertion marker at the first location in the text (e.g., by the holding unit 712); and in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions that is different from the first set of one or more predefined conditions, move the insertion marker from the first location to a second location in the text, by one character in the text, according to the direction of the first horizontal gesture (e.g., by the moving unit 714).
In some implementations, the first set of one or more predefined conditions includes that the initial velocity of the first horizontal gesture is less than a predefined threshold velocity, and the second set of one or more predefined conditions includes that the initial velocity of the first horizontal gesture is greater than the predefined threshold velocity.
In some implementations, the first horizontal gesture includes a finger contact and movement of the finger contact, the first set of one or more predefined conditions includes an initial movement of the finger contact within a predefined initial time since the finger contact was detected being less than a predefined initial movement threshold, and the second set of one or more predefined conditions includes an initial movement of the finger contact within the predefined initial time since the finger contact was detected being greater than the predefined initial movement threshold.
In some implementations, the second set of one or more predefined conditions includes that the first horizontal gesture is a single-finger gesture.
In some embodiments, the processing unit 706 is configured to: while the display unit 702 displays the insertion marker at the second location in the text of the electronic document, detect a vertical gesture on the touch-sensitive surface unit 704 (e.g., by the detection unit 708); in response to determining that the vertical gesture satisfies the third set of one or more predefined conditions, translate the electronic document on the display unit 702 according to the direction of the vertical gesture (e.g., by the translation unit 710) and maintain the second position of the insertion marker in the text (e.g., by the holding unit 712); and in response to determining that the vertical gesture satisfies a fourth set of one or more predefined conditions that is different from the third set of one or more predefined conditions, move the insertion marker from the line in the text containing the second location to a vertically adjacent line in the text according to the direction of the vertical gesture (e.g., by the moving unit 714).
In some implementations, the third set of one or more predefined conditions includes that the initial velocity of the vertical gesture is less than a predefined threshold velocity, and the fourth set of one or more predefined conditions includes that the initial velocity of the vertical gesture is greater than the predefined threshold velocity.
In some implementations, the vertical gesture includes a finger contact and movement of the finger contact, the third set of one or more predefined conditions includes an initial movement of the finger contact within a predefined initial time since the finger contact was detected being less than a predefined initial movement threshold, and the fourth set of one or more predefined conditions includes an initial movement of the finger contact within the predefined initial time since the finger contact was detected being greater than the predefined initial movement threshold.
In some implementations, the fourth set of one or more predefined conditions includes that the vertical gesture is a single-finger gesture.
In some embodiments, the processing unit 706 is configured to: in response to determining that the first horizontal gesture satisfies a fifth set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, the insertion marker is moved (e.g., by the moving unit 714) from the first position by one word in the text according to the direction of the first horizontal gesture.
In some implementations, the fifth set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture.
In some embodiments, the processing unit 706 is configured to: in response to determining that the first horizontal gesture satisfies a sixth set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, the insertion marker is moved to the beginning or end of the line of text containing the first location according to the direction of the first horizontal gesture.
In some implementations, the sixth set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture.
In some embodiments, the processing unit 706 is configured to: in response to determining that the first horizontal gesture satisfies a seventh set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, the insertion marker is moved (e.g., by moving unit 714) to the beginning of the sentence containing the first location or the beginning of the next sentence after the sentence containing the first location, depending on the direction of the first horizontal gesture.
In some implementations, the seventh set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture.
In some embodiments, the processing unit 706 is configured to: while the display unit 702 displays the selected text range in the text of the electronic document, detecting a second horizontal gesture on the touch-sensitive surface unit 704 (e.g., by the detection unit 708); and in response to determining that the second horizontal gesture satisfies the eighth set of one or more predefined conditions, place an insertion marker at the beginning or end of the selected text range according to the direction of the second horizontal gesture (e.g., by the placement unit 716), and deselect the selected text range (e.g., by the deselection unit 718).
In some implementations, the eighth set of one or more predefined conditions includes that the second horizontal gesture is a single-finger gesture.
In some embodiments, the processing unit 706 is configured to: in response to determining that the vertical gesture satisfies a ninth set of one or more predefined conditions that is different from the third set of one or more predefined conditions and the fourth set of one or more predefined conditions, move the insertion marker (e.g., by the moving unit 714) from the second location to the beginning of the paragraph containing the second location or the beginning of the next paragraph after the paragraph containing the second location according to the direction of the vertical gesture.
In some implementations, the ninth set of one or more predefined conditions includes that the vertical gesture is a multi-finger gesture.
In some embodiments, the processing unit 706 is configured to: in response to determining that the vertical gesture satisfies a tenth set of one or more predefined conditions that is different from the third set of one or more predefined conditions and the fourth set of one or more predefined conditions, move the insertion marker from the second location to the beginning of the page containing the second location or the beginning of the next page after the page containing the second location (e.g., by the moving unit 714) according to the direction of the vertical gesture.
In some implementations, the tenth set of one or more predefined conditions includes that the vertical gesture is a multi-finger gesture.
In some implementations, the display unit 702 is a touch-sensitive display unit, and the touch-sensitive surface unit 704 is on the display unit 702.
It should be appreciated that, while the embodiments described above move insertion markers in text, the embodiments may also be used in a similar manner to move, place, or manipulate other types of insertion markers or cursors, such as moving the highlight or focus to a different cell in a spreadsheet application, or changing which shape or object is highlighted or focused in a drawing or presentation application.
The operations in the information processing methods described above may be implemented by running one or more functional modules in an information processing apparatus (such as a general-purpose processor or an application-specific chip). These modules, combinations of these modules, and/or their combination with general-purpose hardware (e.g., as described above with reference to FIGS. 1A and 3) are all included within the scope of the claimed invention.
The operations described above with reference to FIGS. 6A-6F may be implemented by the components depicted in FIGS. 1A-1B. For example, detection operation 608, translation operation 614, maintaining operation 616, and moving operation 622 may be implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 delivers the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186 and determines whether a first contact at a first location on the touch-sensitive surface corresponds to a predefined event or sub-event, such as selection of an object on a user interface. When a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 may utilize or call data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it would be clear to a person having ordinary skill in the art how other processes may be implemented based on the components depicted in FIGS. 1A-1B.
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.
Claims (52)
1. A method, comprising:
at an electronic device with a display and a touch-sensitive surface:
displaying text of an electronic document on the display;
displaying an insertion marker at a first location in the text of the electronic document;
detecting a first horizontal gesture on the touch-sensitive surface;
in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions:
translating the electronic document on the display according to the direction of the first horizontal gesture, and
maintaining the insertion marker at the first location in the text; and
in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
2. The method of claim 1, wherein the first set of one or more predefined conditions includes an initial velocity of the first horizontal gesture being less than a predefined threshold velocity, and the second set of one or more predefined conditions includes the initial velocity of the first horizontal gesture being greater than the predefined threshold velocity.
3. The method of claim 1, wherein:
the first horizontal gesture comprises a finger contact and movement of the finger contact;
the first set of one or more predefined conditions comprises: within a predefined initial time from detecting the finger contact, an initial movement of the finger contact is less than a predefined initial movement threshold; and
the second set of one or more predefined conditions comprises: the initial movement of the finger contact is greater than the predefined initial movement threshold within the predefined initial time from detection of the finger contact.
4. The method of any of claims 1-3, wherein the second set of one or more predefined conditions includes that the first horizontal gesture is a single-finger gesture.
5. The method of any of claims 1 to 4, comprising:
when the insertion marker is displayed at the second location in the text of the electronic document:
detecting a vertical gesture on the touch-sensitive surface;
in response to determining that the vertical gesture satisfies a third set of one or more predefined conditions:
translating the electronic document on the display according to the direction of the vertical gesture, and
maintaining the insertion marker at the second location in the text; and
in response to determining that the vertical gesture satisfies a fourth set of one or more predefined conditions that is different from the third set of one or more predefined conditions, moving the insertion marker from a line in the text that includes the second location to a vertically adjacent line in the text in accordance with the direction of the vertical gesture.
6. The method of claim 5, wherein the third set of one or more predefined conditions includes that an initial velocity of the vertical gesture is less than a predefined threshold velocity, and the fourth set of one or more predefined conditions includes that the initial velocity of the vertical gesture is greater than the predefined threshold velocity.
7. The method of claim 5, wherein:
the vertical gesture includes a finger contact and movement of the finger contact;
the third set of one or more predefined conditions comprises: within a predefined initial time from detecting the finger contact, an initial movement of the finger contact is less than a predefined initial movement threshold; and
the fourth set of one or more predefined conditions comprises: the initial movement of the finger contact is greater than the predefined initial movement threshold within the predefined initial time from detection of the finger contact.
8. The method of any of claims 5-7, wherein the fourth set of one or more predefined conditions includes that the vertical gesture is a single-finger gesture.
9. The method of any of claims 1 to 8, comprising:
in response to determining that the first horizontal gesture satisfies a fifth set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, moving the insertion marker by one word in the text from the first location according to the direction of the first horizontal gesture.
10. The method of claim 9, wherein the fifth set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture.
11. The method of any one of claims 1 to 10, wherein:
in response to determining that the first horizontal gesture satisfies a sixth set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, moving the insertion marker to a beginning or an end of a line of text containing the first location according to the direction of the first horizontal gesture.
12. The method of claim 11, wherein the sixth set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture.
13. The method of any one of claims 1 to 12, wherein:
in response to determining that the first horizontal gesture satisfies a seventh set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, moving the insertion marker to a beginning of a sentence containing the first location or a beginning of a next sentence after the sentence containing the first location, according to the direction of the first horizontal gesture.
14. The method of claim 13, wherein the seventh set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture.
15. The method of any of claims 1 to 14, comprising:
while displaying a selected text range in the text of the electronic document:
detecting a second horizontal gesture on the touch-sensitive surface; and
in response to determining that the second horizontal gesture satisfies an eighth set of one or more predefined conditions:
placing the insertion marker at the beginning or end of the selected text range according to the direction of the second horizontal gesture, and
deselecting the selected text range.
16. The method of claim 15, wherein the eighth set of one or more predefined conditions includes that the second horizontal gesture is a single-finger gesture.
17. The method of claim 5, comprising:
in response to determining that the vertical gesture satisfies a ninth set of one or more predefined conditions that is different from the third set of one or more predefined conditions and the fourth set of one or more predefined conditions, moving the insertion marker from the second location to a beginning of a paragraph containing the second location or to a beginning of a next paragraph after the paragraph containing the second location, according to the direction of the vertical gesture.
18. The method of claim 17, wherein the ninth set of one or more predefined conditions includes that the vertical gesture is a multi-finger gesture.
19. The method of any one of claims 5, 17 and 18, comprising:
in response to determining that the vertical gesture satisfies a tenth set of one or more predefined conditions that is different from the third set of one or more predefined conditions and the fourth set of one or more predefined conditions, moving the insertion marker from the second location to a beginning of a page containing the second location or to a beginning of a next page after the page containing the second location, according to the direction of the vertical gesture.
20. The method of claim 19, wherein the tenth set of one or more predefined conditions includes that the vertical gesture is a multi-finger gesture.
21. The method of any of claims 1-20, wherein the display is a touch screen display and the touch-sensitive surface is on the display.
22. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for:
displaying text of an electronic document on the display;
displaying an insertion marker at a first location in the text of the electronic document;
detecting a first horizontal gesture on the touch-sensitive surface;
in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions:
translating the electronic document on the display according to the direction of the first horizontal gesture, and
maintaining the insertion marker at the first location in the text; and
in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
23. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display and a touch-sensitive surface, cause the device to:
displaying text of an electronic document on the display;
displaying an insertion marker at a first location in the text of the electronic document;
detecting a first horizontal gesture on the touch-sensitive surface;
in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions:
translating the electronic document on the display according to the direction of the first horizontal gesture, and
maintaining the insertion marker at the first location in the text; and
in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions that is different from the first set of one or more predefined conditions, moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
24. A graphical user interface on an electronic device with a display, a touch-sensitive surface, a memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising:
text of the electronic document; and
an insertion marker at a first location in the text of the electronic document;
wherein:
a first horizontal gesture is detected on the touch-sensitive surface;
in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions:
the electronic document is translated on the display according to the direction of the first horizontal gesture, an
The insertion marker is maintained at the first position in the text; and
in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, the insertion marker is moved from the first location to a second location in the text with one character in the text according to the direction of the first horizontal gesture.
25. An electronic device, comprising:
a display;
a touch-sensitive surface;
means for displaying text of an electronic document on the display;
means for displaying an insertion marker at a first location in the text of the electronic document;
means for detecting a first horizontal gesture on the touch-sensitive surface;
means, enabled in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions, comprising:
means for translating the electronic document on the display according to the direction of the first horizontal gesture, and
means for maintaining the insertion marker at the first location in the text; and
means, enabled in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, for moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
26. An information processing apparatus for use in an electronic device with a display and a touch-sensitive surface, comprising:
means for displaying text of an electronic document on the display;
means for displaying an insertion marker at a first location in the text of the electronic document;
means for detecting a first horizontal gesture on the touch-sensitive surface;
means, enabled in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions, comprising:
means for translating the electronic document on the display according to the direction of the first horizontal gesture, and
means for maintaining the insertion marker at the first location in the text; and
means, enabled in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, for moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
27. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-21.
28. A graphical user interface on an electronic device with a display, a touch-sensitive surface, a memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with the methods of any of claims 1-21.
29. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display and a touch-sensitive surface, cause the device to perform the method of any of claims 1-21.
30. An electronic device, comprising:
a display;
a touch-sensitive surface;
means for performing the method of any one of claims 1-21.
31. An information processing apparatus for use in an electronic device with a display and a touch-sensitive surface, comprising:
means for performing the method of any one of claims 1-21.
32. An electronic device, comprising:
a display unit configured to display:
text of the electronic document; and
an insertion marker at a first location in the text of the electronic document; a touch-sensitive surface unit configured to receive a gesture; and
a processing unit coupled to the display unit and the touch-sensitive surface unit, the processing unit configured to:
detecting a first horizontal gesture on the touch-sensitive surface unit;
in response to determining that the first horizontal gesture satisfies a first set of one or more predefined conditions:
translating the electronic document on the display unit according to the direction of the first horizontal gesture, and
maintaining the insertion marker at the first location in the text; and
in response to determining that the first horizontal gesture satisfies a second set of one or more predefined conditions different from the first set of one or more predefined conditions, moving the insertion marker from the first location to a second location in the text by one character in the text according to the direction of the first horizontal gesture.
33. The electronic device of claim 32, wherein the first set of one or more predefined conditions includes that an initial velocity of the first horizontal gesture is less than a predefined threshold velocity, and the second set of one or more predefined conditions includes that the initial velocity of the first horizontal gesture is greater than the predefined threshold velocity.
34. The electronic device of claim 32, wherein:
the first horizontal gesture comprises a finger contact and movement of the finger contact;
the first set of one or more predefined conditions comprises: within a predefined initial time from detecting the finger contact, an initial movement of the finger contact is less than a predefined initial movement threshold; and
the second set of one or more predefined conditions comprises: the initial movement of the finger contact is greater than the predefined initial movement threshold within the predefined initial time from detection of the finger contact.
35. The electronic device of any of claims 32-34, wherein the second set of one or more predefined conditions includes that the first horizontal gesture is a single-finger gesture.
36. The electronic device of any of claims 32-35, wherein the processing unit is configured to:
while displaying the insertion marker at the second location in the text of the electronic document:
detecting a vertical gesture on the touch-sensitive surface unit;
in response to determining that the vertical gesture satisfies a third set of one or more predefined conditions:
translating the electronic document on the display unit according to the direction of the vertical gesture, and
maintaining the insertion marker at the second location in the text; and
in response to determining that the vertical gesture satisfies a fourth set of one or more predefined conditions that is different from the third set of one or more predefined conditions, moving the insertion marker from a line in the text that includes the second location to a vertically adjacent line in the text in accordance with the direction of the vertical gesture.
37. The electronic device of claim 36, wherein the third set of one or more predefined conditions includes that an initial velocity of the vertical gesture is less than a predefined threshold velocity, and the fourth set of one or more predefined conditions includes that the initial velocity of the vertical gesture is greater than the predefined threshold velocity.
38. The electronic device of claim 36, wherein:
the vertical gesture includes a finger contact and movement of the finger contact;
the third set of one or more predefined conditions comprises: within a predefined initial time from detecting the finger contact, an initial movement of the finger contact is less than a predefined initial movement threshold; and
the fourth set of one or more predefined conditions comprises: the initial movement of the finger contact is greater than the predefined initial movement threshold within the predefined initial time from detection of the finger contact.
39. The electronic device of any of claims 36-38, wherein the fourth set of one or more predefined conditions includes that the vertical gesture is a single-finger gesture.
40. The electronic device of any of claims 32-39, wherein the processing unit is configured to:
in response to determining that the first horizontal gesture satisfies a fifth set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, moving the insertion marker by one word in the text from the first location according to the direction of the first horizontal gesture.
41. The electronic device of claim 40, wherein the fifth set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture.
42. The electronic device of any of claims 32-41, wherein the processing unit is configured to:
in response to determining that the first horizontal gesture satisfies a sixth set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, moving the insertion marker to a beginning or an end of a line of text containing the first location according to the direction of the first horizontal gesture.
43. The electronic device of claim 42, wherein the sixth set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture.
44. The electronic device of any of claims 32-43, wherein the processing unit is configured to:
in response to determining that the first horizontal gesture satisfies a seventh set of one or more predefined conditions that is different from the first set of one or more predefined conditions and the second set of one or more predefined conditions, moving the insertion marker to a beginning of a sentence containing the first location or a beginning of a next sentence after the sentence containing the first location, according to the direction of the first horizontal gesture.
45. The electronic device of claim 44, wherein the seventh set of one or more predefined conditions includes that the first horizontal gesture is a multi-finger gesture.
46. The electronic device of any of claims 32-45, wherein the processing unit is configured to:
while displaying a selected text range in the text of the electronic document:
detecting a second horizontal gesture on the touch-sensitive surface unit; and
in response to determining that the second horizontal gesture satisfies an eighth set of one or more predefined conditions:
placing the insertion marker at the beginning or end of the selected text range according to the direction of the second horizontal gesture, and
deselecting the selected text range.
47. The electronic device of claim 46, wherein the eighth set of one or more predefined conditions includes that the second horizontal gesture is a single-finger gesture.
48. The electronic device of claim 36, wherein the processing unit is configured to:
in response to determining that the vertical gesture satisfies a ninth set of one or more predefined conditions that is different from the third set of one or more predefined conditions and the fourth set of one or more predefined conditions, moving the insertion marker from the second location to a beginning of a paragraph containing the second location or to a beginning of a next paragraph after the paragraph containing the second location, according to the direction of the vertical gesture.
49. The electronic device of claim 48, wherein the ninth set of one or more predefined conditions includes that the vertical gesture is a multi-finger gesture.
50. The electronic device of any of claims 36, 48, and 49, wherein the processing unit is configured to:
in response to determining that the vertical gesture satisfies a tenth set of one or more predefined conditions that is different from the third set of one or more predefined conditions and the fourth set of one or more predefined conditions, moving the insertion marker from the second location to a beginning of a page containing the second location or to a beginning of a next page after the page containing the second location, according to the direction of the vertical gesture.
51. The electronic device of claim 50, wherein the tenth set of one or more predefined conditions includes that the vertical gesture is a multi-finger gesture.
52. The electronic device of any of claims 32-51, wherein the display unit is a touchscreen display unit and the touch-sensitive surface unit is on the display unit.
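The gesture-disambiguation behavior recited in the claims above can be sketched as follows. This is an illustrative sketch of the logic in claims 1, 2, 4, 9, and 10 only, not the claimed implementation: the threshold value, function name, and return-value encoding are assumptions made for this example. A slow single-finger horizontal gesture translates (scrolls) the document while the insertion marker stays put; a fast single-finger gesture moves the marker by one character; a multi-finger gesture moves it by one word.

```python
# Assumed threshold, in points per second; the claims recite only that
# a "predefined threshold velocity" exists, not its value.
THRESHOLD_VELOCITY = 300.0

def handle_horizontal_gesture(num_fingers, initial_velocity, direction):
    """Return the action taken for a horizontal gesture.

    direction is +1 (rightward) or -1 (leftward).
    """
    if num_fingers > 1:
        # Fifth set of predefined conditions (claims 9-10):
        # a multi-finger gesture moves the marker by one word.
        return ("move_word", direction)
    if initial_velocity < THRESHOLD_VELOCITY:
        # First set of predefined conditions (claim 2): a slow gesture
        # translates the document; the marker keeps its location.
        return ("translate_document", direction)
    # Second set of predefined conditions (claims 2 and 4): a fast
    # single-finger gesture moves the marker by one character.
    return ("move_character", direction)

print(handle_horizontal_gesture(1, 100.0, +1))  # slow drag: scroll
print(handle_horizontal_gesture(1, 500.0, -1))  # fast flick: one character
print(handle_horizontal_gesture(2, 500.0, +1))  # two fingers: one word
```

Note that the claims also permit disambiguating on initial movement within a predefined time (claim 3) rather than on velocity; that alternative condition is omitted here for brevity.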
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US61/491,321 | 2011-05-30 | ||
| US13/217,747 | 2011-08-25 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1195642A true HK1195642A (en) | 2014-11-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10013161B2 (en) | Devices, methods, and graphical user interfaces for navigating and editing text | |
| CN107992261B (en) | Apparatus, method and graphical user interface for document manipulation | |
| US10346012B2 (en) | Device, method, and graphical user interface for resizing content viewing and text entry interfaces | |
| US9563351B2 (en) | Device, method, and graphical user interface for navigating between document sections | |
| US9645699B2 (en) | Device, method, and graphical user interface for adjusting partially off-screen windows | |
| US9772759B2 (en) | Device, method, and graphical user interface for data input using virtual sliders | |
| US10394441B2 (en) | Device, method, and graphical user interface for controlling display of application windows | |
| US8572481B2 (en) | Device, method, and graphical user interface for displaying additional snippet content | |
| CN103218154B (en) | Apparatus, method and graphical user interface for selecting a view in a three-dimensional map based on gesture input | |
| WO2012015933A1 (en) | Device, method, and graphical user interface for reordering the front-to-back positions of objects | |
| WO2014003876A1 (en) | Device, method, and graphical user interface for displaying a virtual keyboard | |
| HK1195642A (en) | Devices, methods, and graphical user interfaces for navigating and editing text | |
| HK1192630B (en) | Devices, methods, and graphical user interfaces for document manipulation | |
| HK1192630A (en) | Devices, methods, and graphical user interfaces for document manipulation |