CN119816446A - Physical Activity User Interface - Google Patents
- Publication number
- CN119816446A (application CN202380063710.6A)
- Authority
- CN
- China
- Prior art keywords
- depth
- computer system
- user interface
- detecting
- display
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
- B—PERFORMING OPERATIONS; TRANSPORTING
- B63—SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
- B63C—LAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
- B63C11/00—Equipment for dwelling or working underwater; Means for searching for underwater objects
- B63C11/02—Divers' equipment
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
- B—PERFORMING OPERATIONS; TRANSPORTING
- B63—SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
- B63C—LAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
- B63C11/00—Equipment for dwelling or working underwater; Means for searching for underwater objects
- B63C11/02—Divers' equipment
- B63C2011/021—Diving computers, i.e. portable computers specially adapted for divers, e.g. wrist worn, watertight electronic devices for detecting or calculating scuba diving parameters
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Mechanical Engineering (AREA)
- Ocean & Marine Engineering (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure generally relates to displaying information related to physical activity. In some embodiments, methods and user interfaces for managing the display of information related to physical activity are described.
Description
Cross Reference to Related Applications
The present application claims priority to U.S. patent application Ser. No. 18/153,940, entitled "PHYSICAL ACTIVITY USER INTERFACES", filed January 12, 2023, and U.S. provisional patent application Ser. No. 63/404,152, entitled "PHYSICAL ACTIVITY USER INTERFACES", filed September 6, 2022. The contents of each of these applications are hereby incorporated by reference in their entirety.
Technical Field
The present disclosure relates generally to computer user interfaces, and more particularly to techniques for managing the display of information related to physical activity.
Background
Users of electronic devices, such as smartwatches and other computer systems, often perform physical activities while wearing the electronic device. The electronic device may provide information about the physical activity to the user both when the user performs the physical activity and after the user completes the physical activity.
Disclosure of Invention
However, some techniques for managing the display of information related to physical activity using electronic devices are cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes. Existing techniques require more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-powered devices.
Accordingly, the present technology provides faster, more efficient methods and interfaces for electronic devices to manage the display of information related to physical activity. Such methods and interfaces optionally supplement or replace other methods for managing the display of information related to physical activity. Such methods and interfaces reduce the cognitive burden on the user and create a more efficient human-machine interface. For battery-powered computing devices, such methods and interfaces conserve power and increase the time interval between battery charges.
According to some embodiments, a method performed at a computer system in communication with a display generation component and one or more sensors is described. The method includes: displaying, via the display generation component, an immersion user interface while the computer system is immersed; while displaying the immersion user interface, detecting, via the one or more sensors, a first depth at which the computer system is immersed; and in response to detecting the first depth at which the computer system is immersed: in accordance with a determination that the first depth is less than a predetermined depth threshold, displaying, via the display generation component, a first set of metrics regarding the immersion of the computer system; and in accordance with a determination that the first depth is greater than the predetermined depth threshold, displaying, via the display generation component, a second set of metrics regarding the immersion of the computer system, the second set of metrics being different from the first set of metrics.
According to some embodiments, a non-transitory computer-readable storage medium is described. The non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system, wherein the computer system is in communication with a display generation component and one or more sensors, the one or more programs including instructions for: displaying, via the display generation component, an immersion user interface while the computer system is immersed; while displaying the immersion user interface, detecting, via the one or more sensors, a first depth at which the computer system is immersed; and in response to detecting the first depth at which the computer system is immersed: in accordance with a determination that the first depth is less than a predetermined depth threshold, displaying, via the display generation component, a first set of metrics regarding the immersion of the computer system; and in accordance with a determination that the first depth is greater than the predetermined depth threshold, displaying, via the display generation component, a second set of metrics regarding the immersion of the computer system, the second set of metrics being different from the first set of metrics.
According to some embodiments, a transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system, wherein the computer system is in communication with a display generation component and one or more sensors, the one or more programs including instructions for: displaying, via the display generation component, an immersion user interface while the computer system is immersed; while displaying the immersion user interface, detecting, via the one or more sensors, a first depth at which the computer system is immersed; and in response to detecting the first depth at which the computer system is immersed: in accordance with a determination that the first depth is less than a predetermined depth threshold, displaying, via the display generation component, a first set of metrics regarding the immersion of the computer system; and in accordance with a determination that the first depth is greater than the predetermined depth threshold, displaying, via the display generation component, a second set of metrics regarding the immersion of the computer system, the second set of metrics being different from the first set of metrics.
According to some embodiments, a computer system is described. The computer system is in communication with a display generation component and one or more sensors, and includes one or more processors and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display generation component, an immersion user interface while the computer system is immersed; while displaying the immersion user interface, detecting, via the one or more sensors, a first depth at which the computer system is immersed; and in response to detecting the first depth at which the computer system is immersed: in accordance with a determination that the first depth is less than a predetermined depth threshold, displaying, via the display generation component, a first set of metrics regarding the immersion of the computer system; and in accordance with a determination that the first depth is greater than the predetermined depth threshold, displaying, via the display generation component, a second set of metrics regarding the immersion of the computer system, the second set of metrics being different from the first set of metrics.
According to some embodiments, a computer system is described. The computer system is in communication with a display generation component and one or more sensors, and includes: one or more processors; a memory storing one or more programs configured to be executed by the one or more processors; means for displaying, via the display generation component, an immersion user interface while the computer system is immersed; means for, while displaying the immersion user interface, detecting, via the one or more sensors, a first depth at which the computer system is immersed; and means for, in response to detecting the first depth at which the computer system is immersed: in accordance with a determination that the first depth is less than a predetermined depth threshold, displaying, via the display generation component, a first set of metrics regarding the immersion of the computer system; and in accordance with a determination that the first depth is greater than the predetermined depth threshold, displaying, via the display generation component, a second set of metrics regarding the immersion of the computer system, the second set of metrics being different from the first set of metrics.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component and one or more sensors, the one or more programs including instructions for: displaying, via the display generation component, an immersion user interface while the computer system is immersed; while displaying the immersion user interface, detecting, via the one or more sensors, a first depth at which the computer system is immersed; and in response to detecting the first depth at which the computer system is immersed: in accordance with a determination that the first depth is less than a predetermined depth threshold, displaying, via the display generation component, a first set of metrics regarding the immersion of the computer system; and in accordance with a determination that the first depth is greater than the predetermined depth threshold, displaying, via the display generation component, a second set of metrics regarding the immersion of the computer system, the second set of metrics being different from the first set of metrics.
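To make the claimed branching concrete, the following is a minimal Swift sketch of depth-conditional metric selection, not the patented implementation. The metric names, types, and the 1-meter threshold are illustrative assumptions; the embodiments only require some predetermined depth threshold separating two different metric sets.

```swift
import Foundation

// Illustrative metric sets; the names are assumptions, not taken from the patent.
enum ImmersionMetric {
    case duration, currentDepth, maxDepth, waterTemperature
}

// A minimal sketch of the claimed conditional display logic.
struct ImmersionUserInterface {
    let depthThresholdMeters = 1.0 // assumed placeholder threshold

    func metricsToDisplay(forDepth depth: Double) -> [ImmersionMetric] {
        if depth < depthThresholdMeters {
            // First set of metrics, shown for shallow submersion.
            return [.duration, .waterTemperature]
        } else {
            // Second, different set of metrics, shown past the threshold.
            return [.duration, .currentDepth, .maxDepth, .waterTemperature]
        }
    }
}

let ui = ImmersionUserInterface()
print(ui.metricsToDisplay(forDepth: 0.3))  // shallow: first set
print(ui.metricsToDisplay(forDepth: 18.0)) // deep: second set
```

In the claimed method the same comparison against the predetermined threshold decides which set the display generation component renders; the sketch only isolates that decision.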
Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are optionally included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
Thus, a faster, more efficient method and interface for managing the display of information related to physical activity is provided for a device, thereby improving the effectiveness, efficiency and user satisfaction of such devices. Such methods and interfaces may supplement or replace other methods for managing the display of information related to physical activity.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, in which like reference numerals designate corresponding parts throughout the several views.
Fig. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
Fig. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments.
Fig. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Fig. 4A illustrates an exemplary user interface for an application menu on a portable multifunction device in accordance with some embodiments.
Fig. 4B illustrates an exemplary user interface of a multifunction device with a touch-sensitive surface separate from a display in accordance with some embodiments.
Fig. 5A illustrates a personal electronic device in accordance with some embodiments.
Fig. 5B is a block diagram illustrating a personal electronic device in accordance with some embodiments.
Fig. 6A-6M illustrate schematic diagrams and exemplary user interfaces for managing the display of information related to physical activity, according to some embodiments.
Fig. 7 is a flow chart illustrating a method for managing the display of information related to physical activity, according to some embodiments.
Detailed Description
The following description sets forth exemplary methods, parameters, and the like. However, it should be recognized that such description is not intended as a limitation on the scope of the present disclosure, but is instead provided as a description of exemplary embodiments.
There is a need for an electronic device that provides an efficient method and interface for managing the display of information related to physical activity. For example, there is a need for an electronic device that allows a user to quickly and easily view metrics associated with physical activities being performed by the user. Such techniques may alleviate the cognitive burden on users accessing information related to physical activity, thereby improving productivity. Further, such techniques may reduce processor power and battery power that would otherwise be wasted on redundant user inputs.
Figs. 1A-1B, 2, 3, 4A-4B, and 5A-5B below provide a description of exemplary devices for performing the techniques for managing the display of information related to physical activity. Figs. 6A-6M illustrate exemplary user interfaces for managing the display of information related to physical activity. Fig. 7 is a flow chart illustrating a method of managing the display of information related to physical activity, according to some embodiments. The user interfaces in Figs. 6A-6M are used to illustrate the processes described below, including the process in Fig. 7.
The processes described below enhance operability of a device and make user-device interfaces more efficient (e.g., by helping a user provide appropriate input and reducing user error in operating/interacting with the device) through various techniques, including by providing improved visual feedback to the user, reducing the number of inputs required to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, performing an operation when a set of conditions has been met without further user input and/or additional techniques. These techniques also reduce power usage and extend battery life of the device by enabling a user to use the device faster and more efficiently.
Furthermore, in a method described herein in which one or more steps are contingent upon one or more conditions having been met, it should be understood that the method can be repeated in multiple iterations such that, over the course of those iterations, all of the conditions upon which steps of the method are contingent have been met in different iterations of the method. For example, if a method requires performing a first step if a condition is satisfied and a second step if the condition is not satisfied, a person of ordinary skill would appreciate that the stated steps are repeated until the condition has been both satisfied and not satisfied, in no particular order. Thus, a method described with one or more steps that are contingent upon one or more conditions having been met could be rewritten as a method that is repeated until each of the conditions described in the method has been met. This, however, is not required of a system or computer-readable-medium claim, where the system or computer-readable medium contains instructions for performing the contingent operations based on the satisfaction of the corresponding one or more conditions, and is thus capable of determining whether the contingency has or has not been satisfied without explicitly repeating the steps of the method until all of the conditions upon which steps in the method are contingent have been met. A person of ordinary skill in the art would also understand that, similar to a method with contingent steps, a system or computer-readable storage medium can repeat the steps of a method as many times as needed to ensure that all of the contingent steps have been performed.
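As an informal illustration of the contingent-step language above, here is a hypothetical Swift sketch (the step names are invented) of a method whose two steps each depend on a condition; repeating the method across iterations in which the condition is and is not met causes both condition-dependent steps to be performed.

```swift
func performFirstStep() { print("first step performed") }
func performSecondStep() { print("second step performed") }

// One iteration of a method with condition-contingent steps.
func runMethodIteration(conditionMet: Bool) {
    if conditionMet {
        performFirstStep()   // performed only in iterations where the condition is met
    } else {
        performSecondStep()  // performed only in iterations where it is not
    }
}

// Repeated iterations cover both contingencies, in no particular order.
for condition in [true, false] {
    runMethodIteration(conditionMet: condition)
}
```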
Although the following description uses the terms "first," "second," etc. to describe various elements, these elements should not be limited by the terms. In some embodiments, these terms are used to distinguish one element from another element. For example, a first touch may be named a second touch and similarly a second touch may be named a first touch without departing from the scope of the various described embodiments. In some embodiments, the first touch and the second touch are two separate references to the same touch. In some implementations, both the first touch and the second touch are touches, but they are not the same touch.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Depending on the context, the term "if" is, optionally, construed to mean "when" or "upon" or "in response to determining" or "in response to detecting." Similarly, depending on the context, the phrase "if it is determined" or "if [a stated condition or event] is detected" is, optionally, construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]."
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communication device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are optionally used. It should also be understood that, in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad). In some embodiments, the electronic device is a computer system that is in communication (e.g., via wireless communication, via wired communication) with a display generation component. The display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection. In some embodiments, the display generation component is integrated with the computer system. In some embodiments, the display generation component is separate from the computer system. As used herein, "displaying" content includes causing display of the content (e.g., video data rendered or decoded by display controller 156) by transmitting, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content.
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications such as one or more of a drawing application, a presentation application, a word processing application, a website creation application, a disk editing application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, a fitness support application, a photograph management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
The various applications executing on the device optionally use at least one generic physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or changed for different applications and/or within the respective applications. In this way, the common physical architecture of the devices (such as the touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and transparent to the user.
Attention is now directed toward embodiments of portable devices with touch-sensitive displays. Fig. 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display 112 is sometimes called a "touch screen" for convenience and is sometimes known as or called a "touch-sensitive display system." Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input control devices 116, and external port 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more contact intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., on a touch-sensitive surface such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in the specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). The intensity of a contact is optionally determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are optionally used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., as a weighted average) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is optionally used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows user access to additional device functions that may otherwise not be accessible on a reduced-size device with limited real estate for displaying affordances and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button).
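To illustrate the weighted-average force estimation and threshold comparison described above, here is a short sketch under assumed names, units, and threshold values; none of the numbers come from the disclosure.

```swift
import Foundation

// One reading from a force sensor near the touch-sensitive surface.
struct ForceSensorReading {
    let force: Double   // newtons (assumed unit)
    let weight: Double  // e.g., proximity of the sensor to the contact point
}

// Weighted average of several sensor readings, as sketched in the text.
func estimatedContactForce(_ readings: [ForceSensorReading]) -> Double {
    let totalWeight = readings.reduce(0) { $0 + $1.weight }
    guard totalWeight > 0 else { return 0 }
    return readings.reduce(0) { $0 + $1.force * $1.weight } / totalWeight
}

let readings = [
    ForceSensorReading(force: 0.8, weight: 0.7),
    ForceSensorReading(force: 0.5, weight: 0.3),
]
let intensityThreshold = 0.6 // assumed threshold, in the same units as the estimate
// Estimate is 0.71, exceeding the threshold, so this contact counts as a "press".
print(estimatedContactForce(readings) > intensityThreshold) // true
```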
As used in the specification and claims, the term "haptic output" refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., the housing), or displacement of a component relative to the center of mass of the device, that will be detected by a user with the user's sense of touch. For example, in situations where the device or a component of the device is in contact with a touch-sensitive surface of the user (e.g., a finger, palm, or other portion of the user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or touch pad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuator button. In some cases, the user will feel a tactile sensation such as a "down click" or "up click" even when the physical actuator button associated with the touch-sensitive surface, which is physically pressed (e.g., displaced) by the user's movement, does not move. As another example, movement of the touch-sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when the smoothness of the touch-sensitive surface is unchanged. While such interpretations of touch by a user will be subject to the individualized sensory perception of the user, many sensory perceptions of touch are common to a large majority of users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., an "up click," a "down click," "roughness"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
It should be understood that the device 100 is merely one example of a portable multifunction device, and that the device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in fig. 1A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
Memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
Peripheral interface 118 may be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs, such as computer programs (e.g., including instructions), and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and process data. In some embodiments, peripheral interface 118, CPU 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates by wireless communication with networks, such as the Internet (also referred to as the World Wide Web (WWW)), an intranet, and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN), and other devices. RF circuitry 108 optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between the user and device 100. Audio circuitry 110 receives audio data from peripheral interface 118, converts the audio data to electrical signals, and sends the electrical signals to speaker 111. The speaker 111 converts electrical signals into sound waves that are audible to humans. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuitry 110 converts the electrical signals into audio data and sends the audio data to the peripheral interface 118 for processing. The audio data is optionally retrieved from and/or transmitted to the memory 102 and/or the RF circuitry 108 by the peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuit 110 and removable audio input/output peripherals such as output-only headphones or a headset having both an output (e.g., a monaural or binaural) and an input (e.g., a microphone).
I/O subsystem 106 couples input/output peripheral devices on device 100, such as touch screen 112 and other input control devices 116, to peripheral interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, a depth camera controller 169, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive electrical signals from/transmit electrical signals to other input control devices 116. The other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click-type dials, and the like. In some implementations, the input controller 160 is optionally coupled to (or not coupled to) any of a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. One or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2). In some embodiments, the electronic device is a computer system that communicates (e.g., via wireless communication, via wired communication) with one or more input devices. In some implementations, the one or more input devices include a touch-sensitive surface (e.g., a touch pad as part of a touch-sensitive display). In some implementations, the one or more input devices include one or more camera sensors (e.g., one or more optical sensors 164 and/or one or more depth camera sensors 175) such as for tracking gestures (e.g., hand gestures and/or air gestures) of the user as input. In some embodiments, one or more input devices are integrated with the computer system. In some embodiments, one or more input devices are separate from the computer system. In some embodiments, the air gesture is a gesture that is detected without the user touching an input element that is part of the device (or independent of an input element that is part of the device) and based on a detected movement of a portion of the user's body through the air, including a movement of the user's body relative to an absolute reference (e.g., an angle of the user's arm relative to the ground or a distance of the user's hand relative to the ground), a movement relative to another portion of the user's body (e.g., a movement of the user's hand relative to the user's shoulder, a movement of the user's hand relative to the other hand, and/or a movement of the user's finger relative to the other finger or part of the hand), and/or an absolute movement of a portion of the user's body (e.g., a flick gesture that includes the hand moving a predetermined amount and/or speed in a predetermined pose, or a shake gesture that includes a predetermined speed or amount of rotation of a portion of the user's body).
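As a simplified illustration of the air-gesture criteria described above (movement of a predetermined amount and/or at a predetermined speed), the following hypothetical sketch classifies a one-dimensional hand trajectory as a flick; the types and thresholds are assumptions, not values from the disclosure.

```swift
import Foundation

// A tracked hand position sample; one dimension is used for brevity.
struct HandSample {
    let position: Double        // meters along one axis (assumed)
    let timestamp: TimeInterval // seconds
}

// A flick is recognized when the hand moves at least a predetermined
// distance at or above a predetermined speed within the sampled window.
func isAirFlick(_ samples: [HandSample],
                minDistance: Double = 0.15, // meters (assumed threshold)
                minSpeed: Double = 0.5) -> Bool { // meters/second (assumed)
    guard let first = samples.first, let last = samples.last,
          last.timestamp > first.timestamp else { return false }
    let distance = abs(last.position - first.position)
    let speed = distance / (last.timestamp - first.timestamp)
    return distance >= minDistance && speed >= minSpeed
}

print(isAirFlick([HandSample(position: 0.0, timestamp: 0.0),
                  HandSample(position: 0.2, timestamp: 0.25)])) // true: 0.2 m at 0.8 m/s
```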
A quick press of the push button optionally disengages a lock of touch screen 112 or optionally begins a process of unlocking the device using gestures on the touch screen, as described in U.S. patent application Ser. No. 11/322,549, "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005 (i.e., U.S. Pat. No. 7,657,849), which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 206) optionally turns power to device 100 on or off. The functionality of one or more of the buttons is optionally customizable by the user. Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
The touch sensitive display 112 provides an input interface and an output interface between the device and the user. The display controller 156 receives electrical signals from and/or transmits electrical signals to the touch screen 112. Touch screen 112 displays visual output to a user. Visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively, "graphics"). In some embodiments, some or all of the visual output optionally corresponds to a user interface object.
Touch screen 112 has a touch-sensitive surface, sensor or set of sensors that receives input from a user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or interruption of the contact) on touch screen 112 and translate the detected contact into interactions with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 112. In an exemplary embodiment, the point of contact between touch screen 112 and the user corresponds to a user's finger.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, California.
The touch-sensitive display in some embodiments of touch screen 112 is optionally similar to the multi-touch-sensitive touch pad described in U.S. Pat. No. 6,323,846 (Westerman et al), 6,570,557 (Westerman et al), and/or 6,677,932 (Westerman et al) and/or U.S. patent publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, while touch sensitive touchpads do not provide visual output.
Touch-sensitive displays in some embodiments of touch screen 112 are described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, "Multipoint Touch Surface Controller," filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, "Multipoint Touchscreen," filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, "Gestures For Touch Sensitive Input Devices," filed July 30, 2004; (4) U.S. patent application Ser. No. 11/048,264, "Gestures For Touch Sensitive Input Devices," filed January 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices," filed January 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, "Virtual Input Device Placement On A Touch Screen User Interface," filed September 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, "Operation Of A Computer With A Touch Screen Interface," filed September 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard," filed September 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, "Multi-Functional Hand-Held Device," filed March 3, 2006. All of these applications are incorporated by reference herein in their entirety.
Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some implementations, the touch screen has a video resolution of about 160 dpi. The user optionally uses any suitable object or appendage, such as a stylus, finger, or the like, to make contact with touch screen 112. In some embodiments, the user interface is designed to work primarily through finger-based contact and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some embodiments, the device translates the finger-based coarse input into a precise pointer/cursor location or command for performing the action desired by the user.
In some embodiments, the device 100 optionally includes a touch pad for activating or deactivating specific functions in addition to the touch screen. In some embodiments, the touch pad is a touch sensitive area of the device that, unlike the touch screen, does not display visual output. The touch pad is optionally a touch sensitive surface separate from the touch screen 112 or an extension of the touch sensitive surface formed by the touch screen.
The apparatus 100 also includes a power system 162 for powering the various components. The power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in the portable device.
The apparatus 100 optionally further comprises one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The optical sensor 164 optionally includes a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor 164 receives light projected through one or more lenses from the environment and converts the light into data representing an image. In conjunction with imaging module 143 (also referred to as a camera module), optical sensor 164 optionally captures still images or video. In some embodiments, the optical sensor is located on the rear of the device 100, opposite the touch screen display 112 on the front of the device, so that the touch screen display can be used as a viewfinder for still image and/or video image acquisition. In some embodiments, the optical sensor is located on the front of the device such that the user's image is optionally acquired for video conferencing while viewing other video conference participants on the touch screen display. In some implementations, the positioning of the optical sensor 164 can be changed by the user (e.g., by rotating the lenses and sensors in the device housing) such that a single optical sensor 164 is used with the touch screen display for both video conferencing and still image and/or video image acquisition.
The device 100 optionally further includes one or more depth camera sensors 175. FIG. 1A shows a depth camera sensor coupled to a depth camera controller 169 in the I/O subsystem 106. The depth camera sensor 175 receives data from the environment to create a three-dimensional model of objects (e.g., faces) within the scene from a point of view (e.g., depth camera sensor). In some implementations, in conjunction with the imaging module 143 (also referred to as a camera module), the depth camera sensor 175 is optionally used to determine a depth map of different portions of the image captured by the imaging module 143. In some implementations, a depth camera sensor is located at the front of the device 100 such that user images with depth information are optionally acquired for video conferencing while the user views other video conferencing participants on a touch screen display, and self-shots with depth map data are captured. In some embodiments, the depth camera sensor 175 is located at the back of the device, or at the back and front of the device 100. In some implementations, the positioning of the depth camera sensor 175 can be changed by the user (e.g., by rotating lenses and sensors in the device housing) such that the depth camera sensor 175 is used with a touch screen display for both video conferencing and still image and/or video image acquisition.
The apparatus 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. The contact strength sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other strength sensors (e.g., sensors for measuring force (or pressure) of a contact on a touch-sensitive surface). The contact strength sensor 165 receives contact strength information (e.g., pressure information or a surrogate for pressure information) from the environment. In some implementations, at least one contact intensity sensor is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the rear of the device 100, opposite the touch screen display 112 located on the front of the device 100.
Device 100 optionally also includes one or more proximity sensors 166. Fig. 1A shows proximity sensor 166 coupled to peripherals interface 118. Alternately, proximity sensor 166 is optionally coupled to input controller 160 in I/O subsystem 106. Proximity sensor 166 optionally performs as described in U.S. patent application Ser. No. 11/241,839, "Proximity Detector In Handheld Device"; Ser. No. 11/240,788, "Proximity Detector In Handheld Device"; Ser. No. 11/620,702, "Using Ambient Light Sensor To Augment Proximity Sensor Output"; Ser. No. 11/586,862, "Automated Response To And Sensing Of User Activity In Portable Devices"; and Ser. No. 11/638,251, "Methods And Systems For Automatic Configuration Of Peripherals," which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables touch screen 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 100 optionally also includes one or more tactile output generators 167. Fig. 1A shows a tactile output generator coupled to haptic feedback controller 161 in I/O subsystem 106. Tactile output generator 167 optionally includes one or more electroacoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator 167 receives tactile feedback generation instructions from haptic feedback module 133 and generates tactile outputs on device 100 that are capable of being sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and optionally generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled to peripherals interface 118. Alternately, accelerometer 168 is optionally coupled to input controller 160 in I/O subsystem 106. Accelerometer 168 optionally performs as described in U.S. Patent Publication No. 20050190059, "Acceleration-based Theft Detection System for Portable Electronic Devices," and U.S. Patent Publication No. 20060017692, "Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer," both of which are incorporated by reference herein in their entirety. In some embodiments, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes, in addition to accelerometer 168, a magnetometer and a GPS (or GLONASS or other global navigation system) receiver for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
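A minimal sketch of the portrait/landscape decision mentioned above, assuming a common axis convention in which gravity dominates the device's y axis when the device is held upright; real implementations add filtering and hysteresis, which are omitted here.

```swift
import Foundation

enum InterfaceOrientation { case portrait, landscape }

// Compare gravity components along the device's x and y axes (in g units).
func orientation(accelX: Double, accelY: Double) -> InterfaceOrientation {
    // Gravity dominates the y axis when the device is upright.
    return abs(accelY) >= abs(accelX) ? .portrait : .landscape
}

print(orientation(accelX: 0.1, accelY: -0.98)) // portrait
print(orientation(accelX: 0.95, accelY: 0.2))  // landscape
```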
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or instruction set) 128, a contact/motion module (or instruction set) 130, a graphics module (or instruction set) 132, a text input module (or instruction set) 134, a Global Positioning System (GPS) module (or instruction set) 135, and an application (or instruction set) 136. Furthermore, in some embodiments, memory 102 (fig. 1A) or 370 (fig. 3) stores device/global internal state 157, as shown in fig. 1A and 3. The device/global internal state 157 includes one or more of an active application state indicating which applications (if any) are currently active, a display state indicating what applications, views, or other information occupy various areas of the touch screen display 112, sensor states including information obtained from various sensors of the device and the input control device 116, and location information relating to the device's location and/or attitude.
Operating system 126 (e.g., darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or embedded operating systems such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage control, power management, etc.), and facilitates communication between the various hardware components and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., one-finger contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple-finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
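To illustrate deriving movement attributes of a point of contact from a series of contact data, here is a small sketch with assumed field names; the velocity vector carries magnitude and direction, and its magnitude is the speed.

```swift
import Foundation

// One sample of contact data from the touch-sensitive surface.
struct ContactSample {
    let x: Double, y: Double     // position, in points (assumed)
    let timestamp: TimeInterval  // seconds
}

// Velocity (magnitude and direction) between two successive samples.
func velocity(from a: ContactSample, to b: ContactSample) -> (dx: Double, dy: Double) {
    let dt = b.timestamp - a.timestamp
    guard dt > 0 else { return (0, 0) }
    return ((b.x - a.x) / dt, (b.y - a.y) / dt)
}

let s0 = ContactSample(x: 10, y: 10, timestamp: 0.000)
let s1 = ContactSample(x: 18, y: 10, timestamp: 0.016)
let v = velocity(from: s0, to: s1)
print(v)                                      // (dx: 500, dy: 0) points/second
print((v.dx * v.dx + v.dy * v.dy).squareRoot()) // speed: 500 points/second
```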
In some implementations, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether the user has "clicked" on an icon). In some implementations, at least a subset of the intensity thresholds are determined according to software parameters (e.g., the intensity thresholds are not determined by activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of the device 100). For example, without changing the touchpad or touch screen display hardware, the mouse "click" threshold of the touchpad or touch screen may be set to any of a wide range of predefined thresholds. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds of a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different movements, timings, and/or intensities of the detected contacts). Thus, a gesture is optionally detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift-off) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-drag events, and then detecting a finger-up (lift-off) event.
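The contact-pattern matching just described can be sketched as follows; the event cases, the distance tolerance, and the classifier logic are illustrative assumptions, not an actual gesture API:

    import Foundation

    // Illustrative sub-events mirroring the finger-down / finger-drag /
    // finger-up events described above.
    enum TouchEvent {
        case fingerDown(x: Double, y: Double)
        case fingerDrag(x: Double, y: Double)
        case fingerUp(x: Double, y: Double)
    }

    // Tap: finger-down then finger-up at (substantially) the same location.
    // Swipe: finger-down, one or more drags, then finger-up elsewhere.
    func detectGesture(_ events: [TouchEvent], tolerance: Double = 10.0) -> String? {
        guard case let .fingerDown(x0, y0)? = events.first,
              case let .fingerUp(x1, y1)? = events.last else { return nil }
        let dragCount = events.dropFirst().dropLast().filter {
            if case .fingerDrag = $0 { return true } else { return false }
        }.count
        let moved = hypot(x1 - x0, y1 - y0) > tolerance
        if dragCount == 0 && !moved { return "tap" }
        if dragCount > 0 && moved { return "swipe" }
        return nil
    }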
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual attribute) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including but not limited to text, web pages, icons (such as user interface objects including soft keys), digital images, video, animation, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphics module 132 receives, from applications or other sources, one or more codes specifying the graphics to be displayed, together with coordinate data and other graphic property data if necessary, and then generates screen image data to output to the display controller 156.
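A minimal sketch of this code-to-graphic lookup, with hypothetical types and names (the module's actual data formats are not specified here):

    // Callers pass a code plus coordinates; the store resolves the code to
    // registered graphic data before compositing screen image data.
    struct Graphic {
        let name: String
        let bitmap: [UInt8]   // stand-in for actual image data
    }

    final class GraphicsStore {
        private var graphicsByCode: [Int: Graphic] = [:]

        func register(code: Int, graphic: Graphic) {
            graphicsByCode[code] = graphic
        }

        // Resolve a display request (code + position) into a draw command.
        func drawCommand(code: Int, x: Int, y: Int) -> String? {
            guard let graphic = graphicsByCode[code] else { return nil }
            return "draw \(graphic.name) at (\(x), \(y))"
        }
    }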
Haptic feedback module 133 includes various software components for generating instructions used by haptic output generator 167 to generate haptic output at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications (e.g., contacts 137, email 140, IM 141, browser 147, and any other application requiring text input).
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for use in location-based dialing, to the camera 143 as picture/video metadata, and to applications that provide location-based services, such as weather gadgets, local page gadgets, and map/navigation gadgets).
The application 136 optionally includes the following modules (or instruction sets) or a subset or superset thereof:
Contacts module 137 (sometimes referred to as an address book or contact list);
Telephone module 138;
Video conferencing module 139;
Email client module 140;
Instant messaging (IM) module 141;
Fitness support module 142;
Camera module 143 for still and/or video images;
Image management module 144;
Video player module;
Music player module;
Browser module 147;
Calendar module 148;
Gadget module 149, optionally including one or more of: weather gadget 149-1, stock gadget 149-2, calculator gadget 149-3, alarm clock gadget 149-4, dictionary gadget 149-5, other gadgets acquired by the user, and user-created gadget 149-6;
Gadget creator module 150 for making user-created gadget 149-6;
Search module 151;
Video and music player module 152, which merges the video player module and the music player module;
Notepad module 153;
Map module 154; and/or
Online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is optionally used to manage an address book or list of contacts (e.g., in application internal state 192 of contacts module 137 stored in memory 102 or memory 370), including adding one or more names to the address book, deleting names from the address book, associating telephone numbers, email addresses, physical addresses, or other information with names, associating images with names, categorizing and classifying names, providing telephone numbers or email addresses to initiate and/or facilitate communication through telephone 138, videoconferencing module 139, email 140, or IM 141, and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 is optionally used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contact module 137, modify the entered telephone numbers, dial the corresponding telephone numbers, conduct a conversation, and disconnect or hang up when the conversation is completed. As described above, wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephony module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a videoconference between a user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, transmitting, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and transmit emails with still or video images captured by the camera module 143.
In conjunction with the RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, the instant message module 141 includes executable instructions for entering a sequence of characters corresponding to an instant message, modifying previously entered characters, sending a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for phone-based instant messages or using XMPP, SIMPLE, or IMPS for internet-based instant messages), receiving an instant message, and viewing the received instant message. In some embodiments, the instant message sent and/or received optionally includes graphics, photographs, audio files, video files, and/or other attachments supported in an MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages transmitted using SMS or MMS) and internet-based messages (e.g., messages transmitted using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and the music player module, fitness support module 142 includes executable instructions for creating workouts (e.g., with time, distance, and/or calorie-burning goals), communicating with workout sensors (sports devices), receiving workout sensor data, calibrating the sensors used to monitor a workout, selecting and playing music for a workout, and displaying, storing, and transmitting workout data.
In conjunction with touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for capturing still images or video (including video streams) and storing them into memory 102, modifying the characteristics of the still images or video, or deleting the still images or video from memory 102.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, marking, deleting, presenting (e.g., in a digital slide or album), and storing still images and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions for browsing the internet according to user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, gadget module 149 comprises mini-applications that are optionally downloaded and used by a user (e.g., weather gadget 149-1, stock gadget 149-2, calculator gadget 149-3, alarm clock gadget 149-4, and dictionary gadget 149-5) or created by the user (e.g., user-created gadget 149-6). In some embodiments, a gadget includes an HTML (hypertext markup language) file, a CSS (cascading style sheets) file, and a JavaScript file. In some embodiments, a gadget includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! gadgets).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, gadget creator module 150 is optionally used by a user to create gadgets (e.g., to transform user-specified portions of a web page into gadgets).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching memory 102 for text, music, sound, images, video, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, presenting, or otherwise playing back videos (e.g., on touch screen 112 or on an external display connected via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player such as an iPod (trademark of Apple Inc.).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notepad module 153 includes executable instructions for creating and managing notepads, backlog, and the like in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is optionally configured to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data related to shops and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with the touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes instructions that allow a user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external display connected via external port 124), send an email with a link to a particular online video, and otherwise manage online videos in one or more file formats such as H.264. In some embodiments, the instant messaging module 141, rather than the email client module 140, is used to send a link to a particular online video. Additional description of the online video application can be found in U.S. Provisional Patent Application No. 60/936,562, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed June 20, 2007, and U.S. Patent Application No. 11/968,067, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed December 31, 2007, the contents of both of which are hereby incorporated by reference in their entirety.
Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above, as well as the methods described in this patent application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs (such as computer programs (e.g., including instructions)), procedures, or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. For example, the video player module is optionally combined with the music player module into a single module (e.g., video and music player module 152 in fig. 1A). In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device in which the operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or touch pad. By using a touch screen and/or a touch pad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
A predefined set of functions performed solely by the touch screen and/or touch pad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by a user, navigates the device 100 from any user interface displayed on the device 100 to a main menu, a main desktop menu, or a root menu. In such implementations, a touch pad is used to implement a "menu button". In some other embodiments, the menu buttons are physical push buttons or other physical input control devices, rather than touch pads.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments. In some embodiments, memory 102 (fig. 1A) or memory 370 (fig. 3) includes event sorter 170 (e.g., in operating system 126) and corresponding applications 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
The event classifier 170 receives the event information and determines the application 136-1 and the application view 191 of the application 136-1 to which the event information is to be delivered. The event sorter 170 includes an event monitor 171 and an event dispatcher module 174. In some implementations, the application 136-1 includes an application internal state 192 that indicates one or more current application views that are displayed on the touch-sensitive display 112 when the application is active or executing. In some embodiments, the device/global internal state 157 is used by the event classifier 170 to determine which application(s) are currently active, and the application internal state 192 is used by the event classifier 170 to determine the application view 191 to which to deliver event information.
In some embodiments, the application internal state 192 includes additional information, such as one or more of: resume information to be used when the application 136-1 resumes execution; user interface state information indicating the information being displayed by, or ready for display by, the application 136-1; a state queue for enabling the user to return to a previous state or view of the application 136-1; and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about sub-events (e.g., user touches on the touch sensitive display 112 as part of a multi-touch gesture). The peripheral interface 118 sends information it receives from the I/O subsystem 106 or sensors, such as a proximity sensor 166, one or more accelerometers 168, and/or microphone 113 (via audio circuitry 110). The information received by the peripheral interface 118 from the I/O subsystem 106 includes information from the touch-sensitive display 112 or touch-sensitive surface.
In some embodiments, event monitor 171 communicates requests to peripheral interface 118 at predetermined intervals. In response, the peripheral interface 118 sends event information. In other embodiments, the peripheral interface 118 transmits event information only if there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or receiving an input exceeding a predetermined duration).
In some implementations, the event classifier 170 also includes a hit view determination module 172 and/or an active event identifier determination module 173.
When the touch sensitive display 112 displays more than one view, the hit view determination module 172 provides a software process for determining where within one or more views a sub-event has occurred. The view is made up of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which a touch is detected optionally corresponds to a programmatic level within the application's programmatic or view hierarchy. For example, the lowest level view in which a touch is detected is optionally called the hit view, and the set of events that are recognized as proper inputs is optionally determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should process sub-events. In most cases, the hit view is the lowest level view in which the initiating sub-event (e.g., the first sub-event in a sequence of sub-events that form an event or potential event) occurs. Once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as a hit view.
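The "lowest view containing the initial sub-event" rule can be sketched as a recursive search; the view type, the coordinate handling (kept in a single global space for brevity), and the names are assumptions for illustration:

    struct Rect {
        var x, y, width, height: Double
        func contains(_ px: Double, _ py: Double) -> Bool {
            px >= x && px < x + width && py >= y && py < y + height
        }
    }

    final class View {
        let name: String
        let frame: Rect          // global coordinates, for simplicity
        var subviews: [View] = []
        init(name: String, frame: Rect) { self.name = name; self.frame = frame }
    }

    // Return the deepest view in the hierarchy whose bounds contain the
    // point where the initial sub-event occurred; that view is the hit view.
    func hitView(in root: View, at x: Double, _ y: Double) -> View? {
        guard root.frame.contains(x, y) else { return nil }
        for sub in root.subviews.reversed() {
            if let hit = hitView(in: sub, at: x, y) { return hit }
        }
        return root
    }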
The activity event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some embodiments, the active event identifier determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, the activity event recognizer determination module 173 determines that all views including the physical location of a sub-event are actively engaged views and, thus, that all actively engaged views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely localized to an area associated with one particular view, the higher view in the hierarchy will remain the actively engaged view.
The event dispatcher module 174 dispatches event information to an event recognizer (e.g., event recognizer 180). In embodiments that include an active event recognizer determination module 173, the event dispatcher module 174 delivers event information to the event recognizers determined by the active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue that is retrieved by the corresponding event receiver 182.
In some embodiments, the operating system 126 includes an event classifier 170. Alternatively, the application 136-1 includes an event classifier 170. In yet another embodiment, the event sorter 170 is a stand-alone module or part of another module stored in the memory 102, such as the contact/motion module 130.
In some embodiments, the application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface toolkit or a higher-level object from which the application 136-1 inherits methods and other properties. In some implementations, a respective event handler 190 includes one or more of a data updater 176, an object updater 177, a GUI updater 178, and/or event data 179 received from the event classifier 170. Event handler 190 optionally utilizes or invokes data updater 176, object updater 177, or GUI updater 178 to update the application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Additionally, in some implementations, one or more of the data updater 176, the object updater 177, and the GUI updater 178 are included in a respective application view 191.
The respective event identifier 180 receives event information (e.g., event data 179) from the event classifier 170 and identifies events based on the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 further includes at least a subset of metadata 183 and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as the location of the sub-event. When a sub-event concerns motion of a touch, the event information optionally also includes the speed and direction of the sub-event. In some embodiments, an event includes rotation of the device from one orientation to another (e.g., from portrait orientation to landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation of the device (also called the device pose).
The event comparator 184 compares the event information with predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. The event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some implementations, sub-events in an event (e.g., 187-1 and/or 187-2) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch on the displayed object for a predetermined phase (touch begin), a first lift-off for a predetermined phase (touch end), a second touch on the displayed object for a predetermined phase (touch begin), and a second lift-off for a predetermined phase (touch end). In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across the touch-sensitive display 112, and a lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
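As a sketch of matching a sub-event sequence against such a definition, the following checks the double-tap pattern just described (two touch-begin/touch-end pairs, each phase within a time limit); the event cases and the 0.3-second limit are illustrative assumptions:

    enum SubEvent {
        case touchBegin(time: Double)
        case touchEnd(time: Double)
    }

    // Matches: begin, end, begin, end, with each successive phase
    // completing within the predetermined time limit.
    func isDoubleTap(_ events: [SubEvent], maxPhase: Double = 0.3) -> Bool {
        guard events.count == 4,
              case let .touchBegin(t0) = events[0],
              case let .touchEnd(t1) = events[1],
              case let .touchBegin(t2) = events[2],
              case let .touchEnd(t3) = events[3] else { return false }
        return (t1 - t0) <= maxPhase && (t2 - t1) <= maxPhase && (t3 - t2) <= maxPhase
    }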
In some implementations, the event definitions 186 include definitions of events for respective user interface objects. In some implementations, the event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object that triggered the hit test.
In some embodiments, the definition of the respective event (187) further includes a delay action that delays delivery of the event information until it has been determined that the sequence of sub-events does or does not correspond to an event type of the event recognizer.
When the respective event recognizer 180 determines that the sequence of sub-events does not match any of the events in the event definition 186, the respective event recognizer 180 enters an event impossible, event failed, or event end state after which subsequent sub-events of the touch-based gesture are ignored. In this case, the other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 with configurable attributes, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to the actively engaged event recognizer. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact or are able to interact with each other. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to different levels in a view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are recognized, the respective event recognizer 180 activates the event handler 190 associated with the event. In some implementations, the respective event recognizer 180 delivers event information associated with the event to the event handler 190. Activating an event handler 190 is distinct from sending (and deferring the sending of) sub-events to a respective hit view. In some embodiments, the event recognizer 180 throws a flag associated with the recognized event, and the event handler 190 associated with the flag catches the flag and performs a predefined process.
In some implementations, the event delivery instructions 188 include sub-event delivery instructions that deliver event information about the sub-event without activating the event handler. Instead, the sub-event delivery instructions deliver the event information to an event handler associated with the sub-event sequence or to an actively engaged view. Event handlers associated with the sequence of sub-events or with the actively engaged views receive the event information and perform a predetermined process.
In some embodiments, the data updater 176 creates and updates data used in the application 136-1. For example, the data updater 176 updates a telephone number used in the contact module 137 or stores a video file used in the video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, the object updater 177 creates a new user interface object or updates the positioning of the user interface object. GUI updater 178 updates the GUI. For example, the GUI updater 178 prepares the display information and communicates the display information to the graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, the data updater 176, the object updater 177, and the GUI updater 178 are included in a single module of the respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be appreciated that the above discussion regarding event handling of user touches on a touch sensitive display also applies to other forms of user inputs that utilize an input device to operate the multifunction device 100, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally in conjunction with single or multiple keyboard presses or holds, contact movements on a touchpad, such as taps, drags, scrolls, etc., stylus inputs, movements of a device, verbal instructions, detected eye movements, biometric inputs, and/or any combination thereof are optionally used as inputs corresponding to sub-events defining an event to be identified.
Fig. 2 illustrates a portable multifunction device 100 having a touch screen 112 in accordance with some embodiments. The touch screen optionally displays one or more graphics within a user interface (UI) 200. In this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with the device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to selection is a tap, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application.
The device 100 optionally also includes one or more physical buttons, such as a "home desktop" or menu button 204. As previously described, menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 112.
In some embodiments, the device 100 includes a touch screen 112, menu buttons 204, a press button 206 for powering the device on/off and for locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and a docking/charging external port 124. Depressing button 206 is optionally used to turn the device on/off by depressing the button and holding the button in a depressed state for a predefined time interval, to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed, and/or to unlock the device or initiate an unlocking process. In an alternative embodiment, the device 100 also accepts voice input through the microphone 113 for activating or deactivating certain functions. The device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on the touch screen 112, and/or one or more haptic output generators 167 for generating haptic outputs for a user of the device 100.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home controller or an industrial controller). The device 300 generally includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes referred to as a chipset) that interconnects and controls communications between system components. The device 300 includes an input/output (I/O) interface 330 with a display 340, which is typically a touch screen display. The I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355, a tactile output generator 357 for generating tactile outputs on the device 300 (e.g., similar to the tactile output generator 167 described above with reference to fig. 1A), and sensors 359 (e.g., optical sensors, acceleration sensors, proximity sensors, touch-sensitive sensors, and/or contact intensity sensors similar to the contact intensity sensor 165 described above with reference to fig. 1A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices, and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory storage devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU 310. In some embodiments, memory 370 stores programs, modules, and data structures similar to those stored in memory 102 of portable multifunction device 100 (fig. 1A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (fig. 1A) optionally does not store these modules.
Each of the above-identified elements in fig. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing the functions described above. The above-identified modules or computer programs (e.g., sets of instructions) need not be implemented as separate software programs (such as computer programs (e.g., including instructions)), procedures, or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of user interfaces optionally implemented on, for example, portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of an application menu on the portable multifunction device 100 in accordance with some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
Signal strength indicator 402 for wireless communications (such as cellular signals and Wi-Fi signals);
Time 404;
bluetooth indicator 405;
battery status indicator 406;
Tray 408 with icons for common applications such as:
Icon 416 labeled "phone" of telephone module 138, optionally including an indicator 414 of the number of missed calls or voicemail messages;
Icon 418 labeled "mail" of email client module 140, optionally including an indicator 410 of the number of unread emails;
Icon 420 labeled "browser" of browser module 147; and
Icon 422 labeled "iPod" of video and music player module 152 (also referred to as iPod (trademark of Apple Inc.) module 152); and
Icons of other applications, such as:
Icon 424 labeled "message" of IM module 141;
Icon 426 labeled "calendar" of calendar module 148;
Icon 428 labeled "photo" of image management module 144;
Icon 430 labeled "camera" of camera module 143;
Icon 432 labeled "online video" of online video module 155;
Icon 434 labeled "stock market" of stock gadget 149-2;
Icon 436 labeled "map" of map module 154;
Icon 438 labeled "weather" of weather gadget 149-1;
Icon 440 labeled "clock" of alarm clock gadget 149-4;
Icon 442 labeled "fitness support" of fitness support module 142;
Icon 444 labeled "notepad" of notepad module 153; and
Icon 446 labeled "settings" of a settings application or module, which provides access to settings for the device 100 and its various applications 136.
It should be noted that the icon labels illustrated in fig. 4A are merely exemplary. For example, the icon 422 of the video and music player module 152 is optionally labeled "music" or "music player". Other labels are optionally used for various application icons. In some embodiments, the label of a respective application icon includes the name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 of fig. 3) having a touch-sensitive surface 451 (e.g., tablet device or touchpad 355 of fig. 3) separate from a display 450 (e.g., touch screen display 112). The device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of the sensors 359) for detecting the intensity of the contact on the touch-sensitive surface 451 and/or one or more tactile output generators 357 for generating tactile outputs for a user of the device 300.
While some of the examples below will be given with reference to inputs on touch screen display 112 (where the touch sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch sensitive surface separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to the primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). According to these embodiments, the device detects contact (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at a location corresponding to a respective location on the display (e.g., 460 corresponds to 468 and 462 corresponds to 470 in fig. 4B). In this way, when the touch-sensitive surface (e.g., 451 in FIG. 4B) is separated from the display (e.g., 450 in FIG. 4B) of the multifunction device, user inputs (e.g., contacts 460 and 462 and movement thereof) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be appreciated that similar approaches are optionally used for other user interfaces described herein.
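A minimal sketch of that correspondence, assuming a simple linear mapping along each primary axis (the sizes and names are illustrative):

    struct Size { var width: Double; var height: Double }
    struct Point { var x: Double; var y: Double }

    // Map a contact on the separate touch-sensitive surface (e.g., 451)
    // to the corresponding location on the display (e.g., 450).
    func displayLocation(for contact: Point, surface: Size, display: Size) -> Point {
        Point(x: contact.x / surface.width * display.width,
              y: contact.y / surface.height * display.height)
    }

    // Example: a contact at (100, 50) on a 200x100 surface maps to
    // (400, 300), the center of an 800x600 display.
    let mapped = displayLocation(for: Point(x: 100, y: 50),
                                 surface: Size(width: 200, height: 100),
                                 display: Size(width: 800, height: 600))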
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, single-finger tap gestures, finger swipe gestures), it should be understood that in some embodiments one or more of these finger inputs are replaced by input from another input device (e.g., mouse-based input or stylus input). For example, a swipe gesture is optionally replaced with a mouse click (e.g., instead of a contact), followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is optionally replaced with a mouse click while the cursor is located over the position of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or that a mouse and finger contacts are optionally used simultaneously.
Fig. 5A illustrates an exemplary personal electronic device 500. The device 500 includes a body 502. In some embodiments, device 500 may include some or all of the features described with respect to devices 100 and 300 (e.g., fig. 1A-4B). In some implementations, the device 500 has a touch sensitive display 504, hereinafter referred to as a touch screen 504. Alternatively, or in addition to touch screen 504, device 500 also has a display and a touch-sensitive surface. As with devices 100 and 300, in some implementations, touch screen 504 (or touch-sensitive surface) optionally includes one or more intensity sensors for detecting the intensity of an applied contact (e.g., touch). One or more intensity sensors of the touch screen 504 (or touch sensitive surface) may provide output data representative of the intensity of the touch. The user interface of the device 500 may respond to touches based on the intensity of the touches, meaning that touches of different intensities may invoke different user interface operations on the device 500.
Exemplary techniques for detecting and processing touch intensity are found, for example, in the related applications: International Patent Application Serial No. PCT/US2013/040061, filed May 8, 2013, entitled "Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application," published as WIPO Publication No. WO/2013/169849, and International Patent Application Serial No. PCT/US2013/069483, filed November 11, 2013, entitled "Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships," published as WIPO Publication No. WO/2014/105276, each of which is hereby incorporated by reference in its entirety.
In some embodiments, the device 500 has one or more input mechanisms 506 and 508. The input mechanisms 506 and 508, if included, may be physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, the device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, may allow the device 500 to be attached to, for example, a hat, glasses, earrings, a necklace, a shirt, a jacket, a bracelet, a watch strap, a chain, trousers, a belt, a shoe, a purse, a backpack, or the like. These attachment mechanisms allow the user to wear the device 500.
Fig. 5B depicts an exemplary personal electronic device 500. In some embodiments, the apparatus 500 may include some or all of the components described with respect to fig. 1A, 1B, and 3. The device 500 has a bus 512 that operatively couples an I/O section 514 with one or more computer processors 516 and memory 518. The I/O portion 514 may be connected to a display 504, which may have a touch sensitive component 522 and optionally an intensity sensor 524 (e.g., a contact intensity sensor). In addition, the I/O portion 514 may be connected to a communication unit 530 for receiving application and operating system data using Wi-Fi, bluetooth, near Field Communication (NFC), cellular, and/or other wireless communication technologies. The device 500 may include input mechanisms 506 and/or 508. For example, the input mechanism 506 is optionally a rotatable input device or a depressible input device and a rotatable input device. In some examples, the input mechanism 508 is optionally a button.
In some examples, the input mechanism 508 is optionally a microphone. Personal electronic device 500 optionally includes various sensors, such as a GPS sensor 532, an accelerometer 534, an orientation sensor 540 (e.g., compass), a gyroscope 536, a motion sensor 538, and/or combinations thereof, all of which are operatively connected to I/O section 514.
The memory 518 of the personal electronic device 500 may include one or more non-transitory computer-readable storage media for storing computer-executable instructions that, when executed by the one or more computer processors 516, for example, may cause the computer processors to perform techniques described below, including process 700 (fig. 7). A computer-readable storage medium may be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with an instruction execution system, apparatus, and device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer readable storage medium may include, but is not limited to, magnetic storage devices, optical storage devices, and/or semiconductor storage devices. Examples of such storage devices include magnetic disks, optical disks based on CD, DVD, or blu-ray technology, and persistent solid state memories such as flash memory, solid state drives, etc. The personal electronic device 500 is not limited to the components and configuration of fig. 5B, but may include other components or additional components in a variety of configurations.
As used herein, the term "affordance" refers to a user-interactive graphical user interface object that is optionally displayed on a display screen of device 100, 300, and/or 500 (fig. 1A, 3, and 5A-5B). For example, an image (e.g., an icon), a button, and text (e.g., a hyperlink) optionally each constitute an affordance.
As used herein, the term "focus selector" refers to an input element for indicating the current portion of a user interface with which a user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when the cursor detects an input (e.g., presses an input) on a touch-sensitive surface (e.g., touch pad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) above a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted according to the detected input. In some implementations including a touch screen display (e.g., touch sensitive display system 112 in fig. 1A or touch screen 112 in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, the contact detected on the touch screen acts as a "focus selector" such that when an input (e.g., a press input by a contact) is detected on the touch screen display at the location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus moves from one region of the user interface to another region of the user interface without a corresponding movement of the cursor or movement of contact on the touch screen display (e.g., by moving the focus from one button to another button using tab or arrow keys), in which the focus selector moves according to movement of the focus between the different regions of the user interface. Regardless of the particular form that the focus selector takes, the focus selector is typically controlled by the user in order to deliver a user interface element (or contact on the touch screen display) that is interactive with the user of the user interface (e.g., by indicating to the device the element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a touchpad or touch screen), the position of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (rather than other user interface elements shown on the device display).
As used in the specification and claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, or 10 seconds) relative to a predefined event (e.g., after detecting the contact, before or after detecting lift-off of the contact, before or after detecting a start of movement of the contact, before or after detecting an end of the contact, and/or before or after detecting a decrease in intensity of the contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at half of the maximum of the intensities of the contact, a value at 90 percent of the maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by the user. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some implementations, a comparison between the characteristic intensity and one or more thresholds is used to determine whether to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
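The three-way example above can be sketched directly; the mean-based characteristic intensity and the threshold values are illustrative assumptions (the embodiments also contemplate maxima, percentile values, and so on):

    // Characteristic intensity as the mean of the sampled intensities.
    func characteristicIntensity(samples: [Double]) -> Double {
        guard !samples.isEmpty else { return 0 }
        return samples.reduce(0, +) / Double(samples.count)
    }

    // First operation if the first threshold is not exceeded; second if
    // only the first is exceeded; third if both are exceeded.
    func operation(for samples: [Double],
                   firstThreshold: Double = 0.3,
                   secondThreshold: Double = 0.7) -> String {
        let intensity = characteristicIntensity(samples: samples)
        if intensity <= firstThreshold { return "first operation" }
        if intensity <= secondThreshold { return "second operation" }
        return "third operation"
    }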
As used herein, an "installed application" refers to a software application that has been downloaded onto an electronic device (e.g., device 100, 300, and/or 500) and is ready to be started (e.g., turned on) on the device. In some embodiments, the downloaded application becomes an installed application using an installer that extracts program portions from the downloaded software package and integrates the extracted portions with the operating system of the computer system.
As used herein, the term "open application" or "executing application" refers to a software application having maintained state information (e.g., as part of device/global internal state 157 and/or application internal state 192). The open or executing application is optionally any of the following types of applications:
an active application that is currently displayed on a display screen of a device that is using the application;
A background application (or background process) that is not currently displayed but whose one or more processes are being processed by the one or more processors, and
A suspended or dormant application that is not running but has state information stored in memory (volatile and non-volatile, respectively) and available to resume execution of the application.
As used herein, the term "closed application" refers to a software application that does not have maintained state information (e.g., the state information of the closed application is not stored in the memory of the device). Accordingly, closing the application includes stopping and/or removing application processes of the application and removing state information of the application from memory of the device. Generally, when in a first application, opening a second application does not close the first application. The first application becomes a background application when the second application is displayed and the first application stops being displayed.
Attention is now directed to embodiments of a user interface ("UI") and associated processes implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
Fig. 6A-6M illustrate exemplary user interfaces for managing the display of information related to physical activity, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the process in fig. 7.
The left half of fig. 6A includes diagram 608. Diagram 608 depicts the position of user 600 relative to a body of water (e.g., an ocean, a pond, a diving pool, or a lake). As illustrated by diagram 608 in fig. 6A, user 600 is located in a vessel on the surface of the body of water (e.g., user 600 is not submerged in the body of water). Further, as illustrated in fig. 6A, user 600 wears computer system 602 on the wrist of user 600. At fig. 6A, user 600 and computer system 602 are not submerged in the body of water.
The right half of fig. 6A depicts computer system 602 in detail. At fig. 6A, computer system 602 displays a diving board user interface 604. As illustrated in fig. 6A, the diving board user interface 604 includes a plurality of application user interface objects 616. Each of the plurality of application user interface objects 616 corresponds to a respective application installed on the computer system 602. The computer system 602 launches the respective application in response to detecting a selection of the respective application user interface object of the plurality of application user interface objects 616. The plurality of application user interface objects 616 includes a diving application user interface object 616a. The diving application user interface object 616a corresponds to a diving application installed on the computer system 602. While computer system 602 is depicted as a smart watch, it should be appreciated that this is merely an example, and the techniques described herein also work with other types of computer systems (such as smartphones and/or dive computers). Throughout the discussion of FIGS. 6A-6M, various references are made to determining the depth of computer system 602. In some embodiments, computer system 602 uses one or more depth detection techniques (e.g., using one or more sensors integrated into computer system 602) to make determinations regarding the depth of computer system 602.
At fig. 6B, the positioning of user 600 transitions from being outside the body of water to being submerged within the body of water. At fig. 6B, a determination is made that computer system 602 is submerged in the body of water (e.g., via one or more sensors in communication (e.g., wired communication and/or wireless communication) with computer system 602). Because it is determined that computer system 602 is submerged within the body of water, computer system 602 displays a submerged user interface 610. That is, upon detecting that computer system 602 is submerged in the body of water, computer system 602 automatically (e.g., without intervening user input) displays submerged user interface 610. The submerged user interface 610 corresponds to the diving application installed on the computer system 602. In some embodiments, computer system 602 displays submerged user interface 610 in response to detecting an input (e.g., a tap input, a swipe input, and/or activation of a hardware control integrated into computer system 602) corresponding to selection of diving application user interface object 616a (e.g., as shown in fig. 6A). In some embodiments, computer system 602 automatically (e.g., without intervening user input) ceases display of submerged user interface 610 in response to determining that computer system 602 is no longer submerged in the body of water.
As illustrated in fig. 6B, the submerged user interface 610 includes a depth metric 618 and a temperature metric 620. Depth metric 618 indicates the current depth (e.g., measured in feet) of computer system 602 below the surface of the body of water. The temperature metric 620 indicates the current temperature of the body of water (e.g., measured in degrees Fahrenheit). In some embodiments, the temperature metric 620 indicates the current temperature of the body of water in degrees Celsius. In some embodiments, the depth metric 618 indicates the depth of the computer system 602 in meters. In some embodiments, the temperature metric 620 indicates the current temperature of the body of water in both degrees Fahrenheit and degrees Celsius. In some embodiments, the depth metric 618 indicates the current depth of the computer system 602 in both feet and meters. In some embodiments, the computer system 602 displays the temperature metric 620 above the depth metric 618.
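Since the depth and temperature metrics may be shown in feet and/or meters and in degrees Fahrenheit and/or Celsius, a small set of conversion helpers suffices. The following is a hypothetical Swift sketch; the function names are illustrative.

```swift
import Foundation

// Hypothetical unit helpers for the depth metric (618) and temperature
// metric (620); 1 foot is exactly 0.3048 m.
func meters(fromFeet ft: Double) -> Double { ft * 0.3048 }
func feet(fromMeters m: Double) -> Double { m / 0.3048 }
func celsius(fromFahrenheit f: Double) -> Double { (f - 32) * 5 / 9 }

print(String(format: "%.2f m", meters(fromFeet: 36)))         // "10.97 m" (the figures round to ~10.9)
print(String(format: "%.1f °C", celsius(fromFahrenheit: 74))) // "23.3 °C"
```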
Additionally, as illustrated in fig. 6B, the submerged user interface 610 includes a depth animation 612, a depth scale 626a, and a depth scale 626b. The depth scale 626a includes a plurality of lines, each line representing a respective depth measured in feet. The depths represented by depth scale 626a range from 0 feet to 120 feet. Similarly, the depth scale 626b includes a plurality of lines, each line representing a respective depth measured in meters. The depths represented by depth scale 626b range from 0 meters to 40 meters.
Computer system 602 displays the leftmost portion of depth animation 612 at a point on depth scale 626a corresponding to the current depth of computer system 602, and computer system 602 displays the rightmost portion of depth animation 612 at a point on depth scale 626b corresponding to the current depth of computer system 602. As discussed in more detail below, the computer system 602 moves the display of the depth animation 612 on the display of the computer system 602 based on the detected depth of the computer system 602. In some implementations, the computer system 602 displays the depth animation 612 such that the depth animation 612 simulates the dynamic properties of a water meniscus (e.g., the display of the depth animation 612 depends on the rotational orientation of the computer system 602).
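The endpoint placement described above amounts to a linear mapping from depth to a position along each scale. A minimal Swift sketch follows, assuming each scale is rendered over a fixed height in points; all names and dimensions are illustrative.

```swift
// Maps a depth to a vertical offset along a scale of a given physical range.
// Each endpoint of depth animation 612 is positioned against its own scale:
// the left endpoint against the 0-120 ft scale (626a), the right endpoint
// against the 0-40 m scale (626b).
func offset(depth: Double, scaleMax: Double, scaleHeight: Double) -> Double {
    let clamped = min(max(depth, 0), scaleMax)   // keep within the scale's range
    return scaleHeight * (clamped / scaleMax)    // 0 at the top, scaleHeight at max
}

let h = 200.0                                               // assumed scale height in points
print(offset(depth: 36, scaleMax: 120, scaleHeight: h))     // 60.0 (feet scale)
print(offset(depth: 10.97, scaleMax: 40, scaleHeight: h))   // ≈ 54.9 (meter scale)
```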
At fig. 6B, because it is determined that computer system 602 is submerged in the body of water, computer system 602 enters a water lock state. When the computer system 602 is in the water lock state, the computer system 602 has reduced functionality. That is, when the computer system 602 is in a normal operating state, the computer system 602 performs one or more respective operations in response to detecting a respective user input (e.g., a touch input on a display of the computer system 602). However, while computer system 602 is in the water lock state, computer system 602 forgoes performing the one or more respective operations in response to detecting the respective user input (e.g., while computer system 602 is in the water lock state, computer system 602 ignores touch inputs on the display of computer system 602).
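A minimal sketch of the water lock behavior follows, assuming a simple state enum and touch handler; the names are illustrative, and the disclosure does not specify an implementation.

```swift
// Hypothetical operating states and touch gating. While locked, touch input
// is ignored so that water on the display cannot trigger operations.
enum OperatingState { case normal, waterLock, diveLock }

struct TouchGate {
    var state: OperatingState = .normal

    /// Performs `operation` only in the normal state; returns whether it ran.
    mutating func handleTouch(_ operation: () -> Void) -> Bool {
        guard state == .normal else { return false }  // water/dive lock: ignore
        operation()
        return true
    }
}

var gate = TouchGate(state: .waterLock)
print(gate.handleTouch { print("launch app") })  // false; the touch is ignored
```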
At fig. 6C, user 600 descends to a greater depth within the body of water. At fig. 6C, computer system 602 is determined to be 36 feet below the surface of the body of water (e.g., as indicated by depth metric 618). At fig. 6C, because it is determined that the depth of computer system 602 is 36 feet below the surface of the body of water, computer system 602 updates the display of depth animation 612. That is, at fig. 6C, the computer system 602 moves the leftmost portion of the depth animation 612 to a position on the depth scale 626a corresponding to 36 feet. Further, at fig. 6C, the computer system 602 moves the rightmost portion of the depth animation 612 to a position (e.g., 10.9 meters) on the depth scale 626b that corresponds to 36 feet. The display of both the depth metric 618 and the temperature metric 620 is dynamic. Accordingly, at fig. 6C, because it is determined that the depth of computer system 602 is 36 feet, computer system 602 updates the display of depth metric 618 to indicate that the current depth of computer system 602 is 36 feet. Further, at fig. 6C, the temperature of the body of water is determined to be 74 degrees Fahrenheit. Because the temperature of the body of water is determined to be 74 degrees Fahrenheit, computer system 602 updates the display of temperature metric 620 to indicate the detected temperature of the body of water.
At fig. 6C, it is determined that the depth of computer system 602 is greater than a predetermined depth threshold (e.g., 4 feet, 6 feet, 8 feet, or 10 feet). Because it is determined that the depth of computer system 602 is greater than the predetermined depth threshold, computer system 602 initiates tracking and recording of metrics for a first diving period of user 600. The computer system 602 starts a diving timer as part of tracking and recording the metrics for the first diving period. The diving timer records the amount of time that the computer system 602 is submerged at a depth greater than the predetermined depth threshold. Further, because it is determined that the depth of computer system 602 is greater than the predetermined depth threshold, computer system 602 displays time metric 624. The time metric 624 indicates an amount of time (e.g., in minutes and seconds) that the computer system 602 has been submerged at a depth greater than the predetermined depth threshold. In some embodiments, time metric 624 indicates an amount of time in seconds (e.g., rather than minutes and seconds) that computer system 602 has been submerged at a depth greater than the predetermined depth threshold. In some embodiments, time metric 624 indicates an amount of time computer system 602 has been submerged within the body of water. In some embodiments, computer system 602 displays time metric 624 in accordance with a determination that computer system 602 is submerged in the body of water.
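The threshold-crossing logic that starts the diving timer could be modeled as follows. This is a sketch under assumed names and an assumed 6-foot threshold, not the disclosure's implementation.

```swift
import Foundation

// Hypothetical dive-period tracker: the timer starts when a depth sample
// first exceeds the predetermined depth threshold (e.g., 6 ft).
final class DivePeriodTracker {
    let thresholdFeet: Double
    private(set) var startDate: Date?

    init(thresholdFeet: Double = 6) { self.thresholdFeet = thresholdFeet }

    /// Feed each new depth sample; begins the diving timer on first crossing.
    func update(depthFeet: Double, at date: Date = Date()) {
        if startDate == nil, depthFeet > thresholdFeet {
            startDate = date
        }
    }

    /// Elapsed dive time in seconds, backing the time metric (624).
    func elapsed(asOf date: Date = Date()) -> TimeInterval {
        guard let start = startDate else { return 0 }
        return date.timeIntervalSince(start)
    }
}
```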
As illustrated in fig. 6C, computer system 602 displays depth animation 612 behind depth metric 618, time metric 624, and temperature metric 620. While computer system 602 displays depth animation 612 behind depth metric 618, time metric 624, and temperature metric 620, computer system 602 displays the depth metric 618, the time metric 624, and the temperature metric 620 in white, in contrast with the black of the depth animation 612. In some embodiments, while computer system 602 displays depth animation 612 behind depth metric 618, time metric 624, and temperature metric 620, the computer system 602 displays the depth metric 618, the time metric 624, and the temperature metric 620 in a non-white color (e.g., red, blue, green, and/or yellow) that contrasts with the display of the depth animation 612.
At fig. 6C, because it is determined that the depth of computer system 602 is greater than the predetermined depth threshold, computer system 602 enters a dive lock state (e.g., and exits the water lock state). While in the dive lock state, the computer system 602 suppresses alerts (e.g., visual alerts, audio alerts, and/or vibration alerts) that the computer system 602 would otherwise output if the computer system 602 were in a normal operating state. Further, when the computer system 602 is in the dive lock state, the computer system 602 illuminates the display of the computer system (e.g., via one or more light sources with which the computer system 602 is in communication). The computer system 602 exits the dive lock state (e.g., and transitions to the normal operating state or the water lock state described above) in response to detecting a user input corresponding to activation of a hardware button (e.g., a rotatable crown button and/or a side button) integrated into the computer system 602. When the computer system 602 is in the dive lock state, the computer system 602 performs a corresponding operation (e.g., launches an application, displays an animation, and/or displays a corresponding user interface) in response to detecting that a hardware button of the computer system 602 is pressed down. In contrast, when the computer system 602 is not in the dive lock state (e.g., the computer system 602 is in the normal operating state or the water lock state), the computer system 602 performs the corresponding operation in response to detecting that the hardware button is pressed and then released (e.g., the hardware button returns to its original position). Additionally, when the computer system 602 is in the dive lock state, the computer system 602 does not perform any operations in response to detecting a long press (e.g., a press and hold) of a hardware button and/or in response to detecting a series of one or more inputs (e.g., a double press of a hardware button).
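The press-down versus press-and-release distinction can be captured in a small event handler. The following Swift sketch uses an illustrative event model and state enum; none of these names come from the disclosure.

```swift
// Hypothetical button semantics: under dive lock, the operation fires on
// press-down; otherwise it fires on press-then-release.
enum OperatingState { case normal, waterLock, diveLock }
enum ButtonEvent { case pressedDown, released }

struct HardwareButtonHandler {
    var state: OperatingState
    var operation: () -> Void

    func handle(_ event: ButtonEvent) {
        switch (state, event) {
        case (.diveLock, .pressedDown): operation()  // act immediately under dive lock
        case (.diveLock, .released):    break        // already handled on press-down
        case (_, .released):            operation()  // normal/water lock: act on release
        case (_, .pressedDown):         break
        }
    }
}
```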
At fig. 6D, user 600 descends to a greater depth within the body of water. At fig. 6D, it is determined that computer system 602 is 60 feet below the surface of the body of water (e.g., as indicated by depth metric 618). At fig. 6D, because computer system 602 is determined to be 60 feet below the surface of the body of water, computer system 602 updates the display of depth animation 612. That is, at fig. 6D, the computer system 602 moves the leftmost portion of the depth animation 612 to a position on the depth scale 626a corresponding to 60 feet. Further, at fig. 6D, the computer system 602 moves the rightmost portion of the depth animation 612 to a position (e.g., 18.2 meters) on the depth scale 626b that corresponds to 60 feet.
At fig. 6D, computer system 602 does not display depth animation 612 behind depth metric 618 (e.g., because computer system 602 has moved the display of depth animation 612 downward on the display of computer system 602). Because the depth animation 612 is not displayed behind the depth metric 618, the computer system 602 changes the display color of the depth metric 618 from white (e.g., as shown in fig. 6C) to black. At fig. 6D, computer system 602 continues to display time metric 624 and temperature metric 620 in white, as computer system 602 displays depth animation 612 behind both time metric 624 and temperature metric 620. In some embodiments, when the computer system 602 does not display the depth animation 612 behind the depth metric 618, the computer system 602 displays the depth metric 618 in a non-black color (e.g., yellow, red, green, blue, and/or orange).
At fig. 6E, user 600 descends to a greater depth within the body of water. At fig. 6E, computer system 602 is determined to be 120 feet below the surface of the body of water (e.g., as indicated by depth metric 618). At fig. 6E, because computer system 602 is determined to be 120 feet below the surface of the body of water, computer system 602 updates the display of depth animation 612. That is, at fig. 6E, the computer system 602 moves the leftmost portion of the depth animation 612 to a position on the depth scale 626a corresponding to 120 feet. Further, at fig. 6E, the computer system 602 moves the rightmost portion of the depth animation 612 to a position on the depth scale 626b corresponding to 120 feet (e.g., 36.5 meters).
At fig. 6E, computer system 602 does not display depth animation 612 behind time metric 624, depth metric 618, and temperature metric 620 (e.g., because computer system 602 has moved the display of depth animation 612 downward on the display of computer system 602). Because computer system 602 does not display depth animation 612 behind time metric 624, depth metric 618, and temperature metric 620, computer system 602 uses black (e.g., as opposed to white, as shown in fig. 6C) to display each of time metric 624, depth metric 618, and temperature metric 620. Similar to the depth metric 618, when the depth animation 612 is displayed behind the time metric 624 and the temperature metric 620, the computer system 602 displays the time metric 624 and the temperature metric 620 using colors that contrast with the depth animation 612. When depth animation 612 is not displayed behind time metric 624 and temperature metric 620, computer system 602 changes the colors of time metric 624 and temperature metric 620. In some embodiments, when computer system 602 displays depth animation 612 behind temperature metric 620 but not time metric 624, computer system 602 displays time metric 624 in black and temperature metric 620 in white.
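Across figs. 6C-6E the color rule amounts to: a metric overlapped by the (black) depth animation is drawn in white, and in black otherwise. A hedged Swift sketch with simplified one-dimensional geometry follows; the actual layout is not specified by the disclosure.

```swift
// Hypothetical contrast rule: the depth animation fills the screen from
// `animationTop` down to the bottom; metrics inside that region are white.
enum MetricColor { case white, black }

func color(forMetricTop metricTop: Double, animationTop: Double) -> MetricColor {
    metricTop >= animationTop ? .white : .black
}

// As the animation moves down (animationTop grows), metrics above it flip
// to black, matching the progression from fig. 6C to fig. 6E.
print(color(forMetricTop: 40, animationTop: 20))   // white (overlapped)
print(color(forMetricTop: 40, animationTop: 120))  // black (not overlapped)
```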
At fig. 6F, user 600 descends to a greater depth within the body of water. At fig. 6F, it is determined that the depth of computer system 602 is greater than a maximum depth threshold (e.g., 110 feet, 120 feet, 125 feet, 130 feet, 140 feet, or 150 feet). Because it is determined that the depth of computer system 602 is greater than the maximum depth threshold, computer system 602 automatically (e.g., without intervening user input) displays depth alert user interface 630. Computer system 602 replaces the display of submerged user interface 610 with the display of depth alert user interface 630. Depth alert user interface 630 provides an indication that the depth of computer system 602 is greater than the maximum depth threshold. In some embodiments, computer system 602 displays depth alert user interface 630 rhythmically (e.g., repeatedly flashing). In some embodiments, computer system 602 activates one or more light sources in communication with computer system 602 while depth alert user interface 630 is displayed.
Further, at fig. 6F, because computer system 602 is determined to be at a depth greater than the maximum depth threshold, computer system 602 outputs continuous haptic feedback 636 upon detecting that the depth of computer system 602 is greater than the maximum depth threshold. In some embodiments, when the depth of computer system 602 is greater than the maximum depth threshold, the intensity of continuous haptic feedback 636 is proportional to the depth of computer system 602 (e.g., the intensity of continuous haptic feedback 636 increases as the depth of computer system 602 increases and decreases as the depth of computer system 602 decreases). In some embodiments, the computer system 602 outputs discrete haptic feedback (e.g., a single vibratory output) in response to determining that the depth of the computer system 602 is greater than the maximum depth threshold.
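For the embodiments in which haptic intensity is proportional to depth beyond the maximum depth threshold, one possible mapping to a normalized intensity is sketched below; the threshold and scaling constants are assumptions.

```swift
// Hypothetical intensity curve: zero at or above the maximum depth threshold,
// ramping linearly to full intensity over an assumed 20 ft of overage.
func hapticIntensity(depthFeet: Double,
                     maxThresholdFeet: Double = 130,
                     fullScaleOverageFeet: Double = 20) -> Double {
    guard depthFeet > maxThresholdFeet else { return 0 }
    return min(1, (depthFeet - maxThresholdFeet) / fullScaleOverageFeet)
}

print(hapticIntensity(depthFeet: 140))  // 0.5
print(hapticIntensity(depthFeet: 125))  // 0.0 (shallower than the threshold)
```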
At fig. 6G, user 600 ascends within the body of water. At fig. 6G, it is determined that the depth of computer system 602 transitions from greater than the maximum depth threshold to less than the maximum depth threshold. Because it is determined that the depth of computer system 602 transitions from greater than the maximum depth threshold to less than the maximum depth threshold, computer system 602 ceases display of depth alert user interface 630 and redisplays submerged user interface 610. Computer system 602 automatically (e.g., without intervening user input) ceases display of depth alert user interface 630 and redisplays submerged user interface 610 when the depth of computer system 602 returns to less than the maximum depth threshold. Further, at fig. 6G, because it is determined that the depth of computer system 602 transitions from greater than the maximum depth threshold to less than the maximum depth threshold, computer system 602 ceases to output continuous haptic feedback 636.
At fig. 6H, user 600 descends to a greater depth within the body of water. At fig. 6H, it is determined that the depth of computer system 602 is greater than the maximum depth threshold. Because it is determined that the depth of computer system 602 is greater than the maximum depth threshold, computer system 602 redisplays depth alert user interface 630 and outputs continuous haptic feedback 636 (e.g., as described above at fig. 6F). At fig. 6H, computer system 602 detects input 650h corresponding to activation of side button 638.
At fig. 6I, in response to detecting input 650h, computer system 602 ceases display of depth alert user interface 630 and redisplays submerged user interface 610. At fig. 6I, computer system 602 remains at a depth greater than the maximum depth threshold (e.g., the depth of computer system 602 does not change between fig. 6H and fig. 6I). Even though the depth of computer system 602 is greater than the maximum depth threshold, computer system 602 ceases display of depth alert user interface 630 in response to detecting input 650h.
While the depth of computer system 602 is greater than the maximum depth threshold, the submerged user interface 610 does not include depth metric 618 when computer system 602 displays submerged user interface 610. Instead, as illustrated in fig. 6I, when the depth of the computer system 602 is greater than the maximum depth threshold, the submerged user interface 610 includes a depth alert 634. Further, when the depth of computer system 602 is greater than the maximum depth threshold, the submerged user interface 610 does not include depth animation 612. Instead, when the depth of the computer system 602 is greater than the maximum depth threshold, the computer system 602 displays the background of the submerged user interface 610 as a solid color (e.g., yellow, black, red, and/or orange). In some embodiments, computer system 602 continues to track the depth of computer system 602 when the depth of computer system 602 is greater than the maximum depth threshold. In some embodiments, the computer system 602 displays the depth metric 618 when the depth of the computer system 602 is greater than the maximum depth threshold.
At fig. 6J, user 600 ascends within the body of water. At fig. 6J, it is determined that the depth of computer system 602 transitions from greater than the predetermined depth threshold to less than the predetermined depth threshold. At fig. 6J, because it is determined that the depth of computer system 602 transitions from greater than the predetermined depth threshold to less than the predetermined depth threshold, computer system 602 stops tracking metrics for the first diving period. Accordingly, computer system 602 descending past the predetermined depth threshold marks the beginning of the first diving period, and computer system 602 ascending past the predetermined depth threshold marks the end of the first diving period. In some embodiments, because it is determined that the depth of computer system 602 transitions from greater than the predetermined depth threshold to less than the predetermined depth threshold, computer system 602 transitions from the dive lock state to the water lock state.
As illustrated in fig. 6J, because it is determined that the depth of computer system 602 transitions from greater than the predetermined depth threshold to less than the predetermined depth threshold, computer system 602 displays summary user interface 640. Summary user interface 640 includes summaries of various metrics tracked and recorded by computer system 602 during the first diving period. Summary user interface 640 includes a maximum depth metric 642, a diving time metric 644, and a water temperature range metric 646. The maximum depth metric 642 indicates the deepest depth of the computer system 602 detected during the first diving period. The diving time metric 644 indicates the amount of time that has elapsed during the first diving period. The water temperature range metric 646 indicates the range of water temperatures detected during the first diving period. In some embodiments, summary user interface 640 includes additional information tracked by computer system 602 during the first diving period (e.g., the geographic location of the first diving period, the amount of oxygen consumed by user 600 during the first diving period, the heart rate range of user 600 during the first diving period, and/or the water pressure range during the first diving period). In some embodiments, computer system 602 stops display of summary user interface 640 in response to determining that computer system 602 is no longer submerged in the body of water. In some embodiments, in accordance with a determination that computer system 602 is no longer submerged in the body of water, computer system 602 sends instructions to an external device (e.g., a smart phone owned by user 600) that cause the external device to display summary user interface 640. In some embodiments, in accordance with a determination that the depth of computer system 602 transitions from greater than a predetermined depth threshold to less than a predetermined depth threshold, computer system 602 sends instructions to an external device (e.g., a smart phone owned by user 600) that cause the external device to display summary user interface 640.
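The data behind summary user interface 640 reduces to a running maximum depth, the period's start and end times, and a running temperature range. A hypothetical Swift model follows; the field names are illustrative and not from the disclosure.

```swift
import Foundation

// Hypothetical record backing the maximum depth metric (642), diving time
// metric (644), and water temperature range metric (646).
struct DiveSummary {
    var maxDepthFeet = 0.0
    var start: Date? = nil
    var end: Date? = nil
    var tempRangeF: ClosedRange<Double>? = nil

    mutating func record(depthFeet: Double, tempF: Double, at date: Date) {
        if start == nil { start = date }
        end = date
        maxDepthFeet = max(maxDepthFeet, depthFeet)
        if let range = tempRangeF {
            tempRangeF = min(range.lowerBound, tempF)...max(range.upperBound, tempF)
        } else {
            tempRangeF = tempF...tempF
        }
    }

    /// Elapsed time for the diving period, in seconds.
    var diveTime: TimeInterval {
        guard let s = start, let e = end else { return 0 }
        return e.timeIntervalSince(s)
    }
}

var summary = DiveSummary()
summary.record(depthFeet: 36, tempF: 74, at: Date())
```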
At fig. 6K, user 600 descends within the body of water. At fig. 6K, it is determined that the depth of computer system 602 transitions from less than the predetermined depth threshold to greater than the predetermined depth threshold. Because it is determined that the depth of computer system 602 transitions from less than the predetermined depth threshold to greater than the predetermined depth threshold, computer system 602 begins tracking metrics (e.g., as described above with respect to fig. 6C) for a second diving period (e.g., different from the first diving period). Further, because it is determined that the depth of computer system 602 transitions from less than the predetermined depth threshold to greater than the predetermined depth threshold, computer system 602 ceases displaying summary user interface 640 and redisplays submerged user interface 610 (e.g., as described above with respect to figs. 6B-6E). As illustrated in fig. 6K, the submerged user interface 610 includes time metric 624. At fig. 6K, the time metric 624 indicates an amount of time that the computer system 602 is at a depth greater than the predetermined depth threshold during the second diving period. When the depth of computer system 602 transitions from less than the predetermined depth threshold to greater than the predetermined depth threshold, computer system 602 automatically (e.g., without an intervening user input) ceases display of summary user interface 640 and displays submerged user interface 610.
At fig. 6K, it is determined that the battery level of computer system 602 is below a battery level threshold (e.g., 10%, 15%, 20%, 25%, or 30%). Because it is determined that the battery level of computer system 602 is below the battery level threshold, computer system 602 displays battery level user interface object 652 within submerged user interface 610. The battery level user interface object 652 indicates the current battery level of the computer system 602. When the submerged user interface 610 includes the battery level user interface object 652, the submerged user interface 610 does not include the temperature metric 620. In some embodiments, computer system 602 ceases displaying battery level user interface object 652 in accordance with a determination that the battery level of computer system 602 is above the battery level threshold. In some embodiments, the computer system 602 changes the appearance of the battery level user interface object 652 (e.g., the computer system 602 changes the color and/or size of the battery level user interface object 652, and/or displays the battery level user interface object 652 as flashing) in accordance with a determination that the battery level of the computer system 602 is below a minimum battery level threshold (e.g., the minimum battery level threshold corresponds to a lower battery level than the battery level corresponding to the battery level threshold). In some embodiments, when the battery level of the computer system 602 is below the battery level threshold, the submerged user interface 610 includes both the temperature metric 620 and the battery level user interface object 652.
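The corner-slot behavior at fig. 6K (the battery object replacing the temperature metric below a threshold) is a simple conditional. A sketch with an assumed 20% threshold and illustrative names:

```swift
// Hypothetical slot selection: below the battery threshold, the battery
// level object (652) takes the place of the temperature metric (620).
enum CornerSlot { case temperatureMetric, batteryLevelObject }

func cornerSlot(batteryFraction: Double, threshold: Double = 0.20) -> CornerSlot {
    batteryFraction < threshold ? .batteryLevelObject : .temperatureMetric
}

print(cornerSlot(batteryFraction: 0.15))  // batteryLevelObject
print(cornerSlot(batteryFraction: 0.80))  // temperatureMetric
```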
At fig. 6L, user 600 ascends within the body of water. At fig. 6L, it is determined that the depth of computer system 602 transitions from greater than the predetermined depth threshold to less than the predetermined depth threshold. At fig. 6L, because it is determined that the depth of computer system 602 transitions from greater than the predetermined depth threshold to less than the predetermined depth threshold, computer system 602 stops tracking and recording metrics for the second diving period (e.g., as described above with respect to fig. 6J) and displays summary user interface 640. At fig. 6L, summary user interface 640 includes a summary of the metrics tracked by computer system 602 during the second diving period (e.g., as described above with respect to fig. 6J). At fig. 6L, computer system 602 detects an input 650l corresponding to activation of side button 638. In some embodiments, summary user interface 640 includes a summary of metrics tracked and recorded by computer system 602 during both the first and second diving periods.
At fig. 6M, in response to detecting input 650l, computer system 602 ceases display of summary user interface 640 and redisplays submerged user interface 610. That is, computer system 602 ceases display of summary user interface 640 in response to detecting activation of a hardware button or in response to detecting that the depth of computer system 602 transitions from less than the predetermined depth threshold to greater than the predetermined depth threshold (e.g., as described above with respect to fig. 6K). In some embodiments, computer system 602 ceases display of summary user interface 640 in response to detecting a rotation and/or a press of a rotatable crown hardware button integrated into computer system 602.
Fig. 7 is a flow chart illustrating a method for managing the display of information related to physical activity using a computer system, according to some embodiments. The method 700 is performed at a computer system (e.g., 602) (e.g., a smart watch, wearable electronic device, smart phone, and/or tablet) in communication with a display generation component (e.g., a display controller and/or touch-sensitive display system) and one or more sensors (e.g., an accelerometer, a gyroscope, a water sensor, and/or a depth sensor) (e.g., a computer system worn by a user). Some operations in method 700 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 700 provides an intuitive way for managing the display of information related to physical activity. The method reduces the cognitive burden on the user to manage the display of information related to physical activity, thereby creating a more efficient human-machine interface. For battery-powered computing devices, enabling a user to more quickly and efficiently manage the display of information related to physical activity saves power and increases the time interval between battery charges.
When the computer system is submerged (e.g., submerged in water (e.g., a lake, pond, or ocean)) (e.g., the computer system is submerged via a diver initiating a dive) (e.g., at a distance from the surface of the water), the computer system displays (702), via the display generation component, a submerged user interface (e.g., 610) (e.g., a user interface corresponding to a diving application installed on the computer system) (e.g., a user interface corresponding to a diving exercise tracking function).
While displaying the submerged user interface, the computer system detects (704), via one or more sensors, a first depth at which the computer system is submerged.
In response to detecting (706) the first depth at which the computer system is submerged: in accordance with (708) a determination that the first depth is less than a predetermined depth threshold (e.g., the depth of computer system 602 at fig. 6B) (e.g., 3 inches, 6 inches, 1 foot, 2 feet, 5 feet, or 10 feet), the computer system displays, via the display generation component, a first set of metrics (e.g., 618 and 620) regarding the submersion of the computer system (e.g., a depth of the computer system (e.g., in centimeters, inches, feet, and/or meters), a temperature of the water (e.g., in degrees Fahrenheit and/or degrees Celsius), an amount of time the computer system has been submerged, the time of day, the amount of oxygen remaining in the diver's tank, and/or the water pressure) (e.g., the first set of metrics is displayed as part of displaying the submerged user interface) (e.g., the first set of metrics is updated in real time); and in accordance with (710) a determination that the first depth is greater than the predetermined depth threshold (e.g., the depth of computer system 602 at fig. 6C), the computer system displays, via the display generation component, a second set of metrics (e.g., 618, 620, and 624) regarding the submersion of the computer system, the second set of metrics being different from the first set of metrics (e.g., the second set of metrics is updated in real time) (e.g., the second set of metrics includes one or more metrics included in the first set of metrics) (e.g., the depth of the computer system, the temperature of the water, the amount of time the computer system has been submerged, the time of day, the amount of oxygen remaining in the diver's tank, and/or the water pressure). In some embodiments, the submerged user interface is displayed in response to the computer system detecting that the computer system is submerged in water. In some embodiments, the submerged user interface is displayed in response to the computer system detecting an input corresponding to a selection of a user interface element corresponding to a diving application installed on the computer system. In some embodiments, the computer system stops displaying the second set of metrics and displays the first set of metrics in response to detecting that the computer system is no longer submerged at a depth greater than the predetermined depth threshold. In some embodiments, the display of the metrics contrasts with the background of the submerged user interface (e.g., the background of the submerged user interface is black and the metrics are white, or vice versa). In some embodiments, respective metrics in the first and/or second sets of metrics are displayed at different sizes (e.g., the time metric is displayed at a size greater than the water temperature metric). In some embodiments, respective metrics in the first and/or second sets of metrics are displayed in different colors (e.g., the time metric is displayed in a first color (e.g., red, yellow, orange, black, and/or white) and the water temperature metric is displayed in a second color different from the first color). In some embodiments, the first set of metrics and/or the second set of metrics includes a time metric that indicates a total time that the computer system has been submerged in water (e.g., at a depth less than and/or greater than a predetermined depth).
In some embodiments, the color of the respective metrics in the first set of metrics and/or the second set of metrics changes color (e.g., sequentially) as the depth to which the computer system is submerged increases (e.g., from white to black, and vice versa). In some embodiments, the color of the background of the submerged user interface changes as the depth to which the computer system is submerged increases (e.g., from black to white, and vice versa). Displaying the respective set of metrics when the set of prescribed conditions is met (e.g., the computer system is at a depth greater than or less than a predetermined depth threshold) allows the computer system to automatically perform a display operation that provides the user with a particular set of information based on whether the user is engaged in a diving activity, the computer system performing the operation when the set of conditions is met without further user input. Displaying the respective set of metrics based on the depth of the computer system provides visual feedback to the user regarding the current depth of the computer system, thereby providing improved visual feedback.
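Items 708 and 710 of method 700 branch on a single comparison. A minimal sketch of that branch follows, with illustrative metric identifiers and an assumed threshold:

```swift
// Hypothetical metric-set selection for method 700: below the predetermined
// depth threshold, show the first set (e.g., 618 and 620); at or beyond it,
// show the second set, which adds the dive time metric (624).
enum DiveMetric { case depth, waterTemperature, diveTime }

func metricSet(forDepthFeet depth: Double, thresholdFeet: Double = 6) -> [DiveMetric] {
    depth < thresholdFeet
        ? [.depth, .waterTemperature]
        : [.depth, .waterTemperature, .diveTime]
}

print(metricSet(forDepthFeet: 2))   // first set
print(metricSet(forDepthFeet: 36))  // second set
```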
In some embodiments, the first set of metrics includes a water depth metric (e.g., 618) (e.g., a representation of a current depth of the computer system in water (e.g., text and/or graphical representation)) (e.g., water depth metric shown in inches, feet, and/or meters) and a water temperature metric (e.g., 620) (e.g., a representation of a current real-time temperature of water in which the computer system is submerged (e.g., text and/or graphical representation)) (e.g., temperature metric shown in degrees fahrenheit and/or celsius).
In some embodiments, the second set of metrics includes a water depth metric (e.g., 618) (e.g., a representation of a current depth of the computer system in water (e.g., text and/or graphical representation)) (e.g., water depth metric shown in inches, feet, and/or meters), a water temperature metric (e.g., 620) (e.g., a representation of a current real-time temperature of water in which the computer system is submerged (e.g., text and/or graphical representation)) (e.g., temperature metric shown in degrees fahrenheit and/or degrees celsius), and a diving time metric (e.g., 624) (e.g., a representation of an amount of time (e.g., text and/or graphical representation) that the computer system has been submerged at a depth greater than a predetermined depth threshold).
In some embodiments, in response to detecting the first depth at which the computer system (e.g., 602) is submerged, and in accordance with a determination that the first depth is greater than the predetermined depth threshold (e.g., as described above with reference to fig. 6C), the computer system (e.g., automatically (e.g., without intervening user input)) starts a diving timer (e.g., 624) (e.g., the diving timer tracks the number of hours, minutes, and/or seconds for which the computer system is submerged below the predetermined depth threshold) (e.g., the diving timer is displayed concurrently with various other metrics (e.g., the water depth metric and/or the water temperature metric)). In some embodiments, while the diving timer is active (e.g., and while the computer system is submerged at a depth greater than the predetermined depth threshold), the computer system detects a first depth change of the computer system from the first depth to a second depth (e.g., as described above with reference to fig. 6J) (e.g., the second depth is shallower than the first depth). In some embodiments, in response to detecting the first depth change of the computer system from the first depth to the second depth, and in accordance with a determination that the second depth of the computer system is less than the predetermined depth threshold (e.g., as described above with reference to fig. 6J) (e.g., the first depth is greater than the predetermined depth threshold), the computer system stops operation of the diving timer (e.g., the computer system maintains operation of the diving timer in accordance with a determination that the second depth of the computer system is greater than the predetermined depth threshold). In some embodiments, as part of stopping operation of the diving timer, the computer system displays the total amount of time the computer system was submerged at a depth greater than the predetermined threshold. In some embodiments, the computer system restarts the diving timer in response to detecting that the computer system is re-submerged at a depth greater than the predetermined depth threshold. Stopping operation of the diving timer in response to detecting the first depth change of the computer system allows a user to control operation of the diving timer without displaying additional controls, which provides additional control options without cluttering the user interface. Stopping operation of the diving timer when a prescribed condition is met (e.g., in accordance with a determination that the second depth of the computer system is less than the predetermined depth threshold) allows the computer system to automatically control execution of the diving timer such that the diving timer is active only while the user is engaged in diving activity, the computer system performing the operation without further user input when the set of conditions is met.
In some embodiments, in response to detecting the first depth at which the computer system (e.g., 602) is submerged: in accordance with a determination that the first depth is less than the predetermined depth threshold (e.g., the depth of computer system 602 at fig. 6B), the computer system enters a first operational state (e.g., as described above with reference to fig. 6B) (e.g., a water lock state) (e.g., a state in which the functionality of the computer system is limited (e.g., the computer system is not responsive to touch input while in the first operational state)); and in accordance with a determination that the first depth is greater than the predetermined depth threshold (e.g., the depth of computer system 602 at fig. 6C), the computer system enters a second operational state (e.g., as described above with reference to fig. 6C) (e.g., a dive lock state) (e.g., a state in which the functionality of the computer system is limited (e.g., a radio (e.g., a GPS and/or cellular radio) integrated within the computer system is inactive, and/or a heart rate sensor and/or a blood oxygen sensor is deactivated)) (e.g., the functionality of the computer system is more limited while the computer system is in the second operational state than while in the first operational state). In some embodiments, the computer system exits the first operational state in response to detecting that the computer system is no longer submerged. In some embodiments, the computer system transitions from the first operational state to the second operational state in response to detecting that the depth of the computer system transitions from less than the predetermined depth threshold to greater than the predetermined depth threshold. In some embodiments, the computer system transitions from the second operational state to the first operational state in response to detecting that the depth of the computer system transitions from greater than the predetermined depth threshold to less than the predetermined depth threshold. Entering the first operational state or the second operational state when the set of conditions is met (e.g., the depth of the computer system is less than or greater than the predetermined depth threshold) allows the computer system to automatically control the operational state of the computer system such that the computer system is in the proper operational state based on the depth of the computer system, the computer system performing operations when the set of conditions is met without further user input.
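The two operational states and their depth-driven transitions form a small state function. A sketch under the same assumptions as the earlier lock-state examples (names and threshold are illustrative):

```swift
// Hypothetical depth-driven state selection: submerged but shallower than
// the threshold -> first operational state (water lock); deeper than the
// threshold -> second operational state (dive lock); not submerged -> normal.
enum LockState { case normal, waterLock, diveLock }

func lockState(submerged: Bool, depthFeet: Double, thresholdFeet: Double = 6) -> LockState {
    guard submerged else { return .normal }
    return depthFeet > thresholdFeet ? .diveLock : .waterLock
}

print(lockState(submerged: true, depthFeet: 2))   // waterLock
print(lockState(submerged: true, depthFeet: 36))  // diveLock
```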
In some embodiments, a computer system detects an event (e.g., a notification received from an external computer system, a text message received from an application installed on the computer system, a notification generated corresponding to an application installed on the computer system, an email received, and/or a countdown timer completed) corresponding to an alert (e.g., 636) (e.g., a visual alert, a tactile alert, and/or an audio alert), and the computer system (e.g., 602) is configured to output an alert when the event is detected (e.g., when the computer system is in a standard (e.g., normal) operating state, the computer system outputs the alert after the event is detected). In some embodiments, in response to detecting the event, and in accordance with a determination that the computer system (e.g., 602) is in the second operational state (e.g., as described above with reference to fig. 6C), the computer system suppresses the alert (e.g., the computer system does not output the alert), and in accordance with a determination that the computer system is not in the second operational state (e.g., the computer system is in a standard (e.g., normal) operational state), the computer system outputs the alert (e.g., the computer system outputs the visual alert, the computer system outputs the audio alert, and/or the computer system outputs the tactile alert). In some embodiments, the computer system initially suppresses the alert corresponding to the event while the computer system is in the second operational state, and the computer system outputs the alert corresponding to the event once the computer system exits the second operational state. In some implementations, the computer system suppresses a first type of alert (e.g., audio, visual, and/or tactile alert) corresponding to the event and outputs a second type of alert (e.g., audio, visual, and/or tactile alert) corresponding to the event. In some embodiments, the computer system suppresses the alert when the computer system is in the first operating state. Suppressing alerts that the computer system is configured to output if an event is detected when a prescribed condition is met (e.g., the computer system is in a second operating state) reduces the power consumption of the computer system, thereby extending the battery life of the computer system, which improves (e.g., extends) the overall battery life of the computer system.
In some embodiments, the computer system detects a tactile user input (e.g., 650h or 650l) corresponding to one or more operations of the computer system (e.g., 602) (e.g., a tap input, a press and hold, and/or a swipe) (e.g., the computer system is configured to perform the one or more operations in response to detecting the tactile user input while the computer system is in a standard (e.g., normal) operational state). In some embodiments, in response to detecting the tactile user input corresponding to one or more operations of the computer system, and in accordance with a determination that the computer system is in the second operational state (e.g., as described above with reference to fig. 6C), the computer system suppresses the one or more operations of the computer system (e.g., and performs the one or more operations of the computer system in accordance with a determination that the computer system is not in the second operational state). Suppressing one or more operations of the computer system in response to detecting the tactile user input when the prescribed condition is met (e.g., the computer system is in the second operational state) reduces power consumption of the computer system, thereby extending battery life of the computer system, which improves (e.g., extends) the overall battery life of the computer system.
In some embodiments, the computer system (e.g., 602) communicates with one or more light sources (e.g., one or more light sources integrated into the display generation component) (e.g., the display generation component has a backlight). In some embodiments, in accordance with a determination that the computer system is in the second operational state (e.g., as described above with reference to fig. 6C), the computer system activates the one or more light sources, wherein the light sources illuminate the display generation component (e.g., as described above with reference to fig. 6C) (e.g., the one or more light sources remain illuminated while the computer system is in the second operational state) (e.g., in accordance with a determination that the computer system is not in the second operational state, the computer system illuminates the one or more light sources based on a set of one or more criteria being met). In some embodiments, the computer system stops illuminating the one or more light sources in response to the computer system transitioning from the second operating state to the first operating state and/or to a standard (e.g., normal) operating state. In some embodiments, the computer system keeps the one or more light sources illuminated in response to the computer system transitioning from the second operating state to the first operating state and/or to a standard (e.g., normal) operating state. Activating the one or more light sources when a prescribed set of conditions is met (e.g., the computer system is in the second operational state) allows the computer system to automatically control light operation at a point in time when the amount of ambient illumination is reduced, the computer system performing the operation when the set of conditions is met without requiring further user input.
In some embodiments, the computer system (e.g., 602) communicates (e.g., via wireless and/or wired communication) with a first hardware button (e.g., a rotatable and depressible crown button) (e.g., the first hardware button is integrated into the computer system). In some embodiments, when the computer system is in the second operational state (e.g., as described above with reference to fig. 6C) (e.g., and when the computer system is submerged at a depth greater than the predetermined depth threshold), the computer system detects activation of the first hardware button (e.g., a tactile input (e.g., 650l or 650h)) (e.g., the first hardware button is pressed) (e.g., the user performs a long press (e.g., press and hold) input on the first hardware button). In some embodiments, in response to detecting activation of the first hardware button, the computer system exits the second operational state (e.g., as described above with reference to fig. 6C) (e.g., and enters the first operational state). In some embodiments, the computer system exits the second operational state while the computer system is submerged at a depth greater than the predetermined depth threshold. In some embodiments, the computer system exits the second operating state and enters a standard (e.g., normal) operating state in response to detecting activation of the first hardware button.
In some embodiments, while the computer system (e.g., 602) is at the first depth (e.g., the depth of the computer system at fig. 6G), the computer system detects a second depth change of the computer system from the first depth to a third depth (e.g., as described above with reference to fig. 6H) (e.g., the third depth is greater than the first depth). In some embodiments, in response to detecting the second depth change of the computer system from the first depth to the third depth, and in accordance with a determination that the third depth is greater than a maximum depth threshold (e.g., the depth of the computer system at fig. 6H) (e.g., as described above with reference to fig. 6H) (e.g., the maximum depth that the computer system is able to track) (e.g., 130 feet), the computer system displays a depth alert user interface (e.g., 630) (e.g., when the watch is submerged at a depth greater than the maximum depth (e.g., when the depth alert user interface is displayed), the depth alert user interface indicates that the depth reading is not available) (e.g., the computer system displays the second set of metrics prior to display of the depth alert user interface, and the display of the depth alert user interface replaces the display of the second set of metrics) (e.g., in accordance with a determination that the third depth is not greater than the maximum depth threshold, the computer system forgoes displaying the depth alert user interface). In some embodiments, the background of the depth alert user interface is displayed as a solid color (e.g., black), and the text of the depth alert user interface is displayed in one or more colors (e.g., yellow, white, orange, and/or red) that contrast with the background of the depth alert user interface. Displaying a depth alert user interface in response to detecting that the computer system is at a depth greater than the maximum depth threshold provides visual feedback to the user regarding the current depth of the computer system (e.g., that the computer system is at a depth greater than the maximum depth threshold), thereby providing improved visual feedback. Displaying the depth alert user interface when the condition set is satisfied (e.g., the computer system is at a depth greater than the maximum depth threshold) allows the computer system to automatically perform a display operation that alerts the user to the depth of the computer system, the computer system performing the operation when the condition set is satisfied without requiring further user input.
In some implementations, the computer system outputs (e.g., continuously outputs) a haptic (e.g., vibration) alert (e.g., 636) when the computer system (e.g., 602) displays the depth alert user interface (e.g., the computer system stops outputting haptic alerts when the computer system stops displaying the depth alert user interface). In some embodiments, the computer system outputs a visual and/or audio alert when the computer system outputs a tactile alert. In some implementations, the computer system stops outputting the haptic alert in response to detecting that the depth of the computer system has transitioned from greater than the maximum depth threshold to less than the maximum depth threshold. In some implementations, the computer system continuously outputs discrete haptic alerts as the depth alert user interface is displayed. In some implementations, the computer system outputs a single haptic alert for a duration of display of the depth alert user interface while the depth alert user interface is displayed. Outputting a haptic alert when the depth of the computer system is greater than a maximum depth threshold provides haptic feedback to the user regarding the current depth of the computer system, thereby providing improved haptic feedback. Outputting a haptic alert when the condition set is met (e.g., the computer system is at a depth greater than a maximum depth threshold), allowing the computer system to automatically output vibration feedback to the user alerting the user to the current depth of the computer system, the computer system performing an operation when the condition set is met without further user input.
In some implementations, when the depth alert user interface (e.g., 630) is displayed, the computer system detects a third depth change (e.g., as described above with reference to fig. 6J) of the computer system (e.g., 602) from a third depth to a fourth depth (e.g., the fourth depth is less than the third depth). In some implementations, in response to detecting a third depth change of the computer system from a third depth to a fourth depth, and in accordance with a determination that the fourth depth is less than a maximum depth threshold (e.g., a depth of the computer system 602 at fig. 6J) (e.g., the third depth is above the maximum depth), the computer system (e.g., automatically (e.g., without intervening user input)) ceases display of the depth alert user interface (e.g., and ceases outputting the haptic alert) (e.g., the computer system maintains display of the depth alert user interface in accordance with a determination that the fourth depth is greater than the maximum depth threshold). In some embodiments, the computer system displays the second set of metrics as part of stopping the display of the depth alert user interface. In some embodiments, the computer system displays the first set of metrics as part of stopping the display of the depth alert user interface. In some implementations, the computer system redisplays the depth alert user interface in response to detecting that the depth of the computer system transitions from less than the maximum depth threshold to greater than the maximum depth threshold. Stopping the display of the depth alert user interface in response to detecting that the depth of the computer system has changed from the third depth to the fourth depth allows the user to stop the display of the depth alert user interface (e.g., by rising above a maximum depth threshold) without displaying additional controls, thereby providing additional control options without cluttering the user interface.
In some embodiments, the computer system (e.g., 602) communicates (e.g., via wireless and/or wired communication) with a second hardware button (e.g., 638) (e.g., a rotatable and depressible crown mechanism or a button integrated on a side of the computer system). In some embodiments, when the depth alert user interface (e.g., 630) is displayed (e.g., and when the computer system is submerged at a depth greater than the maximum depth threshold), the computer system detects an input (e.g., 650h or 650l) corresponding to activation of the second hardware button (e.g., pressing the second hardware button and/or rotating the second hardware button). In some embodiments, in response to detecting the input corresponding to activation of the second hardware button, the computer system ceases display of the depth alert user interface (e.g., and ceases outputting the haptic alert) (e.g., as part of ceasing to display the depth alert, the computer system displays a user interface showing a real-time indication of the dive time and the temperature of the water and an indication that the depth of the computer system is greater than the maximum depth threshold). In some embodiments, the computer system displays the second set of metrics as part of ceasing the display of the depth alert user interface. In some embodiments, the computer system displays the first set of metrics as part of ceasing the display of the depth alert user interface. Ceasing the display of the depth alert user interface in response to detecting an input corresponding to activation of the second hardware button provides visual feedback to the user regarding the status of the computer system (e.g., that the computer system has detected an input corresponding to activation of the second hardware button), thereby providing improved visual feedback.
In some embodiments, when the computer system is submerged at a fourth depth (e.g., the depth of computer system 602 at fig. 6F) that is greater than the maximum depth threshold, the computer system (e.g., 602) displays two metrics (e.g., 624 and 620) (e.g., the dive time (e.g., the amount of time the computer system has been submerged at a depth greater than the predetermined depth threshold) and the water temperature (e.g., the real-time water temperature)) without displaying additional metrics (e.g., the two metrics are displayed on the submerged user interface). In some embodiments, additional metrics beyond the two metrics are displayed in response to detecting that the depth of the computer system has transitioned from greater than the maximum depth threshold to less than the maximum depth threshold. In some embodiments, the computer system tracks three or more metrics while the two metrics are displayed (e.g., the computer system tracks metrics that are not displayed). Displaying the two metrics without displaying additional metrics when the computer system is at a depth greater than the maximum depth threshold provides visual feedback to the user regarding the depth of the computer system, thereby providing improved visual feedback.
In some embodiments, while the second set of metrics (e.g., 618, 624, and 620) is displayed (e.g., and while the depth of the computer system is greater than the predetermined depth threshold), the computer system detects a fourth depth change of the computer system from the first depth to a fifth depth (e.g., as described at fig. 6J) (e.g., the fifth depth is less than the first depth) (e.g., the depth of computer system 602 at fig. 6J). In some embodiments, in response to detecting the fourth depth change of the computer system from the first depth to the fifth depth, and in accordance with a determination that the fifth depth is less than the predetermined depth threshold (e.g., the depth of the computer system 602 at fig. 6J) (e.g., the first depth is greater than the predetermined depth threshold), the computer system (e.g., automatically (e.g., without intervening user input)) ceases display of the second set of metrics and displays a summary screen user interface (e.g., 640) (e.g., the summary screen user interface includes a subset of the metrics included in the second set of metrics). In some embodiments, the summary screen user interface is displayed while the computer system is in the first operational state (e.g., the water lock state). In some embodiments, the summary screen user interface is displayed when the computer system is no longer submerged. In some embodiments, the summary screen user interface includes each metric included in the second set of metrics. In some embodiments, the summary screen user interface includes a selectable completion button that, when selected, causes the summary screen to cease being displayed. In some embodiments, the computer system ceases the display of the summary screen user interface in response to detecting activation of one or more hardware controls integrated into the computer system. Displaying the summary screen user interface when a prescribed condition is met (e.g., the computer system transitions from a depth greater than the predetermined depth threshold to a depth less than the predetermined depth threshold) allows the computer system to automatically perform a display operation that provides information to the user regarding the user's recent diving activity, the computer system performing the operation without further user input when the set of conditions is met. Displaying the summary screen in response to detecting the fourth depth change of the computer system from the first depth to a fifth depth that is less than the predetermined depth threshold provides visual feedback to the user regarding the current depth of the computer system, thereby providing improved visual feedback.
In some embodiments, the summary screen user interface (e.g., 640) includes a maximum depth metric (e.g., 642) (e.g., indicating the maximum depth to which the computer system was submerged while below the predetermined depth threshold), an underwater time metric (e.g., 644) (e.g., how long the computer system was submerged below the predetermined depth threshold), and a water temperature range metric (e.g., 646) (e.g., the range of water temperatures measured while the computer system was submerged below the predetermined depth threshold).
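The three summary metrics lend themselves to a simple aggregation over recorded samples. The following Swift sketch illustrates one way such values could be derived; the DiveSample type and the derivation are hypothetical, not part of the disclosure.

```swift
import Foundation

// One recorded sample from a dive (hypothetical type for illustration).
struct DiveSample {
    let timestamp: Date
    let depth: Measurement<UnitLength>
    let waterTemperature: Measurement<UnitTemperature>
}

// Derives the three summary metrics named above from the recorded samples.
struct DiveSummary {
    let maximumDepth: Measurement<UnitLength>
    let underwaterTime: TimeInterval
    let waterTemperatureRange: ClosedRange<Measurement<UnitTemperature>>

    init?(samples: [DiveSample]) {
        guard let first = samples.first, let last = samples.last,
              let deepest = samples.map(\.depth).max(),
              let coldest = samples.map(\.waterTemperature).min(),
              let warmest = samples.map(\.waterTemperature).max()
        else { return nil }
        maximumDepth = deepest
        underwaterTime = last.timestamp.timeIntervalSince(first.timestamp)
        waterTemperatureRange = coldest...warmest
    }
}
```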
In some implementations, while the summary screen user interface (e.g., 640) is displayed, the computer system detects a fifth depth change of the computer system (e.g., 602) from the fifth depth to a sixth depth (e.g., the sixth depth is greater than the fifth depth) (e.g., as described above with reference to fig. 6K). In some embodiments, in response to detecting the fifth depth change of the computer system from the fifth depth to the sixth depth, and in accordance with a determination that the sixth depth is greater than the predetermined depth threshold (e.g., the depth of computer system 602 at fig. 6K), the computer system ceases display of the summary screen user interface (e.g., and displays the second set of metrics) (e.g., the computer system maintains display of the summary screen user interface in accordance with a determination that the sixth depth is less than the predetermined depth threshold). In some implementations, the computer system displays a second summary screen (e.g., different from the initially displayed summary screen) in response to detecting a subsequent transition in the depth of the computer system from a depth that is greater than the predetermined depth threshold to a corresponding depth that is less than the predetermined depth threshold. Ceasing display of the summary screen user interface in response to detecting the fifth depth change of the computer system allows the user to control the display of the summary screen user interface (e.g., by descending to a depth greater than the predetermined depth threshold) without displaying additional controls, thereby providing additional control options without cluttering the user interface. Ceasing display of the summary screen user interface when the set of specified conditions is met (e.g., the sixth depth is greater than the predetermined depth threshold) allows the computer system to automatically manage the display of the summary screen user interface, performing the operation without further user input once the set of conditions is met.
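Taken together, the two preceding paragraphs describe a pair of threshold crossings: surfacing above the predetermined depth threshold brings up the summary screen, and descending past it again dismisses the summary. A minimal Swift sketch of that transition logic, with an assumed threshold value and illustrative names, might look like this:

```swift
import Foundation

// The two screens involved in the transitions described above.
enum DiveScreen {
    case submergedMetrics // the second set of metrics
    case summary          // the summary screen user interface
}

// Swaps screens when the depth crosses the predetermined threshold.
struct DiveScreenController {
    // Assumed value; the patent does not name a specific threshold.
    let predeterminedDepthThreshold = Measurement(value: 1, unit: UnitLength.meters)
    private(set) var screen: DiveScreen = .submergedMetrics

    mutating func depthDidChange(to depth: Measurement<UnitLength>) {
        switch screen {
        case .submergedMetrics where depth < predeterminedDepthThreshold:
            screen = .summary            // surfaced: show the dive summary
        case .summary where depth > predeterminedDepthThreshold:
            screen = .submergedMetrics   // re-submerged: dismiss the summary
        default:
            break                        // no threshold crossing; keep screen
        }
    }
}
```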
In some implementations, the computer system (e.g., 602) is in communication (e.g., wired and/or wireless communication) with a third hardware button (e.g., 638) (e.g., a rotatable and/or depressible crown button, or a hardware button integrated into a side of the computer system). In some embodiments, the computer system detects activation of the third hardware button (e.g., 650l, 650h) (e.g., a press-and-hold, a tap input, and/or a rotation of the third hardware button) while the summary screen user interface (e.g., 640) is displayed (e.g., and while the computer system is submerged at a depth less than the predetermined depth threshold). In some embodiments, in response to detecting the activation of the third hardware button, the computer system ceases display of the summary screen user interface. In some embodiments, the computer system displays the submerged user interface as part of ceasing display of the summary screen user interface. In some embodiments, the computer system displays a home screen user interface as part of ceasing display of the summary screen user interface. Ceasing display of the summary screen user interface in response to detecting activation of the third hardware button provides visual feedback to the user regarding the status of the computer system (e.g., that the computer system has detected the activation of the third hardware button), thereby providing improved visual feedback.
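The hardware-button dismissal can be folded into the same sketch: activating the button while the summary is showing returns to another screen. Continuing the illustrative DiveScreenController above (the hook name and the destination screen remain assumptions):

```swift
extension DiveScreenController {
    // Invoked when the third hardware button is activated (assumed hook).
    mutating func hardwareButtonActivated() {
        if screen == .summary {
            // Per the embodiments above, this could instead return to a
            // home screen user interface.
            screen = .submergedMetrics
        }
    }
}
```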
In some embodiments, while displaying the submerged user interface (e.g., 610), in accordance with a determination that the battery level of the computer system (e.g., 602) is below a battery level threshold (e.g., the battery level of the computer system is below 5%, 10%, 15%, or 20%), the computer system displays a battery level indicator (e.g., 652) (e.g., a battery icon including a numerical representation of the current battery level of the computer system). In some embodiments, the battery level indicator is displayed concurrently with the first set of metrics or the second set of metrics. In some embodiments, the battery level indicator is displayed concurrently with the depth alert user interface. In some embodiments, the battery level indicator is displayed when the computer system is submerged at a depth greater than the predetermined depth threshold. In some embodiments, the battery level indicator is displayed when the computer system is submerged at a depth less than the predetermined depth threshold. In some embodiments, in accordance with a determination that the battery level is above the battery level threshold (e.g., the battery of the computer system is sufficiently charged), the computer system ceases to display the battery level indicator. Displaying the battery level indicator when the set of conditions is met (e.g., the battery level of the computer system is below the battery level threshold) allows the computer system to automatically perform a display operation that indicates to the user the remaining battery level of the computer system, without requiring further user input.
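The battery indicator behaves as a simple threshold rule. A sketch in Swift, using the 10% figure from the examples above as an assumed default (the type and formatting are illustrative):

```swift
// Low-battery indicator rule: visible only below the threshold.
struct BatteryIndicator {
    var threshold: Double = 0.10 // assumed default, one of the example values

    // Returns a label such as "8%" when the indicator should be shown,
    // or nil when the battery is sufficiently charged.
    func label(forLevel level: Double) -> String? {
        guard level < threshold else { return nil }
        return "\(Int((level * 100).rounded()))%"
    }
}
```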
In some embodiments, the submerged user interface (e.g., 610) includes a depth animation (e.g., 612) (e.g., a meniscus animation (e.g., an animation simulating the behavior of a water surface)) (e.g., the depth animation is displayed concurrently with the first or second set of metrics) (e.g., the depth animation is displayed in the background of the submerged user interface). In some embodiments, while the computer system (e.g., 602) is at the first depth (e.g., the depth of computer system 602 at fig. 6B), the computer system displays the depth animation at a first location on the submerged user interface corresponding to the first depth (e.g., the location at which the depth animation is displayed at fig. 6B). In some embodiments, while displaying the depth animation at the first location, the computer system detects a sixth depth change of the computer system from the first depth to a seventh depth. In some embodiments, in response to detecting the sixth depth change of the computer system, the computer system updates the display position of the depth animation on the submerged user interface from the first location to a second location corresponding to the seventh depth (e.g., the second location is higher or lower than the first location) (e.g., the display of depth animation 612 at fig. 6C) (e.g., the display position of the depth animation on the display generation component depends on the detected depth of the computer system) (e.g., the display position of the depth animation moves downward on the display generation component as the depth of the computer system increases, and moves upward on the display generation component as the depth of the computer system decreases). In some embodiments, the depth animation forms a convex shape. In some embodiments, the depth animation forms a concave shape. In some embodiments, the depth animation is displayed with a subset of the metrics in the first set of metrics or the second set of metrics. In some implementations, the depth animation is displayed behind each of the metrics in the first set of metrics or the second set of metrics. Displaying the depth animation at various locations on the submerged user interface based on the depth of the computer system provides visual feedback to the user regarding the current depth of the computer system, thereby providing improved visual feedback.
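The depth animation's display position can be understood as a mapping from the measured depth to a vertical coordinate on the display. The following Swift sketch assumes a linear mapping clamped to the display bounds; the function name, the rated maximum depth parameter, and the linearity are illustrative assumptions:

```swift
import Foundation

// Maps the current depth to a vertical position for the meniscus-style
// animation: deeper readings move the waterline down the display.
func depthAnimationY(for depth: Measurement<UnitLength>,
                     ratedMaximumDepth: Measurement<UnitLength>,
                     displayHeight: Double) -> Double {
    let meters = depth.converted(to: .meters).value
    let maxMeters = ratedMaximumDepth.converted(to: .meters).value
    let fraction = min(max(meters / maxMeters, 0), 1) // clamp to [0, 1]
    return fraction * displayHeight                   // 0 = top edge
}
```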
The foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the technology and its practical application, thereby enabling those skilled in the art to best utilize the technology and various embodiments with various modifications as are suited to the particular use contemplated.
While the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It should be understood that such changes and modifications are considered to be included within the scope of the disclosure and examples as defined by the claims.
As described above, one aspect of the present technology is to collect and use data from various sources to improve the delivery of physical health information or any other content that may be of interest to a user. The present disclosure contemplates that in some instances, such collected data may include personal information data that uniquely identifies or may be used to contact or locate a particular person. Such personal information data may include demographic data, location-based data, telephone numbers, email addresses, social network IDs, home addresses, data or records related to the user's health or fitness level (e.g., vital sign measurements, medication information, exercise information), date of birth, or any other identification or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be used to benefit users. For example, the personal information data may be used to deliver targeted information about the physical health of the user that is of greater interest to the user. Accordingly, the use of such personal information data enables the user to have programmatic control over the information delivered. In addition, the present disclosure contemplates other uses for personal information data that are beneficial to the user. For example, health and fitness data may be used to provide insight into the general health of a user, or may be used as positive feedback to individuals who use technology to pursue health goals.
The present disclosure contemplates that entities responsible for the collection, analysis, disclosure, transmission, storage, or other use of such personal information data will adhere to well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently apply privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easy for users to access and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses and must not be shared or sold outside of those legitimate uses. Further, such collection/sharing should occur only after receiving the informed consent of the users. Additionally, such entities should consider taking any steps necessary to safeguard and secure access to such personal information data and to ensure that others with access to the personal information data adhere to their privacy policies and procedures. Moreover, such entities may subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, the collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA), whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different types of personal data in each country.
Notwithstanding the foregoing, the present disclosure also contemplates embodiments in which a user selectively blocks the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements may be provided to prevent or block access to such personal information data. For example, in the case of health services, the present technology may be configured to allow a user to "opt in" or "opt out" of participation in the collection of personal information data during registration for the service or at any time thereafter. In another example, the user may choose not to provide physical activity data for the delivery of targeted physical-wellness suggestions. In yet another example, the user may choose to limit the length of time that mood-related data is maintained, or to entirely prohibit the development of a baseline mood profile. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an application that their personal information data will be accessed, and then reminded again just before the personal information data is accessed by the application.
Furthermore, it is the intent of the present disclosure that personal information data should be managed and handled in a way that minimizes risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting the data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Therefore, although the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need to access such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, suggestions regarding the user's physical well-being may be selected and delivered to the user by inferring preferences based on non-personal information data or a bare minimum of personal information, such as the content being requested by the device associated with the user, other non-personal information available to a medical provider, or publicly available information.
Claims (27)
1. A method, the method comprising:
At a computer system in communication with a display generation component and one or more sensors:
displaying a submerged user interface via the display generation component while the computer system is submerged;
detecting, via the one or more sensors, a first depth at which the computer system is submerged while the submerged user interface is displayed; and
in response to detecting the first depth at which the computer system is submerged:
in accordance with a determination that the first depth is less than a predetermined depth threshold, displaying, via the display generation component, a first set of metrics regarding the submersion of the computer system; and
in accordance with a determination that the first depth is greater than the predetermined depth threshold, displaying, via the display generation component, a second set of metrics regarding the submersion of the computer system, the second set of metrics being different from the first set of metrics.
2. The method of claim 1, wherein the first set of metrics comprises a water depth metric and a water temperature metric.
3. The method of any of claims 1-2, wherein the second set of metrics includes a water depth metric, a water temperature metric, and a diving time metric.
4. The method of any of claims 1 to 3, the method further comprising:
in response to detecting the first depth at which the computer system is submerged, and in accordance with a determination that the first depth is greater than the predetermined depth threshold, starting a diving timer;
detecting a first depth change of the computer system from the first depth to a second depth while the diving timer is active; and
in response to detecting the first depth change of the computer system from the first depth to the second depth, and in accordance with a determination that the second depth of the computer system is less than the predetermined depth threshold, ceasing operation of the diving timer.
5. The method of any one of claims 1 to 4, further comprising:
in response to detecting the first depth at which the computer system is submerged:
in accordance with a determination that the first depth is less than the predetermined depth threshold, entering a first operational state; and
in accordance with a determination that the first depth is greater than the predetermined depth threshold, entering a second operational state.
6. The method of claim 5, the method further comprising:
detecting an event corresponding to an alert, the computer system being configured to output the alert upon detecting the event; and
in response to detecting the event:
in accordance with a determination that the computer system is in the second operational state, suppressing the alert; and
in accordance with a determination that the computer system is not in the second operational state, outputting the alert.
7. The method of any one of claims 5 to 6, further comprising:
detecting a tactile user input corresponding to one or more operations of the computer system; and
in response to detecting the tactile user input corresponding to the one or more operations of the computer system:
in accordance with a determination that the computer system is in the second operational state, forgoing performing the one or more operations of the computer system.
8. The method of any of claims 5 to 7, wherein the computer system is in communication with one or more light sources, the method further comprising:
in accordance with a determination that the computer system is in the second operational state, activating the one or more light sources, wherein the one or more light sources illuminate the display generation component.
9. The method of any of claims 5 to 8, wherein the computer system is in communication with a first hardware button, the method further comprising:
detecting activation of the first hardware button while the computer system is in the second operational state; and
in response to detecting the activation of the first hardware button, exiting the second operational state.
10. The method of any one of claims 1 to 9, the method further comprising:
detecting a second depth change of the computer system from the first depth to a third depth while the computer system is at the first depth; and
in response to detecting the second depth change of the computer system from the first depth to the third depth, and in accordance with a determination that the third depth is greater than a maximum depth threshold, displaying a depth alert user interface.
11. The method of claim 10, wherein the computer system outputs a haptic alert when the computer system displays the depth alert user interface.
12. The method of any one of claims 10 to 11, the method further comprising:
detecting a third depth change of the computer system from the third depth to a fourth depth while the depth alert user interface is displayed; and
in response to detecting the third depth change of the computer system from the third depth to the fourth depth, and in accordance with a determination that the fourth depth is less than the maximum depth threshold, ceasing display of the depth alert user interface.
13. The method of any of claims 10 to 12, wherein the computer system is in communication with a second hardware button, the method further comprising:
detecting an input corresponding to activation of the second hardware button while the depth alert user interface is displayed; and
in response to detecting the input corresponding to the activation of the second hardware button, ceasing display of the depth alert user interface.
14. The method of any of claims 10 to 13, wherein the computer system displays two metrics without displaying additional metrics while the computer system is submerged at a depth that is greater than the maximum depth threshold.
15. The method of any one of claims 1 to 14, the method further comprising:
detecting a fourth depth change of the computer system from the first depth to a fifth depth while displaying the second set of metrics; and
in response to detecting the fourth depth change of the computer system from the first depth to the fifth depth, and in accordance with a determination that the fifth depth is less than the predetermined depth threshold:
ceasing display of the second set of metrics; and
displaying a summary screen user interface.
16. The method of claim 15, wherein the summary screen user interface comprises a maximum depth metric, an underwater time metric, and a water temperature range metric.
17. The method of any one of claims 15 to 16, the method further comprising:
detecting a fifth depth change of the computer system from the fifth depth to a sixth depth while the summary screen user interface is displayed; and
in response to detecting the fifth depth change of the computer system from the fifth depth to the sixth depth, and in accordance with a determination that the sixth depth is greater than the predetermined depth threshold, ceasing display of the summary screen user interface.
18. The method of any of claims 15 to 17, wherein the computer system is in communication with a third hardware button, the method further comprising:
detecting activation of the third hardware button while the summary screen user interface is displayed; and
in response to detecting the activation of the third hardware button, ceasing display of the summary screen user interface.
19. The method of any one of claims 1 to 18, wherein:
while the submerged user interface is displayed, a battery level indicator is displayed in accordance with a determination that a battery level of the computer system is below a battery level threshold.
20. The method of any of claims 1-19, wherein the submerged user interface comprises a depth animation, the method further comprising:
displaying the depth animation at a first location on the submerged user interface corresponding to the first depth while the computer system is at the first depth;
detecting a sixth depth change of the computer system from the first depth to a seventh depth while displaying the depth animation at the first location corresponding to the first depth; and
in response to detecting the sixth depth change of the computer system, updating a display position of the depth animation from the first location to a second location on the submerged user interface corresponding to the seventh depth, the second location being different from the first location.
21. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more sensors, the one or more programs comprising instructions for performing the method of any of claims 1-20.
22. A computer system configured to communicate with a display generation component and one or more sensors, the computer system comprising:
one or more processors, and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1 to 20.
23. A computer system configured to communicate with a display generation component and one or more sensors, the computer system comprising:
means for performing the method according to any one of claims 1 to 20.
24. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component and one or more sensors, the one or more programs comprising instructions for performing the method of any of claims 1-20.
25. A non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors of a computer system in communication with a display generation component and one or more sensors, the one or more programs comprising instructions for:
displaying a submerged user interface via the display generation component while the computer system is submerged;
detecting, via the one or more sensors, a first depth at which the computer system is submerged while the submerged user interface is displayed; and
in response to detecting the first depth at which the computer system is submerged:
in accordance with a determination that the first depth is less than a predetermined depth threshold, displaying, via the display generation component, a first set of metrics regarding the submersion of the computer system; and
in accordance with a determination that the first depth is greater than the predetermined depth threshold, displaying, via the display generation component, a second set of metrics regarding the submersion of the computer system, the second set of metrics being different from the first set of metrics.
26. A computer system configured to communicate with a display generation component and one or more sensors, the computer system comprising:
one or more processors, and
a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
displaying a submerged user interface via the display generation component while the computer system is submerged;
detecting, via the one or more sensors, a first depth at which the computer system is submerged while the submerged user interface is displayed; and
in response to detecting the first depth at which the computer system is submerged:
in accordance with a determination that the first depth is less than a predetermined depth threshold, displaying, via the display generation component, a first set of metrics regarding the submersion of the computer system; and
in accordance with a determination that the first depth is greater than the predetermined depth threshold, displaying, via the display generation component, a second set of metrics regarding the submersion of the computer system, the second set of metrics being different from the first set of metrics.
27. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component and one or more sensors, the one or more programs comprising instructions for:
displaying a submerged user interface via the display generation component while the computer system is submerged;
detecting, via the one or more sensors, a first depth at which the computer system is submerged while the submerged user interface is displayed; and
in response to detecting the first depth at which the computer system is submerged:
in accordance with a determination that the first depth is less than a predetermined depth threshold, displaying, via the display generation component, a first set of metrics regarding the submersion of the computer system; and
in accordance with a determination that the first depth is greater than the predetermined depth threshold, displaying, via the display generation component, a second set of metrics regarding the submersion of the computer system, the second set of metrics being different from the first set of metrics.
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263404152P | 2022-09-06 | 2022-09-06 | |
| US63/404,152 | 2022-09-06 | ||
| US18/153,940 (published as US20240077309A1) | 2022-09-06 | 2023-01-12 | Physical activity user interfaces |
| US18/153,940 | | 2023-01-12 | |
| PCT/US2023/030718 (published as WO2024054347A1) | 2022-09-06 | 2023-08-21 | Physical activity user interfaces |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN119816446A true CN119816446A (en) | 2025-04-11 |
Family
ID=88018102
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202380063710.6A Pending CN119816446A (en) | 2022-09-06 | 2023-08-21 | Physical Activity User Interface |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4584150A1 (en) |
| CN (1) | CN119816446A (en) |
| WO (1) | WO2024054347A1 (en) |
Family Cites Families (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US3859005A (en) | 1973-08-13 | 1975-01-07 | Albert L Huebner | Erosion reduction in wet turbines |
| US4826405A (en) | 1985-10-15 | 1989-05-02 | Aeroquip Corporation | Fan blade fabrication system |
| FI103193B (en) * | 1995-12-21 | 1999-05-14 | Suunto Oy | Dykardatamaskin (Diving computer) |
| JP3520400B2 (en) * | 1997-09-03 | 2004-04-19 | セイコーエプソン株式会社 | Information display device for divers |
| KR100595924B1 (en) | 1998-01-26 | 2006-07-05 | 웨인 웨스터만 | Method and apparatus for integrating manual input |
| US7688306B2 (en) | 2000-10-02 | 2010-03-30 | Apple Inc. | Methods and apparatuses for operating a portable device based on an accelerometer |
| US7218226B2 (en) | 2004-03-01 | 2007-05-15 | Apple Inc. | Acceleration-based theft detection system for portable electronic devices |
| US6677932B1 (en) | 2001-01-28 | 2004-01-13 | Finger Works, Inc. | System and method for recognizing touch typing under limited tactile feedback conditions |
| US6570557B1 (en) | 2001-02-10 | 2003-05-27 | Finger Works, Inc. | Multi-touch system and method for emulating modifier keys via fingertip chords |
| EP1862872B8 (en) * | 2005-03-25 | 2012-02-15 | Citizen Holdings Co., Ltd. | Electronic device and display control method |
| US7657849B2 (en) | 2005-12-23 | 2010-02-02 | Apple Inc. | Unlocking a device by performing gestures on an unlock image |
| WO2013169849A2 (en) | 2012-05-09 | 2013-11-14 | Yknots Industries Llc | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
| EP3435220B1 (en) | 2012-12-29 | 2020-09-16 | Apple Inc. | Device, method and graphical user interface for transitioning between touch input to display output relationships |
| KR102059572B1 (en) * | 2018-03-30 | 2019-12-26 | 주식회사 포에스텍 | Touch input diver pad |
2023
- 2023-08-21: EP application EP23768720.7A, published as EP4584150A1 (active, pending)
- 2023-08-21: CN application CN202380063710.6A, published as CN119816446A (pending)
- 2023-08-21: WO application PCT/US2023/030718, published as WO2024054347A1 (ceased)
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024054347A4 (en) | 2024-04-25 |
| EP4584150A1 (en) | 2025-07-16 |
| WO2024054347A1 (en) | 2024-03-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12182373B2 (en) | Techniques for managing display usage | |
| US12265703B2 (en) | Restricted operation of an electronic device | |
| US11979467B2 (en) | Multi-modal activity tracking user interface | |
| US11875021B2 (en) | Underwater user interface | |
| CN119024929B (en) | User interface for a compass application | |
| US20240077309A1 (en) | Physical activity user interfaces | |
| US10953307B2 (en) | Swim tracking and notifications for wearable devices | |
| US20220198984A1 (en) | Dynamic user interface with time indicator | |
| KR102880143B1 (en) | Health event logging and coaching user interfaces | |
| JP2021525427A (en) | Access to system user interfaces in electronic devices | |
| US20190369699A1 (en) | User interfaces for indicating battery information on an electronic device | |
| US12405631B2 (en) | Displaying application views | |
| US20220374106A1 (en) | Methods and user interfaces for tracking execution times of certain functions | |
| US20240402881A1 (en) | Methods and user interfaces for sharing and accessing workout content | |
| US20250069740A1 (en) | Methods and user interfaces for personalized wellness coaching | |
| US20240399209A1 (en) | Methods and user interfaces for accessing and managing workout content and information | |
| US20230389806A1 (en) | User interfaces related to physiological measurements | |
| US20250103175A1 (en) | Techniques for managing display usage | |
| US20240402889A1 (en) | User interfaces for logging and interacting with emotional valence data | |
| CN119816446A (en) | Physical Activity User Interface | |
| US20250312677A1 (en) | User interfaces for managing a workout session | |
| US20240386716A1 (en) | Techniques for detecting text | |
| US20240402968A1 (en) | Methods, devices, and user interfaces for user notification | |
| WO2023235147A9 (en) | User interfaces related to physiological measurements | |
| CN119816807A (en) | Interface for device interaction |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||