
WO2025072869A1 - Techniques for configuring navigation of a device - Google Patents

Techniques for configuring navigation of a device

Info

Publication number
WO2025072869A1
Authority
WO
WIPO (PCT)
Prior art keywords
computer system
component
criteria
input
target location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/049121
Other languages
French (fr)
Other versions
WO2025072869A4 (en)
Inventor
Moritz Von Volkmann
Andrew S. Kim
Arto Kivila
Christopher P. Foss
Corey K. Wang
David A. KRIMSLEY
Brendan J. TILL
Matthew J. Allen
Tommaso NOVI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US18/896,677 (published as US20250109965A1)
Application filed by Apple Inc
Publication of WO2025072869A1
Publication of WO2025072869A4

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/016 - Input arrangements with force or tactile feedback as computer generated output to the user
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B62 - LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D - MOTOR VEHICLES; TRAILERS
    • B62D 1/00 - Steering controls, i.e. means for initiating a change of direction of the vehicle
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B62 - LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D - MOTOR VEHICLES; TRAILERS
    • B62D 15/00 - Steering not otherwise provided for
    • B62D 15/02 - Steering position indicators; Steering position determination; Steering aids
    • B62D 15/027 - Parking aids, e.g. instruction means
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B62 - LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D - MOTOR VEHICLES; TRAILERS
    • B62D 15/00 - Steering not otherwise provided for
    • B62D 15/02 - Steering position indicators; Steering position determination; Steering aids
    • B62D 15/027 - Parking aids, e.g. instruction means
    • B62D 15/0285 - Parking performed automatically
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00-G01C 19/00
    • G01C 21/20 - Instruments for performing navigational calculations

Definitions

  • Electronic devices are often capable of navigating to destinations, typically using available map data. Such destinations can be static (e.g., stationary and/or not dynamically configurable), and they can be broadly defined such that arrival at the destination is imprecise. Computer systems sometimes provide navigation assistance to help a user reach a target destination. While navigating, an electronic device can encounter physical areas covered by map data of varying quality, and poor map data can cause errors that result in incorrect navigation instructions.
  • Some techniques for configuring navigation of a device using electronic devices are generally cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes. Existing techniques require more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-operated devices.
  • The present technique provides electronic devices with faster, more efficient methods and interfaces for configuring navigation, interacting with different map data, and/or providing navigation assistance.
  • Such methods and interfaces optionally complement or replace other methods for configuring navigation, interacting with different map data, and/or providing navigation assistance.
  • Such methods and interfaces reduce the cognitive burden on a user and produce a more efficient human-machine interface.
  • Such methods and interfaces conserve power and increase the time between battery charges, for example, by reducing the number of unnecessary, extraneous, and/or repetitive received inputs and reducing battery usage by a display.
  • a method that is performed at a computer system that is in communication with a display component and one or more input devices comprises: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device. (See the sketch following this group of embodiments.)
  • a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices.
  • the one or more programs include instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
  • a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices.
  • the one or more programs include instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
  • a computer system that is in communication with a display component and one or more input devices is described.
  • the computer system that is in communication with a display component and one or more input devices comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors.
  • the one or more programs include instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
  • a computer system that is in communication with a display component and one or more input devices.
  • the computer system that is in communication with a display component and one or more input devices comprises means for performing each of the following steps: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
  • a computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices.
  • the one or more programs include instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
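Taken together, the embodiments above describe a retargeting operation: an indication tracks which device the first device is navigating with respect to, and a user request swaps that reference target and updates the indication. The following is a minimal sketch of that flow in Swift; the Device and RelativeNavigationController names and the print-based indication are illustrative assumptions, not anything the application specifies.

```swift
/// Hypothetical types for illustration; the application does not name these.
struct Device: Equatable {
    let name: String
}

final class RelativeNavigationController {
    private let navigatingDevice: Device
    private(set) var referenceDevice: Device

    init(navigating: Device, reference: Device) {
        self.navigatingDevice = navigating
        self.referenceDevice = reference
        displayIndication() // first indication
    }

    /// Handles a request to navigate with respect to a different device.
    func handleRetargetRequest(to newReference: Device) {
        guard newReference != navigatingDevice else { return }
        referenceDevice = newReference
        displayIndication() // second indication, shown in response to the request
    }

    private func displayIndication() {
        print("\(navigatingDevice.name) is navigating with respect to \(referenceDevice.name)")
    }
}

// Usage: start relative to "Beacon A", then retarget to "Beacon B".
let controller = RelativeNavigationController(
    navigating: Device(name: "Rover"),
    reference: Device(name: "Beacon A"))
controller.handleRetargetRequest(to: Device(name: "Beacon B"))
```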
  • a method that is performed at a computer system that is in communication with a display component and one or more input devices comprises: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location. (See the sketch following this group of embodiments.)
  • a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices.
  • the one or more programs include instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
  • a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices.
  • the one or more programs include instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
  • a computer system that is in communication with a display component and one or more input devices is described.
  • the computer system that is in communication with a display component and one or more input devices comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors.
  • the one or more programs include instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
  • a computer system that is in communication with a display component and one or more input devices.
  • the computer system that is in communication with a display component and one or more input devices comprises means for performing each of the following steps: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
  • a computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices.
  • the one or more programs include instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
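The embodiments above pair a UI gesture with device configuration: dragging the device's representation within an image-derived representation of a location both moves the marker and, when the criteria are met, retargets the device to the physical point corresponding to the marker's new position. Below is a minimal sketch under those assumptions; Position, TargetPlacementController, and the bounds-based criteria check are hypothetical stand-ins.

```swift
/// Placeholder types; the application describes UI behavior, not an API.
struct Position: Equatable {
    var x: Double
    var y: Double
}

final class TargetPlacementController {
    private(set) var markerPosition: Position
    private(set) var configuredTarget: Position?

    init(initialPosition: Position) {
        self.markerPosition = initialPosition
    }

    /// Stand-in for the "first set of criteria": here, the requested
    /// position must fall inside the image-derived representation.
    private func criteriaMet(for position: Position) -> Bool {
        (0.0...100.0).contains(position.x) && (0.0...100.0).contains(position.y)
    }

    func handleMoveRequest(to newPosition: Position) {
        guard newPosition != markerPosition, criteriaMet(for: newPosition) else { return }
        markerPosition = newPosition    // display the representation at the second position
        configuredTarget = newPosition  // configure the device to navigate to the matching spot
        print("Device will navigate to (\(newPosition.x), \(newPosition.y)) within the captured location")
    }
}
```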
  • a method that is performed at a computer system that is in communication with a first movement component and a second movement component different from the first movement component comprises: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner. (See the sketch following this group of embodiments.)
  • a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described.
  • the one or more programs include instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
  • a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described.
  • the one or more programs include instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
  • a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described.
  • the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors.
  • the one or more programs include instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
  • a computer system that is in communication with a first movement component and a second movement component different from the first movement component.
  • the computer system comprises means for performing each of the following steps: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
  • a computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component.
  • the one or more programs include instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
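The embodiments above describe mixed control: once an event with respect to the target location satisfies the criteria, one movement component's angle is placed under automatic control while the other remains manual. The sketch below models that split; ControlMode, the component naming, and the distance-based criterion are assumptions for illustration.

```swift
/// Control modes for a movement component's angle; names are illustrative.
enum ControlMode { case automatic, manual }

struct MovementComponent {
    let name: String
    var angleControl: ControlMode = .manual
}

final class ApproachController {
    var first = MovementComponent(name: "first component")
    var second = MovementComponent(name: "second component")

    /// Stand-in criterion: the event qualifies once the system is near the target.
    private func criteriaSatisfied(distanceToTarget: Double) -> Bool {
        distanceToTarget < 5.0
    }

    /// Called when an event is detected with respect to the target location.
    func handleEvent(distanceToTarget: Double) {
        guard criteriaSatisfied(distanceToTarget: distanceToTarget) else { return }
        first.angleControl = .automatic // angle now controlled automatically
        second.angleControl = .manual   // angle remains under manual control
    }
}
```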
  • a method that is performed at a computer system that is in communication with a first movement component and a second movement component different from the first movement component comprises: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the first mode and the second mode. (See the sketch following this group of embodiments.)
  • a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described.
  • the one or more programs include instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the first mode and the second mode
  • a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described.
  • the one or more programs include instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the first mode and the second mode
  • a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described.
  • the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors.
  • the one or more programs include instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the first mode and the second mode
  • a computer system that is in communication with a first movement component and a second movement component different from the first movement component.
  • the computer system comprises means for performing each of the following steps: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the first mode and the second mode
  • a computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component.
  • the one or more programs include instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the first mode and the second mode
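The embodiments above are a three-way dispatch on operating mode: in the first mode only the first movement component is automatically modified, in the second mode both are, and the third mode's behavior is cut off in this excerpt. Below is a sketch of the dispatch with the third branch deliberately left open; the mode and component enums and the closure-based modify hook are illustrative.

```swift
/// Placeholder modes; the application only distinguishes first/second/third.
enum OperatingMode { case first, second, third }
enum Component { case first, second }

func updateComponents(mode: OperatingMode, modify: (Component) -> Void) {
    // Dispatch on the operating mode while the target location is detected.
    switch mode {
    case .first:
        modify(.first)   // and forgo modifying the second component
    case .second:
        modify(.first)   // both components are modified automatically
        modify(.second)
    case .third:
        // The source excerpt is truncated here; the third mode's behavior
        // is not recoverable from this document, so it is left open.
        break
    }
}

// Usage: in the second mode, both components get modified.
updateComponents(mode: .second) { component in
    print("automatically modifying \(component) movement component")
}
```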
  • a method that is performed at a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component comprises: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback. (See the sketch following this group of embodiments.)
  • a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component is described.
  • the one or more programs include instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
  • a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component is described.
  • the one or more programs include instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
  • a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, is described.
  • the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors.
  • the one or more programs include instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
  • a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, is described.
  • the computer system comprises means for performing each of the following steps: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
  • a computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component.
  • the one or more programs include instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
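The embodiments above tie feedback at the input component to the system's orientation relative to the target location: different orientations produce different feedback. One plausible reading, sketched below, maps a signed heading error to distinct haptic-style pulses; the Feedback cases and the five-degree threshold are assumptions, not values from the application.

```swift
/// Feedback variants; a haptic pulse at the input component (e.g., a steering
/// control) is one plausible reading of "feedback with respect to the input component".
enum Feedback { case steerLeftPulse, steerRightPulse }

/// Maps orientation relative to the target, expressed here as a signed heading
/// error in degrees (an assumption), to distinct feedback.
func feedback(forHeadingError degrees: Double) -> Feedback? {
    if degrees > 5 { return .steerRightPulse }  // first orientation: target off to the right
    if degrees < -5 { return .steerLeftPulse }  // second orientation: target off to the left
    return nil                                  // roughly aligned: no corrective feedback
}
```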
  • a method that is performed at a computer system in communication with an input component comprises: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location. (See the sketch following this group of embodiments.)
  • a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system in communication with an input component is described.
  • the one or more programs include instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
  • a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system in communication with an input component.
  • the one or more programs include instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
  • a computer system in communication with an input component comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors.
  • the one or more programs include instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
  • a computer system in communication with an input component comprises means for performing each of the following steps: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
  • a computer program product comprises one or more programs configured to be executed by one or more processors of a computer system in communication with an input component.
  • the one or more programs include instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
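The embodiments above describe error recovery during navigation: an error detected while navigating to a selected target initiates a process to select a new target location. A minimal sketch follows; NavigationError, TargetSelectionFlow, and the string-typed targets are placeholders the application does not define.

```swift
/// Hypothetical error and flow types; the application leaves both abstract.
enum NavigationError: Error { case targetLost, pathBlocked }

final class TargetSelectionFlow {
    private(set) var currentTarget: String?

    func select(_ target: String) {
        currentTarget = target
        print("navigating to \(target)")
    }

    /// Detecting an error while navigating initiates reselection of a target.
    func handle(_ error: NavigationError) {
        print("error while navigating: \(error)")
        currentTarget = nil
        beginTargetSelection()
    }

    private func beginTargetSelection() {
        print("select a respective target location to continue")
    }
}

// Usage: an error mid-navigation kicks off the selection process again.
let flow = TargetSelectionFlow()
flow.select("loading dock")
flow.handle(.pathBlocked)
```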
  • a method that is performed at a computer system that is in communication with one or more output components comprises: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver. (See the sketch following this group of embodiments.)
  • a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components.
  • the one or more programs include instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
  • a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components.
  • the one or more programs include instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
  • a computer system that is in communication with one or more output components.
  • the computer system that is in communication with one or more output components comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors.
  • the one or more programs include instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
  • a computer system that is in communication with one or more output components.
  • the computer system that is in communication with one or more output components comprises means for performing each of the following steps: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
  • a computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components.
  • the one or more programs include instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
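The embodiments above branch on the quality of map data for the area about to be traversed: one quality level triggers a request for user input about the upcoming maneuver, the other forgoes it. The sketch below models this with a two-level quality scale, which is an assumption; the application does not define a concrete scale or which quality level triggers the request.

```swift
/// A two-level stand-in for "quality of map data".
enum MapDataQuality { case high, low }

func prepareUpcomingManeuver(areaQuality: MapDataQuality, maneuver: String) {
    switch areaQuality {
    case .low:
        // First quality of map data (modeled here as low): request input.
        print("Confirm upcoming maneuver: \(maneuver)?")
    case .high:
        // Second quality of map data: forgo requesting input.
        break
    }
}
```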
  • a method that is performed at a computer system that is in communication with one or more output components comprises: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver. (See the sketch following this group of embodiments.)
  • a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components.
  • the one or more programs include instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
  • a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components.
  • the one or more programs include instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
  • a computer system that is in communication with one or more output components.
  • the computer system that is in communication with one or more output components comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors.
  • the one or more programs include instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
  • a computer system that is in communication with one or more output components comprises means for performing each of the following steps: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
  • a computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components.
  • the one or more programs include instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
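This final group recasts the same behavior as a criteria set with a single named criterion: input is requested only when the intended traversal area lacks adequate map data to determine the upcoming maneuver. Below is a sketch of that guard; TraversalArea and its boolean field are illustrative stand-ins.

```swift
/// Criterion stand-in: the intended traversal area lacks adequate map data
/// to determine the upcoming maneuver.
struct TraversalArea {
    let hasAdequateMapData: Bool
}

func maybeRequestManeuverInput(for area: TraversalArea) {
    let criteriaMet = !area.hasAdequateMapData  // the claimed criterion
    guard criteriaMet else { return }
    print("Map data here is insufficient; please indicate the next maneuver.")
}

// Usage: only an inadequately mapped area triggers a request for input.
maybeRequestManeuverInput(for: TraversalArea(hasAdequateMapData: false))
```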
  • Executable instructions for performing these functions are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are, optionally, included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
  • devices are provided with faster, more efficient methods and interfaces for configuring navigation of a device, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices.
  • Such methods and interfaces may complement or replace other methods for configuring navigation of a device.
  • FIG. 1 is a block diagram illustrating a system with various components in accordance with some embodiments.
  • FIGS. 2A-2D illustrate exemplary user interfaces for navigating a first device with respect to a second device in accordance with some embodiments.
  • FIG. 3 is a flow diagram illustrating methods for navigating a first device with respect to a second device in accordance with some embodiments.
  • FIGS. 4A-4G illustrate exemplary user interfaces for configuring a device to navigate to a specific location in accordance with some embodiments.
  • FIG. 5 is a flow diagram illustrating methods for configuring a device to navigate to a specific location in accordance with some embodiments.
  • FIGS. 6A-6F illustrate exemplary diagrams for navigating a movable computer system to a target destination in accordance with some embodiments.
  • FIGS. 7A-7C illustrate exemplary diagrams for navigating between objects in a forward manner in accordance with some embodiments.
  • FIGS. 8A-8C illustrate exemplary diagrams for navigating between objects in a backward manner in accordance with some embodiments.
  • FIG. 9 is a flow diagram illustrating a method for configuring a movable computer system in accordance with some embodiments.
  • FIGS. 10A-10B are a flow diagram illustrating a method for selectively modifying movement components of a movable computer system in accordance with some embodiments.
  • FIGS. 11A-11D illustrate exemplary diagrams for redirecting a movable computer system in accordance with some embodiments.
  • FIG. 12 is a flow diagram illustrating a method for providing feedback based on an orientation of a movable computer system in accordance with some embodiments.
  • FIG. 13 is a flow diagram illustrating a method for redirecting a movable computer system in accordance with some embodiments.
  • FIGS. 14A-14H illustrate exemplary user interfaces for interacting with different map data in accordance with some embodiments.
  • FIG. 15 is a flow diagram illustrating methods for interacting with different map data in accordance with some embodiments.
  • FIG. 16 is a flow diagram illustrating methods for interacting with different map data in accordance with some embodiments.
  • Efficient techniques can reduce a user’s mental load when configuring navigation of a device. This reduction in mental load can enhance user productivity and make the device easier to use.
  • the techniques described herein can reduce battery usage and processing time (e.g., by providing user interfaces that require fewer user inputs to operate).
  • FIG. 1 provides illustrations of exemplary devices for performing operations herein.
  • FIGS. 2A-2D illustrate exemplary user interfaces for navigating a first device with respect to a second device in accordance with some embodiments.
  • FIG. 3 is a flow diagram illustrating methods of navigating a first device with respect to a second device in accordance with some embodiments.
  • the user interfaces in FIGS. 2A-2D are used to illustrate the processes described below, including the processes in FIG. 3.
  • FIGS. 4A-4G illustrate exemplary user interfaces for configuring a device to navigate to a specific location in accordance with some embodiments.
  • FIG. 5 is a flow diagram illustrating methods of configuring a device to navigate to a specific location in accordance with some embodiments.
  • FIGS. 6A-6F illustrate exemplary diagrams for navigating a movable computer system to a target destination in accordance with some embodiments.
  • FIGS. 7A-7C illustrate exemplary diagrams for navigating between objects in a forward manner in accordance with some embodiments.
  • FIGS. 8A-8C illustrate exemplary diagrams for navigating between objects in a backward manner in accordance with some embodiments.
  • FIG. 9 is a flow diagram illustrating a method for configuring a movable computer system in accordance with some embodiments.
  • FIGS. 10A-10B are a flow diagram illustrating a method for selectively modifying movement components of a movable computer system in accordance with some embodiments.
  • FIGS. 6A-6F, 7A-7C, and 8A-8C are used to illustrate the processes described below, including the processes in FIGS. 9, 10A-10B, and 12.
  • FIGS. 11A-11D illustrate exemplary diagrams for redirecting a movable computer system in accordance with some embodiments.
  • FIG. 12 is a flow diagram illustrating a method for providing feedback based on an orientation of a movable computer system in accordance with some embodiments.
  • FIG. 13 is a flow diagram illustrating a method for redirecting a movable computer system in accordance with some embodiments.
  • the diagrams in FIGS. 11A-11D are used to illustrate the processes described below, including the processes in FIGS. 12-13.
  • FIGS. 14A-14H illustrate exemplary user interfaces for interacting with different map data in accordance with some embodiments.
  • FIG. 15 is a flow diagram illustrating methods for interacting with different map data in accordance with some embodiments.
  • FIG. 16 is a flow diagram illustrating methods for interacting with different map data in accordance with some embodiments.
  • the user interfaces in FIGS. 14A-14H are used to illustrate the processes described below, including the processes in FIGS. 15 and 16.
  • system or computer readable medium contains instructions for performing conditional operations that require that one or more conditions be satisfied before the operations occur.
  • a person having ordinary skill in the art would also understand that, similar to a method with conditional steps, a system or computer readable storage medium can repeat the steps of a method as many times as are needed to ensure that all of the conditional steps have been performed.
  • the device is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
  • the device is a portable, movable, and/or mobile electronic device (e.g., a processor, a smart phone, a smart watch, a tablet, a fitness tracking device, a laptop, a head-mounted display (HMD) device, a communal device, a vehicle, a media device, a smart speaker, a smart display, a robot, a television and/or a personal computing device).
  • the electronic device is a computer system that is in communication with a display component (e.g., by wireless or wired communication).
  • the display component may be integrated into the computer system or may be separate from the computer system. Additionally, the display component may be configured to provide visual output to a display (e.g., a liquid crystal display, an OLED display, or a CRT display).
  • “displaying” content includes causing to display the content (e.g., video data rendered or decoded by a display controller) by transmitting, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display component to visually produce the content.
  • visual output is any output that is capable of being perceived by the human eye, including, but not limited to, images, videos, graphs, charts, and other graphical representations of data.
  • the electronic device is a computer system that is in communication with an audio generation component (e.g., by wireless or wired communication).
  • the audio generation component may be integrated into the computer system or may be separate from the computer system. Additionally, the audio generation component may be configured to provide audio output. Examples of an audio generation component include a speaker, a home theater system, a soundbar, a headphone, an earphone, an earbud, a television speaker, an augmented reality headset speaker, an audio jack, an optical audio output, a Bluetooth audio output, and/or an HDMI audio output.
  • audio output is any output that is capable of being perceived by the human ear, including, but not limited to, sound waves, music, speech, and/or other audible representations of data.
  • the electronic device optionally includes one or more other input and/or output devices, such as physical user-interface devices (e.g., a physical keyboard, a mouse, and/or a joystick).
  • FIG. 1 illustrates an example system 100 for implementing techniques described herein.
  • System 100 can perform any of the methods described in FIGS. 3 and/or 5 (e.g., processes 300 and/or 500) and/or portions of these methods.
  • system 100 includes various components, such as processor(s) 103, RF circuitry(ies) 105, memory(ies) 107, sensors 156 (e.g., image sensor(s), orientation sensor(s), location sensor(s), heart rate monitor(s), temperature sensor(s)), input device(s) 158 (e.g., camera(s) (e.g., a periscope camera, a telephoto camera, a wide-angle camera, and/or an ultra-wide-angle camera), depth sensor(s), microphone(s), touch sensitive surface(s), hardware input mechanism(s), and/or rotatable input mechanism(s)), mobility components (e.g., actuator(s) (e.g., pneumatic actuator(s), hydraulic actuator(s), and/or electric actuator(s)), motor(s), wheel(s), movable base(s), rotatable component(s), translation component(s), and/or rotatable base(s)), and output device(s) 160 (e.g., display(s), speaker(s), light(s), and/or haptic output device(s)).
  • system 100 is a mobile and/or movable device (e.g., a tablet, a smart phone, a laptop, a head-mounted display (HMD) device, and/or a smartwatch).
  • system 100 is a desktop computer, an embedded computer, and/or a server.
  • processor(s) 103 includes one or more general processors, one or more graphics processors, and/or one or more digital signal processors.
  • memory(ies) 107 is one or more non-transitory computer-readable storage mediums (e.g., flash memory and/or random-access memory) that store computer-readable instructions configured to be executed by processor(s) 103 to perform techniques described herein.
  • RF circuitry(ies) 105 includes circuitry for communicating with electronic devices and/or networks (e.g., the Internet, intranets, and/or a wireless network, such as cellular networks and wireless local area networks (LANs)). In some embodiments, RF circuitry(ies) 105 includes circuitry for communicating using near-field communication and/or short-range communication, such as Bluetooth® or Ultra-wideband.
  • display(s) 121 includes one or more monitors, projectors, and/or screens.
  • display(s) 121 includes a first display for displaying images to a first eye of a user and a second display for displaying images to a second eye of the user.
  • corresponding images can be simultaneously displayed on the first display and the second display.
  • the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides the user with the illusion of depth of the objects on the displays.
  • display(s) 121 is a single display.
  • corresponding images are simultaneously displayed in a first area and a second area of the single display for each eye of the user.
  • the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides a user with the illusion of depth of the objects on the single display.
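As an illustration of the parallax technique described above, the following is a minimal sketch (the Vec3 type and interpupillary distance value are illustrative assumptions) of deriving the two per-eye viewpoints from which the same objects would be rendered to produce the depth illusion.

```swift
// Minimal sketch; Vec3 and the IPD value are illustrative.
struct Vec3 { var x, y, z: Float }

func eyeViewpoints(head: Vec3, interpupillaryDistance: Float = 0.063) -> (left: Vec3, right: Vec3) {
    // Offset each eye half the interpupillary distance along the head's x-axis.
    let half = interpupillaryDistance / 2
    let left  = Vec3(x: head.x - half, y: head.y, z: head.z)
    let right = Vec3(x: head.x + half, y: head.y, z: head.z)
    return (left, right)
}

// Rendering the same scene from `left` and `right` (to two displays, or to two
// areas of one display) produces the parallax that conveys depth.
let eyes = eyeViewpoints(head: Vec3(x: 0, y: 1.6, z: 0))
```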
  • system 100 includes touch-sensitive surface(s) 115 for receiving user inputs, such as tap inputs and swipe inputs.
  • display(s) 121 and touch-sensitive surface(s) 115 form touch-sensitive display(s).
  • sensor(s) 156 includes sensors for detecting various conditions.
  • sensor(s) 156 includes orientation sensors (e.g., orientation sensor(s) 111) for detecting orientation and/or movement of platform 150.
  • system 100 uses orientation sensors to track changes in the location and/or orientation (sometimes collectively referred to as position) of system 100, such as with respect to physical objects in the physical environment.
  • sensor(s) 156 includes one or more gyroscopes, one or more inertial measurement units, and/or one or more accelerometers.
  • sensor(s) 156 includes a global positioning system (GPS) sensor for detecting a GPS location of platform 150.
  • sensor(s) 156 includes a radar system, LIDAR system, sonar system, image sensors (e.g., image sensor(s) 109, visible light image sensor(s), and/or infrared sensor(s)), depth sensor(s), rangefinder(s), and/or motion detector(s).
  • sensor(s) 156 includes sensors that are in an interior portion of system 100 and/or sensors that are on an exterior of system 100.
  • system 100 uses sensor(s) 156 (e.g., interior sensors) to detect a presence and/or state (e.g., location and/or orientation) of a passenger in the interior portion of system 100.
  • system 100 uses sensor(s) 156 (e.g., external sensors) to detect a presence and/or state of an object external to system 100.
  • system 100 uses sensor(s) 156 to receive user inputs, such as hand gestures and/or other air gestures.
  • system 100 uses sensor(s) 156 to detect the location and/or orientation of system 100 in the physical environment.
  • system 100 uses sensor(s) 156 to navigate system 100 along a planned route, around obstacles, and/or to a destination location.
  • sensor(s) 156 include one or more sensors for identifying and/or authenticating a user of system 100, such as a fingerprint sensor and/or facial recognition sensor.
  • image sensor(s) includes one or more visible light image sensors, such as charge-coupled device (CCD) sensors and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects.
  • image sensor(s) includes one or more infrared (IR) sensor(s), such as a passive IR sensor or an active IR sensor, for detecting infrared light.
  • an active IR sensor can include an IR emitter, such as an IR dot emitter, for emitting infrared light.
  • image sensor(s) includes one or more camera(s) configured to capture movement of physical objects.
  • image sensor(s) includes one or more depth sensor(s) configured to detect the distance of physical objects from system 100.
  • system 100 uses CCD sensors, cameras, and depth sensors in combination to detect the physical environment around system 100.
  • image sensor(s) includes a first image sensor and a second image sensor different from the first image sensor.
  • system 100 uses image sensor(s) to receive user inputs, such as hand gestures and/or other air gestures.
  • system 100 uses image sensor(s) to detect the location and/or orientation of system 100 in the physical environment.
  • system 100 uses orientation sensor(s) for detecting orientation and/or movement of system 100.
  • system 100 can use orientation sensor(s) to track changes in the location and/or orientation of system 100, such as with respect to physical objects in the physical environment.
  • orientation sensor(s) includes one or more gyroscopes, one or more inertial measurement units, and/or one or more accelerometers.
  • system 100 uses microphone(s) to detect sound from one or more users and/or the physical environment of the one or more users.
  • microphone(s) includes an array of microphones (including a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space (e.g., inside system 100 and/or outside of system 100) of the physical environment.
  • input device(s) 158 includes one or more mechanical and/or electrical devices for detecting input, such as button(s), slider(s), knob(s), switch(es), remote control(s), joystick(s), touch-sensitive surface(s), keypad(s), microphone(s), and/or camera(s).
  • input device(s) 158 include one or more input devices inside system 100.
  • input device(s) 158 include one or more input devices (e.g., a touch- sensitive surface and/or keypad) on an exterior of system 100.
  • output device(s) 160 include one or more devices, such as display(s), monitor(s), projector(s), speaker(s), light(s), and/or haptic output device(s).
  • output device(s) 160 includes one or more external output devices, such as external display screen(s), external light(s), and/or external speaker(s).
  • output device(s) 160 includes one or more internal output devices, such as internal display screen(s), internal light(s), and/or internal speaker(s).
  • environmental controls 162 includes mechanical and/or electrical systems for monitoring and/or controlling conditions of an internal portion (e.g., cabin) of system 100.
  • environmental controls 162 includes fan(s), heater(s), air conditioner(s), and/or thermostat(s) for controlling the temperature and/or airflow within the interior portion of system 100.
  • mobility component(s) includes mechanical and/or electrical components that enable a platform to move and/or assist in the movement of the platform.
  • mobility system 164 includes powertrain(s), drivetrain(s), motor(s) (e.g., an electrical motor), engine(s), power source(s) (e.g., battery(ies)), transmission(s), suspension system(s), speed control system(s), and/or steering system(s).
  • one or more elements of mobility component(s) are configured to be controlled autonomously or manually (e.g., via system 100 and/or input device(s) 158).
  • system 100 performs monetary transactions with or without another computer system.
  • system 100 or another computer system associated with and/or in communication with system 100 (e.g., via a user account described below), is associated with a payment account of a user, such as a credit card account or a checking account.
  • system 100 can transmit a key to an entity from which goods and/or services are being purchased that enables the entity to charge the payment account for the transaction.
  • system 100 stores encrypted payment account information and transmits this information to entities from which goods and/or services are being purchased to complete transactions.
  • System 100 optionally conducts other transactions with other systems, computers, and/or devices. For example, system 100 conducts transactions to unlock another system, computer, and/or device and/or to be unlocked by another system, computer, and/or device. Unlocking transactions optionally include sending and/or receiving one or more secure cryptographic keys using, for example, RF circuitry(ies) 105.
  • system 100 is capable of communicating with other computer systems and/or electronic devices.
  • system 100 can use RF circuitry(ies) 105 to access a network connection that enables transmission of data between systems for the purpose of communication.
  • Example communication sessions include phone calls, e-mails, SMS messages, and/or videoconferencing communication sessions.
  • videoconferencing communication sessions include transmission and/or receipt of video and/or audio data between systems participating in the videoconferencing communication sessions, including system 100.
  • system 100 captures video and/or audio content using sensor(s) 156 to be transmitted to the other system(s) in the videoconferencing communication sessions using RF circuitry(ies) 105.
  • system 100 receives, using the RF circuitry(ies) 105, video and/or audio from the other system(s) in the videoconferencing communication sessions, and presents the video and/or audio using output device(s) 160, such as display(s) 121 and/or speaker(s).
  • the transmission of audio and/or video between systems is near real-time, such as being presented to the other system(s) with a delay of less than 0.1, 0.5, 1, or 3 seconds from the time of capturing a respective portion of the audio and/or video.
  • the system 100 generates tactile (e.g., haptic) outputs using output device(s) 160.
  • output device(s) 160 generates the tactile outputs by displacing a moveable mass relative to a neutral position.
  • tactile outputs are periodic in nature, optionally including frequency(ies) and/or amplitude(s) of movement in two or three dimensions.
  • system 100 generates a variety of different tactile outputs differing in frequency(ies), amplitude(s), and/or duration/number of cycle(s) of movement included.
  • tactile output pattern(s) includes a start buffer and/or an end buffer during which the movable mass gradually speeds up and/or slows down at the start and/or at the end of the tactile output, respectively.
  • tactile outputs have a corresponding characteristic frequency that affects a “pitch” of a haptic sensation that a user feels. For example, higher frequency(ies) corresponds to faster movement(s) by the moveable mass whereas lower frequency(ies) corresponds to slower movement(s) by the moveable mass.
  • tactile outputs have a corresponding characteristic amplitude that affects a “strength” of the haptic sensation that the user feels. For example, higher amplitude(s) corresponds to movement over a greater distance by the moveable mass, whereas lower amplitude(s) corresponds to movement over a smaller distance by the moveable mass.
  • the “pitch” and/or “strength” of a tactile output varies over time.
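As a worked illustration of the tactile output patterns described above, the sketch below (all names and values are illustrative assumptions) models mass displacement over time as a sinusoid whose frequency sets the perceived "pitch" and whose amplitude sets the perceived "strength," with start and end buffers during which the moveable mass gradually speeds up and slows down.

```swift
import Foundation

// Sketch of a tactile output pattern as mass displacement over time; all
// names and values are illustrative. Frequency sets the perceived "pitch",
// amplitude the "strength", and the buffers ramp the mass up and down.
func tactileDisplacement(t: Double, duration: Double, frequency: Double,
                         amplitude: Double, buffer: Double) -> Double {
    // Linear start/end buffers: the envelope rises over the first `buffer`
    // seconds and falls over the last `buffer` seconds of the pattern.
    let rampIn = min(t / buffer, 1.0)
    let rampOut = min((duration - t) / buffer, 1.0)
    let envelope = max(0.0, min(rampIn, rampOut))
    return amplitude * envelope * sin(2.0 * Double.pi * frequency * t)
}

// Example: a 0.1 s pattern at 170 Hz with 10 ms buffers, sampled at 1 kHz.
let samples = stride(from: 0.0, through: 0.1, by: 0.001).map {
    tactileDisplacement(t: $0, duration: 0.1, frequency: 170,
                        amplitude: 1.0, buffer: 0.01)
}
```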
  • tactile outputs are distinct from movement of system 100.
  • system 100 can include tactile output device(s) that move a moveable mass to generate tactile output and can include other moving part(s), such as motor(s), wheel(s), axle(s), control arm(s), and/or brakes that control movement of system 100.
  • while movement and/or cessation of movement of system 100 generates vibrations and/or other physical sensations in some situations, these vibrations and/or other physical sensations are distinct from tactile outputs.
  • system 100 generates tactile output independent from movement of system 100. For example, system 100 can generate a tactile output without accelerating, decelerating, and/or moving system 100 to a new position.
  • system 100 detects gesture input(s) made by a user.
  • gesture input(s) includes touch gesture(s) and/or air gesture(s), as described herein.
  • touch-sensitive surface(s) 115 identify touch gestures based on contact patterns (e.g., different intensities, timings, and/or motions of objects touching or nearly touching touch-sensitive surface(s) 115). Thus, touch-sensitive surface(s) 115 detect a gesture by detecting a respective contact pattern.
  • detecting a finger-down event followed by detecting a finger-up (e.g., liftoff) event at (e.g., substantially) the same position as the finger-down event (e.g., at the position of a user interface element) can correspond to detecting a tap gesture on the user interface element.
  • detecting a finger-down event followed by detecting movement of a contact, and subsequently followed by detecting a finger-up (e.g., liftoff) event can correspond to detecting a swipe gesture. Additional and/or alternative touch gestures are possible.
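The contact-pattern classification described above can be sketched as follows. ContactEvent, the classifier, and the movement tolerance are hypothetical; a real implementation would also weigh timing and intensity of contacts.

```swift
import Foundation

// Sketch of classifying a contact pattern; ContactEvent and the tolerance
// are hypothetical, and real detection also weighs timing and intensity.
struct ContactEvent {
    var x: Double
    var y: Double
    var timestamp: TimeInterval
}

enum TouchGesture { case tap, swipe }

func classify(fingerDown: ContactEvent, fingerUp: ContactEvent,
              movementTolerance: Double = 10.0) -> TouchGesture {
    let dx = fingerUp.x - fingerDown.x
    let dy = fingerUp.y - fingerDown.y
    let distance = (dx * dx + dy * dy).squareRoot()
    // Liftoff at (substantially) the same position as touchdown: a tap.
    // Touchdown, movement, then liftoff: a swipe.
    return distance <= movementTolerance ? .tap : .swipe
}
```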
  • an air gesture is a gesture that a user performs without touching input device(s) 158.
  • air gestures are based on detected motion of a portion (e.g., a hand, a finger, and/or a body) of a user through the air.
  • air gestures include motion of the portion of the user relative to a reference.
  • Example references include a distance of a hand of a user relative to a physical object, such as the ground, an angle of an arm of the user relative to the physical object, and/or movement of a first portion (e.g., hand or finger) of the user relative to a second portion (e.g., shoulder, another hand, or another finger) of the user.
  • detecting an air gesture includes detecting absolute motion of the portion of the user, such as a tap gesture that includes movement of a hand in a predetermined pose by a predetermined amount and/or speed, or a shake gesture that includes a predetermined speed or amount of rotation of a portion of the user.
  • detecting one or more inputs includes detecting speech of a user.
  • system 100 uses one or more microphones of input device(s) 158 to detect the user speaking one or more words.
  • system 100 parses and/or communicates information to one or more other systems to determine contents of the speech of the user, including identifying words and/or obtaining a semantic understanding of the words.
  • processor(s) 103 can be configured to perform natural language processing to detect one or more words and/or determine a likely meaning of the one or more words in the sequence spoken by the user. Additionally or alternatively, in some embodiments, the system 100 determines the meaning of the one or more words in the sequence spoken based upon a context of the user determined by the system 100.
  • system 100 outputs spatial audio via output device(s) 160.
  • spatial audio is output in a particular position.
  • system 100 can play a notification chime having one or more characteristics that cause the notification chime to be generated as if emanating from a first position relative to a current viewpoint of a user (e.g., “spatializing” and/or “spatialization” including audio being modified in amplitude, filtered, and/or delayed to provide a perceived spatial quality to the user).
  • system 100 presents visual and/or audio feedback indicating a position of a user relative to a current viewpoint of another user, thereby informing the other user about an updated position of the user.
  • playing audio corresponding to a user includes changing one or more characteristics of audio obtained from another computer system to mimic an effect of placing an audio source that generates the playback of audio within a position corresponding to the user, such as a position within a three-dimensional environment that the user moves to, spawns at, and/or is assigned to.
  • a relative magnitude of audio at one or more frequencies and/or groups of frequencies is changed, one or more filters are applied to audio (e.g., directional audio filters), and/or the magnitude of audio provided via one or more channels are changed (e.g., increased or decreased) to create the perceived effect of the physical audio source.
  • the simulated position of the simulated audio source relative to a floor of the three-dimensional environment matches an elevation of a head of a participant providing audio that is generated by the simulated audio source, or is a predetermined one or more elevations relative to the floor of the three-dimensional environment.
  • in accordance with a determination that the position of the user will correspond to a second position, different from the first position, and that one or more first criteria are satisfied, system 100 presents feedback including generating audio as if emanating from the second position.
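One ingredient of the spatialization described above, changing per-channel magnitude, can be sketched as constant-power stereo panning. This is an illustrative simplification with hypothetical names; real spatializers also apply directional filters and delays.

```swift
import Foundation

// Sketch of constant-power stereo panning, one way to change per-channel
// magnitude so a sound is perceived as emanating from a given direction.
// All names are illustrative; real spatializers also filter and delay audio.
func stereoGains(sourceAngleDegrees: Double) -> (left: Double, right: Double) {
    // Map -90° (full left) ... +90° (full right) onto a quarter circle so
    // that left² + right² == 1 at every pan position.
    let clamped = max(-90.0, min(90.0, sourceAngleDegrees))
    let theta = (clamped + 90.0) / 180.0 * (Double.pi / 2)
    return (left: cos(theta), right: sin(theta))
}

// Example: a notification chime placed 45° to the listener's right.
let gains = stereoGains(sourceAngleDegrees: 45)
```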
  • system 100 communicates with one or more accessory devices.
  • one or more accessory devices is integrated with system 100.
  • one or more accessory devices is external to system 100.
  • system 100 communicates with accessory device(s) using RF circuitry(ies) 105 and/or using a wired connection.
  • system 100 controls operation of accessory device(s), such as door(s), window(s), lock(s), speaker(s), light(s), and/or camera(s).
  • accessory device(s) such as door(s), window(s), lock(s), speaker(s), light(s), and/or camera(s).
  • system 100 can control operation of a motorized door of system 100.
  • system 100 can control operation of a motorized window included in system 100.
  • accessory device(s) such as remote control(s) and/or other computer systems (e.g., smartphones, media players, tablets, computers, and/or wearable devices) functioning as input devices control operations of system 100.
  • for example, a wearable device (e.g., a smart watch) can function as an input device that controls operations of system 100.
  • system 100 acts as an input device to control operations of another system, device, and/or computer, such as system 100 functioning as a key to initiate operation of an actuation system of a platform associated with another system, device, and/or computer.
  • digital assistant(s) help a user perform various functions using system 100.
  • a digital assistant can provide weather updates, set alarms, and perform searches locally and/or using a network connection (e.g., the Internet) via a natural -language interface.
  • a digital assistant accepts requests at least partially in the form of natural language commands, narratives, requests, statements, and/or inquiries.
  • a user requests an informational answer and/or performance of a task using the digital assistant.
  • the digital assistant in response to receiving the question “What is the current temperature?,” the digital assistant answers “It is 30 degrees.”
  • the digital assistant in response to receiving a request to perform a task, such as “Please invite my family to dinner tomorrow,” the digital assistant can acknowledge the request by playing spoken words, such as “Yes, right away,” and then send the requested calendar invitation on behalf of the user to each family member of the user listed in a contacts list for the user.
  • the digital assistant engages with the user in a sustained conversation involving multiple exchanges of information over a period of time. Other ways of interacting with a digital assistant are possible to request performance of a task and/or request information.
  • the digital assistant can respond to the user in other forms, e.g., displayed alerts, text, videos, animations, music, etc.
  • the digital assistant includes a client-side portion executed on system 100 and a server-side portion executed on a server in communication with system 100.
  • the client-side portion can communicate with the server through a network connection using RF circuitry(ies) 105.
  • the client-side portion can provide client-side functionalities, input and/or output processing and/or communication with the server, for example.
  • the server-side portion provides server-side functionalities for any number of client-side portions of multiple systems.
  • system 100 is associated with one or more user accounts.
  • system 100 saves and/or encrypts user data, including files, settings, and/or preferences in association with particular user accounts.
  • user accounts are password-protected and system 100 requires user authentication before accessing user data associated with an account.
  • user accounts are associated with other system(s), device(s), and/or server(s).
  • associating one user account with multiple systems enables those systems to access, update, and/or synchronize user data associated with the user account.
  • the systems associated with a user account can have access to purchased media content, a contacts list, communication sessions, payment information, saved passwords, and other user data.
  • user accounts provide a secure mechanism for a customized user experience.
  • FIGS. 2A-2D illustrate exemplary user interfaces for navigating a first device with respect to a second device in accordance with some embodiments.
  • the user interfaces in FIGS. 2A-2D are used to illustrate the processes described below, including the processes in FIG. 3.
  • user input is illustrated using a circular shape with dotted lines (e.g., touch user input 214 in FIG. 2A).
  • touch user input 214 in FIG. 2A can be any type of user input, including a tap on a touch-sensitive screen, a button press, a gaze toward a control, a voice request with an identification of a control, a gesture made by a user and captured by a camera, and/or any other affirmative action performed by a user.
  • a single representation of a user input in a figure (1) includes one or more different types of user input and/or (2) represents different types of user input to result in different operations.
  • a single illustrated user input can be a tap input, a tap-and-hold input, and/or a swipe gesture.
  • FIG. 2A illustrates user interface 210 for navigating a first device with respect to a second device using computer system 200 in accordance with some embodiments.
  • computer system 200 includes a touchscreen display 202.
  • computer system 200 is, or includes one or more of the features of, system 100 described above.
  • computer system 200 displays user interface 210 on touchscreen display 202.
  • User interface 210 includes navigation control user interface element 212.
  • User interface 210 is a lock screen interface, displaying time and date, as well as navigation control user interface element 212 presented as an overlay or notification.
  • a user interface that includes navigation control user interface element 212 can include a maps or navigation application interface (e.g., such that navigation control user interface element 212 is a native interface inside of such application), or any other application or operating system interface (e.g., overlaid as a notification).
  • Navigation control user interface element 212 includes an indication that another device (a “first” device in this example) is navigating with respect to computer system 200 (a “second” device in this example) where it states that: “Device is being navigated with respect to you.”
  • the use of the phrase “you” indicates that the first device is navigating with respect to the current user of computer system 200 (e.g., based on the user being logged in), or is navigating with respect to the current device on which the notification is being displayed (e.g., computer system 200, regardless of user affiliation).
  • Navigation control user interface element 212 can include one or more controls (e.g., affordances, buttons, and/or icons) or be configured to receive user input some other way, for causing one or more actions.
  • navigation control user interface element 212 can receive user input to cause an action.
  • computer system 200 receives a touch user input 214 (e.g., a tap, a tap-and-hold, or a hard press) on an operative portion (e.g., the displayed area) of navigation control user interface element 212.
  • the first device is associated with a different user than the second device.
  • the first device can have been instructed to navigate with respect to the second device.
  • the instruction originates from the first device (e.g., by a user of the first device (e.g., “follow that device”)), and/or the second device (e.g., by a user of the second device (e.g., “follow me”)).
  • the instruction can originate from another device (e.g., third device) that is not the first or second device.
  • the second device can belong to a member of a particular group, (e.g., of devices (e.g., “my devices”), of users (e.g., family group, friend group, or any arbitrarily defined group), or any other permitted user that the first device user would like to navigate with respect to (e.g., a recent contact, a message recipient or sender, a contact that has shared their location, or the like)).
  • the first device is associated with the same user as the second device.
  • the user of the second device can instruct one of their own devices (e.g., associated with their same user account) that has the ability to change position (e.g., a toy and/or a drone) to navigate to the user’s current device (e.g., smartphone) location or the location of another device.
  • Navigating with respect to another device can include providing and/or receiving directions to (or being led to) a location corresponding to the other device.
  • the location corresponding to the other device is the location of the other device (e.g., the same location).
  • the location corresponding to the other device is a location within a predetermined distance from the other device (e.g., a different location, such as a safe area near the other device).
  • the first device can navigate to a location adjacent to the second device, so that the devices are close enough that a user could go to the first device when needed but not so close that the first device is on top of or collides with the user (e.g., holding the second device).
  • the device being navigated can receive location information and/or step-by-step instructions to the other device, so that it will end up at the location of the device being navigated to.
  • the device being navigated to can provide location information and/or step-by-step instructions that periodically update so that the device being navigated can follow and/or eventually reach the device being navigated to.
  • the device being navigated can receive updated location information of the target device by direct communication (e.g., with each other) or via one or more intermediate systems (e.g., a notification server).
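The follow behavior described in the preceding passages can be sketched as a periodic recomputation: each location update for the target device yields a new destination a safe standoff distance short of that device. All types and values below are illustrative assumptions, not the specification's implementation.

```swift
// Sketch of following with a standoff distance; all names and values are
// illustrative. Each periodic location update for the target device
// (received directly or via an intermediary, such as a notification server)
// yields a recomputed destination short of the target.
struct Location { var x: Double; var y: Double }

func destination(from follower: Location, to target: Location,
                 standoff: Double) -> Location {
    let dx = target.x - follower.x
    let dy = target.y - follower.y
    let distance = (dx * dx + dy * dy).squareRoot()
    // Already within the safe radius: hold position rather than close in.
    guard distance > standoff else { return follower }
    // Aim at a point `standoff` units short of the target on the approach line.
    let scale = (distance - standoff) / distance
    return Location(x: follower.x + dx * scale, y: follower.y + dy * scale)
}

// Two updates arrive as the target moves; the follower re-aims each time.
var position = Location(x: 0, y: 0)
for update in [Location(x: 10, y: 0), Location(x: 10, y: 8)] {
    position = destination(from: position, to: update, standoff: 2.0)
}
```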
  • FIG. 2B illustrates computer system 200 in response to receiving touch user input 214.
  • a user of computer system 200 would like to control navigation of the first device to navigate with respect to a different, third device (e.g., not computer system 200).
  • computer system 200 displays navigation control user interface elements 216 and 218.
  • computer system 200 alters the display of user interface 210 by dimming or darkening in order to emphasize that action is being taken with respect to interface elements 212, 216, and 218.
  • Navigation control user interface element 216 includes an indication that navigation of the first device can be changed to another device (e.g., computer system), where it states: “Change navigation to Kyle”.
  • the other device (a "third" device in this example) is associated with the user named "Kyle" in this example.
  • user interface element 216 indicates an option to transfer navigation to another particular device.
  • navigation control user interface element 216 can indicate or provide a plurality of options for selecting one of a group of devices to which navigation can be transferred (e.g., by stating instead “Change navigation to another user or device,” which when selected can display a plurality of user or device options).
  • the indication that navigation of the first device can be changed to another device can be an icon and/or identifier of a user account (e.g., corresponding to a contact from a contacts application and/or an address book application).
  • the indication that navigation of the first device can be changed to another device can be an icon and/or identifier of a specific device (e.g., determined using a communication channel, such as an identifier of a device that is broadcast via a Bluetooth channel to other devices when in range).
  • information used for determining another device is retrieved from one or more local and/or remote resources (e.g., from a cloud storage service and/or a location service).
  • User interface 210 also includes navigation control user interface element 218, which includes an indication that navigation with respect to the second device can be stopped, where it states: “Stop navigating with respect to you”.
  • “you” indicates that the current device is being used as the navigation target for the first device.
  • user input on navigation control user interface element 218 can cause navigation with respect to computer system 200 to stop (e.g., and display of interface elements 212, 216, and 218 to cease).
  • computer system 200 receives a touch user input 220 (e.g., a tap, a tap-and-hold, or a hard press) on an operative portion (e.g., any portion in this example) of navigation control user interface element 216.
  • FIG. 2C illustrates computer system 200 in response to receiving touch user input 220.
  • a user of computer system 200 would like to transfer the first device to navigate with respect to a different, third device (e.g., not computer system 200).
  • computer system 200 displays navigation control user interface element 222 and ceases displaying navigation control user interface element 212.
  • computer system 200 causes the first device to cease navigating with respect to computer system 200 and begin navigating with respect to the third device.
  • as illustrated in FIG. 2C, navigation control user interface element 222 includes an indication that navigation of the first device has been changed to another device (e.g., another computer system), where it states: “Device is being navigated with respect to Kyle.”
  • the other device is associated with the user identified as “Kyle.”
  • the first device and the second device are associated with one or more user accounts (e.g., the same account and/or different accounts) that are not the same as (and do not include) the Kyle user account.
  • the Kyle account corresponds to a different user account than that of the owner of the first device and the second device.
  • navigation with respect to the third device will result in navigating with respect to a device corresponding to (e.g., owned and/or managed by) a different user account than that of the first device and second device.
  • designating the device associated with Kyle as the target of the first device’s navigation results in the user account of Kyle and/or Kyle’s device being designated a “guest” user/device of the first device.
  • Kyle’s device when Kyle’s device is made the target of navigation, Kyle’s device can be granted (e.g., by the first device and/or by the second device, or users associated therewith) the right to perform one or more operations for controlling navigation of the first device.
  • the third device can be granted one or more of the abilities to: cease navigation with respect to themselves/their device (e.g., “don’t navigate with respect to me”), return the navigation target to the user and/or device that sent it to them (e.g., “navigate with respect to the second device again”), or assign navigation to another user or associated device (e.g., “don’t navigate with respect to me, navigate with respect to a fourth (different) device instead”).
  • This grant of rights to the third device can be temporary (e.g., expires after a predefined amount of time, or after a condition occurs or is met).
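The temporary guest grant described above can be sketched as a set of rights paired with an expiry. The right names, types, and expiry handling below are hypothetical, not from the specification.

```swift
import Foundation

// Sketch of a temporary "guest" grant; the right names and types are
// hypothetical, not from the specification.
enum NavigationRight: Hashable {
    case stopNavigatingToMe    // "don't navigate with respect to me"
    case returnTargetToSender  // hand the target back to the device that sent it
    case reassignTarget        // assign navigation to another (fourth) device
}

struct GuestGrant {
    let rights: Set<NavigationRight>
    let grantedAt: Date
    let validFor: TimeInterval  // the predefined expiry

    // A right is usable only while the grant has not yet expired.
    func allows(_ right: NavigationRight, at now: Date = Date()) -> Bool {
        guard now.timeIntervalSince(grantedAt) < validFor else { return false }
        return rights.contains(right)
    }
}
```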
  • the second device was not designated a “guest” because it corresponds to the same user account as the first device (and/or the user account and/or the second device are already established as an administrator (e.g., having a non-guest privilege level) for the first device).
  • the first, second, and/or third devices can each be different types of devices.
  • for example, the second device (computer system 200) is a smartphone, the first device is a wearable device (that moves via user movement), and the third device is a laptop computer.
  • computer system 200 receives a touch user input 224 (e.g., a tap, a tap-and-hold, or a hard press) on an operative portion (e.g., any portion in this example) of user interface element 222.
  • FIG. 2D illustrates computer system 200 in response to receiving touch user input 224.
  • computer system 200 displays navigation control user interface elements 226 and 228.
  • computer system 200 alters the display of user interface 210 by dimming or darkening in order to emphasize that action is being taken with respect to interface elements 222, 226, and 228.
  • Navigation control user interface element 226 includes an indication that the navigation target of the first device can be changed (back) to the second device (e.g., computer system 200), where it states: “Change navigation to you.”
  • a user input (such as 224) on navigation control user interface element 226 would cause computer system 200 to return to the state shown in FIG. 2A, where it displays navigation control user interface element 212 indicating that the first device is navigating with respect to computer system 200 (e.g., represented as “you”).
  • Navigation control user interface element 228 includes an indication that navigation of the first device with respect to the third device (e.g., computer system 200) can be stopped, where it states: “Stop navigating with respect to Kyle”. For example, a user input (such as 224) on user interface element 228 would cease navigation of the first device with respect to the third device associated with Kyle (e.g., navigation instructions would cease at the first device). For example, in response to user input on user interface element 228, computer system 200 can display user interface 210 without displaying navigation control user interface element 212 (e.g., just display a normal lock screen).
  • FIG. 3 is a flow diagram illustrating a method for navigating a first device with respect to a second device using a computer system in accordance with some embodiments.
  • Process 300 is performed at a computer system (e.g., system 100).
  • the computer system is in communication with a display component and one or more input devices.
  • Some operations in process 300 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • process 300 provides an intuitive way for navigating a first device with respect to a second device.
  • the method reduces the cognitive burden on a user for navigating a first device with respect to a second device, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to configure navigation of a device faster and more efficiently conserves power and increases the time between battery charges.
  • process 300 is performed at a computer system (e.g., 200) that is in communication with a display component (e.g., 202) (e.g., a display screen and/or a touch-sensitive display) and one or more input devices (e.g., 202) (e.g., a physical input mechanism, a camera, a touch-sensitive display, a microphone, and/or a button).
  • the computer system is a watch, a phone, a tablet, a processor, a head-mounted display (HMD) device, and/or a personal computing device.
  • the computer system is in communication with one or more output devices (e.g., a display screen, a touch-sensitive display, a haptic output device, and/or a speaker).
  • the computer system displays (302), via the display component, a first indication (e.g., 212 of FIG. 2A) that a first device (e.g., the device referenced in 212 of FIGS. 2A-2D) is navigating with respect to a second device (e.g., 200) different from the first device.
  • the first indication is displayed on a lock screen of the computer system (e.g., a user interface of the computer system that is configured to allow fewer operations to be performed than an unlocked screen of the computer system) (e.g., the lock screen is displayed when the computer system is in a locked state (e.g., the computer system is powered on and operational but ignores most, if not all, input)).
  • the first indication is displayed in a user interface of a mapping and/or navigation application.
  • the first device is different from the computer system.
  • the second device is the computer system.
  • the second device is different from the computer system.
  • the computer system is logged into a first user account.
  • the first device is logged into the first user account.
  • the first device is logged into a user account different from the first user account.
  • the second device is logged into the first user account.
  • the second device is logged into a user account different from the first user account.
  • navigating with respect to the second device includes navigating to locations corresponding to a current location of the second device as the second device moves.
  • navigating with respect to the second device includes following the second device.
  • the computer system receives (304), via the one or more input devices, a request (e.g., 220) to have the first device navigate with respect to a third device (e.g., device associated with Kyle referenced in 216 of FIG. 2B) instead of the second device (e.g., 200), wherein the third device is different from the first device (e.g., the device referenced in 212 of FIGS. 2A-2D).
  • the request is received after or while displaying the first indication.
  • the third device is different from the computer system.
  • the request corresponds to input directed to a user interface including the first indication.
  • the third device is logged into a user account different from the first user account.
  • the third device is logged into the first user account.
  • the computer system displays (306), via the display component, a second indication (e.g., 222 of FIGS. 2C and/or 2D) that the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) is navigating with respect to the third device (e.g., device associated with Kyle referenced in 216 of FIG. 2B).
  • the computer system forgoes navigating with respect to the second device in response to receiving the request.
  • the second indication is different from the first indication.
  • the second indication is displayed in the user interface of the mapping and/or navigation application.
  • Allowing the computer system to receive a request to cause the first device to navigate with respect to the third device instead of the second device while the first device is navigating with respect to the second device provides the user the ability to change navigation targets easily and/or efficiently without requiring additional steps to stop following the second device and/or establish a connection with the third device before initiating navigation with respect to the third device, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
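The flow of process 300 (display the first indication, receive the retargeting request, display the second indication) can be summarized in a small state sketch; all names and strings below are illustrative, not the specification's implementation.

```swift
// Hypothetical sketch of process 300's flow; names and strings are illustrative.
enum NavigationTarget { case secondDevice, thirdDevice }

struct NavigationUIState {
    var target: NavigationTarget

    // Steps 302/306: the displayed indication reflects the current target.
    var indication: String {
        switch target {
        case .secondDevice: return "Device is being navigated with respect to you."
        case .thirdDevice:  return "Device is being navigated with respect to Kyle."
        }
    }

    // Step 304: receiving the request switches the navigation target, after
    // which the indication above changes accordingly.
    mutating func handleRetargetRequest(to newTarget: NavigationTarget) {
        target = newTarget
    }
}

var state = NavigationUIState(target: .secondDevice)
state.handleRetargetRequest(to: .thirdDevice)
```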
  • the computer system in response to receiving the request (e.g., 220), the computer system ceases to display the first indication (e.g., 212). In some embodiments, in response to receiving the request, the computer system displays an indication that the first device is not navigating with respect to the second device different from the first device. In some embodiments, in response to receiving the request, the computer system displays an indication that the first device is navigating with respect to the third device different from the second device. Ceasing to display the first indication when switching from navigating with respect to the second device to the third device provides the user with feedback about the state of the computer system, thereby providing improved visual feedback to the user.
  • the computer system includes the second device (e.g., 200). In some embodiments, the computer system is the second device. In some embodiments, the computer system includes the first device. In some embodiments, the computer system is the first device. In some embodiments, the computer system is the second device and not the first device. In some embodiments, the computer system is not the first device or the second device.
  • in this context, the second device is the device with respect to which the first device ceases to navigate after the request is received.
  • receiving the request (e.g., 220) to have the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) navigate with respect to the third device (e.g., device associated with Kyle referenced in 216 of FIG. 2B) includes detecting input (e.g., 220) (e.g., a tap gesture, a long-press gesture, a verbal request and/or command, a physical button press, an air gesture, and/or a rotation of a physical input mechanism) directed to a control (e.g., 216) that includes an indication of the third device.
  • the indication includes an indication of a user associated with the third device.
  • Having the control (e.g., the control that causes the first device to navigate with respect to the third device instead of the second device) include the indication of the third device provides the user with feedback about the state of the first device and information for how the control will affect the first device, thereby providing additional control options without cluttering the user interface with additional displayed controls and/or providing improved visual feedback to the user.
  • the computer system displays, via the display component, a second control (e.g., 226) that includes an indication of the second device (e.g., 200), wherein the second control is different from the control (e.g., 216).
  • the computer system receives input (e.g., input on 226) (e.g., a tap gesture, a long-press gesture, a verbal request and/or command, a physical button press, an air gesture, and/or a rotation of a physical input mechanism) directed to the second control.
  • the computer system in response to receiving the input directed to the second control, displays, via the display component, a third indication (e.g., display navigation control user interface element 212 as in FIG. 2A) (e.g., the first indication or a different indication) that the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) is navigating with respect to the second device.
  • displaying the third indication optionally includes forgoing displaying the second indication.
  • Displaying the second control while the first device is navigating with respect to the third device provides the user the ability to change navigation targets easily and/or efficiently without requiring additional steps to stop following the third device and/or establish a connection with the second device before initiating navigation with respect to the second device, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
• In some embodiments, the computer system classifies the third device (e.g., the device associated with Kyle referenced in 216 of FIG. 2B) as a guest user (e.g., a user that is not associated with the first device and/or an account that is associated with the first device) of the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) (e.g., without classifying the third device as a guest user of the second device).
• In some embodiments, the second device is classified as a different type of user of the first device than a guest user.
• In some embodiments, classifying the third device as a guest user of the first device configures the third device to be able to perform one or more first operations with respect to the first device, wherein the second device is configured to be able to perform one or more second operations with respect to the first device, and wherein the one or more second operations include at least one operation different from the one or more first operations.
  • Classifying the third device as a guest user provides the user the ability to change navigation targets with different devices without needing to classify the different devices as administrators and/or take ownership of the first device, thereby improving security.
• In some embodiments, the third device (e.g., the device associated with Kyle referenced in 216 of FIG. 2B) is classified as the guest user of the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) for a predefined amount of time, and the third device is no longer classified as the guest user after the predefined amount of time. In some embodiments, the predefined amount of time is set by a non-guest user that is associated with the first device. Classifying the third device as a guest user for the predefined amount of time and no longer classifying the third device as the guest user after the predefined amount of time provides a time limit for such classification that prevents the third device from taking over the first device, thereby improving security.
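To make the time-limited guest classification concrete, the following is a minimal Swift sketch; the role names, operation set, and one-hour duration are illustrative assumptions, not part of the disclosed techniques.

```swift
import Foundation

// Hypothetical sketch of guest classification with a predefined expiry.
enum Operation: Hashable {
    case requestNavigation, viewStatus, changeOwner, configureSettings
}

enum Role {
    case owner
    case guest(expiresAt: Date)

    /// Operations a device with this role may perform on the first device.
    var permittedOperations: Set<Operation> {
        switch self {
        case .owner:
            return [.requestNavigation, .viewStatus, .changeOwner, .configureSettings]
        case .guest(let expiresAt):
            // A guest's permissions lapse after the predefined amount of time.
            return Date() < expiresAt ? [.requestNavigation, .viewStatus] : []
        }
    }
}

// Example: classify the third device as a guest for one hour.
let guestRole = Role.guest(expiresAt: Date().addingTimeInterval(3600))
print(guestRole.permittedOperations.contains(.requestNavigation)) // true until expiry
print(guestRole.permittedOperations.contains(.changeOwner))       // false: guests cannot take ownership
```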
• In some embodiments, the second device (e.g., 200) is a different type (e.g., a phone, a watch, a speaker, a device that can move without assistance (e.g., a device with a movement mechanism, such as a wheel, pulley, axle, engine, and/or motor), and/or a device that cannot move without assistance) of device than the first device. In some embodiments, the third device (e.g., the device associated with Kyle referenced in 216 of FIG. 2B) is a different type of device than the first device.
• In some embodiments, the second device includes one or more capabilities that the first device does not include. In some embodiments, the first device includes one or more capabilities that the second device does not include. In some embodiments, the first device is in communication with a component that the second device is not in communication with. In some embodiments, the second device is in communication with a component that the first device is not in communication with. In some embodiments, the third device includes one or more capabilities that the first device does not include. In some embodiments, the first device includes one or more capabilities (e.g., the first device is able to move without assistance while the third device is not able to move without assistance, the first device includes a component and/or sensor that the third device does not include, and/or the first device is able to output content of a particular type that the third device is not able to output) that the third device does not include. In some embodiments, the first device is in communication with a component that the third device is not in communication with. In some embodiments, the third device is in communication with a component that the first device is not in communication with. Having the second and third devices be different types of devices than the first device allows the user to use different types of devices as targets for navigation for the first device without all of the devices needing to be the same type of device, thereby reducing friction when controlling different devices and/or allowing personal devices to control other types of devices.
• Note that process 500 optionally includes one or more of the characteristics of the various methods described above with reference to process 300. For example, the respective device of process 500 can be the first device of process 300. For brevity, these details are not repeated below.
  • FIGS. 4A-4G illustrate exemplary user interfaces for configuring a device to navigate to a specific location in accordance with some embodiments.
  • FIG. 5 is a flow diagram illustrating methods for configuring a device to navigate to a specific location in accordance with some embodiments.
  • the user interfaces in FIGS. 4A-4G are used to illustrate the processes described below, including the processes in FIG. 5.
• In the figures, user input is illustrated using a circular shape with dotted lines (e.g., user input 416 in FIG. 4A). The user input can be any type of user input, including a tap on a touch-sensitive screen, a button press, a gaze toward a control, a voice request with an identification of a control, a gesture made by a user and captured by a camera, and/or any other affirmative action performed by a user.
• In some embodiments, a single representation of a user input in a figure (1) includes one or more different types of user input and/or (2) represents different types of user input that result in different operations. For example, a single illustrated user input can be a tap input, a tap-and-hold input, and/or a swipe gesture.
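As an illustration of how a single depicted input can map to different operations, consider the following hedged Swift sketch; the input cases and operation descriptions are hypothetical.

```swift
// Illustrative only: one on-screen affordance can respond to several
// input types, each mapped to a different operation.
enum UserInput {
    case tap, tapAndHold, swipe(deltaX: Double)
}

func handle(_ input: UserInput) -> String {
    switch input {
    case .tap:
        return "select control"
    case .tapAndHold:
        return "begin dragging representation"
    case .swipe(let deltaX):
        return deltaX > 0 ? "move representation right" : "move representation left"
    }
}

print(handle(.tap))                 // "select control"
print(handle(.swipe(deltaX: -12)))  // "move representation left"
```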
• FIG. 4A illustrates user interface 410 for configuring a device to navigate to a specific location within a physical environment using computer system 200 in accordance with some embodiments.
  • computer system 200 includes one or more of the features described above with respect to FIGS. 2A-2D.
  • computer system 200 displays, on touchscreen display 202, user interface 410, which includes a representation 412 of a physical space and a representation 414 of a target device located within the physical space.
  • the “target” device is the device for which navigation is configured using the interfaces described with respect to FIGS. 4A-4G.
  • the target device corresponding to the representation of the respective device is a particular vehicle corresponding to a particular unique identifier.
  • the target device corresponding to the representation of the device is a respective device (e.g., a smartphone, a laptop, and/or a wearable device) being used with the navigation application.
• In some embodiments, computer system 200 receives data (e.g., images and/or video) representing a physical environment (e.g., captured by one or more other devices, or captured by computer system 200 via imaging and/or scanning equipment such as one or more cameras and one or more depth sensors).
• For example, a user of computer system 200 can use one or more connected cameras, lidar, radar, and/or other depth sensors to scan their garage and/or create (or cause creation of) representation 412, a digital multidimensional (e.g., 3-D or 2-D) representation of their garage.
  • representation 412 includes objects 412a and 412b, representing objects in the physical space that occupy portions of floor space 412c.
• Representation 412 also includes floor space 412c, representing an area of the physical space to which a target device can be configured to navigate (e.g., if no other objects or devices occupy such space).
  • user interface 410 is an interface of an application (e.g., a navigation application, a device configuration application) or of an operating system of the device (e.g., a lock screen interface).
  • a user of computer system 200 scans their garage without a target device located inside of it, and subsequently views their respective representations 412 (garage) and 414 (target device). For example, a user can use computer system 200 to capture one or more images and/or depth measurements from within the garage, which are then used to create representation 412 (e.g., stitched together into a model).
  • computer system 200 displays a representation of the garage (e.g., representation 412).
  • representation 412 is an image of the garage that is a composite of one or more images (e.g., taken during the scan).
  • computer system 200 can display representation 412 of the garage.
  • the user interface (representation 412) might not initially have a representation of the target device within it.
  • a user of computer system 200 scans the target device in a separate scan (e.g., a second scan).
  • a user of computer system 200 selects (e.g., via user input received by computer system 200) a representation of the target device (e.g., selects by providing identifying information and/or dimensions).
• In some embodiments, the target device is assigned to a particular location (e.g., area) within the garage (e.g., a location that is determined to be an optimal location based on the respective dimensions of the garage and the target device). It should be recognized that other embodiments include the user of computer system 200 scanning their garage with the target device inside of it.
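One way such a default assignment could work is sketched below in Swift; the centering heuristic, margin value, and type names are assumptions for illustration, not the disclosed method.

```swift
// Hedged sketch: choosing a default target-device position from the
// respective dimensions of the scanned space and the device.
struct Size2D { var width: Double; var depth: Double }
struct Point2D { var x: Double; var y: Double }

/// Returns a centered placement if the device fits (with margin), otherwise nil.
func defaultPlacement(garage: Size2D, device: Size2D, margin: Double = 0.5) -> Point2D? {
    guard device.width + 2 * margin <= garage.width,
          device.depth + 2 * margin <= garage.depth else { return nil }
    // Center the device footprint on the floor space.
    return Point2D(x: (garage.width - device.width) / 2,
                   y: (garage.depth - device.depth) / 2)
}

let spot = defaultPlacement(garage: Size2D(width: 6.0, depth: 6.5),
                            device: Size2D(width: 1.9, depth: 4.8))
if let spot = spot {
    print(spot)                       // Point2D(x: 2.05, y: 0.85)
} else {
    print("device does not fit")
}
```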
• FIG. 4A depicts representation 414 at an example first position (within representation 412).
  • a user of computer system 200 desires to configure a different position of the target device represented by representation 414 within the garage represented by representation 412, so that future navigation of the target device will navigate to the configured different (e.g., second) position.
  • the user wants to instruct computer system 200 to navigate to the location “Home” while driving their car (e.g., represented by representation 414) and cause a navigation function to remember a precise navigation location configured using user interface 410 (and subsequently navigate representation 414 to the configured location).
  • Techniques for such user interfaces are described below.
• Computer system 200 receives user input 416 (e.g., a tap, a tap-and-hold (e.g., with movement), or a hard press) on representation 414.
  • user input 416 includes movement to the left (e.g., a tap-and-hold input, followed by a drag to the left).
• In some embodiments, user interface 410 does not allow invalid movement of a target device representation. For example, because representation 414 is already as close as permitted to the barrier (e.g., wall) of representation 412, representation 414 does not move further to the left.
• In some embodiments, an indication is provided that indicates an invalid movement (e.g., to the left in FIG. 4A), such as forgoing displaying the instructed movement (e.g., stopping representation 414 at a safe distance from the left wall) and/or outputting one or more of a sound, an audible message, a haptic, or a visual notification.
• At FIG. 4B, computer system 200 receives user input 418 (e.g., a tap, a tap-and-hold (e.g., with movement), or a hard press) on representation 414.
  • user input 418 includes movement to the right (e.g., a tap-and-hold, followed by a drag to the right).
  • representation 414 can move to the right (e.g., be dragged by user input 418) because it is a valid movement.
  • FIG. 4C illustrates computer system 200 in response to receiving user input 418 in accordance with some embodiments.
  • computer system 200 displays representation 414 shifted to the right with respect to floor space 412c in representation 412.
  • the representation of object 412b establishes a rightward barrier for placement of representation 414 within representation 412.
• For example, object 412b can represent shelving that a target device, represented by 414, cannot occupy; thus, user interface 410 and representation 412 will not allow representation 414 to be placed so that it occupies the same space as object 412b.
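The following minimal Swift sketch illustrates one way a dragged position could be clamped to the floor space and rejected when it would overlap an object such as 412b; the rectangle model and overlap test are illustrative assumptions.

```swift
// Hedged sketch, assuming axis-aligned rectangles: a dragged device
// representation is clamped to the floor and rejected on obstacle overlap.
struct Rect {
    var x, y, width, height: Double
    func intersects(_ other: Rect) -> Bool {
        x < other.x + other.width && other.x < x + width &&
        y < other.y + other.height && other.y < y + height
    }
}

/// Clamp a proposed position to the floor, then validate against obstacles.
/// Returns nil for an invalid movement (caller may then emit a sound/haptic).
func validatedPosition(proposed: Rect, floor: Rect, obstacles: [Rect]) -> Rect? {
    var r = proposed
    r.x = min(max(r.x, floor.x), floor.x + floor.width - r.width)
    r.y = min(max(r.y, floor.y), floor.y + floor.height - r.height)
    return obstacles.contains(where: r.intersects) ? nil : r
}

let floor = Rect(x: 0, y: 0, width: 10, height: 8)
let shelving = Rect(x: 8, y: 0, width: 2, height: 8)
// Dragging far left is clamped at the wall rather than moving off the floor.
let result = validatedPosition(proposed: Rect(x: -3, y: 2, width: 2, height: 4),
                               floor: floor, obstacles: [shelving])
print(result != nil) // true: clamped to x = 0 and clear of the shelving
```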
• In some embodiments, user interface 410 includes one or more affordances for accepting (e.g., configuring and/or saving) a precise navigation position represented by representation 414 and/or for not accepting the precise navigation position. For example, in FIG. 4C, user interface 410 includes accept affordance 410a (for accepting the current position of 414 as the precise navigation position for the target device represented by representation 414) and cancel affordance 410b (for rejecting the current position of 414 as the precise navigation position for the target device represented by representation 414).
  • selection of cancel affordance 410b causes user interface 410 to cease to be displayed.
  • selection of cancel affordance 410b causes the target device to be configured to navigate to a precise navigation position that was configured prior to displaying user interface 410 (e.g., prior to beginning a process for editing the precise navigation position).
• Computer system 200 receives touch user input 420 (e.g., a tap, a tap-and-hold, or a hard press) on accept affordance 410a.
• In response to receiving touch user input 420, computer system 200 configures a precise navigation position to be associated with representation 414 at the “second” position, which is shown in FIG. 4C shifted to the right with respect to floor space 412c in representation 412.
  • FIG. 4D illustrates navigation user interface 422 in accordance with some embodiments.
  • Navigation user interface 422 includes map portion 422a (representing a geographic area), indicator 422b (representing a current location of computer system 200 within map portion 422a), and home affordance 422c (representing a saved/configured precise navigation position at the user’s configured “Home” location).
  • the user of computer system 200 desires to navigate their vehicle home to the configured precise navigation location (represented by home affordance 422c). Exemplary techniques for performing such actions in accordance with some embodiments are now described.
• Computer system 200 receives touch user input 423 (e.g., a tap, a tap-and-hold, or a hard press) on home affordance 422c.
  • FIG. 4E illustrates computer system 200 in response to receiving touch user input 423 in accordance with some embodiments.
  • computer system 200 displays navigation user interface 422 as shown in FIG. 4E.
  • the appearance of navigation user interface 422 has changed because the navigation application is performing an active navigation instruction process.
  • navigation user interface 422 includes map portion 422a and indicator 422b (e.g., updated to an arrow to indicate current position and direction of travel), as well as navigation instruction field 422d (which includes a current navigation instruction (e.g., “Go Straight”)).
  • FIG. 4F illustrates navigation user interface 422 arranged in a precision navigation view, in accordance with some embodiments, and includes representation 412 of the physical space of the user’s garage.
• In FIG. 4F, navigation user interface 422 includes map portion 422a and indicator 422b (e.g., optionally updated to include an indication of the current vehicle’s dimensions (e.g., the rectangular shaped portion) and direction of travel (e.g., the arrow)). Also, in FIG. 4F, navigation user interface 422 includes an updated navigation instruction field 422d, instructing that navigation should proceed to the right (“Proceed to right”), and (optionally) a precision navigation target 424.
  • precision navigation target 424 indicates where the user of the navigation user interface should place the vehicle or object being navigated (e.g., park the car).
  • precision navigation target 424 is an area or shape that corresponds to the scanned representation 414 of the vehicle (from FIGS. 4A-4C).
• In some embodiments, precision navigation target 424 can be any suitable indicator for indicating a location (e.g., a point or shape in space within representation 412, which may or may not correspond to a point on 422b or 414 that should be correspondingly aligned by moving the represented vehicle (e.g., guiding the user to line up the two points)).
  • FIG. 4G illustrates navigation completion notification 432 in accordance with some embodiments.
  • Computer system 200 displays navigation completion notification 432 in response to a determination (e.g., after detecting and/or determining, or by receiving an indication from one or more other devices) that the vehicle (e.g., represented by representations 414 and/or 422b) has reached the precision navigation target 424 (e.g., is sufficiently within or near precision navigation target 424, according to some criteria such as distance between points, area of vehicle within precision navigation target 424, or any other suitable criteria).
• Navigation completion notification 432 indicates arrival at the location selected for navigation (“Home” selected in FIG. 4D), where it states: “Arrived Home.” As shown in FIG. 4G, computer system 200 displays the navigation completion notification on lock screen interface 430 and ceases displaying a navigation interface (e.g., 410 and/or 422).
• For example, computer system 200 automatically ceases displaying an interface with a full map, representations of a physical space or object(s), and/or navigation instructions, and in its place displays a lock screen (or home screen, or other default or idle state screen) interface with a notification that the journey is complete.
• In some embodiments, successful completion of the precise navigation causes the target device to change operation from a first manner (e.g., powered on and/or in a particular active state) to a second manner (e.g., powered off, or in an idle/inactive/low-power state).
  • computer system 200 can transmit a message or command that causes the target device to change operation to the second manner of operation.
  • the target device automatically enters the second manner of operation upon reaching the configured precise navigation location.
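The arrival determination and the resulting change in the target device's manner of operation can be illustrated with the following hedged Swift sketch; the 90% overlap criterion and the PowerState type are assumptions chosen for illustration, not the disclosed criteria.

```swift
// Hedged sketch of arrival detection and the follow-on state change.
struct Footprint { var x, y, width, height: Double }

// Fraction of the vehicle footprint that lies within the target area.
func overlapFraction(vehicle: Footprint, target: Footprint) -> Double {
    let w = max(0, min(vehicle.x + vehicle.width, target.x + target.width) - max(vehicle.x, target.x))
    let h = max(0, min(vehicle.y + vehicle.height, target.y + target.height) - max(vehicle.y, target.y))
    return (w * h) / (vehicle.width * vehicle.height)
}

enum PowerState { case active, idle }

// Treat the vehicle as arrived once most of it is within the target,
// then transition to the second manner of operation (e.g., low power).
func checkArrival(vehicle: Footprint, target: Footprint, threshold: Double = 0.9) -> PowerState {
    if overlapFraction(vehicle: vehicle, target: target) >= threshold {
        print("Arrived Home")  // e.g., surface notification 432 on the lock screen
        return .idle
    }
    return .active
}

print(checkArrival(vehicle: Footprint(x: 3.0, y: 1.0, width: 2.0, height: 4.8),
                   target: Footprint(x: 2.9, y: 0.9, width: 2.2, height: 5.0)))
// Arrived Home
// idle
```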
  • the second device is used during subsequent navigation of the first device (e.g., target device).
  • computer system 200 can be a smartphone that detects it is being used with the user’s vehicle (e.g., based on connectivity with the vehicle, such as via Bluetooth or a wired connection), and intelligently knows to use the configured precise location for that vehicle (or any vehicle, depending on configuration settings).
  • computer system 200 is used to navigate, as illustrated by the examples in FIGS. 4D-4G.
  • the second device (e.g., 200) is not used during subsequent navigation of the first device (e.g., target device).
  • the first device navigates itself to the configured precise location (e.g., in response to receiving an instruction to do so (e.g., from user input and/or from another device)).
• For example, computer system 200 can be a smartphone that is used to configure the precise location, but the first (e.g., target) device is a device with the ability to move itself (e.g., using wheels, tracks, and/or rotors) and perform some level of spatial location and mapping (e.g., alone or assisted by other devices).
  • a target device that is an autonomous robotic lawnmower can return to a particular place in the garage (e.g., in a safe location that will facilitate charging (e.g., near a power outlet)).
  • the lawnmower can use one or more onboard functions that facilitate location awareness (e.g., GPS, camera, radar, spatial maps, etc.) to navigate to the configured location without needing further intervention by a user or computer system 200 (e.g., to display step- by-step instructions).
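As a rough illustration of such self-navigation, the following Swift sketch steps a device toward a configured location using a simple proportional heading controller; the controller, gains, and tolerances are assumptions, not the disclosed navigation method.

```swift
import Foundation

// Illustrative sketch of a self-navigating device (e.g., a robotic lawnmower)
// stepping toward a configured location using its own position estimate.
struct Pose { var x, y, heading: Double }  // heading in radians

func step(_ pose: inout Pose, toward goal: (x: Double, y: Double), dt: Double = 0.1) {
    let dist = hypot(goal.x - pose.x, goal.y - pose.y)
    let desired = atan2(goal.y - pose.y, goal.x - pose.x)
    let err = atan2(sin(desired - pose.heading), cos(desired - pose.heading))
    pose.heading += min(max(2.0 * err * dt, -0.5), 0.5)  // bounded turn toward goal
    let speed = min(0.5, dist)                           // slow down on approach
    pose.x += speed * cos(pose.heading) * dt
    pose.y += speed * sin(pose.heading) * dt
}

var mower = Pose(x: 0, y: 0, heading: 0)
let dock = (x: 5.0, y: 3.0)  // the configured precise location
var steps = 0
while hypot(dock.x - mower.x, dock.y - mower.y) > 0.1, steps < 10_000 {
    step(&mower, toward: dock)
    steps += 1
}
print("reached configured location in \(steps) steps")
```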
  • FIG. 5 is a flow diagram illustrating a method for configuring a device to navigate to a specific location using a computer system in accordance with some embodiments.
  • Process 500 is performed at a computer system (e.g., system 100).
  • the computer system is in communication with a display component and one or more input devices.
  • Some operations in process 500 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • process 500 provides an intuitive way for configuring a device to navigate to a specific location.
  • the method reduces the cognitive burden on a user for configuring a device to navigate to a specific location, thereby creating a more efficient human-machine interface.
  • enabling a user to configure a device to navigate to a specific location faster and more efficiently conserves power and increases the time between battery charges.
  • process 500 is performed at a computer system (e.g., 200) that is in communication with a display component (e.g., 202) (e.g., a display screen and/or a touch-sensitive display) and one or more input devices (e.g., 202) (e.g., a physical input mechanism, a camera, a touch-sensitive display, a microphone, and/or a button).
  • the computer system is a watch, a phone, a tablet, a processor, a head-mounted display (HMD) device, and/or a personal computing device.
  • the computer system is in communication with one or more output devices (e.g., a display screen, a touch-sensitive display, a haptic output device, and/or a speaker).
• The computer system displays (502), via the display component, a representation (e.g., 414) (e.g., a graphical representation, a line, a path, a textual representation, and/or a symbolic representation) of a respective device (e.g., the device represented by 414) (e.g., a fitness tracking device, a watch, a phone, a tablet, a processor, a head-mounted display (HMD) device, a vehicle, and/or a personal computing device) at a first position (e.g., the position of 414 in FIG. 4A) within a representation (e.g., 412) of a location (e.g., the physical space described with respect to FIG. 4A), wherein the representation of the location is generated based on one or more images (e.g., radar, lidar, and/or optical images) of the location.
  • the computer system is in communication with one or more cameras.
  • the one or more cameras are attached to and/or within a housing of the computer system.
  • the computer system via one or more cameras in communication with the computer system, captures the one or more images.
  • the computer system detects, via the one or more input devices, input corresponding to selection of a user-interface element; and in response to detecting the input, initiates a scanning process (e.g., captures, via one or more cameras in communication with the one or more input devices, the one or more images). In such examples, the scanning process is initiated before displaying the vehicle representation.
  • the computer system is the respective device. In some embodiments, the computer system is different from the respective device.
• The computer system receives (504), via the one or more input devices, a set of one or more inputs (e.g., 416 and/or 418), wherein the set of one or more inputs includes an input (e.g., a dragging input and/or a non-dragging input (e.g., a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) corresponding to a request to move the representation of the respective device from the first position (e.g., the position of 414 in FIG. 4A and/or 4B) to a second position (e.g., the position of 414 in FIG. 4C) within the representation of the location.
  • the input corresponding to the request is received (e.g., and/or detected) while displaying the representation of the location and/or the representation of the respective device.
• In response to (506) (e.g., based on and/or in conjunction with) receiving the set of one or more inputs (e.g., 416 and/or 418) (e.g., the input corresponding to the request) and in accordance with a determination that a first set of criteria are met (e.g., a valid movement, as described with respect to FIG. 4B), the computer system displays (508), via the display component, the representation (e.g., 414) of the respective device (e.g., the device represented by 414) at the second position (e.g., the position of 414 in FIG. 4C).
  • the first set of criteria includes a criterion that is met when the second position is determined to be a valid position. In some embodiments, the first set of criteria includes a criterion that is met when the second position is determined to be navigable to by the respective device.
• In response to receiving the set of one or more inputs and in accordance with the determination that the first set of criteria are met, the computer system configures (510) the respective device (e.g., the device represented by 414) in a first manner, such that the respective device is caused to be navigated to a specific location (e.g., 424) corresponding to the second position (e.g., the position of 414 in FIG. 4C) when the respective device is caused to be navigated to the location (e.g., the location represented by 412) (e.g., without being navigated to a specific location corresponding to the first position when the respective device is caused to be navigated to the location).
  • the representation of the respective device is displayed at the second position in response to a first input of the set of one or more inputs and a navigation application is configured to navigate the respective device to the second position in response to a second input (e.g., an input corresponding to accepting the representation of the respective device at the second position) detected after displaying the representation of the respective device at the second position.
  • the respective device is configured concurrently with displaying the representation of the respective device at the second position.
  • the respective device corresponding to the representation of the respective device is a particular vehicle corresponding to a particular unique identifier.
  • the respective device corresponding to the representation of the respective device is a respective device being used with the navigation application.
• In some embodiments, the respective device is caused to be navigated to a specific location corresponding to the first position when the respective device is caused to be navigated to the location before receiving the set of one or more inputs. Displaying the representation of the respective device at the first position within the representation of the location after capture of the one or more images of the location provides the user with a user interface to visualize the location with reference to the respective device, thereby providing improved visual feedback to the user.
• Allowing the computer system to receive an input corresponding to a request to move the representation of the respective device from the first position to the second position within the representation of the location provides the user with control over where to place the respective device within the location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or providing improved visual feedback to the user.
  • Displaying the respective device at the second location and configuring the respective device such that the respective device is caused to be navigated to the specific location corresponding to the second position when the respective device is caused to be navigated to the location provides the user with control with respect to navigating the respective device, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, performing an operation when a set of conditions has been met without requiring further user input, and/or providing improved visual feedback to the user.
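The core flow of process 500 (display, move request, criteria check, configuration) can be sketched as follows in Swift; the class and its methods are hypothetical stand-ins for the described behavior, not an actual API.

```swift
// Minimal sketch of the process-500 flow under stated assumptions.
struct Position { var x, y: Double }

final class NavigationConfigurator {
    private(set) var displayedPosition: Position
    private(set) var configuredTarget: Position?

    init(firstPosition: Position) {
        displayedPosition = firstPosition
        configuredTarget = firstPosition  // navigate to the first position by default
    }

    /// First set of criteria: here, simply that the position is valid/navigable.
    func criteriaMet(for position: Position, isNavigable: (Position) -> Bool) -> Bool {
        isNavigable(position)
    }

    /// (506)-(510): on a move request, either display + configure the second
    /// position, or forgo configuring and keep the first position.
    func handleMoveRequest(to second: Position, isNavigable: (Position) -> Bool) {
        if criteriaMet(for: second, isNavigable: isNavigable) {
            displayedPosition = second    // (508) display at the second position
            configuredTarget = second     // (510) configure the navigation target
        } else {
            print("invalid position; keeping \(displayedPosition)")
        }
    }
}

let configurator = NavigationConfigurator(firstPosition: Position(x: 1, y: 1))
configurator.handleMoveRequest(to: Position(x: 4, y: 1)) { $0.x < 5 } // navigable
print(configurator.configuredTarget!) // Position(x: 4.0, y: 1.0)
```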
• In some embodiments, the respective device (e.g., the device represented by 414) is a different type (e.g., a phone, a watch, a speaker, a device that can move without assistance (e.g., a device with a movement mechanism, such as a wheel, pulley, axle, engine, and/or motor), and/or a device that cannot move without assistance) of device than the computer system.
  • the respective device includes one or more capabilities that the computer system does not include.
  • the computer system includes one or more capabilities that the respective device does not include.
  • the computer system is in communication with a component that the respective device is not in communication with.
• In some embodiments, the respective device is in communication with a component that the computer system is not in communication with. Having the respective device be a different type of device than the computer system allows the user to use different types of devices to configure the respective device, thereby reducing friction when configuring the respective device and/or allowing personal devices to configure other types of devices.
• In some embodiments, before receiving the set of one or more inputs (e.g., 416 and/or 418), the computer system configures the respective device (e.g., the device represented by 414), such that the respective device is caused to be navigated to a location (e.g., a particular and/or specific location) corresponding to the first position in conjunction with (e.g., when, before, immediately before, after, and/or immediately after) the respective device being caused to be navigated to the location.
  • Configuring the respective device before receiving the set of one or more inputs such that the respective device is caused to be navigated to the location corresponding to the first position provides the user with control with respect to navigating the respective device, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
• In some embodiments, in response to (e.g., based on and/or in conjunction with) receiving the set of one or more inputs (e.g., 416 and/or 418) (e.g., the input corresponding to the request) (e.g., one or more dragging inputs or, in some examples, one or more non-dragging inputs (e.g., a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)), the computer system configures the respective device (e.g., the device represented by 414) in a second manner, such that the respective device transitions to a reduced power state (e.g., a low-power or off state, as described with respect to FIG. 4G) when at the location corresponding to the second position.
  • the second manner is different from the first manner.
• Configuring the respective device such that the respective device transitions to the reduced power state when at the location corresponding to the second position provides the user with control of operations performed by the respective device, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
• In some embodiments, the computer system displays, via the display component, a notification (e.g., 432) that the respective device has reached the location.
  • the notification includes an indication that the respective device has reached the specific location corresponding to the second position.
  • Displaying the notification that the respective device has reached the location when the respective device has arrived at the specific location corresponding to the second position provides the user with information with respect to a state of the respective device, thereby reducing the number of inputs needed to perform an operation, performing an operation when a set of conditions has been met without requiring further user input, and/or providing improved visual feedback to the user.
• In some embodiments, in response to (e.g., based on and/or in conjunction with) receiving the set of one or more inputs (e.g., 416 and/or 418) (e.g., the input corresponding to the request) and in accordance with a determination that the first set of criteria are not met, the computer system forgoes configuring (e.g., as described above with respect to user input 416 of FIG. 4A) the respective device in the first manner (and, in some examples, in the second manner). In some embodiments, in response to receiving the set of one or more inputs and in accordance with the determination that the first set of criteria are not met, the computer system forgoes displaying the representation of the respective device at the second position.
• In some embodiments, in response to receiving the set of one or more inputs and in accordance with the determination that the first set of criteria are not met, the computer system displays, via the display component, an indication that the second position is not a valid position. In some embodiments, in response to receiving the set of one or more inputs and in accordance with the determination that the first set of criteria are not met, the computer system maintains display of the representation of the respective device at the first position. In some embodiments, the first set of criteria are not met when the specific location corresponding to the second position is determined not to be a safe and/or possible location for navigation.
• Forgoing configuring the respective device in the first manner when the first set of criteria are not met prevents the user from configuring the respective device to navigate to an arbitrary location and instead requires that a location meet the first set of criteria, thereby reducing the number of inputs needed to perform an operation, performing an operation when a set of conditions has been met without requiring further user input, and/or providing improved visual feedback to the user.
• In some embodiments, before displaying the representation (e.g., 412 of FIG. 4A) of the location, the computer system receives a request to capture an image (e.g., as described above with respect to FIG. 4A).
  • the computer system is in communication with one or more cameras, and the request to capture the image is a request to capture the image via the one or more cameras.
• In some embodiments, in response to receiving the request, the computer system causes capture (e.g., as described above with respect to FIG. 4A) (e.g., and/or initiates a scan), via a camera in communication with the computer system, of a first image, wherein the one or more images include the first image.
• In some embodiments, in response to receiving the request, the computer system captures a plurality of images that includes the first image.
• In some embodiments, receiving the request to capture the image includes detecting an input (e.g., a tap input or, in some examples, a non-tap input (e.g., a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) directed to a user interface displayed via the computer system.
• Capturing the first image that is used to generate the representation using the camera that is in communication with the computer system enables the user to ensure that the representation is for the right location, thereby reducing the number of inputs needed to perform an operation and/or providing improved visual feedback to the user.
• Note that process 300 optionally includes one or more of the characteristics of the various methods described above with reference to process 500. For example, the respective device of process 500 can be the first device of process 300. For brevity, these details are not repeated below.
  • FIGS. 6A-6F illustrate exemplary diagrams for navigating a movable computer system to a target destination in accordance with some embodiments.
  • the diagrams in these figures are used to illustrate the processes described below, including the processes in FIGS. 9, 10A-10B, and 12.
  • one or more of the diagrams of FIGS. 6A-6F are displayed by a display of movable computer system 600 and serve as a visual aid to assist a user in navigating to the target destination. In some embodiments, one or more of the diagrams of FIGS. 6A-6F are representative of different positions of movable computer system 600 while navigating to the target destination and are not displayed by a display of movable computer system 600.
• FIGS. 6A-6D illustrate movable computer system 600 and set of parking spots 606.
• In some embodiments, movable computer system 600 is a vehicle, such as an automobile (e.g., a sedan, coupe, scooter, or truck).
  • the following discussion is equally applicable to other types of movable computer systems, such as a trailer, a skateboard, an airplane, and/or a boat.
  • movable computer system 600 includes (1) a back set of wheels (e.g., one or more wheels) that is coupled to rear half 602 of movable computer system 600 and (2) a front set of wheels (e.g., one or more wheels) that is coupled to front half 604 of movable computer system 600.
  • the back set of wheels includes two or more wheels.
  • the front set of wheels includes two or more wheels.
  • movable computer system 600 is configured for steering with the back set of wheels and the front set of wheels (e.g., four-wheel steering when two wheels are coupled to the back of movable computer system 600 and two wheels are coupled to the front of movable computer system 600).
  • the back set of wheels and/or the front set of wheels are configured to be independently controlled. In such embodiments, a direction of the back set of wheels and/or the front set of wheels can be changed (e.g., rotated) independently. In some embodiments, the back set of wheels can be steered together and the front set of wheels can be steered together such that steering of the back set of wheels is independent of steering the front set of wheels. In some embodiments, each wheel in the back set of wheels can be steered independently and each wheel in the front set of wheels can be steered independently.
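A minimal Swift model of wheel sets that steer together within a set but independently between sets might look like the following; the types and fields are illustrative assumptions, not part of the disclosed techniques.

```swift
// Hypothetical model of front and rear wheel sets whose steering can be
// coupled per set but independent between sets (four-wheel steering).
struct WheelSet {
    var angles: [Double]        // one steering angle per wheel, radians
    var userControllable: Bool  // whether the user may steer this set

    /// Steer every wheel in the set together.
    mutating func steerTogether(to angle: Double) {
        angles = angles.map { _ in angle }
    }
}

var front = WheelSet(angles: [0, 0], userControllable: true)
var back  = WheelSet(angles: [0, 0], userControllable: false)

// The user steers the front set; the system steers the back set independently.
front.steerTogether(to: 0.3)
back.steerTogether(to: -0.1)
print(front.angles, back.angles) // [0.3, 0.3] [-0.1, -0.1]
```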
  • set of parking spots 606 includes target parking spot 606b.
• Target parking spot 606b is a parking spot that has been identified (e.g., by movable computer system 600 and/or by a user of movable computer system 600) as the target destination of movable computer system 600. That is, in FIGS. 6A-6D, movable computer system 600 is navigating to target parking spot 606b. In some embodiments, through FIGS. 6A-6D, movable computer system 600 causes the back set of wheels to converge on a single angle as movable computer system 600 navigates to target parking spot 606b (e.g., an angle that is parallel to target parking spot 606b, such as illustrated by arrow 608fl in FIG. 6E).
• In some embodiments, target parking spot 606b is identified as the target destination by a user (e.g., an owner (e.g., inside and/or outside of movable computer system 600), a driver, and/or a passenger) of movable computer system 600.
• For example, the user can identify target parking spot 606b as the target destination by (1) gazing at target parking spot 606b for a predetermined amount of time (e.g., 1-30 seconds), (2) pointing movable computer system 600 towards target parking spot 606b, (3) providing input on a representation of target parking spot 606b, and/or (4) inputting a location (e.g., GPS coordinates and/or an address) that corresponds to and/or includes target parking spot 606b into a navigation application installed on movable computer system 600 and/or another computer system (e.g., a personal device of the user) in communication with movable computer system 600.
• In some embodiments, target parking spot 606b is identified as the target destination in response to movable computer system 600 and/or another computer system (e.g., the personal device of the user) detecting an input (e.g., a voice command, a tap input, a hardware button press, and/or an air gesture).
  • target parking spot 606b is identified as the target destination when a determination is made that a set of wheels (e.g., the front set of wheels and/or the back set of wheels) of movable computer system 600 is rotated by the user to an angle towards target parking spot 606b.
  • target parking spot 606b is identified as the target destination when a determination is made that a set of wheels (e.g., the front set of wheels and/or the back set of wheels) of movable computer system 600 is rotated by the user to an angle away from target parking spot 606b (e.g., while movable computer system 600 is within a predefined distance from target parking spot 606b).
  • target parking spot 606b is identified as the target destination via one or more sensors of movable computer system 600.
  • one or more cameras of movable computer system 600 can identify that target parking spot 606b is vacant and/or closest (e.g., when movable computer system 600 determines to identify a parking spot, such as in response to detecting input corresponding to a request to park) and thus identify target parking spot 606b as the target destination.
  • one or more depth sensors of movable computer system 600 can identify that a size of target parking spot 606b is large enough to accommodate movable computer system 600 and thus identify target parking spot 606b as the target destination.
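One plausible reading of this sensor-based identification is sketched below in Swift; the ParkingSpot fields and the closest-vacant-fitting heuristic are assumptions for illustration, not the disclosed method.

```swift
// Hedged sketch: pick a target spot that camera/depth data report as vacant
// and large enough for the vehicle, preferring the closest one.
struct ParkingSpot {
    var id: String
    var isVacant: Bool   // e.g., reported by one or more cameras
    var width: Double    // e.g., measured by one or more depth sensors
    var length: Double
    var distance: Double // from the movable computer system
}

func identifyTarget(spots: [ParkingSpot], vehicleWidth: Double,
                    vehicleLength: Double) -> ParkingSpot? {
    spots
        .filter { $0.isVacant && $0.width >= vehicleWidth && $0.length >= vehicleLength }
        .min { $0.distance < $1.distance }
}

let spots = [
    ParkingSpot(id: "606a", isVacant: false, width: 2.5, length: 5.5, distance: 4),
    ParkingSpot(id: "606b", isVacant: true,  width: 2.6, length: 5.5, distance: 6),
]
print(identifyTarget(spots: spots, vehicleWidth: 2.0, vehicleLength: 4.9)?.id ?? "none")
// 606b
```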
• In some embodiments, movable computer system 600 is configurable to operate in one of three different modes as movable computer system 600 approaches target parking spot 606b. While movable computer system 600 is in a first mode (e.g., a manual mode), both the back set of wheels and the front set of wheels are configured to be controlled by the user of movable computer system 600. While movable computer system 600 is in a second mode (e.g., a semi-automatic mode), the back set of wheels or the front set of wheels is configured to be controlled by the user while the other set of wheels is configured to not be controlled by the user (e.g., the other set of wheels is configured to be controlled by movable computer system 600 and not the user).
  • movable computer system 600 can change which set of wheels is being controlled by the user and which set of wheels is not being controlled by the user.
  • the change for which set of wheels is being controlled by the user is based on positioning of movable computer system 600 (e.g., where movable computer 600 is located and/or oriented) and/or positioning of movable computer system 600 relative to a target destination (e.g., how close and/or in what direction the target destination is relative to movable computer system 600).
• For example, as movable computer system 600 approaches the target destination, the front set of wheels and/or the back set of wheels can transition from being configured to be controlled by the user to not being controlled by the user, or, if movable computer system 600 enters a densely occupied area, the front set of wheels and/or the back set of wheels can transition from being configured to not be controlled by the user to being configured to be controlled by the user.
• While movable computer system 600 is in a third mode (e.g., an automatic mode), the back set of wheels and the front set of wheels are configured to not be controlled by the user (e.g., the back set of wheels and the front set of wheels are configured to be controlled by movable computer system 600 and not the user).
• In some embodiments, movable computer system 600 transitions between different modes as movable computer system 600 approaches target parking spot 606b. For example, movable computer system 600 can transition from the first mode to the third mode or the second mode once movable computer system 600 is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from target parking spot 606b. In some embodiments, movable computer system 600 transitions to a mode (e.g., the first mode, the second mode, or the third mode) based on a target destination of the movable object. For example, depending on the target destination, movable computer system 600 can transition to the first mode, or, if the target destination is in an open field, movable computer system 600 can transition to the third mode.
  • movable computer system 600 transitions to a mode based on one or more conditions (e.g., wind, rain, and/or brightness) of a physical environment. For example, if the physical environment is experiencing heavy rain, movable computer system 600 can transition to the first mode, or if the physical environment is experiencing an above average amount of brightness, movable computer system 600 can transition to the third mode.
  • movable computer system 600 transitions to a mode based on data (e.g., amount of data, and/or type of data) about a physical environment that is accessible to movable computer system 600. For example, if movable computer system 600 does not have access to data regarding a physical environment, movable computer system 600 can transition to the first mode of movable computer system 600, or if movable computer system 600 has access to data regarding a physical environment, movable computer system 600 can transition to the third mode of movable computer system 600. In some embodiments, movable computer system 600 transitions to a mode of movable computer system 600 in response to movable computer system 600 detecting an input.
• For example, in response to detecting an input, movable computer system 600 can transition to the first mode or the second mode.
  • movable computer system 600 can transition to a mode in response to detecting an input that corresponds to the depression of a physical input mechanism of movable computer system 600 and/or in response to movable computer system 600 detecting a change in the conditions of the physical environment (e.g., change in brightness level, noise level, and/or amount of precipitation in the physical environment).
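The mode transitions described above can be illustrated with a small Swift decision sketch; the thresholds and context fields are assumptions for illustration, not the disclosed rules.

```swift
// Illustrative decision sketch for the three modes described above.
enum DriveMode { case manual, semiAutomatic, automatic }

struct Context {
    var distanceToTarget: Double  // feet
    var heavyRain: Bool           // a condition of the physical environment
    var hasEnvironmentData: Bool  // whether environment data is accessible
}

func selectMode(_ ctx: Context) -> DriveMode {
    if ctx.heavyRain || !ctx.hasEnvironmentData {
        return .manual        // user controls both wheel sets
    }
    if ctx.distanceToTarget <= 50 {
        return .automatic     // system controls both wheel sets
    }
    return .semiAutomatic     // user controls one set; the system controls the other
}

print(selectMode(Context(distanceToTarget: 30, heavyRain: false, hasEnvironmentData: true)))
// automatic
```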
  • a speed of movable computer system 600 can decrease when a hazard (e.g., pothole and/or construction site) is detected.
  • the speed of movable computer system 600 can decrease as movable computer system 600 gets within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from target parking spot 606b.
  • a direction of travel of movable computer system 600 can change when movable computer system 600 detects an object in a path of movable computer system 600.
  • the positioning of the back set of wheels is changed in response to detection of a current path of movable computer system 600.
  • the back set of wheels can be controlled to change the current path of movable computer system 600 when it is determined that the current path is incorrect.
  • the positioning of the back set of wheels is changed based on detection of weather conditions in the physical environment (e.g., precipitation, a wind level, a noise level, and/or a brightness level of the physical environment).
  • the back set of wheels is configured to not be controlled by the user when a determination is made that movable computer system 600 is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) of target parking spot 606b. In some embodiments, the back set of wheels is configured to not be controlled by the user when a determination is made that the back set of wheels is at a predetermined angle with respect to target parking spot 606b.
• In some embodiments, prior to movable computer system 600 navigating to the target destination, being within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from the target destination, and/or detecting input requesting for movable computer system 600 to control at least one movement component, the user is able to control both the front set of wheels and the back set of wheels.
• In some embodiments, prior to movable computer system 600 navigating to the target destination, being within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from the target destination, and/or detecting input requesting for control of at least one movement component, the user is not able to control the front set of wheels and the back set of wheels (e.g., the front set of wheels and the back set of wheels are being automatically controlled by movable computer system 600, such as without requiring user input).
• In some embodiments, as movable computer system 600 navigates to the target destination, is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from the target destination, and/or detects input requesting for movable computer system 600 to control at least one movement component, the user of movable computer system 600 controls the position of both the back set of wheels and the front set of wheels.
• In some embodiments, as movable computer system 600 navigates to the target destination, is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from the target destination, and/or detects input requesting for control of at least one movement component, the user is not able to control the position of the front set of wheels and the back set of wheels.
• In some embodiments, the front set of wheels or the back set of wheels is configured to be controlled by the user based on the direction of travel of movable computer system 600. For example, if movable computer system 600 is moving forward (e.g., as shown in FIG. 6A), the front set of wheels can be configured to be controlled by the user, or, if movable computer system 600 is moving in a reverse direction (e.g., the opposite of the direction of direction indicator 620 in FIG. 6A), the back set of wheels can be configured to be controlled by the user.
• In some embodiments, the front set of wheels or the back set of wheels is configured to be controlled by the user based on the direction that the user is looking. For example, if the user is looking towards the front set of wheels, the front set of wheels can be configured to be controlled by the user while the back set of wheels is configured to not be controlled by the user, or, if the user is looking towards the back set of wheels, the back set of wheels is configured to be controlled by the user while the front set of wheels is configured to not be controlled by the user.
• At FIG. 6A, direction indicator 620 is pointing to the right of movable computer system 600.
  • direction indicator 620 indicates the direction that movable computer system 600 is currently traveling. Accordingly, at FIG. 6A, movable computer system 600 is moving along a path that is perpendicular to target parking spot 606b.
  • the front set of wheels is configured to be controlled by the user of movable computer system 600 while the back set of wheels is not configured to be controlled by the user of movable computer system 600 (e.g., the positioning of the back set of wheels is fixed and/or the positioning of the back set of wheels is controlled by movable computer system 600).
  • movable computer system 600 navigates to a target destination (and/or is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from the target destination), the user of movable computer system 600 is not able to directly control the set of wheels that is furthest from the target destination and the user is able to directly control the set of wheels that is closest to the target destination. It should be recognized that, in other embodiments, the user is able to directly control the set of wheels that is furthest from the target destination and the user is not able to directly control the set of wheels that is closest to the target destination.
• At FIG. 6A, movable computer system 600 detects an input (e.g., a voice command, the rotation of a steering mechanism, the depression of a physical input mechanism, and/or a hand gesture) that corresponds to a request to rotate the front set of wheels towards target parking spot 606b.
• In some embodiments, in response to movable computer system 600 detecting the input that corresponds to the request to rotate the front set of wheels, the front set of wheels is rotated such that the front set of wheels is directed towards (e.g., pointed towards and/or facing) target parking spot 606b. While the back set of wheels is configured to not be controlled by the user and the front set of wheels is configured to be controlled by the user, the angle (and/or the position) of the back set of wheels relative to target parking spot 606b is based on an angle (and/or position) of the front set of wheels relative to target parking spot 606b.
  • movable computer system 600 can set different angles (and/or positions) of the back set of wheels depending on the angle of the front set of wheels relative to target parking spot 606b.
  • the angle of the back set of wheels is set (e.g., by movable computer system 600 and/or another computer system that is in communication with movable computer system 600) such that movable computer system 600 navigates along the most efficient, comfortable, and/or safest path to reach target parking spot 606b.
  • the angle of the back set of wheels is set based on a relative position of movable computer system 600 with respect to target parking spot 606b (e.g., the angle of the back set of wheels with respect to target parking spot 606b gradually decreases as a greater amount of movable computer system 600 is positioned within target parking spot 606b).
  • the angle of the back set of wheels is set based on the positioning of one or more external objects (e.g., individuals, animals, construction signs, and/or road conditions, such as potholes and/or accumulation of water) that are in a navigation path of movable computer system 600.
  • the angle of the back set of wheels can be adjusted such that movable computer system 600 does not contact and/or come within a threshold distance (e.g., .1 feet -5 feet) of an external object.
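A hedged Swift sketch of deriving the back-set angle from the front-set angle and the vehicle's progress into the spot follows; the proportional counter-steer, taper, and clamp are illustrative assumptions rather than the disclosed control law.

```swift
// Minimal sketch: the rear-set angle follows the front-set angle and is
// tapered toward zero (parallel to the spot) as the vehicle settles in.
func rearWheelAngle(frontAngle: Double,      // radians, relative to the spot
                    fractionInSpot: Double,  // 0.0 ... 1.0 of vehicle within spot
                    maxAngle: Double = 0.6) -> Double {
    // Counter-steer proportionally to the front set, reduced as more of the
    // vehicle is positioned within the target parking spot.
    let raw = -0.5 * frontAngle * (1.0 - fractionInSpot)
    return min(max(raw, -maxAngle), maxAngle)
}

print(rearWheelAngle(frontAngle: 0.4, fractionInSpot: 0.0))  // -0.2
print(rearWheelAngle(frontAngle: 0.4, fractionInSpot: 0.75)) // -0.05
print(rearWheelAngle(frontAngle: 0.4, fractionInSpot: 1.0))  // 0.0 (parallel)
```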
  • movable computer system 600 is navigating in a direction that is angled towards target parking spot 606b.
  • movable computer system 600 accelerates and/or decelerates (e.g., without detecting an input from the user) to better align and/or to stop movable computer system 600 within target parking spot 606b.
  • movable computer system 600 provides (e.g., auditory, visual, and/or tactile) feedback based on a determination that movable computer system 600 is not aligned with target parking spot 606b.
• For example, movable computer system 600 can provide a tone through one or more playback devices that are in communication with movable computer system 600, display a flashing user interface via one or more displays that are in communication with movable computer system 600, and/or vibrate one or more hardware elements of movable computer system 600 when a determination is made that movable computer system 600 is not aligned within target parking spot 606b (1) after movable computer system 600 has come to rest within target parking spot 606b or (2) while navigating to target parking spot 606b but before movable computer system 600 has come to rest within target parking spot 606b.
  • movable computer system 600 provides (e.g., auditory, visual, and/or tactile) feedback based on a determination that movable computer system 600 will be misaligned within target parking spot 606b if movable computer system 600 continues along the current path of movable computer system 600.
  • movable computer system 600 can cause a steering mechanism of movable computer system 600 to rotate, vibrate at least a portion of the steering mechanism, apply a braking mechanism to the front set of tires and/or the back set of tires, and/or display a warning message, via a display of movable computer system 600, when a determination is made that the angle of approach of movable computer system 600 with respect to target parking spot 606b is too steep or shallow.
  • feedback can grow in intensity as misalignment between movable computer system 600 and target parking spot 606b grows and/or persists.
  • movable computer system 600 can provide a series of different types of feedback (e.g., first visual feedback, then audio feedback, then haptic feedback) as misalignment between movable computer system 600 and target parking spot 606b grows and/or persists.
  • movable computer system 600 stops providing feedback based on a determination (e.g., a determination made by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that movable computer system 600 transitions from being and/or will be misaligned with target parking spot 606b to being and/or will be aligned with target parking spot 606b.
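The feedback behavior in the preceding bullets amounts to an escalation-with-reset loop: intensity and modality grow as misalignment persists, and feedback stops once alignment is (or is projected to be) regained. The sketch below is speculative; the fixed modality ladder, tick counter, and escalation interval are all assumptions.

```python
from dataclasses import dataclass

MODALITIES = ["visual", "audio", "haptic"]  # escalation order from the text

@dataclass
class MisalignmentFeedback:
    ticks_misaligned: int = 0

    def update(self, misaligned: bool):
        """Return the modality to fire this tick, or None for silence."""
        if not misaligned:
            # Transitioned to aligned (or projected aligned): stop feedback.
            self.ticks_misaligned = 0
            return None
        self.ticks_misaligned += 1
        # Escalate to a stronger modality as misalignment persists.
        level = min(self.ticks_misaligned // 3, len(MODALITIES) - 1)
        return MODALITIES[level]
```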
  • movable computer system 600 detects an input (e.g., a voice command, the rotation of a steering mechanism, the depression of a physical input mechanism, and/or a hand gesture) that corresponds to a request to rotate the front set of wheels to be parallel with target parking spot 606b.
  • movable computer system 600 causes the back set of wheels to change direction such that the back set of wheels is parallel with target parking spot 606b.
  • the front set of wheels are rotated such that the front set of wheels are parallel with target parking spot 606b.
  • both the back set of wheels and the front set of wheels are parallel to target parking spot 606b.
  • movable computer system 600 moves in a direction that is parallel to target parking spot 606b.
  • movable computer system 600 performs one or more operations (e.g., unlocks doors of movable computer system 600, powers off an air conditioning device of movable computer system 600, closes one or more windows of movable computer system 600, decreases a speed of movable computer system 600 (e.g., gradually decreases to a stop), and/or increases a speed of movable computer system 600) when a determination is made that movable computer system 600 is parallel to target parking spot 606b.
  • a mode (e.g., the first mode, the second mode, and/or the third mode as described above) of movable computer system 600 is based on the orientation of movable computer system 600 relative to target parking spot 606b.
  • movable computer system 600 can transition from the second mode to the first mode or the third mode when a determination is made that movable computer system 600 is parallel to target parking spot 606b.
  • movable computer system 600 comes to rest within target parking spot 606b.
  • movable computer system 600 is correctly aligned within target parking spot 606b.
  • movable computer system 600 comes to rest within target parking spot 606b without detecting that the user has caused a brake to be applied to the front set of wheels and/or the back set of wheels.
• movable computer system 600 performs one or more operations (e.g., unlocks doors of movable computer system 600, powers off an air conditioning device of movable computer system 600, and/or closes one or more windows of movable computer system 600) when a determination is made that movable computer system 600 has come to rest within target parking spot 606b.
• movable computer system 600 transitions between different modes of movable computer system 600 when a determination is made that movable computer system 600 has come to rest within target parking spot 606b. For example, movable computer system 600 can transition from the second mode to the third mode to allow movable computer system 600 to make any adjustments to the positioning of movable computer system 600. For another example, movable computer system 600 can transition from the second mode to the first mode to allow the user to rotate the front set of wheels and/or the back set of wheels after movable computer system 600 has stopped.
  • movable computer system 600 transitions, without user intervention, between respective drive states (e.g., reverse, park, neutral, and/or drive) when a determination is made that movable computer system 600 has come to rest within target parking spot 606b.
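The mode and drive-state transitions on coming to rest could be expressed as a single dispatch, as in this hypothetical sketch. The mode names follow the document's "first/second/third mode" labels; everything else (function name, inputs, the "park" choice) is assumed.

```python
def on_came_to_rest(needs_position_adjustment: bool,
                    user_wants_wheel_control: bool) -> tuple[str, str]:
    """Pick the next steering mode and drive state once the system has come
    to rest within the target parking spot, without user intervention."""
    drive_state = "park"  # e.g., shift out of reverse/drive automatically
    if needs_position_adjustment:
        next_mode = "third"   # system adjusts its own positioning
    elif user_wants_wheel_control:
        next_mode = "first"   # user may rotate front and/or back wheels
    else:
        next_mode = "second"  # remain in the assisted mode
    return next_mode, drive_state
```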
  • movable computer system 600 rotates the front set of wheels and/or the back set of wheels to respective angles (e.g., based on a current context, such as an incline of a surface and/or weather) without user intervention.
  • rotating the front set of wheels and/or the back set of wheels to the respective angles helps prevent movable computer system 600 from moving (e.g., because of weather conditions (e.g., ice and/or rain) and/or because of a slope of target parking spot 606b) while movable computer system 600 is at rest within target parking spot 606b.
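As a rough model of the at-rest wheel-angle behavior above, the angle could scale with incline and traction conditions. This is a sketch under assumed thresholds; the disclosure gives no numeric values.

```python
def parked_wheel_angle(slope_deg: float, low_traction: bool) -> float:
    """Hypothetical at-rest policy: steeper surfaces and low-traction
    conditions (e.g., ice and/or rain) warrant a larger wheel angle so the
    parked vehicle resists rolling."""
    if abs(slope_deg) < 1.0 and not low_traction:
        return 0.0  # effectively flat and dry: keep the wheels straight
    angle = min(30.0, 3.0 * abs(slope_deg))  # grow with incline, capped
    if low_traction:
        angle = min(30.0, angle + 10.0)      # extra margin in bad weather
    return angle
```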
  • FIG. 6E illustrates diagram 608, which includes set of arrows 640 and set of arrows 642.
  • set of arrows 640 and set of arrows 642 correspond to movable computer system 600 navigating to target parking spot 606b where movable computer system 600 does not deviate from a navigation path of movable computer system 600.
  • set of arrows 640 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of target parking spot 606b (e.g., an upward facing arrow indicates that the back set of wheels is directed away from target parking spot 606b and a downward facing arrow indicates that the back set of wheels is directed towards target parking spot 606b).
  • the back set of wheels is configured to not be controlled by a user (e.g., and/or be controlled by movable computer system 600 instead of the user) throughout at least a portion of set of arrows 640 as discussed above.
• movable computer system 600 causes the back set of wheels to converge on a single target angle (e.g., the angle of arrow 608f1) throughout diagram 608.
  • the single target angle can be parallel to sides of target parking spot 606b.
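Converging on a single target angle can be modeled as repeatedly closing a fraction of the remaining error each control tick. The sketch below assumes a simple proportional step; the gain value is a placeholder.

```python
def converge_toward(current_deg: float, target_deg: float,
                    gain: float = 0.25) -> float:
    """One control tick of convergence: move the automatically steered
    wheels a fraction of the way from their current angle to the single
    target angle (e.g., parallel to the sides of the spot)."""
    return current_deg + gain * (target_deg - current_deg)

# Example: starting perpendicular to the spot (90 degrees) and converging
# on parallel (0 degrees) over successive ticks.
angle = 90.0
for _ in range(10):
    angle = converge_toward(angle, 0.0)  # approaches 0 without overshoot
```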
  • set of arrows 642 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of target parking spot 606b (e.g., an upward facing arrow indicates that the front set of wheels is directed away from target parking spot 606b and a downward facing arrow indicates that the front set of wheels is directed towards target parking spot 606b).
  • the front set of wheels is configured to be controlled by the user throughout at least a portion of set of arrows 642 as discussed above.
• arrow 608a1 and arrow 608a2 correspond to a first point in time where the back set of wheels and the front set of wheels are perpendicular to target parking spot 606b (e.g., movable computer system 600 is approaching target parking spot 606b).
• the back set of wheels is not in a fixed positional relationship with movable computer system 600. That is, the back set of wheels is configured to turn independently of the direction of travel of movable computer system 600 (e.g., and/or the front set of wheels).
• arrow 608a1 (e.g., and the remaining arrows in set of arrows 640) does not represent a fixed positional relationship between movable computer system 600 and the back set of wheels.
• Arrow 608b1 and arrow 608b2 correspond to a second point in time, which follows the first point in time, where movable computer system 600 is turning into target parking spot 606b. At the second point in time, the back set of wheels is angled away from target parking spot 606b and the front set of wheels is angled towards target parking spot 606b.
  • movable computer system 600 is configured for four-wheel steering.
• the first set of wheels can be directed in the opposite direction from the second set of wheels to reduce the turning radius of movable computer system 600.
  • the back set of wheels and movable computer system 600 have a fixed positional relationship.
  • the arrows included in set of arrows 640 can be directed in a direction that mimics the direction of travel of movable computer system 600.
• Arrow 608c1 and arrow 608c2 correspond to a third point in time that follows the second point in time where movable computer system 600 continues to turn into target parking spot 606b. At the third point in time, the back set of wheels is angled towards target parking spot 606b and the front set of wheels is parallel to target parking spot 606b. Arrow 608d1 and arrow 608d2 correspond to a fourth point in time that follows the third point in time where movable computer system 600 navigates towards the rear of target parking spot 606b. At the fourth point in time, both the front set of wheels and the back set of wheels are parallel to target parking spot 606b.
• Arrow 608e1 and arrow 608e2 correspond to a fifth point in time that follows the fourth point in time where movable computer system 600 continues to navigate towards the rear of target parking spot 606b.
  • both the front set of wheels and the back set of wheels are parallel to target parking spot 606b as movable computer system 600 pulls further into target parking spot 606b.
• Arrow 608f1 and arrow 608f2 correspond to a sixth point in time that follows the fifth point in time as movable computer system 600 comes to rest within target parking spot 606b.
• both the front set of wheels and the back set of wheels are parallel to target parking spot 606b as movable computer system 600 comes to rest within target parking spot 606b.
  • a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along a current path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 606b.
• Because a determination is made that continuing along the current path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 606b, at each position represented by a respective arrow included in set of arrows 640, movable computer system 600 causes the back set of wheels to be positioned at an angle such that the back set of wheels does not cause movable computer system 600 to deviate from the current path of movable computer system 600.
• movable computer system 600 decelerates in response to the user applying pressure to a brake pedal of movable computer system 600. In some embodiments, between the positioning of movable computer system 600 that corresponds to arrow 608e1 and arrow 608f1, movable computer system 600 decelerates without user intervention.
  • FIG. 6F illustrates diagram 610, which includes set of arrows 650 and set of arrows 652.
  • set of arrows 650 and set of arrows 652 correspond to movable computer system 600 navigating to another parking spot that is different from target parking spot 606b where movable computer system 600 deviates from a navigation path of movable computer system 600.
  • set of arrows 650 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of the other parking spot (e.g., an upward facing arrow indicates that the back set of wheels is directed away from the other parking spot and a downward facing arrow indicates that the back set of wheels is directed towards the other parking spot).
  • the back set of wheels is configured to not be controlled by a user (e.g., and/or be controlled by movable computer system 600 instead of the user) throughout at least a portion of set of arrows 650 as discussed above.
• movable computer system 600 causes the back set of wheels to converge on a single target angle (e.g., the angle of arrow 610f1) throughout diagram 610.
  • the single target angle can be parallel to sides of the other parking spot.
• set of arrows 652 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of the other parking spot (e.g., an upward facing arrow indicates that the front set of wheels is directed away from the other parking spot and a downward facing arrow indicates that the front set of wheels is directed towards the other parking spot).
  • the front set of wheels is configured to be controlled by the user throughout at least a portion of set of arrows 652 as discussed above.
• the positioning of the front set of wheels as movable computer system 600 navigates to the other parking spot at FIG. 6F mimics the positioning of the front set of wheels as movable computer system 600 navigates to target parking spot 606b at FIG. 6E. Accordingly, at FIG. 6F, set of arrows 652 is the same as set of arrows 642 at FIG. 6E.
  • a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along a current path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within the other parking spot.
• Because a determination is made that continuing along a current path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within the other parking spot, movable computer system 600 causes the back set of wheels to be positioned, at each of the positions represented by arrows 610a1-610d1, at an angle that does not cause movable computer system 600 to deviate from the navigation path (e.g., the same path of movable computer system 600 at FIG. 6E).
• movable computer system 600 causes the back set of wheels to be positioned at an angle that does not cause movable computer system 600 to deviate from the navigation path based on a determination that, if movable computer system 600 continues along the navigation path of movable computer system 600, then movable computer system 600 will not come into contact with and/or come within a predefined distance of an external object and/or will be aligned with the other parking spot.
• a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be misaligned within the other parking spot. Because a determination is made that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be misaligned within the other parking spot, movable computer system 600 causes the back set of wheels to be adjusted to an angle that causes movable computer system 600 to deviate from the navigation path to a new path.
  • the positioning of the back set of wheels (e.g., the set of wheels that is configured to not be controlled by the user) is adjusted, without user intervention, such that movable computer system 600 deviates from the navigation path to the new path.
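The hold-or-deviate decision described above can be written as a small planner step. This sketch assumes a scalar "predicted lateral offset at rest" and a proportional correction; both the inputs and the gains are illustrative inventions of this rewrite, not the disclosed method.

```python
def plan_rear_angle(predicted_offset_ft: float,
                    path_holding_angle_deg: float,
                    tolerance_ft: float = 0.5,
                    gain_deg_per_ft: float = 4.0) -> float:
    """If continuing the current path is projected to end aligned, keep the
    angle that preserves that path; otherwise steer the rear wheels to pull
    the vehicle onto a new path that offsets the projected error (e.g., an
    error made by the user in controlling the front wheels)."""
    if abs(predicted_offset_ft) <= tolerance_ft:
        return path_holding_angle_deg  # projected aligned: do not deviate
    # Projected misaligned: deviate, opposing the sign of the offset.
    return path_holding_angle_deg - gain_deg_per_ft * predicted_offset_ft
```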
• the angle of the back set of wheels is adjusted (e.g., by movable computer system 600 and/or another computer system that is in communication with movable computer system 600) to offset an error made by the user in controlling the front set of wheels. Accordingly, the orientation of arrow 610e1 at FIG. 6F is different than the orientation of arrow 608e1 at FIG. 6E. More specifically, at FIG. 6E, the back set of wheels is parallel to target parking spot 606b at arrow 608e1, and at FIG. 6F, the back set of wheels is angled to the left of the other parking spot. The back set of wheels is angled at arrow 610e1 such that rear half 602 of movable computer system 600 is moved to the left within the other parking spot.
  • FIGS. 7A-7C illustrate exemplary diagrams for navigating between objects in a forward manner in accordance with some embodiments.
• the diagrams in these figures are used to illustrate the processes described below, including the processes in FIGS. 9, 10A-10B, and 12.
  • FIG. 7A includes a diagram that illustrates movable computer system 600 navigating towards target parking spot 706.
  • target parking spot 706 is a parking spot that is parallel to the direction of travel of movable computer system 600.
  • the diagram of FIG. 7A is displayed by a display of movable computer system 600 and serves as a visual aid to assist a user in navigating to the target destination.
  • the diagram of FIG. 7A is representative of a position of movable computer system 600 while navigating to the target destination and is not displayed by a display of movable computer system 600.
  • target parking spot 706 is positioned between object 702 and object 704.
• object 702 and object 704 are inanimate objects such as automobiles, construction signs, trees, and/or road hazards, such as a pothole and/or a speed bump.
  • object 702 and object 704 are animate objects, such as an individual and/or an animal.
  • direction indicator 720 indicates the path that movable computer system 600 will travel to arrive at target parking spot 706. Accordingly, as indicated by direction indicator 720, movable computer system 600 will travel forward before angling downwards towards target parking spot 706.
• movable computer system 600 causes the back set of wheels to converge on a first angle as movable computer system 600 travels in the forward direction towards target parking spot 706 (e.g., an angle that is parallel or substantially parallel to curb 700, such as illustrated by arrow 708d1) and movable computer system 600 causes the back set of wheels to converge on a second angle (e.g., an angle that is perpendicular or approximately perpendicular to curb 700, such as illustrated by arrow 708e1) as movable computer system 600 angles downwards towards target parking spot 706.
  • the set of wheels of movable computer system 600 that is closest to target parking spot 706 is configured to be controlled by the user of movable computer system 600.
  • a determination is made that the front set of wheels is positioned closer to target parking spot 706 than the back set of wheels.
  • the front set of wheels is configured to be controlled by the user and the back set of wheels is configured to not be controlled by the user as movable computer system 600 navigates towards target parking spot 706.
• the front set of wheels is configured to not be controlled by the user when a determination is made that movable computer system 600 is within a predetermined distance (e.g., 0.1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) of object 702, object 704, and/or target parking spot 706.
  • the front set of wheels is configured to not be controlled by the user of movable computer system 600 and the back set of wheels is configured to be controlled by the user of movable computer system 600 when a determination is made that the back set of wheels is positioned closer to target parking spot 706 than the front set of wheels.
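The which-set-is-user-controlled rule above amounts to a proximity comparison plus a takeover threshold. A minimal sketch, with all names and the 5-foot cutoff assumed for illustration:

```python
def assign_control(front_dist_ft: float, back_dist_ft: float,
                   takeover_dist_ft: float = 5.0) -> dict:
    """The wheel set closest to the target is user-controlled and the other
    set is system-controlled; user control is revoked once the vehicle is
    within a predetermined distance of the target or nearby objects."""
    closest, other = (("front", "back") if front_dist_ft <= back_dist_ft
                      else ("back", "front"))
    if min(front_dist_ft, back_dist_ft) < takeover_dist_ft:
        return {closest: "system", other: "system"}  # fully assisted
    return {closest: "user", other: "system"}
```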
  • a navigation path of movable computer system 600 and/or a speed of movable computer system 600 changes (e.g., without detecting a user input) when a determination is made that the positioning of object 702 and/or object 704 changes (e.g., object 702 and/or object 704 moves (1) towards and/or moves away from movable computer system 600 and/or (2) relative to parking spot 706).
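The reaction to a moving external object could be as simple as scaling speed by remaining clearance, as in this assumed-parameter sketch:

```python
def adjusted_speed(speed_mph: float, object_dist_ft: float,
                   slow_zone_ft: float = 15.0) -> float:
    """Hypothetical speed response: once a tracked object enters the slow
    zone along the navigation path, scale speed down linearly, reaching a
    stop at zero clearance (no user input required)."""
    if object_dist_ft >= slow_zone_ft:
        return speed_mph
    return speed_mph * max(0.0, object_dist_ft) / slow_zone_ft
```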
  • FIG. 7B illustrates diagram 708, which includes set of arrows 740 and set of arrows 742.
  • set of arrows 740 and set of arrows 742 correspond to movable computer system 600 navigating to target parking spot 706 where movable computer system 600 does not deviate from a navigation path of movable computer system 600.
  • set of arrows 740 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of target parking spot 706 (e.g., a rightward facing arrow indicates that the back set of wheels is directed towards target parking spot 706, an upward facing arrow indicates that the back set of wheels is directed away from target parking spot 706, and a downward facing arrow indicates that the back set of wheels is directed towards target parking spot 706) (e.g., a horizontal arrow indicates that the back set of wheels is parallel to target parking spot 706 and a vertical arrow indicates that the back set of wheels is perpendicular to target parking spot 706).
  • the back set of wheels is configured to not be controlled by a user (e.g., and/or be controlled by movable computer system 600 instead of the user) throughout at least a portion of set of arrows 740 as discussed above.
• movable computer system 600 causes the back set of wheels to converge on a first angle as movable computer system 600 travels in the forward direction towards target parking spot 706 (e.g., an angle that is perpendicular or approximately perpendicular to curb 700, such as illustrated by arrow 708d1) and movable computer system 600 causes the back set of wheels to converge on a second angle (e.g., an angle that is parallel or substantially parallel to curb 700, such as illustrated by arrow 708e1) as movable computer system 600 angles downwards towards target parking spot 706.
• set of arrows 742 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of target parking spot 706 (e.g., a rightward facing arrow indicates that the front set of wheels is directed towards target parking spot 706, an upward facing arrow indicates that the front set of wheels is directed away from target parking spot 706, and a downward facing arrow indicates that the front set of wheels is directed towards target parking spot 706) (e.g., a horizontal arrow indicates that the front set of wheels is parallel to target parking spot 706 and a vertical arrow indicates that the front set of wheels is perpendicular to target parking spot 706).
  • the front set of wheels is configured to be controlled by the user throughout at least a portion of set of arrows 742 as discussed above.
  • a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the respective path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 706.
• Because a determination is made that continuing along the respective path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 706, at each position represented by a respective arrow included in set of arrows 740, movable computer system 600 causes the back set of wheels to be positioned at an angle such that movable computer system 600 does not deviate from the navigation path of movable computer system 600.
• movable computer system 600 decelerates in response to the user applying pressure to a brake pedal of movable computer system 600. In some embodiments, between the positioning of movable computer system 600 that corresponds to arrow 708d1 and arrow 708e1, movable computer system 600 decelerates without user intervention.
  • FIG. 7C illustrates diagram 710, which includes set of arrows 750 and set of arrows 752.
  • set of arrows 750 and set of arrows 752 correspond to movable computer system 600 navigating to another parking spot that is different from target parking spot 706 where movable computer system 600 deviates from a navigation path of movable computer system 600.
  • set of arrows 750 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of the other parking spot (e.g., a rightward facing arrow indicates that the back set of wheels is directed towards the other parking spot, an upward facing arrow indicates that the back set of wheels is directed away from the other parking spot and a downward facing arrow indicates that the back set of wheels is directed towards the other parking spot) (e.g., a horizontal arrow indicates that the back set of wheels is parallel to the other parking spot and a vertical arrow indicates that the back set of wheels is perpendicular to the other parking spot).
  • the back set of wheels is configured to not be controlled by the user throughout at least a portion of set of arrows 750 as discussed above.
• movable computer system 600 causes the back set of wheels to converge on a first angle as movable computer system 600 travels in the forward direction towards target parking spot 706 (e.g., an angle that is perpendicular or approximately perpendicular to curb 700, such as illustrated by arrow 708d1) and movable computer system 600 causes the back set of wheels to converge on a second angle (e.g., an angle that is parallel or substantially parallel to curb 700, such as illustrated by arrow 708e1) as movable computer system 600 angles downwards towards target parking spot 706.
• set of arrows 752 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of the other parking spot (e.g., an upward facing arrow indicates that the front set of wheels is directed away from the other parking spot and a downward facing arrow indicates that the front set of wheels is directed towards the other parking spot) (e.g., a horizontal arrow indicates that the front set of wheels is parallel to the other parking spot and a vertical arrow indicates that the front set of wheels is perpendicular to the other parking spot) as movable computer system 600 navigates to the other parking spot.
  • the front set of wheels is configured to be controlled by the user throughout at least a portion of set of arrows 752 as discussed above.
• the other parking spot is shorter in length than target parking spot 706 at FIGS. 7A-7B. Accordingly, performing the same navigation sequence that was performed at FIG. 7B will cause movable computer system 600 to be misaligned within the other parking spot. As illustrated in FIG. 7C, the positioning of the front set of wheels as movable computer system 600 navigates to the other parking spot mimics the positioning of the front set of wheels as movable computer system 600 navigates to target parking spot 706 at FIG. 7B. Accordingly, at FIG. 7C, set of arrows 752 is the same as set of arrows 742 at FIG. 7B.
  • a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within the other parking spot.
• movable computer system 600 causes the back set of wheels to be positioned at an angle that does not cause movable computer system 600 to deviate from the navigation path of movable computer system 600 at the positions of the back set of wheels that correspond to arrow 710a1 and arrow 710b1.
  • a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be misaligned within the other parking spot. Because a determination is made that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be misaligned within the other parking spot, movable computer system 600 causes the back set of wheels to be adjusted to an angle that causes movable computer system 600 to deviate from the navigation path to a new path.
• the back set of wheels is angled towards the rear of the other parking spot such that movable computer system 600 is moved towards the rear of the other parking spot while, at arrow 708c1, the back set of wheels is angled towards the front of target parking spot 706 such that movable computer system 600 is moved towards the front of target parking spot 706.
  • a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the new path of movable computer system 600 will cause movable computer system 600 to be aligned within the other parking spot.
• Because a determination is made that continuing along the new path of movable computer system 600 will cause movable computer system 600 to be aligned within the other parking spot (and/or reach the second target angle), movable computer system 600 causes the back set of wheels to be positioned at an angle such that movable computer system 600 does not deviate from the new path at arrows 710d1 and 710e1 (and/or reach the first target angle and the second target angle, respectively).
  • FIGS. 8A-8C illustrate exemplary diagrams for navigating between objects in a backward manner in accordance with some embodiments.
• the diagrams in these figures are used to illustrate the processes described below, including the processes in FIGS. 9, 10A-10B, and 12.
  • FIG. 8A includes diagram 800 that illustrates movable computer system 600 navigating towards target parking spot 806.
  • target parking spot 806 is a parking spot that is parallel to the direction of travel of movable computer system 600 (e.g., the current direction of travel of movable computer system 600 and/or a previous direction of travel of movable computer system 600).
• the diagram of FIG. 8A is displayed by a navigation application of movable computer system 600 and serves as a visual aid to assist a user in navigating to the target destination.
• the diagram of FIG. 8A is representative of a position of movable computer system 600 while navigating to the target destination and is not displayed by a navigation application of movable computer system 600.
  • target parking spot 806 is positioned between object 802 and object 804.
• object 802 and object 804 are inanimate objects such as automobiles, construction signs, trees, and/or road hazards, such as a pothole and/or a speed bump.
  • object 802 and object 804 are animate objects, such as an individual and/or an animal.
  • direction indicator 820 indicates the path that movable computer system 600 will travel to arrive at target parking spot 806. Accordingly, as indicated by direction indicator 820, movable computer system 600 will travel in a reverse direction before angling downwards at an angle (e.g., a 90-degree angle or an angle that is substantially 90 degrees) towards target parking spot 806.
  • the set of wheels of movable computer system 600 that is closest to target parking spot 806 is configured to be controlled by a user of movable computer system 600.
  • a determination is made (e.g., by movable computer system 600 and/or by a computer system that is in communication with movable computer system 600) that the back set of wheels is positioned closer to target parking spot 806 than the front set of wheels.
• a navigation path of movable computer system 600 and/or a speed of movable computer system 600 changes (e.g., without detecting a user input) when a determination is made that the positioning of object 802 and/or object 804 changes (e.g., object 802 and/or object 804 moves (1) towards and/or moves away from movable computer system 600 and/or (2) relative to target parking spot 806).
  • FIG. 8B illustrates diagram 808, which includes set of arrows 840 and set of arrows 842.
  • set of arrows 840 and set of arrows 842 correspond to movable computer system 600 navigating to target parking spot 806 where movable computer system 600 does not deviate from a navigation path of movable computer system 600.
• set of arrows 840 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of target parking spot 806 (e.g., a downward facing arrow indicates that the back set of wheels is directed towards target parking spot 806 and a leftward facing arrow indicates that the back set of wheels is directed towards target parking spot 806) (e.g., a horizontal arrow indicates that the back set of wheels is parallel with target parking spot 806 and a vertical arrow indicates that the back set of wheels is perpendicular with target parking spot 806).
  • the back set of wheels is configured to be controlled by a user throughout at least a portion of set of arrows 840 as discussed above.
• set of arrows 842 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of target parking spot 806 (e.g., a downward facing arrow indicates that the front set of wheels is directed towards target parking spot 806 and a leftward facing arrow indicates that the front set of wheels is directed towards target parking spot 806) (e.g., a horizontal arrow indicates that the front set of wheels is parallel with target parking spot 806 and a vertical arrow indicates that the front set of wheels is perpendicular with target parking spot 806).
  • the front set of wheels is configured to not be controlled by the user (e.g., and/or be controlled by movable computer system 600 instead of the user) throughout at least a portion of set of arrows 842 as discussed above.
• movable computer system 600 causes the front set of wheels to converge on a first angle as movable computer system 600 travels in the backward direction towards target parking spot 806 (e.g., an angle that is perpendicular or approximately perpendicular to curb 800, such as illustrated by arrow 808c2) and movable computer system 600 causes the front set of wheels to converge on a second angle (e.g., an angle that is parallel or substantially parallel to curb 800, such as illustrated by arrow 808d2) as movable computer system 600 angles downwards towards target parking spot 806.
  • a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 806. Because a determination is made that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 806, movable computer system 600 causes the front set of wheels to be positioned at an angle such that movable computer system 600 does not deviate from the navigation path.
  • FIG. 8C illustrates diagram 810, which includes set of arrows 850 and set of arrows 852.
  • set of arrows 850 and set of arrows 852 correspond to movable computer system 600 navigating to target parking spot 806 where movable computer system 600 deviates from a navigation path of movable computer system 600. It should be recognized that the deviation in FIG. 8C is a result of an error by the user rather than a different parking spot, as described above with respect to FIGS. 6E-6F and 7B-7C.
• set of arrows 850 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of target parking spot 806 (e.g., a downward facing arrow indicates that the back set of wheels is directed towards target parking spot 806 and a leftward facing arrow indicates that the back set of wheels is directed towards target parking spot 806) (e.g., a horizontal arrow indicates that the back set of wheels is parallel with target parking spot 806 and a vertical arrow indicates that the back set of wheels is perpendicular with target parking spot 806).
  • the back set of wheels is configured to be controlled by a user throughout at least a portion of set of arrows 850 as discussed above.
• set of arrows 852 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of target parking spot 806 (e.g., a downward facing arrow indicates that the front set of wheels is directed towards target parking spot 806 and a leftward facing arrow indicates that the front set of wheels is directed towards target parking spot 806) (e.g., a horizontal arrow indicates that the front set of wheels is parallel with target parking spot 806 and a vertical arrow indicates that the front set of wheels is perpendicular with target parking spot 806).
  • the front set of wheels is configured to not be controlled by the user (e.g., and/or be controlled by movable computer system 600 instead of the user) throughout at least a portion of set of arrows 852 as discussed above.
• movable computer system 600 causes the front set of wheels to converge on a first angle as movable computer system 600 travels in the backward direction towards target parking spot 806 (e.g., an angle that is perpendicular or approximately perpendicular to a curb, similar to arrow 808d2 in FIG. 8B) and movable computer system 600 causes the front set of wheels to converge on a second angle (e.g., an angle that is parallel or substantially parallel to the curb, such as illustrated by arrow 810e2) as movable computer system 600 angles downwards towards target parking spot 806.
  • the positioning of the back set of wheels as movable computer system 600 navigates to target parking spot 806 at FIG. 8C does not mimic the positioning of the back set of wheels as movable computer system 600 navigates to target parking spot 806 at FIG. 8B.
• arrow 808b1 in FIG. 8B indicates that the back set of wheels is angled towards target parking spot 806 for a second point in time while arrow 810b1 in FIG. 8C indicates that the back set of wheels is perpendicular to target parking spot 806 for a second point in time. Accordingly, movable computer system 600 navigates along a different path to target parking spot 806 at FIG. 8B in contrast to the path movable computer system 600 navigates along at FIG. 8C.
  • a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along a current path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 806. Because a determination is made that if movable computer system 600 continues along the current path of movable computer system 600 then movable computer system 600 will be correctly aligned within target parking spot 806, movable computer system 600 causes the front set of wheels to be positioned such that movable computer system 600 does not deviate from its current path.
• a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the current path of movable computer system 600 will cause movable computer system 600 to be misaligned within target parking spot 806. Because a determination is made that continuing along the current path of movable computer system 600 will cause movable computer system 600 to be misaligned within target parking spot 806, movable computer system 600 causes the front set of wheels to be adjusted to an angle that causes movable computer system 600 to deviate from the current path to a new path.
  • the orientation of arrow 810c2 at FIG. 8C is different than the orientation of arrow 808c2 at FIG. 8B. More specifically, at arrow 810c2, the front set of wheels is perpendicular with respect to the position of target parking spot 806 such that movable computer system 600 is moved perpendicular to target parking spot 806 while, at arrow 808c2, the front set of wheels is angled towards the rear of target parking spot 806 such that movable computer system 600 is moved at an angle with respect to target parking spot 806.
  • FIG. 9 is a flow diagram illustrating a method (e.g., process 900) for configuring a movable computer system in accordance with some embodiments. Some operations in process 900 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • process 900 provides an intuitive way for configuring a movable computer system.
  • Process 900 reduces the cognitive burden on a user for configuring a movable computer system, thereby creating a more efficient human-machine interface.
• process 900 is performed at a computer system (e.g., 600 and/or 1100) that is in communication with a first movement component (e.g., 602 and/or 604) (e.g., an actuator, a wheel, and/or an axle) and a second movement component (e.g., 602 and/or 604) different from (e.g., separate from and/or not directly connected to) the first movement component.
  • the computer system is a watch, a phone, a tablet, a processor, a head-mounted display (HMD) device, a vehicle, and/or a personal computing device.
  • the computer system is in communication with one or more cameras (e.g., one or more telephoto, wide angle, and/or ultra-wide-angle cameras).
  • the first movement component is located on a first side of the computer system.
  • the second movement component is located on a second side different and/or opposite from the first side.
  • the first side of the computer system is the front and/or front side of the computer system and the second side of the computer system is the back and/or back side of the computer system and/or vice- versa.
  • the first movement component primarily causes a change in orientation of the first side of the computer system, causes the first side of the computer system to change position more than the second side of the computer system changes position, and/or impacts the first side of the computer system more than the second side of the computer system.
• the second movement component primarily causes a change in orientation of the second side of the computer system, causes the second side of the computer system to change position more than the first side of the computer system changes position, and/or impacts the second side of the computer system more than the first side of the computer system.
• While detecting a target location (e.g., 606b) (e.g., the destination, a target destination, a stopping location, a parking spot, a demarcated area, and/or a pre-defined area) in a physical environment (e.g., and while the first movement component is moving in a first direction and/or the second movement component is moving in a second direction (e.g., the same as or different from the first direction)) (e.g., and/or in response to detecting a current location of the computer system relative to the target location), the computer system detects (902) an event with respect to the target location (e.g., as described above in relation to FIG. 6A).
  • detecting the event includes detecting that the computer system is within a predefined distance from the target location. In some embodiments, detecting the event includes detecting, via an input component in communication with the computer system, an input corresponding to a request to assist navigation to the target location. In some embodiments, detecting the event includes detecting a current angle of the first and/or second movement component.
• In response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, the computer system configures (904) (e.g., maintains configuration or changes configuration of) (e.g., based on a distance, location, and/or direction of the target location relative to the computer system) (e.g., based on an angle of the second movement component) one or more angles of one or more movement components (e.g., 602 and/or 604) (e.g., a set of one or more movement components including the first movement component and the second movement component), wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle (906) (e.g., a wheel angle and/or a direction) of the first movement component is configured to be controlled in an automatic manner and an angle of the second movement component is configured to be controlled in a manual manner.
  • the target location is detected via one or more sensors (e.g., a camera, a depth sensor, and/or a gyroscope) in communication with the computer system (e.g., one or more sensors of the computer system).
  • the target location is detected via (e.g., based on and/or using) a predefined map of the physical environment.
  • the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first (e.g., semi-autonomous) mode.
  • the first set of one or more criteria includes a criterion that is satisfied when the computer system is within a predefined distance from and/or direction to the target location.
  • the first set of one or more criteria includes a criterion that is satisfied when input was detected that corresponds to a request to navigate to the target location. In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when the computer system is moving in a third direction (e.g., the same as or different from the first and/or second direction) (e.g., at least partially toward the target location).
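Taken together, the example criteria above could be evaluated as a simple predicate over navigation state. The sketch below conjoins them for illustration; a real embodiment might use any subset of criteria, and every field name and threshold here is assumed.

```python
from dataclasses import dataclass

@dataclass
class NavState:
    mode: str                      # e.g., "semi_autonomous"
    distance_to_target_ft: float
    assist_requested: bool         # input requesting navigation to the target
    moving_toward_target: bool

def first_criteria_satisfied(s: NavState,
                             assist_range_ft: float = 50.0) -> bool:
    """Illustrative conjunction of the example criteria listed above."""
    return (s.mode == "semi_autonomous"
            and s.distance_to_target_ft <= assist_range_ft
            and s.assist_requested
            and s.moving_toward_target)
```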
• a steering mechanism (e.g., a steering wheel, a steering yoke, an input device, a touch screen, a physical hardware device, and/or any combination thereof) in communication with the computer system does not directly control the angle of the first movement component when the first set of one or more criteria is satisfied.
  • the angle of the first movement component is reactive to the angle of the second movement component.
• the angle of the first movement component continues to be automatically modified until the computer system is a predefined distance (e.g., 0-2 feet) from the target location.
  • the manual manner is the first manner.
  • the automatic manner is the first manner.
  • the first manner is the manual manner and is not the automatic manner.
• in response to detecting the change with respect to the computer system and the target location and in accordance with a determination that a second set of one or more criteria, different from the first set of one or more criteria, is satisfied, the angle (e.g., a wheel angle and/or a direction) of the first movement component and the angle of the second movement component continue to be controlled in the first manner.
• in response to detecting the change with respect to the computer system and the target location and in accordance with a determination that the second set of one or more criteria is satisfied, the computer system forgoes configuring the angle of the first movement component to be controlled in the automatic manner.
  • the event is detected while navigating to a destination in the physical environment.
  • the event is detected while the angle of the first movement component and the angle of the second movement component are configured to be controlled in a first manner (e.g., manually (e.g., by a user of the computer system and/or by a person), semi -manually, semi-autonomously, and/or fully autonomously (e.g., by one or more computer systems and not by a person and/or user of the computer system) (e.g., by the computer system and/or a user of the computer system)).
  • configuring the angle of the first movement component and the angle of the second movement component to be controlled in the first manner includes forgoing configuring the angle of the first movement component and/or the angle of the second movement component to be controlled by the computer system.
  • configuring the angle of the first movement component and the angle of the second movement component to be controlled in the first manner includes configuring the angle of the first movement component and/or the angle of the second movement component to be controlled based on input (e.g., user input) detected via one or more sensors in communication with the computer system.
  • the angle of the first movement component and the angle of the second movement component are configured to be controlled in the first manner when the computer system is configured to be at least partially manually controlled.
  • the angle of the first movement component and the angle of the second movement component are configured to be controlled in the first manner when the computer system is at least a predefined distance from the destination.
• the angle of the first movement component and the angle of the second movement component are configured to be controlled in the first manner when the computer system is within a predefined distance from the destination. In some embodiments, in response to detecting the event and in accordance with a determination that a third set of one or more criteria is satisfied, the computer system configures the angle of the first movement component and/or the angle of the second movement component to be manually controlled.
  • navigating includes displaying one or more navigation instructions corresponding to the destination.
  • navigating includes, at a first time, automatically controlling the first movement component and/or the second movement component based on a determined path to the destination.
  • Causing an angle of the first movement component to be controlled in an automatic manner and an angle of the second movement component to be controlled in a manual manner in response to detecting an event and the first set of one or more criteria being satisfied allows the computer system to partially assist a user in reaching the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the computer system detects a current angle of the second movement component (e.g., 602 and/or 604).
  • the current angle of the second movement component is set based on input detected via one or more input devices (e.g., a camera and/or a steering mechanism (e.g., a steering wheel, a steering yoke, an input device, a touch screen, a physical hardware device, and/or any combination thereof)) in communication with the computer system.
• in response to detecting the current angle of the second movement component and in accordance with a determination that the current angle of the second movement component is a first angle, the computer system automatically modifies (e.g., based on the current angle of the second movement component) a current angle of the first movement component (e.g., 602 and/or 604) to be a second angle (e.g., from an angle to a different angle) (e.g., the first angle or a different angle) (e.g., without automatically modifying a current angle of the second movement component) (e.g., as described above in relation to FIG. 6B).
• in response to detecting the current angle of the second movement component, the current angle of the first movement component is automatically modified a first amount in accordance with a determination that the current angle of the second movement component is the first angle.
• in response to detecting the current angle of the second movement component and in accordance with a determination that the current angle of the second movement component is a third angle different from the first angle, the computer system automatically modifies (e.g., based on the current angle of the second movement component) the current angle of the first movement component to be a fourth angle (e.g., the second angle or an angle different from the second angle) different from the second angle (e.g., as described above in relation to FIG. 6B).
• the current angle of the first movement component is automatically modified in accordance with and/or based on the current angle of the second movement component. In some embodiments, the current angle of the first movement component is automatically modified to compensate for, match, offset, and/or be opposite of the current angle of the second movement component. In some embodiments, the current angle of the first movement component is automatically modified relative to the target location (e.g., such that the computer system is directed, positioned, and/or oriented to head to the target location).
• in response to detecting the current angle of the second movement component, the current angle of the first movement component is automatically modified a second amount different from the first amount in accordance with a determination that the current angle of the second movement component is the third angle.
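The reactive relationship described above (a manually set angle in, an automatically modified angle out, with different inputs producing different amounts of modification) can be sketched as a mapping function. The counter-steering ratio here is purely a placeholder assumption, not the disclosed policy.

```python
def auto_component_angle(manual_angle_deg: float, ratio: float = 0.5) -> float:
    """Hypothetical reactive mapping: the automatically controlled movement
    component is modified based on the manually set angle; here it simply
    counter-steers by a fixed fraction, so a first manual angle yields one
    modification amount and a different (third) angle yields another."""
    return -ratio * manual_angle_deg
```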
• Automatically modifying a current angle of the first movement component based on a current angle of the second movement component allows the computer system to adapt the current angle of the first movement component (which, in some embodiments, is being automatically controlled) to the current angle of the second movement component (which, in some embodiments, is being manually controlled), thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
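As a non-authoritative illustration of the behavior described in the bullets above, the following Swift sketch derives an automatically controlled angle from a manually controlled one. The function name, the blending constant, and the ±45 degree clamp are assumptions made for the example, not details of the disclosed embodiments.

```swift
import Foundation

/// Hypothetical compensation rule: the automatically controlled angle is
/// derived from the manually controlled angle so that the system stays
/// headed toward the target. All names and constants are illustrative.
func automaticFrontAngle(manualRearAngle: Double,
                         bearingToTarget: Double,
                         compensationGain: Double = 0.5) -> Double {
    // Offset the automatic angle against the manual angle, biased toward the target bearing.
    let compensated = bearingToTarget - compensationGain * manualRearAngle
    // Clamp to a plausible mechanical range (assumed ±45 degrees).
    return min(max(compensated, -45), 45)
}

// Different manual angles yield different automatically chosen angles,
// mirroring the "first angle -> second angle, third angle -> fourth angle" behavior.
let a = automaticFrontAngle(manualRearAngle: 10, bearingToTarget: 5)   // first angle case
let b = automaticFrontAngle(manualRearAngle: -20, bearingToTarget: 5)  // third angle case
print(a, b)
```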
  • the computer system detects a current location of the computer system (e.g., 600 and/or 1100).
• in response to detecting the current location of the computer system and in accordance with a determination that the current location of the computer system is a first orientation (e.g., direction and/or heading) (and/or location) relative to the target location (e.g., 606b), the computer system automatically modifies a current angle of the first movement component (e.g., 602 and/or 604) to be a fifth angle (e.g., from an angle to a different angle) (e.g., without automatically modifying a current angle of the second movement component) (e.g., as described above in relation to FIG. 6B).
  • the current angle of the first movement component in response to detecting the current location of the computer system, is automatically modified a third amount in accordance with a determination that the current location of the computer system is the first orientation relative to the target location.
• In response to detecting the current location of the computer system and in accordance with a determination that the current location of the computer system is a second orientation relative to the target location, wherein the second orientation is different from the first orientation, the computer system automatically modifies (e.g., based on the second orientation) the current angle of the first movement component to be a sixth angle different from the fifth angle (e.g., as described above in relation to FIG. 6B) (e.g., without automatically modifying a current angle of the second movement component).
• the current angle of the first movement component is automatically modified in accordance with and/or based on the current location of the computer system. In some embodiments, the current angle of the first movement component is automatically modified to compensate for, match, offset, and/or be opposite of a current angle of the second movement component. In some embodiments, the current angle of the first movement component is automatically modified relative to the target location (e.g., such that the computer system is directed, positioned, and/or oriented to head to the target location). In some embodiments, in response to detecting the current location of the computer system, the current angle of the first movement component is automatically modified a fourth amount different from the third amount in accordance with a determination that the current location of the computer system is the second orientation relative to the target location.
  • Automatically modifying the current angle of the first movement component based on a current location of the computer system relative to the target location allows the computer system to automatically align the first movement component with the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
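Purely for illustration, such location-dependent modification could be modeled as a heading-error rule: the automatic angle depends on where the system sits relative to the target, so different orientations yield different angles. The `Pose` type and the simple proportional rule below are assumptions, not the disclosed method.

```swift
import Foundation

/// Illustrative only: derive the automatic steering angle from the system's
/// current pose relative to the target.
struct Pose { var x, y, heading: Double }  // heading in radians

func automaticAngle(from pose: Pose, toTarget target: (x: Double, y: Double)) -> Double {
    // Bearing from the current location to the target.
    let bearing = atan2(target.y - pose.y, target.x - pose.x)
    // Steer by the heading error; a different orientation yields a different angle.
    var error = bearing - pose.heading
    // Normalize to (-pi, pi].
    while error > .pi { error -= 2 * .pi }
    while error <= -.pi { error += 2 * .pi }
    return error
}
```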
  • the computer system detects a current location of an object external to (e.g., and/or separate and/or different from) the computer system (e.g., 600 and/or 1100).
• in response to detecting the current location of the object external to the computer system and in accordance with a determination that the current location of the object is a first location, the computer system automatically modifies a current angle of the first movement component (e.g., 602 and/or 604) to be a seventh angle (e.g., from an angle to a different angle) (e.g., without automatically modifying a current angle of the second movement component) (e.g., as described above in relation to FIG. 6B).
  • the current angle of the first movement component in response to detecting the current location of the object, is automatically modified a fifth amount in accordance with a determination that the current location of the object is the first location.
• in response to detecting the current location of the object external to the computer system and in accordance with a determination that the current location of the object is a second location different from the first location, the computer system automatically modifies (e.g., based on the second location) the current angle of the first movement component to be an eighth angle different from the seventh angle (e.g., as described above in relation to FIG. 6B) (e.g., without automatically modifying a current angle of the second movement component).
  • the current angle of the first movement component is automatically modified in accordance with and/or based on a current location of the computer system.
• the current angle of the first movement component is automatically modified to compensate for, match, offset, and/or be opposite of a current angle of the second movement component.
  • the current angle of the first movement component is automatically modified relative to the target location (e.g., such that the computer system is directed, positioned, and/or oriented to head to the target location).
  • the current angle of the first movement component in response to detecting the current location of the object, is automatically modified a sixth amount different from the fifth amount in accordance with a determination that the current location of the object is the second location.
  • Automatically modifying the current angle of the first movement component based on a current location of an object external to the computer system allows the computer system to avoid the object, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
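A hedged sketch of object-dependent adjustment follows; the repulsion rule, the 5-meter reaction threshold, and the sign convention are all invented for the example.

```swift
import Foundation

/// Sketch of object-dependent adjustment: the automatic angle is nudged away
/// from a detected external object, so a first object location yields one
/// angle and a second location another.
func adjustedAngle(baseAngle: Double,
                   objectOffset: Double,    // lateral offset of the object in meters (+ = right)
                   objectDistance: Double) -> Double {
    guard objectDistance < 5 else { return baseAngle }  // only react when close (assumed 5 m)
    // Steer away from the object; nearer objects produce a larger correction.
    let correction = (objectOffset > 0 ? -1.0 : 1.0) * (5 - objectDistance) * 2.0
    return baseAngle + correction
}
```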
• before detecting the event with respect to the target location (e.g., 606b), the computer system detects, via one or more input devices (e.g., the first movement component, the second movement component, a different movement component, a camera, a touch-sensitive surface, a physical input mechanism, a steering mechanism, and/or another computer system separate from the computer system) in communication with (e.g., of and/or integrated with) the computer system (e.g., 600 and/or 1100), an input (e.g., a tap input and/or non-tap input (e.g., a verbal instruction, a hand motion, a swipe motion, a gaze direction, and/or any combination thereof)).
  • the computer system navigates to the target location.
  • Causing an angle of the first movement component to be controlled in an automatic manner and an angle of the second movement component to be controlled in a manual manner while navigating to the target location allows the computer system to partially assist a user in reaching the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the input corresponds to (e.g., manually maintaining when within a threshold distance from the target location, modifying, and/or changing) an angle of the second movement component (e.g., 602 and/or 604) (e.g., as described above in relation to FIG. 6A).
  • an angle of a third movement component (e.g., 602 and/or 604) is configured to be controlled in the automatic manner (e.g., based on configuring the one or more angles); and an angle of a fourth movement component (e.g., 602 and/or 604) is configured to be controlled in the manual manner (e.g., based on configuring the one or more angles).
  • the third movement component is different from the first movement component (e.g., 602 and/or 604) and the second movement component (e.g., 602 and/or 604).
  • the fourth movement component is different from the first movement component, the second movement component, and the third movement component (e.g., as described above in relation to FIGS. 6A and 6B).
• the third movement component is automatically modified differently than the first movement component when configured to be controlled in the automatic manner. Causing angles of multiple movement components to be controlled in an automatic manner and angles of multiple movement components to be controlled in a manual manner in response to detecting an event and the first set of one or more criteria being satisfied allows the computer system to partially assist a user in reaching the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • configuring the one or more angles of one or more movement components includes, in accordance with a determination that the target location (e.g., 606b) is a first type of target location (e.g., a parking spot perpendicular to traffic) (e.g., a location with a first orientation), configuring the angle of the first movement component (e.g., 602 and/or 604) to converge to (e.g., be, reach over time, and/or change over time to be) a target angle at the target location (e.g., as described above in relation to FIG. 6A).
  • configuring the angle of the first movement component to converge to the target angle at the target location includes configuring the angle of the first movement component to be an intermediate angle different from the target angle before reaching the target location.
  • the intermediate angle is an angle different from an angle of the first movement component when detecting the event.
  • the intermediate angle is an angle between an angle of the first movement component when detecting the event and the target angle.
  • Configuring the angle of the first movement component to converge to a target angle at the target location allows the computer system to partially assist a user in reaching the target angle at the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
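One simple way to realize convergence through intermediate angles is interpolation between the angle at the event and the target angle; the linear rule below is an assumption, shown only to make the "intermediate angle" idea concrete.

```swift
import Foundation

/// Minimal convergence sketch for a "first type" target (e.g., a perpendicular
/// spot): the controlled angle passes through intermediate values between its
/// angle at the event and the target angle.
func intermediateAngle(atEvent start: Double,
                       target: Double,
                       progress: Double) -> Double {  // progress in [0, 1]
    let t = min(max(progress, 0), 1)
    return start + (target - start) * t  // stays between start and target until arrival
}

// Halfway along, the angle is an intermediate value distinct from both endpoints.
print(intermediateAngle(atEvent: 30, target: 0, progress: 0.5))  // 15.0
```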
  • configuring the one or more angles of one or more movement components includes, in accordance with a determination that the target location (e.g., 606b) is a second type (e.g., different from the first type) of target location (e.g., a parking spot parallel to traffic) (e.g., a location with a second orientation different from the first orientation), configuring the angle of the first movement component (e.g., 602 and/or 604) to converge to (e.g., be, reach over time, and/or change over time to be): a first target angle at a first point of navigating to the target location and a second target angle at a second point (e.g., the target location or a different location) of navigating to the target location.
  • the second target angle is different from the first target angle.
  • the second point is different from the first point (e.g., as described above in relation to FIG. 6F).
  • configuring the angle of the first movement component to converge to the first target angle includes configuring the angle of the first movement component to be a first intermediate angle different from the first target angle before reaching the first point.
  • the first intermediate angle is an angle different from an angle of the first movement component when detecting the event.
  • the first intermediate angle is an angle between an angle of the first movement component when detecting the event and the first point.
  • configuring the angle of the first movement component to converge to the second target angle includes configuring the angle of the first movement component to be a second intermediate angle (e.g., different from the first intermediate angle) different from the second target angle before reaching the second point and/or the target location.
  • the second intermediate angle is an angle different from an angle of the first movement component when detecting the event and/or when at the first point.
  • the second intermediate angle is an angle between an angle of the first movement component when detecting the event (e.g., and/or when at the first point) and the second point (e.g., and/or the target location).
  • Configuring the angle of the first movement component to converge to different target angles at different points while navigating to the target location allows the computer system to partially assist a user in reaching a final orientation at the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
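To make the two-point convergence concrete, the sketch below schedules a first target angle at a first point along the path and a second target angle at a second point, passing through intermediate angles between them. The waypoint values and types are illustrative assumptions.

```swift
import Foundation

/// Sketch for a "second type" target (e.g., parallel to traffic): the angle
/// converges to a first target angle at a first point, then to a second target
/// angle at a second point.
struct AngleWaypoint { var point: Double; var targetAngle: Double }  // point = distance along path

func plannedAngle(at distance: Double, schedule: [AngleWaypoint], startAngle: Double) -> Double {
    var previous = AngleWaypoint(point: 0, targetAngle: startAngle)
    for wp in schedule {
        if distance <= wp.point {
            // Interpolate toward the next target angle, passing through intermediate angles.
            let span = wp.point - previous.point
            let t = span > 0 ? (distance - previous.point) / span : 1
            return previous.targetAngle + (wp.targetAngle - previous.targetAngle) * t
        }
        previous = wp
    }
    return previous.targetAngle
}

// First target angle (e.g., swing out) at point 3, second (straighten) at point 6.
let schedule = [AngleWaypoint(point: 3, targetAngle: 35), AngleWaypoint(point: 6, targetAngle: 0)]
print(plannedAngle(at: 1.5, schedule: schedule, startAngle: 0))  // intermediate value toward 35
```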
  • configuring the one or more angles of one or more movement components includes, in accordance with a determination that the target location (e.g., 606b) is a third type (e.g., different from the first type and/or the second type) (e.g., the second type) of target location, configuring the angle of the first movement component (e.g., 602 and/or 604) to be controlled (1) in an automatic manner for a first portion of a maneuver (e.g., while navigating to the target location (e.g., after detecting the event)) (e.g., a set and/or course of one or more actions and/or movements along a path) and (2) in a manual manner for a second portion of the maneuver.
  • the second portion is different from the first portion (e.g., as described above in relation to FIG. 7A).
  • the angle of the first movement component is configured to be controlled in an automatic manner
• the angle of the second movement component is configured to be controlled in a manual manner.
• the angle of the second movement component is configured to be controlled in an automatic manner. Configuring the angle of the first movement component to be controlled (1) in an automatic manner for a first portion of a maneuver and (2) in a manual manner for a second portion of the maneuver, where the second portion is different from the first portion, allows the computer system to adapt to different portions of the maneuver and provide assistance where needed, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
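A minimal sketch of splitting one maneuver into portions with different control manners, assuming a normalized progress value in [0, 1]; the portion boundaries and type names are invented for the example.

```swift
/// Sketch of per-portion control: automatic for the first portion of the
/// maneuver, manual for the second, per the third-type behavior above.
enum ControlManner { case automatic, manual }

struct ManeuverPortion { var range: ClosedRange<Double>; var manner: ControlManner }

// First portion automatic, second portion manual (boundaries are assumptions).
let portions = [ManeuverPortion(range: 0...0.5, manner: .automatic),
                ManeuverPortion(range: 0.5...1.0, manner: .manual)]

func manner(atProgress p: Double) -> ControlManner {
    // The first matching portion decides how the component is controlled.
    portions.first { $0.range.contains(p) }?.manner ?? .manual
}
```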
• the computer system configures (e.g., maintains configuration or changes configuration of) (e.g., based on a distance, location, and/or direction of the target location relative to the computer system) (e.g., based on an angle of the second movement component) one or more angles of one or more movement components (e.g., 602 and/or 604) (e.g., a set of one or more movement components including the first movement component and the second movement component), wherein the fifth set of one or more criteria includes a criterion that is satisfied in accordance with a determination that the computer system (e.g., 600 and/or 1100) is a first direction relative to the target location.
  • an angle of the first movement component (e.g., 602 and/or 604) is configured to be controlled in a manual manner (e.g., and/or while forgoing configuring the angle of the first movement component to be controlled by the computer system) and an angle of the second movement component (e.g., 602 and/or 604) is configured to be controlled in an automatic manner.
  • the fifth set of one or more criteria includes a criterion that is satisfied when the computer system is in the first (e.g., semi-autonomous) mode.
  • the fifth set of one or more criteria includes a criterion that is satisfied when the computer system is within a predefined distance from and/or direction to the target location.
• the fifth set of one or more criteria includes a criterion that is satisfied when input was detected that corresponds to a request to navigate to the target location.
• a steering mechanism (e.g., a steering wheel, a steering yoke, an input device, a touch screen, a physical hardware device, and/or any combination thereof) in communication with the computer system does not directly control the angle of the first movement component when the fifth set of one or more criteria is satisfied.
  • the angle of the second movement component is reactive to the angle of the first movement component.
• the angle of the second movement component continues to be automatically modified until the computer system is a predefined distance (e.g., 0-2 feet) from the target location.
  • Controlling different movement components depending on a direction of the computer system relative to the target location allows the computer system to adapt to different orientations and/or approaches to the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the computer system detects misalignment of the second movement component (e.g., 602 and/or 604) relative to the target location (e.g., while the second movement component is being controlled in a manual manner).
• in response to detecting misalignment of the second movement component relative to the target location, the computer system provides, via one or more output devices (e.g., a speaker, a display generation component, and/or a steering mechanism) in communication with the computer system (e.g., 600 and/or 1100), feedback (e.g., visual, auditory, and/or haptic feedback) with respect to a current angle of the second movement component (e.g., as described above in relation to FIG. 6B).
  • the feedback corresponds to an angle different from the current angle (e.g., suggesting to change the current angle of the second movement component to the angle different from the current angle).
  • Providing feedback with respect to a current angle of the second movement component in response to detecting misalignment of the second movement component relative to the target location allows the computer system to prompt a user when the misalignment occurs and enable the user to fix the misalignment, thereby providing improved feedback and/or performing an operation when a set of conditions has been met without requiring further user input.
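The misalignment feedback could be modeled as a threshold check that suggests a corrected angle; the tolerance value and the `Feedback` type below are assumptions (the actual visual, auditory, or haptic delivery is not modeled).

```swift
/// Hedged sketch: when the manually controlled component is misaligned with
/// the target, emit feedback suggesting a corrected angle.
enum Feedback { case none; case suggestAngle(Double) }

func misalignmentFeedback(currentAngle: Double,
                          requiredAngle: Double,
                          tolerance: Double = 3) -> Feedback {
    let error = requiredAngle - currentAngle
    // Only prompt the user when the misalignment exceeds the tolerance;
    // the suggested value is an angle different from the current angle.
    return abs(error) > tolerance ? .suggestAngle(requiredAngle) : .none
}
```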
  • the computer system detects, via one or more input devices in communication with the computer system (e.g., 600 and/or 1100), a second input.
  • the second input corresponds to a request to stop controlling the first movement component in an automatic manner.
  • the computer system configures an angle of the first movement component to be controlled in a manual manner (e.g., as described above in relation to FIG.
  • Configuring an angle of the first movement component to be controlled in a manual manner instead of an automatic manner in response to detecting input while the angle of the first movement component is controlled in an automatic manner allows the computer system to respond to input by a user and switch modes in an efficient manner, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the computer system detects, via one or more input devices in communication with the computer system (e.g., 600 and/or 1100), an object.
• the object is detected in and/or relative to a direction of motion of the computer system.
• in response to detecting the object, the computer system configures an angle of the first movement component to be controlled in an automatic manner using a first path, wherein, before detecting the object, configuring the one or more angles of the one or more movement components (e.g., 602 and/or 604) in response to detecting the event includes configuring an angle of the first movement component to be controlled in an automatic manner using a second path different from the first path (e.g., as described above in relation to FIG. 6A).
  • Configuring an angle of the first movement component to be controlled in an automatic manner using a different path in response to detecting an object allows the computer system to avoid the object, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
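A sketch of the path swap on object detection: before detection the automatically controlled component follows one path, and after detection a different one. The `Path` type and the selection rule are illustrative assumptions.

```swift
/// Illustrative replanning step: on detecting an object, swap the path that
/// the automatically controlled component follows.
struct Path { var name: String; var waypoints: [(x: Double, y: Double)] }

func activePath(objectDetected: Bool, original: Path, detour: Path) -> Path {
    // Before detection the component follows the original (second) path;
    // after detection it is controlled using a different (first) path.
    objectDetected ? detour : original
}
```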
• after configuring the one or more angles of the one or more movement components (e.g., 602 and/or 604) in response to detecting the event and in conjunction with configuring an angle of the first movement component (e.g., 602 and/or 604) to be controlled in an automatic manner (e.g., and/or in conjunction with automatically modifying a current angle of the first movement component), the computer system causes the computer system (e.g., 600 and/or 1100) to accelerate (e.g., when not moving quickly enough to reach a particular location within the target location) or decelerate (e.g., as described above in relation to FIG.
  • a computer-generated path to the target location is generated based on data captured by a different computer system separate from the computer system.
  • process 1200 optionally includes one or more of the characteristics of the various methods described above with reference to process 900.
• one or more movement components can be configured to be controlled in an automatic and/or manual manner using one or more techniques described above in relation to process 900, where feedback can be provided once the one or more components are configured using one or more techniques described below in relation to process 1200.
• FIGS. 10A-10B are a flow diagram illustrating a method (e.g., process 1000) for selectively modifying movement components of a movable computer system in accordance with some embodiments.
  • Some operations in process 1000 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • process 1000 provides an intuitive way for selectively modifying movement components of a movable computer system.
  • Process 1000 reduces the cognitive burden on a user for selectively modifying movement components of a movable computer system, thereby creating a more efficient human-machine interface.
  • process 1000 is performed at a computer system (e.g., 600 and/or 1100) (e.g., as described above with respect to process 900) that is in communication with a first movement component (e.g., 602 and/or 604) (e.g., as described above with respect to process 900) and a second movement component (e.g., 602 and/or 604) different from (e.g., separate from and/or not directly connected to) the first movement component.
  • the computer system detects (1002) a target location (e.g., 606b) (e.g., as described above with respect to process 900) in a physical environment.
  • the computer system automatically modifies (1008) (e.g., as described above with respect to process 900) the first movement component (e.g., 602 and/or 604) (e.g., an angle (e.g., a wheel angle, a direction, and/or any combination thereof) of and/or corresponding to the first movement component, a speed of and/or corresponding to the first movement component, an acceleration of and/or corresponding to the first movement component, a size of and/or corresponding to the first movement component, a shape of and/or corresponding to the first movement component, a temperature
• While (1004) detecting the target location in the physical environment and in accordance with (1006) the determination that the first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes the criterion that is satisfied when the computer system is operating in the first mode, the computer system forgoes (1010) automatically modifying (e.g., as described above with respect to process 900) the second movement component (e.g., as described above in relation to FIG.
  • the first set of one or more criteria includes a criterion that is satisfied when the computer system is within a predefined distance from and/or direction to the target location.
  • the first set of one or more criteria includes a criterion that is satisfied when input was detected that corresponds to a request to navigate to the target location. In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when the computer system is moving in a third direction (e.g., the same as or different from the first and/or second direction) (e.g., at least partially toward the target location).
• a steering mechanism (e.g., a steering wheel, a steering yoke, an input device, a touch screen, a physical hardware device, and/or any combination thereof) in communication with the computer system does not directly control the first movement component.
  • a state of the first movement component is reactive to a state of the second movement component.
• the first movement component continues to be automatically modified until the computer system is a predefined distance (e.g., 0-2 feet) from the target location.
• While (1004) detecting the target location in the physical environment and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is operating in a second mode (e.g., a full autonomous mode and/or a mode that is more autonomous than the first mode) different from the first mode, the computer system automatically modifies (1012) the first movement component (e.g., 602 and/or 604) and the second movement component (e.g., 602 and/or 604), wherein the second set of one or more criteria is different from the first set of one or more criteria (e.g., as described above in relation to FIG.
  • the second set of one or more criteria includes a criterion that is satisfied when the computer system is within a predefined distance from and/or direction to the target location. In some embodiments, the second set of one or more criteria includes a criterion that is satisfied when input was detected that corresponds to a request to navigate to the target location. In some embodiments, the second set of one or more criteria includes a criterion that is satisfied when the computer system is moving in the third direction.
  • a steering mechanism in communication with the computer system does not directly control the first movement component and/or the second movement component.
  • a state of the first movement component is reactive to a state of the second movement component.
• the first movement component and/or the second movement component continues to be automatically modified until the computer system is a predefined distance (e.g., 0-2 feet) from the target location.
• While (1004) detecting the target location in the physical environment and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is operating in a third mode (e.g., a manual mode, a non-autonomous mode, and/or a mode that is less autonomous than the first mode and the second mode) different from the second mode and the first mode, the computer system forgoes (1014) automatically modifying the first movement component (e.g., 602 and/or 604) and the second movement component (e.g., 602 and/or 604) (e.g., as described above in relation to FIG.
  • the computer system operates in the second mode (e.g., while detecting the target location in the physical environment) in accordance with a determination that the target location is a second type different from the first type.
  • the computer system operates in the third mode (e.g., while detecting the target location in the physical environment) in accordance with a determination that the target location is a third type different from the first type and the second type (e.g., as described above in relation to FIG. 6A).
  • a mode of the computer system is selected based on a type of the target location.
• in response to detecting the input corresponding to selection of the respective mode to operate the computer system and in accordance with a determination that the respective mode is the first mode, the computer system operates in the first mode (e.g., as described above in relation to FIG. 6A).
• in response to detecting the input corresponding to selection of the respective mode to operate the computer system and in accordance with a determination that the respective mode is the second mode, the computer system operates in the second mode (e.g., as described above in relation to FIG. 6A).
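The three-mode behavior of process 1000 can be summarized as a dispatch over operating modes; the enum cases and names below are assumptions standing in for the semi-autonomous, more-autonomous, and manual modes described above.

```swift
/// Sketch of the mode dispatch for process 1000: in a semi-autonomous mode
/// only the first component is modified automatically, in a more autonomous
/// mode both are, and in a manual mode neither is.
enum OperatingMode { case semiAutonomous, autonomous, manual }

struct AutoModifications { var first: Bool; var second: Bool }

func autoModifications(for mode: OperatingMode) -> AutoModifications {
    switch mode {
    case .semiAutonomous: return AutoModifications(first: true, second: false)
    case .autonomous:     return AutoModifications(first: true, second: true)
    case .manual:         return AutoModifications(first: false, second: false)
    }
}
```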
  • the computer system detects (1202) a target location (606b, 706, 806, 1108b, and/or 1108a) (e.g., as described above with respect to process 900 and/or process 1000) in a physical environment.
  • the output component is moving in a first direction
• detecting a current location of the computer system relative to the target location (e.g., and while the computer system is in a first (e.g., semiautomatic) and/or a third (e.g., manual) mode, as described above with respect to process 1000)
  • a first set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is
• while detecting the target location (606b, 706, 806, 1108b, and/or 1108a) in the physical environment and in accordance with a determination that a fifth set of one or more criteria is satisfied, wherein the fifth set of one or more criteria includes a criterion that is satisfied in accordance with detection of an object external to the computer system (e.g., 600 and/or 1100), the computer system provides fifth feedback (e.g., visual, auditory, and/or haptic) with respect to (e.g., using, based on, via, by, and/or in proximity to) the input component (e.g., as described above in relation to FIG. 11C).
• while detecting the target location (606b, 706, 806, 1108b, and/or 1108a) in the physical environment and in accordance with a determination that a sixth set of one or more criteria is satisfied, wherein the sixth set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is a first distance from the target location, the computer system provides sixth feedback (e.g., visual, auditory, and/or haptic) with respect to (e.g., using, based on, via, by, and/or in proximity to) the input component (e.g., as described above in relation to FIG. 11C).
  • the eighth feedback does not change an orientation and/or position of the computer system.
  • the eighth feedback indicates, corresponds to, and/or is with respect to a new location and/or a new orientation with respect to the target location.
  • the eighth feedback is provided internal to an enclosure corresponding to the computer system.
  • the ninth feedback is provided internal to an enclosure corresponding to the computer system. In some embodiments, the ninth feedback is different from the first feedback, the second feedback, the third feedback, the fourth feedback, the fifth feedback, the sixth feedback, and/or the seventh feedback. In some embodiments, the ninth feedback is the same as the first feedback, the second feedback, the third feedback, the fourth feedback, the fifth feedback, the sixth feedback, the seventh feedback, and/or the eighth feedback. In some embodiments, in accordance with a determination that the current portion of the movement maneuver is the first portion, the computer system does not provide the ninth feedback.
  • Providing different feedback depending on a current portion of a maneuver allows the computer system to guide and/or assist with navigating to the target location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the ninth feedback is a different type of feedback (e.g., from auditory to visual to haptic to physical rotation) than the eighth feedback (e.g., as described above in relation to FIG. 11C).
  • Providing different types of feedback depending on a current portion of a maneuver allows the computer system to guide and/or assist with navigating to the target location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
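As a rough, non-authoritative model of the criteria-driven feedback above, different conditions map to different feedback, possibly of different types; every case, threshold, and name below is illustrative.

```swift
/// Sketch of criteria-driven feedback selection: a nearby object, the distance
/// to the target, and the current maneuver portion each select different feedback.
enum FeedbackKind { case visual, auditory, haptic }

func feedback(objectNearby: Bool,
              distanceToTarget: Double,
              maneuverPortion: Int) -> FeedbackKind? {
    if objectNearby { return .haptic }             // "fifth" criteria: external object detected
    if distanceToTarget < 10 { return .auditory }  // "sixth" criteria: a first distance
    if maneuverPortion >= 2 { return .visual }     // later portion: a different type of feedback
    return nil                                     // e.g., first portion: no feedback provided
}
```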
• FIG. 13 is a flow diagram illustrating a method (e.g., process 1300) for redirecting a movable computer system in accordance with some embodiments. Some operations in process 1300 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
• As described below, process 1300 provides an intuitive way for redirecting a movable computer system. Process 1300 reduces the cognitive burden on a user for redirecting a movable computer system, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to redirect a movable computer system faster and more efficiently conserves power and increases the time between battery charges.
• the computer system detects (1302) (e.g., via one or more sensors in communication with the computer system and/or via receiving a message from another computer system different from the computer system) an error (e.g., (1) an instruction of the one or more instructions not followed and/or (2) a difficulty and/or impossibility with respect to a current location (e.g., the target location has been blocked, the target location is no longer in the path of the computer system, and/or the target location does not currently satisfy one or more criteria (e.g., is no longer and/or more desirable and/or is no longer and/or more convenient))) with respect to navigating to the first target location (e.g., and/or after performing one or more operations corresponding to navigating to the target location).
• the control is displayed on top of (e.g., at least partially overlays) a user interface displayed when the error is detected.
• the control is displayed with and/or instead of a user interface displayed when the error is detected.
• a user interface displayed when the error is detected is visually changed to include display of the control.
• detecting, via the input component, a second set of one or more inputs (e.g., a tap input and/or non-tap input (e.g., a verbal instruction, a hand motion, a swipe motion, a gaze direction, and/or any combination thereof))
• in response to detecting the second set of one or more inputs: in accordance with a determination that the control corresponds to maintaining the first target location, the computer system initiates a process to maintain the first target location (e.g., updating and/or providing one or more new instructions) (e.g., changing a path to the target location) (e.g., providing one or more new options for navigating to the target location) (e.g., providing a control to confirm that the target location should be maintained); and in accordance with a determination that the control corresponds to changing the first target location, the computer system initiates a process to change the first target location.
  • a single control is displayed that, when selected at different portions, either initiates a process to maintain the first target location or initiates a process to change the first target location.
  • a first control is configured to initiate a process to maintain the first target location
  • a second control different from the first control is configured to initiate a process to change the first target location.
  • the control corresponds to a new target location.
  • the process to change the first target location includes displaying a user interface including one or more representations of different target locations.
  • the process to change the first target location includes displaying a user interface including a confirmation element to confirm a new target location.
• Initiating a process to select a respective target location in response to detecting an error with respect to navigating to the first target location allows the computer system to provide options to react to the error and, in some embodiments, navigate to a different location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the process to select a respective target location includes: providing (e.g., displaying and/or outputting audio) a first control (e.g., 1118) to maintain the first target location and providing (e.g., concurrently with or separate from providing the first control) a second control (1120) to select a new target location different from the first target location.
  • the second control is different from the first control.
• Providing two separate controls to select different target locations in response to detecting an error with respect to navigating to the first target location allows the computer system to provide options to react to the error and, in some embodiments, navigate to a different location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
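A compact sketch of the two recovery controls, assuming string identifiers for target locations; the return values only stand in for the "replan" and "change target" processes described above.

```swift
/// Sketch of the two error-recovery controls: one maintains the first target
/// location (with a new path), the other switches to a new target.
enum ErrorControl { case maintainTarget, changeTarget }

func handle(_ control: ErrorControl, currentTarget: String, alternative: String) -> String {
    switch control {
    case .maintainTarget:
        // Keep the first target location; a replanned path would be produced here.
        return "replan path to \(currentTarget)"
    case .changeTarget:
        // Initiate the process to change the first target location.
        return "navigate to \(alternative)"
    }
}
```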
  • the computer system (e.g., 600 and/or 1100) is in communication with a display generation component.
  • providing the second control includes displaying, via the display generation component, an indication corresponding to the new target location (e.g., 1108a and/or 1108b) (e.g., as described above in relation to FIG. 11C) (e.g., a representation of the new target location relative to the first target location) (e.g., an outline and/or other visual indication at location corresponding to the new target location).
• Displaying an indication corresponding to the new target location when providing two separate controls to select different target locations in response to detecting an error with respect to navigating to the first target location allows the computer system to provide options to react to the error and, in some embodiments, navigate to a different location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the computer system (e.g., 600 and/or 1100) is in communication with a movement component (e.g., as described above with respect to process 900).
• navigating to the first target location includes automatically causing, by the computer system, the movement component to change operation (e.g., as described above in relation to FIG. 11D) (e.g., change to a new direction, orientation, location, speed, and/or acceleration).
  • navigating to the first target location is performed in an at least partial automatic and/or autonomous manner.
  • navigating to the first target location is performed in a partially assisted manner (e.g., a first part of navigating is performed in a manual manner and a second part of navigating is performed in an automatic manner) (e.g., a first movement component is controlled in an automatic manner while a second movement component is controlled in a manual manner).
  • a partially assisted manner e.g., a first part of navigating is performed in a manual manner and a second part of navigating is performed in an automatic manner
  • a first movement component is controlled in an automatic manner while a second movement component is controlled in a manual manner.
  • navigating to the first target location is manual (e.g., navigating to the first target location is fully controlled by a user) (e.g., a direction of navigating to the first target location is fully controlled by a user) (e.g., from the perspective of a user causing the computer system to turn and/or move) (e.g., fully manual and/or without substantial automatic steering).
  • the computer system is in communication with one or more output components (e.g., a display generation component and/or a speaker).
  • navigating to the first target location consists of outputting, via the output component, content (e.g., does not include automatically modifying an angle and/or orientation of one or more movement components (as described above)).
  • the computer system is in communication with a movement component (e.g., as described above with respect to process 900).
  • navigating to the first target location does not include the computer system causing the movement component to be automatically modified.
  • navigating to the first target location includes outputting, via the one or more output components, an indication of a next maneuver to navigate to the target location.
  • detecting the error includes detecting that the computer system (e.g., 600 and/or 1100) is at least a predefined distance from the first target location (e.g., 1108a and/or 1108b) (e.g., as described above in relation to FIG. 11C). In some embodiments, the error is not detected in accordance with a determination that the computer system is within the predefined distance from the first target location.
  • Detecting the error including detecting that the computer system is at least a predefined distance from the first target location allows the computer system to recognize when the computer system has missed and/or passed the first target location and provide a way to fix the error, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • detecting the error includes detecting that a current orientation of the computer system (e.g., 600 and/or 1100) is a first orientation (e.g., an orientation that is not able to be corrected by the computer system using a current path to the first target location) with respect to the first target location (e.g., 1108a and/or 1108b) (e.g., as described above in relation to FIG. 11C).
  • the error is not detected in accordance with a determination that the computer system is a second orientation with respect to the first target location, where the second orientation is different from the first orientation.
  • Detecting the error including detecting that a current orientation of the computer system is a first orientation with respect to the first target location allows the computer system to recognize when the computer system is in an orientation not able to be corrected with a current path and provide a way to fix the error, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
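The two error conditions just described (being at least a predefined distance from the target, or being in an orientation the current path cannot correct) could be checked with a simple predicate; the thresholds below are invented for the example.

```swift
import Foundation

/// Sketch of the two error conditions above: too far from the target, or in
/// an orientation that the current path cannot correct.
func navigationError(distanceToTarget: Double,
                     headingErrorDegrees: Double,
                     maxDistance: Double = 30,
                     maxCorrectableHeading: Double = 60) -> Bool {
    // Error when at least a predefined distance from the target...
    let missed = distanceToTarget >= maxDistance
    // ...or when the current orientation cannot be corrected on the current path.
    let uncorrectable = abs(headingErrorDegrees) > maxCorrectableHeading
    return missed || uncorrectable
}
```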
  • the computer system (e.g., 600 and/or 1100) is in communication with an output component.
• after initiating the process to select a respective target location (e.g., 1108a and/or 1108b) (e.g., as part of the process to select a respective target location), the computer system provides, via the output component, a third control (e.g., 1116) to select a new target location different from the first target location, wherein the new target location is the same type of location as the first target location (e.g., as described above in FIGS. 11C-11D) (e.g., the first target location and the new target location are both parking spots with lines defining a respective parking spot).
• while providing the control to select a new target location, the computer system does not provide a control to select a new target location that is a different type of location than the first target location.
  • Providing a control to select a new target location that is the same type as the first target location allows the computer system to intelligently provide alternatives, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the computer system (e.g., 600 and/or 1100) is in communication with a second display generation component.
  • the computer system displays, via the second display generation component, a fourth control (e.g., 1116) to select the respective target location (e.g., as described above at FIG. 11C).
• while displaying the fourth control to select the respective target location (e.g., 1108a and/or 1108b), the computer system detects, via a second input component in communication with the computer system (e.g., 600 and/or 1100), a verbal input corresponding to selection of the fourth control (e.g., as described above in relation to FIG. 11C).
• in response to detecting the verbal input corresponding to selection of the fourth control, the computer system initiates a process to navigate to the respective target location (e.g., as described above in relation to FIG. 11D).
  • Allowing verbal input to select a visual control allows the computer system to provide different ways to provide input particularly when some ways, in some embodiments, may be harder to provide (e.g., hands might be occupied) than others, thereby reducing the number of inputs needed to perform an operation and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the computer system (e.g., 600 and/or 1100) is in communication with an audio generation component.
• after initiating the process to select a respective target location (e.g., 1108a and/or 1108b) (e.g., as part of the process to select a respective target location), the computer system outputs, via the audio generation component, an auditory indication of a fifth control to select the respective target location (e.g., as described above at FIG. 11C).
  • Outputting an auditory indication of a control to select the respective target location allows the computer system to provide different ways to provide output particularly when some ways, in some embodiments, may be harder to receive (e.g., gaze might be occupied such that seeing what is displayed may be harder) than others, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the computer system (e.g., 600 and/or 1100) is in communication with an output component and a second input component.
  • the computer system detects, via the second input component, an input corresponding to selection of a sixth control (e.g., 1118) to maintain the first target location (e.g., 1108a and/or 1108b).
• in response to detecting the input corresponding to the selection of the sixth control (1118) to maintain the first target location, the computer system outputs, via the output component, an indication of a new path to the first target location (e.g., as described above in relation to FIGS. 11C and 11D). In some embodiments, before outputting the indication of the new path to the first target location (and/or while navigating to the first target location), the computer system outputs, via the output component, an indication of a path to the first target location, where the path is different from the new path.
  • Outputting an indication of a new path to the first target location in response to detecting the input corresponding to the selection of the control to maintain the first target location allows the computer system to correct an error and provide instruction to a user for how to correct the error, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the output component includes a display generation component.
• outputting, via the output component, the indication of the new path to the first target location includes displaying, via the display generation component, the indication of the new path to the first target location (e.g., as described above in relation to FIGS. 11C and 11D).
  • the computer system (e.g., 600 and/or 1100) is in communication with a second input component.
  • the computer system detects, via the second input component, an input (1105c) corresponding to selection of a control (1120) to change the first target location to a second target location different from the first target location.
• in response to detecting the input corresponding to the selection of the control to change the first target location to the second target location, the computer system navigates at least partially automatically to the second target location (e.g., as described above in relation to FIG. 11D). Navigating at least partially automatically to the second target location in response to detecting the input corresponding to the selection of the control to change the first target location to the second target location allows the computer system to assist with navigation when an error is detected, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • a computer-generated path to the target location is generated based on data captured by a different computer system separate from the computer system.
  • process 900 optionally includes one or more of the characteristics of the various methods described above with reference to process 1300.
• one or more movement components can be configured to be controlled in an automatic and/or manual manner using one or more techniques described above in relation to process 900 based on the detection of an error using one or more techniques described above in relation to process 1300.
  • FIGS. 14A-14H illustrate exemplary user interfaces for interacting with different map data, in accordance with some embodiments.
  • a single representation of a user input in a figure (1) includes one or more different types of user input and/or (2) represents different types of user input to result in different operations.
  • a single illustrated user input can be a tap input, a tap-and-hold input, and/or a swipe gesture.
  • FIG. 14A illustrates navigation user interface 1410 for interacting with different map data.
  • Computer system 1400 displays navigation user interface 1410 on touchscreen display 1402.
  • the device being navigated is the device that displays navigation user interface 1410 (e.g., computer system 1400).
  • the device being navigated is a device other than the device that displays navigation user interface 1410.
  • the device being navigated is in communication with the device that displays navigation user interface 1410.
  • Navigation user interface 1410 includes navigation instruction 1410a, map 1410b, and arrival information 1410c.
  • Navigation instruction 1410a indicates a current instruction to a user of navigation user interface 1410.
  • navigation instruction 1410a indicates the instruction textually (e.g., “Turn Right”) and visually (e.g., right turn arrow graphic).
  • Other examples of navigation instructions include “turn left”, “proceed straight”, “continue for 3 kilometers”, and/or “turn around.”
  • Map 1410b includes a visual representation of a geographic location (e.g., the location surrounding the device being navigated) (e.g., a computer-generated graphic and/or an image captured by one or more cameras). It should be recognized that navigation user interface 1410 can include different, less, and/or more user interface elements than illustrated in FIG. 14A.
  • a map (e.g., 1410b) is generated based on one or more pieces of map data.
  • map data can describe one or more features of the map, such as the location of roadways, paths, trails, and/or rail lines, terrain/topology data, traffic data and/or other conditions data, building data, and/or graphic elements for displaying the map.
  • Map data can also include data from one or more on-device sensors (e.g., that are part of the device being navigated and/or part of the device displaying navigation user interface 1410) and/or one or more external sensors (e.g., a stationary camera that transmits its data to the device being navigated when the two are within a threshold proximity).
  • the sensor data is measured and transmitted in real-time or near-in-time as the device being navigated approaches or is physically present/near the measured area.
  • where map data is available from a verified and/or trusted source (e.g., verified by a first-party developer of the navigation application), navigation along a route indicated by the trusted source can be weighed more heavily by the process (e.g., and thus be preferred and/or be more likely to be selected) in making a routing decision as compared to a similar route from an untrusted source.
  • map data from a trusted source can be used to determine an initial route, but during navigation along that route received sensor data can indicate that the route is impassable (e.g., a path is closed, not safe, and/or no longer exists); the navigation process for determining navigation can take into account the sensor data to override and/or aid the route derived or received from the trusted data source and, for example, select a different route (e.g., perhaps from an unverified data source, depending on the available options).
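To make the weighting and override behavior concrete, the following is a minimal sketch in Swift of one way such a routing decision could be structured. It is illustrative only: the `CandidateRoute` type, the function names, and the weight and score values are assumptions, not part of the disclosed system.

```swift
// Hypothetical route-scoring sketch: trusted-source routes are weighted
// more heavily, and live sensor data indicating an impassable route
// overrides the stored map data entirely.
struct CandidateRoute {
    let identifier: String
    let fromTrustedSource: Bool
    let baseScore: Double            // e.g., shorter and/or faster routes score higher
    let sensorsReportImpassable: Bool
}

func weightedScore(_ route: CandidateRoute, trustedWeight: Double) -> Double {
    // Prefer trusted-source routes by scaling their score.
    route.baseScore * (route.fromTrustedSource ? trustedWeight : 1.0)
}

func selectRoute(from candidates: [CandidateRoute],
                 trustedWeight: Double = 1.5) -> CandidateRoute? {
    candidates
        // Sensor data overrides stored map data: drop impassable routes,
        // even ones derived from a trusted source.
        .filter { !$0.sensorsReportImpassable }
        // Pick the highest weighted score among the remaining routes.
        .max { weightedScore($0, trustedWeight: trustedWeight) <
               weightedScore($1, trustedWeight: trustedWeight) }
}
```

In this framing, the sensor check acts as a hard filter while source trust acts as a soft preference, matching the override-versus-weighting distinction drawn above.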
  • map data has (e.g., is associated with) a state.
  • this disclosure will refer to map data as having an associated “state”.
  • This state can, for example, be a function of (e.g., determined in whole or in part by) the type(s) and/or source(s) of data that make up the map data.
  • data that is from a verified source can be considered as having a different state than data from an unverified source.
  • two pieces of data from a verified source can have different states, where a first of such pieces of data is in conflict with sensor data (e.g., obstruction detected on the path) and a second of such pieces of data is not in conflict with the sensor data (e.g., path is clear).
  • whether map data is of a particular state can be based on one or more criteria.
  • the term “state” refers to a classification or identification of map data that satisfies a set of one or more criteria (e.g., classified by the device being navigated, the device displaying navigation user interface 1410, and/or a server in communication with either or both of such devices). How such states are defined (e.g., which set of one or more criteria is used to delineate states) can be different based on the intended use of the map data (e.g., the type of decision being made based on the state).
  • states that represent how recently associated data was updated can be considered by a certain subprocess or decision within a navigation routing process (e.g., in an urban area where traffic level can be highly dynamic), yet not be considered by another subprocess or decision within the navigation routing process (e.g., determining whether the pathway is physically passable (e.g., paved or not) based on the type of navigation (e.g., via car, via bike, and/or on foot)).
  • map data “state” is referred to as a “level,” “category,” or other appropriate phrase that can be recognized by one of ordinary skill in the art.
  • the examples depicted in FIGS. 14A-14H involve user interfaces associated with one of four example states.
  • the four example states are distinct states based on two criteria: (1) whether or not sufficient map data can be retrieved from a storage resource (e.g., memory of computer system 1400 and/or a server), and (2) whether or not the navigation application (and/or a device or server in communication with the navigation application) can determine a recommended path based on the available map data (e.g., from any source).
  • retrieved map data can be considered “sufficient” if it is verified and/or trusted (e.g., comes from a verified source, such as the developer of the navigation application, and/or a source trusted by the navigation application (e.g., an owner of the premises represented by the map data)), and can be considered “insufficient” if no (or not enough) map data can be retrieved, if the retrieved map data is not verified and/or trusted (e.g., lacks a trust and/or verification credential associated with a verified and/or trusted source), if the retrieved map data does not include enough information for determining a recommended path (e.g., on its own), and/or under any other appropriate criterion for delineating whether sufficient data can be retrieved from a data source.
  • map data can be derived (e.g., collected and/or created) from one or more sources of data (e.g., other than the storage resource) (e.g., one or more sensors, and/or one or more unverified and/or untrusted sources) that is sufficient for determining (e.g., by the navigation application) a recommended path.
  • deriving map data includes creating map data.
  • creating map data can include creating a new map when map data does not exist and/or adding information to an existing map when map data is insufficient, incomplete, and/or incorrect (e.g., outdated).
  • deriving map data includes creating map data with objects, paths, and/or other aspects of a physical environment that are not defined and/or specified in the available map data. For example, sufficient map data may not be available from the storage resource (e.g., criterion (1) is not satisfied); however, the navigation application can derive map data from sources such as on-device cameras and/or other sensors. In some examples, the derived map data is sufficient (e.g., for the navigation application and/or a process and/or device in communication therewith) to determine a recommended path. For example, deriving map data and determining a path based on the derived map data stands in contrast to the device simply receiving map data and then positioning itself within the received map data (e.g., using GPS data).
  • whether a navigation application can determine a recommended path can be affected by several factors, including the external environment and the specific process used to determine a recommended path (e.g., depending on the parameters of such process). For example, a navigation application can require that a path determined by its navigation path determination processes have an associated confidence value above a certain threshold before recommending the route to a user (e.g., as depicted in FIG. 14F using navigation user interface 1410). If enough map data is collected to determine a possible path, but such possible path does not have the requisite confidence value, the possible path would not be recommended and thus the second criterion would indicate that the navigation application cannot determine a recommended path.
  • map data can have one of at least four possible states: a first state ⁇ sufficient map data from storage resource; recommended path can be determined based on collected map data ⁇ , a second state ⁇ sufficient map data from storage resource; no recommended path can be determined based on collected map data ⁇ , a third state ⁇ insufficient map data from storage resource; recommended path can be determined based on collected map data ⁇ , and a fourth state ⁇ insufficient map data from storage resource; no recommended path can be determined based on collected map data ⁇ . More, less, and/or different criteria can be used to determine a map data state.
  • a navigation application can use all, some, or none of the possible states. For example, the second state may never (or rarely) logically occur because if sufficient map data is retrieved from a storage resource, then a recommended path should be determinable.
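As a concrete illustration, the two criteria and the four resulting states described above can be modeled as follows. This is a minimal sketch in Swift; the names (`MapDataState`, `MapDataAssessment`, `classify`) and the confidence threshold are assumptions for illustration, not the disclosed implementation.

```swift
// Hypothetical sketch of the four example map-data states. Criterion (1):
// sufficient map data can be retrieved from a storage resource. Criterion
// (2): a recommended path can be determined from the available map data.
enum MapDataState {
    case first   // sufficient stored data; recommended path determinable
    case second  // sufficient stored data; no recommended path (rare in practice)
    case third   // insufficient stored data; recommended path determinable
    case fourth  // insufficient stored data; no recommended path
}

struct MapDataAssessment {
    let hasSufficientStoredData: Bool  // criterion (1)
    let pathConfidence: Double?        // confidence of the best candidate path, if any
}

// A possible path only counts as "recommended" when its confidence clears
// a threshold, mirroring the confidence gating described above.
func classify(_ assessment: MapDataAssessment,
              confidenceThreshold: Double = 0.8) -> MapDataState {
    let canRecommend = (assessment.pathConfidence ?? 0) >= confidenceThreshold
    switch (assessment.hasSufficientStoredData, canRecommend) {
    case (true, true):   return .first
    case (true, false):  return .second
    case (false, true):  return .third
    case (false, false): return .fourth
    }
}
```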
  • computer system 1400 receives data from one or more other computer systems of the same, similar, and/or different type as computer system 1400. For example, another computer system can be navigating an environment using one or more sensors of the other computer system. The other computer system can detect and/or derive information corresponding to the environment using data detected by the one or more sensors. Computer system 1400 can receive the information either directly from the other computer system and/or through another device, such as a server. Such information can be detected near in time and/or location to where computer system 1400 is navigating.
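A hedged sketch of how such shared observations might be screened for relevance (near in time and near in location to where computer system 1400 is navigating) follows; the `SharedObservation` type, the thresholds, and the coordinate-delta test are illustrative assumptions, not the disclosed mechanism.

```swift
import Foundation

// Hypothetical sketch: accept an observation shared by another computer
// system only if it was detected near in time and near in location to the
// area currently being navigated.
struct SharedObservation {
    let detectedAt: Date
    let latitude: Double
    let longitude: Double
}

func isRelevant(_ observation: SharedObservation,
                nowLatitude: Double, nowLongitude: Double,
                maxAge: TimeInterval = 15 * 60,   // e.g., 15 minutes
                maxDegreeDelta: Double = 0.01) -> Bool {
    let freshEnough = Date().timeIntervalSince(observation.detectedAt) <= maxAge
    let nearEnough = abs(observation.latitude - nowLatitude) <= maxDegreeDelta
        && abs(observation.longitude - nowLongitude) <= maxDegreeDelta
    return freshEnough && nearEnough
}
```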
  • map 1410b includes indicator 1412 representing the current position of the device being navigated (e.g., computer system 1400 in this example).
  • Map 1410b also includes navigation path 1414a representing the upcoming portion of the navigation (e.g., as determined and suggested by the navigation application).
  • Map 1410b also includes example navigation path 1414b representing a previously travelled portion of the navigation.
  • Navigation path 1414b can have a visual appearance that indicates that a path was traveled, or simply appear with the default visual appearance of the underlying path (e.g., as if no navigation is programmed).
  • navigation path 1414a is based on map data associated with the first state {sufficient map data from storage resource; recommended path can be determined based on collected map data} and has a visual appearance associated with the first state.
  • portion 1414a has solid line borders.
  • the navigation application instructs (e.g., textually by 1410a and graphically by 1414a) a user to turn right at the next juncture.
  • FIG. 14B illustrates navigation user interface 1410 as it appears at a time after the scenario in FIG. 14A, but while the same navigation session (e.g., still navigating to the same destination) is continued.
  • navigation instruction 1410a is updated to display “Proceed Straight”
  • map 1410b is updated to depict a current surrounding geographic area
  • arrival information 1410c remains unchanged.
  • Navigation user interface 1410 in FIG. 14B also includes path confirmation user interface 1420.
  • a path confirmation user interface (e.g., 1420) includes a map area (e.g., 1420a) that includes a recommended navigation path (e.g., 1414a) for upcoming navigation.
  • the path confirmation user interface also includes a message area (e.g., 1420b) indicating (e.g., prompting) that user input is required to continue navigation, a selectable icon (e.g., 1420c) for confirming the recommended path, and a selectable icon (e.g., 1420d) for declining the recommended path.
  • the map data meets criteria for the third state described above ⁇ insufficient map data from storage resource; recommended path can be determined based on collected map data ⁇ .
  • the third state criteria are met because the navigation application does not receive sufficient data from a verified source but is able to collect enough map data from an unverified source and a plurality of sensors on computer system 1400 in order to recommend a navigation path.
  • the collected map data can be used as the basis to recommend a path as illustrated by navigation path 1414a in FIG. 14B (e.g., a recommended turn to the left at the next juncture).
  • the navigation application is configured to prompt for user input confirmation by displaying path confirmation user interface 1420.
  • Prompting a user can be preferable because the confidence of a navigation recommendation based on map data from a storage resource (e.g., a verified source) can generally be (or always be) higher than if it comes from an alternative source (e.g., an unverified source), and the prompt serves to attain user consent to proceed with navigation even though confidence may be lower and/or indicate to the user that navigation is occurring in an area of lower confidence data (e.g., requiring more user attention and/or intervention).
  • In FIG. 14B, computer system 1400 receives user input 1421 (e.g., a tap gesture) on icon 1420c for confirming the recommended path.
  • map data collected from a source other than the storage resource includes map data received from and/or based on crowdsourced data.
  • the crowdsourced data includes and/or is based on one or more previous navigation routes (e.g., one or more navigation routes successfully traversed by one or more other devices).
  • FIG. 14C illustrates navigation user interface 1410 for interacting with different map data in response to computer system 1400 receiving user input 1421 in FIG. 14B.
  • navigation user interface 1410 now includes updated navigation instruction 1410a (e.g., instructing the user to turn left at the next juncture, matching the confirmed recommended navigation path from FIG. 14B).
  • a navigation path (e.g., 1414a) can maintain a visual appearance associated with the state of its underlying map data.
  • navigation path 1414a maintains the visual appearance of having dotted line borders as it appeared in FIG. 14B.
  • FIG. 14D illustrates navigation user interface 1410 after the device being navigated performs the left turn instructed in FIG. 14C.
  • computer system 1400 continually updates the displayed map area to display the real-time location of the device being navigated relative to the map (e.g., represented by indicator 1412 within the map area). This can be performed using location data such as global positioning system (GPS) data.
  • a navigation path maintains a visual appearance associated with the state of the map data prior to confirmation of the recommended path even after the associated area is traversed. For example, in FIG. 14D, navigation path 1414b maintains the visual appearance of having a dotted line border as it had prior to the corresponding portion of the map area having been traversed (e.g., indicator 1412 in FIG. 14C traversed along the navigation path and into the dotted line region, and so in FIG. 14D the navigation path 1414b already traversed keeps the dotted line appearance). Note that even though navigation paths 1414a and 1414b in FIG. 14D both have dotted line borders, they are not necessarily identical. In this example, navigation path 1414a includes shading to indicate the upcoming navigation route, but navigation path 1414b does not include the shading. Navigation path 1414a also keeps the visual appearance associated with the third state.
  • a navigation path changes in a manner that it matches the visual appearance of one or more other states.
  • navigation portion 1414b in FIG. 14D could instead have solid line borders (as in FIG. 14C), which matches the appearance of traversed paths associated with map data having the first state (e.g., all traversed paths can be indicated the same visually, such as with a solid border line).
  • FIG. 14E illustrates navigation user interface 1410 as displayed in response to the navigation application reaching a point where no recommended path can be determined for the device being navigated, and is displayed after the device being navigated continues proceeding forward as instructed in FIG. 14D.
  • the map data for this area can be associated with the fourth state described above ⁇ insufficient map data from storage resource; no recommended path can be determined based on collected map data ⁇ .
  • the device (e.g., computer system 1400) requires user input of a navigation path.
  • FIG. 14E includes a prompt for a user to input a navigation path (asking “How to proceed?”). Additionally, navigation path 1414a is displayed with a visual appearance indicating the user input is required (e.g., displayed as an incomplete segment).
  • computer system 1400 receives user input 1423 (e.g., a swipe gesture to the left) on map 1410b, representing a command to the navigation application for navigation to proceed to the left (e.g., make a left turn).
  • FIG. 14F illustrates navigation user interface 1410 as it appears in response to computer system 1400 receiving user input 1423 in FIG. 14E.
  • navigation user interface 1410 also includes invalid path user interface 1430.
  • an invalid path user interface (e.g., 1430) includes an indication that a navigation path created or requested based on user input (e.g., 1423) is invalid, an option to retry user input (e.g., icon 1430b), and an option to end navigation (e.g., icon 1430c).
  • computer system 1400 determines (e.g., based on sensor data) that a left turn is not safe.
  • computer system 1400 receives user input 1431 (e.g., a tap gesture) on icon 1430b for retrying user input of navigation path.
  • receiving user input representing selection of an option to end navigation causes one or more of the following actions: a navigation session ends (e.g., the current trip is ended), a device being navigated stops (e.g., if the device being navigated can receive and act upon relevant instructions), a device being navigated backs up (e.g., and returns to another location) (e.g., if the device being navigated can receive and act upon relevant instructions).
  • FIG. 14G illustrates exemplary navigation user interface 1410 (returned to the same scenario as described in FIG. 14E) as displayed in response to computer system 1400 receiving user input 1431.
  • user input defining a path can include one or more valid gesture types.
  • a valid gesture can be a continuous gesture such as a swipe (as shown in FIG. 14E) for indicating a location and/or direction associated with a desired navigation maneuver (e.g., which may nonetheless define an invalid path as determined by sensor data).
  • an additional or alternative valid gesture can be a non-continuous gesture, such as a series of inputs defining points along a desired navigation path as shown in FIG. 14G. These points can be interpolated between to determine the desired navigation path.
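A minimal sketch of that interpolation step follows, in Swift, assuming simple linear interpolation between consecutive tapped points; the `MapPoint` type and the sampling density are illustrative assumptions, not the disclosed method.

```swift
// Hypothetical sketch: linearly interpolate between consecutive tapped
// points (e.g., the points of user input 1433 and user input 1435) to
// produce a dense navigation path.
struct MapPoint {
    var x: Double
    var y: Double
}

func interpolatedPath(through taps: [MapPoint],
                      samplesPerSegment: Int = 10) -> [MapPoint] {
    guard taps.count > 1 else { return taps }
    var path: [MapPoint] = []
    for (start, end) in zip(taps, taps.dropFirst()) {
        for step in 0..<samplesPerSegment {
            let t = Double(step) / Double(samplesPerSegment)
            path.append(MapPoint(x: start.x + (end.x - start.x) * t,
                                 y: start.y + (end.y - start.y) * t))
        }
    }
    path.append(taps[taps.count - 1])  // include the final tapped point
    return path
}
```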
  • computer system 1400 receives user input 1433 and then user input 1435 (e.g., both being a tap gesture) on map 1410b, collectively representing a command for navigation to proceed forward to the location of user input 1433 and then proceed to the right to the location of user input 1435 (e.g., resulting in a right turn).
  • user input defining and/or confirming a navigation path includes voice input. For example, at navigation user interface 1410 in FIG. 14B voice input (“Yes”) can cause the same result as user input 1421, and/or at navigation user interface 1410 in FIG. 14G voice input (“turn right”) can cause the same result as user input 1433 and user input 1435.
  • user input defining a path can include one or more user inputs corresponding to selection on a representation of the intended traversal area (e.g., area in front of the device being navigated).
  • map 1410b can be computer generated graphics and/or include a camera view of what the intended traversal area looks like (e.g., from one or more cameras attached to the device being navigated).
  • FIG. 14H illustrates exemplary navigation user interface 1410 as it appears in response to computer system 1400 receiving user input 1433 and user input 1435 in FIG. 14G.
  • navigation user interface 1410 includes updated navigation instruction 1410a (which now instructs “Turn Right”) and navigation path 1414a in the shape of the path defined by user input 1433 and user input 1435.
  • navigation path 1414a is based on map data associated with the fourth state {insufficient map data from storage resource; no recommended path can be determined based on collected map data without user input} and has a visual appearance associated with the fourth state (e.g., appears as a single, solid line).
  • navigation path 1414a can indicate that this portion of the navigation is user defined.
  • navigation paths 1414a and 1414b can behave as described above with respect to the other visual appearances (e.g., navigation path 1414b can be a single, solid line indicating a user defined path has been traversed, or can change to a thicker line having solid borders as shown in FIG. 14C).
  • map data associated with the first state described above does not require user input intervention.
  • map data associated with the third state described above results in the navigation application being able to infer a recommended navigation path, which is presented at a user interface that requires user input intervention to confirm.
  • map data associated with the fourth state described above results in the navigation application not being able to infer a recommended navigation path and instead requires ad-hoc user input intervention to determine a navigation path.
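The behaviors just summarized amount to a mapping from map-data state to the interaction the navigation application requires. The sketch below expresses that mapping in Swift, reusing the hypothetical `MapDataState` from the earlier sketch; treating the rarely occurring second state like the third is an assumption, not something stated in the disclosure.

```swift
// Hypothetical mapping from map-data state to the user interaction the
// navigation application requires, per the behavior summarized above.
enum RequiredInteraction {
    case none             // first state: navigate automatically
    case confirmPath      // third state: present recommended path, await approval
    case defineAdHocPath  // fourth state: prompt the user to define a path
}

func interaction(for state: MapDataState) -> RequiredInteraction {
    switch state {
    case .first:
        return .none
    case .second, .third:
        // Assumption: if the rarely occurring second state arises, treat it
        // like the third state and ask the user to confirm a path.
        return .confirmPath
    case .fourth:
        return .defineAdHocPath
    }
}
```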
  • the device being navigated, while awaiting valid user input to define and/or confirm a navigation path, performs a waiting maneuver (e.g., if it includes movement capability). For example, prior to receiving user input 1421 of FIG. 14B, and/or user input 1433 and user input 1435 of FIG. 14G, the device being navigated can stop moving and wait for instructions. The device being navigated can maintain the waiting maneuver until valid user input is received (e.g., and not resume or continue further movement in response to user input 1423 of FIG. 14E).
  • a user interface and/or prompt for requesting user input can be displayed at a threshold (e.g., predetermined) distance away from the location represented by the map data requiring the user input (e.g., a half mile away from where the navigation instruction is needed, such as at the border of a map data state change from first state to third state).
  • a user interface and/or prompt for requesting user input can be displayed at a threshold (e.g., predetermined) time until arrival away from the location represented by the map data requiring the user input (e.g., one minute before arrival at where the navigation instruction is needed, based on current travel speed).
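The distance-based and time-based triggers described above could be combined as in the following sketch; the function name and the threshold values (800 meters as roughly a half mile, 60 seconds) are illustrative assumptions.

```swift
// Hypothetical check for when to display a prompt requesting user input:
// either within a threshold distance of the area that needs input, or
// within a threshold time of arrival based on current travel speed.
func shouldPromptForInput(distanceToAreaInMeters distance: Double,
                          speedInMetersPerSecond speed: Double,
                          distanceThreshold: Double = 800,  // roughly a half mile
                          timeThreshold: Double = 60) -> Bool {
    if distance <= distanceThreshold { return true }
    guard speed > 0 else { return false }  // not moving: rely on distance alone
    return (distance / speed) <= timeThreshold  // estimated seconds to arrival
}
```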
  • the device being navigated corresponds to (e.g., is associated with, logged into, and/or assigned to) a particular user (e.g., user account, such as a user account belonging to the owner of the vehicle).
  • the device being navigated is connected to (e.g., in communication with) a plurality of devices.
  • the device being navigated can be connected to two other devices: a different device of the owner (e.g., a smartphone displaying navigation user interface 1410) and a device of a guest (e.g., a user other than the owner).
  • a user interface and/or prompt for requesting user input is displayed at one or more of the plurality of devices connected to the device being navigated.
  • the owner’s different device can display navigation user interface 1410 prompting for user input whereas the device of the guest does not display navigation user interface 1410.
  • the device being navigated can prompt for input from certain users and/or devices preferentially and/or sequentially.
  • the device being navigated is connected to one other device.
  • the one other device can display a user interface and/or prompt requesting user input depending on whether the one other device corresponds to the owner of the device being navigated (e.g., and/or belongs to a set of users, such as registered users, authorized users, and/or trusted users).
  • if the one other device is a device of a guest (e.g., not the owner), the one other device does not display navigation user interface 1410. In some embodiments, if the one other device is a different device of the owner, the one other device does display navigation user interface 1410. For example, a device of the owner, but not a device of a guest, can be prompted and provide instructions to the device being navigated for navigating through areas with insufficient map data. However, by not prompting certain users (e.g., guests) in the same way as the owner, the device being navigated can be prevented from being navigated through such areas (e.g., which can be a preference of and/or made by the owner).
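One way to express that per-device prompting policy is sketched below; the `ConnectedDevice` type and its fields are illustrative assumptions rather than the disclosed design.

```swift
// Hypothetical prompting policy: only the owner's devices (and/or devices
// of other authorized or trusted users) receive the input prompt; guest
// devices do not.
struct ConnectedDevice {
    let name: String
    let belongsToOwner: Bool
    let isAuthorizedUser: Bool
}

func devicesToPrompt(among connected: [ConnectedDevice]) -> [ConnectedDevice] {
    connected.filter { $0.belongsToOwner || $0.isAuthorizedUser }
}
```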
  • the device being navigated and the device displaying navigation user interfaces are the same device.
  • computer system 1400 displays the user interfaces and is tracking and updating navigation based on its own location and movement.
  • the device being navigated and the device displaying navigation user interfaces are different devices.
  • computer system 1400 displays the user interfaces but is tracking and updating navigation based on the location and movement of another device (e.g., for guiding another smartphone; for guiding a device with autonomous and/or semi-autonomous movement capabilities).
  • the navigation user interfaces are displayed on a shared screen.
  • the navigation interfaces can be displayed on a touchscreen of a vehicle that is attached to computer system 1400 (e.g., a user connects their smartphone via a wire or wirelessly to a computer inside of their vehicle, causing a display of the vehicle to be controlled by an operating system of the smartphone (e.g., like Apple CarPlay)).
  • FIG. 15 is a flow diagram illustrating a method for interacting with different map data using a computer system in accordance with some embodiments.
  • Process 1500 is performed at a computer system (e.g., system 100).
  • the computer system is in communication with one or more output components.
  • Some operations in process 1500 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • process 1500 provides an intuitive way for interacting with different map data.
  • the method reduces the cognitive burden on a user for interacting with different map data, thereby creating a more efficient human-machine interface.
  • process 1500 is performed at a computer system (e.g., 1400) that is in communication with one or more output components (e.g., 1402) (e.g., a display screen, a touch-sensitive display, a haptic output component, and/or a speaker).
  • the computer system is a watch, a fitness tracking device, a phone, a tablet, a processor, a head-mounted display (HMD) device, and/or a personal computing device.
  • the computer system is in communication with one or more input devices (e.g., a physical input mechanism, a camera, a touch-sensitive display, a microphone, and/or a button).
  • the computer system receives (1502) a request (e.g., as described above with respect to FIGS. 14A-14H) to navigate to a first destination (e.g., as described above with respect to FIGS. 14A-14H).
  • the request is received via a map application (e.g., an application configured to provide directions to destinations).
  • receiving the request includes detecting, via a sensor in communication with the computer system, input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)).
  • the request is received via a determination by the computer system to navigate to the first destination.
  • In response to receiving the request (e.g., as described above with respect to FIGS. 14A-14H), the computer system initiates (1504) navigation to the first destination (e.g., as described above with respect to FIGS. 14A-14H) (e.g., displaying navigation interface 1410 as illustrated in FIG. 14A).
  • navigating to the first destination includes providing, via at least one output component of the one or more output components, one or more maneuvers (e.g., directions).
  • navigating to the first destination includes causing a physical component in communication with the computer system to change position.
  • While (1506) navigating to the first destination (e.g., as illustrated in FIG. 14A) (e.g., after initiating navigation to the first destination, such as after providing at least one maneuver (e.g., a direction) with respect to navigating to the first destination), in accordance with a determination that an intended traversal area (e.g., represented by 1414a) (e.g., an upcoming traversal area, a next traversal area, a future traversal area, and/or an area for which the computer system has determined to navigate to and/or through) includes a first quality of map data (e.g., represented by navigation path 1414a of FIG. 14E and/or 14G), the computer system requests (1508), via the one or more output components, input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to an upcoming maneuver (e.g., displaying path confirmation user interface 1420, navigation instruction 1410a of FIG. 14E, and/or navigation instruction 1410a of FIG. 14G).
  • the requesting includes outputting, via a speaker of the one or more output components, an audio request with respect to the next maneuver.
  • the requesting includes displaying, via a display component of the one or more output components, a visual request with respect to the next maneuver.
  • the first quality of map data is determined based on metadata corresponding to the intended traversal area. In some embodiments, the first quality of map data is determined based on a confidence level corresponding to the intended traversal area.
  • While (1506) navigating to the first destination, in accordance with a determination that the intended traversal area includes a second quality of map data (e.g., represented by navigation path 1414a of FIG. 14A) (e.g., map data associated with the first state as described with respect to FIGS. 14A-14H) (e.g., predefined map data, map data including one or more potential routes through the intended traversal area, and/or map data determined based on data detected via one or more sensors in communication with the computer system) different from the first quality of map data (e.g., represented by navigation path 1414a of FIG. 14E and/or 14G), the computer system forgoes (1510) requesting input with respect to the upcoming maneuver (e.g., forgoing displaying path confirmation user interface 1420, navigation instruction 1410a of FIG. 14E, and/or navigation instruction 1410a of FIG. 14G) (e.g., continuing to display navigation user interface 1410 as in FIG. 14A).
  • the first quality of map data is determined to be of lower quality (e.g., includes less data, includes data that corresponds to less detailed map data, and/or does not include data that is included in the second quality of map data) than the second quality of map data.
  • Requesting input with respect to the upcoming maneuver when the intended traversal area includes the first quality of map data provides the user with different functionality depending on the quality of map data for the intended traversal area, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • while navigating to the first destination (e.g., as described above with respect to FIGS. 14A-14H), in accordance with the determination that the intended traversal area includes the second quality of map data, the computer system performs the upcoming maneuver (e.g., performing the maneuver represented by navigation path 1414a of FIG. 14A) without receiving input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to the upcoming maneuver (e.g., 1414a of FIG. 14A).
  • a route to the first destination is selected via input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) and the route includes the upcoming maneuver.
  • a route to the first destination is selected via input and no further input is received with respect to the upcoming maneuver.
  • the second quality of map data was contributed by a third party (e.g., a person or company in control of the intended traversal area and/or a person, company, and/or entity that has visited, selected, and/or navigated the intended area) and not a manufacturer of the computer system.
  • the second quality of map data is verified by a mapping software performing the upcoming maneuver.
  • the second quality of map data is verified by a user associated with the mapping software. Performing the upcoming maneuver when the intended traversal area includes the second quality of map data provides the user with functionality without the user needing to directly request such functionality, thereby reducing the number of inputs needed to perform an operation and/or performing an operation when a set of conditions has been met without requiring further user input.
  • while navigating to the first destination, in accordance with a determination that the intended traversal area includes the first quality of map data and after (e.g., while and/or in conjunction with) a computer-generated path (e.g., 1414a in FIG. 14B) is provided (e.g., the computer-generated path is a recommended path and/or a determined path through the intended traversal area and/or through locations that correspond to the intended traversal area), the computer system receives input (e.g., 1421) (e.g., a tap input or, in some examples, a non-tap input (e.g., a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) corresponding to approval of the computer-generated path.
  • the computer-generated path includes the upcoming maneuver.
  • the computer-generated path is generated without input from a user of the computer system.
  • the computer system while navigating to the first destination, in response to receiving the input, performs the upcoming maneuver (e.g., performing the maneuver represented by navigation path 1414a of FIG. 14B) according to the computer-generated path (e.g., 1414a of FIG. 14B).
  • in response to receiving input corresponding to rejection of the computer-generated path, the computer system causes display of a second computer-generated path different from the computer-generated path.
  • Performing the upcoming maneuver according to the computer-generated path when approval of the path is received provides the user the ability to decide whether a path that was generated for the user is what the user wants, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the computer-generated path is generated based on data captured by one or more sensors that are in communication with the computer system.
  • the one or more sensors are included within and/or attached to a housing that includes within it and/or has attached to it the one or more output components.
  • the one or more sensors do not detect a location (e.g., via a global positioning system) but rather detect one or more objects in a physical environment.
  • the computer-generated path is generated based on data captured by a plurality of sensors in communication with the computer system.
  • the one or more sensors includes a camera and the data includes an image captured by the camera.
  • the one or more sensors includes a radar, lidar, and/or another ranging sensor.
  • Generating the computer-generated path based on data captured by one or more sensors that are in communication with the computer system ensures that the computergenerated path is based on current data and not data that was detected previously, thereby adapting to a current context and/or state of a physical environment.
  • the computer-generated path is generated based on data captured by a different computer system separate from the computer system.
  • the different computer system is remote from and/or not physically connected to the computer system.
  • the computer-generated path is generated based on a heat map determined based on data collected from a plurality of different computer systems.
  • the plurality of different computer systems is not in communication with the computer system but rather are in communication with the different computer system that is in communication with the computer system.
  • the different computer system is in wireless communication with the computer system, such as via the Internet.
  • the data is received by the computer system in a message sent by the different computer system.
  • the different computer system generates the computer-generated path, and the computer system receives the computer-generated path from the different computer system.
  • Generating the computer-generated path based on data captured by the different computer system provides the ability for operations to be performed and/or data to be detected by computer systems different from the computer system, thereby offloading such operations to different processors and/or allowing for different types of data to be detected/used when the computer system might not be in communication with such sensors.
  • while navigating to the first destination, in accordance with a determination that the intended traversal area includes a third quality of map data (e.g., represented by navigation path 1414a of FIG. 14E and/or 14G), the computer system receives input (e.g., 1423, 1433, and/or 1435) (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) corresponding to a path (e.g., defined by 1423, 1433, and/or 1435) (e.g., a navigation path and/or one or more instructions for navigating with respect to the intended traversal area) with respect to the intended traversal area.
  • the third quality of map data is the second quality of map data.
  • the path is generated based on the input.
  • the third quality of map data is a lower quality of map data than the second quality of map data.
  • in accordance with a determination that the path meets a first set of criteria, the computer system navigates (e.g., with respect to the intended traversal area) via the path (e.g., navigating via 1414a of FIG. 14H).
  • the first set of criteria includes a criterion that is met when the path is determined to be navigable by the computer system.
  • the path is determined to be navigable by the computer system based on data captured by one or more sensors in communication with the computer system. In some embodiments, the path is determined to be navigable by the computer system based on one or more objects detected in the intended traversal area.
  • Navigating via the path when the path meets the first set of criteria ensures that the path is accepted by the computer system and that not just any path will be used for navigation, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
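A minimal sketch of such a sensor-based navigability check follows, reusing the hypothetical `MapPoint` type from the interpolation sketch above; the clearance-radius test is an illustrative assumption about how detected objects might invalidate a user-defined path.

```swift
// Hypothetical navigability check for a user-defined path: reject the path
// if any of its points passes within a clearance radius of an obstacle
// detected by the one or more sensors.
func pathIsNavigable(_ path: [MapPoint],
                     detectedObstacles: [MapPoint],
                     clearance: Double = 1.0) -> Bool {
    for point in path {
        for obstacle in detectedObstacles {
            let dx = point.x - obstacle.x
            let dy = point.y - obstacle.y
            if (dx * dx + dy * dy).squareRoot() < clearance {
                return false  // too close to a detected obstacle
            }
        }
    }
    return true
}
```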
  • in accordance with a determination that the path does not meet the first set of criteria, the computer system forgoes navigating via the path (e.g., and displaying invalid path user interface 1430) (e.g., rejecting the path and, in some examples, requesting input corresponding to a different path), wherein the determination that the path does not meet the first set of criteria is based on data detected by one or more sensors in communication with the computer system.
  • the one or more sensors do not detect a location of the computer system but rather detect a characteristic (e.g., an object, a surface, and/or a path within) of a physical environment.
  • Forgoing navigating via the path when the path does not meet the first set of criteria ensures that not just any path will be used for navigation, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • in accordance with a determination that the intended traversal area includes the second quality of map data (e.g., represented by navigation path 1414a of FIG. 14A) (e.g., map data associated with the first state as described with respect to FIGS. 14A-14H) and after performing the upcoming maneuver without receiving input with respect to the upcoming maneuver (e.g., represented by navigation path 1414a of FIG. 14A), in accordance with a determination that a second intended traversal area includes the first quality of map data (e.g., represented by navigation path 1414a of FIG. 14E and/or 14G), the computer system requests, via the one or more output components, input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to a second upcoming maneuver different from the upcoming maneuver (e.g., displaying path confirmation user interface 1420, navigation instruction 1410a of FIG. 14E, and/or navigation instruction 1410a of FIG. 14G).
  • requesting input with respect to the second upcoming maneuver is in a different form than requesting input with respect to the upcoming maneuver (e.g., one includes providing a suggested path while the other requires a user to identify at least one or more points to use to generate a path).
  • the second intended traversal area is different from the intended traversal area. Requesting input with respect to the second upcoming maneuver after performing the upcoming maneuver without receiving input with respect to the upcoming maneuver ensures that the computer system only requests user input for some maneuvers and not other maneuvers, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • a first path corresponding to the upcoming maneuver has a first visual appearance (e.g., visual appearance of 1414a in FIGS. 14A, 14B, 14D, 14E, 14G, and/or 14H) and a second path corresponding to the second upcoming maneuver has a second visual appearance different (e.g., visual appearance of 1414a in FIGS. 14A, 14B, 14D, 14E, 14G, and/or 14H) from (e.g., a different color, pattern, line weight, line segmentation (e.g., solid lines v. dotted lines), and/or size) the first visual appearance.
  • the first visual appearance indicates a first respective quality of map data (e.g., map data associated with the first state, second state, third state, and/or fourth state as described with respect to FIGS. 14A-14H) and the second visual appearance indicates a second respective quality of map data (e.g., map data associated with the first state, second state, third state, and/or fourth state as described with respect to FIGS. 14A-14H) different from the first respective quality of map data.
  • the second upcoming maneuver is the same type of maneuver as the upcoming maneuver (e.g., the same maneuver).
  • process 1600 optionally includes one or more of the characteristics of the various methods described above with reference to process 1500.
  • the computer system of process 1600 can be the computer system of process 1500. For brevity, these details are not repeated below.
  • FIG. 16 is a flow diagram illustrating a method for interacting with different map data using a computer system in accordance with some embodiments.
  • Process 1600 is performed at a computer system (e.g., system 100).
  • the computer system is in communication with one or more output components.
  • Some operations in process 1600 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
  • process 1600 provides an intuitive way for interacting with different map data.
  • the method reduces the cognitive burden on a user for interacting with different map data, thereby creating a more efficient human-machine interface.
  • process 1600 is performed at a computer system (e.g., 1400) that is in communication with one or more output components (e.g., 1402) (e.g., display screen, a touch-sensitive display, a haptic output device, and/or a speaker).
  • the computer system is a watch, a fitness tracking device, a phone, a tablet, a processor, a head-mounted display (HMD) device, and/or a personal computing device.
  • the computer system is in communication with one or more input devices (e.g., a physical input mechanism, a camera, a touch-sensitive display, a microphone, and/or a button).
  • the computer system receives (1602) a request to navigate to a first destination (e.g., a request to display navigation interface 1410 of FIG. 14A).
  • the request is received via a map application (e.g., an application configured to provide directions to destinations).
  • receiving the request includes detecting, via a sensor in communication with the computer system, input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)).
  • the request is received via a determination by the computer system to navigate to the first destination.
  • In response to receiving the request (e.g., a request to display navigation interface 1410 of FIG. 14A), the computer system initiates (1604) navigation to the first destination (e.g., as described above with respect to FIGS. 14A-14H) (e.g., as illustrated in FIG. 14A).
  • navigating to the first destination includes providing, via at least one output component of the one or more output components, one or more maneuvers (e.g., directions).
  • navigating to the first destination includes causing a physical component in communication with the computer system to change position.
  • the computer system requests (1608), via the one or more output components, input (e.g., 1423, 1433, and/or 1435) (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to the upcoming maneuver.
  • requesting includes outputting, via a speaker of the one or more output components, an audio request with respect to the upcoming maneuver.
  • requesting includes displaying, via a display component of the one or more output components, a visual request (e.g., a request for a user to select one or more points for which to include in the upcoming maneuver, a request for a user to draw a path to correspond to the upcoming maneuver, a request for a user to verbally describe the upcoming maneuver, and/or a request for a user to point or otherwise indicate a direction and/or area to include in the upcoming maneuver).
  • Requesting input with respect to the upcoming maneuver when the intended traversal area includes inadequate map data provides the user with different functionality depending on the map data for the intended traversal area, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the computer system receives input (e.g., 1423) (e.g., a drag input or, in some examples, a non-drag input (e.g., a tap input, a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) corresponding to a first path (e.g., 1414a of FIG. 14E) in a first representation (e.g., navigation user interface 1410 of FIG. 14E) of the intended traversal area.
  • the input is continuous input including input at a first position and a second position, wherein the path includes the first position and the second position.
  • the input includes a tap and hold gesture that begins at a first position and continues to a second position, where the path includes the first position and the second position.
  • the computer system navigates according to the path.
  • the input includes a drawing of a continuous line in the representation of the intended traversal area. Receiving input corresponding to the first path in the first representation provides the user a precise way for instructing the computer system where to navigate, thereby reducing the number of inputs needed to perform an operation and/or providing additional control options without cluttering the user interface with additional displayed controls.
  • the computer system receives input (e.g., 1433, and/or 1435) (e.g., a drag input or, in some examples, a non-drag input (e.g., a tap input, a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) corresponding to one or more points (e.g., centroids of 1433, and/or 1435) in a second representation (e.g., navigation user interface 1410 of FIG. 14H) of the intended traversal area, wherein a second path is generated based on the one or more points.
  • the one or more points includes a plurality of points, wherein a line between the plurality of points is generated (e.g., using interpolation or some other operation to identify a path between the plurality of points). In some embodiments, the one or more points includes a point, wherein a line between a location of the computer system and the point is generated (e.g., using interpolation or some other operation to identify a path between the plurality of points).
  • the input includes a plurality of distinct input, each distinct input including detection of the distinct input and detection of a release of the distinct input. In some embodiments, the input includes a first input and a second input distinct (e.g., separate) from the first input. Receiving input corresponding to one or more points in the second representation provides the user a precise way for instructing the computer system where to navigate, thereby reducing the number of inputs needed to perform an operation and/or providing additional control options without cluttering the user interface with additional displayed controls.
  • the computer system receives (e.g., via a microphone that is in communication with the computer system) a voice request corresponding to the intended traversal area.
  • the voice request includes one or more verbal instructions for navigating with respect to the intended traversal area. Receiving the voice request corresponding to the intended traversal area provides the user a precise way for instructing the computer system where to navigate, thereby reducing the number of inputs needed to perform an operation and/or providing additional control options without cluttering the user interface with additional displayed controls.
  • the navigation to the first destination is initiated along a third path (e.g., 1414a of FIG. 14H) (e.g., a path through a physical environment and/or a path including one or more directions for navigating the physical environment).
  • a portion of the third path goes through the intended traversal area (e.g., the path is configured to navigate through and/or along the intended traversal area).
  • the path is determined by the computer system.
  • the computer system sends, to a device in communication with the computer system, such as a server, a request for the path and, after sending the request, the computer system receives, from the device, the path.
  • the navigation including the portion that requires input to navigate within provides the user with the ability to navigate into areas for which map data accessible by the computer system is inadequate, thereby increasing the number of options available to the user and allowing the user to save time while navigating to a destination.
  • the navigation to the first destination is initiated along a fourth path (e.g., 1414a of FIG. 14A) (e.g., a path through a physical environment, the path including one or more directions for navigating the physical environment).
  • the fourth path includes a respective portion that does not require an input (e.g., 1421, 1423, 1433, and/or 1435) (e.g., user input) (e.g., one or more respective inputs that are obtained to navigate through the respective portion) (e.g., one or more drag inputs and/or one or more non-drag inputs (e.g., a tap input, a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) to navigate through the respective portion (e.g., the path includes a maneuver to navigate through the portion without a user confirming the maneuver).
  • the path is determined by the computer system.
  • the computer system sends, to a device in communication with the computer system, such as a server, a request for the path and, after sending the request, the computer system receives, from the device, the path.
  • the navigation including the portion that does not require input to navigate through reduces the amount of input required from the user during navigation, thereby reducing the number of inputs needed to perform an operation and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the set of one or more criteria includes a criterion that is met when a determination is made that the computer system is within a first threshold distance (e.g., zero or more) (e.g., 1-10 meters) from the intended traversal area.
  • the first threshold distance is predefined and applied to all navigation and all portions of a navigation by the computer system.
  • the first threshold distance is based on the intended traversal area and is different for different intended traversal areas (e.g., different intended traversal areas may be smaller or bigger and require different amounts of time to handle) (e.g., different intended traversal areas may include different areas around them for stopping).
  • Requesting input with respect to the upcoming maneuver when the intended traversal area is within the first threshold distance provides the user with options with respect to navigation at a time in which the user is in a position to provide input, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the set of one or more criteria includes a criterion that is met when a determination is made that the computer system is not moving (e.g., based on data detected by a sensor in communication with the computer system and/or based on a current maneuver being performed for navigating) and within a second threshold distance (e.g., zero or more) (e.g., 1-10 meters) from the intended traversal area.
  • the second threshold distance is predefined and applied to all navigation and all portions of a navigation by the computer system.
  • the second threshold distance is based on the intended traversal area and is different for different intended traversal areas (e.g., different intended traversal areas may be smaller or bigger and require different amounts of time to handle) (e.g., different intended traversal areas may include different areas around them for stopping).
  • in some embodiments, in accordance with a determination that the computer system is moving, the computer system does not request input with respect to the upcoming maneuver. In some embodiments, in accordance with a determination that the computer system is not within the second threshold distance from the intended traversal area, the computer system does not request input with respect to the upcoming maneuver; a sketch of this stopping-distance criterion appears after this list.
  • Requesting input with respect to the upcoming maneuver when the computer system is not moving and within the second threshold distance from the intended traversal area provides the user with options with respect to navigation at a time in which the user is in a position to provide input, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • the computer system receives a set of one or more inputs including one or more inputs (e.g., 1423, 1433, and/or 1435) (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to the upcoming maneuver.
  • the set of one or more inputs includes input defining a path for the navigation to take with respect to the intended traversal area.
  • in response to receiving the set of one or more inputs including the one or more inputs with respect to the upcoming maneuver, in accordance with a determination that a path resulting from the set of one or more inputs does not meet a first set of criteria, the computer system requests (e.g., by displaying invalid path user interface 1430), via the one or more output components, different input with respect to the upcoming maneuver.
  • the first set of criteria includes a criterion that is met when the path is determined to be safe and/or possible to be navigated by the computer system. In some embodiments, the first set of criteria includes a criterion that is met based on one or more objects identified in a physical environment corresponding to the path.
  • in accordance with a determination that the path resulting from the set of one or more inputs meets the first set of criteria, the computer system forgoes requesting, via the one or more output components, different input with respect to the upcoming maneuver and/or initiates navigation of the upcoming maneuver. Requesting different input when the path does not meet the first set of criteria ensures that not just any path will be used for navigation (see the path-validation sketch after this list), thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
  • process 1400 optionally includes one or more of the characteristics of the various methods described above with reference to process 1500.
  • the computer system of process 1400 can be the computer system of process 1500. For brevity, these details are not repeated below.
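As a rough illustration of the point-based path generation described in the bullets above, the following Swift sketch linearly interpolates between user-selected points. All names here (MapPoint, interpolatedPath, stepLength) are hypothetical and not part of the disclosure:

    struct MapPoint {
        var x: Double
        var y: Double
    }

    /// Builds a densely sampled candidate path from the system's current
    /// location through one or more user-selected points, using linear
    /// interpolation between consecutive points.
    func interpolatedPath(from origin: MapPoint, through points: [MapPoint], stepLength: Double = 0.5) -> [MapPoint] {
        var path: [MapPoint] = [origin]
        var previous = origin
        for point in points {
            let dx = point.x - previous.x
            let dy = point.y - previous.y
            let distance = (dx * dx + dy * dy).squareRoot()
            let steps = max(1, Int(distance / stepLength))
            for step in 1...steps {
                let t = Double(step) / Double(steps)
                path.append(MapPoint(x: previous.x + dx * t, y: previous.y + dy * t))
            }
            previous = point
        }
        return path
    }

A single tapped point degenerates to one segment from the system's location to that point, matching the single-point case described above.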
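The stopping-distance criteria above reduce to a simple predicate. This sketch assumes invented types (TraversalArea and its promptThreshold field) and is not the disclosed implementation:

    /// An intended traversal area with its own prompting threshold, since
    /// different areas may warrant different stopping distances.
    struct TraversalArea {
        var promptThreshold: Double   // meters; e.g., somewhere in the 1-10 m range
    }

    /// Returns true when input should be requested for the upcoming maneuver:
    /// the system is stopped and within the area-specific threshold distance.
    func shouldRequestManeuverInput(distanceToArea: Double, isMoving: Bool, area: TraversalArea) -> Bool {
        return !isMoving && distanceToArea <= area.promptThreshold
    }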
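The path-validation branch can be approximated as below. The obstacle-clearance check stands in for whatever safety and feasibility criteria an implementation actually applies, and every name is hypothetical:

    struct MapPoint {   // same shape as in the interpolation sketch above
        var x: Double
        var y: Double
    }

    enum PathCheck {
        case valid
        case invalid(reason: String)
    }

    /// Rejects a candidate path that passes within `clearance` meters of any
    /// detected obstacle; otherwise accepts it.
    func validate(path: [MapPoint], obstacles: [MapPoint], clearance: Double) -> PathCheck {
        for p in path {
            for o in obstacles {
                let dx = p.x - o.x
                let dy = p.y - o.y
                if (dx * dx + dy * dy).squareRoot() < clearance {
                    return .invalid(reason: "path passes within \(clearance) m of an obstacle")
                }
            }
        }
        return .valid
    }

On an .invalid result, the system would surface something like the invalid-path user interface and request a different drawing; on .valid it could initiate the maneuver without further input.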

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure generally relates to user interfaces.

Description

TECHNIQUES FOR CONFIGURING NAVIGATION OF A DEVICE
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Non-Provisional Patent Application Serial No. 18/896,455 entitled “TECHNIQUES FOR CONFIGURING NAVIGATION OF A DEVICE,” filed September 25, 2024, to U.S. Non-Provisional Patent Application Serial No. 18/896,677 entitled “USER INPUT FOR INTERACTING WITH DIFFERENT MAP DATA,” filed September 25, 2024, to U.S. Non-Provisional Patent Application Serial No. 18/896,680 entitled “TECHNIQUES AND USER INTERFACES FOR PROVIDING NAVIGATION ASSISTANCE,” filed September 25, 2024, to U.S. Provisional Patent Application Serial No. 63/541,810 entitled “TECHNIQUES FOR CONFIGURING NAVIGATION OF A DEVICE,” filed September 30, 2023, to U.S. Provisional Patent Application Serial No. 63/541,821 entitled “USER INPUT FOR INTERACTING WITH DIFFERENT MAP DATA,” filed September 30, 2023, and to U.S. Provisional Patent Application Serial No. 63/587,108 entitled “TECHNIQUES AND USER INTERFACES FOR PROVIDING NAVIGATION ASSISTANCE,” filed September 30, 2023, which are incorporated by reference herein in their entireties for all purposes.
BACKGROUND
[0002] Electronic devices are often capable of navigating to destinations. Such destinations can be static (e.g., stationary and/or not dynamically configurable). Such destinations can also be broadly defined such that arrival at the destination is imprecise. Computer systems sometimes provide users with navigation assistance. Such assistance can assist a user in navigating to a target destination. Electronic devices are often capable of navigating to destinations using available map data. While navigating, the electronic device can encounter physical areas with different qualities of map data. The quality of the map data can cause errors resulting in incorrect navigation instructions.
SUMMARY
[0003] Some techniques for configuring navigation of a device using electronic devices are generally cumbersome and inefficient. For example, some existing techniques use a complex and time-consuming user interface, which may include multiple key presses or keystrokes. Existing techniques require more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-operated devices.
[0004] Accordingly, the present technique provides electronic devices with faster, more efficient methods and interfaces for configuring navigation, interacting with different map data, and/or providing navigation assistance. Such methods and interfaces optionally complement or replace other methods for configuring navigation, interacting with different map data, and/or providing navigation assistance. Such methods and interfaces reduce the cognitive burden on a user and produce a more efficient human-machine interface. For battery-operated computing devices, such methods and interfaces conserve power and increase the time between battery charges, for example, by reducing the number of unnecessary, extraneous, and/or repetitive received inputs and reducing battery usage by a display.
[0005] In some embodiments, a method that is performed at a computer system that is in communication with a display component and one or more input devices is described. In some embodiments, the method comprises: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
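Paragraph [0005] describes retargeting navigation from one device to another. A minimal sketch, assuming invented types (Device, FollowNavigator) and using a print call to stand in for the displayed indication:

    struct Device: Equatable {
        let identifier: String
    }

    /// Tracks which device is being navigated with respect to, updating the
    /// displayed indication whenever the target changes.
    final class FollowNavigator {
        private(set) var target: Device

        init(initialTarget: Device) {
            self.target = initialTarget
            display(indicationFor: initialTarget)
        }

        /// Handles a request to navigate with respect to a different device.
        func retarget(to newTarget: Device) {
            guard newTarget != target else { return }
            target = newTarget
            display(indicationFor: newTarget)
        }

        private func display(indicationFor device: Device) {
            print("navigating with respect to \(device.identifier)")
        }
    }

For example, constructing a FollowNavigator with a second device and then calling retarget(to:) with a third device mirrors the first and second indications described above.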
[0006] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices is described. In some embodiments, the one or more programs includes instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
[0007] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices is described. In some embodiments, the one or more programs includes instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
[0008] In some embodiments, a computer system that is in communication with a display component and one or more input devices is described. In some embodiments, the computer system that is in communication with a display component and one or more input devices comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs includes instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
[0009] In some embodiments, a computer system that is in communication with a display component and one or more input devices is described. In some embodiments, the computer system that is in communication with a display component and one or more input devices comprises means for performing each of the following steps: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
[0010] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices. In some embodiments, the one or more programs include instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
[0011] In some embodiments, a method that is performed at a computer system that is in communication with a display component and one or more input devices is described. In some embodiments, the method comprises: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
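The flow of paragraphs [0011]-[0016], moving a device's representation within an image-derived scene and recording the specific arrival point when placement criteria pass, might look roughly like the following; the bounds check is a hypothetical stand-in for the first set of criteria:

    struct Position: Equatable {
        var x: Double
        var y: Double
    }

    /// Moves the marker and, when the placement passes the criteria, records
    /// the specific arrival point used for later navigation to the location.
    final class PlacementController {
        private(set) var markerPosition: Position
        private(set) var configuredArrivalPoint: Position?
        let sceneBounds: ClosedRange<Double> = 0.0...100.0   // hypothetical scene extent

        init(initialPosition: Position) {
            self.markerPosition = initialPosition
        }

        func requestMove(to newPosition: Position) {
            // First set of criteria (illustrative): the point must fall inside the scene.
            guard sceneBounds.contains(newPosition.x), sceneBounds.contains(newPosition.y) else {
                return   // criteria not met; marker and configuration are unchanged
            }
            markerPosition = newPosition          // display the representation at the second position
            configuredArrivalPoint = newPosition  // configure the respective device
        }
    }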
[0012] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices is described. In some embodiments, the one or more programs includes instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
[0013] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices is described. In some embodiments, the one or more programs includes instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
[0014] In some embodiments, a computer system that is in communication with a display component and one or more input devices is described. In some embodiments, the computer system that is in communication with a display component and one or more input devices comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs includes instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
[0015] In some embodiments, a computer system that is in communication with a display component and one or more input devices is described. In some embodiments, the computer system that is in communication with a display component and one or more input devices comprises means for performing each of the following steps: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
[0016] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices. In some embodiments, the one or more programs include instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
[0017] In some embodiments, a method that is performed at a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described. In some embodiments, the method comprises: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
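One way to picture the split control configuration of paragraphs [0017]-[0022]: after a qualifying event at the detected target location, one movement component becomes automatically controlled while the other stays manual. All types here are illustrative assumptions:

    enum ControlManner {
        case automatic
        case manual
    }

    struct MovementComponent {
        let name: String
        var manner: ControlManner = .manual
    }

    /// After a qualifying event at the detected target location, put the first
    /// component under automatic control while the second stays manual.
    func handleTargetEvent(criteriaSatisfied: Bool, first: inout MovementComponent, second: inout MovementComponent) {
        guard criteriaSatisfied else { return }
        first.manner = .automatic   // angle controlled in an automatic manner
        second.manner = .manual     // angle remains under manual control
    }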
[0018] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described. In some embodiments, the one or more programs includes instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
[0019] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described. In some embodiments, the one or more programs includes instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
[0020] In some embodiments, a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs includes instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
[0021] In some embodiments, a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described. In some embodiments, the computer system comprises means for performing each of the following steps: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
[0022] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component. In some embodiments, the one or more programs include instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
[0023] In some embodiments, a method that is performed at a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described. In some embodiments, the method comprises: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
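The three-mode behavior of paragraphs [0023]-[0028] reduces to a three-way branch. In this sketch the mode names are placeholders and the print calls stand in for actual component modification:

    enum OperatingMode {
        case first    // modify only the first movement component
        case second   // modify both movement components
        case third    // modify neither movement component
    }

    /// Mode-dependent branching while the target location is detected.
    func updateMovementComponents(for mode: OperatingMode) {
        switch mode {
        case .first:
            print("automatically modifying the first component only")
        case .second:
            print("automatically modifying the first and second components")
        case .third:
            print("forgoing automatic modification of both components")
        }
    }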
[0024] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described. In some embodiments, the one or more programs includes instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
[0025] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described. In some embodiments, the one or more programs includes instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
[0026] In some embodiments, a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs includes instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
[0027] In some embodiments, a computer system that is in communication with a first movement component and a second movement component different from the first movement component is described. In some embodiments, the computer system comprises means for performing each of the following steps: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
[0028] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component. In some embodiments, the one or more programs include instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
[0029] In some embodiments, a method that is performed at a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component is described. In some embodiments, the method comprises: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
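Paragraphs [0029]-[0034] describe feedback that varies with the system's orientation relative to the target. A toy version, assuming a single heading-error angle and an invented five-degree tolerance:

    enum InputFeedback {
        case alignedPulse      // first feedback: orientation matches the target
        case correctivePulse   // second feedback: orientation needs adjustment
    }

    /// Chooses feedback for the input component from the heading error between
    /// the system's orientation and the bearing to the target.
    func feedback(forHeadingErrorDegrees error: Double, toleranceDegrees: Double = 5.0) -> InputFeedback {
        return abs(error) <= toleranceDegrees ? .alignedPulse : .correctivePulse
    }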
[0030] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component is described. In some embodiments, the one or more programs includes instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
[0031] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component is described. In some embodiments, the one or more programs includes instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
[0032] In some embodiments, a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs includes instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
[0033] In some embodiments, a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, is described. In some embodiments, the computer system comprises means for performing each of the following steps: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
[0034] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component. In some embodiments, the one or more programs include instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
[0035] In some embodiments, a method that is performed at a computer system in communication with an input component is described. In some embodiments, the method comprises: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
[0036] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system in communication with an input component is described. In some embodiments, the one or more programs includes instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
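The error-recovery flow of paragraphs [0035]-[0040], detecting an error while navigating to a selected target and falling back to target reselection, can be modeled as a small state transition; the state names are assumptions:

    enum NavigationState {
        case navigating(target: String)
        case selectingTarget
    }

    /// On an error while navigating, fall back to target reselection;
    /// otherwise keep the current state.
    func handle(error: Error?, in state: NavigationState) -> NavigationState {
        if case .navigating = state, error != nil {
            return .selectingTarget   // initiate a process to select a respective target location
        }
        return state
    }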
[0037] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system in communication with an input component is described. In some embodiments, the one or more programs includes instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
[0038] In some embodiments, a computer system in communication with an input component is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs includes instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
[0039] In some embodiments, a computer system in communication with an input component is described. In some embodiments, the computer system comprises means for performing each of the following steps: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
[0040] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system in communication with an input component. In some embodiments, the one or more programs include instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
[0041] In some embodiments, a method that is performed at a computer system that is in communication with one or more output components is described. In some embodiments, the method comprises: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
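The map-data-quality gate of paragraph [0041] is, at its core, a branch on a quality determination. This sketch collapses quality to two levels, a simplification, since the disclosure does not specify how quality is measured:

    enum MapDataQuality {
        case lower    // first quality: ask the user about the upcoming maneuver
        case higher   // second quality: proceed without requesting input
    }

    /// Whether to request input for an upcoming maneuver, based on the map
    /// data quality of the intended traversal area.
    func requestsManeuverInput(for quality: MapDataQuality) -> Bool {
        switch quality {
        case .lower:  return true
        case .higher: return false
        }
    }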
[0042] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components is described. In some embodiments, the one or more programs includes instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
[0043] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components is described. In some embodiments, the one or more programs includes instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
[0044] In some embodiments, a computer system that is in communication with one or more output components is described. In some embodiments, the computer system that is in communication with one or more output components comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs includes instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
[0045] In some embodiments, a computer system that is in communication with one or more output components is described. In some embodiments, the computer system that is in communication with one or more output components comprises means for performing each of the following steps: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
[0046] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components. In some embodiments, the one or more programs include instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
[0047] In some embodiments, a method that is performed at a computer system that is in communication with one or more output components is described. In some embodiments, the method comprises: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
[0048] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components is described. In some embodiments, the one or more programs includes instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
[0049] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components is described. In some embodiments, the one or more programs includes instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
[0050] In some embodiments, a computer system that is in communication with one or more output components is described. In some embodiments, the computer system that is in communication with one or more output components comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs includes instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
[0051] In some embodiments, a computer system that is in communication with one or more output components is described. In some embodiments, the computer system that is in communication with one or more output components comprises means for performing each of the following steps: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
[0052] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components. In some embodiments, the one or more programs include instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
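By way of illustration only, the conditional behavior summarized in paragraphs [0041]-[0052] (requesting input with respect to an upcoming maneuver only when the intended traversal area lacks adequate map data to determine the maneuver, and forgoing the request otherwise) can be sketched as follows. The two-level quality model and all names are assumptions for this sketch, not the disclosed implementation.

// Hypothetical sketch of the map-data-quality criterion.
enum MapDataQuality {
    case adequate     // e.g., a first quality of map data
    case inadequate   // e.g., a second, lower quality of map data
}

struct TraversalArea {
    var quality: MapDataQuality
}

// While navigating to a destination, request input with respect to the
// upcoming maneuver only when the criterion is met, i.e., the intended
// traversal area includes inadequate map data to determine the maneuver.
func shouldRequestManeuverInput(for area: TraversalArea) -> Bool {
    switch area.quality {
    case .inadequate:
        return true   // request input via the output component(s)
    case .adequate:
        return false  // forgo requesting input
    }
}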
[0053] Executable instructions for performing these functions are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are, optionally, included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
[0054] Thus, devices are provided with faster, more efficient methods and interfaces for configuring navigation of a device, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces may complement or replace other methods for configuring navigation of a device.
DESCRIPTION OF THE FIGURES
[0055] For a better understanding of the various described embodiments, reference should be made to the Detailed Description below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
[0056] FIG. 1 is a block diagram illustrating a system with various components in accordance with some embodiments.
[0057] FIGS. 2A-2D illustrate exemplary user interfaces for navigating a first device with respect to a second device in accordance with some embodiments.
[0058] FIG. 3 is a flow diagram illustrating methods for navigating a first device with respect to a second device in accordance with some embodiments.
[0059] FIGS. 4A-4G illustrate exemplary user interfaces for configuring a device to navigate to a specific location in accordance with some embodiments.
[0060] FIG. 5 is a flow diagram illustrating methods for configuring a device to navigate to a specific location in accordance with some embodiments.
[0061] FIGS. 6A-6F illustrate exemplary diagrams for navigating a movable computer system to a target destination in accordance with some embodiments.
[0062] FIGS. 7A-7C illustrate exemplary diagrams for navigating between objects in a forward manner in accordance with some embodiments.
[0063] FIGS. 8A-8C illustrate exemplary diagrams for navigating between objects in a backward manner in accordance with some embodiments.
[0064] FIG. 9 is a flow diagram illustrating a method for configuring a movable computer system in accordance with some embodiments.
[0065] FIGS. 10A-10B are a flow diagram illustrating a method for selectively modifying movement components of a movable computer system in accordance with some embodiments.
[0066] FIGS. 11A-11D illustrate exemplary diagrams for redirecting a movable computer system in accordance with some embodiments.
[0067] FIG. 12 is a flow diagram illustrating a method for providing feedback based on an orientation of a movable computer system in accordance with some embodiments.
[0068] FIG. 13 is a flow diagram illustrating a method for redirecting a movable computer system in accordance with some embodiments.
[0069] FIGS. 14A-14H illustrate exemplary user interfaces for interacting with different map data in accordance with some embodiments.
[0070] FIG. 15 is a flow diagram illustrating methods for interacting with different map data in accordance with some embodiments.
[0071] FIG. 16 is a flow diagram illustrating methods for interacting with different map data in accordance with some embodiments.
DETAILED DESCRIPTION
[0072] The following description sets forth exemplary techniques for configuring navigation of a device. This description is not intended to limit the scope of this disclosure but is instead provided as a description of example implementations.
[0073] Users need electronic devices that provide effective techniques for configuring navigation of a device. Efficient techniques can reduce a user’s mental load when configuring navigation of a device. This reduction in mental load can enhance user productivity and make the device easier to use. In some embodiments, the techniques described herein can reduce battery usage and processing time (e.g., by providing user interfaces that require fewer user inputs to operate).
[0074] FIG. 1 provides illustrations of exemplary devices for performing operations herein. FIGS. 2A-2D illustrate exemplary user interfaces for navigating a first device with respect to a second device in accordance with some embodiments. FIG. 3 is a flow diagram illustrating methods of navigating a first device with respect to a second device in accordance with some embodiments. The user interfaces in FIGS. 2A-2D are used to illustrate the processes described below, including the processes in FIG. 3. FIGS. 4A-4G illustrate exemplary user interfaces for configuring a device to navigate to a specific location in accordance with some embodiments. FIG. 5 is a flow diagram illustrating methods of configuring a device to navigate to a specific location in accordance with some embodiments. The user interfaces in FIGS. 4A-4G are used to illustrate the processes described below, including the processes in FIG. 5. FIGS. 6A-6F illustrate exemplary diagrams for navigating a movable computer system to a target destination in accordance with some embodiments. FIGS. 7A-7C illustrate exemplary diagrams for navigating between objects in a forward manner in accordance with some embodiments. FIGS. 8A-8C illustrate exemplary diagrams for navigating between objects in a backward manner in accordance with some embodiments. FIG. 9 is a flow diagram illustrating a method for configuring a movable computer system in accordance with some embodiments. FIGS. 10A-10B are a flow diagram illustrating a method for selectively modifying movement components of a movable computer system in accordance with some embodiments. The diagrams in FIGS. 6A-6F, 7A-7C, and 8A-8C are used to illustrate the processes described below, including the processes in FIGS. 9, 10A-10B, and 12. FIGS. 11A-11D illustrate exemplary diagrams for redirecting a movable computer system in accordance with some embodiments. FIG. 12 is a flow diagram illustrating a method for providing feedback based on an orientation of a movable computer system in accordance with some embodiments. FIG. 13 is a flow diagram illustrating a method for redirecting a movable computer system in accordance with some embodiments. The diagrams in FIGS. 11A-11D are used to illustrate the processes described below, including the processes in FIGS. 12-13. FIGS. 14A-14H illustrate exemplary user interfaces for interacting with different map data in accordance with some embodiments. FIG. 15 is a flow diagram illustrating methods for interacting with different map data in accordance with some embodiments. FIG. 16 is a flow diagram illustrating methods for interacting with different map data in accordance with some embodiments. The user interfaces in FIGS. 14A-14H are used to illustrate the processes described below, including the processes in FIGS. 15 and 16.
[0075] The processes below describe various techniques for making user interfaces and/or human-computer interactions more efficient (e.g., by helping the user to quickly and easily provide inputs and preventing user mistakes when operating a device). These techniques sometimes reduce the number of inputs needed for a user (e.g., a person and/or a user) to perform an operation, provide clear and/or meaningful feedback (e.g., visual, acoustic, and/or haptic feedback) to the user so that the user knows what has happened or what to expect, provide additional information and controls without cluttering the user interface, and/or perform certain operations without requiring further input from the user. Since the user can use a device more quickly and easily, these techniques sometimes improve battery life and/or reduce power usage of the device.
[0076] In methods described where one or more steps are contingent on one or more conditions having been satisfied, it should be understood that the described method can be repeated in multiple repetitions so that over the course of the repetitions all of the conditions upon which steps in the method are contingent have been satisfied in different repetitions of the method. For example, if a method requires performing a first step if a condition is satisfied, and a second step if the condition is not satisfied, it should be appreciated that the steps are repeated until the condition has been both satisfied and not satisfied, in no particular order. Thus, a method described with one or more steps that are contingent upon one or more conditions having been satisfied could be rewritten as a method that is repeated until each of the conditions described in the method has been satisfied. This multiple repetition, however, is not required of system or computer readable medium claims where the system or computer readable medium contains instructions for performing conditional operations that require that one or more conditions be satisfied before the operations occur. A person having ordinary skill in the art would also understand that, similar to a method with conditional steps, a system or computer readable storage medium can repeat the steps of a method as many times as are needed to ensure that all of the conditional steps have been performed.
[0077] The terminology used in the description of the various embodiments is for the purpose of describing particular embodiments only and is not intended to be limiting.
[0078] User interfaces for electronic devices, and associated processes for using these devices, are described below. In some embodiments, the device is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad). In other embodiments, the device is a portable, movable, and/or mobile electronic device (e.g., a processor, a smart phone, a smart watch, a tablet, a fitness tracking device, a laptop, a head-mounted display (HMD) device, a communal device, a vehicle, a media device, a smart speaker, a smart display, a robot, a television and/or a personal computing device).
[0079] In some embodiments, the electronic device is a computer system that is in communication with a display component (e.g., by wireless or wired communication). The display component may be integrated into the computer system or may be separate from the computer system. Additionally, the display component may be configured to provide visual output to a display (e.g., a liquid crystal display, an OLED display, or a CRT display). As used herein, “displaying” content includes causing to display the content (e.g., video data rendered or decoded by a display controller) by transmitting, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display component to visually produce the content. In some embodiments, visual output is any output that is capable of being perceived by the human eye, including, but not limited to, images, videos, graphs, charts, and other graphical representations of data.
[0080] In some embodiments, the electronic device is a computer system that is in communication with an audio generation component (e.g., by wireless or wired communication). The audio generation component may be integrated into the computer system or may be separate from the computer system. Additionally, the audio generation component may be configured to provide audio output. Examples of an audio generation component include a speaker, a home theater system, a soundbar, a headphone, an earphone, an earbud, a television speaker, an augmented reality headset speaker, an audio jack, an optical audio output, a Bluetooth audio output, and/or an HDMI audio output. In some embodiments, audio output is any output that is capable of being perceived by the human ear, including, but not limited to, sound waves, music, speech, and/or other audible representations of data.
[0081] In the discussion that follows, an electronic device that includes particular input and output devices is described. It should be understood, however, that the electronic device optionally includes one or more other input and/or output devices, such as physical user-interface devices (e.g., a physical keyboard, a mouse, and/or a joystick).
[0082] FIG. 1 illustrates an example system 100 for implementing techniques described herein. System 100 can perform any of the methods described in FIGS. 3 and/or 5 (e.g., processes 300 and/or 500) and/or portions of these methods.
[0083] In FIG. 1, system 100 includes various components, such as processor(s) 103, RF circuitry(ies) 105, memory(ies) 107, sensors 156 (e.g., image sensor(s), orientation sensor(s), location sensor(s), heart rate monitor(s), and/or temperature sensor(s)), input device(s) 158 (e.g., camera(s) (e.g., a periscope camera, a telephoto camera, a wide-angle camera, and/or an ultra-wide-angle camera), depth sensor(s), microphone(s), touch-sensitive surface(s), hardware input mechanism(s), and/or rotatable input mechanism(s)), mobility component(s) (e.g., actuator(s) (e.g., pneumatic actuator(s), hydraulic actuator(s), and/or electric actuator(s)), motor(s), wheel(s), movable base(s), rotatable component(s), translation component(s), and/or rotatable base(s)), and output device(s) 160 (e.g., speaker(s), display component(s), audio generation component(s), haptic output device(s), display screen(s), projector(s), and/or touch-sensitive display(s)). These components optionally communicate over communication bus(es) 123 of the system. Although shown as separate components, in some implementations, various components can be combined and function as a single component; for example, a sensor can also function as an input device.
[0084] In some embodiments, system 100 is a mobile and/or movable device (e.g., a tablet, a smart phone, a laptop, a head-mounted display (HMD) device, and/or a smartwatch). In other embodiments, system 100 is a desktop computer, an embedded computer, and/or a server.
[0085] In some embodiments, processor(s) 103 includes one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some embodiments, memory(ies) 107 is one or more non-transitory computer-readable storage mediums (e.g., flash memory and/or random-access memory) that store computer-readable instructions configured to be executed by processor(s) 103 to perform techniques described herein.
[0086] In some embodiments, RF circuitry(ies) 105 includes circuitry for communicating with electronic devices and/or networks (e.g., the Internet, intranets, and/or a wireless network, such as cellular networks and wireless local area networks (LANs)). In some embodiments, RF circuitry(ies) 105 includes circuitry for communicating using near-field communication and/or short-range communication, such as Bluetooth® or Ultra-wideband.
[0087] In some embodiments, display(s) 121 includes one or more monitors, projectors, and/or screens. In some embodiments, display(s) 121 includes a first display for displaying images to a first eye of a user and a second display for displaying images to a second eye of the user. In such embodiments, corresponding images can be simultaneously displayed on the first display and the second display. Optionally, the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides the user with the illusion of depth of the objects on the displays. In some embodiments, display(s) 121 is a single display. In such embodiments, corresponding images are simultaneously displayed in a first area and a second area of the single display for each eye of the user. Optionally, the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides a user with the illusion of depth of the objects on the single display.
[0088] In some embodiments, system 100 includes touch-sensitive surface(s) 115 for receiving user inputs, such as tap inputs and swipe inputs. In some embodiments, display(s) 121 and touch-sensitive surface(s) 115 form touch-sensitive display(s).
[0089] In some embodiments, sensor(s) 156 includes sensors for detecting various conditions. In some embodiments, sensor(s) 156 includes orientation sensors (e.g., orientation sensor(s) 111) for detecting orientation and/or movement of platform 150. For example, system 100 uses orientation sensors to track changes in the location and/or orientation (sometimes collectively referred to as position) of system 100, such as with respect to physical objects in the physical environment. In some embodiments, sensor(s) 156 includes one or more gyroscopes, one or more inertial measurement units, and/or one or more accelerometers. In some embodiments, sensor(s) 156 includes a global positioning sensor (GPS) for detecting a GPS location of platform 150. In some embodiments, sensor(s) 156 includes a radar system, a LIDAR system, a sonar system, image sensors (e.g., image sensor(s) 109, visible light image sensor(s), and/or infrared sensor(s)), depth sensor(s), rangefinder(s), and/or motion detector(s). In some embodiments, sensor(s) 156 includes sensors that are in an interior portion of system 100 and/or sensors that are on an exterior of system 100. In some embodiments, system 100 uses sensor(s) 156 (e.g., interior sensors) to detect a presence and/or state (e.g., location and/or orientation) of a passenger in the interior portion of system 100. In some embodiments, system 100 uses sensor(s) 156 (e.g., external sensors) to detect a presence and/or state of an object external to system 100. In some embodiments, system 100 uses sensor(s) 156 to receive user inputs, such as hand gestures and/or other air gestures. In some embodiments, system 100 uses sensor(s) 156 to detect the location and/or orientation of system 100 in the physical environment. In some embodiments, system 100 uses sensor(s) 156 to navigate system 100 along a planned route, around obstacles, and/or to a destination location. In some embodiments, sensor(s) 156 include one or more sensors for identifying and/or authenticating a user of system 100, such as a fingerprint sensor and/or facial recognition sensor.
[0090] In some embodiments, image sensor(s) includes one or more visible light image sensors, such as charged coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects. In some embodiments, image sensor(s) includes one or more infrared (IR) sensor(s), such as a passive IR sensor or an active IR sensor, for detecting infrared light. For example, an active IR sensor can include an IR emitter, such as an IR dot emitter, for emitting infrared light. In some embodiments, image sensor(s) includes one or more camera(s) configured to capture movement of physical objects. In some embodiments, image sensor(s) includes one or more depth sensor(s) configured to detect the distance of physical objects from system 100. In some embodiments, system 100 uses CCD sensors, cameras, and depth sensors in combination to detect the physical environment around system 100. In some embodiments, image sensor(s) includes a first image sensor and a second image sensor different from the first image sensor. In some embodiments, system 100 uses image sensor(s) to receive user inputs, such as hand gestures and/or other air gestures. In some embodiments, system 100 uses image sensor(s) to detect the location and/or orientation of system 100 in the physical environment.
[0091] In some embodiments, system 100 uses orientation sensor(s) for detecting orientation and/or movement of system 100. For example, system 100 can use orientation sensor(s) to track changes in the location and/or orientation of system 100, such as with respect to physical objects in the physical environment. In some embodiments, orientation sensor(s) includes one or more gyroscopes, one or more inertial measurement units, and/or one or more accelerometers.
[0092] In some embodiments, system 100 uses microphone(s) to detect sound from one or more users and/or the physical environment of the one or more users. In some embodiments, microphone(s) includes an array of microphones (including a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space (e.g., inside system 100 and/or outside of system 100) of the physical environment.
[0093] In some embodiments, input device(s) 158 includes one or more mechanical and/or electrical devices for detecting input, such as button(s), slider(s), knob(s), switch(es), remote control(s), joystick(s), touch-sensitive surface(s), keypad(s), microphone(s), and/or camera(s). In some embodiments, input device(s) 158 include one or more input devices inside system 100. In some embodiments, input device(s) 158 include one or more input devices (e.g., a touch-sensitive surface and/or keypad) on an exterior of system 100.
[0094] In some embodiments, output device(s) 160 include one or more devices, such as display(s), monitor(s), projector(s), speaker(s), light(s), and/or haptic output device(s). In some embodiments, output device(s) 160 includes one or more external output devices, such as external display screen(s), external light(s), and/or external speaker(s). In some embodiments, output device(s) 160 includes one or more internal output devices, such as internal display screen(s), internal light(s), and/or internal speaker(s).
[0095] In some embodiments, environmental controls 162 includes mechanical and/or electrical systems for monitoring and/or controlling conditions of an internal portion (e.g., cabin) of system 100. In some embodiments, environmental controls 162 includes fan(s), heater(s), air conditioner(s), and/or thermostat(s) for controlling the temperature and/or airflow within the interior portion of system 100.
[0096] In some embodiments, mobility component(s) includes mechanical and/or electrical components that enable a platform to move and/or assist in the movement of the platform. In some embodiments, mobility system 164 includes powertrain(s), drivetrain(s), motor(s) (e.g., an electrical motor), engine(s), power source(s) (e.g., battery(ies)), transmission(s), suspension system(s), speed control system(s), and/or steering system(s). In some embodiments, one or more elements of mobility component(s) are configured to be controlled autonomously or manually (e.g., via system 100 and/or input device(s) 158).
[0097] In some embodiments, system 100 performs monetary transactions with or without another computer system. For example, system 100, or another computer system associated with and/or in communication with system 100 (e.g., via a user account described below), is associated with a payment account of a user, such as a credit card account or a checking account. To complete a transaction, system 100 can transmit a key to an entity from which goods and/or services are being purchased that enables the entity to charge the payment account for the transaction. As another example, system 100 stores encrypted payment account information and transmits this information to entities from which goods and/or services are being purchased to complete transactions.
[0098] System 100 optionally conducts other transactions with other systems, computers, and/or devices. For example, system 100 conducts transactions to unlock another system, computer, and/or device and/or to be unlocked by another system, computer, and/or device. Unlocking transactions optionally include sending and/or receiving one or more secure cryptographic keys using, for example, RF circuitry(ies) 105.
[0099] In some embodiments, system 100 is capable of communicating with other computer systems and/or electronic devices. For example, system 100 can use RF circuitry(ies) 105 to access a network connection that enables transmission of data between systems for the purpose of communication. Example communication sessions include phone calls, e-mails, SMS messages, and/or videoconferencing communication sessions.
[0100] In some embodiments, videoconferencing communication sessions include transmission and/or receipt of video and/or audio data between systems participating in the videoconferencing communication sessions, including system 100. In some embodiments, system 100 captures video and/or audio content using sensor(s) 156 to be transmitted to the other system(s) in the videoconferencing communication sessions using RF circuitry(ies) 105. In some embodiments, system 100 receives, using the RF circuitry(ies) 105, video and/or audio from the other system(s) in the videoconferencing communication sessions, and presents the video and/or audio using output device(s) 160, such as display(s) 121 and/or speaker(s). In some embodiments, the transmission of audio and/or video between systems is near real-time, such as being presented to the other system(s) with a delay of less than 0.1, 0.5, 1, or 3 seconds from the time of capturing a respective portion of the audio and/or video.
[0101] In some embodiments, the system 100 generates tactile (e.g., haptic) outputs using output device(s) 160. In some embodiments, output device(s) 160 generates the tactile outputs by displacing a moveable mass relative to a neutral position. In some embodiments, tactile outputs are periodic in nature, optionally including frequency(ies) and/or amplitude(s) of movement in two or three dimensions. In some embodiments, system 100 generates a variety of different tactile outputs differing in frequency(ies), amplitude(s), and/or duration/number of cycle(s) of movement included. In some embodiments, tactile output pattern(s) includes a start buffer and/or an end buffer during which the movable mass gradually speeds up and/or slows down at the start and/or at the end of the tactile output, respectively.
[0102] In some embodiments, tactile outputs have a corresponding characteristic frequency that affects a “pitch” of a haptic sensation that a user feels. For example, higher frequency(ies) corresponds to faster movement(s) by the moveable mass whereas lower frequency(ies) corresponds to slower movement(s) by the moveable mass. In some embodiments, tactile outputs have a corresponding characteristic amplitude that affects a “strength” of the haptic sensation that the user feels. For example, higher amplitude(s) corresponds to movement over a greater distance by the moveable mass, whereas lower amplitude(s) corresponds to movement over a smaller distance by the moveable mass. In some embodiments, the “pitch” and/or “strength” of a tactile output varies over time.
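By way of illustration only, the tactile output characteristics just described (frequency, amplitude, duration/number of cycles, and start/end buffers) could be modeled as a simple data structure. The following Swift sketch is a hypothetical illustration; the type and field names are assumptions, not the disclosed implementation.

// Hypothetical model of a tactile output pattern.
struct TactileOutputPattern {
    var frequencyHz: Double        // higher values: faster mass movement ("pitch")
    var amplitude: Double          // 0...1; larger values: greater mass travel ("strength")
    var cycleCount: Int            // duration expressed as cycles of movement
    var startBufferSeconds: Double // mass gradually speeds up at the start
    var endBufferSeconds: Double   // mass gradually slows down at the end
}

// Example pattern: a short, soft tick.
let softTick = TactileOutputPattern(
    frequencyHz: 80,
    amplitude: 0.3,
    cycleCount: 2,
    startBufferSeconds: 0.01,
    endBufferSeconds: 0.02
)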
[0103] In some embodiments, tactile outputs are distinct from movement of system 100. For example, system 100 can include tactile output device(s) that move a moveable mass to generate tactile output and can include other moving part(s), such as motor(s), wheel(s), axle(s), control arm(s), and/or brakes that control movement of system 100. Although movement and/or cessation of movement of system 100 generates vibrations and/or other physical sensations in some situations, these vibrations and/or other physical sensations are distinct from tactile outputs. In some embodiments, system 100 generates tactile output independent from movement of system 100. For example, system 100 can generate a tactile output without accelerating, decelerating, and/or moving system 100 to a new position.
[0104] In some embodiments, system 100 detects gesture input(s) made by a user. In some embodiments, gesture input(s) includes touch gesture(s) and/or air gesture(s), as described herein. In some embodiments, touch-sensitive surface(s) 115 identify touch gestures based on contact patterns (e.g., different intensities, timings, and/or motions of objects touching or nearly touching touch-sensitive surface(s) 115). Thus, touch-sensitive surface(s) 115 detect a gesture by detecting a respective contact pattern. For example, detecting a finger-down event followed by detecting a finger-up (e.g., liftoff) event at (e.g., substantially) the same position as the finger-down event (e.g., at the position of a user interface element) can correspond to detecting a tap gesture on the user interface element. As another example, detecting a finger-down event followed by detecting movement of a contact, and subsequently followed by detecting a finger-up (e.g., liftoff) event, can correspond to detecting a swipe gesture. Additional and/or alternative touch gestures are possible.
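By way of illustration only, the contact-pattern classification described in paragraph [0104] (a finger-down event followed by a finger-up event at substantially the same position corresponds to a tap, while intervening movement corresponds to a swipe) can be sketched as follows. The movement threshold and all names are assumptions made for this sketch.

// Hypothetical classification of a completed contact from its finger-down
// and finger-up positions: little or no movement is a tap; otherwise, a swipe.
struct Point {
    var x: Double
    var y: Double
}

enum TouchGesture {
    case tap
    case swipe
}

func classifyGesture(fingerDown: Point, fingerUp: Point,
                     movementThreshold: Double = 10.0) -> TouchGesture {
    let dx = fingerUp.x - fingerDown.x
    let dy = fingerUp.y - fingerDown.y
    let distance = (dx * dx + dy * dy).squareRoot()
    return distance <= movementThreshold ? .tap : .swipe
}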
[0105] In some embodiments, an air gesture is a gesture that a user performs without touching input device(s) 158. In some embodiments, air gestures are based on detected motion of a portion (e.g., a hand, a finger, and/or a body) of a user through the air. In some embodiments, air gestures include motion of the portion of the user relative to a reference. Example references include a distance of a hand of a user relative to a physical object, such as the ground, an angle of an arm of the user relative to the physical object, and/or movement of a first portion (e.g., hand or finger) of the user relative to a second portion (e.g., shoulder, another hand, or another finger) of the user. In some embodiments, detecting an air gesture includes detecting absolute motion of the portion of the user, such as a tap gesture that includes movement of a hand in a predetermined pose by a predetermined amount and/or speed, or a shake gesture that includes a predetermined speed or amount of rotation of a portion of the user.
[0106] In some embodiments, detecting one or more inputs includes detecting speech of a user. In some embodiments, system 100 uses one or more microphones of input device(s) 158 to detect the user speaking one or more words. In some embodiments, system 100 parses and/or communicates information to one or more other systems to determine contents of the speech of the user, including identifying words and/or obtaining a semantic understanding of the words. For example, system processor(s) 103 can be configured to perform natural language processing to detect one or more words and/or determine a likely meaning of the one or more words in the sequence spoken by the user. Additionally or alternatively, in some embodiments, the system 100 determines the meaning of the one or more words in the sequence spoken based upon a context of the user determined by the system 100.
[0107] In some embodiments, system 100 outputs spatial audio via output device(s) 160. In some embodiments, spatial audio is output in a particular position. For example, system 100 can play a notification chime having one or more characteristics that cause the notification chime to be generated as if emanating from a first position relative to a current viewpoint of a user (e.g., “spatializing” and/or “spatialization” including audio being modified in amplitude, filtered, and/or delayed to provide a perceived spatial quality to the user).
[0108] In some embodiments, system 100 presents visual and/or audio feedback indicating a position of a user relative to a current viewpoint of another user, thereby informing the other user about an updated position of the user. In some embodiments, playing audio corresponding to a user includes changing one or more characteristics of audio obtained from another computer system to mimic an effect of placing an audio source that generates the playback of audio within a position corresponding to the user, such as a position within a three-dimensional environment that the user moves to, spawns at, and/or is assigned to. In some embodiments, a relative magnitude of audio at one or more frequencies and/or groups of frequencies is changed, one or more filters are applied to audio (e.g., directional audio filters), and/or the magnitude of audio provided via one or more channels is changed (e.g., increased or decreased) to create the perceived effect of the physical audio source. In some embodiments, the simulated position of the simulated audio source relative to a floor of the three-dimensional environment matches an elevation of a head of a participant providing audio that is generated by the simulated audio source, or is a predetermined one or more elevations relative to the floor of the three-dimensional environment. In some embodiments, in accordance with a determination that the position of the user will correspond to a second position, different from the first position, and that one or more first criteria are satisfied, system 100 presents feedback including generating audio as if emanating from the second position.
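By way of illustration only, the spatialization described in paragraphs [0107]-[0108] (modifying amplitude and channel balance so that audio is perceived as emanating from a position relative to a viewpoint) could be approximated as follows. The inverse-square falloff and the pan computation are simplifying assumptions for this sketch, not the disclosed technique.

// Hypothetical spatialization parameters derived from a source position.
struct SpatialAudioParameters {
    var gain: Double // overall amplitude, attenuated with distance
    var pan: Double  // -1.0 (fully left) ... +1.0 (fully right)
}

// Listener is at the origin facing the -z direction; +x is to the right.
func spatialize(sourceX: Double, sourceZ: Double) -> SpatialAudioParameters {
    let distance = max(0.5, (sourceX * sourceX + sourceZ * sourceZ).squareRoot())
    let gain = min(1.0, 1.0 / (distance * distance))  // inverse-square falloff
    let pan = max(-1.0, min(1.0, sourceX / distance)) // crude azimuth cue
    return SpatialAudioParameters(gain: gain, pan: pan)
}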
[0109] In some embodiments, system 100 communicates with one or more accessory devices. In some embodiments, one or more accessory devices is integrated with system 100. In some embodiments, one or more accessory devices is external to system 100. In some embodiments, system 100 communicates with accessory device(s) using RF circuitry(ies) 105 and/or using a wired connection. In some embodiments, system 100 controls operation of accessory device(s), such as door(s), window(s), lock(s), speaker(s), light(s), and/or camera(s). For example, system 100 can control operation of a motorized door of system 100. As another example, system 100 can control operation of a motorized window included in system 100. In some embodiments, accessory device(s), such as remote control(s) and/or other computer systems (e.g., smartphones, media players, tablets, computers, and/or wearable devices) functioning as input devices control operations of system 100. For example, a wearable device (e.g., a smart watch) functions as a key to initiate operation of an actuation system of system 100. In some embodiments, system 100 acts as an input device to control operations of another system, device, and/or computer, such as system 100 functioning as a key to initiate operation of an actuation system of a platform associated with another system, device, and/or computer.
[0110] In some embodiments, digital assistant(s) help a user perform various functions using system 100. For example, a digital assistant can provide weather updates, set alarms, and perform searches locally and/or using a network connection (e.g., the Internet) via a natural-language interface. In some embodiments, a digital assistant accepts requests at least partially in the form of natural language commands, narratives, requests, statements, and/or inquiries. In some embodiments, a user requests an informational answer and/or performance of a task using the digital assistant. For example, in response to receiving the question “What is the current temperature?,” the digital assistant answers “It is 30 degrees.” As another example, in response to receiving a request to perform a task, such as “Please invite my family to dinner tomorrow,” the digital assistant can acknowledge the request by playing spoken words, such as “Yes, right away,” and then send the requested calendar invitation on behalf of the user to each family member of the user listed in a contacts list for the user. In some embodiments, during performance of a task requested by the user, the digital assistant engages with the user in a sustained conversation involving multiple exchanges of information over a period of time. Other ways of interacting with a digital assistant are possible to request performance of a task and/or request information. For example, the digital assistant can respond to the user in other forms, e.g., displayed alerts, text, videos, animations, music, etc. In some embodiments, the digital assistant includes a client-side portion executed on system 100 and a server-side portion executed on a server in communication with system 100. The client-side portion can communicate with the server through a network connection using RF circuitry(ies) 105. The client-side portion can provide client-side functionalities, such as input and/or output processing and/or communication with the server. In some embodiments, the server-side portion provides server-side functionalities for any number of client-side portions of multiple systems.
[0111] In some embodiments, system 100 is associated with one or more user accounts. In some embodiments, system 100 saves and/or encrypts user data, including files, settings, and/or preferences in association with particular user accounts. In some embodiments, user accounts are password-protected and system 100 requires user authentication before accessing user data associated with an account. In some embodiments, user accounts are associated with other system(s), device(s), and/or server(s). In some embodiments, associating one user account with multiple systems enables those systems to access, update, and/or synchronize user data associated with the user account. For example, the systems associated with a user account can have access to purchased media content, a contacts list, communication sessions, payment information, saved passwords, and other user data. Thus, in some embodiments, user accounts provide a secure mechanism for a customized user experience.
[0112] Attention is now directed towards embodiments of user interfaces (“UI”) and associated processes that are implemented on an electronic device, such as system 100.
[0113] FIGS. 2A-2D illustrate exemplary user interfaces for navigating a first device with respect to a second device in accordance with some embodiments. The user interfaces in FIGS. 2A-2D are used to illustrate the processes described below, including the processes in FIG. 3. Throughout the user interfaces, user input is illustrated using a circular shape with dotted lines (e.g., touch user input 214 in FIG. 2A). It should be recognized that the user input can be any type of user input, including a tap on a touch-sensitive screen, a button press, a gaze toward a control, a voice request with an identification of a control, a gesture made by a user and captured by a camera, and/or any other affirmative action performed by a user. In some examples, a single representation of a user input in a figure (1) includes one or more different types of user input and/or (2) represents different types of user input to result in different operations. For example, a single illustrated user input can be a tap input, a tap-and-hold input, and/or a swipe gesture.
[0114] FIG. 2A illustrates user interface 210 for navigating a first device with respect to a second device using computer system 200 in accordance with some embodiments. In this example, computer system 200 includes a touchscreen display 202. In some embodiments, computer system 200 is, or includes one or more of the features of, system 100 described above.
[0115] In FIG. 2A, computer system 200 displays user interface 210 on touchscreen display 202. User interface 210 includes navigation control user interface element 212. User interface 210 is a lock screen interface, displaying time and date, as well as navigation control user interface element 212 presented as an overlay or notification. In other examples, a user interface that includes navigation control user interface element 212 can include a maps or navigation application interface (e.g., such that navigation control user interface element 212 is a native interface inside of such application), or any other application or operating system interface (e.g., overlaid as a notification). Navigation control user interface element 212 includes an indication that another device (a “first” device in this example) is navigating with respect to computer system 200 (a “second” device in this example), where it states: “Device is being navigated with respect to you.” The use of the word “you” indicates that the first device is navigating with respect to the current user of computer system 200 (e.g., based on the user being logged in), or is navigating with respect to the current device on which the notification is being displayed (e.g., computer system 200, regardless of user affiliation). Navigation control user interface element 212 can include one or more controls (e.g., affordances, buttons, and/or icons) or be configured to receive user input in some other way for causing one or more actions. In this example, the entire displayed area of navigation control user interface element 212 can receive user input to cause an action. In FIG. 2A, computer system 200 receives a touch user input 214 (e.g., a tap, a tap-and-hold, or a hard press) on an operative portion (e.g., the displayed area) of navigation control user interface element 212.
[0116] The example illustrated in FIGS. 2A-2D is applicable to many different scenarios. In some embodiments, the first device is associated with a different user than the second device. For example, the first device can have been instructed to navigate with respect to the second device. In some embodiments, the instruction originates from the first device (e.g., by a user of the first device (e.g., “follow that device”)), and/or the second device (e.g., by a user of the second device (e.g., “follow me”)). In some embodiments, the instruction can originate from another device (e.g., a third device) that is not the first or second device. The second device can belong to a member of a particular group (e.g., of devices (e.g., “my devices”), of users (e.g., family group, friend group, or any arbitrarily defined group), or any other permitted user that the first device user would like to navigate with respect to (e.g., a recent contact, a message recipient or sender, a contact that has shared their location, or the like)).
[0117] In some embodiments, the first device is associated with the same user as the second device. For example, the user of the second device can instruct one of their own devices (e.g., associated with their same user account) that has the ability to change position (e.g., a toy and/or a drone) to navigate to the user’s current device (e.g., smartphone) location or the location of another device. Navigating with respect to another device can include providing and/or receiving directions to (or being led to) a location corresponding to the other device. In some embodiments, the location corresponding to the other device is the location of the other device (e.g., the same location). In some embodiments, the location corresponding to the other device is a location within a predetermined distance from the other device (e.g., a different location, such as a safe area near the other device). For example, the first device can navigate to a location adjacent to the second device, so that the devices are close enough that a user could go to the first device when needed but not so close that the first device is on top of or collides with the user (e.g., holding the second device). In some embodiments, the device being navigated can receive location information and/or step-by-step instructions to the other device, so that it will end up at the location of the device being navigated to. In some embodiments, the device being navigated to (or another device) can provide location information and/or step-by-step instructions that periodically update so that the device being navigated can follow and/or eventually reach the device being navigated to. The device being navigated can receive updated location information of the target device by direct communication (e.g., between the two devices) or via one or more intermediate systems (e.g., a notification server).
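By way of illustration only, the periodic-update behavior described in paragraph [0117] (a navigated device repeatedly receiving the target device's latest location and stopping a safe distance short of it) can be sketched as follows. The protocol, the fixed offset, and all names are hypothetical assumptions, not the disclosed implementation.

// Hypothetical sketch of one "follow" step toward a periodically updated target.
struct Coordinate {
    var latitude: Double
    var longitude: Double
}

protocol TargetLocationSource {
    // Supplies the target device's latest location, whether obtained by
    // direct communication or via an intermediate system.
    func latestLocation() -> Coordinate
}

// One follow step: aim for a point offset from the target so the navigated
// device stops near, rather than on top of, the device being navigated to.
func nextWaypoint(from source: TargetLocationSource,
                  safeOffsetDegrees: Double = 0.0001) -> Coordinate {
    let target = source.latestLocation()
    return Coordinate(latitude: target.latitude - safeOffsetDegrees,
                      longitude: target.longitude)
}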
[0118] FIG. 2B illustrates computer system 200 in response to receiving touch user input 214. In this example, a user of computer system 200 would like to control navigation of the first device to navigate with respect to a different, third device (e.g., not computer system 200). In response to touch user input 214, computer system 200 displays navigation control user interface elements 216 and 218. Also, in response to touch user input 214, computer system 200 alters the display of user interface 210 by dimming or darkening in order to emphasize that action is being taken with respect to interface elements 212, 216, and 218.
[0119] Navigation control user interface element 216 includes an indication that navigation of the first device can be changed to another device (e.g., computer system), where it states: “Change navigation to Kyle”. In this example, the other device (e.g., a “third” device in this example) is identified by the name of a user associated with the third device (e.g., the user named “Kyle” in this example). As shown, user interface element 216 indicates an option to transfer navigation to another particular device. In some embodiments, navigation control user interface element 216 can indicate or provide a plurality of options for selecting one of a group of devices to which navigation can be transferred (e.g., by stating instead “Change navigation to another user or device,” which when selected can display a plurality of user or device options). In some embodiments, the indication that navigation of the first device can be changed to another device (e.g., computer system) can be an icon and/or identifier of a user account (e.g., corresponding to a contact from a contacts application and/or an address book application). In some embodiments, the indication that navigation of the first device can be changed to another device can be an icon and/or identifier of a specific device (e.g., determined using a communication channel, such as an identifier of a device that is broadcast via a Bluetooth channel to other devices when in range). In some embodiments, information used for determining another device is retrieved from one or more local and/or remote resources (e.g., from a cloud storage service and/or a location service).
[0120] User interface 210 also includes navigation control user interface element 218, which includes an indication that navigation with respect to the second device can be stopped, where it states: “Stop navigating with respect to you”. Here, “you” indicates that the current device is being used as the navigation target for the first device. For example, user input on navigation control user interface element 218 can cause navigation with respect to computer system 200 to stop (e.g., and display of interface elements 212, 216, and 218 to cease). In FIG. 2B, computer system 200 receives a touch user input 220 (e.g., a tap, a tap-and-hold, or a hard press) on an operative portion (e.g., any portion in this example) of navigation control user interface element 216.
[0121] FIG. 2C illustrates computer system 200 in response to receiving touch user input 220. In this example, a user of computer system 200 would like to cause the first device to navigate with respect to a different, third device (e.g., not computer system 200). In response to touch user input 220, computer system 200 displays navigation control user interface element 222 and ceases displaying navigation control user interface element 212. Also, in response to touch user input 220, computer system 200 causes the first device to cease navigating with respect to computer system 200 and begin navigating with respect to the third device. As illustrated in FIG. 2C, navigation control user interface element 222 includes an indication that navigation of the first device has been changed to another device (e.g., another computer system), where it states: “Device is being navigated with respect to Kyle.” In this example, the other device is associated with the user identified as “Kyle.”
[0122] In the example of FIG. 2C, the first device and the second device are associated with one or more user accounts (e.g., the same account and/or different accounts) that are not the same as (and do not include) the Kyle user account. Stated differently, the Kyle account corresponds to a different user account than those of the first device and the second device. In this example, navigation with respect to the third device will result in navigating with respect to a device corresponding to (e.g., owned and/or managed by) a different user account than that of the first device and second device. In some embodiments, designating the device associated with Kyle as the target of the first device’s navigation results in the user account of Kyle and/or Kyle’s device being designated a “guest” user/device of the second device. That is, when Kyle’s device is made the target of navigation, Kyle’s device can be granted (e.g., by the first device and/or by the second device, or users associated therewith) the right to perform one or more operations for controlling navigation of the first device. For example, the third device can be granted one or more of the abilities to: cease navigation with respect to themselves/their device (e.g., “don’t navigate with respect to me”), return the navigation target to the user and/or device that sent it to them (e.g., “navigate with respect to the second device again”), or assign navigation to another user or associated device (e.g., “don’t navigate with respect to me, navigate with respect to a fourth (different) device instead”). This grant of rights to the third device can be temporary (e.g., expires after a predefined amount of time, or after a condition occurs or is met). In this example, the second device was not designated a “guest” because it corresponds to the same user account as the first device (and/or the user account and/or the second device are already established as an administrator (e.g., having a non-guest privilege level) for the first device). The first, second, and/or third devices can each be a different type of device. In this example, the second device (computer system 200) is a smartphone, the first device is a wearable device (that moves via user movement), and the third device is a laptop computer.
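By way of illustration only, the temporary grant of rights described in paragraph [0122] could be modeled as follows. The specific rights, the expiry policy, and all names are assumptions made for this sketch, not the disclosed implementation.

// Hypothetical model of a temporary "guest" grant for controlling navigation.
import Foundation

enum GuestRight {
    case stopNavigation     // "don't navigate with respect to me"
    case returnNavigation   // "navigate with respect to the second device again"
    case reassignNavigation // "navigate with respect to a fourth device instead"
}

struct GuestGrant {
    let rights: Set<GuestRight>
    let expiresAt: Date

    // A right is usable only while the temporary grant has not expired.
    func allows(_ right: GuestRight, at now: Date = Date()) -> Bool {
        now < expiresAt && rights.contains(right)
    }
}

// Example: a one-hour grant that allows stopping or returning navigation.
let grant = GuestGrant(rights: [.stopNavigation, .returnNavigation],
                       expiresAt: Date().addingTimeInterval(3600))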
[0123] In FIG. 2C, computer system 200 receives a touch user input 224 (e.g., a tap, a tap-and-hold, or a hard press) on an operative portion (e.g., any portion in this example) of user interface element 222.
[0124] FIG. 2D illustrates computer system 200 in response to receiving touch user input 224. In response to touch user input 224, computer system 200 displays navigation control user interface elements 226 and 228. Also, in response to touch user input 224, computer system 200 alters the display of user interface 210 by dimming or darkening it in order to emphasize that action is being taken with respect to interface elements 222, 226, and 228. Navigation control user interface element 226 includes an indication that the navigation target of the first device can be changed (back) to the second device (e.g., computer system 200), where it states: “Change navigation to you.” For example, a user input (such as 224) on navigation control user interface element 226 would cause computer system 200 to return to the state shown in FIG. 2A, where it displays navigation control user interface element 212 indicating that the first device is navigating with respect to computer system 200 (e.g., represented as “you”).
[0125] Navigation control user interface element 228 includes an indication that navigation of the first device with respect to the third device (e.g., the device associated with Kyle) can be stopped, where it states: “Stop navigating with respect to Kyle”. For example, a user input (such as 224) on user interface element 228 would cease navigation of the first device with respect to the third device associated with Kyle (e.g., navigation instructions would cease at the first device). For example, in response to user input on user interface element 228, computer system 200 can display user interface 210 without displaying navigation control user interface element 212 (e.g., just display a normal lock screen).
[0126] FIG. 3 is a flow diagram illustrating a method for navigating a first device with respect to a second device using a computer system in accordance with some embodiments. Process 300 is performed at a computer system (e.g., system 100). The computer system is in communication with a display component and one or more input devices. Some operations in process 300 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.

[0127] As described below, process 300 provides an intuitive way for navigating a first device with respect to a second device. The method reduces the cognitive burden on a user for navigating a first device with respect to a second device, thereby creating a more efficient human-machine interface. For battery operated computing devices, enabling a user to configure navigation of a device faster and more efficiently conserves power and increases the time between battery charges.
[0128] In some embodiments, process 300 is performed at a computer system (e.g., 200) that is in communication with a display component (e.g., 202) (e.g., a display screen and/or a touch-sensitive display) and one or more input devices (e.g., 202) (e.g., a physical input mechanism, a camera, a touch-sensitive display, a microphone, and/or a button). In some embodiments, the computer system is a watch, a phone, a tablet, a processor, a head-mounted display (HMD) device, and/or a personal computing device. In some embodiments, the computer system is in communication with one or more output devices (e.g., a display screen, a touch-sensitive display, a haptic output device, and/or a speaker).
[0129] The computer system displays (302), via the display component, a first indication (e.g., 212 of FIG. 2A) that a first device (e.g., the device referenced in 212 of FIGS. 2A-2D) is navigating with respect to a second device (e.g., 200) different from the first device. In some embodiments, the first indication is displayed on a lock screen of the computer system (e.g., a user interface of the computer system that is configured to be allowed to perform fewer operations than an unlocked screen of the computer system) (e.g., the lock screen is displayed when the computer system is in a locked state (e.g., the computer system is powered on and operational but ignores most, if not all, input)). In some embodiments, the first indication is displayed in a user interface of a mapping and/or navigation application. In some embodiments, the first device is different from the computer system. In some embodiments, the second device is the computer system. In some embodiments, the second device is different from the computer system. In some embodiments, the computer system is logged into a first user account. In some embodiments, the first device is logged into the first user account. In some embodiments, the first device is logged into a user account different from the first user account. In some embodiments, the second device is logged into the first user account. In some embodiments, the second device is logged into a user account different from the first user account. In some embodiments, navigating with respect to the second device includes navigating to locations corresponding to a current location of the second device as the second device moves. In some embodiments, navigating with respect to the second device includes following the second device.
[0130] While the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) is navigating with respect to the second device, the computer system receives (304), via the one or more input devices, a request (e.g., 220) to have the first device navigate with respect to a third device (e.g., device associated with Kyle referenced in 216 of FIG. 2B) instead of the second device (e.g., 200), wherein the third device is different from the first device (e.g., the device referenced in 212 of FIGS. 2A-2D). In some embodiments, the request is received after or while displaying the first indication. In some embodiments, the third device is different from the computer system. In some embodiments, the request corresponds to input directed to a user interface including the first indication. In some embodiments, the third device is logged into a user account different from the first user account. In some embodiments, the third device is logged into the first user account.
[0131] In response to receiving the request, the computer system displays (306), via the display component, a second indication (e.g., 222 of FIGS. 2C and/or 2D) that the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) is navigating with respect to the third device (e.g., device associated with Kyle referenced in 216 of FIG. 2B). In some embodiments, the computer system forgoes navigating with respect to the second device in response to receiving the request. In some embodiments, the second indication is different from the first indication. In some embodiments, the second indication is displayed in the user interface of the mapping and/or navigation application. Allowing the computer system to receive a request to cause the first device to navigate with respect to the third device instead of the second device while the first device is navigating with respect to the second device provides the user the ability to change navigation targets easily and/or efficiently without requiring additional steps to stop following the second device and/or establish a connection with the third device before initiating navigation with respect to the third device, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
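As a hedged sketch of how blocks 302-306 could fit together, the following Swift code models a handler that swaps the navigation target and the on-screen indication in one step. All names here (NavigationController, startFollowing, retarget) are assumptions for illustration, not the disclosed implementation:

// Minimal model of the retargeting flow of process 300 (hypothetical names).
final class NavigationController {
    private(set) var target: String?       // identifier of the device being followed
    private(set) var indication: String?   // text shown to the user

    // Block 302: indicate that the first device follows `secondDevice`.
    func startFollowing(_ secondDevice: String) {
        target = secondDevice
        indication = "Device is being navigated with respect to \(secondDevice)"
    }

    // Blocks 304-306: on request, stop following the old target and follow
    // `thirdDevice` instead, replacing the first indication with the second.
    func retarget(to thirdDevice: String) {
        target = thirdDevice
        indication = "Device is being navigated with respect to \(thirdDevice)"
    }
}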
[0132] In some embodiments, in response to receiving the request (e.g., 220), the computer system ceases to display the first indication (e.g., 212). In some embodiments, in response to receiving the request, the computer system displays an indication that the first device is not navigating with respect to the second device different from the first device. In some embodiments, in response to receiving the request, the computer system displays an indication that the first device is navigating with respect to the third device different from the second device. Ceasing to display the first indication when switching from navigating with respect to the second device to the third device provides the user with feedback about the state of the computer system, thereby providing improved visual feedback to the user.
[0133] In some embodiments, the computer system (e.g., 200) includes the second device (e.g., 200). In some embodiments, the computer system is the second device. In some embodiments, the computer system includes the first device. In some embodiments, the computer system is the first device. In some embodiments, the computer system is the second device and not the first device. In some embodiments, the computer system is not the first device or the second device. The computer system including the second device (e.g., the device that the first device is no longer navigating with respect to after receiving the request) provides the user with feedback about the state of the first device, thereby providing improved visual feedback to the user.
[0134] In some embodiments, receiving the request (e.g., 220) to have the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) navigate with respect to the third device (e.g., device associated with Kyle referenced in 216 of FIG. 2B) includes detecting input (e.g., 220) (e.g., a tap gesture, a long-press gesture, a verbal request and/or command, a physical button press, an air gesture, and/or a rotation of a physical input mechanism) directed to a control (e.g., 216) that includes an indication of the third device. In some embodiments, the indication includes an indication of a user associated with the third device. Having the control (e.g., the control that causes the first device to navigate with respect to the third device instead of the second device) include the indication of the third device provides the user with feedback about the state of the first device and information for how the control will affect the first device, thereby providing additional control options without cluttering the user interface with additional displayed controls and/or providing improved visual feedback to the user.
[0135] In some embodiments, while the first device is navigating with respect to the third device (e.g., device associated with Kyle referenced in 216 of FIG. 2B), the computer system displays, via the display component, a second control (e.g., 226) that includes an indication of the second device (e.g., 200), wherein the second control is different from the control (e.g., 216). In some embodiments, while displaying the second control, the computer system receives input (e.g., input on 226) (e.g., a tap gesture, a long-press gesture, a verbal request and/or command, a physical button press, an air gesture, and/or a rotation of a physical input mechanism) directed to the second control. In some embodiments, in response to receiving the input directed to the second control, the computer system displays, via the display component, a third indication (e.g., display navigation control user interface element 212 as in FIG. 2A) (e.g., the first indication or a different indication) that the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) is navigating with respect to the second device. In some embodiments, in response to receiving the input directed to the second control, the computer system forgoes displaying the second indication. Displaying the second control while the first device is navigating with respect to the third device provides the user the ability to change navigation targets easily and/or efficiently without requiring additional steps to stop following the third device and/or establish a connection with the second device before initiating navigation with respect to the second device, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0136] In some embodiments, in response to receiving the request, the computer system classifies the third device (e.g., device associated with Kyle referenced in 216 of FIG. 2B) as a guest user (e.g., a user that is not associated with the first device and/or an account that is associated with the first device) of the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) (e.g., without classifying the third device as a guest user of the second device). In some embodiments, the second device is classified as a different type of user of the first device than a guest user. In some embodiments, classifying the third device as a guest user of the first device configures the third device to be able to perform one or more first operations with respect to the first device, wherein the second device is configured to be able to perform one or more second operations with respect to the first device, wherein the one or more second operations includes at least one different operation than the one or more first operations. Classifying the third device as a guest user provides the user the ability to change navigation targets with different devices without needing to classify the different devices as administrators and/or take ownership of the first device, thereby improving security.

[0137] In some embodiments, the third device is classified as the guest user of the first device (e.g., the device referenced in 212 of FIGS. 2A-2D) for a predefined amount of time (e.g., 1-45 minutes). In some embodiments, the third device (e.g., device associated with Kyle referenced in 216 of FIG. 2B) is no longer classified as a guest user of the first device after the predefined amount of time has lapsed. In some embodiments, the predefined amount of time is set by a non-guest user that is associated with the first device. Classifying the third device as a guest user for the predefined amount of time and no longer classifying the third device as the guest user after the predefined amount of time provides a time limit for such classification that prevents the third device from taking over the first device, thereby improving security.
[0138] In some embodiments, the second device (e.g., 200) is a different type (e.g., a phone, a watch, a speaker, a device that can move without assistance (e.g., a device with a movement mechanism, such as a wheel, pulley, axle, engine, and/or a motor), and/or a device that cannot move without assistance) of device than the first device. In some embodiments, the third device (e.g., device associated with Kyle referenced in 216 of FIG. 2B) is a different type of device than the first device (e.g., the device referenced in 212 of FIGS. 2A-2D). In some embodiments, the second device includes one or more capabilities that the first device does not include. In some embodiments, the first device includes one or more capabilities that the second device does not include. In some embodiments, the first device is in communication with a component that the second device is not in communication with. In some embodiments, the second device is in communication with a component that the first device is not in communication with. In some embodiments, the third device includes one or more capabilities that the first device does not include. In some embodiments, the first device includes one or more capabilities (e.g., the first device is able to move without assistance while the third device is not able to move without assistance, the first device includes a component and/or sensor that the third device does not include, and/or the first device is able to output content of a particular type that the third device is not able to output) that the third device does not include. In some embodiments, the first device is in communication with a component that the third device is not in communication with. In some embodiments, the third device is in communication with a component that the first device is not in communication with. Having the second and third devices be different types of devices than the first device allows the user to use different types of devices as targets for navigation for the first device without all of the devices needing to be the same type of device, thereby reducing friction when controlling different devices and/or allowing personal devices to control other types of devices.
[0139] Note that details of the processes described above with respect to process 300 (e.g., FIG. 3) are also applicable in an analogous manner to the methods described below/above. For example, process 500 optionally includes one or more of the characteristics of the various methods described above with reference to process 300. For example, the respective device of process 500 can be the first device of process 300. For brevity, these details are not repeated below.
[0140] FIGS. 4A-4G illustrate exemplary user interfaces for configuring a device to navigate to a specific location in accordance with some embodiments. FIG. 5 is a flow diagram illustrating methods for configuring a device to navigate to a specific location in accordance with some embodiments. The user interfaces in FIGS. 4A-4G are used to illustrate the processes described below, including the processes in FIG. 5. Throughout the user interfaces, user input is illustrated using a circular shape with dotted lines (e.g., user input 416 in FIG. 4A). It should be recognized that the user input can be any type of user input, including a tap on a touch-sensitive screen, a button press, a gaze toward a control, a voice request with an identification of a control, a gesture made by a user and captured by a camera, and/or any other affirmative action performed by a user. In some examples, a single representation of a user input in a figure (1) includes one or more different types of user input and/or (2) represents different types of user input that result in different operations. For example, a single illustrated user input can be a tap input, a tap-and-hold input, and/or a swipe gesture.
[0141] FIG. 4A illustrates user interface 410 for configuring a device to navigate to a specific location within a physical environment using computer system 200 in accordance with some embodiments. In this example, computer system 200 includes one or more of the features described above with respect to FIGS. 2A-2D.
[0142] In FIG. 4A, computer system 200 displays, on touchscreen display 202, user interface 410, which includes a representation 412 of a physical space and a representation 414 of a target device located within the physical space. In this example, the “target” device is the device for which navigation is configured using the interfaces described with respect to FIGS. 4A-4G. In some embodiments, the target device corresponding to the representation of the respective device is a particular vehicle corresponding to a particular unique identifier. In some embodiments, the target device corresponding to the representation of the device is a respective device (e.g., a smartphone, a laptop, and/or a wearable device) being used with the navigation application.
[0143] In some embodiments, computer system 200 receives (e.g., captured by one or more other devices, or captured by computer system 200 (e.g., via imaging and/or scanning equipment such as one or more cameras and one or more depth sensors)) data (e.g., images and/or video) representing a physical environment. For example, a user of computer system 200 can use one or more connected cameras, lidar, radar, and/or other depth sensors to scan their garage and/or create (or cause creation of) representation 412, a digital multidimensional (e.g., 3-D, 2-D) representation of their garage. In this example, representation 412 includes objects 412a and 412b, representing objects in the physical space that occupy portions of floor space 412c. Representation 412 also includes floor space 412c representing an area of the physical space to which a target device can be configured to navigate (e.g., if no other objects or devices occupy such space). In some embodiments, user interface 410 is an interface of an application (e.g., a navigation application, a device configuration application) or of an operating system of the device (e.g., a lock screen interface).
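A scanned space such as representation 412 reduces, at its simplest, to a floor region plus the obstacles occupying parts of it. The Swift sketch below assumes axis-aligned rectangles purely for illustration; a real scan would yield meshes or occupancy grids, and the ScannedSpace type is a hypothetical name:

import CoreGraphics

// Simplified model of a scanned space: a floor area and the obstacles on it.
struct ScannedSpace {
    let floor: CGRect         // e.g., floor space 412c
    let obstacles: [CGRect]   // e.g., objects 412a and 412b

    // True if `footprint` lies on the floor and overlaps no obstacle.
    func isFree(_ footprint: CGRect) -> Bool {
        floor.contains(footprint) && !obstacles.contains(where: { $0.intersects(footprint) })
    }
}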
[0144] In the scenario depicted in FIG. 4A, a user of computer system 200 scans their garage without a target device located inside of it, and subsequently views their respective representations 412 (garage) and 414 (target device). For example, a user can use computer system 200 to capture one or more images and/or depth measurements from within the garage, which are then used to create representation 412 (e.g., stitched together into a model). In some embodiments, after (e.g., in response to) scanning the garage, computer system 200 displays a representation of the garage (e.g., representation 412). In some embodiments, representation 412 is an image of the garage that is a composite of one or more images (e.g., taken during the scan).
[0145] After initially scanning the garage without the target device, computer system 200 can display representation 412 of the garage. After scanning, the user interface (representation 412) might not initially have a representation of the target device within it. In some embodiments, a user of computer system 200 scans the target device in a separate scan (e.g., a second scan). In some embodiments, a user of computer system 200 selects (e.g., via user input received by computer system 200) a representation of the target device (e.g., selects by providing identifying information and/or dimensions). In some embodiments, once respective representations for the garage and the target device are attained, the target device is assigned to a particular location (e.g., area) within the garage (e.g., that is determined to be an optimal location based on the respective dimensions of the garage and the target device). It should be recognized that other embodiments include the user of computer system 200 scanning their garage with the target device inside of it.
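One plausible way to assign the target device to an initial location, as mentioned above, is to sweep candidate placements across the floor and keep the first free one. This hedged sketch builds on the hypothetical ScannedSpace type above; the grid step and the first-fit heuristic are assumptions, not the disclosed method:

import CoreGraphics

// Propose an initial placement for a device of the given size by sweeping
// candidate positions across the floor on a coarse grid (first-fit heuristic).
func proposePlacement(in space: ScannedSpace, deviceSize: CGSize, step: CGFloat = 0.1) -> CGRect? {
    var y = space.floor.minY
    while y + deviceSize.height <= space.floor.maxY {
        var x = space.floor.minX
        while x + deviceSize.width <= space.floor.maxX {
            let candidate = CGRect(origin: CGPoint(x: x, y: y), size: deviceSize)
            if space.isFree(candidate) { return candidate }
            x += step
        }
        y += step
    }
    return nil   // no free area large enough for the device
}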
[0146] FIG. 4A depicts representation 414 at an example first position (of representation 412). However, in this example a user of computer system 200 desires to configure a different position of the target device represented by representation 414 within the garage represented by representation 412, so that future navigation of the target device will navigate to the configured different (e.g., second) position. In other words, at some time in the future the user wants to instruct computer system 200 to navigate to the location “Home” while driving their car (e.g., represented by representation 414) and cause a navigation function to remember a precise navigation location configured using user interface 410 (and subsequently navigate the vehicle represented by representation 414 to the configured location). Techniques for such user interfaces are described below.
[0147] In FIG. 4A, computer system 200 receives a user input 416 (e.g., a tap, a tap-and-hold (e.g., with movement), or a hard press) on representation 414. As shown in FIG. 4A, user input 416 includes movement to the left (e.g., a tap-and-hold input, followed by a drag to the left). In some embodiments, user interface 410 does not allow invalid movement of a target device representation. In this example, because representation 414 is already as close as allowed to the left barrier (e.g., wall) of representation 412, representation 414 does not move further to the left. In some embodiments, an indication is provided that indicates an invalid movement (e.g., to the left in FIG. 4A), such as foregoing displaying the instructed movement (e.g., stopping representation 414 at a safe distance from the left wall) and/or outputting one or more of a sound, audible message, haptic, or visual notification.
[0148] In FIG. 4B, computer system 200 receives user input 418 (e.g., a tap, a tap-and-hold (e.g., with movement), or a hard press) on representation 414. As shown in FIG. 4B, user input 418 includes movement to the right (e.g., a tap-and-hold, followed by a drag to the right). In contrast to FIG. 4A and user input 416, because there is unoccupied space on floor space 412c to the right of representation 414, the movement is valid and representation 414 can move to the right (e.g., be dragged by user input 418).

[0149] FIG. 4C illustrates computer system 200 in response to receiving user input 418 in accordance with some embodiments. In response to touch user input 418, computer system 200 displays representation 414 shifted to the right with respect to floor space 412c in representation 412. In this example, the representation of object 412b establishes a rightward barrier for placement of representation 414 within representation 412. For instance, object 412b can represent shelving that a target device, represented by 414, cannot occupy; thus, user interface 410 and representation 412 will not allow representation 414 to be placed occupying the same space as object 412b. In some embodiments, user interface 410 includes one or more affordances for accepting (e.g., configuring, saving) a precise navigation position represented by representation 414 and/or for not accepting the precise navigation position. For example, in FIG. 4C, user interface 410 includes accept affordance 410a (for accepting the current position of 414 as the precise navigation position for the target device represented by representation 414). In this example, user interface 410 also includes cancel affordance 410b (for rejecting the current position of 414 as the precise navigation position for the target device represented by representation 414). In some embodiments, selection of cancel affordance 410b causes user interface 410 to cease to be displayed. In some embodiments, selection of cancel affordance 410b causes the target device to be configured to navigate to a precise navigation position that was configured prior to displaying user interface 410 (e.g., prior to beginning a process for editing the precise navigation position). In FIG. 4C, computer system 200 receives a touch user input 420 (e.g., a tap, a tap-and-hold, or a hard press) on accept affordance 410a. In response to touch user input 420 (e.g., after completion of the input), computer system 200 configures a precise navigation position to be associated with representation 414 at the “second” position, which is shown in FIG. 4C shifted to the right with respect to floor space 412c in representation 412.
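The drag behavior of FIGS. 4A-4C amounts to accepting a proposed move only when the resulting footprint is valid and otherwise leaving the representation in place (optionally signaling the rejection). A minimal Swift sketch, again using the hypothetical ScannedSpace type above:

import CoreGraphics

// Apply a dragged offset to the device representation, accepting the move
// only when the new footprint stays on the floor and clear of obstacles.
func applyDrag(_ current: CGRect, by offset: CGVector, in space: ScannedSpace) -> CGRect {
    let proposed = current.offsetBy(dx: offset.dx, dy: offset.dy)
    if space.isFree(proposed) {
        return proposed   // valid move, as with user input 418
    } else {
        // Invalid move, as with user input 416: keep the old position and,
        // in some embodiments, emit a sound, haptic, or visual rejection cue.
        return current
    }
}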
[0150] FIG. 4D illustrates navigation user interface 422 in accordance with some embodiments. Navigation user interface 422 includes map portion 422a (representing a geographic area), indicator 422b (representing a current location of computer system 200 within map portion 422a), and home affordance 422c (representing a saved/configured precise navigation position at the user’s configured “Home” location). In this example, after configuring a precise navigation location for their vehicle inside of their home garage, the user of computer system 200 desires to navigate their vehicle home to the configured precise navigation location (represented by home affordance 422c). Exemplary techniques for performing such actions in accordance with some embodiments are now described. In FIG. 4D, computer system 200 receives a touch user input 423 (e.g., a tap, a tap-and-hold, or a hard press) on home affordance 422c.
[0151] FIG. 4E illustrates computer system 200 in response to receiving touch user input 423 in accordance with some embodiments. In response to touch user input 423, computer system 200 displays navigation user interface 422 as shown in FIG. 4E. In FIG. 4E, the appearance of navigation user interface 422 has changed because the navigation application is performing an active navigation instruction process. As shown in FIG. 4E, navigation user interface 422 includes map portion 422a and indicator 422b (e.g., updated to an arrow to indicate current position and direction of travel), as well as navigation instruction field 422d (which includes a current navigation instruction (e.g., “Go Straight”)).
[0152] In some embodiments, upon reaching or nearing the precise navigation location (e.g., associated with “Home”), the navigation user interface can change to (or be replaced by) a precise navigation view. FIG. 4F illustrates navigation user interface 422 arranged in a precision navigation view, in accordance with some embodiments, and includes representation 412 of the physical space of the user’s garage. As shown in FIG. 4F, navigation user interface 422 includes map portion 422a and indicator 422b (e.g., optionally updated to include an indication of the current vehicle’s dimensions (e.g., the rectangular shaped portion) and direction of travel (e.g., the arrow)). Also, in FIG. 4F, navigation user interface 422 includes an updated navigation instruction field 422d, instructing that navigation should proceed to the right (“Proceed to right”), and (optionally) a precision navigation target 424. In some embodiments, precision navigation target 424 indicates where the user of the navigation user interface should place the vehicle or object being navigated (e.g., park the car). In this example, precision navigation target 424 is an area or shape that corresponds to the scanned representation 414 of the vehicle (from FIGS. 4A-4C). However, precision navigation target 424 can be any suitable indicator for indicating a location (e.g., a point or shape in space within representation 412, which may or may not correspond to a point on 422b or 414 that should be correspondingly aligned by moving the represented vehicle (e.g., guiding the user to line up the two points)).
[0153] FIG. 4G illustrates navigation completion notification 432 in accordance with some embodiments. Computer system 200 displays navigation completion notification 432 in response to a determination (e.g., after detecting and/or determining, or by receiving an indication from one or more other devices) that the vehicle (e.g., represented by representations 414 and/or 422b) has reached the precision navigation target 424 (e.g., is sufficiently within or near precision navigation target 424, according to some criteria such as distance between points, area of vehicle within precision navigation target 424, or any other suitable criteria). Navigation completion notification 432 indicates arrival at the location selected for navigation (“Home” selected in FIG. 4D), where it states: “Arrived Home.” As shown in FIG. 4G, computer system 200 displays navigation completion notification on a lock screen interface 430 and ceases displaying a navigation interface (e.g., 410 and/or 422). In this example, once precision navigation has completed, computer system 200 automatically ceases displaying an interface with a full map, representations of a physical space or object(s), and/or navigation instructions, and in its place displays a lock screen (or home screen, or other default or idle state screen) interface with a notification that the journey is complete. In some embodiments, successful completion of the precise navigation causes the target device to change operation from a first manner (e.g., powered on, in a particular active state) to a second manner (e.g., powered off, or in an idle/inactive/low-power state). In some embodiments, computer system 200 can transmit a message or command that causes the target device to change operation to the second manner of operation. In some embodiments, the target device automatically enters the second manner of operation upon reaching the configured precise navigation location.
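Arrival at precision navigation target 424 can be decided under any of the criteria listed above; one concrete choice is the fraction of the vehicle footprint that lies inside the target area. The Swift sketch below is illustrative only, and the 90% threshold is an assumption:

import CoreGraphics

// True when at least `threshold` of the vehicle footprint lies inside the target.
func hasArrived(vehicle: CGRect, target: CGRect, threshold: CGFloat = 0.9) -> Bool {
    let overlap = vehicle.intersection(target)
    guard !overlap.isNull, vehicle.width > 0, vehicle.height > 0 else { return false }
    let coveredFraction = (overlap.width * overlap.height) / (vehicle.width * vehicle.height)
    return coveredFraction >= threshold
}

// On arrival, a system could post the "Arrived Home" notification and, in some
// embodiments, send the target device a command to enter a low-power state.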
[0154] In some embodiments, the second device (e.g., 200) is used during subsequent navigation of the first device (e.g., target device). For example, computer system 200 can be a smartphone that detects it is being used with the user’s vehicle (e.g., based on connectivity with the vehicle, such as via Bluetooth or a wired connection), and intelligently knows to use the configured precise location for that vehicle (or any vehicle, depending on configuration settings). In such an example, computer system 200 is used to navigate, as illustrated by the examples in FIGS. 4D-4G.
[0155] In some embodiments, the second device (e.g., 200) is not used during subsequent navigation of the first device (e.g., target device). In some embodiments, the first device navigates itself to the configured precise location (e.g., in response to receiving an instruction to do so (e.g., from user input and/or from another device)). For example, computer system 200 can be a smartphone that is used to configure the precise location, but the first (e.g., target) device is a device with the ability to move itself (e.g., using wheels, tracks, and/or rotors) and perform some level of spatial localization and mapping (e.g., alone or assisted by other devices). Thus, as an example, after receiving an instruction to navigate to the configured precise location, a target device that is an autonomous robotic lawnmower can return to a particular place in the garage (e.g., in a safe location that will facilitate charging (e.g., near a power outlet)). The lawnmower can use one or more onboard functions that facilitate location awareness (e.g., GPS, camera, radar, spatial maps, etc.) to navigate to the configured location without needing further intervention by a user or computer system 200 (e.g., to display step-by-step instructions).
[0156] FIG. 5 is a flow diagram illustrating a method for configuring a device to navigate to a specific location using a computer system in accordance with some embodiments. Process 500 is performed at a computer system (e.g., system 100). The computer system is in communication with a display component and one or more input devices. Some operations in process 500 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0157] As described below, process 500 provides an intuitive way for configuring a device to navigate to a specific location. The method reduces the cognitive burden on a user for configuring a device to navigate to a specific location, thereby creating a more efficient human-machine interface. For battery operated computing devices, enabling a user to configure a device to navigate to a specific location faster and more efficiently conserves power and increases the time between battery charges.
[0158] In some embodiments, process 500 is performed at a computer system (e.g., 200) that is in communication with a display component (e.g., 202) (e.g., a display screen and/or a touch-sensitive display) and one or more input devices (e.g., 202) (e.g., a physical input mechanism, a camera, a touch-sensitive display, a microphone, and/or a button). In some embodiments, the computer system is a watch, a phone, a tablet, a processor, a head-mounted display (HMD) device, and/or a personal computing device. In some embodiments, the computer system is in communication with one or more output devices (e.g., a display screen, a touch-sensitive display, a haptic output device, and/or a speaker).
[0159] After capture of (e.g., after the computer system or a different computer system captures) one or more images (e.g., radar, lidar, and/or optical images) of a location (e.g., physical space described with respect to FIG. 4A) (e.g., a location (e.g., a destination, a destination location, a home location, and/or an arrival location) within a physical environment), the computer system displays (502), via the display component, a representation (e.g., 414) (e.g., a graphical representation, a line, a path, a textual representation, and/or a symbolic representation) of a respective device (e.g., device represented by 414) (e.g., a fitness tracking device, a watch, a phone, a tablet, a processor, a head-mounted display (HMD) device, a vehicle, and/or a personal computing device) at a first position (e.g., position of 414 in FIG. 4A and/or 4B) within a representation (e.g., 412) of the location (e.g., location represented by 412), wherein the representation of the location is generated based on the one or more images. In some embodiments, the computer system is in communication with one or more cameras. In some embodiments, the one or more cameras are attached to and/or within a housing of the computer system. In some embodiments, the computer system, via one or more cameras in communication with the computer system, captures the one or more images. In some embodiments, the computer system detects, via the one or more input devices, input corresponding to selection of a user-interface element; and in response to detecting the input, initiates a scanning process (e.g., captures, via one or more cameras in communication with the one or more input devices, the one or more images). In such examples, the scanning process is initiated before displaying the representation of the respective device. In some embodiments, the computer system is the respective device. In some embodiments, the computer system is different from the respective device.
[0160] The computer system receives (504), via the one or more input devices, a set of one or more inputs (e.g., 416 and/or 418), wherein the set of one or more inputs includes an input (e.g., dragging input and/or non-dragging input (e.g., a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) corresponding to a request to move the representation of the respective device from the first position (e.g., position of 414 in FIG. 4A and/or 4B) to a second position (e.g., position of 414 in FIG. 4C) within the representation of the location, and wherein the second position is different from the first position. In some embodiments, the input corresponding to the request is received (e.g., and/or detected) while displaying the representation of the location and/or the representation of the respective device.
[0161] In response to (506) (e.g., based on and/or in conjunction with) receiving the set of one or more inputs (e.g., 416 and/or 418) (e.g., the input corresponding to the request) and in accordance with a determination that a first set of criteria are met (e.g., a valid movement as described with respect to FIG. 4B), the computer system displays (508), via the display component, the representation (e.g., 414) of the respective device (e.g., device represented by 414) at the second position (e.g., position of 414 in FIG. 4C) (and, in some examples, ceasing display of the representation of the respective device at the first position and/or no longer displaying a representation of the respective device at the first position). In some embodiments, the first set of criteria includes a criterion that is met when the second position is determined to be a valid position. In some embodiments, the first set of criteria includes a criterion that is met when the second position is determined to be navigable to by the respective device.
[0162] In response to (506) receiving the set of one or more inputs and in accordance with the determination that the first set of criteria are met, the computer system configures (510) the respective device (e.g., device represented by 414) in a first manner, such that the respective device is caused to be navigated to a specific location (e.g., 424) corresponding to the second position (e.g., position of 414 in FIG. 4C) when the respective device is caused to be navigated to the location (e.g., location represented by 412) (e.g., without being navigated to a specific location corresponding to the first position when the respective device is caused to be navigated to the location). In some embodiments, the representation of the respective device is displayed at the second position in response to a first input of the set of one or more inputs and a navigation application is configured to navigate the respective device to the second position in response to a second input (e.g., an input corresponding to accepting the representation of the respective device at the second position) detected after displaying the representation of the respective device at the second position. In some embodiments, the respective device is configured concurrently with displaying the representation of the respective device at the second position. In some embodiments, the respective device corresponding to the representation of the respective device is a particular vehicle corresponding to a particular unique identifier. In some embodiments, the respective device corresponding to the representation of the respective device is a respective device being used with the navigation application. In some embodiments, the respective device is caused to be navigated to a specific location corresponding to the first position when the respective device is caused to be navigated to the location before receiving the set of one or more inputs. Displaying the representation of the respective device at the first position within the representation of the location after capture of the one or more images of the location provides the user with a user interface to visualize the location with reference to the respective device, thereby providing improved visual feedback to the user. Allowing the computer system to receive an input corresponding to a request to move the representation of the respective device from the first position to the second position within the representation of the location provides the user control over where to place the respective device within the location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or providing improved visual feedback to the user. Displaying the respective device at the second position and configuring the respective device such that the respective device is caused to be navigated to the specific location corresponding to the second position when the respective device is caused to be navigated to the location provides the user with control with respect to navigating the respective device, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, performing an operation when a set of conditions has been met without requiring further user input, and/or providing improved visual feedback to the user.
[0163] In some embodiments, the respective device (e.g., device represented by 414) is a different type (e.g., phone, watch, speaker, a device that can move without assistance (e.g., a device with a movement mechanism, such as a wheel, pulley, axle, engine, and/or a motor), and/or a device that cannot move without assistance) of device than the computer system. In some embodiments, the respective device includes one or more capabilities that the computer system does not include. In some embodiments, the computer system includes one or more capabilities that the respective device does not include. In some embodiments, the computer system is in communication with a component that the respective device is not in communication with. In some embodiments, the respective device is in communication with a component that the computer system is not in communication with. Having the respective device be a different type of device than the computer system allows the user to use different types of devices to configure the respective device, thereby reducing friction when configuring the respective device and/or allowing personal devices to configure other types of devices.
[0164] In some embodiments, before receiving the set of one or more inputs (e.g., 416 and/or 418), the computer system configures the respective device (e.g., device represented by 414), such that the respective device is caused to be navigated to a location (e.g., a particular and/or specific location) corresponding to the first position in conjunction with (e.g., when, before, immediately before, after, and/or immediately after) the respective device is caused to be navigated to the location. Configuring the respective device before receiving the set of one or more inputs such that the respective device is caused to be navigated to the location corresponding to the first position provides the user with control with respect to navigating the respective device, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0165] In some embodiments, in response to (e.g., based on and/or in conjunction with) receiving the set of one or more inputs (e.g., 416 and/or 418) (e.g., the input corresponding to the request) (e.g., one or more dragging inputs or, in some examples, one or more non-dragging inputs (e.g., a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)), the computer system configures the respective device (e.g., device represented by 414) in a second manner, such that the respective device transitions to a reduced power state (e.g., as described with respect to FIG. 4G) (e.g., a low-power or off state) when at the location corresponding to the second position (e.g., position of 414 in FIG. 4C), wherein the second manner is different from the first manner. Configuring the respective device such that the respective device transitions to the reduced power state when at the location corresponding to the second position provides the user with control of operations performed by the respective device, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0166] In some embodiments, after configuring the respective device (e.g., device represented by 414) in response to receiving the set of one or more inputs (e.g., 416 and/or 418) and in accordance with a determination that the respective device has arrived at the specific location (e.g., 424 of FIG. 4F) corresponding to the second position (e.g., position of 414 in FIG. 4C), the computer system displays, via the display component, a notification (e.g., 432) that the respective device has reached the location. In some embodiments, the notification includes an indication that the respective device has reached the specific location corresponding to the second position. Displaying the notification that the respective device has reached the location when the respective device has arrived at the specific location corresponding to the second position provides the user with information with respect to a state of the respective device, thereby reducing the number of inputs needed to perform an operation, performing an operation when a set of conditions has been met without requiring further user input, and/or providing improved visual feedback to the user.
[0167] In some embodiments, in response to (e.g., based on and/or in conjunction with) receiving the set of one or more inputs (e.g., 416 and/or 418) (e.g., the input corresponding to the request) and in accordance with a determination that the first set of criteria are not met, the computer system forgoes configuring (e.g., as described above with respect to user input 416 of FIG. 4A) the respective device in the first manner (and, in some examples, in the second manner). In some embodiments, in response to receiving the set of one or more inputs and in accordance with the determination that the first set of criteria are not met, the computer system forgoes displaying the representation of the respective device at the second position. In some embodiments, in response to receiving the set of one or more inputs and in accordance with the determination that the first set of criteria are not met, the computer system displays, via the display component, an indication that the second position is not a valid position. In some embodiments, in response to receiving the set of one or more inputs and in accordance with the determination that the first set of criteria are not met, the computer system maintains display of the representation of the respective device at the first position. In some embodiments, the first set of criteria are not met when the specific location corresponding to the second position is determined not to be a safe and/or possible location for navigation. Forgoing configuring the respective device in the first manner when the first set of criteria are not met prevents the user from being able to configure the respective device to navigate to any location and instead requires that a location meet the first set of criteria, thereby reducing the number of inputs needed to perform an operation, performing an operation when a set of conditions has been met without requiring further user input, and/or providing improved visual feedback to the user.
[0168] In some embodiments, before displaying the representation (e.g., 412 of FIG. 4A) of the location, the computer system receives a request to capture an image (e.g., as described above with respect to FIG. 4A). In some embodiments, the computer system is in communication with one or more cameras, and the request to capture the image is a request to capture the image via the one or more cameras. In some embodiments, in response to receiving the request, the computer system causes capture (e.g., as described above with respect to FIG. 4A) (e.g., and/or initiates a scan), via a camera in communication with the computer system, of a first image, wherein the one or more images includes the first image. In some embodiments, in response to receiving the request, the computer system captures a plurality of images that includes the first image. In some embodiments, receiving the request to capture the image includes detecting an input (e.g., a tap input or, in some examples, a non-tap input (e.g., a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) directed to a user interface displayed via the computer system. Capturing the first image that is used to generate the representation using the camera that is in communication with the computer system provides the user the ability to ensure that the representation is for the right location, thereby reducing the number of inputs needed to perform an operation and/or providing improved visual feedback to the user.
[0169] Note that details of the processes described above with respect to process 500 (e.g., FIG. 5) are also applicable in an analogous manner to the methods described below/above. For example, process 300 optionally includes one or more of the characteristics of the various methods described above with reference to process 500. For example, the respective device of process 500 can be the first device of process 300. For brevity, these details are not repeated below.
[0170] FIGS. 6A-6F illustrate exemplary diagrams for navigating a movable computer system to a target destination in accordance with some embodiments. The diagrams in these figures are used to illustrate the processes described below, including the processes in FIGS. 9, 10A-10B, and 12.
[0171] In some embodiments, one or more of the diagrams of FIGS. 6A-6F are displayed by a display of movable computer system 600 and serve as a visual aid to assist a user in navigating to the target destination. In some embodiments, one or more of the diagrams of FIGS. 6A-6F are representative of different positions of movable computer system 600 while navigating to the target destination and are not displayed by a display of movable computer system 600.
[0172] FIGS. 6A-6D illustrate movable computer system 600 and set of parking spots 606. In some embodiments, movable computer system 600 is a vehicle, such as an automobile (e.g., sedan, coupe, scooter, or truck). However, it should be recognized that the following discussion is equally applicable to other types of movable computer systems, such as a trailer, a skateboard, an airplane, and/or a boat.
[0173] In some embodiments, movable computer system 600 includes (1) a back set of wheels (e.g., one or more wheels) that is coupled to rear half 602 of movable computer system 600 and (2) a front set of wheels (e.g., one or more wheels) that is coupled to front half 604 of movable computer system 600. In some embodiments, the back set of wheels includes two or more wheels. In some embodiments, the front set of wheels includes two or more wheels. In some embodiments, movable computer system 600 is configured for steering with the back set of wheels and the front set of wheels (e.g., four-wheel steering when two wheels are coupled to the back of movable computer system 600 and two wheels are coupled to the front of movable computer system 600).
[0174] In some embodiments, the back set of wheels and/or the front set of wheels are configured to be independently controlled. In such embodiments, a direction of the back set of wheels and/or the front set of wheels can be changed (e.g., rotated) independently. In some embodiments, the back set of wheels can be steered together and the front set of wheels can be steered together such that steering of the back set of wheels is independent of steering the front set of wheels. In some embodiments, each wheel in the back set of wheels can be steered independently and each wheel in the front set of wheels can be steered independently.
[0175] As illustrated in FIGS. 6A-6D, set of parking spots 606 includes target parking spot 606b. In some embodiments, target parking spot 606b is a parking spot that has been identified (e.g., by movable computer system 600 and/or by a user of movable computer system 600) as the target destination of movable computer system 600. That is, in FIGS. 6A-6D, movable computer system 600 is navigating to target parking spot 606b. In some embodiments, through FIGS. 6A-6D, movable computer system 600 causes the back set of wheels to converge on a single angle as movable computer system 600 navigates to target parking spot 606b (e.g., an angle that is parallel to target parking spot 606b, such as illustrated by arrow 608fl in FIG. 6E).
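The convergence of the back set of wheels onto a single, spot-parallel angle can be pictured as interpolation driven by the remaining distance: the closer the vehicle is to the spot, the closer the commanded rear angle is to the spot's orientation. The following Swift sketch is one plausible control law under assumed units and a hypothetical convergence distance, not the disclosed controller:

// Blend the rear steering angle toward the spot-parallel angle as the vehicle
// approaches; angles in radians, distances in meters (illustrative choices).
func rearWheelAngle(currentAngle: Double,
                    spotParallelAngle: Double,
                    remainingDistance: Double,
                    convergenceDistance: Double = 5.0) -> Double {
    // progress is 0 far away (keep the current angle) and 1 at the spot
    // (fully parallel to the spot).
    let progress = max(0.0, min(1.0, 1.0 - remainingDistance / convergenceDistance))
    return currentAngle + (spotParallelAngle - currentAngle) * progress
}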
[0176] In some embodiments, target parking spot 606b is identified as the target destination by a user (e.g., an owner (e.g., inside and/or outside of movable computer system 600), a driver, and/or a passenger) of movable computer system 600. For example, the user can identify target parking spot 606b as the target destination by (1) gazing at target parking spot 606b for a predetermined amount of time (e.g., 1-30 seconds), (2) pointing movable computer system 600 towards target parking spot 606b, (3) providing input on a representation of target parking spot 606b, and/or (4) inputting a location (e.g., GPS coordinates and/or an address) that corresponds to and/or includes target parking spot 606b into a navigation application installed on movable computer system 600 and/or another computer system (e.g., a personal device of the user) in communication with movable computer system 600. These examples should not be construed as limiting and other techniques can be used for identifying the target parking spot for the moveable computer system.
[0177] In some embodiments, target parking spot 606b is identified as the target destination in response to movable computer system 600 and/or another computer system (e.g., the personal device of the user) detecting an input (e.g., a voice command, a tap input, a hardware button press, and/or an air gesture). In some embodiments, target parking spot 606b is identified as the target destination when a determination is made that a set of wheels (e.g., the front set of wheels and/or the back set of wheels) of movable computer system 600 is rotated by the user to an angle towards target parking spot 606b. In some embodiments, target parking spot 606b is identified as the target destination when a determination is made that a set of wheels (e.g., the front set of wheels and/or the back set of wheels) of movable computer system 600 is rotated by the user to an angle away from target parking spot 606b (e.g., while movable computer system 600 is within a predefined distance from target parking spot 606b).
[0178] In some embodiments, target parking spot 606b is identified as the target destination via one or more sensors of movable computer system 600. For example, one or more cameras of movable computer system 600 can identify that target parking spot 606b is vacant and/or closest (e.g., when movable computer system 600 determines to identify a parking spot, such as in response to detecting input corresponding to a request to park) and thus identify target parking spot 606b as the target destination. For example, one or more depth sensors of movable computer system 600 can identify that a size of target parking spot 606b is large enough to accommodate movable computer system 600 and thus identify target parking spot 606b as the target destination.
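As a minimal sketch of the sensor-based identification described above (vacancy via cameras, size via depth sensors), assuming hypothetical names such as ParkingSpot and select_target_spot together with a simple closest-vacant-fit policy:

    # Hypothetical selection of a target parking spot from sensor data (illustration only).
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ParkingSpot:
        spot_id: str
        vacant: bool       # e.g., reported by one or more cameras
        length_m: float    # e.g., reported by one or more depth sensors
        width_m: float
        distance_m: float  # distance from the movable computer system

    def select_target_spot(spots: List[ParkingSpot],
                           vehicle_length_m: float,
                           vehicle_width_m: float,
                           margin_m: float = 0.5) -> Optional[ParkingSpot]:
        # Keep only vacant spots large enough to accommodate the vehicle...
        candidates = [
            s for s in spots
            if s.vacant
            and s.length_m >= vehicle_length_m + margin_m
            and s.width_m >= vehicle_width_m + margin_m
        ]
        # ...and prefer the closest one.
        return min(candidates, key=lambda s: s.distance_m, default=None)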
[0179] In some embodiments, movable computer system 600 is configurable to operate in one of three different modes as movable computer system 600 approaches target parking spot 606b. While movable computer system 600 is in a first mode (e.g., a manual mode), both the back set of wheels and the front set of wheels are configured to be controlled by the user of movable computer system 600. While movable computer system 600 is in a second mode (e.g., a semi-automatic mode), the back set of wheels or the front set of wheels is configured to be controlled by the user while the other set of wheels is configured to not be controlled by the user (e.g., the other set of wheels is configured to be controlled by movable computer system 600 and not the user). In some embodiments, while operating in the second mode, movable computer system 600 can change which set of wheels is being controlled by the user and which set of wheels is not being controlled by the user. In some embodiments, the change in which set of wheels is being controlled by the user is based on positioning of movable computer system 600 (e.g., where movable computer system 600 is located and/or oriented) and/or positioning of movable computer system 600 relative to a target destination (e.g., how close and/or in what direction the target destination is relative to movable computer system 600). For example, if movable computer system 600 leaves a densely occupied area, the front set of wheels and/or the back set of wheels can transition from being configured to be controlled by the user to not being controlled by the user, or if movable computer system 600 enters a densely occupied area, the front set of wheels and/or the back set of wheels can transition from being configured to not be controlled by the user to being configured to be controlled by the user. While movable computer system 600 is in a third mode (e.g., an automatic mode), the back set of wheels and the front set of wheels are configured to not be controlled by the user (e.g., the back set of wheels and front set of wheels are configured to be controlled by movable computer system 600 and not the user).
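The three modes can be summarized with a short sketch; the Mode enum and user_controlled_sets function are hypothetical names used only for illustration:

    # Hypothetical encoding of the three modes (illustration only).
    from enum import Enum, auto

    class Mode(Enum):
        MANUAL = auto()          # first mode: user controls both wheel sets
        SEMI_AUTOMATIC = auto()  # second mode: user controls exactly one set
        AUTOMATIC = auto()       # third mode: system controls both sets

    def user_controlled_sets(mode: Mode, user_set: str = "front") -> set:
        # Return which wheel sets accept direct user input in the given mode.
        if mode is Mode.MANUAL:
            return {"front", "back"}
        if mode is Mode.SEMI_AUTOMATIC:
            return {user_set}    # the other set is controlled by the system
        return set()             # AUTOMATIC: neither set is user-controlled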
[0180] In some embodiments, movable computer system 600 transitions between different modes as movable computer system 600 approaches target parking spot 606b. For example, movable computer system 600 can transition from the first mode to the third mode or second mode once movable computer system 600 is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from target parking spot 606b. In some embodiments, movable computer system 600 transitions to a mode (e.g., the first mode, the second mode, or the third mode) based on a target destination of movable computer system 600. For example, if the target destination is in a densely populated area, movable computer system 600 can transition to the first mode, or if the target destination is in an open field, movable computer system 600 can transition to the third mode. In some embodiments, movable computer system 600 transitions to a mode based on one or more conditions (e.g., wind, rain, and/or brightness) of a physical environment. For example, if the physical environment is experiencing heavy rain, movable computer system 600 can transition to the first mode, or if the physical environment is experiencing an above average amount of brightness, movable computer system 600 can transition to the third mode. In some embodiments, movable computer system 600 transitions to a mode based on data (e.g., amount of data, and/or type of data) about a physical environment that is accessible to movable computer system 600. For example, if movable computer system 600 does not have access to data regarding a physical environment, movable computer system 600 can transition to the first mode of movable computer system 600, or if movable computer system 600 has access to data regarding a physical environment, movable computer system 600 can transition to the third mode of movable computer system 600. In some embodiments, movable computer system 600 transitions to a mode of movable computer system 600 in response to movable computer system 600 detecting an input. For example, if movable computer system 600 detects that the front set of wheels and/or the back set of wheels are manually rotated in a particular direction, movable computer system 600 can transition to the first mode or the second mode. As an additional example, movable computer system 600 can transition to a mode in response to detecting an input that corresponds to the depression of a physical input mechanism of movable computer system 600 and/or in response to movable computer system 600 detecting a change in the conditions of the physical environment (e.g., change in brightness level, noise level, and/or amount of precipitation in the physical environment).
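One possible transition policy consistent with the examples above (distance to the target, weather, available environment data, and population density) is sketched below; the thresholds and signal names are assumptions, not part of the disclosure:

    # Hypothetical mode-transition policy (illustration only).
    def choose_mode(distance_to_target_ft: float,
                    heavy_rain: bool,
                    has_environment_data: bool,
                    densely_populated: bool) -> str:
        # Conditions that favor full user control transition to the first mode.
        if heavy_rain or not has_environment_data or densely_populated:
            return "manual"
        # Near the target, share control with the user (second mode).
        if distance_to_target_ft <= 50.0:
            return "semi_automatic"
        # Otherwise the system can control both wheel sets (third mode).
        return "automatic"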
[0181] In some embodiments, while movable computer system 600 is in the first mode, the second mode, and/or the third mode, characteristics (e.g., speed, acceleration, and/or direction of travel) of the movement of movable computer system 600 change without intervention from the user. For example, a speed of movable computer system 600 can decrease when a hazard (e.g., pothole and/or construction site) is detected. For another example, the speed of movable computer system 600 can decrease as movable computer system 600 gets within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from target parking spot 606b. For another example, a direction of travel of movable computer system 600 can change when movable computer system 600 detects an object in a path of movable computer system 600.
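A simplified sketch of the speed changes described above, assuming a hypothetical adjusted_speed function and illustrative constants (5 mph crawl speed, 50-foot taper):

    # Hypothetical speed adjustment without user intervention (illustration only).
    def adjusted_speed(current_speed_mph: float,
                       hazard_detected: bool,
                       distance_to_spot_ft: float) -> float:
        speed = current_speed_mph
        if hazard_detected:
            speed = min(speed, 5.0)  # slow down past a detected hazard
        if distance_to_spot_ft < 50.0:
            # Taper speed toward a stop as the vehicle closes on the spot.
            speed = min(speed, 1.0 + 5.0 * max(distance_to_spot_ft, 0.0) / 50.0)
        return speed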
[0182] In some embodiments, while the back set of wheels is configured to not be controlled by the user, the positioning of the back set of wheels is changed in response to detection of a current path of movable computer system 600. For example, the back set of wheels can be controlled to change the current path of movable computer system 600 when it is determined that the current path is incorrect. In some embodiments, while the back set of wheels is configured to not be controlled by the user, the positioning of the back set of wheels is changed based on detection of weather conditions in the physical environment (e.g., precipitation, a wind level, a noise level, and/or a brightness level of the physical environment). In some embodiments, the back set of wheels is configured to not be controlled by the user when a determination is made that movable computer system 600 is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) of target parking spot 606b. In some embodiments, the back set of wheels is configured to not be controlled by the user when a determination is made that the back set of wheels is at a predetermined angle with respect to target parking spot 606b.
[0183] In some embodiments, prior to movable computer system 600 navigating to the target destination, being within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from the target destination, and/or detecting input requesting for movable computer system 600 to control at least one movement component, the user is able to control both the front set of wheels and the back set of wheels. In some embodiments, prior to movable computer system 600 navigating to the target destination, being within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from the target destination, and/or detecting input requesting for control of at least one movement component, the user is not able to control the front set of wheels and the back set of wheels (e.g., the front set of wheels and the back set of wheels are being automatically controlled by movable computer system 600, such as without requiring user input). In some embodiments, as movable computer system 600 navigates to the target destination, is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from the target destination, and/or detects input requesting for movable computer system 600 to control at least one movement component, the user of movable computer system 600 controls the position of both the back set of wheels and the front set of wheels. In some embodiments, as movable computer system 600 navigates to the target destination, is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from the target destination, and/or detects input requesting for control of at least one movement component, the user is not able to control the position of the front set of wheels and the back set of wheels. In some embodiments, the front set of wheels or the back set of wheels is configured to be controlled by the user based on the direction of travel of movable computer system 600. For example, if movable computer system 600 is moving forward (e.g., as shown in FIG. 6A), the front set of wheels can be configured to be controlled by the user, or if movable computer system 600 is moving in a reverse direction (e.g., the opposite of the direction of direction indicator 620 in FIG. 6A), the back set of wheels can be configured to be controlled by the user. In some embodiments, the front set of wheels or the back set of wheels is configured to be controlled by the user based on the direction that the user is looking. For example, if the user is looking towards the front set of wheels, the front set of wheels can be configured to be controlled by the user while the back set of wheels is configured to not be controlled by the user, or if the user is looking towards the back set of wheels, the back set of wheels is configured to be controlled by the user while the front set of wheels is configured to not be controlled by the user.
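The selection of which set the user steers, based on the direction of travel or on where the user is looking, can be sketched as follows (user_steerable_set and the signal values are hypothetical assumptions):

    # Hypothetical assignment of user control (illustration only).
    from typing import Optional

    def user_steerable_set(direction_of_travel: str,
                           gaze_target: Optional[str] = None) -> str:
        # In this sketch, gaze direction takes priority when available.
        if gaze_target in ("front", "back"):
            return gaze_target
        # Otherwise follow the direction of travel: forward -> front set,
        # reverse -> back set; the other set is system-controlled.
        return "front" if direction_of_travel == "forward" else "back"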
[0184] As illustrated in FIG. 6A, direction indicator 620 is pointing to the right of movable computer system 600. In some embodiments, direction indicator 620 indicates the direction that movable computer system 600 is currently traveling. Accordingly, at FIG. 6A, movable computer system 600 is moving along a path that is perpendicular to target parking spot 606b.
[0185] At FIG. 6A, the front set of wheels is configured to be controlled by the user of movable computer system 600 while the back set of wheels is not configured to be controlled by the user of movable computer system 600 (e.g., the positioning of the back set of wheels is fixed and/or the positioning of the back set of wheels is controlled by movable computer system 600). That is, as movable computer system 600 navigates to a target destination (and/or is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) from the target destination), the user of movable computer system 600 is not able to directly control the set of wheels that is furthest from the target destination and the user is able to directly control the set of wheels that is closest to the target destination. It should be recognized that, in other embodiments, the user is able to directly control the set of wheels that is furthest from the target destination and the user is not able to directly control the set of wheels that is closest to the target destination.
[0186] At FIG. 6A, movable computer system 600 detects an input (e.g., a voice command, the rotation of a steering mechanism, the depression of a physical input mechanism, and/or a hand gesture) that corresponds to a request to rotate the front set of wheels towards target parking spot 606b.
[0187] At FIG. 6B, in response to movable computer system 600 detecting the input that corresponds to the request to rotate the front set of wheels, the front set of wheels is rotated such that the front set of wheels is directed towards (e.g., pointed towards and/or facing) target parking spot 606b. While the back set of wheels is configured to not be controlled by the user and the front set of wheels is configured to be controlled by the user, the angle (and/or the position) of the back set of wheels relative to target parking spot 606b is based on an angle (and/or position) of the front set of wheels relative to target parking spot 606b. For example, movable computer system 600 can set different angles (and/or positions) of the back set of wheels depending on the angle of the front set of wheels relative to target parking spot 606b. In some embodiments, the angle of the back set of wheels is set (e.g., by movable computer system 600 and/or another computer system that is in communication with movable computer system 600) such that movable computer system 600 navigates along the most efficient, comfortable, and/or safest path to reach target parking spot 606b. In some embodiments, the angle of the back set of wheels is set based on a relative position of movable computer system 600 with respect to target parking spot 606b (e.g., the angle of the back set of wheels with respect to target parking spot 606b gradually decreases as a greater amount of movable computer system 600 is positioned within target parking spot 606b). In some embodiments, the angle of the back set of wheels is set based on the positioning of one or more external objects (e.g., individuals, animals, construction signs, and/or road conditions, such as potholes and/or accumulation of water) that are in a navigation path of movable computer system 600. For example, the angle of the back set of wheels can be adjusted such that movable computer system 600 does not contact and/or come within a threshold distance (e.g., .1-5 feet) of an external object.
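A minimal sketch of setting the system-controlled back-set angle from the user-controlled front-set angle, easing off as the vehicle enters the spot and backing away from obstacles; the gain, the clearance model, and all names are assumptions, not the disclosed method:

    # Hypothetical back-set angle computation (illustration only).
    def back_angle_deg(front_angle_deg: float,
                       fraction_in_spot: float,
                       predicted_clearance_m: float,
                       min_clearance_m: float = 0.5) -> float:
        # Counter-steer proportionally to the front angle, decreasing as a
        # greater amount of the vehicle is within the spot (0.0 to 1.0).
        angle = -0.5 * front_angle_deg * (1.0 - fraction_in_spot)
        # If the predicted path comes too close to an external object,
        # scale the angle down to preserve the threshold clearance.
        if predicted_clearance_m < min_clearance_m:
            angle *= max(predicted_clearance_m, 0.0) / min_clearance_m
        return angle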
[0188] At FIG. 6B, as indicated by direction indicator 620, movable computer system 600 is navigating in a direction that is angled towards target parking spot 606b. In some embodiments, as movable computer system 600 navigates towards target parking spot 606b, movable computer system 600 accelerates and/or decelerates (e.g., without detecting an input from the user) to better align and/or to stop movable computer system 600 within target parking spot 606b.

[0189] In some embodiments, movable computer system 600 provides (e.g., auditory, visual, and/or tactile) feedback based on a determination that movable computer system 600 is not aligned with target parking spot 606b. For example, movable computer system 600 can provide a tone through one or more playback devices that are in communication with movable computer system 600, display a flashing user interface via one or more displays that are in communication with movable computer system 600, and/or vibrate one or more hardware elements of movable computer system 600 when a determination is made that movable computer system 600 is not aligned within target parking spot 606b (1) after movable computer system 600 has come to rest within target parking spot 606b or (2) while navigating to target parking spot 606b but before movable computer system 600 has come to rest within target parking spot 606b.
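The misalignment feedback can be sketched as an escalating sequence (visual, then auditory, then tactile, consistent with the escalation described below); alignment_feedback, the channel names, and the 3-degree tolerance are assumptions:

    # Hypothetical escalating alignment feedback (illustration only).
    def alignment_feedback(offset_deg: float,
                           at_rest: bool,
                           tolerance_deg: float = 3.0) -> list:
        if abs(offset_deg) <= tolerance_deg:
            return []                              # aligned: no feedback
        feedback = ["display_flashing_ui"]         # visual feedback first
        if abs(offset_deg) > 2 * tolerance_deg:
            feedback.append("play_tone")           # then auditory feedback
        if abs(offset_deg) > 3 * tolerance_deg or at_rest:
            feedback.append("vibrate_hardware")    # then tactile feedback
        return feedback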
[0190] In some embodiments, movable computer system 600 provides (e.g., auditory, visual, and/or tactile) feedback based on a determination that movable computer system 600 will be misaligned within target parking spot 606b if movable computer system 600 continues along the current path of movable computer system 600. For example, movable computer system 600 can cause a steering mechanism of movable computer system 600 to rotate, vibrate at least a portion of the steering mechanism, apply a braking mechanism to the front set of wheels and/or the back set of wheels, and/or display a warning message, via a display of movable computer system 600, when a determination is made that the angle of approach of movable computer system 600 with respect to target parking spot 606b is too steep or too shallow.
[0191] In some embodiments, feedback can grow in intensity as misalignment between movable computer system 600 and target parking spot 606b grows and/or persists. In some embodiments, movable computer system 600 can provide a series of different types of feedback (e.g., first visual feedback, then audio feedback, then haptic feedback) as misalignment between movable computer system 600 and target parking spot 606b grows and/or persists.
[0192] In some embodiments, movable computer system 600 stops providing feedback based on a determination (e.g., a determination made by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that movable computer system 600 transitions from being and/or will be misaligned with target parking spot 606b to being and/or will be aligned with target parking spot 606b.

[0193] After FIG. 6B and before FIG. 6C, movable computer system 600 detects an input (e.g., a voice command, the rotation of a steering mechanism, the depression of a physical input mechanism, and/or a hand gesture) that corresponds to a request to rotate the front set of wheels to be parallel with target parking spot 606b. In some embodiments, after FIG. 6B and before FIG. 6C, movable computer system 600 causes the back set of wheels to change direction such that the back set of wheels is parallel with target parking spot 606b.
[0194] At FIG. 6C, in response to detecting the input that corresponds to a request to rotate the front set of wheels to be parallel with target parking spot 606b, the front set of wheels is rotated such that the front set of wheels is parallel with target parking spot 606b. At FIG. 6C, both the back set of wheels and the front set of wheels are parallel to target parking spot 606b. At FIG. 6C, as indicated by direction indicator 620, movable computer system 600 moves in a direction that is parallel to target parking spot 606b. In some embodiments, movable computer system 600 performs one or more operations (e.g., unlocks doors of movable computer system 600, powers off an air conditioning device of movable computer system 600, closes one or more windows of movable computer system 600, decreases a speed of movable computer system 600 (e.g., gradually decreases to a stop), and/or increases a speed of movable computer system 600) when a determination is made that movable computer system 600 is parallel to target parking spot 606b.
[0195] In some embodiments, a mode (e.g., the first mode, the second mode, and/or the third mode as described above) of movable computer system 600 is based on the orientation of movable computer system 600 relative to target parking spot 606b. For example, movable computer system 600 can transition from the second mode to the first mode or the third mode when a determination is made that movable computer system 600 is parallel to target parking spot 606b.
[0196] At FIG. 6D, as indicated by the absence of direction indicator 620, movable computer system 600 comes to rest within target parking spot 606b. At FIG. 6D, movable computer system 600 is correctly aligned within target parking spot 606b. In some embodiments, movable computer system 600 comes to rest within target parking spot 606b without detecting that the user has caused a brake to be applied to the front set of wheels and/or the back set of wheels. In some embodiments, movable computer system 600 performs one or more operations (e.g., unlocks doors of movable computer system 600, powers off an air conditioning device of movable computer system 600 and/or closes one or more windows of movable computer system 600) when a determination is made that movable computer system 600 has come to rest within target parking spot 606b.
[0197] In some embodiments, movable computer system 600 transitions between different modes of movable computer system 600 when a determination is made that movable computer system 600 has come to rest within target parking spot 606b. For example, movable computer system 600 can transition from the second mode to the third mode to allow movable computer system 600 to make any adjustments to the positioning of movable computer system 600. For another example, movable computer system 600 can transition from the second mode to the first mode to allow the user to rotate the front set of wheels and/or the back set of wheels after movable computer system 600 has stopped. In some embodiments, movable computer system 600 transitions, without user intervention, between respective drive states (e.g., reverse, park, neutral, and/or drive) when a determination is made that movable computer system 600 has come to rest within target parking spot 606b. In some embodiments, after movable computer system 600 comes to rest within target parking spot 606b, movable computer system 600 rotates the front set of wheels and/or the back set of wheels to respective angles (e.g., based on a current context, such as an incline of a surface and/or weather) without user intervention. In some embodiments, rotating the front set of wheels and/or the back set of wheels to the respective angles helps prevent movable computer system 600 from moving (e.g., because of weather conditions (e.g., ice and/or rain) and/or because of a slope of target parking spot 606b) while movable computer system 600 is at rest within target parking spot 606b.
[0198] FIG. 6E illustrates diagram 608, which includes set of arrows 640 and set of arrows 642. In some embodiments, set of arrows 640 and set of arrows 642 correspond to movable computer system 600 navigating to target parking spot 606b where movable computer system 600 does not deviate from a navigation path of movable computer system 600.
[0199] At FIG. 6E, set of arrows 640 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of target parking spot 606b (e.g., an upward facing arrow indicates that the back set of wheels is directed away from target parking spot 606b and a downward facing arrow indicates that the back set of wheels is directed towards target parking spot 606b). In some embodiments, the back set of wheels is configured to not be controlled by a user (e.g., and/or be controlled by movable computer system 600 instead of the user) throughout at least a portion of set of arrows 640 as discussed above. In some embodiments, movable computer system 600 causes the back set of wheels to converge on a single target angle (e.g., the angle of arrow 608f1) throughout diagram 608. For example, the single target angle can be parallel to sides of target parking spot 606b.
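The convergence on a single target angle can be sketched as a simple smoothing loop; converge and the smoothing factor alpha are assumptions used only to illustrate the behavior represented by set of arrows 640:

    # Hypothetical convergence of the back-set angle on a target angle (illustration only).
    def converge(angle_deg: float, target_deg: float, alpha: float = 0.3) -> float:
        # Close a fraction of the remaining error on each control step;
        # repeated calls converge on target_deg.
        return angle_deg + alpha * (target_deg - angle_deg)

    angle = 90.0                      # e.g., perpendicular to the spot (arrow 608a1)
    for _ in range(12):
        angle = converge(angle, 0.0)  # 0 degrees: parallel to the spot (arrow 608f1)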
[0200] At FIG. 6E, set of arrows 642 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of target parking spot 606b (e.g., an upward facing arrow indicates that the front set of wheels is directed away from target parking spot 606b and a downward facing arrow indicates that the front set of wheels is directed towards target parking spot 606b). In some embodiments, the front set of wheels is configured to be controlled by the user throughout at least a portion of set of arrows 642 as discussed above.
[0201] Turning to each individual arrow included in set of arrows 640 and set of arrows 642, arrow 608a1 and arrow 608a2 correspond to a first point in time where the back set of wheels and the front set of wheels are perpendicular to target parking spot 606b (e.g., movable computer system 600 is approaching target parking spot 606b). In some embodiments, because movable computer system 600 is configured for four-wheel steering, the back set of wheels is not in a fixed positional relationship with movable computer system 600. That is, the back set of wheels is configured to turn independent of the direction of travel of movable computer system 600 (e.g., and/or the front set of wheels). Accordingly, arrow 608a1 (e.g., and the remaining arrows in set of arrows 640) does not represent a fixed positional relationship between movable computer system 600 and the back set of wheels. Arrow 608b1 and arrow 608b2 correspond to a second point in time, that follows the first point in time, where movable computer system 600 is turning into target parking spot 606b. At the second point in time, the back set of wheels is angled away from target parking spot 606b and the front set of wheels is angled towards target parking spot 606b. As explained above, movable computer system 600 is configured for four-wheel steering. Accordingly, when movable computer system 600 makes turns at low speeds, one set of wheels can be directed in a direction opposite the other set of wheels to reduce the turning radius of movable computer system 600. In some embodiments, when movable computer system 600 is configured for two-wheel steering, the back set of wheels and movable computer system 600 have a fixed positional relationship. In examples where the back set of wheels and the body of movable computer system 600 have a fixed positional relationship, the arrows included in set of arrows 640 can be directed in a direction that mimics the direction of travel of movable computer system 600.
[0202] Arrow 608c1 and arrow 608c2 correspond to a third point in time that follows the second point in time where movable computer system 600 continues to turn into target parking spot 606b. At the third point in time, the back set of wheels is angled towards target parking spot 606b and the front set of wheels is parallel to target parking spot 606b. Arrow 608d1 and arrow 608d2 correspond to a fourth point in time that follows the third point in time where movable computer system 600 navigates towards the rear of target parking spot 606b. At the fourth point in time, both the front set of wheels and the back set of wheels are parallel to target parking spot 606b. Arrow 608e1 and arrow 608e2 correspond to a fifth point in time that follows the fourth point in time where movable computer system 600 continues to navigate towards the rear of target parking spot 606b. At the fifth point in time, both the front set of wheels and the back set of wheels are parallel to target parking spot 606b as movable computer system 600 pulls further into target parking spot 606b. Arrow 608f1 and arrow 608f2 correspond to a sixth point in time that follows the fifth point in time as movable computer system 600 comes to a rest within target parking spot 606b. At the sixth point in time, both the front set of wheels and the back set of wheels are parallel to target parking spot 606b as movable computer system 600 comes to rest within target parking spot 606b.
[0203] At FIG. 6E, at each respective position of the back set of wheels that is represented by the arrows included in set of arrows 640, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along a current path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 606b. Because a determination is made that continuing along the current path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 606b, at each position represented by a respective arrow included in set of arrows 640, movable computer system 600 causes the back set of wheels to be positioned at an angle such that the back set of wheels does not cause movable computer system 600 to deviate from the current path of movable computer system 600.
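The per-position decision described above reduces to: hold the current back-set angle while the predicted end pose remains correctly aligned, and deviate otherwise. A minimal sketch, with all names assumed:

    # Hypothetical keep-course-or-deviate decision (illustration only).
    def next_back_angle(current_back_deg: float,
                        predicted_alignment_ok: bool,
                        corrective_deg: float) -> float:
        if predicted_alignment_ok:
            # Continuing along the current path yields correct alignment,
            # so do not cause the vehicle to deviate from that path.
            return current_back_deg
        # Otherwise adjust, without user intervention, to a new path.
        return corrective_deg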
[0204] In some embodiments, between the positioning of movable computer system 600 that corresponds to arrow 608e1 and arrow 608f1, movable computer system 600 decelerates in response to the user applying pressure to a brake pedal of movable computer system 600. In some embodiments, between the positioning of movable computer system 600 that corresponds to arrow 608e1 and arrow 608f1, movable computer system 600 decelerates without user intervention.
[0205] FIG. 6F illustrates diagram 610, which includes set of arrows 650 and set of arrows 652. In some embodiments, set of arrows 650 and set of arrows 652 correspond to movable computer system 600 navigating to another parking spot that is different from target parking spot 606b where movable computer system 600 deviates from a navigation path of movable computer system 600.
[0206] At FIG. 6F, set of arrows 650 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of the other parking spot (e.g., an upward facing arrow indicates that the back set of wheels is directed away from the other parking spot and a downward facing arrow indicates that the back set of wheels is directed towards the other parking spot). In some embodiments, the back set of wheels is configured to not be controlled by a user (e.g., and/or be controlled by movable computer system 600 instead of the user) throughout at least a portion of set of arrows 650 as discussed above. In some embodiments, movable computer system 600 causes the back set of wheels to converge on a single target angle (e.g., the angle of arrow 610f1) throughout diagram 610. For example, the single target angle can be parallel to sides of the other parking spot.
[0207] At FIG. 6F, set of arrows 652 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of the other parking spot (e.g., an upward facing arrow indicates that the front set of wheels is directed away from the other parking spot and a downward facing arrow indicates that the front set of wheels is directed towards the other parking spot). In some embodiments, the front set of wheels is configured to be controlled by the user throughout at least a portion of set of arrows 652 as discussed above.
[0208] In some embodiments, the positioning of the front set of wheels as movable computer system 600 navigates to the other parking spot at FIG. 6F mimics the positioning of the front set of wheels as movable computer system 600 navigates to target parking spot 606b at FIG. 6E. Accordingly, at FIG. 6F, set of arrows 652 is the same as set of arrows 642 at FIG. 6E.

[0209] At FIG. 6F, at each respective position of the back set of wheels that is represented by arrows 610a1-610d1, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along a current path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within the other parking spot. Because a determination is made that continuing along a current path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within the other parking spot, movable computer system 600 causes the back set of wheels to be positioned at an angle at each of the positions represented by arrows 610a1-610d1 that does not cause movable computer system 600 to deviate from the navigation path (e.g., the same path of movable computer system 600 at FIG. 6E). In some embodiments, movable computer system 600 causes the back set of wheels to be positioned at an angle that does not cause movable computer system 600 to deviate from the navigation path based on a determination that if movable computer system 600 continues along the navigation path of movable computer system 600 then movable computer system 600 will not come into contact with and/or be within a predefined distance of an external object and/or be aligned with the other parking spot.
[0210] Between the positioning of the back set of wheels that corresponds to arrow 610d1 and arrow 610e1, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be misaligned within the other parking spot. Because a determination is made that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be misaligned within the other parking spot, movable computer system 600 causes the back set of wheels to be adjusted to an angle that causes movable computer system 600 to deviate from the navigation path to a new path. That is, when a determination is made that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be misaligned within the other parking spot, the positioning of the back set of wheels (e.g., the set of wheels that is configured to not be controlled by the user) is adjusted, without user intervention, such that movable computer system 600 deviates from the navigation path to the new path. In some embodiments, the angle of the back set of wheels is adjusted (e.g., by movable computer system 600 and/or another computer system that is in communication with movable computer system 600) to an angle to offset an error made by the user in controlling the front set of wheels. Accordingly, the orientation of arrow 610e1 at FIG. 6F is different than the orientation of arrow 608e1 at FIG. 6E. More specifically, at FIG. 6E, the back set of wheels is parallel to target parking spot 606b at arrow 608e1, and at FIG. 6F, the back set of wheels is angled to the left of the other parking spot. The back set of wheels is angled at arrow 610e1 such that rear half 602 of movable computer system 600 is moved to the left within the other parking spot.
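The user-error offset between arrows 610d1 and 610e1 can be sketched as a proportional correction; correction_deg and the gain are assumptions for illustration only:

    # Hypothetical correction that offsets a user steering error (illustration only).
    def correction_deg(lateral_error_m: float,
                       gain_deg_per_m: float = 8.0) -> float:
        # A positive lateral error (rear half too far right within the spot)
        # yields a leftward back-set angle that moves the rear half left.
        return -gain_deg_per_m * lateral_error_m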
[0211] At FIG. 6F, at the position of the back set of wheels that is represented by arrow 610f1, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the new path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within the other parking spot (and/or reach the single target angle). Because a determination is made that continuing along the new path of movable computer system 600 will cause movable computer system 600 to be aligned within the other parking spot, movable computer system 600 causes the back set of wheels to be positioned at the single target angle.
[0212] FIGS. 7A-7C illustrate exemplary diagrams for navigating between objects in a forward manner in accordance with some embodiments. The diagrams in these figures are used to illustrate the processes described below, including the processes in FIGS. 9, 10A-10B, and 12.
[0213] FIG. 7A includes a diagram that illustrates movable computer system 600 navigating towards target parking spot 706. At FIG. 7A, target parking spot 706 is a parking spot that is parallel to the direction of travel of movable computer system 600.
[0214] In some embodiments, the diagram of FIG. 7A is displayed by a display of movable computer system 600 and serves as a visual aid to assist a user in navigating to the target destination. In some embodiments, the diagram of FIG. 7A is representative of a position of movable computer system 600 while navigating to the target destination and is not displayed by a display of movable computer system 600.
[0215] As illustrated in FIG. 7A, target parking spot 706 is positioned between object 702 and object 704. In some embodiments, object 702 and object 704 are inanimate objects such as automobiles, construction signs, trees, and/or road hazards, such as a pothole and/or a speed bump. In some embodiments, object 702 and object 704 are animate objects, such as an individual and/or an animal.
[0216] At FIG. 7A, direction indicator 720 indicates the path that movable computer system 600 will travel to arrive at target parking spot 706. Accordingly, as indicated by direction indicator 720, movable computer system 600 will travel forward before angling downwards towards target parking spot 706.
[0217] At FIG. 7A, movable computer system 600 causes the back set of wheels to converge on a first angle as movable computer system 600 travels in the forward direction towards target parking spot 706 (e.g., an angle that is parallel or substantially parallel to curb 700, such as illustrated by arrow 708d1) and movable computer system 600 causes the back set of wheels to converge on a second angle (e.g., an angle that is perpendicular or approximately perpendicular to curb 700, such as illustrated by arrow 708e1) as movable computer system 600 angles downwards towards target parking spot 706.
[0218] In some embodiments, as explained above, as movable computer system 600 navigates towards target parking spot 706, the set of wheels of movable computer system 600 that is closest to target parking spot 706 is configured to be controlled by the user of movable computer system 600. At FIG. 7A, a determination is made that the front set of wheels is positioned closer to target parking spot 706 than the back set of wheels. At FIG. 7A, because a determination is made that the front set of wheels is positioned closer to target parking spot 706 than the back set of wheels, the front set of wheels is configured to be controlled by the user and the back set of wheels is configured to not be controlled by the user as movable computer system 600 navigates towards target parking spot 706. In some embodiments, the front set of wheels is configured to not be controlled by the user when a determination is made that movable computer system 600 is within a predetermined distance (e.g., .1-50 feet) and/or a predetermined time (e.g., 1-10 seconds) of object 702, object 704, and/or target parking spot 706. In some embodiments, as movable computer system 600 navigates towards target parking spot 706, the front set of wheels is configured to not be controlled by the user of movable computer system 600 and the back set of wheels is configured to be controlled by the user of movable computer system 600 when a determination is made that the back set of wheels is positioned closer to target parking spot 706 than the front set of wheels.

[0219] In some embodiments, a navigation path of movable computer system 600 and/or a speed of movable computer system 600 changes (e.g., without detecting a user input) when a determination is made that the positioning of object 702 and/or object 704 changes (e.g., object 702 and/or object 704 moves (1) towards and/or moves away from movable computer system 600 and/or (2) relative to parking spot 706).
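Granting user control to the wheel set closest to the target, as described above, can be sketched as follows (assign_control and the distance inputs are assumptions):

    # Hypothetical proximity-based control assignment (illustration only).
    def assign_control(front_dist_to_target_m: float,
                       back_dist_to_target_m: float) -> dict:
        # The set closest to the target is user-controlled; the other set
        # is controlled by the system rather than the user.
        closest = "front" if front_dist_to_target_m <= back_dist_to_target_m else "back"
        other = "back" if closest == "front" else "front"
        return {closest: "user", other: "system"}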
[0220] FIG. 7B illustrates diagram 708, which includes set of arrows 740 and set of arrows 742. In some embodiments, set of arrows 740 and set of arrows 742 correspond to movable computer system 600 navigating to target parking spot 706 where movable computer system 600 does not deviate from a navigation path of movable computer system 600.
[0221] At FIG. 7B, set of arrows 740 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of target parking spot 706 (e.g., a rightward facing arrow indicates that the back set of wheels is directed towards target parking spot 706, an upward facing arrow indicates that the back set of wheels is directed away from target parking spot 706, and a downward facing arrow indicates that the back set of wheels is directed towards target parking spot 706) (e.g., a horizontal arrow indicates that the back set of wheels is parallel to target parking spot 706 and a vertical arrow indicates that the back set of wheels is perpendicular to target parking spot 706). In some embodiments, the back set of wheels is configured to not be controlled by a user (e.g., and/or be controlled by movable computer system 600 instead of the user) throughout at least a portion of set of arrows 740 as discussed above. In some embodiments, movable computer system 600 causes the back set of wheels to converge on a first angle as movable computer system 600 travels in the forward direction towards target parking spot 706 (e.g., an angle that is perpendicular or approximately perpendicular to curb 700, such as illustrated by arrow 708d1) and movable computer system 600 causes the back set of wheels to converge on a second angle (e.g., an angle that is parallel or substantially parallel to curb 700, such as illustrated by arrow 708e1) as movable computer system 600 angles downwards towards target parking spot 706.
[0222] At FIG. 7B, set of arrows 742 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of target parking spot 706 (e.g., a rightward facing arrow indicates that the front set of wheels is directed towards target parking spot 706, an upward facing arrow indicates that the front set of wheels is directed away from target parking spot 706, and a downward facing arrow indicates that the front set of wheels is directed towards target parking spot 706) (e.g., a horizontal arrow indicates that the front set of wheels is parallel to target parking spot 706 and a vertical arrow indicates that the front set of wheels is perpendicular to target parking spot 706). In some embodiments, the front set of wheels is configured to be controlled by the user throughout at least a portion of set of arrows 742 as discussed above.
[0223] At FIG. 7B, at each position of the back set of wheels that is represented by the arrows included in set of arrows 740, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the respective path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 706. Because a determination is made that continuing along the respective path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 706 at each position represented by a respective arrow included in set of arrows 740, movable computer system 600 causes the back set of wheels to be positioned at an angle such that movable computer system 600 does not deviate from the navigation path of movable computer system 600.
[0224] In some embodiments, between the positioning of movable computer system 600 that corresponds to arrow 708d1 and arrow 708e1, movable computer system 600 decelerates in response to the user applying pressure to a brake pedal of movable computer system 600. In some embodiments, between the positioning of movable computer system 600 that corresponds to arrow 708d1 and arrow 708e1, movable computer system 600 decelerates without user intervention.
[0225] FIG. 7C illustrates diagram 710, which includes set of arrows 750 and set of arrows 752. In some embodiments, set of arrows 750 and set of arrows 752 correspond to movable computer system 600 navigating to another parking spot that is different from target parking spot 706 where movable computer system 600 deviates from a navigation path of movable computer system 600.
[0226] At FIG. 7C, set of arrows 750 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of the other parking spot (e.g., a rightward facing arrow indicates that the back set of wheels is directed towards the other parking spot, an upward facing arrow indicates that the back set of wheels is directed away from the other parking spot and a downward facing arrow indicates that the back set of wheels is directed towards the other parking spot) (e.g., a horizontal arrow indicates that the back set of wheels is parallel to the other parking spot and a vertical arrow indicates that the back set of wheels is perpendicular to the other parking spot). In some embodiments, the back set of wheels is configured to not be controlled by the user throughout at least a portion of set of arrows 750 as discussed above. In some embodiments, movable computer system 600 causes the back set of wheels to converge on a first angle as movable computer system 600 travels in the forward direction towards target parking spot 706 (e.g., an angle that is perpendicular or approximately perpendicular to curb 700, such as illustrated by arrow 708d1) and movable computer system 600 causes the back set of wheels to converge on a second angle (e.g., an angle that is parallel or substantially parallel to curb 700, such as illustrated by arrow 708e1) as movable computer system 600 angles downwards towards target parking spot 706.
[0227] At FIG. 7C, set of arrows 752 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of the other parking spot (e.g., an upward facing arrow indicates that the front set of wheels is directed away from the other parking spot and a downward facing arrow indicates that the front set of wheels is directed towards the other parking spot) (e.g., a horizontal arrow indicates that the front set of wheels is parallel to the other parking spot and a vertical arrow indicates that the front set of wheels is perpendicular to the other parking spot) as movable computer system 600 navigates to the other parking spot. In some embodiments, the front set of wheels is configured to be controlled by the user throughout at least a portion of set of arrows 752 as discussed above.
[0228] For FIG. 7C, the other parking spot is shorter in length than target parking spot 706 at FIGS. 7A-7B. Accordingly, performing the same navigation sequence that was performed at FIG. 7B will cause movable computer system 600 to be misaligned within the other parking spot. As illustrated in FIG. 7C, the positioning of the front set of wheels as movable computer system 600 navigates to the other parking spot mimics the positioning of the front set of wheels as movable computer system 600 navigates to target parking spot 706 at FIG. 7B. Accordingly, at FIG. 7C, set of arrows 752 is the same as set of arrows 742 at FIG. 7B.
[0229] At FIG. 7C, at the positions of the back set of wheels that are represented by arrow 710a1 and arrow 710b1, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within the other parking spot. Because a determination is made that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within the other parking spot, movable computer system 600 causes the back set of wheels to be positioned at an angle that does not cause movable computer system 600 to deviate from the navigation path of movable computer system 600 at the positions of the back set of wheels that correspond to arrow 710a1 and arrow 710b1.
[0230] Between the positioning of the back set of wheels that corresponds to arrow 710b1 and arrow 710c1, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be misaligned within the other parking spot. Because a determination is made that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be misaligned within the other parking spot, movable computer system 600 causes the back set of wheels to be adjusted to an angle that causes movable computer system 600 to deviate from the navigation path to a new path.
[0231] That is, as explained above, when a determination is made that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be misaligned within the other parking spot, the positioning of the respective set of wheels that is configured to not be controlled by the user is adjusted, without user intervention, such that movable computer system 600 deviates from the navigation path to the new path. Accordingly, the orientation of arrow 710c1 at FIG. 7C is different than the orientation of arrow 708c1 at FIG. 7B. More specifically, at arrow 710c1, the back set of wheels is angled towards the rear of the other parking spot such that movable computer system 600 is moved towards the rear of the other parking spot while, at arrow 708c1, the back set of wheels is angled towards the front of target parking spot 706 such that movable computer system 600 is moved towards the front of target parking spot 706.
[0232] At FIG. 7C, at the positions of the back set of wheels that are represented by arrows 710d1 and 710e1, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the new path of movable computer system 600 will cause movable computer system 600 to be aligned within the other parking spot. Because a determination is made that continuing along the new path of movable computer system 600 will cause movable computer system 600 to be aligned within the other parking spot (and/or reach the second target angle), movable computer system 600 causes the back set of wheels to be positioned at an angle such that movable computer system 600 does not deviate from the new path at arrows 710d1 and 710e1 (and/or reach the first target angle and the second target angle, respectively).
[0233] FIGS. 8A-8C illustrate exemplary diagrams for navigating between objects in a backward manner in accordance with some embodiments. The diagrams in these figures are used to illustrate the processes described below, including the processes in FIGS. 9, 10A-10B, and 12.
[0234] FIG. 8A includes diagram 800 that illustrates movable computer system 600 navigating towards target parking spot 806. At FIG. 8A, target parking spot 806 is a parking spot that is parallel to the direction of travel of movable computer system 600 (e.g., the current direction of travel of movable computer system 600 and/or a previous direction of travel of movable computer system 600).
[0235] In some embodiments, the diagram of FIG. 8A is displayed by a navigation application of movable computer system 600 and serves as a visual aid to assist a user in navigating to the target destination. In some embodiments, the diagram of FIG. 8A is representative of a position of movable computer system 600 while navigating to the target destination and is not displayed by a navigation application of movable computer system 600.
[0236] As illustrated in FIG. 8A, target parking spot 806 is positioned between object 802 and object 804. In some embodiments, object 802 and object 804 are inanimate objects such as automobiles, construction signs, trees, and/or road hazards, such as a pothole and/or a speed bump. In some embodiments, object 802 and object 804 are animate objects, such as an individual and/or an animal.
[0237] At FIG. 8A, direction indicator 820 indicates the path that movable computer system 600 will travel to arrive at target parking spot 806. Accordingly, as indicated by direction indicator 820, movable computer system 600 will travel in a reverse direction before angling downwards at an angle (e.g., a 90-degree angle or an angle that is substantially 90 degrees) towards target parking spot 806.
[0238] In some embodiments, as explained above, as movable computer system 600 navigates towards target parking spot 806, the set of wheels of movable computer system 600 that is closest to target parking spot 806 is configured to be controlled by a user of movable computer system 600. At FIG. 8A, a determination is made (e.g., by movable computer system 600 and/or by a computer system that is in communication with movable computer system 600) that the back set of wheels is positioned closer to target parking spot 806 than the front set of wheels. At FIG. 8A, because a determination is made that the back set of wheels is positioned closer to target parking spot 806 than the front set of wheels, the back set of wheels is configured to be controlled by the user and the front set of wheels is configured to not be controlled by the user as movable computer system 600 navigates towards target parking spot 806. In some embodiments, a navigation path of movable computer system 600 and/or a speed of movable computer system 600 changes (e.g., without detecting a user input) when a determination is made that the positioning of object 802 and/or object 804 changes (e.g., object 802 and/or object 804 moves (1) towards and/or moves away from movable computer system 600 and/or (2) relative to parking spot 806).
[0239] FIG. 8B illustrates diagram 808, which includes set of arrows 840 and set of arrows 842. In some embodiments, set of arrows 840 and set of arrows 842 correspond to movable computer system 600 navigating to target parking spot 806 where movable computer system 600 does not deviate from a navigation path of movable computer system 600.
[0240] At FIG. 8B, set of arrows 840 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of target parking spot 806 (e.g., a downward facing arrow indicates that the back set of wheels is directed towards target parking spot 806 and a leftward facing arrow indicates that the back set of wheels is directed towards target parking spot 806) (e.g., a horizontal arrow indicates that the back set of wheels is parallel with target parking spot 806 and a vertical arrow indicates that the back set of wheels is perpendicular with target parking spot 806). In some embodiments, the back set of wheels is configured to be controlled by a user throughout at least a portion of set of arrows 840 as discussed above.

[0241] At FIG. 8B, set of arrows 842 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of target parking spot 806 (e.g., a downward facing arrow indicates that the front set of wheels is directed towards target parking spot 806 and a leftward facing arrow indicates that the front set of wheels is directed towards target parking spot 806) (e.g., a horizontal arrow indicates that the front set of wheels is parallel with target parking spot 806 and a vertical arrow indicates that the front set of wheels is perpendicular with target parking spot 806). In some embodiments, the front set of wheels is configured to not be controlled by the user (e.g., and/or be controlled by movable computer system 600 instead of the user) throughout at least a portion of set of arrows 842 as discussed above. In some embodiments, movable computer system 600 causes the front set of wheels to converge on a first angle as movable computer system 600 travels in the backward direction towards target parking spot 806 (e.g., an angle that is perpendicular or approximately perpendicular to curb 800, such as illustrated by arrow 808c2) and movable computer system 600 causes the front set of wheels to converge on a second angle (e.g., an angle that is parallel or substantially parallel to curb 800, such as illustrated by arrow 808d2) as movable computer system 600 angles downwards towards target parking spot 806.
[0242] At FIG. 8B, at each respective position of the front set of wheels that is represented by the arrows included in set of arrows 842, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 806. Because a determination is made that continuing along the navigation path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 806, movable computer system 600 causes the front set of wheels to be positioned at an angle such that movable computer system 600 does not deviate from the navigation path. In some embodiments, between the positioning of movable computer system 600 that corresponds to arrow 808c1 and arrow 808d1, movable computer system 600 decelerates in response to the user applying pressure to a brake pedal of movable computer system 600. In some embodiments, between the positioning of movable computer system 600 that corresponds to arrow 808c1 and arrow 808d1, movable computer system 600 decelerates without user intervention.

[0243] FIG. 8C illustrates diagram 810, which includes set of arrows 850 and set of arrows 852. In some embodiments, set of arrows 850 and set of arrows 852 correspond to movable computer system 600 navigating to target parking spot 806 where movable computer system 600 deviates from a navigation path of movable computer system 600. It should be recognized that the deviation in FIG. 8C is a result of an error by the user rather than a different parking spot, as described above with respect to FIGS. 6E-6F and 7B-7C.
[0244] At FIG. 8C, set of arrows 850 is a sequence of arrows that represents the positioning of the back set of wheels relative to the position of target parking spot 806 (e.g., a downward facing arrow indicates that the back set of wheels is directed away from target parking spot 806 and a leftward facing arrow indicates that the back set of wheels is directed towards target parking spot 806) (e.g., a horizontal arrow indicates that the back set of wheels is parallel with target parking spot 806 and a vertical arrow indicates that the back set of wheels is perpendicular with target parking spot 806). In some embodiments, the back set of wheels is configured to be controlled by a user throughout at least a portion of set of arrows 850 as discussed above.
[0245] At FIG. 8C, set of arrows 852 is a sequence of arrows that represents the positioning of the front set of wheels relative to the position of target parking spot 806 (e.g., a downward facing arrow indicates that the front set of wheels is directed away from target parking spot 806 and a leftward facing arrow indicates that the front set of wheels is directed towards target parking spot 806) (e.g., a horizontal arrow indicates that the front set of wheels is parallel with target parking spot 806 and a vertical arrow indicates that the front set of wheels is perpendicular with target parking spot 806). In some embodiments, the front set of wheels is configured to not be controlled by the user (e.g., and/or to be controlled by movable computer system 600 instead of the user) throughout at least a portion of set of arrows 852 as discussed above. In some embodiments, movable computer system 600 causes the front set of wheels to converge on a first angle as movable computer system 600 travels in the backward direction towards target parking spot 806 (e.g., an angle that is perpendicular or approximately perpendicular to a curb, such as similar to arrow 808d2 in FIG. 8B) and movable computer system 600 causes the back set of wheels to converge on a second angle (e.g., an angle that is parallel or substantially parallel to the curb, such as illustrated by arrow 810e2) as movable computer system 600 angles downwards towards target parking spot 806.

[0246] The positioning of the back set of wheels as movable computer system 600 navigates to target parking spot 806 at FIG. 8C does not mimic the positioning of the back set of wheels as movable computer system 600 navigates to target parking spot 806 at FIG. 8B. More specifically, arrow 808b1 in FIG. 8B indicates that the back set of wheels is angled towards target parking spot 806 at a second point in time while arrow 810b1 in FIG. 8C indicates that the back set of wheels is perpendicular to target parking spot 806 at a second point in time. Accordingly, movable computer system 600 navigates along a different path to target parking spot 806 at FIG. 8B in contrast to the path movable computer system 600 navigates along at FIG. 8C.
[0247] At FIG. 8C, at both respective positions of the front set of wheels that are represented by arrows 810a2 and 810b2, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along a current path of movable computer system 600 will cause movable computer system 600 to be correctly aligned within target parking spot 806. Because a determination is made that, if movable computer system 600 continues along the current path of movable computer system 600, movable computer system 600 will be correctly aligned within target parking spot 806, movable computer system 600 causes the front set of wheels to be positioned such that movable computer system 600 does not deviate from its current path.
[0248] Between the positioning of the front set of wheels that corresponds to arrow 810b2 and arrow 810c2, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the current path of movable computer system 600 will cause movable computer system 600 to be misaligned within target parking spot 806. Because a determination is made that continuing along the current path of movable computer system 600 will cause movable computer system 600 to be misaligned within target parking spot 806, movable computer system 600 causes the front set of wheels to be adjusted to an angle that causes movable computer system 600 to deviate from the current path to a new path.
[0249] Accordingly, the orientation of arrow 810c2 at FIG. 8C is different from the orientation of arrow 808c2 at FIG. 8B. More specifically, at arrow 810c2, the front set of wheels is perpendicular with respect to the position of target parking spot 806 such that movable computer system 600 is moved perpendicular to target parking spot 806 while, at arrow 808c2, the front set of wheels is angled towards the rear of target parking spot 806 such that movable computer system 600 is moved at an angle with respect to target parking spot 806.
[0250] At FIG. 8C, at the positions of the front set of wheels that are represented by arrows 810d2 and 810e2, a determination is made (e.g., by movable computer system 600 and/or by another computer system that is in communication with movable computer system 600) that continuing along the new path of movable computer system 600 will cause movable computer system 600 to be aligned within target parking spot 806. Because a determination is made that continuing along the new path of movable computer system 600 will cause movable computer system 600 to be aligned within target parking spot 806, movable computer system 600 causes the front set of wheels to be positioned at an angle such that movable computer system 600 does not deviate from the new path of movable computer system 600.
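The align-or-deviate logic walked through in paragraphs [0247]-[0250] can be sketched, under stated assumptions, as a check that predicts where the current front-wheel angle would leave the vehicle and only deviates when the prediction falls outside an alignment tolerance. The one-step projection, the tolerance, and the correction step below are hypothetical.

```swift
import Foundation

// Hypothetical sketch of the keep-or-correct decision: predict the stopping
// position implied by the current front-wheel angle and deviate to a new path
// only when that prediction is misaligned with the spot.
struct Pose {
    var x: Double        // lateral position in meters (assumed convention)
    var heading: Double  // heading in degrees
}

func predictedLateralError(pose: Pose, frontAngle: Double, spotCenterX: Double) -> Double {
    // Toy one-step projection along the combined heading and steering angle.
    let projectedX = pose.x + 3.0 * sin((pose.heading + frontAngle) * .pi / 180)
    return abs(projectedX - spotCenterX)
}

func correctedFrontAngle(pose: Pose, frontAngle: Double, spotCenterX: Double) -> Double {
    let tolerance = 0.15 // meters; assumed alignment tolerance
    if predictedLateralError(pose: pose, frontAngle: frontAngle, spotCenterX: spotCenterX) <= tolerance {
        return frontAngle // continuing the current path keeps the system aligned
    }
    // Otherwise deviate to a new path by steering toward the spot center.
    let direction: Double = pose.x < spotCenterX ? 1 : -1
    return frontAngle + direction * 5.0
}
```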
[0251] FIG. 9 is a flow diagram illustrating a method (e.g., process 900) for configuring a movable computer system in accordance with some embodiments. Some operations in process 900 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0252] As described below, process 900 provides an intuitive way for configuring a movable computer system. Process 900 reduces the cognitive burden on a user for configuring a movable computer system, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to configure a movable computer system faster and more efficiently conserves power and increases the time between battery charges.
[0253] In some embodiments, process 900 is performed at a computer system (e.g., 600 and/or 1100) that is in communication with a first movement component (e.g., 602 and/or 604) (e.g., an actuator, a wheel, and/or an axle) and a second movement component (e.g., 602 and/or 604) different from (e.g., separate from and/or not directly connected to) the first movement component. In some embodiments, the computer system is a watch, a phone, a tablet, a processor, a head-mounted display (HMD) device, a vehicle, and/or a personal computing device. In some embodiments, the computer system is in communication with one or more cameras (e.g., one or more telephoto, wide angle, and/or ultra-wide-angle cameras). In some embodiments, the first movement component is located on a first side of the computer system. In some embodiments, the second movement component is located on a second side different from and/or opposite the first side. In some embodiments, the first side of the computer system is the front and/or front side of the computer system and the second side of the computer system is the back and/or back side of the computer system, and/or vice versa. In some embodiments, the first movement component primarily causes a change in orientation of the first side of the computer system, causes the first side of the computer system to change position more than the second side of the computer system changes position, and/or impacts the first side of the computer system more than the second side of the computer system. In some embodiments, the second movement component primarily causes a change in orientation of the second side of the computer system, causes the second side of the computer system to change position more than the first side of the computer system changes position, and/or impacts the second side of the computer system more than the first side of the computer system.
[0254] While detecting a target location (e.g., 606b) (e.g., the destination, a target destination, a stopping location, a parking spot, a demarcated area, and/or a pre-defined area) in a physical environment (e.g., and while the first movement component is moving in a first direction and/or the second movement component is moving in a second direction (e.g., the same as or different from the first direction)) (e.g., and/or in response to detecting a current location of the computer system relative to the target location), the computer system detects (902) an event with respect to the target location (e.g., as described above in relation to FIG. 6A). In some embodiments, detecting the event includes detecting that the computer system is within a predefined distance from the target location. In some embodiments, detecting the event includes detecting, via an input component in communication with the computer system, an input corresponding to a request to assist navigation to the target location. In some embodiments, detecting the event includes detecting a current angle of the first and/or second movement component.
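As a non-limiting sketch, the alternative event triggers listed in paragraph [0254] (proximity to the target location, an explicit assist request, or a notable movement-component angle) can be combined as a simple disjunction; the thresholds and parameter names are assumptions.

```swift
// Hypothetical sketch: any of the example triggers above counts as "the event".
func eventDetected(distanceToTarget: Double, assistRequested: Bool, currentWheelAngle: Double) -> Bool {
    let proximityEvent = distanceToTarget < 10.0   // within an assumed predefined distance
    let inputEvent = assistRequested               // input requesting navigation assistance
    let angleEvent = abs(currentWheelAngle) > 20.0 // assumed notable current angle
    return proximityEvent || inputEvent || angleEvent
}
```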
[0255] In response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied (e.g., the first set of one or more criteria is different from the respective set of one or more criteria), the computer system configures (904) (e.g., maintains configuration or changes configuration of) (e.g., based on a distance, location, and/or direction of the target location relative to the computer system) (e.g., based on an angle of the second movement component) one or more angles of one or more movement components (e.g., 602 and/or 604) (e.g., a set of one or more movement components including the first movement component and the second movement component), wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle (e.g., 906) (e.g., a wheel angle, and/or a direction) of the first movement component (e.g., 602 and/or 604) is configured to be controlled in an automatic manner (e.g., an automatically and/or autonomously controlled manner) (e.g., by the computer system) (e.g., the angle corresponding to the first movement component is modified without detecting user input corresponding to a request to modify the angle corresponding to the first movement component and/or the angle corresponding to the first movement component is not modified directly in accordance with detected user input) and an angle (e.g., 908) of the second movement component (e.g., 602 and/or 604) is configured to be controlled in a manual manner (e.g., a manually controlled manner) different from the automatic manner (e.g., in response to detecting input, the computer system modifies the angle of the first movement component and/or the angle of the second movement component in accordance with the input) (e.g., and/or while forgoing configuring the angle of the second movement component to be controlled by the computer system). In some embodiments, the target location is detected via one or more sensors (e.g., a camera, a depth sensor, and/or a gyroscope) in communication with the computer system (e.g., one or more sensors of the computer system). In some embodiments, the target location is detected via (e.g., based on and/or using) a predefined map of the physical environment. In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first (e.g., semi-autonomous) mode. In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when the computer system is within a predefined distance from and/or direction to the target location. In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when input was detected that corresponds to a request to navigate to the target location. In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when the computer system is moving in a third direction (e.g., the same as or different from the first and/or second direction) (e.g., at least partially toward the target location). 
In some embodiments, a steering mechanism (e.g., a steering wheel, a steering yoke, an input device, a touch screen, a physical hardware device, and/or any combination thereof) in communication with the computer system does not directly control the angle of the first movement component when the first set of one or more criteria is satisfied. In some embodiments, the angle of the first movement component is reactive to the angle of the second movement component. In some embodiments, the angle of the first movement component continues to be automatically modified until the computer system is a predefined distance (e.g., 0-2 feet) from the target location. In some embodiments, the manual manner is the first manner. In some embodiments, the automatic manner is the first manner. In some embodiments, the first manner is the manual manner and is not the automatic manner. In some embodiments, in response to detecting the change with respect to the computer system and the target location and in accordance with a determination that a second set of one or more criteria, different from the first set of one or more criteria, is satisfied, the angle (e.g., a wheel angle, and/or a direction) of the first movement component and the angle of the second movement component continue to be controlled in the first manner. In some embodiments, in response to detecting the change with respect to the computer system and the target location and in accordance with a determination that a second set of one or more criteria is satisfied, the computer system forgoes configuring the angle of the first movement component to be controlled in the automatic manner. In some embodiments, the event is detected while navigating to a destination in the physical environment. In some embodiments, the event is detected while the angle of the first movement component and the angle of the second movement component are configured to be controlled in a first manner (e.g., manually (e.g., by a user of the computer system and/or by a person), semi-manually, semi-autonomously, and/or fully autonomously (e.g., by one or more computer systems and not by a person and/or user of the computer system) (e.g., by the computer system and/or a user of the computer system)). In some embodiments, configuring the angle of the first movement component and the angle of the second movement component to be controlled in the first manner includes forgoing configuring the angle of the first movement component and/or the angle of the second movement component to be controlled by the computer system. In some embodiments, configuring the angle of the first movement component and the angle of the second movement component to be controlled in the first manner includes configuring the angle of the first movement component and/or the angle of the second movement component to be controlled based on input (e.g., user input) detected via one or more sensors in communication with the computer system. In some embodiments, the angle of the first movement component and the angle of the second movement component are configured to be controlled in the first manner when the computer system is configured to be at least partially manually controlled.
In some embodiments, the angle of the first movement component and the angle of the second movement component are configured to be controlled in the first manner when the computer system is at least a predefined distance from the destination. In some embodiments, the angle of the first movement component and the angle of the second movement component are configured to be controlled in the first manner when the computer system is within a predefined distance from the destination. In some embodiments, in response to detecting the event and in accordance with a determination that a third set of one or more criteria is satisfied, the computer system configures the angle of the first movement component and/or the angle of the second movement component to be manually controlled. In some embodiments, in response to detecting the event and in accordance with a determination that a fourth set of one or more criteria is satisfied, the computer system configures the angle of the first movement component and/or the angle of the second movement component to be controlled (e.g., automatically, autonomously, and/or at least partially based on a portion (e.g., a detected object and/or a detected symbol) of the physical environment) by the computer system. In some embodiments, navigating includes displaying one or more navigation instructions corresponding to the destination. In some embodiments, navigating includes, at a first time, automatically controlling the first movement component and/or the second movement component based on a determined path to the destination. Causing an angle of the first movement component to be controlled in an automatic manner and an angle of the second movement component to be controlled in a manual manner in response to detecting an event and the first set of one or more criteria being satisfied allows the computer system to partially assist a user in reaching the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
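The split control described in paragraph [0255] can be sketched as follows; the particular criteria checked (mode, prior request, distance) are drawn from the optional criteria listed above, but the structure, names, and threshold are assumptions rather than the claimed implementation.

```swift
// Hypothetical sketch: on the event, evaluate the criteria and split control
// of the two movement components. Criteria and threshold are illustrative.
enum ControlManner { case automatic, manual }

struct MovementComponent {
    var angle: Double
    var manner: ControlManner = .manual
}

struct MovableSystem {
    var first = MovementComponent(angle: 0)
    var second = MovementComponent(angle: 0)
    var isSemiAutonomousMode = false
    var navigationToTargetRequested = false
    var distanceToTarget = Double.infinity

    // Example criteria set assembled from the optional criteria listed above.
    var firstCriteriaSatisfied: Bool {
        isSemiAutonomousMode && navigationToTargetRequested && distanceToTarget < 15
    }

    mutating func handleTargetLocationEvent() {
        guard firstCriteriaSatisfied else { return }
        first.manner = .automatic // modified without direct steering input
        second.manner = .manual   // still follows the steering mechanism
    }
}
```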
[0256] In some embodiments, after configuring the one or more angles of the one or more movement components (e.g., 602 and/or 604) (e.g., and/or while an angle of the first movement component is configured to be controlled in an automatic manner and an angle of the second movement component is configured to be controlled in a manual manner), the computer system detects a current angle of the second movement component (e.g., 602 and/or 604). In some embodiments, the current angle of the second movement component is set based on input detected via one or more input devices (e.g., a camera and/or a steering mechanism (e.g., a steering wheel, a steering yoke, an input device, a touch screen, a physical hardware device, and/or any combination thereof)) in communication with the computer system. In some embodiments, in response to detecting the current angle of the second movement component and in accordance with a determination that the current angle of the second movement component is a first angle, the computer system automatically modifies (e.g., based on the current angle of the second movement component) a current angle of the first movement component (e.g., 602 and/or 604) to be a second angle (e.g., from an angle to a different angle) (e.g., the first angle or a different angle) (e.g., without automatically modifying a current angle of the second movement component) (e.g., as described above in relation to FIG. 6B). In some embodiments, in response to detecting the current angle of the second movement component, the current angle of the first movement component is automatically modified a first amount in accordance with a determination that the current angle of the second movement component is the first angle. In some embodiments, in response to detecting the current angle of the second movement component and in accordance with a determination that the current angle of the second movement component is a third angle different from the first angle, the computer system automatically modifies (e.g., based on the current angle of the second movement component) the current angle of the first movement component to be a fourth angle (e.g., the second angle or an angle different from the second angle) different from the second angle (e.g., as described above in relation to FIG. 6B) (e.g., without automatically modifying a current angle of the second movement component). In some embodiments, the current angle of the first movement component is automatically modified in accordance with and/or based on the current angle of the second movement component. In some embodiments, the current angle of the first movement component is automatically modified to compensate for, match, offset, or be opposite of the current angle of the second movement component. In some embodiments, the current angle of the first movement component is automatically modified relative to the target location (e.g., such that the computer system is directed, positioned, and/or oriented to head to the target location). In some embodiments, in response to detecting the current angle of the second movement component, the current angle of the first movement component is automatically modified a second amount different from the first amount in accordance with a determination that the current angle of the second movement component is the third angle.
Automatically modifying a current angle of the first movement component based on a current angle of the second movement component allows the computer system to adapt the current angle of the first movement component (which, in some embodiments, is being automatically controlled) to the current angle of the second movement component (which, in some embodiments, is being manually controlled), thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0257] In some embodiments, after configuring the one or more angles of the one or more movement components (e.g., 602 and/or 604) (e.g., and/or while an angle of the first movement component is configured to be controlled in an automatic manner and an angle of the second movement component is configured to be controlled in a manual manner), the computer system detects a current location of the computer system (e.g., 600 and/or 1100). In some embodiments, in response to detecting the current location of the computer system and in accordance with a determination that the current location of the computer system is a first orientation (e.g., direction and/or heading) (and/or location) relative to the target location (e.g., 606b), the computer system automatically modifies a current angle of the first movement component (e.g., 602 and/or 604) to be a fifth angle (e.g., from an angle to a different angle) (e.g., without automatically modifying a current angle of the second movement component) (e.g., as described above in relation to FIG. 6B). In some embodiments, in response to detecting the current location of the computer system, the current angle of the first movement component is automatically modified a third amount in accordance with a determination that the current location of the computer system is the first orientation relative to the target location. In some embodiments, in response to detecting the current location of the computer system and in accordance with a determination that the current location of the computer system is a second orientation relative to the target location, wherein the second orientation is different from the first orientation, the computer system automatically modifies (e.g., based on the second orientation) the current angle of the first movement component to be a sixth angle different from the fifth angle (e.g., as described above in relation to FIG. 6B) (e.g., without automatically modifying a current angle of the second movement component). In some embodiments, the current angle of the first movement component is automatically modified in accordance with and/or based on the current location of the computer system. In some embodiments, the current angle of the first movement component is automatically modified to compensate for, match, offset, or be opposite of a current angle of the second movement component. In some embodiments, the current angle of the first movement component is automatically modified relative to the target location (e.g., such that the computer system is directed, positioned, and/or oriented to head to the target location). In some embodiments, in response to detecting the current location of the computer system, the current angle of the first movement component is automatically modified a fourth amount different from the third amount in accordance with a determination that the current location of the computer system is the second orientation relative to the target location. Automatically modifying the current angle of the first movement component based on a current location of the computer system relative to the target location allows the computer system to automatically align the first movement component with the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0258] In some embodiments, after configuring the one or more angles of the one or more movement components (e.g., 602 and/or 604) (e.g., and/or while an angle of the first movement component is configured to be controlled in an automatic manner and an angle of the second movement component is configured to be controlled in a manual manner), the computer system detects a current location of an object external to (e.g., and/or separate and/or different from) the computer system (e.g., 600 and/or 1100). In some embodiments, in response to detecting the current location of the object external to the computer system and in accordance with a determination that the current location of the object is a first location, the computer system automatically modifies a current angle of the first movement component (e.g., 602 and/or 604) to be a seventh angle (e.g., from an angle to a different angle) (e.g., without automatically modifying a current angle of the second movement component) (e.g., as described above in relation to FIG. 6B). In some embodiments, in response to detecting the current location of the object, the current angle of the first movement component is automatically modified a fifth amount in accordance with a determination that the current location of the object is the first location. In some embodiments, in response to detecting the current location of the object external to the computer system and in accordance with a determination that the current location of the object is a second location different from the first location, the computer system automatically modifies (e.g., based on the second location) the current angle of the first movement component to be an eighth angle different from the seventh angle (e.g., as described above in relation to FIG. 6B) (e.g., without automatically modifying a current angle of the second movement component). In some embodiments, the current angle of the first movement component is automatically modified in accordance with and/or based on a current location of the computer system. In some embodiments, the current angle of the first movement component is automatically modified to compensate for, match, offset, or be opposite of a current angle of the second movement component. In some embodiments, the current angle of the first movement component is automatically modified relative to the target location (e.g., such that the computer system is directed, positioned, and/or oriented to head to the target location). In some embodiments, in response to detecting the current location of the object, the current angle of the first movement component is automatically modified a sixth amount different from the fifth amount in accordance with a determination that the current location of the object is the second location. Automatically modifying the current angle of the first movement component based on a current location of an object external to the computer system allows the computer system to avoid the object, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
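Paragraphs [0256]-[0258] describe three inputs that can drive the automatically controlled angle: the manually set angle of the second component, the system's orientation relative to the target, and the location of an external object. A single hypothetical update rule combining the three is sketched below; the gains, the avoidance rule, and the steering limit are invented for illustration and are not the claimed method.

```swift
import Foundation

// Hypothetical sketch: recompute the automatic angle from three signals.
func automaticFirstAngle(secondAngle: Double,
                         headingErrorToTarget: Double,
                         obstacleBearing: Double?,
                         obstacleDistance: Double?) -> Double {
    var angle = 0.5 * headingErrorToTarget // steer to reduce heading error ([0257])
    angle += -0.3 * secondAngle            // partially offset the manual angle ([0256])
    if let bearing = obstacleBearing, let distance = obstacleDistance, distance < 2.0 {
        // Steer away from a nearby external object; closer objects turn harder ([0258]).
        angle += (bearing > 0 ? -1.0 : 1.0) * (2.0 - distance) * 10.0
    }
    return max(-45.0, min(45.0, angle)) // assumed physical steering limit
}
```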
[0259] In some embodiments, before detecting the event with respect to the target location (e.g., 606b), the computer system detects, via one or more input devices (e.g., the first movement component, the second movement component, a different movement component, a camera, a touch-sensitive surface, a physical input mechanism, a steering mechanism, and/or another computer system separate from the computer system) in communication with (e.g., of and/or integrated with) the computer system (e.g., 600 and/or 1100), an input (e.g., a tap input and/or a non-tap input (e.g., a verbal instruction, a hand motion, a swipe motion, and/or a gaze direction) on a location corresponding to the target location and/or a control corresponding to the target location) corresponding to selection of the target location from one or more available locations (e.g., one or more known locations and/or detected locations, such as a location in a map and/or detected via a sensor of the computer system), wherein the event occurs while navigating to the target location (e.g., as described above in relation to FIG. 6A). In some embodiments, after and/or in response to detecting the input corresponding to selection of the target location, the computer system navigates to the target location. Causing an angle of the first movement component to be controlled in an automatic manner and an angle of the second movement component to be controlled in a manual manner while navigating to the target location allows the computer system to partially assist a user in reaching the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0260] In some embodiments, the input corresponds to (e.g., manually maintaining when within a threshold distance from the target location, modifying, and/or changing) an angle of the second movement component (e.g., 602 and/or 604) (e.g., as described above in relation to FIG. 6A).
[0261] In some embodiments, after configuring the one or more angles of the one or more movement components (e.g., 602 and/or 604): an angle of a third movement component (e.g., 602 and/or 604) is configured to be controlled in the automatic manner (e.g., based on configuring the one or more angles); and an angle of a fourth movement component (e.g., 602 and/or 604) is configured to be controlled in the manual manner (e.g., based on configuring the one or more angles). In some embodiments, the third movement component is different from the first movement component (e.g., 602 and/or 604) and the second movement component (e.g., 602 and/or 604). In some embodiments, the fourth movement component is different from the first movement component, the second movement component, and the third movement component (e.g., as described above in relation to FIGS. 6A and 6B). In some embodiments, the third movement component is automatically modified differently than the first movement component when configured to be controlled in the automatic manner. Causing angles of multiple movement components to be controlled in an automatic manner and angles of multiple movement components to be controlled in a manual manner in response to detecting an event and the first set of one or more criteria being satisfied allows the computer system to partially assist a user in reaching the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0262] In some embodiments, configuring the one or more angles of one or more movement components (e.g., 602 and/or 604) includes, in accordance with a determination that the target location (e.g., 606b) is a first type of target location (e.g., a parking spot perpendicular to traffic) (e.g., a location with a first orientation), configuring the angle of the first movement component (e.g., 602 and/or 604) to converge to (e.g., be, reach over time, and/or change over time to be) a target angle at the target location (e.g., as described above in relation to FIG. 6A). In some embodiments, configuring the angle of the first movement component to converge to the target angle at the target location includes configuring the angle of the first movement component to be an intermediate angle different from the target angle before reaching the target location. In some embodiments, the intermediate angle is an angle different from an angle of the first movement component when detecting the event. In some embodiments, the intermediate angle is an angle between an angle of the first movement component when detecting the event and the target angle. Configuring the angle of the first movement component to converge to a target angle at the target location allows the computer system to partially assist a user in reaching the target angle at the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0263] In some embodiments, configuring the one or more angles of one or more movement components (e.g., 602 and/or 604) includes, in accordance with a determination that the target location (e.g., 606b) is a second type (e.g., different from the first type) of target location (e.g., a parking spot parallel to traffic) (e.g., a location with a second orientation different from the first orientation), configuring the angle of the first movement component (e.g., 602 and/or 604) to converge to (e.g., be, reach over time, and/or change over time to be): a first target angle at a first point of navigating to the target location and a second target angle at a second point (e.g., the target location or a different location) of navigating to the target location. In some embodiments, the second target angle is different from the first target angle. In some embodiments, the second point is different from the first point (e.g., as described above in relation to FIG. 6F). In some embodiments, configuring the angle of the first movement component to converge to the first target angle includes configuring the angle of the first movement component to be a first intermediate angle different from the first target angle before reaching the first point. In some embodiments, the first intermediate angle is an angle different from an angle of the first movement component when detecting the event. In some embodiments, the first intermediate angle is an angle between an angle of the first movement component when detecting the event and the first point. In some embodiments, configuring the angle of the first movement component to converge to the second target angle includes configuring the angle of the first movement component to be a second intermediate angle (e.g., different from the first intermediate angle) different from the second target angle before reaching the second point and/or the target location. In some embodiments, the second intermediate angle is an angle different from an angle of the first movement component when detecting the event and/or when at the first point. In some embodiments, the second intermediate angle is an angle between an angle of the first movement component when detecting the event (e.g., and/or when at the first point) and the second point (e.g., and/or the target location). Configuring the angle of the first movement component to converge to different target angles at different points while navigating to the target location allows the computer system to partially assist a user in reaching a final orientation at the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
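For the second type of target location in paragraph [0263], the first-component angle converges to one target angle at a first point of the approach and a different target angle at a second point. A hypothetical two-phase schedule (with invented waypoints and angles) is sketched below.

```swift
// Hypothetical sketch: a piecewise angle schedule over the maneuver.
struct AnglePhase {
    let untilProgress: Double // fraction of the maneuver this phase covers
    let targetAngle: Double   // degrees; assumed convention
}

// Assumed schedule for a parallel-style spot: swing out, then tuck back in.
let parallelSchedule = [
    AnglePhase(untilProgress: 0.5, targetAngle: 30),
    AnglePhase(untilProgress: 1.0, targetAngle: -20),
]

func targetAngle(atProgress progress: Double, schedule: [AnglePhase]) -> Double {
    for phase in schedule where progress <= phase.untilProgress {
        return phase.targetAngle
    }
    return schedule.last?.targetAngle ?? 0
}
```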
[0264] In some embodiments, configuring the one or more angles of one or more movement components (e.g., 602 and/or 604) includes, in accordance with a determination that the target location (e.g., 606b) is a third type (e.g., different from the first type and/or the second type) (e.g., the second type) of target location, configuring the angle of the first movement component (e.g., 602 and/or 604) to be controlled (1) in an automatic manner for a first portion of a maneuver (e.g., while navigating to the target location (e.g., after detecting the event)) (e.g., a set and/or course of one or more actions and/or movements along a path) and (2) in a manual manner for a second portion of the maneuver. In some embodiments, the second portion is different from the first portion (e.g., as described above in relation to FIG. 7A). In some embodiments, at least partially while the angle of the first movement component is configured to be controlled in an automatic manner, the angle of the second movement component is configured to be controlled in a manual manner. In some embodiments, at least partially while the angle of the first movement component is configured to be controlled in a manual manner, the angle of the second movement component is configured to be controlled in an automatic manner. Configuring the angle of the first movement component to be controlled (1) in an automatic manner for a first portion of a maneuver and (2) in a manual manner for a second portion of the maneuver, different from the first portion, allows the computer system to adapt to different portions of the maneuver and provide assistance where needed, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0265] In some embodiments, in response to detecting the event and in accordance with a determination that a fifth set of one or more criteria is satisfied, wherein the fifth set of one or more criteria is different from the first set of one or more criteria (e.g., the fifth set of one or more criteria is different from the respective set of one or more criteria), the computer system configures (e.g., maintains configuration or changes configuration of) (e.g., based on a distance, location, and/or direction of the target location relative to the computer system) (e.g., based on an angle of the second movement component) one or more angles of one or more movement components (e.g., 602 and/or 604) (e.g., a set of one or more movement components including the first movement component and the second movement component), wherein the first set of one or more criteria includes a criterion that is satisfied in accordance with a determination that the computer system (e.g., 600 and/or 1100) is in a first direction relative to the target location (e.g., 606b) when (e.g., and/or at the time of) detecting the event, wherein the fifth set of one or more criteria includes a criterion that is satisfied in accordance with a determination that the computer system is in a second direction relative to the target location when (e.g., and/or at the time of) detecting the event, wherein the second direction is different from (e.g., opposite of) the first direction, and wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the fifth set of one or more criteria is satisfied (e.g., as described above at FIGS. 7A and 8A): an angle of the first movement component (e.g., 602 and/or 604) is configured to be controlled in a manual manner (e.g., and/or while forgoing configuring the angle of the first movement component to be controlled by the computer system) and an angle of the second movement component (e.g., 602 and/or 604) is configured to be controlled in an automatic manner. In some embodiments, the fifth set of one or more criteria includes a criterion that is satisfied when the computer system is in the first (e.g., semi-autonomous) mode. In some embodiments, the fifth set of one or more criteria includes a criterion that is satisfied when the computer system is within a predefined distance from and/or direction to the target location. In some embodiments, the fifth set of one or more criteria includes a criterion that is satisfied when input was detected that corresponds to a request to navigate to the target location. In some embodiments, a steering mechanism (e.g., a steering wheel, a steering yoke, an input device, a touch screen, a physical hardware device, and/or any combination thereof) in communication with the computer system does not directly control the angle of the second movement component when the fifth set of one or more criteria is satisfied. In some embodiments, the steering mechanism does not directly control the angle of the first movement component when the fifth set of one or more criteria is satisfied. In some embodiments, the angle of the second movement component is reactive to the angle of the first movement component. In some embodiments, the angle of the second movement component continues to be automatically modified until the computer system is a predefined distance (e.g., 0-2 feet) from the target location.
Controlling different movement components depending on a direction of the computer system relative to the target location allows the computer system to adapt to different orientations and/or approaches to the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
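The direction-dependent assignment in paragraph [0265] maps cleanly to a small dispatch: the first criteria set automates the first component, and the fifth criteria set automates the second. Which physical approach (for example, backing in versus pulling in forward) corresponds to which direction is an assumption.

```swift
// Sketch of the direction-dependent split; the enum labels are assumptions.
enum ControlManner { case automatic, manual }
enum RelativeDirection { case first, second } // e.g., backing in vs. pulling in forward

func assignControl(direction: RelativeDirection) -> (first: ControlManner, second: ControlManner) {
    switch direction {
    case .first:  return (first: .automatic, second: .manual) // first set of criteria
    case .second: return (first: .manual, second: .automatic) // fifth set of criteria
    }
}
```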
[0266] In some embodiments, after detecting the event and while navigating to the target location (e.g., 606b) (e.g., and/or while an angle of the first movement component is configured to be controlled in an automatic manner and an angle of the second movement component is configured to be controlled in a manual manner), the computer system detects misalignment of the second movement component (e.g., 602 and/or 604) relative to the target location (e.g., while the second movement component is being controlled in a manual manner). In some embodiments, in response to detecting misalignment of the second movement component relative to the target location, the computer system provides, via one or more output devices (e.g., a speaker, a display generation component, and/or a steering mechanism) in communication with the computer system (e.g., 600 and/or 1100), feedback (e.g., visual, auditory, and/or haptic feedback) with respect to a current angle of the second movement component (e.g., as described above in relation to FIG. 6B). In some embodiments, the feedback corresponds to an angle different from the current angle (e.g., suggesting to change the current angle of the second movement component to the angle different from the current angle). Providing feedback with respect to a current angle of the second movement component in response to detecting misalignment of the second movement component relative to the target location allows the computer system to prompt a user when the misalignment occurs and enable the user to fix the misalignment, thereby providing improved feedback and/or performing an operation when a set of conditions has been met without requiring further user input.
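The misalignment feedback in paragraph [0266] can be sketched as a threshold check that suggests a corrected angle; the threshold, the message, and returning a string in place of real visual, auditory, or haptic output are all assumptions.

```swift
// Hypothetical sketch: emit feedback only when misalignment exceeds a threshold.
func misalignmentFeedback(currentSecondAngle: Double, suggestedAngle: Double) -> String? {
    let threshold = 5.0 // degrees; assumed misalignment threshold
    guard abs(currentSecondAngle - suggestedAngle) > threshold else { return nil }
    return "Adjust steering toward \(suggestedAngle) degrees"
}
```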
[0267] In some embodiments, while an angle of the first movement component (e.g., 602 and/or 604) is configured to be controlled in an automatic manner and before reaching the target location (e.g., 606b) (e.g., and, in some embodiments, while automatically modifying a current angle of the first movement component), the computer system detects, via one or more input devices in communication with the computer system (e.g., 600 and/or 1100), a second input. In some embodiments, the second input corresponds to a request to stop controlling the first movement component in an automatic manner. In some embodiments, in response to detecting the second input, the computer system configures an angle of the first movement component to be controlled in a manual manner (e.g., as described above in relation to FIG. 6A). Configuring an angle of the first movement component to be controlled in a manual manner instead of an automatic manner in response to detecting input while the angle of the first movement component is controlled in an automatic manner allows the computer system to respond to input by a user and switch modes in an efficient manner, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0268] In some embodiments, while an angle of the first movement component (e.g., 602 and/or 604) is configured to be controlled in an automatic manner and before reaching the target location (e.g., 606b) (e.g., and, in some embodiments, while automatically modifying a current angle of the first movement component), the computer system detects, via one or more input devices in communication with the computer system (e.g., 600 and/or 1100), an object. In some embodiments, the object is detected in and/or relative to a direction of motion of the computer system. In some embodiments, in response to detecting the object, the computer system configures an angle of the first movement component to be controlled in an automatic manner using a first path, wherein, before detecting the object, configuring the one or more angles of the one or more movement components (e.g., 602 and/or 604) in response to detecting the event includes configuring an angle of the first movement component to be controlled in an automatic manner using a second path different from the first path (e.g., as described above in relation to FIG. 6A). Configuring an angle of the first movement component to be controlled in an automatic manner using a different path in response to detecting an object allows the computer system to avoid the object, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0269] In some embodiments, after configuring the one or more angles of the one or more movement components (e.g., 602 and/or 604) in response to detecting the event and in conjunction with configuring an angle of the first movement component (e.g., 602 and/or 604) to be controlled in an automatic manner (e.g., and/or in conjunction with automatically modifying a current angle of the first movement component), the computer system causes the computer system (e.g., 600 and/or 1100) to accelerate (e.g., when not moving quickly enough to reach a particular location within the target location) or decelerate (e.g., as described above in relation to FIG. 6A) (e.g., in response to detecting that the computer system is within a predefined distance (e.g., 0-5 feet) of the target location) (e.g., while the second movement component is configured to be controlled in a manual manner). Causing the computer system to accelerate or decelerate when automatically controlling an angle of the first movement component allows the computer system to ensure that the computer system is moving at the right speed to reach and not exceed the target location, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input. In some embodiments, a computer-generated path to the target location is generated based on data captured by a different computer system separate from the computer system.
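The speed behavior in paragraph [0269] amounts to nudging speed toward a profile that reaches, but does not overshoot, the target; the ramp below is an invented example of such a profile.

```swift
// Hypothetical sketch: ramp the desired speed down near the target.
func speedAdjustment(currentSpeed: Double, distanceToTarget: Double) -> Double {
    let desiredSpeed = min(2.0, distanceToTarget * 0.4) // m/s; assumed profile
    return desiredSpeed - currentSpeed // positive: accelerate; negative: decelerate
}
```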
[0270] Note that details of the processes described above with respect to process 900 (e.g., FIG. 9) are also applicable in an analogous manner to other methods described herein. For example, process 1200 optionally includes one or more of the characteristics of the various methods described above with reference to process 900. For example, one or more movement components can be configured to be controlled in an automatic and/or manual manner using one or more techniques described above in relation to process 900, where feedback can be provided once the one or more components are configured using one or more techniques described below in relation to process 1200. For brevity, these details are not repeated below.

[0271] FIGS. 10A-10B are a flow diagram illustrating a method (e.g., process 1000) for selectively modifying movement components of a movable computer system in accordance with some embodiments. Some operations in process 1000 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0272] As described below, process 1000 provides an intuitive way for selectively modifying movement components of a movable computer system. Process 1000 reduces the cognitive burden on a user for selectively modifying movement components of a movable computer system, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to use a movable computer system faster and more efficiently conserves power and increases the time between battery charges.
[0273] In some embodiments, process 1000 is performed at a computer system (e.g., 600 and/or 1100) (e.g., as described above with respect to process 900) that is in communication with a first movement component (e.g., 602 and/or 604) (e.g., as described above with respect to process 900) and a second movement component (e.g., 602 and/or 604) different from (e.g., separate from and/or not directly connected to) the first movement component.
[0274] The computer system detects (1002) a target location (e.g., 606b) (e.g., as described above with respect to process 900) in a physical environment.
[0275] While (1004) detecting the target location in the physical environment and in accordance with (1006) a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is operating in a first mode (e.g., a semi-autonomous mode and/or a partially autonomous mode), the computer system automatically modifies (1008) (e.g., as described above with respect to process 900) the first movement component (e.g., 602 and/or 604) (e.g., an angle (e.g., a wheel angle, a direction, and/or any combination thereof) of and/or corresponding to the first movement component, a speed of and/or corresponding to the first movement component, an acceleration of and/or corresponding to the first movement component, a size of and/or corresponding to the first movement component, a shape of and/or corresponding to the first movement component, and/or a temperature of and/or corresponding to the first movement component) (e.g., the first movement component is modified without detecting user input corresponding to a request to modify the first movement component) (e.g., as described above in relation to FIG. 6A).
[0276] While (1004) detecting the target location in the physical environment and in accordance with (1006) the determination that the first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes the criterion that is satisfied when the computer system is operating in the first mode, the computer system forgoes (1010) automatically modifying (e.g., as described above with respect to process 900) the second movement component (e.g., as described above in relation to FIG. 6A) (e.g., 602 and/or 604) (e.g., an angle (e.g., a wheel angle, a direction, and/or any combination thereof) of and/or corresponding to the second movement component, a speed of and/or corresponding to the second movement component, an acceleration of and/or corresponding to the second movement component, a size of and/or corresponding to the second movement component, a shape of and/or corresponding to the second movement component, and/or a temperature of and/or corresponding to the second movement component). In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when the computer system is within a predefined distance from and/or direction to the target location. In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when input was detected that corresponds to a request to navigate to the target location. In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when the computer system is moving in a third direction (e.g., the same as or different from the first and/or second direction) (e.g., at least partially toward the target location). In some embodiments, a steering mechanism (e.g., a steering wheel, a steering yoke, an input device, a touch screen, a physical hardware device, and/or any combination thereof) in communication with the computer system does not directly control the first movement component. In some embodiments, a state of the first movement component is reactive to a state of the second movement component. In some embodiments, the first movement component continues to be automatically modified until the computer system is a predefined distance (e.g., 0-2 feet) from the target location.
[0277] While (1004) detecting the target location in the physical environment and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is operating in a second mode (e.g., a fully autonomous mode and/or a mode that is more autonomous than the first mode) different from the first mode, the computer system automatically modifies (1012) the first movement component (e.g., 602 and/or 604) and the second movement component (e.g., 602 and/or 604), wherein the second set of one or more criteria is different from the first set of one or more criteria (e.g., as described above in relation to FIG. 6A). In some embodiments, the second set of one or more criteria includes a criterion that is satisfied when the computer system is within a predefined distance from and/or direction to the target location. In some embodiments, the second set of one or more criteria includes a criterion that is satisfied when input was detected that corresponds to a request to navigate to the target location. In some embodiments, the second set of one or more criteria includes a criterion that is satisfied when the computer system is moving in the third direction. In some embodiments, a steering mechanism (e.g., a steering wheel, a steering yoke, an input device, a touch screen, a physical hardware device, and/or any combination thereof) in communication with the computer system does not directly control the first movement component and/or the second movement component. In some embodiments, a state of the first movement component is reactive to a state of the second movement component. In some embodiments, the first movement component and/or the second movement component continues to be automatically modified until the computer system is a predefined distance (e.g., 0-2 feet) from the target location.
[0278] While (1004) detecting the target location in the physical environment and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is operating in a third mode (e.g., a manual mode, a non-autonomous mode, and/or a mode that is less autonomous than the first mode and the second mode) different from the second mode and the first mode, the computer system forgoes (1014) automatically modifying the first movement component (e.g., 602 and/or 604) and the second movement component (e.g., 602 and/or 604) (e.g., as described above in relation to FIG. 6A), wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria. In some embodiments, the third set of one or more criteria includes a criterion that is satisfied when the computer system is within a predefined distance from and/or direction to the target location. In some embodiments, the third set of one or more criteria includes a criterion that is satisfied when input was detected that corresponds to a request to navigate to the target location. In some embodiments, the third set of one or more criteria includes a criterion that is satisfied when the computer system is moving in the third direction. In some embodiments, a steering mechanism (e.g., a steering wheel, a steering yoke, an input device, a touch screen, a physical hardware device, and/or any combination thereof) in communication with the computer system directly controls the first movement component and/or the second movement component. In some embodiments, a state of the first movement component is not reactive to a state of the second movement component. In some embodiments, a state of the second movement component is not reactive to a state of the first movement component. The computer system operating in three different modes that each have a different amount of automatic modification of movement components allows the computer system to adjust to different situations and assist in different amounts depending on a current situation, thereby providing additional control options without cluttering the user interface with additional displayed controls and/or performing an operation when a set of conditions has been met without requiring further user input.
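For concreteness, the mode-dependent branching of blocks (1010), (1012), and (1014) can be summarized in code. The following Swift snippet is a minimal, illustrative sketch only; the names (OperatingMode, MovementComponent, NavigationController) are hypothetical and do not appear in the disclosure.

```swift
// Hypothetical sketch of the mode-dependent control of process 1000.
enum OperatingMode {
    case semiAutonomous   // "first mode": one component automated
    case fullyAutonomous  // "second mode": both components automated
    case manual           // "third mode": neither component automated
}

struct MovementComponent {
    var angle = 0.0 // e.g., a wheel angle in degrees
    var speed = 0.0 // e.g., meters per second
}

struct NavigationController {
    var mode: OperatingMode
    var first = MovementComponent()  // e.g., element 602
    var second = MovementComponent() // e.g., element 604

    // Called repeatedly while the target location is detected.
    mutating func update(targetAngle: Double, targetSpeed: Double) {
        switch mode {
        case .semiAutonomous:
            // Automatically modify the first movement component only;
            // the second remains under manual control (block 1010).
            first.angle = targetAngle
            first.speed = targetSpeed
        case .fullyAutonomous:
            // Automatically modify both components (block 1012).
            first.angle = targetAngle
            first.speed = targetSpeed
            second.angle = targetAngle
            second.speed = targetSpeed
        case .manual:
            // Forgo automatic modification of both components (block 1014).
            break
        }
    }
}
```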
[0279] In some embodiments, while the computer system (e.g., 600 and/or 1100) is operating in the first mode and while navigating to the target location (e.g., 606b) (e.g., and/or while performing a maneuver (e.g., automatically modifying the first movement component)), the computer system detects a first event (e.g., input corresponding to a request to change a mode in which the computer system is currently operating, input directed to one or more input devices in communication with the computer system, and/or input corresponding to manually changing a current angle of the second movement component). In some embodiments, in response to detecting the first event, the computer system automatically modifies the second movement component (e.g., 602 and/or 604). In some embodiments, in response to detecting the first event, the computer system forgoes automatically modifying the first movement component (e.g., 602 and/or 604) (e.g., as described above in relation to FIG. 6A). In some embodiments, in response to detecting the first event, the computer system causes the computer system to operate in the second mode or the third mode. In some embodiments, while the computer system is operating in the second mode and while navigating to the target location (e.g., and/or while performing a maneuver (e.g., automatically modifying the first movement component or the second movement component)), the computer system detects a second event (e.g., input corresponding to a request to change a mode in which the computer system is currently operating, input directed to one or more input devices in communication with the computer system, and/or input corresponding to manually changing a current angle of the first movement component and/or the second movement component). In some embodiments, in response to detecting the second event, the computer system forgoes automatically modifying the first movement component. In some embodiments, in response to detecting the second event, the computer system forgoes automatically modifying the second movement component (e.g., as described above in relation to FIG. 6A). In some embodiments, in response to detecting the second event, the computer system causes the computer system to operate in the first mode or the third mode. In some embodiments, while the computer system is operating in the third mode and while detecting the target location in the physical environment, the computer system detects a third event (e.g., input corresponding to a request to change a mode in which the computer system is currently operating, input directed to one or more input devices in communication with the computer system, and/or input corresponding to manually changing a current angle of the first movement component and/or the second movement component). In some embodiments, in response to detecting the third event, the computer system automatically modifies the first movement component. In some embodiments, in response to detecting the third event, the computer system automatically modifies the second movement component (e.g., as described above in relation to FIG. 6A). In some embodiments, in response to detecting the third event, the computer system causes the computer system to operate in the first mode or the second mode. 
Changing the mode in which the computer system is operating while navigating to the target location allows the computer system to adjust to different situations and assist in different amounts depending on a current situation, thereby providing additional control options without cluttering the user interface with additional displayed controls and/or performing an operation when a set of conditions has been met without requiring further user input.
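The event-driven mode changes of paragraph [0279] could be modeled as a transition function. This is a sketch under stated assumptions: the disclosure permits transitions among any of the three modes, so the fallback ordering on manual steering input below is illustrative, not prescribed.

```swift
// Hypothetical mode-transition sketch for paragraph [0279].
enum Mode { case semiAutonomous, fullyAutonomous, manual }

enum NavigationEvent {
    case modeChangeRequest(Mode) // explicit request via an input device
    case manualSteeringInput     // e.g., user manually changes a wheel angle
}

func transition(from current: Mode, on event: NavigationEvent) -> Mode {
    switch event {
    case .modeChangeRequest(let requested):
        return requested
    case .manualSteeringInput:
        // Assumed behavior: manual input steps toward less automation.
        switch current {
        case .fullyAutonomous: return .semiAutonomous
        case .semiAutonomous:  return .manual
        case .manual:          return .manual
        }
    }
}
```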
[0280] In some embodiments, automatically modifying the first movement component (e.g., 602 and/or 604) includes automatically modifying an angle or (e.g., and/or) a speed of the first movement component. In some embodiments, automatically modifying the second movement component (e.g., 602 and/or 604) includes automatically modifying an angle or (e.g., and/or) a speed of the second movement component (e.g., as described above in relation to FIG. 6A). Automatically modifying an angle or a speed of a movement component depending on a current mode allows the computer system to adjust to different situations and assist in different amounts and/or ways depending on a current situation, thereby providing additional control options without cluttering the user interface with additional displayed controls and/or performing an operation when a set of conditions has been met without requiring further user input. [0281] In some embodiments, the computer system (e.g., 600 and/or 1100) operates in the first mode (e.g., while detecting the target location in the physical environment) in accordance with a determination that the target location (e.g., 606b) is a first type. In some embodiments, the computer system operates in the second mode (e.g., while detecting the target location in the physical environment) in accordance with a determination that the target location is a second type different from the first type. In some embodiments, the computer system operates in the third mode (e.g., while detecting the target location in the physical environment) in accordance with a determination that the target location is a third type different from the first type and the second type (e.g., as described above in relation to FIG. 6A). In some embodiments, a mode of the computer system is selected based on a type of the target location. In some embodiments, a type of the target location is with respect to the target location and not with respect to the computer system (e.g., a type of the target location is based on the target location) (e.g., a type of the target location is not based on the computer system). In some embodiments, a type of the target location is with respect to the target location and the computer system (e.g., a type of the target location is based on the target location and the computer system). In some embodiments, a type of the target location is with respect to a direction of the target location relative to the computer system. Selecting which mode to operate in depending on the type of the target location allows the computer system to adjust to different situations and assist in different amounts depending on a current situation, thereby providing additional control options without cluttering the user interface with additional displayed controls and/or performing an operation when a set of conditions has been met without requiring further user input.
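Paragraph [0281] selects the operating mode from the type of the target location. A minimal sketch follows; the concrete location types and their mapping to modes are assumptions chosen for illustration, since the disclosure does not name specific types.

```swift
// Hypothetical mapping from a target-location type to an operating mode.
enum TargetLocationType { case parallelSpot, perpendicularSpot, openArea }

// Reuses the illustrative OperatingMode enum from the earlier sketch.
func operatingMode(for type: TargetLocationType) -> OperatingMode {
    switch type {
    case .parallelSpot:      return .semiAutonomous  // first mode
    case .perpendicularSpot: return .fullyAutonomous // second mode
    case .openArea:          return .manual          // third mode
    }
}
```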
[0282] In some embodiments, before automatically modifying the first movement component (e.g., 602 and/or 604) or the second movement component (e.g., 602 and/or 604) (e.g., and/or before or while detecting the target location) (e.g., and/or before navigating to the target location) (e.g., and/or before or while navigating to a target destination corresponding to and/or including the target location), the computer system detects, via one or more input devices (e.g., the first movement component, the second movement component, a different movement component, a camera, a touch-sensitive surface, a physical input mechanism, a steering mechanism, and/or another computer system separate from the computer system) in communication with the computer system (e.g., 600 and/or 1100), an input (e.g., a tap input and/or non-tap input (e.g., a verbal instruction, a hand motion, a swipe motion, and/or a gaze direction)) corresponding to selection of a respective mode to operate the computer system. In some embodiments, in response to detecting the input corresponding to selection of the respective mode to operate the computer system and in accordance with a determination that the respective mode is the first mode, the computer system operates the computer system in the first mode (e.g., as described above in relation to FIG. 6A). In some embodiments, in response to detecting the input corresponding to selection of the respective mode to operate the computer system and in accordance with a determination that the respective mode is the second mode, the computer system operates the computer system in the second mode (e.g., as described above in relation to FIG. 6A). In some embodiments, before forgoing automatically modifying the first movement component or the second movement component (e.g., and/or before or while detecting the target location) (e.g., and/or before navigating to the target location) (e.g., and/or before or while navigating to a target destination corresponding to and/or including the target location), the computer system detects, via one or more input devices in communication with the computer system, a second input corresponding to selection of a respective mode to operate the computer system; and in response to detecting the second input corresponding to selection of the respective mode to operate the computer system: in accordance with a determination that the respective mode is the first mode, the computer system operates the computer system in the first mode; in accordance with a determination that the respective mode is the second mode, the computer system operates the computer system in the second mode; and in accordance with a determination that the respective mode is the third mode, the computer system operates the computer system in the third mode.
[0283] In some embodiments, the input corresponding to selection of the respective mode to operate the computer system includes an input corresponding to (e.g., changing, modifying, and/or maintaining) an angle of the first movement component (e.g., 602 and/or 604) or (e.g., and/or) the second movement component (e.g., 602 and/or 604) (e.g., as described above in relation to FIG. 6A). Selecting different modes based on an angle of a movement component allows the computer system to adjust to different situations while detecting normal navigation inputs and without requiring an explicit request to change to a mode, thereby providing additional control options without cluttering the user interface with additional displayed controls and/or performing an operation when a set of conditions has been met without requiring further user input. [0284] In some embodiments, while detecting the target location (e.g., 606b) in the physical environment, while navigating to the target location (e.g., before reaching the target location), while the computer system (e.g., 600 and/or 1100) is operating in the first mode, and after automatically modifying the first movement component (e.g., 602 and/or 604) (e.g., and/or while the second movement component is configured to be controlled in a manual manner), the computer system detects an event (e.g., detecting that the computer system is within a predefined distance from the target location, detecting that the computer system is a predefined direction and/or orientation with respect to the target location, and/or detecting that the computer system performed a particular operation and/or portion of a maneuver). In some embodiments, in response to detecting the event, the computer system forgoes automatically modifying the first movement component. In some embodiments, in response to detecting the event, the computer system automatically modifies the second movement component (e.g., 602 and/or 604) (e.g., while the computer system continues to operate in the first mode) (e.g., as described above in relation to FIG. 6A). In some embodiments, in response to detecting the event, the computer system configures (1) the first movement component to be controlled in a manual manner and (2) the second movement component to be controlled in an automatic manner. Changing which movement component is automatically controlled while navigating to the target location allows the computer system to adapt to different portions of the maneuver and provide assistance where needed, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input. In some embodiments, a computer-generated path to the target location is generated based on data captured by a different computer system separate from the computer system.
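The mid-maneuver handoff of paragraph [0284], in which automatic control passes from the first movement component to the second in response to a detected event, might look like the following sketch. The 2-foot threshold is borrowed from the example range in paragraph [0276]; the type and function names are hypothetical.

```swift
// Hypothetical sketch of the handoff event in paragraph [0284].
struct AutomationState {
    var firstIsAutomatic: Bool
    var secondIsAutomatic: Bool
}

func handoffIfNeeded(_ state: inout AutomationState, distanceToTargetFeet: Double) {
    let handoffThresholdFeet = 2.0 // illustrative, per the 0-2 feet example
    if state.firstIsAutomatic && distanceToTargetFeet <= handoffThresholdFeet {
        // Forgo automatic modification of the first component and
        // automatically modify the second instead.
        state.firstIsAutomatic = false
        state.secondIsAutomatic = true
    }
}
```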
[0285] Note that details of the processes described above with respect to process 1000 (e.g., FIGS. 10A-10B) are also applicable in an analogous manner to other methods described herein. For example, process 900 optionally includes one or more of the characteristics of the various methods described above with reference to process 1000. For example, one or more movement components can be configured to be controlled in an automatic and/or manual manner using one or more techniques described above in relation to process 900, where the computer system can adjust the one or more movement components based on how the one or more movement components are configured using one or more techniques described above in relation to process 1000. For brevity, these details are not repeated below. [0286] FIGS. 11A-11D illustrate exemplary user interfaces for redirecting a movable computer system in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 12 and 13.
[0287] In some embodiments, FIGS. 11A-11D illustrate one or more scenarios where navigation of a computer system is updated based on whether an error is detected in navigation (e.g., a failure to turn left within a time period and/or a failure to turn right into a particular parking spot). In some embodiments, based on the error being detected in navigation, a user is provided with one or more options to change the navigation (e.g., change to navigate to a different target destination, such as a parking spot and/or a different type of location) and maintain the navigation (e.g., maintain the current navigation path and/or change the navigation path to the original target destination).
[0288] In some embodiments, the navigation is automatically changed based on the error being detected in navigation. For example, a nearest possible destination (e.g., a parking spot) that is reachable is changed to be the target destination. For another example, one or more preferences of the user, one or more previous trips by the movable computer system, an object in the nearest possible destination, an environmental state (e.g., shade and/or covering) of a possible destination, and/or a type of surface of a possible destination can be used, amongst other things, to determine where and/or how to change the navigation.
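One way to realize the automatic re-targeting described above is to score reachable candidate destinations against the listed factors (user preferences, prior trips, environmental state, and surface type). The sketch below is illustrative; the weights and field names are assumptions, not disclosed values.

```swift
// Hypothetical scoring of candidate destinations after a navigation error.
struct CandidateSpot {
    var distanceMeters: Double
    var isReachable: Bool
    var isShaded: Bool         // environmental state (e.g., shade/covering)
    var surfacePenalty: Double // e.g., 0 for paved, higher for gravel
    var priorVisits: Int       // previous trips ending at this spot
}

// Lower score is better; weights are arbitrary illustrative choices.
func score(_ spot: CandidateSpot) -> Double {
    spot.distanceMeters
        + spot.surfacePenalty
        - (spot.isShaded ? 5.0 : 0.0)
        - Double(spot.priorVisits)
}

func selectNewTarget(from candidates: [CandidateSpot]) -> CandidateSpot? {
    candidates.filter { $0.isReachable }
        .min { score($0) < score($1) }
}
```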
[0289] In some embodiments, feedback is generated at a portion of a computer system, such as a steering wheel, based on the error being detected in navigation. In some embodiments, the feedback guides a user to correct, and/or to automatically cause a computer system (e.g., a movable computer system, a smart phone, a smart watch, a tablet, and/or a laptop) to correct, a navigation error for a desired navigational path, to avoid a navigation error for the desired navigational path, and/or to continue to navigate on a desired navigational path.
[0290] FIG. 11A illustrates computer system 1100. In some embodiments, computer system 1100 is the movable computer system. In other embodiments, computer system 1100 is in communication with the movable computer system. As illustrated in FIG. 11A, computer system 1100 displays navigation user interface 1122. Navigation user interface 1122 is displayed as a visual tool to assist a user in navigating to a target destination (e.g., a parking spot, a grocery store, an office building, and/or a home). At FIG. 11A, the target destination is a parking spot. As illustrated in FIG. 11A, navigation user interface 1122 includes navigation instructions 1102, navigation representation 1104, and destination information 1106. Navigation instructions 1102 includes both graphical (e.g., an arrow and/or a representation of a traffic signal) and textual instructions (e.g., turn left, turn right, and/or turn around) to assist the user in navigating towards the target destination. At FIG. 11A, navigation instructions 1102 indicate that the movable computer system must turn left in two feet.
[0291] Navigation representation 1104 includes movable computer system representation 1110, path representation 1112, parking spots representation 1108, target position representation 1114, and target destination representation 1108b. Target destination representation 1108b is a representation of the target destination of the movable computer system. In some embodiments, movable computer system representation 1110 is a real-time representation of the movable computer system that is navigating towards the target destination. The positioning of movable computer system representation 1110 and target destination representation 1108b within navigation user interface 1122 is representative of a real-world representation of the movable computer system relative to the target destination. Path representation 1112 is a representation of the path that the movable computer system must travel to navigate from the current position of the movable computer system to the target destination. Target position representation 1114 is a representation of a target position of the movable computer system once the movable computer system has arrived at the target destination.
[0292] Destination information 1106 includes information regarding the distance between the movable computer system and the target destination, the amount of time left that the movable computer system must travel before the movable computer system arrives at the target destination, and the estimated time at which the movable computer system will arrive at the target destination. At FIG. 11A, the movable computer system is traveling in a forward direction along the path that is represented by path representation 1112.
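A model backing destination information 1106 could be as simple as the following sketch; the type and property names are hypothetical and chosen only for illustration.

```swift
import Foundation

// Hypothetical model for destination information 1106.
struct DestinationInfo {
    var remainingDistanceMeters: Double
    var remainingTravelTime: TimeInterval

    // Estimated arrival derived from the remaining travel time.
    var estimatedArrival: Date {
        Date().addingTimeInterval(remainingTravelTime)
    }
}
```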
[0293] At FIG. 11B, a determination is made (e.g., by the movable computer system and/or by another computer system that is in communication with the movable computer system) that the movable computer system must turn left for the movable computer system to arrive at the target destination. Because of this determination, computer system 1100 updates navigation instructions 1102 to indicate that the movable computer system must turn left in zero feet. Further, at FIG. 11B, computer system 1100 updates the display of path representation 1112 to indicate that the movable computer system must turn left to arrive at the target destination. At FIG. 11B, the movable computer system continues in a forward direction along the path and does not turn left.
[0294] At FIG. 11C, a determination is made that the movable computer system has gone too far and cannot reach target position representation 1114 (or cannot park inside the parking spot indicated by target destination representation 1108b). In other words, at FIG. 11C, a determination is made that an error has occurred with respect to navigating to the target destination. As illustrated in FIG. 11C, computer system 1100 displays navigation decision user interface 1116, which includes maintain navigation control 1118 and change navigation control 1120. In some embodiments, maintain navigation control 1118 includes a representation (e.g., text, symbols, and/or arrows) of the current target destination and change navigation control 1120 includes a representation of a new target destination (e.g., target destination representation 1108a of FIG. 11D).
[0295] In some embodiments, navigation decision user interface 1116 includes an indication of an error, such as an indication of the movable computer system being out of range of the target destination and/or an indication that navigation of the movable computer system cannot be corrected to reach the target destination (e.g., the movable computer system cannot turn left into the parking spot when it is zero feet from the parking spot). In some embodiments, in response to detecting an input directed to maintain navigation control 1118, computer system 1100 maintains display of navigation user interface 1122 of FIG. 11B and/or the movable computer system continues to navigate based on the previous navigation instructions (e.g., navigation instructions described above in relation to FIGS. 11A-11B). In some embodiments, in response to detecting an input directed to maintain navigation control 1118, computer system 1100 displays a new path to the target destination (e.g., target destination representation 1108b and not a new target destination, such as target destination representation 1108a of FIG. 11D). At FIG. 11C, movable computer system 600 detects input 1105c, which is directed to change navigation control 1120. In some embodiments, input 1105c includes a
verbal input and/or one or more other inputs, such as a tap input, an air gesture, and/or a pressing input.
[0296] As illustrated in FIG. 11D, in response to detecting input 1105c, computer system 1100 updates display of navigation user interface 1122 with new navigation instructions. As illustrated in FIG. 11D, navigation user interface 1122 includes target position representation 1124 at target destination representation 1108a, which is a different parking spot than target destination representation 1108b of FIG. 11B. Target destination representation 1108a is further away from the movable computer system (e.g., as indicated by 1110) in FIG. 11D than target destination representation 1108b is from the movable computer system in FIG. 11D. At FIG. 11D, in response to detecting input 1105c, computer system 1100 has selected a different parking spot to which the movable computer system is able to navigate using navigation instructions 1102 of FIG. 11D. In some embodiments, in response to detecting input 1105c, computer system 1100, the movable computer system, or another computer system causes the movable computer system to automatically navigate differently (e.g., to navigate according to the changed navigation) (e.g., without detecting user input after detecting input 1105c) (e.g., at least partially navigate, where at least some components of the computer system are automatically controlled, or more-fully navigate, where an increased number and/or all components of the movable computer system are automatically controlled). In some embodiments, in response to detecting input 1105c, computer system 1100, the movable computer system, or another computer system does not cause the movable computer system to automatically navigate differently; rather, the movable computer system is manually navigated. In some embodiments, maintain navigation control 1118 and/or change navigation control 1120 is provided via audio output, where a user is informed that options to maintain the current navigation and/or change the current navigation are available.
[0297] Looking back at FIG. 11C, one or more additional operations can be performed when the determination is made that an error has occurred with respect to navigating to the target destination. In some embodiments, feedback is generated at a portion of the movable computer system (e.g., represented by movable computer system representation 1110). In some embodiments, the portion of the movable computer system is an input component, such as a steering wheel, and/or a component that allows a user to navigate the movable computer system. In some embodiments, feedback includes one or more of visual, auditory, and/or haptic feedback. For example, feedback can include causing one or more lights of and/or that are in communication with the movable computer system to flash, one or more playback devices of and/or that are in communication with the movable computer system to output an audible tone, and/or one or more hardware components of and/or that are in communication with the movable computer system to pulsate.
[0298] In some embodiments, feedback can be generated at different portions of the movable computer system based on the determination that an error has occurred with respect to navigating to the target destination. In some embodiments, feedback can be generated at a screen portion of the movable computer system and other feedback can be generated at a steering wheel portion of the movable computer system. In some embodiments, feedback can be generated at a particular portion of the movable computer system based on the distance that the movable computer system is away from the target destination and/or how the movable computer system is currently moving with respect to the target destination. In some embodiments, feedback can be generated at the portion of the movable computer system based on an external object being detected (e.g., feedback can be generated that would prevent a steering wheel from being turned such that the movable computer system would hit a wall, tree, and/or stump).
[0299] In some embodiments, generating the feedback includes automatically rotating the portion of the movable computer system in a direction. Using the example above, in some embodiments, the portion of the movable computer system would be automatically rotated at FIG. 11C so that the movable computer system would start turning left according to navigation instructions 1102. Besides automatically rotating the portion of the movable computer system in a direction, resistance applied to the portion of the movable computer system could be increased and/or decreased to generate the feedback. In such examples, an application of resistance could be increased at the portion of the movable computer system to prevent the user from turning right (e.g., because the user needs to turn left according to navigation instructions 1102) and/or an application of resistance could be decreased at the portion of the movable computer system to make turning the portion left easier. In some embodiments, feedback can be generated at the portion differently based on the distance from the target destination. In some embodiments, automatically rotating the portion of the movable computer system in a direction and/or increasing and/or decreasing resistance at the portion of the movable computer system to generate the feedback can occur at a magnitude based on the distance between the movable computer system and a target destination and/or an obstacle. In some of these examples, the portion of the movable computer system is automatically rotated with a greater force as the movable computer system gets closer to the target destination (and a determination is made that the movable computer system is not on the correct path and/or is not navigating according to navigation instructions 1102).
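The distance-scaled rotation and resistance described in paragraph [0299] could be computed as in the sketch below. The linear scaling law and the 50-meter falloff are assumptions; the disclosure states only that the magnitude can grow as the movable computer system nears the target while off-path.

```swift
// Hypothetical distance-scaled steering feedback per paragraph [0299].
func steeringFeedback(distanceToTargetMeters: Double,
                      headingErrorDegrees: Double) -> (torque: Double, resistance: Double) {
    // Gain grows from 0 (far away) toward 1 (at the target).
    let proximityGain = max(0.0, 1.0 - distanceToTargetMeters / 50.0)
    // Corrective torque nudges the steering wheel toward the needed heading
    // (e.g., toward the left turn of navigation instructions 1102).
    let torque = proximityGain * headingErrorDegrees * 0.1
    // Resistance opposes turning further in the wrong direction.
    let resistance = proximityGain * min(abs(headingErrorDegrees) / 45.0, 1.0)
    return (torque, resistance)
}
```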
[0300] FIG. 12 is a flow diagram illustrating a method (e.g., process 1200) for providing feedback based on an orientation of a movable computer system in accordance with some embodiments. Some operations in process 1200 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0301] As described below, process 1200 provides an intuitive way for providing feedback based on an orientation of a movable computer system. Process 1200 reduces the cognitive burden on a user for providing feedback based on an orientation of a movable computer system, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to provide feedback based on an orientation of a movable computer system faster and more efficiently conserves power and increases the time between battery charges.
[0302] In some embodiments, process 1200 is performed at a computer system (e.g., 600 and/or 1100) (e.g., as described above with respect to process 900) that is in communication with an input component (e.g., a steering mechanism, a steering wheel, a steering yoke, an input device, a touch screen, a camera, and/or a physical hardware device) and an output component (e.g., 602 and/or 604) (e.g., an actuator, a wheel, and/or an axle), wherein the input component is configured to control an orientation (e.g., a direction and/or an angle) of the output component. In some embodiments, the input component is configured to detect input, such as input corresponding to a user of the computer system. In some embodiments, the input component detects input within an at least partial enclosure of the computer system. In some embodiments, the output component is located on a first side of the computer system. In some embodiments, the output component primarily causes a change in orientation of the first side of the computer system. In some embodiments, the output component causes a change in direction, speed, and/or acceleration of the computer system.
[0303] The computer system detects (1202) a target location (606b, 706, 806, 1108b, and/or 1108a) (e.g., as described above with respect to process 900 and/or process 1000) in a physical environment. [0304] While (1204) detecting the target location (606b, 706, 806, 1108b, and/or 1108a) in the physical environment (e.g., and while the output component is moving in a first direction) (e.g., and/or in response to detecting a current location of the computer system relative to the target location) (e.g., and while the computer system is in a first (e.g., semiautomatic) and/or a third (e.g., manual) mode, as described above with respect to process 1000) and in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is in a first orientation with respect to the target location (606b, 706, 806, 1108b, and/or 1108a), the computer system provides (1206) first feedback (e.g., visual, auditory, and/or haptic) with respect to (e.g., using, based on, via, by, and/or in proximity to) the input component (e.g., as described above in relation to FIG.
11C). In some embodiments, the first feedback does not change an orientation and/or position of the computer system. In some embodiments, the first feedback indicates, corresponds to, and/or is with respect to a new orientation with respect to the target location, the new orientation different from the first orientation. In some embodiments, the first feedback is provided internal to an enclosure corresponding to the computer system.
[0305] While (1204) detecting the target location in the physical environment and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is in a second orientation with respect to the target location (606b, 706, 806, 1108b, and/or 1108a), the computer system provides (1208) second feedback (e.g., visual, auditory, and/or haptic) with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback (e.g., as described above in relation to FIG.
11C). In some embodiments, the second feedback is a different type of feedback than the first feedback. In some embodiments, the second feedback is the same type of feedback as the first feedback. In some embodiments, the second feedback does not change an orientation and/or position of the computer system. In some embodiments, the second feedback indicates, corresponds to, and/or is with respect to a new orientation with respect to the target location, the new orientation different from the first orientation. In some embodiments, the second feedback is provided internal to an enclosure corresponding to the computer system.
Providing different feedback depending on an orientation of the computer system with respect to the target location allows the computer system to guide and/or assist with navigating to the target location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0306] In some embodiments, providing the first feedback includes rotating the input component (e.g., a rotatable input mechanism). In some embodiments, providing the second feedback includes rotating the input component (e.g., as described above in relation to FIG. 11C). In some embodiments, providing the first feedback includes rotating the input component a first amount. In some embodiments, providing the second feedback includes rotating the input component a second amount different from the first amount. In some embodiments, providing the first feedback includes rotating the input component a first direction. In some embodiments, providing the second feedback includes rotating the input component a second direction different from the first direction. Rotating the input component to provide feedback allows the computer system to assist navigation with respect to an input component used for the navigation, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0307] In some embodiments, providing the first feedback includes adding or reducing an amount of resistance to movement of the input component (e.g., as described above in relation to FIG. 11C) (e.g., the input component becomes harder (e.g., when adding the amount of resistance) or easier (e.g., when reducing the amount of resistance) to rotate and/or move). Adding or reducing an amount of resistance of the input component allows the computer system to assist navigation with respect to an input component used for the navigation, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0308] In some embodiments, while detecting the target location (606b, 706, 806, 1108b, and/or 1108a) in the physical environment and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is at a first location with respect (e.g., relative) to the target location, the computer system provides third feedback (e.g., visual, auditory, and/or haptic) with respect to (e.g., using, based on, via, by, and/or in proximity to) the input component (e.g., as described above in relation to FIG. 11C). In some embodiments, the third feedback does not change an orientation and/or position of the computer system. In some embodiments, the third feedback indicates, corresponds to, and/or is with respect to a new location with respect to the target location, the new location different from the first location and/or the second location. In some embodiments, the third feedback is provided internal to an enclosure corresponding to the computer system. In some embodiments, the third feedback is different from the first feedback and the second feedback. In some embodiments, the third feedback is the same as the first feedback or the second feedback. In some embodiments, while detecting the target location in the physical environment and in accordance with a determination that a fourth set of one or more criteria is satisfied, wherein the fourth set of one or more criteria includes a criterion that is satisfied when the computer system is at a second location with respect to the target location, the computer system provides fourth feedback (e.g., visual, auditory, and/or haptic) with respect to the input component, wherein the fourth set of one or more criteria is different from the third set of one or more criteria, wherein the second location is different from the first location, and wherein the fourth feedback is different from the third feedback (e.g., as described above in relation to FIG. 11C). In some embodiments, the fourth feedback is a different type of feedback than the third feedback. In some embodiments, the fourth feedback is the same type of feedback as the third feedback. In some embodiments, the fourth feedback does not change an orientation and/or position of the computer system. In some embodiments, the fourth feedback indicates, corresponds to, and/or is with respect to a new location with respect to the target location, the new location different from the second location and/or the first location. In some embodiments, the fourth feedback is provided internal to an enclosure corresponding to the computer system. In some embodiments, the fourth feedback is different from the first feedback and the second feedback. In some embodiments, the fourth feedback is the same as the first feedback or the second feedback. Providing different feedback depending on a location of the computer system with respect to the target location allows the computer system to guide and/or assist with navigating to the target location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0309] In some embodiments, while detecting the target location (606b, 706, 806, 1108b, and/or 1108a) in the physical environment and in accordance with a determination that a fifth set of one or more criteria is satisfied, wherein the fifth set of one or more criteria includes a criterion that is satisfied in accordance with detection of an object external to the computer system (e.g., 600 and/or 1100), the computer system provides fifth feedback (e.g., visual, auditory, and/or haptic) with respect to (e.g., using, based on, via, by, and/or in proximity to) the input component (e.g., as described above in relation to FIG. 11C). In some embodiments, the fifth feedback does not change an orientation and/or position of the computer system. In some embodiments, the fifth feedback indicates, corresponds to, and/or is with respect to a new location and/or a new orientation with respect to the target location. In some embodiments, the fifth feedback is provided internal to an enclosure corresponding to the computer system. In some embodiments, the fifth feedback is different from the first feedback, the second feedback, the third feedback, and/or the fourth feedback. In some embodiments, the fifth feedback is the same as the first feedback, the second feedback, the third feedback, and/or the fourth feedback. In some embodiments, while detecting the target location in the physical environment and in accordance with a determination that the fifth set of one or more criteria is not satisfied (e.g., in accordance with a determination that the object and/or no object is detected with respect to the target location), the computer system forgoes providing the fifth feedback with respect to the input component (e.g., as described above in relation to FIG. 11C). In some embodiments, in accordance with a determination that the fifth set of one or more criteria is not satisfied (e.g., in accordance with a determination that the object and/or no object is detected with respect to the target location), the computer system forgoes providing feedback (e.g., any feedback) with respect to the input component. Providing different feedback depending on whether an object external to the computer system is detected allows the computer system to guide and/or assist with navigating to the target location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0310] In some embodiments, while detecting the target location (606b, 706, 806, 1108b, and/or 1108a) in the physical environment and in accordance with a determination that a sixth set of one or more criteria is satisfied, wherein the sixth set of one or more criteria includes a criterion that is satisfied when the computer system (e.g., 600 and/or 1100) is a first distance from the target location, the computer system provides sixth feedback (e.g., visual, auditory, and/or haptic) with respect to (e.g., using, based on, via, by, and/or in proximity to) the input component (e.g., as described above in relation to FIG. 11C). In some embodiments, the sixth feedback does not change an orientation and/or position of the computer system. In some embodiments, the sixth feedback indicates, corresponds to, and/or is with respect to a new location and/or a new orientation with respect to the target location. In some embodiments, the sixth feedback is provided internal to an enclosure corresponding to the computer system. In some embodiments, the sixth feedback is different from the first feedback, the second feedback, the third feedback, the fourth feedback, and/or the fifth feedback. In some embodiments, the sixth feedback is the same as the first feedback, the second feedback, the third feedback, the fourth feedback, and/or the fifth feedback. In some embodiments, while detecting the target location in the physical environment and in accordance with a determination that a seventh set of one or more criteria is satisfied, wherein the seventh set of one or more criteria includes a criterion that is satisfied when the computer system is a second distance from the target location, the computer system provides seventh feedback (e.g., visual, auditory, and/or haptic) with respect to the input component (e.g., without providing the sixth feedback), wherein the seventh set of one or more criteria is different from the sixth set of one or more criteria, wherein the second distance is different from the first distance, and wherein the seventh feedback is different from the sixth feedback (e.g., as described above in relation to FIG. 11C). In some embodiments, the seventh feedback is a different type of feedback than the sixth feedback. In some embodiments, the seventh feedback is the same type of feedback as the sixth feedback. In some embodiments, the seventh feedback does not change an orientation and/or position of the computer system. In some embodiments, the seventh feedback indicates, corresponds to, and/or is with respect to a new location with respect to the target location. In some embodiments, the seventh feedback is provided internal to an enclosure corresponding to the computer system. In some embodiments, the seventh feedback is different from the first feedback, the second feedback, the third feedback, the fourth feedback, the fifth feedback, and/or the sixth feedback. In some embodiments, the seventh feedback is the same as the first feedback, the second feedback, the third feedback, the fourth feedback, the fifth feedback, and/or the sixth feedback. In some embodiments, in accordance with a determination that the sixth set of one or more criteria is satisfied, the computer system does not provide the seventh feedback. 
Providing different feedback depending on a distance of the computer system from the target location allows the computer system to guide and/or assist with navigating to the target location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input. [0311] In some embodiments, while detecting the target location (606b, 706, 806, 1108b, and/or 1108a) in the physical environment, the computer system performs a movement maneuver (e.g., as described above in relation to FIGS. 6A-6D, 7A, and/or 8A) with respect to the target location, wherein performing the movement maneuver includes: in accordance with a determination that a current portion (e.g., a previous operation, a current operation, and/or a next operation) of the movement maneuver is a first portion (and/or that one or more criteria is satisfied), providing eighth feedback (e.g., visual, auditory, and/or haptic) with respect to (e.g., using, based on, via, by, and/or in proximity to) the input component (e.g., as described above in relation to FIG. 11C) and in accordance with a determination that the current portion of the movement maneuver is a second portion different from the first portion (and/or that one or more criteria is satisfied), providing ninth feedback (e.g., visual, auditory, and/or haptic) with respect to the input component (e.g., without providing the eighth feedback), wherein the ninth feedback is different from the eighth feedback (e.g., as described above in relation to FIG. 11C). In some embodiments, the eighth feedback does not change an orientation and/or position of the computer system. In some embodiments, the eighth feedback indicates, corresponds to, and/or is with respect to a new location and/or a new orientation with respect to the target location. In some embodiments, the eighth feedback is provided internal to an enclosure corresponding to the computer system. In some embodiments, the eighth feedback is different from the first feedback, the second feedback, the third feedback, the fourth feedback, the fifth feedback, the sixth feedback, and/or the seventh feedback. In some embodiments, the eighth feedback is the same as the first feedback, the second feedback, the third feedback, the fourth feedback, the fifth feedback, the sixth feedback, and/or the seventh feedback. In some embodiments, the ninth feedback is a different type of feedback than the eighth feedback. In some embodiments, the ninth feedback is the same type of feedback as the eighth feedback. In some embodiments, the ninth feedback does not change an orientation and/or position of the computer system. In some embodiments, the ninth feedback indicates, corresponds to, and/or is with respect to a new location with respect to the target location. In some embodiments, the ninth feedback is provided internal to an enclosure corresponding to the computer system. In some embodiments, the ninth feedback is different from the first feedback, the second feedback, the third feedback, the fourth feedback, the fifth feedback, the sixth feedback, and/or the seventh feedback. In some embodiments, the ninth feedback is the same as the first feedback, the second feedback, the third feedback, the fourth feedback, the fifth feedback, the sixth feedback, the seventh feedback, and/or the eighth feedback. 
In some embodiments, in accordance with a determination that the current portion of the movement maneuver is the first portion, the computer system does not provide the ninth feedback. Providing different feedback depending on a current portion of a maneuver allows the computer system to guide and/or assist with navigating to the target location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0312] In some embodiments, the ninth feedback is a different type of feedback (e.g., from auditory to visual to haptic to physical rotation) than the eighth feedback (e.g., as described above in relation to FIG. 11C). Providing different types of feedback depending on a current portion of a maneuver allows the computer system to guide and/or assist with navigating to the target location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
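The criteria-dependent feedback variants of process 1200 (orientation, location, external object, and distance) amount to a dispatch over the current situation. The following sketch is illustrative only; the thresholds and the choice of feedback kind per branch are assumptions not stated in the disclosure.

```swift
// Hypothetical dispatch over the feedback criteria of process 1200.
enum Feedback {
    case haptic(intensity: Double)
    case audioTone
    case rotateInput(degrees: Double) // physically rotate the input component
    case resist(amount: Double)       // add resistance to its movement
}

func feedback(orientationErrorDegrees: Double,
              distanceMeters: Double,
              obstacleDetected: Bool) -> Feedback {
    if obstacleDetected {
        // e.g., resist steering toward a detected wall, tree, or stump.
        return .resist(amount: 1.0)
    } else if distanceMeters < 3.0 {
        // Near the target: rotate the input component toward the correction.
        return .rotateInput(degrees: -orientationErrorDegrees)
    } else if abs(orientationErrorDegrees) > 10.0 {
        return .audioTone
    } else {
        return .haptic(intensity: 0.2)
    }
}
```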
[0313] In some embodiments, providing the first feedback includes displaying a visual cue, providing an auditory cue, or (e.g., and/or) providing haptic feedback (e.g., as described above in relation to FIG. 11C). In some embodiments, a computer-generated path to the target location is generated based on data captured by a different computer system separate from the computer system.
[0314] Note that details of the processes described above with respect to process 1200 (e.g., FIG. 12) are also applicable in an analogous manner to other methods described herein. For example, process 900 optionally includes one or more of the characteristics of the various methods described above with reference to process 1200. For example, one or more movement components can be configured to be controlled in an automatic and/or manual manner using one or more techniques described above in relation to process 900, where feedback can be provided once the one or more components are configured using one or more techniques described above in relation to process 1200. For brevity, these details are not repeated below.
[0315] FIG. 13 is a flow diagram illustrating a method (e.g., process 1300) for redirecting a movable computer system in accordance with some embodiments. Some operations in process 1300 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted. [0316] As described below, process 1300 provides an intuitive way for redirecting a movable computer system. Process 1300 reduces the cognitive burden on a user for redirecting a movable computer system, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to redirect a movable computer system faster and more efficiently conserves power and increases the time between battery charges.
[0317] In some embodiments, process 1300 is performed at a computer system (e.g., 600 and/or 1100) (e.g., as described above with respect to process 900) in communication with an input component (e.g., a steering mechanism, a steering wheel, a steering yoke, an input device, a touch screen, a camera, and/or a physical hardware device). In some embodiments, the computer system is in communication with an output component (e.g., a touch screen, a speaker, and/or a display generation component). In some embodiments, the input component is configured to detect input, such as input corresponding to a user of the computer system. In some embodiments, the input component detects input within an at least partial enclosure of the computer system.
[0318] After detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location (e.g., 1108a and/or 1108b) (e.g., a target destination, a stopping location, a parking spot, a demarcated area, and/or a pre-defined area) (examples of the first input include a tap input and/or non-tap input (e.g., a verbal instruction, a hand motion, a swipe motion, and/or a gaze direction) on a location corresponding to the target location and/or a control corresponding to the target location) and while navigating (e.g., manually, via providing one or more instructions, and/or at least partially automatically via the computer system) to the first target location (e.g., and/or after performing one or more operations corresponding to navigating to the target location), the computer system detects (1302) (e.g., via one or more sensors in communication with the computer system and/or via receiving a message from another computer system different from the computer system) an error (e.g., (1) an instruction of the one or more instructions not followed, (2) a difficulty and/or impossibility with respect to a current location (e.g., the target location has been blocked, the target location is no longer in the path of the computer system, and/or the target location does not currently satisfy one or more criteria (e.g., is no longer desirable and/or is no longer convenient)) and navigating to the target location according to a previously determined path, and/or (3) a statement and/or request made by a user of the computer system and/or detected via the one or more sensors) with respect to navigating to the first target location (e.g., as described above in relation to FIGS. 11B and 11C). In some embodiments, a sensor of the one or more sensors includes a camera, a gyroscope, and/or a depth sensor. In some embodiments, the error is detected after detecting, via the input component, a first set of one or more inputs corresponding to selection of the first target location (examples of the first input include a tap input and/or non-tap input (e.g., a verbal instruction, a hand motion, a swipe motion, and/or a gaze direction) on a location corresponding to the target location and/or a control corresponding to the target location).
[0319] In response to detecting the error, the computer system initiates (1304) a process to select a respective target location (e.g., as described above in relation to FIG. 11C) (e.g., 1108a and/or 1108b) (e.g., maintain the first target location or change to a second target location different from the first target location). In some embodiments, initiating the process to select the respective target location includes providing (e.g., displaying and/or outputting (e.g., auditorily and/or visually)), via the output component, a control (e.g., a user-interface element that, when selected, performs an operation). In some embodiments, the control is displayed on top of (e.g., at least partially overlays) a user interface displayed when the error is detected. In some embodiments, the control is displayed with and/or instead of a user interface displayed when the error is detected. In some embodiments, a user interface, displayed when the error is detected, is visually changed to include display of the control. In some embodiments, after providing (e.g., when the providing is verbal) (and, in some embodiments, while providing (e.g., when the providing is verbal and/or visual)) (e.g., within a predefined time of providing) the control, the computer system detects, via the input component, a second set of one or more inputs (e.g., a tap input and/or non-tap input (e.g., a verbal instruction, a hand motion, a swipe motion, a gaze direction, and/or any combination thereof)) corresponding to the control (e.g., selection of the control). In some embodiments, in response to detecting the second set of one or more inputs: in accordance with a determination that the control corresponds to maintaining the first target location, the computer system initiates a process to maintain the first target location (e.g., updating and/or providing one or more new instructions) (e.g., changing a path to the target location) (e.g., providing one or more new options for navigating to the target location) (e.g., providing a control to confirm that the target location should be maintained); and in accordance with a determination that the control corresponds to changing the first target location, the computer system initiates a process to change the first target location. In some embodiments, a single control is displayed that, when selected at different portions, either initiates a process to maintain the first target location or initiates a process to change the first target location. In some embodiments, a first control is configured to initiate a process to maintain the first target location, and a second control different from the first control is configured to initiate a process to change the first target location. In some embodiments, the control corresponds to a new target location. In some embodiments, the process to change the first target location includes displaying a user interface including one or more representations of different target locations. In some embodiments, the process to change the first target location includes displaying a user interface including a confirmation element to confirm a new target location. 
Initiating a process to select a respective target location in response to detecting an error with respect to navigating to the first target location allows the computer system to provide options to react to the error and, in some embodiments, navigate to a different location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
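As a purely illustrative sketch (not part of the disclosure), the error-resolution flow of paragraphs [0318]-[0319] could be modeled as follows in Swift; the type names, the two-option resolution model, and the path-invalidation flag are assumptions introduced here.

```swift
// Hypothetical two-option resolution model for a detected navigation error.
enum TargetResolution {
    case maintainTarget // keep the first target location and compute a new path
    case changeTarget   // switch to a different target location
}

struct NavigationSession {
    var targetLocation: String
    var pathIsValid: Bool
}

// Applies the user's selection of one of the two provided controls.
func resolveDetectedError(in session: inout NavigationSession,
                          choice: TargetResolution,
                          alternativeTarget: String) {
    switch choice {
    case .maintainTarget:
        // Keep the first target location; invalidate the current path so
        // new instructions are computed for the same target.
        session.pathIsValid = false
    case .changeTarget:
        session.targetLocation = alternativeTarget
        session.pathIsValid = false
    }
}
```

In this sketch, either selection invalidates the current path, reflecting that both maintaining and changing the target location lead to new navigation instructions being provided.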
[0320] In some embodiments, the process to select a respective target location (e.g., 1108a and/or 1108b) includes: providing (e.g., displaying and/or outputting audio) a first control (e.g., 1118) to maintain the first target location and providing (e.g., concurrently with or separate from providing the first control) a second control (e.g., 1120) to select a new target location different from the first target location. In some embodiments, the second control is different from the first control. Providing two separate controls to select different target locations in response to detecting an error with respect to navigating to the first target location allows the computer system to provide options to react to the error and, in some embodiments, navigate to a different location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0321] In some embodiments, the computer system (e.g., 600 and/or 1100) is in communication with a display generation component. In some embodiments, providing the second control (e.g., 1120) includes displaying, via the display generation component, an indication corresponding to the new target location (e.g., 1108a and/or 1108b) (e.g., as described above in relation to FIG. 11C) (e.g., a representation of the new target location relative to the first target location) (e.g., an outline and/or other visual indication at a location corresponding to the new target location). Displaying an indication corresponding to the new target location when providing two separate controls to select different target locations in response to detecting an error with respect to navigating to the first target location allows the computer system to provide options to react to the error and, in some embodiments, navigate to a different location, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0322] In some embodiments, the computer system (e.g., 600 and/or 1100) is in communication with a movement component (e.g., as described above with respect to process 900). In some embodiments, navigating to the first target location (e.g., 1108a and/or 1108b) includes automatically causing, by the computer system, the movement component to change operation (e.g., as described above in relation to FIG. 11D) (e.g., change to a new direction, orientation, location, speed, and/or acceleration). In some embodiments, navigating to the first target location is performed in an at least partially automatic and/or autonomous manner. In some embodiments, navigating to the first target location is performed in a partially assisted manner (e.g., a first part of navigating is performed in a manual manner and a second part of navigating is performed in an automatic manner) (e.g., a first movement component is controlled in an automatic manner while a second movement component is controlled in a manual manner). Automatically causing the movement component to change operation when navigating to the first target location allows the computer system to assist in navigation, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0323] In some embodiments, navigating to the first target location (e.g., 1108a and/or 1108b) is manual (e.g., navigating to the first target location is fully controlled by a user) (e.g., a direction of navigating to the first target location is fully controlled by a user) (e.g., from the perspective of a user causing the computer system to turn and/or move) (e.g., fully manual and/or without substantial automatic steering). In some embodiments, the computer system is in communication with one or more output components (e.g., a display generation component and/or a speaker). In some embodiments, navigating to the first target location consists of outputting, via the one or more output components, content (e.g., does not include automatically modifying an angle and/or orientation of one or more movement components (as described above)). In some embodiments, the computer system is in communication with a movement component (e.g., as described above with respect to process 900). In some embodiments, navigating to the first target location does not include the computer system causing the movement component to be automatically modified. In some embodiments, navigating to the first target location includes outputting, via the one or more output components, an indication of a next maneuver to navigate to the target location.
[0324] In some embodiments, detecting the error includes detecting that the computer system (e.g., 600 and/or 1100) is at least a predefined distance from the first target location (e.g., 1108a and/or 1108b) (e.g., as described above in relation to FIG. 11C). In some embodiments, the error is not detected in accordance with a determination that the computer system is within the predefined distance from the first target location. Detecting the error including detecting that the computer system is at least a predefined distance from the first target location allows the computer system to recognize when the computer system has missed and/or passed the first target location and provide a way to fix the error, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0325] In some embodiments, detecting the error includes detecting that a current orientation of the computer system (e.g., 600 and/or 1100) is a first orientation (e.g., an orientation that is not able to be corrected by the computer system using a current path to the first target location) with respect to the first target location (e.g., 1108a and/or 1108b) (e.g., as described above in relation to FIG. 11C). In some embodiments, the error is not detected in accordance with a determination that the computer system is in a second orientation with respect to the first target location, where the second orientation is different from the first orientation. Detecting the error including detecting that a current orientation of the computer system is a first orientation with respect to the first target location allows the computer system to recognize when the computer system is in an orientation not able to be corrected with a current path and provide a way to fix the error, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
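The distance and orientation criteria of paragraphs [0324]-[0325] might be combined as in the following hedged sketch; the concrete thresholds, the pose representation, and the bearing computation are assumptions for illustration only.

```swift
import Foundation

// Hypothetical thresholds; the disclosure does not specify concrete values.
let maxDistanceFromTarget = 30.0      // meters
let maxCorrectableHeadingError = 45.0 // degrees

struct Pose {
    var x: Double, y: Double // position in meters
    var heading: Double      // degrees, 0 = toward +y
}

// Returns true when either illustrative error criterion is met: the system is
// at least a predefined distance from the target, or its orientation deviates
// from the bearing to the target by more than can be corrected on the path.
func navigationErrorDetected(pose: Pose, target: (x: Double, y: Double)) -> Bool {
    let dx = target.x - pose.x
    let dy = target.y - pose.y
    let distance = (dx * dx + dy * dy).squareRoot()
    if distance >= maxDistanceFromTarget { return true }

    // Compass-style bearing from the current position to the target.
    let bearing = atan2(dx, dy) * 180 / .pi
    var headingError = abs(bearing - pose.heading).truncatingRemainder(dividingBy: 360)
    if headingError > 180 { headingError = 360 - headingError }
    return headingError > maxCorrectableHeadingError
}
```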
[0326] In some embodiments, the computer system (e.g., 600 and/or 1100) is in communication with an output component. In some embodiments, after initiating the process to select a respective target location (e.g., 1108a and/or 1108b) (e.g., as part of the process to select a respective target location), the computer system provides, via the output component, a third control (e.g., 1116) to select a new target location different from the first target location, wherein the new target location is the same type of location as the first target location (e.g., as described above in FIGS. 11C-11D) (e.g., the first target location and the new target location are both parking spots with lines defining a respective parking spot). In some embodiments, while providing the control to select a new target location, the computer system does not provide a control to select a new target location that is a different type of location than the first target location. Providing a control to select a new target location that is the same type as the first target location allows the computer system to intelligently provide alternatives, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0327] In some embodiments, the computer system (e.g., 600 and/or 1100) is in communication with a second display generation component. In some embodiments, after initiating the process to select a respective target location (e.g., 1108a and/or 1108b) (e.g., as part of the process to select a respective target location), the computer system displays, via the second display generation component, a fourth control (e.g., 1116) to select the respective target location (e.g., as described above at FIG. 11C).
[0328] In some embodiments, while displaying the fourth control to select the respective target location (e.g., 1108a and/or 1108b), the computer system detects, via a second input component in communication with the computer system (e.g., 600 and/or 1100), a verbal input corresponding to selection of the fourth control (e.g., as described above in relation to FIG. 11C). In some embodiments, in response to detecting the verbal input corresponding to selection of the fourth control, the computer system initiates a process to navigate to the respective target location (e.g., as described above in relation to FIG. 11D). Allowing verbal input to select a visual control allows the computer system to provide different ways to provide input, particularly when some ways, in some embodiments, may be harder to provide (e.g., hands might be occupied) than others, thereby reducing the number of inputs needed to perform an operation and/or performing an operation when a set of conditions has been met without requiring further user input.
[0329] In some embodiments, the computer system (e.g., 600 and/or 1100) is in communication with an audio generation component. In some embodiments, after initiating the process to select a respective target location (e.g., 1108a and/or 1108b) (e.g., as part of the process to select a respective target location), the computer system outputs, via the audio generation component, an auditory indication of a fifth control to select the respective target location (e.g., as described above at FIG. 11C). Outputting an auditory indication of a control to select the respective target location allows the computer system to provide different ways to provide output, particularly when some ways, in some embodiments, may be harder to receive (e.g., gaze might be occupied such that seeing what is displayed may be harder) than others, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0330] In some embodiments, the computer system (e.g., 600 and/or 1100) is in communication with an output component and a second input component. In some embodiments, after initiating the process to select a respective target location (e.g., 1108a and/or 1108b) (e.g., as part of the process to select a respective target location), the computer system detects, via the second input component, an input corresponding to selection of a sixth control (e.g., 1118) to maintain the first target location (e.g., 1108a and/or 1108b). In some embodiments, in response to detecting the input corresponding to the selection of the sixth control (e.g., 1118) to maintain the first target location, the computer system outputs, via the output component, an indication of a new path to the first target location (e.g., as described above in relation to FIGS. 11C and 11D). In some embodiments, before outputting the indication of the new path to the first target location (and/or while navigating to the first target location), the computer system outputs, via the output component, an indication of a path to the first target location, where the path is different from the new path. Outputting an indication of a new path to the first target location in response to detecting the input corresponding to the selection of the control to maintain the first target location allows the computer system to correct an error and provide instruction to a user for how to correct the error, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0331] In some embodiments, the output component includes a display generation component. In some embodiments, outputting, via the output component, the indication of the new path to the first target location (e.g., 1108a and/or 1108b) includes displaying, via the display generation component, the indication of the new path to the first target location (e.g., as described above in relation to FIGS. 11C and 11D).
[0332] In some embodiments, the computer system (e.g., 600 and/or 1100) is in communication with a second input component. In some embodiments, after initiating the process to select a respective target location (e.g., 1108a and/or 1108b) (e.g., as part of the process to select a respective target location), the computer system detects, via the second input component, an input (e.g., 1105c) corresponding to selection of a control (e.g., 1120) to change the first target location to a second target location different from the first target location. In some embodiments, in response to detecting the input corresponding to the selection of the control to change the first target location to the second target location, the computer system navigates at least partially automatically to the second target location (e.g., as described above in relation to FIG. 11D). Navigating at least partially automatically to the second target location in response to detecting the input corresponding to the selection of the control to change the first target location to the second target location allows the computer system to assist with navigation when an error is detected, thereby providing improved feedback, reducing the number of inputs needed to perform an operation, and/or performing an operation when a set of conditions has been met without requiring further user input. In some embodiments, a computer-generated path to the target location is generated based on data captured by a different computer system separate from the computer system.
[0333] Note that details of the processes described above with respect to process 1300 (e.g., FIG. 13) are also applicable in an analogous manner to the methods described herein. For example, process 900 optionally includes one or more of the characteristics of the various methods described above with reference to process 1300. For example, one or more movement components can be configured to be controlled in an automatic and/or manual manner using one or more techniques described above in relation to process 900 based on the detection of an error using one or more techniques described above in relation to process 1300. For brevity, these details are not repeated below.

[0334] FIGS. 14A-14H illustrate exemplary user interfaces for interacting with different map data, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 15 and 16. Throughout the user interfaces, user input is illustrated using a circular shape with dotted lines (e.g., user input 1421 in FIG. 14B). It should be recognized that the user input can be any type of user input, including a tap on a touch-sensitive screen, a button press, a gaze toward a control, a voice request with an identification of a control, a gesture made by a user and captured by a camera, and/or any other affirmative action performed by a user. In some examples, a single representation of a user input in a figure (1) includes one or more different types of user input and/or (2) represents different types of user input to result in different operations. For example, a single illustrated user input can be a tap input, a tap-and-hold input, and/or a swipe gesture.
[0335] FIG. 14A illustrates navigation user interface 1410 for interacting with different map data. Computer system 1400 displays navigation user interface 1410 on touchscreen display 1402. In some embodiments, the device being navigated is the device that displays navigation user interface 1410 (e.g., computer system 1400). In some embodiments, the device being navigated is a device other than the device that displays navigation user interface 1410. For example, the device being navigated is in communication with the device that displays navigation user interface 1410.
[0336] Navigation user interface 1410 includes navigation instruction 1410a, map 1410b, and arrival information 1410c. Navigation instruction 1410a indicates a current instruction to a user of navigation user interface 1410. In FIG. 14A, navigation instruction 1410a indicates the instruction textually (e.g., “Turn Right”) and visually (e.g., right turn arrow graphic). Other examples of navigation instructions include “turn left”, “proceed straight”, “continue for 3 kilometers”, and/or “turn around.” Map 1410b includes a visual representation of a geographic location (e.g., the location surrounding the device being navigated) (e.g., computer generated graphic and/or an image captured by one or more cameras). It should be recognized that navigation user interface 1410 can include different, less, and/or more user interface elements than illustrated in FIG. 14A.
[0337] In some embodiments, a map (e.g., 1410b) is generated based on one or more pieces of map data. Such map data can describe one or more features of the map, such as the location of roadways, paths, trails, and/or rail lines, terrain/topology data, traffic data and/or other conditions data, building data, and/or graphic elements for displaying the map. Map data can also include data from one or more on-device sensors (e.g., that are part of the device being navigated and/or part of the device displaying navigation user interface 1410) and/or one or more external sensors (e.g., a stationary camera that transmits its data to the device being navigated when they are within a threshold proximity). In some examples, the sensor data is measured and transmitted in real-time or near real-time as the device being navigated approaches or is physically present/near the measured area.
[0338] As will be appreciated by one of ordinary skill in the art, there are many types and sources of data that can be input into a process for determining a navigation route. These different pieces of data can be used in different ways and/or at different times during the process of determining a navigation route. For example, if map data is available from a verified and/or trusted source (e.g., verified by a first-party developer of the navigation application), navigation along a route indicated by the trusted source can be weighed more heavily by the process (e.g., and thus be preferred and/or be more likely to be selected) in making a routing decision as compared to a similar route from an untrusted source. As another example, map data from a trusted source can be used to determine an initial route, but during navigation along that route received sensor data can indicate that the route is impassable (e.g., a path is closed, not safe, and/or no longer exists). The process for determining navigation can take into account the sensor data to override and/or aid the route derived or received from the trusted data source and, for example, select a different route (e.g., perhaps from an unverified data source, depending on the available options).
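One hypothetical way to realize the weighting described in this paragraph is sketched below; the scoring model, the 1.5x weight for verified sources, and the hard sensor veto are illustrative assumptions, not details from the disclosure.

```swift
// Hypothetical scoring model: routes from verified sources are weighted more
// heavily, and live sensor data can veto a route entirely.
struct CandidateRoute {
    var id: String
    var fromVerifiedSource: Bool
    var baseScore: Double          // e.g., shorter/faster routes score higher
    var blockedPerSensorData: Bool // e.g., obstruction detected on the path
}

private func weighted(_ route: CandidateRoute) -> Double {
    // The 1.5x weight for verified sources is illustrative only.
    route.baseScore * (route.fromVerifiedSource ? 1.5 : 1.0)
}

func selectRoute(from candidates: [CandidateRoute]) -> CandidateRoute? {
    candidates
        .filter { !$0.blockedPerSensorData } // sensor data overrides source trust
        .max { weighted($0) < weighted($1) } // highest weighted score wins
}
```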
[0339] In some embodiments, map data has (e.g., is associated with) a state. In the examples that follow, this disclosure will refer to map data as having an associated “state”. This state can, for example, be a function of (e.g., determined in whole or in part by) the type(s) and/or source(s) of data that make up the map data. For example, data that is from a verified source can be considered as having a different state than data from an unverified source. Similarly, two pieces of data from a verified source can have different states, where a first of such pieces of data is in conflict with sensor data (e.g., obstruction detected on the path) and a second of such pieces of data is not in conflict with the sensor data (e.g., path is clear). Thus, whether map data is of a particular state can be based on one or more criteria. In some examples, the term “state” refers to a classification or identification of map data that satisfies a set of one or more criteria (e.g., classified by the device being navigated, the device displaying navigation user interface 1410, and/or a server in communication with either or both of such devices). How such states are defined (e.g., which set of one or more criteria is used to delineate states) can be different based on the intended use of the map data (e.g., the type of decision being made based on the state). For example, states that represent how recently associated data was updated (e.g., how fresh the data is) can be considered by a certain subprocess or decision within a navigation routing process (e.g., in an urban area where traffic level can be highly dynamic), yet not be considered by another subprocess or decision within the navigation routing process (e.g., determining whether the pathway is physically passable (e.g., paved or not) based on the type of navigation (e.g., via car, via bike, and/or on foot)). In some examples, map data “state” is referred to as a “level,” “category,” or other appropriate phrase that can be recognized by one of ordinary skill in the art.
[0340] The examples depicted in FIGS. 14A-14H involve user interfaces associated with one of four example states. The four example states are distinct states based on two criteria: (1) whether or not sufficient map data can be retrieved from a storage resource (e.g., memory of computer system 1400 and/or a server), and (2) whether or not the navigation application (and/or a device or server in communication with the navigation application) can determine a recommended path based on the available map data (e.g., from any source). For criterion (1), retrieved map data can be considered “sufficient” if it is verified and/or trusted (e.g., comes from a verified source, such as the developer of the navigation application, and/or a source trusted by the navigation application (e.g., an owner of the premises represented by the map data)), and can be considered “insufficient” if no (or not enough) map data can be retrieved, if the retrieved map data is not verified and/or trusted (e.g., lacks a trust and/or verification credential associated with a verified and/or trusted source), if the retrieved map data does not include enough information for determining a recommended path (e.g., on its own), and/or based on any other appropriate criterion for delineating whether sufficient data can be retrieved from a data source. For criterion (2), whether or not the navigation application can determine a recommended path based on the available map data (e.g., from any source) can be based on whether map data can be derived (e.g., collected and/or created) from one or more sources of data (e.g., other than the storage resource) (e.g., one or more sensors and/or one or more unverified and/or untrusted sources) that is sufficient for determining (e.g., by the navigation application) a recommended path. In some examples, deriving map data includes creating map data. For example, creating map data can include creating a new map when map data does not exist and/or adding information to an existing map when map data is insufficient, incomplete, and/or incorrect (e.g., outdated). In some examples, deriving map data includes creating map data with objects, paths, and/or other aspects of a physical environment that are not defined and/or specified in the available map data. For example, sufficient map data may not be available from the storage resource (e.g., criterion (1) is not satisfied); however, the navigation application can derive map data from sources such as on-device cameras and/or other sensors. In some examples, the derived map data is sufficient (e.g., for the navigation application and/or a process and/or device in communication therewith) to determine a recommended path. For example, deriving map data and determining a path based on the derived map data stands in contrast to the device simply receiving map data and then positioning itself within the received map data (e.g., using GPS data). Whether a navigation application (and/or associated process) can determine a recommended path can be affected by several factors, including the external environment and the specific process used to determine a recommended path (e.g., depending on the parameters of such process). For example, a navigation application can require that a path determined by its navigation path determination processes have an associated confidence value above a certain threshold before recommending the route to a user (e.g., as depicted in FIG. 14F using navigation user interface 1410).
If enough map data is collected to determine a possible path, but such possible path does not have the requisite confidence value, the possible path would not be recommended and thus the second criterion would indicate that the navigation application cannot determine a recommended path. In summary, for a set of states based on criteria (1) and (2) above, map data can have one of at least four possible states: a first state {sufficient map data from storage resource; recommended path can be determined based on collected map data}, a second state {sufficient map data from storage resource; no recommended path can be determined based on collected map data}, a third state {insufficient map data from storage resource; recommended path can be determined based on collected map data}, and a fourth state {insufficient map data from storage resource; no recommended path can be determined based on collected map data}. More, fewer, and/or different criteria can be used to determine a map data state. In making one or more decisions (e.g., regarding whether to proceed with or without prompting for user input), a navigation application can use all, some, or none of the possible states. For example, the second state may never (or rarely) logically occur because if sufficient map data is retrieved from a storage resource, then a recommended path should be determinable. In some embodiments, computer system 1400 receives data from one or more other computer systems of the same, similar, and/or different type as computer system 1400. For example, another computer system can be navigating an environment using one or more sensors of the other computer system. The other computer system can detect and/or derive information corresponding to the environment using data detected by the one or more sensors. Computer system 1400 can receive the information either directly from the other computer system and/or through another device, such as a server. Such information can be detected near in time and/or location to where computer system 1400 is navigating.
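For illustration only, the four example states and the prompting behavior they drive in FIGS. 14A-14H could be encoded as in the following sketch; the type names and the requiresUserInput policy are assumptions layered on the criteria described above.

```swift
// A minimal encoding of the four example states from paragraph [0340],
// derived from the two criteria described there.
enum MapDataState {
    case first   // sufficient stored map data; recommended path determinable
    case second  // sufficient stored map data; no recommended path (rare)
    case third   // insufficient stored map data; recommended path determinable
    case fourth  // insufficient stored map data; no recommended path

    init(sufficientStoredData: Bool, pathDeterminable: Bool) {
        switch (sufficientStoredData, pathDeterminable) {
        case (true, true):   self = .first
        case (true, false):  self = .second
        case (false, true):  self = .third
        case (false, false): self = .fourth
        }
    }

    // Mirrors the example behavior: proceed silently in the first state,
    // confirm a recommended path in the third, and require an ad-hoc
    // user-defined path in the fourth. Treating the (rare) second state
    // as requiring input is an assumption made here.
    var requiresUserInput: Bool {
        self != .first
    }
}
```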
[0341] Referring to FIG. 14A again, map 1410b includes indicator 1412 representing the current position of the device being navigated (e.g., computer system 1400 in this example). Map 1410b also includes navigation path 1414a representing the upcoming portion of the navigation (e.g., as determined and suggested by the navigation application). Map 1410b also includes example navigation path 1414b representing a previously travelled portion of the navigation. Navigation path 1414b can have a visual appearance that indicates that a path was traveled, or simply appear with the default visual appearance of the underlying path (e.g., as if no navigation is programmed). In FIG. 14A, navigation path 1414a is based on map data associated with the first state {sufficient map data from storage resource; recommended path can be determined based on collected map data} and has a visual appearance associated with the first state. In this example, navigation path 1414a has solid line borders. As illustrated in FIG. 14A, the navigation application instructs (e.g., textually by 1410a and graphically by 1414a) a user to turn right at the next juncture.
[0342] FIG. 14B illustrates navigation user interface 1410 as it appears at a time after the scenario in FIG. 14A, but while the same navigation session (e.g., still navigating to the same destination) is continued. In FIG. 14B, navigation instruction 1410a is updated to display “Proceed Straight,” map 1410b is updated to depict a current surrounding geographic area, and arrival information 1410c remains unchanged. Navigation user interface 1410 in FIG. 14B also includes path confirmation user interface 1420. In some embodiments, a path confirmation user interface (e.g., 1420) includes a map area (e.g., 1420a) that includes a recommended navigation path (e.g., 1414a) for upcoming navigation. In some embodiments, the path confirmation user interface also includes a message area (e.g., 1420b) indicating (e.g., prompting) that user input is required to continue navigation, a selectable icon (e.g., 1420c) for confirming the recommended path, and a selectable icon (e.g., 1420d) for declining the recommended path. In the example of FIG. 14B, the map data meets criteria for the third state described above {insufficient map data from storage resource; recommended path can be determined based on collected map data}. In this example, the third state criteria are met because the navigation application does not receive sufficient data from a verified source but is able to collect enough map data from an unverified source and a plurality of sensors on computer system 1400 in order to recommend a navigation path. The collected map data can be used as the basis to recommend a path as illustrated by navigation path 1414a in FIG. 14B (e.g., a recommended turn to the left at the next juncture). However, because the navigation recommendation is not entirely based on data from the verified source, the navigation application is configured to prompt for user input confirmation by displaying path confirmation user interface 1420. Prompting a user (e.g., instead of proceeding automatically) can be preferable because the confidence of a navigation recommendation based on map data from a storage resource (e.g., a verified source) can generally be (or always be) higher than if it comes from an alternative source (e.g., an unverified source), and the prompt serves to attain user consent to proceed with navigation even though confidence may be lower and/or to indicate to the user that navigation is occurring in an area of lower confidence data (e.g., requiring more user attention and/or intervention). In FIG. 14B, computer system 1400 receives user input 1421 (e.g., a tap gesture) on icon 1420c for confirming the recommended path indicated by navigation path 1414a.
[0343] In some embodiments, map data collected from a source other than the storage resource includes map data received from and/or based on crowdsourced data. In some embodiments, the crowdsourced data includes and/or is based on one or more previous navigation routes (e.g., one or more navigation routes successfully traversed by one or more other devices).
[0344] FIG. 14C illustrates navigation user interface 1410 for interacting with different map data in response to computer system 1400 receiving user input 1421 in FIG. 14B. As shown in FIG. 14C, navigation user interface 1410 now includes updated navigation instruction 1410a (e.g., instructing the user to turn left at the next juncture, matching the confirmed recommended navigation path from FIG. 14B). In some embodiments, a navigation path (e.g., 1414a) maintains a visual appearance associated with the state of the map data prior to confirmation of the recommended path. For example, in FIG. 14C navigation path 1414a maintains the visual appearance of having dotted line borders as it appeared in FIG. 14B. This can inform a user that this portion of navigation involves map data associated with the third state (e.g., and thus lower confidence map data).

[0345] FIG. 14D illustrates navigation user interface 1410 after the device being navigated performs the left turn instructed in FIG. 14C. In this example, computer system 1400 continually updates the displayed map area to display the real-time location of the device being navigated relative to the map (e.g., represented by indicator 1412 within the map area). This can be performed using location data such as global positioning system (GPS) data. In some embodiments, a navigation path maintains a visual appearance associated with the state of the map data prior to confirmation of the recommended path even after the associated area is traversed. For example, in FIG. 14D navigation path 1414b maintains the visual appearance of having a dotted line border as it had prior to the corresponding portion of the map area having been traversed (e.g., indicator 1412 in FIG. 14C traversed along the navigation path and into the dotted line region, and so in FIG. 14D the navigation path 1414b already traversed keeps the dotted line appearance). Note that even though navigation paths 1414a and 1414b in FIG. 14D both have dotted line borders, they are not necessarily identical. In this example, navigation path 1414a includes shading to indicate the upcoming navigation route, but navigation path 1414b does not include the shading. Navigation path 1414a also keeps the visual appearance associated with the third state. In some embodiments, after traversal of the corresponding map area, a navigation path changes in a manner such that it matches the visual appearance of one or more other states. For example, navigation path 1414b in FIG. 14D could instead have solid line borders (as in FIG. 14C), which matches the appearance of traversed paths associated with map data having the first state (e.g., all traversed paths can be indicated the same visually, such as with a solid border line).
[0346] FIG. 14E illustrates navigation user interface 1410 as displayed in response to the navigation application reaching a point where no recommended path can be determined for the device being navigated, and is displayed after the device being navigated continues proceeding forward as instructed in FIG. 14D. For example, the map data for this area can be associated with the fourth state described above {insufficient map data from storage resource; no recommended path can be determined based on collected map data}. In some embodiments, in response to determining that map data is associated with a certain state (e.g., the fourth state), the device (e.g., computer system 1400) requires user input of a navigation path. For example, navigation instruction 1410a in navigation user interface 1410 of FIG. 14E includes a prompt for a user to input a navigation path (asking “How to proceed?”). Additionally, navigation path 1414a is displayed with a visual appearance indicating that user input is required (e.g., displayed as an incomplete segment). In FIG. 14E, computer system 1400 receives user input 1423 (e.g., a swipe gesture to the left) on map 1410b, representing a command to the navigation application for navigation to proceed to the left (e.g., make a left turn).
[0347] FIG. 14F illustrates navigation user interface 1410 as it appears in response to computer system 1400 receiving user input 1423 in FIG. 14E. In FIG. 14F, navigation user interface 1410 also includes invalid path user interface 1430. In some embodiments, an invalid path user interface (e.g., 1430) includes one or more of an indication that a navigation path created or requested based on user input (e.g., 1423) is invalid (e.g., not possible, not safe, obstructed, and/or the like), an option to retry user input (e.g., icon 1430b), and/or an option to end navigation (e.g., icon 1430c). For example, subsequent to receiving user input 1423, computer system 1400 determines (e.g., based on sensor data) that a left turn is not safe. In FIG. 14F, computer system 1400 receives user input 1431 (e.g., a tap gesture) on icon 1430b for retrying user input of a navigation path. In some embodiments, receiving user input representing selection of an option to end navigation (e.g., user input selection of icon 1430c) causes one or more of the following actions: a navigation session ends (e.g., the current trip is ended), a device being navigated stops (e.g., if the device being navigated can receive and act upon relevant instructions), and/or a device being navigated backs up (e.g., and returns to another location) (e.g., if the device being navigated can receive and act upon relevant instructions).
[0348] FIG. 14G illustrates exemplary navigation user interface 1410 (returned to the same scenario as described in FIG. 14E) as displayed in response to computer system 1400 receiving user input 1431. In some embodiments, user input defining a path can include one or more valid gesture types. For example, a valid gesture can be a continuous gesture such as a swipe (as shown in FIG. 14E) for indicating a location and/or direction associated with a desired navigation maneuver (e.g., which may nonetheless define an invalid path as determined by sensor data). As another example, an additional or alternative valid gesture can be a non-continuous gesture such as a series of inputs defining points along a desired navigation path as shown in FIG. 14G. The navigation application can interpolate between these points to determine the desired navigation path. In FIG. 14G, computer system 1400 receives user input 1433 and then user input 1435 (e.g., both being a tap gesture) on map 1410b, collectively representing a command for navigation to proceed forward to the location of user input 1433 and then proceed to the right to the location of user input 1435 (e.g., resulting in a right turn). In some embodiments, user input defining and/or confirming a navigation path includes voice input. For example, at navigation user interface 1410 in FIG. 14B, voice input (“Yes”) can cause the same result as user input 1421, and/or at navigation user interface 1410 in FIG. 14G, voice input (“turn right”) can cause the same result as user input 1433 and user input 1435.
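A minimal sketch of interpolating between non-continuous inputs (as in FIG. 14G) follows; the point type, coordinate units, and fixed step size are assumptions, and a real implementation would likely also validate the result against sensor data as described with respect to FIG. 14F.

```swift
struct MapPoint { var x: Double; var y: Double }

// Turns a series of tap locations into a polyline via linear interpolation.
// The step size (in map units) is an illustrative assumption.
func interpolatePath(through taps: [MapPoint], step: Double = 1.0) -> [MapPoint] {
    guard taps.count > 1 else { return taps }
    var path: [MapPoint] = [taps[0]]
    for (a, b) in zip(taps, taps.dropFirst()) {
        let dx = b.x - a.x, dy = b.y - a.y
        let length = (dx * dx + dy * dy).squareRoot()
        let samples = max(1, Int(length / step))
        for i in 1...samples {
            let t = Double(i) / Double(samples)
            path.append(MapPoint(x: a.x + dx * t, y: a.y + dy * t))
        }
    }
    return path
}
```

For example, the two taps of FIG. 14G (forward, then right) would yield a polyline that proceeds to the first tap location and then turns toward the second.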
[0349] In some embodiments, user input defining a path can include one or more user inputs corresponding to selection on a representation of the intended traversal area (e.g., area in front of the device being navigated). For example, at FIGS. 14E and 14G, map 1410b can be computer generated graphics and/or include a camera view of what the intended traversal area looks like (e.g., from one or more cameras attached to the device being navigated).
[0350] FIG. 14H illustrates exemplary navigation user interface 1410 as it appears in response to computer system 1400 receiving user input 1433 and user input 1435 in FIG. 14G. As illustrated in FIG. 14H, navigation user interface 1410 includes updated navigation instruction 1410a (which now instructs “Turn Right”) and navigation path 1414a in the shape of the path defined by user input 1433 and user input 1435. At FIG. 14H, navigation path 1414a is based on map data associated with the fourth state {insufficient map data from storage resource; no recommended path can be determined based on collected map data without user input} and has a visual appearance associated with the fourth state (e.g., appears as a single, solid line). The visual appearance of navigation path 1414a can indicate that this portion of the navigation is user defined. As navigation proceeds through the user defined portion, navigation paths 1414a and 1414b can behave as described above with respect to the other visual appearances (e.g., navigation path 1414b can be a single, solid line indicating a user defined path has been traversed, or can change to a thicker line having solid borders as shown in FIG. 14C).
[0351] In summary, the examples described with respect to FIGS. 14A-14H illustrate three distinct scenarios that each correspond to a different map data state. In FIG. 14A, map data associated with the first state described above does not require user input intervention. In FIG. 14B, map data associated with the third state described above results in the navigation application being able to infer a recommended navigation path, which is presented at a user interface that requires user input intervention to confirm. In FIGS. 14E and 14G, map data associated with the fourth state described above results in the navigation application not being able to infer a recommended navigation path and instead requires ad-hoc user input intervention to determine a navigation path.

[0352] In some embodiments, while awaiting valid user input to define and/or confirm a navigation path, the device being navigated performs a waiting maneuver (e.g., if it includes movement capability). For example, prior to receiving user input 1421 of FIG. 14B, and/or user input 1433 and user input 1435 of FIG. 14G, the device being navigated can stop moving and wait for instructions. The device being navigated can maintain the waiting maneuver until valid user input is received (e.g., and not resume or continue further movement in response to user input 1423 of FIG. 14E).
[0353] In some embodiments, a user interface and/or prompt for requesting user input can be displayed at a threshold (e.g., predetermined) distance away from the location represented by the map data requiring the user input (e.g., a half mile away from where the navigation instruction is needed, such as at the border of a map data state change from the first state to the third state). In some embodiments, a user interface and/or prompt for requesting user input can be displayed at a threshold (e.g., predetermined) time before arrival at the location represented by the map data requiring the user input (e.g., one minute before arriving where the navigation instruction is needed, based on current travel speed).
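The distance- and time-based prompting thresholds described in paragraph [0353] could be checked as in this sketch; the specific threshold values are loosely adapted from the examples in the paragraph and are otherwise assumptions.

```swift
// Hypothetical thresholds loosely matching the examples in paragraph [0353].
let promptDistanceThreshold = 800.0 // meters (roughly half a mile)
let promptTimeThreshold = 60.0      // seconds

// Prompt when the device is within a threshold distance of the area that
// requires input, or within a threshold time at the current travel speed.
func shouldPromptForInput(distanceToArea: Double,
                          currentSpeed: Double) -> Bool {
    if distanceToArea <= promptDistanceThreshold { return true }
    guard currentSpeed > 0 else { return false }
    return distanceToArea / currentSpeed <= promptTimeThreshold
}
```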
[0354] In some embodiments, the device being navigated corresponds to (e.g., is associated with, logged into, and/or assigned to) a particular user (e.g., a user account, such as a user account belonging to the owner of the vehicle). In some embodiments, the device being navigated is connected to (e.g., in communication with) a plurality of devices. For example, the device being navigated can be connected to two other devices: a different device of the owner (e.g., a smartphone displaying navigation user interface 1410) and a device of a guest (e.g., a user other than the owner). In some embodiments, a user interface and/or prompt for requesting user input is displayed at one or more of the plurality of devices connected to the device being navigated. For example, the owner’s different device can display navigation user interface 1410 prompting for user input whereas the device of the guest does not display navigation user interface 1410. In this way, the device being navigated can prompt for input from certain users and/or devices preferentially and/or sequentially. In some embodiments, the device being navigated is connected to one other device. For example, the one other device can display a user interface and/or prompt requesting user input depending on whether the one other device corresponds to the owner of the device being navigated (e.g., and/or belongs to a set of users, such as registered users, authorized users, and/or trusted users). In some embodiments, if the one other device is a device of a guest (e.g., not the owner), the one other device does not display navigation user interface 1410. In some embodiments, if the one other device is a different device of the owner, the one other device does display navigation user interface 1410. For example, a device of the owner, but not a device of a guest, can be prompted and provide instructions to the device being navigated for navigating through areas with insufficient map data. However, by not prompting certain users (e.g., guests) in the same way as the owner, the device being navigated can be prevented from being navigated through such areas (e.g., which can be a preference of and/or set by the owner).
[0355] In some embodiments, the device being navigated and the device displaying navigation user interfaces (e.g., 1410 in FIGS. 14A-14H) are the same device. For example, computer system 1400 displays the user interfaces and is tracking and updating navigation based on its own location and movement. In some embodiments, the device being navigated and the device displaying navigation user interfaces (e.g., 1410 in FIGS. 14A-14H) are different devices. For example, computer system 1400 displays the user interfaces but is tracking and updating navigation based on the location and movement of another device (e.g., for guiding another smartphone; for guiding a device with autonomous and/or semi-autonomous movement capabilities). In some embodiments, the navigation user interfaces are displayed on a shared screen. For example, the navigation interfaces can be displayed on a touchscreen of a vehicle that is attached to computer system 1400 (e.g., a user connects their smartphone via a wire or wirelessly to a computer inside of their vehicle, causing a display of the vehicle to be controlled by an operating system of the smartphone (e.g., like Apple CarPlay)).
[0356] FIG. 15 is a flow diagram illustrating a method for interacting with different map data using a computer system in accordance with some embodiments. Process 1500 is performed at a computer system (e.g., system 100). The computer system is in communication with one or more output components. Some operations in process 1500 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0357] As described below, process 1500 provides an intuitive way for interacting with different map data. The method reduces the cognitive burden on a user for interacting with different map data, thereby creating a more efficient human-machine interface. For battery operated computing devices, enabling a user to interact with different map data faster and more efficiently conserves power and increases the time between battery charges.
[0358] In some embodiments, process 1500 is performed at a computer system (e.g., 1400) that is in communication with one or more output components (e.g., 1402) (e.g., a display screen, a touch-sensitive display, a haptic output component, and/or a speaker). In some embodiments, the computer system is a watch, a fitness tracking device, a phone, a tablet, a processor, a head-mounted display (HMD) device, and/or a personal computing device. In some embodiments, the computer system is in communication with one or more input devices (e.g., a physical input mechanism, a camera, a touch-sensitive display, a microphone, and/or a button).
[0359] The computer system receives (1502) a request (e.g., as described above with respect to FIGS. 14A-14H) to navigate to a first destination (e.g., as described above with respect to FIGS. 14A-14H). In some embodiments, the request is received via a map application (e.g., an application configured to provide directions to destinations). In some embodiments, receiving the request includes detecting, via a sensor in communication with the computer system, input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)). In some embodiments, the request is received via a determination by the computer system to navigate to the first destination.
[0360] In response to receiving the request (e.g., as described above with respect to FIGS. 14A-14H), the computer system initiates (1504) navigation to the first destination (e.g., as described above with respect to FIGS. 14A-14H) (e.g., displaying navigation interface 1410 as illustrated in FIG. 14A). In some embodiments, navigating to the first destination includes providing, via at least one output component of the one or more output components, one or more maneuvers (e.g., directions). In some embodiments, navigating to the first destination includes causing a physical component in communication with the computer system to change position.
[0361] While (1506) navigating to the first destination (e.g., as illustrated in FIG. 14A) (e.g., after initiating navigation to the first destination, such as after providing at least one maneuver (e.g., a direction) with respect to navigating to the first destination), in accordance with a determination that an intended traversal area (e.g., represented by 1414a) (e.g., an upcoming traversal area, a next traversal area, a future traversal area, and/or an area for which the computer system has determined to navigate to and/or through) includes a first quality of map data (e.g., represented by navigation path 1414a of FIG. 14B, 14C, 14D, 14E, and/or 14G) (e.g., map data associated with the second state, third state, and/or fourth state as described with respect to FIGS. 14A-14H) (e.g., a first level of map data, an amount of map data below a threshold, an inadequate amount of map data, and/or map data with a confidence level below a threshold), the computer system requests (1508), via the one or more output components, input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to an upcoming maneuver (e.g., displaying path confirmation user interface 1420, navigation instruction 1410a of FIG. 14E, and/or navigation instruction 1410a of FIG. 14G) (e.g., a maneuver, a next maneuver, a direction, a next direction, and/or an upcoming direction, such as “go straight,” “turn left,” and/or “turn right”). In some embodiments, the requesting includes outputting, via a speaker of the one or more output components, an audio request with respect to the next maneuver. In some embodiments, the requesting includes displaying, via a display component of the one or more output components, a visual request with respect to the next maneuver. In some embodiments, the first quality of map data is determined based on metadata corresponding to the intended traversal area. In some embodiments, the first quality of map data is determined based on a confidence level corresponding to the intended traversal area.
[0362] While (1506) navigating to the first destination, in accordance with a determination that the intended traversal area includes a second quality of map data (e.g., represented by navigation path 1414a of FIG. 14A) (e.g., map data associated with the first state as described with respect to FIGS. 14A-14H) (e.g., predefined map data, map data including one or more potential routes through the intended traversal area, and/or map data determined based on data detected via one or more sensors in communication with the computer system) different from the first quality of map data (e.g., represented by navigation path 1414a of FIG. 14B, 14C, 14D, 14E, and/or 14G) (e.g., map data associated with the second state, third state, and/or fourth state as described with respect to FIGS. 14A-14H), the computer system forgoes (1510) requesting input with respect to the upcoming maneuver (e.g., forgoing displaying path confirmation user interface 1420, navigation instruction 1410a of FIG. 14E, and/or navigation instruction 1410a of FIG. 14G) (e.g., continuing to display navigation user interface 1410 as in FIG. 14A). In some embodiments, in accordance with the determination that the intended traversal area includes the second quality of map data, outputting, via a speaker of the one or more output components, the upcoming maneuver. In some embodiments, in accordance with the determination that the intended traversal area includes the second quality of map data, displaying, via a display component of the one or more output components, the upcoming maneuver. In some embodiments, in accordance with the determination that the intended traversal area includes the second quality of map data, providing, via an output component, the upcoming maneuver without additional input required after initiating navigation to the first destination. In some embodiments, the first quality of map data is determined to be of lower quality (e.g., includes less data, includes data that corresponds to less detailed map data, and/or does not include data that is included in the second quality of map data) than the second quality of map data. Requesting input with respect to the upcoming maneuver when the intended traversal area includes the first quality of map data provides the user with different functionality depending on the quality of map data for the intended traversal area, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
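The branch between requesting input (1508) and forgoing the request (1510) might be summarized, under stated assumptions, as follows; the quality labels and closure parameters are illustrative stand-ins for the behavior described in paragraphs [0361]-[0362].

```swift
// A sketch of the branch in process 1500: request input for the first quality
// of map data, forgo the request (and simply output the maneuver) otherwise.
enum MapDataQuality { case first, second }

func handleUpcomingManeuver(quality: MapDataQuality,
                            requestInput: () -> Void,
                            outputManeuver: () -> Void) {
    switch quality {
    case .first:
        requestInput()   // e.g., display path confirmation user interface 1420
    case .second:
        outputManeuver() // proceed without additional input
    }
}
```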
[0363] In some embodiments, while navigating to the first destination (e.g., as described above with respect to FIGS. 14A-14H), in accordance with the determination that the intended traversal area includes the second quality of map data, the computer system performs the upcoming maneuver (e.g., performing the maneuver represented by navigation path 1414a of FIG. 14A) (e.g., displaying a representation of the upcoming maneuver and/or causing the computer system to be navigated according to the upcoming maneuver) without receiving input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to the upcoming maneuver (e.g., 1414a of FIG. 14A) (e.g., since initiating navigation to the first destination and/or since receiving input with respect to a maneuver before the upcoming maneuver). In some embodiments, before navigating to the first destination, a route to the first destination is selected via input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) and the route includes the upcoming maneuver. In some embodiments, before navigating to the first destination, a route to the first destination is selected via input and no further input is received with respect to the upcoming maneuver. In some embodiments, the second quality of map data was contributed by a third party (e.g., a person or company in control of the intended traversal area and/or a person, company, and/or entity that has visited, selected, and/or navigated the intended area) and not a manufacturer of the computer system. In some embodiments, the second quality of map data is verified by a mapping software performing the upcoming maneuver. In some embodiments, the second quality of map data is verified by a user associated with the mapping software. Performing the upcoming maneuver when the intended traversal area includes the second quality of map data provides the user with functionality without the user needing to directly request such functionality, thereby reducing the number of inputs needed to perform an operation and/or performing an operation when a set of conditions has been met without requiring further user input.
[0364] In some embodiments, while navigating to the first destination (e.g., as described above with respect to FIGS. 14A-14H), in accordance with the determination that the intended traversal area includes the first quality of map data and after (e.g., while and/or in conjunction with) a computer-generated path (e.g., 1414a in FIG. 14B) (e.g., the computer-generated path is a recommended path and/or a determined path through the intended traversal area and/or through locations that correspond to the intended traversal area) corresponding to the upcoming maneuver is displayed (e.g., via the display component and/or via a second computer system that is different from the computer system), the computer system receives input (e.g., 1421) (e.g., a tap input or, in some examples, a non-tap input (e.g., a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) corresponding to approval of the computer-generated path. In some embodiments, the computer-generated path includes the upcoming maneuver. In some embodiments, the computer-generated path is generated without input from a user of the computer system. In some embodiments, while navigating to the first destination, in response to receiving the input, the computer system performs the upcoming maneuver (e.g., performing the maneuver represented by navigation path 1414a of FIG. 14B) according to the computer-generated path (e.g., 1414a of FIG. 14B). In some embodiments, receiving input corresponding to rejection of the computer-generated path causes display of a second computer-generated path different from the computer-generated path. Performing the upcoming maneuver according to the computer-generated path when approval of the path is received provides the user the ability to decide whether a path that was generated for the user is what the user wants, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0365] In some embodiments, the computer-generated path is generated based on data captured by one or more sensors that are in communication with the computer system. In some embodiments, the one or more sensors are included within and/or attached to a housing that includes and/or has attached the one or more output components. In some embodiments, the one or more sensors do not detect a location (e.g., via a global positioning system) but rather detect one or more objects in a physical environment. In some embodiments, the computer-generated path is generated based on data captured by a plurality of sensors in communication with the computer system. In some embodiments, the one or more sensors include a camera and the data includes an image captured by the camera. In some embodiments, the one or more sensors include a radar, lidar, and/or another ranging sensor. Generating the computer-generated path based on data captured by one or more sensors that are in communication with the computer system ensures that the computer-generated path is based on current data and not data that was detected previously, thereby adapting to a current context and/or state of a physical environment.
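As a hedged example of deriving a navigation decision from ranging-sensor data rather than location data, the sketch below picks the bearing with the most open space ahead; the sensor model (bearing/range pairs) is an assumption for illustration.

```swift
// Hypothetical use of ranging-sensor data (e.g., lidar or radar) to pick a
// clear heading; the reading format is illustrative only.
struct RangeReading {
    let bearingDegrees: Double   // direction the sensor sampled
    let distanceMeters: Double   // distance to the nearest detected object
}

// Choose the bearing with the most open space ahead of the system.
func clearestHeading(from readings: [RangeReading]) -> Double? {
    readings.max { $0.distanceMeters < $1.distanceMeters }?.bearingDegrees
}

let readings = [
    RangeReading(bearingDegrees: -30, distanceMeters: 2.1),
    RangeReading(bearingDegrees: 0, distanceMeters: 7.5),
    RangeReading(bearingDegrees: 30, distanceMeters: 4.2),
]
if let heading = clearestHeading(from: readings) {
    print("Clearest heading: \(heading) degrees")  // 0.0 degrees
}
```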
[0366] In some embodiments, the computer-generated path is generated based on data captured by a different computer system separate from the computer system. In some embodiments, the different computer system is remote from and/or not physically connected to the computer system. In some embodiments, the computer-generated path is generated based on a heat map determined based on data collected from a plurality of different computer systems. In some embodiments, the plurality of different computer systems are not in communication with the computer system but rather are in communication with the different computer system that is in communication with the computer system. In some embodiments, the different computer system is in wireless communication with the computer system, such as via the Internet. In some embodiments, the data is received by the computer system in a message sent by the different computer system. In some embodiments, the different computer system generates the computer-generated path, and the computer system receives the computer-generated path from the different computer system. Generating the computer-generated path based on data captured by the different computer system provides the ability for operations to be performed and/or data to be detected by computer systems different from the computer system, thereby offloading such operations to different processors and/or allowing for different types of data to be detected/used when the computer system might not be in communication with such sensors.
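One way to use remotely aggregated data is sketched below: a hypothetical "heat map" of traversal counts collected from other systems guides the choice of the next grid cell. The grid encoding is an assumption, not the disclosed format.

```swift
// Hypothetical fusion of a remote heat map (aggregated from other systems)
// with a local grid position; all names are illustrative.
struct HeatMap {
    // traversalCounts[row][col]: how often other systems crossed each cell.
    let traversalCounts: [[Int]]
}

// Prefer the neighboring cell that other systems have traversed most often.
func preferredNextCell(from cell: (row: Int, col: Int),
                       in heatMap: HeatMap) -> (row: Int, col: Int)? {
    let offsets = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    let rows = heatMap.traversalCounts.count
    var candidates: [(row: Int, col: Int)] = []
    for (dr, dc) in offsets {
        let r = cell.row + dr
        let c = cell.col + dc
        if r >= 0 && r < rows && c >= 0 && c < heatMap.traversalCounts[r].count {
            candidates.append((row: r, col: c))
        }
    }
    return candidates.max { a, b in
        heatMap.traversalCounts[a.row][a.col] < heatMap.traversalCounts[b.row][b.col]
    }
}

let heatMap = HeatMap(traversalCounts: [[1, 9, 0], [4, 2, 7], [0, 3, 5]])
print(preferredNextCell(from: (row: 1, col: 1), in: heatMap) ?? "none")  // (row: 0, col: 1)
```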
[0367] In some embodiments, while navigating to the first destination (e.g., as described above with respect to FIGS. 14A-14H), in accordance with a determination that the intended traversal area includes a third quality of map data (e.g., represented by navigation path 1414a of FIG. 14E and/or 14G) (e.g., map data associated with the fourth state as described with respect to FIGS. 14A-14H) (e.g., the second quality of map data or a quality of map data different from the first and second quality of map data), the computer system receives input (e.g., 1423, 1433, and/or 1435) (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) corresponding to a path (e.g., defined by 1423, 1433, and/or 1435) (e.g., a navigation path and/or one or more instructions for navigating with respect to the intended traversal area) with respect to the intended traversal area. In some embodiments, the third quality of map data is the second quality of map data. In some embodiments, the path is generated based on the input. In some embodiments, the third quality of map data is a lower quality of map data than the second quality of map data. In some embodiments, while navigating to the first destination, after receiving the input corresponding to the path and in accordance with a determination that the path meets a first set of criteria, the computer system navigates (e.g., with respect to the intended traversal area) via the path (e.g., navigating via 1414a of FIG. 14H). In some embodiments, after receiving the input corresponding to the path and in accordance with a determination that the path does not meet the first set of criteria, the computer system forgoes navigating via the path (e.g., requests a different path). In some embodiments, the first set of criteria includes a criterion that is met when the path is determined to be navigable by the computer system. In some embodiments, the path is determined to be navigable by the computer system based on data captured by one or more sensors in communication with the computer system. In some embodiments, the path is determined to be navigable by the computer system based on one or more objects detected in the intended traversal area. Navigating via the path when the path meets the first set of criteria ensures that the path is accepted by the computer system and that not just any path will be used for navigation, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
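A minimal sketch of the navigability criterion discussed above, assuming a simple obstacle model: a path fails the first set of criteria if any waypoint comes within a clearance threshold of a detected obstacle. The obstacle type and the clearance parameter are assumptions for illustration.

```swift
// Hypothetical navigability check for a user-supplied path; the obstacle
// model and clearance threshold are illustrative, not from the disclosure.
struct Obstacle {
    let x: Double
    let y: Double
    let radius: Double
}

func pathMeetsCriteria(_ waypoints: [(x: Double, y: Double)],
                       obstacles: [Obstacle],
                       minimumClearance: Double) -> Bool {
    for point in waypoints {
        for obstacle in obstacles {
            let dx = point.x - obstacle.x
            let dy = point.y - obstacle.y
            let distance = (dx * dx + dy * dy).squareRoot()
            if distance < obstacle.radius + minimumClearance {
                return false  // forgo navigating via this path
            }
        }
    }
    return true  // navigate via this path
}

let obstacles = [Obstacle(x: 2, y: 2, radius: 0.5)]
print(pathMeetsCriteria([(0, 0), (2, 2.2)], obstacles: obstacles, minimumClearance: 0.3))  // false
```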
[0368] In some embodiments, while navigating to the first destination (e.g., as described above with respect to FIGS. 14A-14H) (e.g., while displaying navigation interface 1410), in accordance with the determination that the intended traversal area includes the third quality of map data (e.g., while displaying navigation interface 1410 of FIG. 14E) and after receiving the input (e.g., 1423) corresponding to the path, in accordance with a determination that the path does not meet the first set of criteria, the computer system forgoes navigating via the path (e.g., and displaying invalid path user interface 1430) (e.g., rejecting the path and, in some examples, requesting input corresponding to a different path), wherein the determination that the path does not meet the first set of criteria is based on data detected by one or more sensors in communication with the computer system. In some embodiments, the one or more sensors do not detect a location of the computer system but rather detect a characteristic (e.g., an object, a surface, and/or a path within) of a physical environment. Forgoing navigating via the path when the path does not meet the first set of criteria ensures that not just any path will be used for navigation, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0369] In some embodiments, while navigating to the first destination (e.g., as described above with respect to FIGS. 14A-14H), in accordance with the determination that the intended traversal area includes the second quality of map data (e.g., represented by navigation path 1414a of FIG. 14A) (e.g., map data associated with the first state as described with respect to FIGS. 14A-14H) and after performing the upcoming maneuver without receiving input with respect to the upcoming maneuver (e.g., represented by navigation path 1414a of FIG. 14A), in accordance with a determination that a second intended traversal area includes the first quality of map data (e.g., represented by navigation path 1414a of FIGS. 14B, 14C, 14D, 14E, and/or 14G) (e.g., map data associated with the second state, third state, and/or fourth state as described with respect to FIGS. 14A-14H), the computer system requests, via the one or more output components, input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to a second upcoming maneuver different from the upcoming maneuver (e.g., displaying path confirmation user interface 1420, navigation instruction 1410a of FIG. 14E, and/or navigation instruction 1410a of FIG. 14G). In some embodiments, requesting input with respect to the second upcoming maneuver is in a different form than requesting input with respect to the upcoming maneuver (e.g., one includes providing a suggested path while the other requires a user to identify at least one or more points to use to generate a path). In some embodiments, the second intended traversal area is different from the intended traversal area. Requesting input with respect to the second upcoming maneuver after performing the upcoming maneuver without receiving input with respect to the upcoming maneuver ensures that the computer system only requests user input for some maneuvers and not other maneuvers, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
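To illustrate a route that mixes both behaviors, the sketch below tags each segment with its own map-data quality, so some maneuvers run automatically while others pause to request input; segment and quality names are hypothetical.

```swift
// Hypothetical mixed-quality route; names are illustrative only.
enum SegmentQuality { case adequate, inadequate }

struct RouteSegment {
    let maneuver: String
    let quality: SegmentQuality
}

func navigate(route: [RouteSegment]) {
    for segment in route {
        switch segment.quality {
        case .adequate:
            print("Performing automatically: \(segment.maneuver)")
        case .inadequate:
            print("Requesting input before: \(segment.maneuver)")
        }
    }
}

navigate(route: [
    RouteSegment(maneuver: "turn right onto the access road", quality: .adequate),
    RouteSegment(maneuver: "cross the unmapped lot", quality: .inadequate),
])
```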
[0370] In some embodiments, a first path corresponding to the upcoming maneuver has a first visual appearance (e.g., visual appearance of 1414a in FIGS. 14A, 14B, 14D, 14E, 14G, and/or 14H) and a second path corresponding to the second upcoming maneuver has a second visual appearance different (e.g., visual appearance of 1414a in FIGS. 14A, 14B, 14D, 14E, 14G, and/or 14H) from (e.g., a different color, pattern, line weight, line segmentation (e.g., solid lines v. dotted lines), and/or size) the first visual appearance. In some embodiments, the first visual appearance indicates a first respective quality of map data (e.g., map data associated with the first state, second state, third state, and/or fourth state as described with respect to FIGS. 14A-14H) and the second visual appearance indicates a second respective quality of map data (e.g., map data associated with the first state, second state, third state, and/or fourth state as described with respect to FIGS. 14A-14H) different from the first respective quality of map data. In some embodiments, the second upcoming maneuver is the same type of maneuver as the upcoming maneuver (e.g., the same maneuver). Different paths having different visual appearances based on the amount of input required for a path provides the user with feedback about the state of the computer system and an amount of confidence that the user should have in a particular path, thereby providing improved visual feedback to the user.

[0371] Note that details of the processes described above with respect to process 1500 (e.g., FIG. 15) are also applicable in an analogous manner to the methods described below/above. For example, process 1600 optionally includes one or more of the characteristics of the various methods described above with reference to process 1500. For example, the computer system of process 1600 can be the computer system of process 1500. For brevity, these details are not repeated below.
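As an illustration of the quality-dependent path appearances described in paragraph [0370], the following hypothetical mapping from map-data quality to rendering style is one plausible implementation; the style fields are illustrative stand-ins for real rendering attributes.

```swift
// Hypothetical per-quality path styling; names are illustrative only.
enum PathQuality { case verified, userDefined }

struct PathStyle {
    let colorName: String
    let isDashed: Bool
}

func style(for quality: PathQuality) -> PathStyle {
    switch quality {
    case .verified:
        // Solid line: the maneuver can be performed without user input.
        return PathStyle(colorName: "blue", isDashed: false)
    case .userDefined:
        // Dashed line: user input shaped or confirmed this portion.
        return PathStyle(colorName: "orange", isDashed: true)
    }
}
```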
[0372] FIG. 16 is a flow diagram illustrating a method for interacting with different map data using a computer system in accordance with some embodiments. Process 1600 is performed at a computer system (e.g., system 100). The computer system is in communication with one or more output components. Some operations in process 1600 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0373] As described below, process 1600 provides an intuitive way for interacting with different map data. The method reduces the cognitive burden on a user for interacting with different map data, thereby creating a more efficient human-machine interface. For battery operated computing devices, enabling a user to interact with different map data faster and more efficiently conserves power and increases the time between battery charges.
[0374] In some embodiments, process 1600 is performed at a computer system (e.g., 1400) that is in communication with one or more output components (e.g., 1402) (e.g., display screen, a touch-sensitive display, a haptic output device, and/or a speaker). In some embodiments, the computer system is a watch, a fitness tracking device, a phone, a tablet, a processor, a head-mounted display (HMD) device, and/or a personal computing device. In some embodiments, the computer system is in communication with one or more input devices (e.g., a physical input mechanism, a camera, a touch-sensitive display, a microphone, and/or a button).
[0375] The computer system receives (1602) a request to navigate to a first destination (e.g., a request to display navigation interface 1410 of FIG. 14A). In some embodiments, the request is received via a map application (e.g., an application configured to provide directions to destinations). In some embodiments, receiving the request includes detecting, via a sensor in communication with the computer system, input (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)). In some embodiments, the request is received via a determination by the computer system to navigate to the first destination.
[0376] In response to receiving the request (e.g., a request to display navigation interface 1410 of FIG. 14A), the computer system initiates (1604) navigation to the first destination (e.g., as described above with respect to FIGS. 14A-14H) (e.g., as illustrated in FIG. 14A). In some embodiments, navigating to the first destination includes providing, via at least one output component of the one or more output components, one or more maneuvers (e.g., directions). In some embodiments, navigating to the first destination includes causing a physical component in communication with the computer system to change position.
[0377] While (1606) navigating to the first destination (e.g., as described above with respect to FIGS. 14A-14H) (e.g., as illustrated in FIG. 14A) (e.g., after initiating navigation to the first destination, such as after providing at least one maneuver (e.g., a direction) with respect to navigating to the first destination), in accordance with a determination that a set of one or more criteria is met, wherein the set of criteria includes a criterion that is met when a determination is made that an intended traversal area (e.g., an upcoming traversal area, a next traversal area, a future traversal area, and/or an area for which the computer system has determined to navigate to and/or through) includes inadequate map data (e.g., a first level of map data, predefined map data, map data including one or more potential routes through the intended traversal area, and/or map data determined based on data detected via one or more sensors in communication with the computer system) to determine an upcoming maneuver (e.g., a maneuver, a next maneuver, a direction, a next direction, and/or an upcoming direction, such as "go straight," "turn left," and/or "turn right") (e.g., represented by navigation path 1414a of FIGS. 14E and/or 14G) (e.g., map data associated with the fourth state as described with respect to FIGS. 14A-14H), the computer system requests (1608), via the one or more output components, input (e.g., 1423, 1433, and/or 1435) (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to the upcoming maneuver. In some embodiments, requesting includes outputting, via a speaker of the one or more output components, an audio request with respect to the upcoming maneuver. In some embodiments, requesting includes displaying, via a display component of the one or more output components, a visual request (e.g., a request for a user to select one or more points to include in the upcoming maneuver, a request for a user to draw a path to correspond to the upcoming maneuver, a request for a user to verbally describe the upcoming maneuver, and/or a request for a user to point or otherwise indicate a direction and/or area to include in the upcoming maneuver). In some embodiments, in accordance with a determination that the intended traversal area includes adequate map data to determine the upcoming maneuver, the computer system forgoes requesting input with respect to the upcoming maneuver. Requesting input with respect to the upcoming maneuver when the intended traversal area includes inadequate map data provides the user with different functionality depending on the map data for the intended traversal area, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
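One plausible way to encode the "set of one or more criteria" from this step (and the distance and motion criteria discussed further below) is as composable predicates, as in the hypothetical sketch here; the threshold value and context fields are assumptions for illustration.

```swift
// Hypothetical representation of a set of one or more criteria as
// composable predicates; names and thresholds are illustrative only.
struct NavigationContext {
    let hasAdequateMapData: Bool
    let distanceToAreaMeters: Double
    let isMoving: Bool
}

typealias Criterion = (NavigationContext) -> Bool

let inadequateData: Criterion = { !$0.hasAdequateMapData }
let withinThreshold: Criterion = { $0.distanceToAreaMeters <= 10 }  // e.g., 1-10 meters
let stopped: Criterion = { !$0.isMoving }

// The set of criteria is met only when every criterion is met.
func shouldRequestInput(_ context: NavigationContext,
                        criteria: [Criterion]) -> Bool {
    criteria.allSatisfy { $0(context) }
}

let context = NavigationContext(hasAdequateMapData: false,
                                distanceToAreaMeters: 6,
                                isMoving: false)
print(shouldRequestInput(context, criteria: [inadequateData, withinThreshold, stopped]))  // true
```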
[0378] In some embodiments, after requesting input with respect to the upcoming maneuver, the computer system receives input (e.g., 1423) (e.g., a drag input or, in some examples, a non-drag input (e.g., a tap input, a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) corresponding to a first path (e.g., 1414a of FIG. 14H) (e.g., a drawn path, a path indicating movement and/or direction, and/or a path that is updated over time and/or while the computer system is moving) in a first representation (e.g., navigation user interface 1410 of FIG. 14E) (e.g., a graphical representation, a line, a path, a textual representation, and/or a symbolic representation) of the intended traversal area. In some embodiments, the input is continuous input including input at a first position and a second position, wherein the path includes the first position and the second position. In some embodiments, the input includes a tap and hold gesture that begins at a first position and continues to a second position, where the path includes the first position and the second position. In some embodiments, the computer system navigates according to the path. In some embodiments, the input includes a drawing of a continuous line in the representation of the intended traversal area. Receiving input corresponding to the first path in the first representation provides the user a precise way for instructing the computer system where to navigate, thereby reducing the number of inputs needed to perform an operation and/or providing additional control options without cluttering the user interface with additional displayed controls.

[0379] In some embodiments, after requesting input with respect to the upcoming maneuver, the computer system receives input (e.g., 1433 and/or 1435) (e.g., a drag input or, in some examples, a non-drag input (e.g., a tap input, a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) corresponding to one or more points (e.g., centroids of 1433 and/or 1435) in a second representation (e.g., navigation user interface 1410 of FIG. 14H) of the intended traversal area, wherein a second path is generated based on the one or more points. In some embodiments, the one or more points includes a plurality of points, wherein a line between the plurality of points is generated (e.g., using interpolation or some other operation to identify a path between the plurality of points). In some embodiments, the one or more points includes a point, wherein a line between a location of the computer system and the point is generated (e.g., using interpolation or some other operation to identify a path between the location and the point). In some embodiments, the input includes a plurality of distinct inputs, each distinct input including detection of the distinct input and detection of a release of the distinct input. In some embodiments, the input includes a first input and a second input distinct (e.g., separate) from the first input. Receiving input corresponding to one or more points in the second representation provides the user a precise way for instructing the computer system where to navigate, thereby reducing the number of inputs needed to perform an operation and/or providing additional control options without cluttering the user interface with additional displayed controls.
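A minimal sketch of generating a path from user-selected points by linear interpolation, one of the operations suggested above; the point model and sample density are assumptions for illustration.

```swift
// Hypothetical linear interpolation of user-selected points into a denser
// path; the types and sample count are illustrative only.
struct Point {
    let x: Double
    let y: Double
}

func interpolatedPath(through points: [Point], samplesPerSegment: Int = 10) -> [Point] {
    guard points.count > 1, samplesPerSegment > 0 else { return points }
    var path: [Point] = []
    for (start, end) in zip(points, points.dropFirst()) {
        for step in 0..<samplesPerSegment {
            let t = Double(step) / Double(samplesPerSegment)
            path.append(Point(x: start.x + (end.x - start.x) * t,
                              y: start.y + (end.y - start.y) * t))
        }
    }
    if let last = points.last {
        path.append(last)
    }
    return path
}

let selected = [Point(x: 0, y: 0), Point(x: 10, y: 0), Point(x: 10, y: 5)]
print(interpolatedPath(through: selected).count)  // 21
```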
[0380] In some embodiments, after requesting input with respect to the upcoming maneuver, the computer system receives (e.g., via a microphone that is in communication with the computer system) a voice request corresponding to the intended traversal area. In some embodiments, the voice request includes one or more verbal instructions for navigating with respect to the intended traversal area. Receiving the voice request corresponding to the intended traversal area provides the user a precise way for instructing the computer system where to navigate, thereby reducing the number of inputs needed to perform an operation and/or providing additional control options without cluttering the user interface with additional displayed controls.
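For illustration, here is a hypothetical mapping from a transcribed voice request to a maneuver; actual speech recognition is out of scope, so the transcript is assumed to already be available as text.

```swift
// Hypothetical parsing of a verbal instruction into a maneuver; the
// keyword matching is a simplification for illustration only.
enum SpokenManeuver {
    case turnLeft, turnRight, goStraight
}

func maneuver(fromTranscript transcript: String) -> SpokenManeuver? {
    let lowered = transcript.lowercased()
    if lowered.contains("left") { return .turnLeft }
    if lowered.contains("right") { return .turnRight }
    if lowered.contains("straight") { return .goStraight }
    return nil
}

if let parsed = maneuver(fromTranscript: "turn left past the loading dock") {
    print(parsed)  // turnLeft
}
```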
[0381] In some embodiments, the navigation to the first destination is initiated along a third path (e.g., 1414a of FIG. 14H) (e.g., a path through a physical environment and/or a path including one or more directions for navigating the physical environment). In some embodiments, a portion of the third path goes through the intended traversal area (e.g., the path is configured to navigate through and/or along the intended traversal area). In some embodiments, the path is determined by the computer system. In some embodiments, the computer system sends, to a device in communication with the computer system such as a server, a request for the path and, after sending the request, the computer system receives, from the device, the path. Navigation that includes a portion requiring input to traverse provides the user the ability to navigate into areas for which map data accessible by the computer system is inadequate, thereby increasing the number of options available to the user and allowing the user to save time while navigating to a destination.
[0382] In some embodiments, the navigation to the first destination is initiated along a fourth path (e.g., 1414a of FIG. 14A) (e.g., a path through a physical environment, the path including one or more directions for navigating the physical environment). In some embodiments, the fourth path includes a respective portion that does not require an input (e.g., 1421, 1423, 1433, and/or 1435) (e.g., user input) (e.g., one or more respective inputs that are obtained to navigate through the respective portion) (e.g., one or more drag inputs and/or one or more non-drag inputs (e.g., a tap input, a rotational input, an air gesture, a mouse click, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) to navigate through the respective portion (e.g., the path includes a maneuver to navigate through the portion without a user confirming the maneuver). In some embodiments, the path is determined by the computer system. In some embodiments, the computer system sends, to a device in communication with the computer system such as a server, a request for the path and, after sending the request, the computer system receives, from the device, the path. Navigation that includes a portion not requiring input to traverse reduces the amount of input required from the user during navigation, thereby reducing the number of inputs needed to perform an operation and/or performing an operation when a set of conditions has been met without requiring further user input.
[0383] In some embodiments, the set of one or more criteria includes a criterion that is met when a determination is made that the computer system is within a first threshold distance (e.g., zero or more) (e.g., 1-10 meters) from the intended traversal area. In some embodiments, the first threshold distance is predefined and applied to all navigation and all portions of a navigation by the computer system. In some embodiments, the first threshold distance is based on the intended traversal area and is different for different intended traversal areas (e.g., different intended traversal areas may be smaller or bigger and require different amounts of time to handle) (e.g., different intended traversal areas may include different areas around them for stopping). Requesting input with respect to the upcoming maneuver when the intended traversal area is within the first threshold distance provides the user with options with respect to navigation at a time in which the user is in a position to provide input, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
[0384] In some embodiments, the set of one or more criteria includes a criterion that is met when a determination is made that the computer system is not moving (e.g., based on data detected by a sensor in communication with the computer system and/or based on a current maneuver being performed for navigating) and within a second threshold distance (e.g., zero or more) (e.g., 1-10 meters) from the intended traversal area. In some embodiments, the second threshold distance is predefined and applied to all navigation and all portions of a navigation by the computer system. In some embodiments, the second threshold distance is based on the intended traversal area and is different for different intended traversal areas (e.g., different intended traversal areas may be smaller or bigger and require different amounts of time to handle) (e.g., different intended traversal areas may include different areas around them for stopping). In some embodiments, in accordance with a determination that the computer system is moving, the computer system does not request input with respect to the upcoming maneuver. In some embodiments, in accordance with a determination that the computer system is not within the second threshold distance from the intended traversal area, the computer system does not request input with respect to the upcoming maneuver. Requesting input with respect to the upcoming maneuver when the computer system is not moving and within the second threshold distance from the intended traversal area provides the user with options with respect to navigation at a time in which the user is in a position to provide input, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.

[0385] In some embodiments, after requesting input with respect to the upcoming maneuver, the computer system receives a set of one or more inputs including one or more inputs (e.g., 1423, 1433, and/or 1435) (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to the upcoming maneuver. In some embodiments, the set of one or more inputs includes input defining a path for the navigation to take with respect to the intended traversal area. In some embodiments, in response to receiving the set of one or more inputs including the one or more inputs with respect to the upcoming maneuver, in accordance with a determination that a path resulting from the set of one or more inputs does not meet a first set of criteria, the computer system requests (e.g., displaying invalid path user interface 1430 of FIG. 14F), via the one or more output components, different input (e.g., 1431) (e.g., a dragging input or, in some examples, a non-dragging input (e.g., a rotational input, an air gesture, a mouse click and drag input, a voice input, a swipe input, and/or a gaze input)) with respect to the upcoming maneuver (e.g., without initiating navigation of the upcoming maneuver). In some embodiments, the first set of criteria includes a criterion that is met when the path is determined to be safe and/or possible to be navigated by the computer system.
In some embodiments, the first set of criteria includes a criterion that is met based on one or more objects identified in a physical environment corresponding to the path. In some embodiments, in accordance with a determination that the path resulting from the set of one or more inputs meets the first set of criteria, the computer system forgoes requesting, via the one or more output components, different input with respect to the upcoming maneuver and/or initiates navigation of the upcoming maneuver. Requesting different input when the path does not meet the first set of criteria ensures that not just any path will be used for navigation, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or performing an operation when a set of conditions has been met without requiring further user input.
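The request-validate-re-request cycle described above can be sketched as a retry loop; every hook below is a hypothetical stand-in for the behavior described in the preceding paragraphs, not a disclosed API.

```swift
// Hypothetical retry loop: request a path, validate it against the first
// set of criteria, and re-request different input on failure.
struct PathPoint {
    let x: Double
    let y: Double
}

func obtainValidPath(requestPath: () -> [PathPoint],
                     meetsCriteria: ([PathPoint]) -> Bool,
                     maxAttempts: Int = 3) -> [PathPoint]? {
    for _ in 0..<maxAttempts {
        let candidate = requestPath()
        if meetsCriteria(candidate) {
            return candidate  // navigate via this path
        }
        // Otherwise forgo navigating and request different input.
    }
    return nil  // no acceptable path after repeated attempts
}
```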
[0386] Note that details of the processes described above with respect to process 1600 (e.g., FIG. 16) are also applicable in an analogous manner to the methods described above. For example, process 1500 optionally includes one or more of the characteristics of the various methods described above with reference to process 1600. For example, the computer system of process 1500 can be the computer system of process 1600. For brevity, these details are not repeated.
[0387] This disclosure, for purpose of explanation, has been described with reference to specific embodiments. The discussions above are not intended to be exhaustive or to limit the disclosure and/or the claims to the specific embodiments. Modifications and/or variations are possible in view of the disclosure. Some embodiments were chosen and described in order to explain principles of techniques and their practical applications. Others skilled in the art are thereby enabled to utilize the techniques and various embodiments with modifications and/or variations as are suited to a particular use contemplated.
[0388] Although the disclosure and embodiments have been fully described with reference to the accompanying drawings, it is to be noted that various changes and/or modifications will become apparent to those skilled in the art. Such changes and/or modifications are to be understood as being included within the scope of this disclosure and embodiments as defined by the claims.
[0389] It is the intent of this disclosure that any personal information of users should be gathered, managed, and handled in a way to minimize risks of unintentional and/or unauthorized access and/or use.
[0390] Therefore, although this disclosure broadly covers use of personal information to implement one or more embodiments, this disclosure also contemplates that embodiments can be implemented without the need for accessing such personal information.


CLAIMS

What is claimed is:
1. A method, comprising: at a computer system that is in communication with a display component and one or more input devices: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; and in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
2. The method of claim 1, further comprising: in response to receiving the request, ceasing to display the first indication.
3. The method of any one of claims 1-2, wherein the computer system includes the second device.
4. The method of any one of claims 1-3, wherein receiving the request to have the first device navigate with respect to the third device includes detecting input directed to a control that includes an indication of the third device.
5. The method of claim 4, further comprising: while the first device is navigating with respect to the third device, displaying, via the display component, a second control that includes an indication of the second device, wherein the second control is different from the control; while displaying the second control, receiving input directed to the second control; and in response to receiving the input directed to the second control, displaying, via the display component, a third indication that the first device is navigating with respect to the second device.
6. The method of any one of claims 1-5, further comprising: in response to receiving the request, classifying the third device as a guest user of the first device.
7. The method of claim 6, wherein the third device is classified as the guest user of the first device for a predefined amount of time, and wherein the third device is no longer classified as a guest user of the first device after the predefined amount of time has lapsed.
8. The method of any one of claims 1-7, wherein the second device is a different type of device than the first device, and wherein the third device is a different type of device than the first device.
9. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices, the one or more programs including instructions for performing the method of any one of claims 1-8.
10. A computer system that is in communication with a display component and one or more input devices, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 1-8.
11. A computer system that is in communication with a display component and one or more input devices, comprising: means for performing the method of any one of claims 1-8.
12. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices, the one or more programs including instructions for performing the method of any one of claims 1-8.
13. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices, the one or more programs including instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; and in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
14. A computer system that is in communication with a display component and one or more input devices, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; and in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
15. A computer system that is in communication with a display component and one or more input devices, comprising: means for, displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; means for, while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; and means for, in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
16. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices, the one or more programs including instructions for: displaying, via the display component, a first indication that a first device is navigating with respect to a second device different from the first device; while the first device is navigating with respect to the second device, receiving, via the one or more input devices, a request to have the first device navigate with respect to a third device instead of the second device, wherein the third device is different from the first device; and in response to receiving the request, displaying, via the display component, a second indication that the first device is navigating with respect to the third device.
17. A method, comprising: at a computer system that is in communication with a display component and one or more input devices: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
18. The method of claim 17, wherein the respective device is a different type of device than the computer system.
19. The method of any one of claims 17-18, further comprising: before receiving the set of one or more inputs, configuring the respective device, such that the respective device is caused to be navigated to a location corresponding to the first position in conjunction with the respective device being caused to be navigated to the location.
20. The method of any one of claims 17-19, further comprising: in response to receiving the set of one or more inputs, configuring the respective device in a second manner, such that the respective device transitions to a reduced power state when at the location corresponding to the second position, wherein the second manner is different from the first manner.
21. The method of any one of claims 17-20, further comprising: after configuring the respective device in response to receiving the set of one or more inputs and in accordance with a determination that the respective device has arrived at the specific location corresponding to the second position, displaying, via the display component, a notification that the respective device has reached the location.
22. The method of any one of claims 17-21, further comprising: in response to receiving the set of one or more inputs and in accordance with a determination that the first set of criteria are not met, forgoing configuring the respective device in the first manner.
23. The method of any one of claims 17-22, further comprising: before displaying the representation of the location, receiving a request to capture an image; and in response to receiving the request, causing capture, via a camera in communication with the computer system, of a first image, wherein the one or more images includes the first image.
24. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices, the one or more programs including instructions for performing the method of any one of claims 17-23.
25. A computer system that is in communication with a display component and one or more input devices, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 17-23.
26. A computer system that is in communication with a display component and one or more input devices, comprising: means for performing the method of any one of claims 17-23.
27. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices, the one or more programs including instructions for performing the method of any one of claims 17-23.
28. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices, the one or more programs including instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
29. A computer system that is in communication with a display component and one or more input devices, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
30. A computer system that is in communication with a display component and one or more input devices, comprising: means for, after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; means for, receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: means for, displaying, via the display component, the representation of the respective device at the second position; and means for configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
31. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display component and one or more input devices, the one or more programs including instructions for: after capture of one or more images of a location, displaying, via the display component, a representation of a respective device at a first position within a representation of the location, wherein the representation of the location is generated based on the one or more images; receiving, via the one or more input devices, a set of one or more inputs, wherein the set of one or more inputs includes an input corresponding to a request to move the representation of the respective device from the first position to a second position within the representation of the location, and wherein the second position is different from the first position; and in response to receiving the set of one or more inputs and in accordance with a determination that a first set of criteria are met: displaying, via the display component, the representation of the respective device at the second position; and configuring the respective device in a first manner, such that the respective device is caused to be navigated to a specific location corresponding to the second position when the respective device is caused to be navigated to the location.
32. A method, comprising: at a computer system that is in communication with a first movement component and a second movement component different from the first movement component: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
33. The method of claim 32, further comprising: after configuring the one or more angles of the one or more movement components, detecting a current angle of the second movement component; and in response to detecting the current angle of the second movement component: in accordance with a determination that the current angle of the second movement component is a first angle, automatically modifying a current angle of the first movement component to be a second angle; and in accordance with a determination that the current angle of the second movement component is a third angle different from the first angle, automatically modifying the current angle of the first movement component to be a fourth angle different from the second angle.
34. The method of any one of claims 32-33, further comprising: after configuring the one or more angles of the one or more movement components, detecting a current location of the computer system; and in response to detecting the current location of the computer system: in accordance with a determination that the current location of the computer system is in a first orientation relative to the target location, automatically modifying a current angle of the first movement component to be a fifth angle; and in accordance with a determination that the current location of the computer system is in a second orientation relative to the target location, wherein the second orientation is different from the first orientation, automatically modifying the current angle of the first movement component to be a sixth angle different from the fifth angle.
35. The method of any one of claims 32-34, further comprising: after configuring the one or more angles of the one or more movement components, detecting a current location of an object external to the computer system; and in response to detecting the current location of the object external to the computer system: in accordance with a determination that the current location of the object is a first location, automatically modifying a current angle of the first movement component to be a seventh angle; and in accordance with a determination that the current location of the object is a second location different from the first location, automatically modifying the current angle of the first movement component to be an eighth angle different from the seventh angle.
36. The method of any one of claims 32-35, further comprising: before detecting the event with respect to the target location, detecting, via one or more input devices in communication with the computer system, an input corresponding to selection of the target location from one or more available locations, wherein the event occurs while navigating to the target location.
37. The method of claim 36, wherein the input corresponds to an angle of the second movement component.
38. The method of any one of claims 32-37, wherein, after configuring the one or more angles of the one or more movement components: an angle of a third movement component is configured to be controlled in the automatic manner; and an angle of a fourth movement component is configured to be controlled in the manual manner, wherein the third movement component is different from the first movement component and the second movement component, and wherein the fourth movement component is different from the first movement component, the second movement component, and the third movement component.
39. The method of any one of claims 32-38, wherein configuring the one or more angles of one or more movement components includes, in accordance with a determination that the target location is a first type of target location, configuring the angle of the first movement component to converge to a target angle at the target location.
40. The method of any one of claims 32-39, wherein configuring the one or more angles of one or more movement components includes, in accordance with a determination that the target location is a second type of target location, configuring the angle of the first movement component to converge to: a first target angle at a first point of navigating to the target location; and a second target angle at a second point of navigating to the target location, wherein the second target angle is different from the first target angle, and wherein the second point is different from the first point.
41. The method of any one of claims 32-40, wherein configuring the one or more angles of one or more movement components includes, in accordance with a determination that the target location is a third type of target location, configuring the angle of the first movement component to be controlled in an automatic manner for a first portion of a maneuver and in a manual manner for a second portion of the maneuver, and wherein the second portion is different from the first portion.
42. The method of any one of claims 32-41, further comprising: in response to detecting the event and in accordance with a determination that a fifth set of one or more criteria is satisfied, wherein the fifth set of one or more criteria is different from the first set of one or more criteria, configuring one or more angles of one or more movement components, wherein the first set of one or more criteria includes a criterion that is satisfied in accordance with a determination that the computer system is in a first direction relative to the target location when detecting the event, wherein the fifth set of one or more criteria includes a criterion that is satisfied in accordance with a determination that the computer system is in a second direction relative to the target location when detecting the event, wherein the second direction is different from the first direction, and wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the fifth set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in a manual manner; and an angle of the second movement component is configured to be controlled in an automatic manner.
43. The method of any one of claims 32-42, further comprising: after detecting the event and while navigating to the target location, detecting misalignment of the second movement component relative to the target location; and in response to detecting misalignment of the second movement component relative to the target location, providing, via one or more output devices in communication with the computer system, feedback with respect to a current angle of the second movement component.
44. The method of any one of claims 32-43, further comprising: while an angle of the first movement component is configured to be controlled in an automatic manner and before reaching the target location, detecting, via one or more input devices in communication with the computer system, a second input; and in response to detecting the second input, configuring an angle of the first movement component to be controlled in a manual manner.
45. The method of any one of claims 32-44, further comprising: while an angle of the first movement component is configured to be controlled in an automatic manner and before reaching the target location, detecting, via one or more input devices in communication with the computer system, an object; and in response to detecting the object, configuring an angle of the first movement component to be controlled in an automatic manner using a first path, wherein, before detecting the object, configuring the one or more angles of the one or more movement components in response to detecting the event includes configuring an angle of the first movement component to be controlled in an automatic manner using a second path different from the first path.
46. The method of any one of claims 32-45, wherein a computer-generated path to the target location is generated based on data captured by a different computer system separate from the computer system.
47. The method of any one of claims 32-46, further comprising: after configuring the one or more angles of the one or more movement components in response to detecting the event and in conjunction with configuring an angle of the first movement component to be controlled in an automatic manner, causing the computer system to accelerate or decelerate.
48. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component, the one or more programs including instructions for performing the method of any one of claims 32-47.
49. A computer system that is in communication with a first movement component and a second movement component different from the first movement component, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 32-47.
50. A computer system that is in communication with a first movement component and a second movement component different from the first movement component, comprising: means for performing the method of any one of claims 32-47.
51. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component, the one or more programs including instructions for performing the method of any one of claims 32-47.
52. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component, the one or more programs including instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
53. A computer system that is in communication with a first movement component and a second movement component different from the first movement component, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
54. A computer system that is in communication with a first movement component and a second movement component different from the first movement component, comprising: means, while detecting a target location in a physical environment, for detecting an event with respect to the target location; and means, responsive to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, for configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
55. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component, the one or more programs including instructions for: while detecting a target location in a physical environment, detecting an event with respect to the target location; and in response to detecting the event and in accordance with a determination that a first set of one or more criteria is satisfied, configuring one or more angles of one or more movement components, wherein, after configuring the one or more angles of the one or more movement components in response to detecting the event and in accordance with the determination that the first set of one or more criteria is satisfied: an angle of the first movement component is configured to be controlled in an automatic manner; and an angle of the second movement component is configured to be controlled in a manual manner different from the automatic manner.
56. A method, comprising: at a computer system that is in communication with a first movement component and a second movement component different from the first movement component: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
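Illustrative note (not part of the claims): claim 56 branches on the operating mode, automatically modifying only the first movement component in one mode, both components in a second, and neither in a third. The Swift sketch below mirrors that three-way dispatch; the mode names and component type are hypothetical placeholders, not the claimed implementation.

    // Hypothetical names throughout; a sketch of claim 56's three-way branch.
    enum NavigationMode { case assistOne, assistBoth, fullyManual }

    struct MovementComponent {
        let name: String
        func applyAutomaticAdjustment() { print("\(name) adjusted automatically") }
    }

    func update(mode: NavigationMode, first: MovementComponent, second: MovementComponent) {
        switch mode {
        case .assistOne:                       // first mode: modify first component only
            first.applyAutomaticAdjustment()
        case .assistBoth:                      // second mode: modify both components
            first.applyAutomaticAdjustment()
            second.applyAutomaticAdjustment()
        case .fullyManual:                     // third mode: forgo modifying either
            break
        }
    }

    let steering = MovementComponent(name: "first")
    let throttle = MovementComponent(name: "second")
    update(mode: .assistOne, first: steering, second: throttle)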
57. The method of claim 56, further comprising: while the computer system is operating in the first mode and while navigating to the target location, detecting a first event; in response to detecting the first event: automatically modifying the second movement component; or forgoing automatically modifying the first movement component; while the computer system is operating in the second mode and while navigating to the target location, detecting a second event; in response to detecting the second event: forgoing automatically modifying the first movement component; or forgoing automatically modifying the second movement component; while the computer system is operating in the third mode and while detecting the target location in the physical environment, detecting a third event; and in response to detecting the third event: automatically modifying the first movement component; or automatically modifying the second movement component.
58. The method of any one of claims 56-57, wherein automatically modifying the first movement component includes automatically modifying an angle or a speed of the first movement component, and wherein automatically modifying the second movement component includes automatically modifying an angle or a speed of the second movement component.
59. The method of any one of claims 56-58, wherein the computer system operates in the first mode in accordance with a determination that the target location is a first type, wherein the computer system operates in the second mode in accordance with a determination that the target location is a second type different from the first type, and wherein the computer system operates in the third mode in accordance with a determination that the target location is a third type different from the first type and the second type.
60. The method of any one of claims 56-59, further comprising: before automatically modifying the first movement component or the second movement component, detecting, via one or more input devices in communication with the computer system, an input corresponding to selection of a respective mode to operate the computer system; and in response to detecting the input corresponding to selection of the respective mode to operate the computer system: in accordance with a determination that the respective mode is the first mode, operating the computer system in the first mode; and in accordance with a determination that the respective mode is the second mode, operating the computer system in the second mode.
61. The method of claim 60, wherein the input corresponding to selection of the respective mode to operate the computer system includes an input corresponding to an angle of the first movement component or the second movement component.
62. The method of any one of claims 56-61, further comprising: while detecting the target location in the physical environment, while navigating to the target location, while the computer system is operating in the first mode, and after automatically modifying the first movement component, detecting an event; and in response to detecting the event: forgoing automatically modifying the first movement component; and automatically modifying the second movement component.
63. The method of any one of claims 56-62, wherein a computer-generated path to the target location is generated based on data captured by a different computer system separate from the computer system.
64. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component, the one or more programs including instructions for performing the method of any one of claims 56-63.
65. A computer system that is in communication with a first movement component and a second movement component different from the first movement component, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 56-63.
66. A computer system that is in communication with a first movement component and a second movement component different from the first movement component, comprising: means for performing the method of any one of claims 56-63.
67. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component, the one or more programs including instructions for performing the method of any one of claims 56-63.
68. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component, the one or more programs including instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
69. A computer system that is in communication with a first movement component and a second movement component different from the first movement component, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
70. A computer system that is in communication with a first movement component and a second movement component different from the first movement component, comprising: means for detecting a target location in a physical environment; and means, while detecting the target location in the physical environment, for: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and
in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
71. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with a first movement component and a second movement component different from the first movement component, the one or more programs including instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a first mode: automatically modifying the first movement component; and forgoing automatically modifying the second movement component; in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a second mode different from the first mode, automatically modifying the first movement component and the second movement component, wherein the second set of one or more criteria is different from the first set of one or more criteria; and in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the computer system is operating in a third mode different from the second mode and the first mode, forgoing automatically modifying the first movement component and the second movement component, wherein the third set of one or more criteria is different from the first set of one or more criteria and the second set of one or more criteria.
72. A method, comprising: at a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
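Illustrative note (not part of the claims): claim 72 selects between two different feedbacks on the input component depending on the system's orientation with respect to the target location. A minimal Swift sketch follows; the heading-error representation and dead band are invented for illustration only.

    // Hypothetical sketch: pick feedback from the signed heading error between
    // the system's orientation and the bearing to the target (claim 72).
    enum InputFeedback { case rotateLeft(degrees: Double), rotateRight(degrees: Double), none }

    func feedback(forHeadingErrorDegrees error: Double, deadBand: Double = 1.0) -> InputFeedback {
        if error > deadBand { return .rotateRight(degrees: error) }    // first orientation
        if error < -deadBand { return .rotateLeft(degrees: -error) }   // second orientation
        return .none
    }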
73. The method of claim 72, wherein providing the first feedback includes rotating the input component, and wherein providing the second feedback includes rotating the input component.
74. The method of any one of claims 72-73, wherein providing the first feedback includes adding or reducing an amount of resistance to movement of the input component.
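Illustrative note (not part of the claims): claim 74's feedback adds or reduces an amount of resistance to movement of the input component. One invented way to scale such resistance, shown only as a sketch; the constants are placeholders, not values from the application.

    // Hypothetical sketch: resistance grows with deviation of the input
    // component from a suggested angle, saturating at a maximum torque.
    func resistanceTorque(currentAngle: Double, suggestedAngle: Double,
                          maxTorque: Double = 2.0, fullScaleDegrees: Double = 45.0) -> Double {
        let deviation = abs(currentAngle - suggestedAngle)
        return min(maxTorque, deviation / fullScaleDegrees * maxTorque)
    }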
75. The method of any one of claims 72-74, further comprising: while detecting the target location in the physical environment: in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied
when the computer system is at a first location with respect to the target location, providing third feedback with respect to the input component; and in accordance with a determination that a fourth set of one or more criteria is satisfied, wherein the fourth set of one or more criteria includes a criterion that is satisfied when the computer system is at a second location with respect to the target location, providing fourth feedback with respect to the input component, wherein the fourth set of one or more criteria is different from the third set of one or more criteria, wherein the second location is different from the first location, and wherein the fourth feedback is different from the third feedback.
76. The method of any one of claims 72-75, further comprising: while detecting the target location in the physical environment: in accordance with a determination that a fifth set of one or more criteria is satisfied, wherein the fifth set of one or more criteria includes a criterion that is satisfied in accordance with detection of an object external to the computer system, providing fifth feedback with respect to the input component; and in accordance with a determination that the fifth set of one or more criteria is not satisfied, forgoing providing the fifth feedback with respect to the input component.
77. The method of any one of claims 72-76, further comprising: while detecting the target location in the physical environment: in accordance with a determination that a sixth set of one or more criteria is satisfied, wherein the sixth set of one or more criteria includes a criterion that is satisfied when the computer system is a first distance from the target location, providing sixth feedback with respect to the input component; and in accordance with a determination that a seventh set of one or more criteria is satisfied, wherein the seventh set of one or more criteria includes a criterion that is satisfied when the computer system is a second distance from the target location, providing seventh feedback with respect to the input component, wherein the seventh set of one or more criteria is different from the sixth set of one or more criteria, wherein the second distance is different from the first distance, and wherein the seventh feedback is different from the sixth feedback.
78. The method of any one of claims 72-77, further comprising:
while detecting the target location in the physical environment, performing a movement maneuver with respect to the target location, wherein performing the movement maneuver includes: in accordance with a determination that a current portion of the movement maneuver is a first portion, providing eighth feedback with respect to the input component; and in accordance with a determination that the current portion of the movement maneuver is a second portion different from the first portion, providing ninth feedback with respect to the input component, wherein the ninth feedback is different from the eighth feedback.
79. The method of claim 78, wherein the ninth feedback is a different type of feedback than the eighth feedback.
80. The method of any one of claims 72-79, wherein providing the first feedback includes displaying a visual cue, providing an auditory cue, or providing haptic feedback.
81. The method of any one of claims 72-80, wherein a computer-generated path to the target location is generated based on data captured by a different computer system separate from the computer system.
82. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, the one or more programs including instructions for performing the method of any one of claims 72-81.
83. A computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 72-81.
84. A computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, comprising: means for performing the method of any one of claims 72-81.
85. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, the one or more programs including instructions for performing the method of any one of claims 72-81.
86. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, the one or more programs including instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
87. A computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, comprising:
one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
88. A computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, comprising: means for detecting a target location in a physical environment; and means, while detecting the target location in the physical environment, for: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
89. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with an input component and an output component, wherein the input component is configured to control an orientation of the output component, the one or more programs including instructions for: detecting a target location in a physical environment; and while detecting the target location in the physical environment: in accordance with a determination that a first set of one or more criteria is satisfied, wherein the first set of one or more criteria includes a criterion that is satisfied when the computer system is in a first orientation with respect to the target location, providing first feedback with respect to the input component; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the computer system is in a second orientation with respect to the target location, providing second feedback with respect to the input component, wherein the second set of one or more criteria is different from the first set of one or more criteria, wherein the second orientation is different from the first orientation, and wherein the second feedback is different from the first feedback.
90. A method, comprising: at a computer system in communication with an input component: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
91. The method of claim 90, wherein the process to select a respective target location includes: providing a first control to maintain the first target location; and providing a second control to select a new target location different from the first target location, wherein the second control is different from the first control.
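Illustrative note (not part of the claims): claims 90-91 describe detecting an error while navigating to a selected target and then offering one control to keep the first target location and another to select a new one. A hypothetical Swift sketch of that dispatch; the names and printed actions are invented stand-ins.

    // Hypothetical names; sketches the two controls of claim 91.
    enum TargetChoice { case keepCurrentTarget, selectNewTarget }

    func resolveNavigationError(choice: TargetChoice) {
        switch choice {
        case .keepCurrentTarget:
            print("computing a new path to the same target")        // cf. claim 101
        case .selectNewTarget:
            print("presenting an indication of a new target")       // cf. claim 92
        }
    }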
92. The method of claim 91, wherein the computer system is in communication with a display generation component, and wherein providing the second control includes displaying, via the display generation component, an indication corresponding to the new target location.
93. The method of any one of claims 90-92, wherein the computer system is in communication with a movement component, and wherein navigating to the first target location includes automatically causing, by the computer system, the movement component to change operation.
94. The method of any one of claims 90-92, wherein navigating to the first target location is manual.
95. The method of any one of claims 90-94, wherein detecting the error includes detecting that the computer system is at least a predefined distance from the first target location.
96. The method of any one of claims 90-95, wherein detecting the error includes detecting that a current orientation of the computer system is a first orientation with respect to the first target location.
97. The method of any one of claims 90-96, wherein the computer system is in communication with an output component, the method further comprising: after initiating the process to select a respective target location, providing, via the output component, a third control to select a new target location different from the first target location, wherein the new target location is the same type of location as the first target location.
98. The method of any one of claims 90-97, wherein the computer system is in communication with a second display generation component, the method further comprising: after initiating the process to select a respective target location, displaying, via the second display generation component, a fourth control to select the respective target location.
99. The method of claim 98, further comprising: while displaying the fourth control to select the respective target location, detecting, via a second input component in communication with the computer system, a verbal input corresponding to selection of the fourth control; and in response to detecting the verbal input corresponding to selection of the fourth control, initiating a process to navigate to the respective target location.
100. The method of any one of claims 90-99, wherein the computer system is in communication with an audio generation component, the method further comprising: after initiating the process to select a respective target location, outputting, via the audio generation component, an auditory indication of a fifth control to select the respective target location.
101. The method of any one of claims 90-100, wherein the computer system is in communication with an output component and a second input component, the method further comprising: after initiating the process to select a respective target location, detecting, via the second input component, an input corresponding to selection of a sixth control to maintain the first target location; and in response to detecting the input corresponding to the selection of the sixth control to maintain the first target location, outputting, via the output component, an indication of a new path to the first target location.
102. The method of claim 101, wherein the output component includes a display generation component, and wherein outputting, via the output component, the indication of the new path to the first target location includes displaying, via the display generation component, the indication of the new path to the first target location.
103. The method of any one of claims 90-102, wherein the computer system is in communication with a second input component, the method further comprising: after initiating the process to select a respective target location, detecting, via the second input component, an input corresponding to selection of a control to change the first target location to a second target location different from the first target location; and in response to detecting the input corresponding to the selection of the control to change the first target location to the second target location, navigating at least partially automatically to the second target location.
104. The method of any one of claims 90-103, wherein a computer-generated path to the respective target location is generated based on data captured by a different computer system separate from the computer system.
105. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system in communication with an input component, the one or more programs including instructions for performing the method of any one of claims 90-104.
106. A computer system in communication with an input component, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 90-104.
107. A computer system in communication with an input component, comprising: means for performing the method of any one of claims 90-104.
108. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system in communication with an input component, the one or more programs including instructions for performing the method of any one of claims 90-104.
109. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system in communication with an input component, the one or more programs including instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
110. A computer system in communication with an input component, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
111. A computer system in communication with an input component, comprising: means, after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, for detecting an error; and means, responsive to detecting the error, for initiating a process to select a respective target location.
112. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system in communication with an input component, the one or more programs including instructions for: after detecting, via the input component, a first set of one or more inputs corresponding to selection of a first target location and while navigating to the first target location, detecting an error; and in response to detecting the error, initiating a process to select a respective target location.
113. A method, comprising: at a computer system that is in communication with one or more output components: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and
while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
114. The method of claim 113, further comprising: while navigating to the first destination: in accordance with the determination that the intended traversal area includes the second quality of map data, performing the upcoming maneuver without receiving input with respect to the upcoming maneuver.
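Illustrative note (not part of the claims): claims 113-114 gate operator input on the quality of map data for the intended traversal area, asking for input only when the map data requires it and otherwise performing the maneuver without input. A minimal Swift sketch of that gate, with invented quality levels and closure parameters:

    // Hypothetical sketch: request input only for the map-data quality that
    // requires it (claim 113); otherwise perform the maneuver without input
    // (claim 114).
    enum MapDataQuality { case requiresInput, sufficientForAutomation }

    func prepareManeuver(quality: MapDataQuality,
                         requestInput: () -> Void,
                         performAutomatically: () -> Void) {
        switch quality {
        case .requiresInput:            requestInput()
        case .sufficientForAutomation:  performAutomatically()
        }
    }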
115. The method of any one of claims 113-114, further comprising: while navigating to the first destination: in accordance with the determination that the intended traversal area includes the first quality of map data and after a computer-generated path corresponding to the upcoming maneuver is displayed, receiving input corresponding to approval of the computer-generated path; and in response to receiving the input, performing the upcoming maneuver according to the computer-generated path.
116. The method of claim 115, wherein the computer-generated path is generated based on data captured by one or more sensors that are in communication with the computer system.
117. The method of any one of claims 115-116, wherein the computer-generated path is generated based on data captured by a different computer system separate from the computer system.
118. The method of any one of claims 113-117, further comprising: while navigating to the first destination:
in accordance with a determination that the intended traversal area includes a third quality of map data, receiving input corresponding to a path with respect to the intended traversal area; and after receiving the input corresponding to the path and in accordance with a determination that the path meets a first set of criteria, navigating via the path.
119. The method of claim 118, further comprising: while navigating to the first destination: in accordance with the determination that the intended traversal area includes the third quality of map data and after receiving the input corresponding to the path: in accordance with a determination that the path does not meet the first set of criteria, forgoing navigating via the path, wherein the determination that the path does not meet the first set of criteria is based on data detected by one or more sensors in communication with the computer system.
120. The method of any one of claims 113-119, further comprising: while navigating to the first destination: in accordance with the determination that the intended traversal area includes the second quality of map data and after performing the upcoming maneuver without receiving input with respect to the upcoming maneuver: in accordance with a determination that a second intended traversal area includes the first quality of map data, requesting, via the one or more output components, input with respect to a second upcoming maneuver different from the upcoming maneuver.
121. The method of claim 120, wherein a first path corresponding to the upcoming maneuver has a first visual appearance and a second path corresponding to the second upcoming maneuver has a second visual appearance different from the first visual appearance, and wherein the first visual appearance indicates a first respective quality of map data and the second visual appearance indicates a second respective quality of map data different from the first respective quality of map data.
122. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in
communication with one or more output components, the one or more programs including instructions for performing the method of any one of claims 113-121.
123. A computer system that is in communication with one or more output components, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 113-121.
124. A computer system that is in communication with one or more output components, comprising: means for performing the method of any one of claims 113-121.
125. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components, the one or more programs including instructions for performing the method of any one of claims 113-121.
126. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components, the one or more programs including instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
127. A computer system that is in communication with one or more output components, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
128. A computer system that is in communication with one or more output components, comprising: means for receiving a request to navigate to a first destination; means for, in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: means for, in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and means for, in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
129. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components, the one or more programs including instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and
while navigating to the first destination: in accordance with a determination that an intended traversal area includes a first quality of map data, requesting, via the one or more output components, input with respect to an upcoming maneuver; and in accordance with a determination that the intended traversal area includes a second quality of map data different from the first quality of map data, forgoing requesting input with respect to the upcoming maneuver.
130. A method, comprising: at a computer system that is in communication with one or more output components: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of one or more criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
131. The method of claim 130, further comprising: after requesting input with respect to the upcoming maneuver, receiving input corresponding to a first path in a first representation of the intended traversal area.
132. The method of any one of claims 130-131, further comprising: after requesting input with respect to the upcoming maneuver, receiving input corresponding to one or more points in a second representation of the intended traversal area, wherein a second path is generated based on the one or more points.
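Illustrative note (not part of the claims): claim 132 generates a path from one or more points received in a representation of the intended traversal area. A hypothetical Swift sketch that densifies user-selected points into a polyline by linear interpolation; a real system would presumably smooth and validate such a path (cf. claims 118-119), so this is only the simplest stand-in.

    // Hypothetical sketch: build a sampled path through selected points.
    struct Point { var x: Double; var y: Double }

    func path(through points: [Point], samplesPerSegment: Int = 10) -> [Point] {
        guard points.count > 1, samplesPerSegment > 0 else { return points }
        var result: [Point] = []
        for (a, b) in zip(points, points.dropFirst()) {
            for i in 0..<samplesPerSegment {
                let t = Double(i) / Double(samplesPerSegment)
                result.append(Point(x: a.x + t * (b.x - a.x), y: a.y + t * (b.y - a.y)))
            }
        }
        result.append(points.last!)
        return result
    }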
133. The method of any one of claims 130-132, further comprising: after requesting input with respect to the upcoming maneuver, receiving a voice request corresponding to the intended traversal area.
134. The method of any one of claims 130-133, wherein the navigation to the first destination is initiated along a third path, and wherein a portion of the third path goes through the intended traversal area.
135. The method of claim 134, wherein the navigation to the first destination is initiated along a fourth path, and wherein the fourth path includes a respective portion that does not require an input to navigate through the respective portion.
136. The method of any one of claims 130-135, wherein the set of one or more criteria includes a criterion that is met when a determination is made that the computer system is within a first threshold distance from the intended traversal area.
137. The method of any one of claims 130-136, wherein the set of one or more criteria includes a criterion that is met when a determination is made that the computer system is not moving and within a second threshold distance from the intended traversal area.
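Illustrative note (not part of the claims): claims 136-137 condition the input request on distance thresholds, including one that applies only while the system is not moving. A hedged Swift sketch; both threshold values and the stationary test are invented placeholders.

    // Hypothetical sketch of the distance criteria in claims 136-137.
    func shouldRequestManeuverInput(distanceToArea: Double, speed: Double,
                                    firstThreshold: Double = 50.0,
                                    secondThreshold: Double = 150.0) -> Bool {
        let withinFirstThreshold = distanceToArea <= firstThreshold                 // claim 136
        let stoppedWithinSecond = speed < 0.1 && distanceToArea <= secondThreshold  // claim 137
        return withinFirstThreshold || stoppedWithinSecond
    }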
138. The method of any one of claims 130-137, further comprising: after requesting input with respect to the upcoming maneuver, receiving a set of one or more inputs including one or more inputs with respect to the upcoming maneuver; and in response to receiving the set of one or more inputs including the one or more inputs with respect to the upcoming maneuver: in accordance with a determination that a path resulting from the set of one or more inputs does not meet a first set of criteria, requesting, via the one or more output components, different input with respect to the upcoming maneuver.
139. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components, the one or more programs including instructions for performing the method of any one of claims 130-138.
140. A computer system that is in communication with one or more output components, comprising: one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 130-138.
141. A computer system that is in communication with one or more output components, comprising: means for performing the method of any one of claims 130-138.
142. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components, the one or more programs including instructions for performing the method of any one of claims 130-138.
143. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components, the one or more programs including instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of one or more criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
144. A computer system that is in communication with one or more output components, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and
while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of one or more criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
145. A computer system that is in communication with one or more output components, comprising: means for receiving a request to navigate to a first destination; means for, in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: means for, in accordance with a determination that a set of one or more criteria is met, wherein the set of one or more criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
146. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more output components, the one or more programs including instructions for: receiving a request to navigate to a first destination; in response to receiving the request, initiating navigation to the first destination; and while navigating to the first destination: in accordance with a determination that a set of one or more criteria is met, wherein the set of one or more criteria includes a criterion that is met when a determination is made that an intended traversal area includes inadequate map data to determine an upcoming maneuver, requesting, via the one or more output components, input with respect to the upcoming maneuver.
PCT/US2024/049121 2023-09-30 2024-09-27 Techniques for configuring navigation of a device Pending WO2025072869A1 (en)

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
US202363541821P 2023-09-30 2023-09-30
US202363541810P 2023-09-30 2023-09-30
US202363587108P 2023-09-30 2023-09-30
US63/541,810 2023-09-30
US63/541,821 2023-09-30
US63/587,108 2023-09-30
US18/896,677 US20250109965A1 (en) 2023-09-30 2024-09-25 User input for interacting with different map data
US18/896,455 2024-09-25
US18/896,677 2024-09-25
US18/896,455 US20250110633A1 (en) 2023-09-30 2024-09-25 Techniques for configuring navigation of a device
US18/896,680 2024-09-25
US18/896,680 US20250109945A1 (en) 2023-09-30 2024-09-25 Techniques and user interfaces for providing navigation assistance

Publications (2)

Publication Number Publication Date
WO2025072869A1 true WO2025072869A1 (en) 2025-04-03
WO2025072869A4 WO2025072869A4 (en) 2025-05-22

Family

ID=93119586

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/049121 Pending WO2025072869A1 (en) 2023-09-30 2024-09-27 Techniques for configuring navigation of a device

Country Status (1)

Country Link
WO (1) WO2025072869A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120143496A1 (en) * 2008-12-31 2012-06-07 Cellco Partnership D/B/A Verizon Wireless Enabling a first mobile device to navigate to a location associated with a second mobile device
US20140309924A1 (en) * 2013-04-16 2014-10-16 Apple Inc. Seamless transition from outdoor to indoor mapping
US20210284159A1 (en) * 2020-03-12 2021-09-16 Honda Motor Co., Ltd. Information processing method, and vehicle following travel system
US20230174062A1 (en) * 2020-05-05 2023-06-08 Jaguar Land Rover Limited Automatic speed control for a vehicle

Also Published As

Publication number Publication date
WO2025072869A4 (en) 2025-05-22

Similar Documents

Publication Publication Date Title
US20240068835A1 (en) Systems and methods for generating an interactive user interface
US20250068166A1 (en) Autonomous and user controlled vehicle summon to a target
KR102740742B1 (en) Artificial intelligence apparatus and method for determining inattention of driver
CN107111332B (en) Use sound to facilitate interaction between users and their environment
JP6188795B2 (en) On-vehicle device, server device, and running state control method
KR102811794B1 (en) An artificial intelligence apparatus for managing operation of artificial intelligence system and method for the same
US20210072831A1 (en) Systems and methods for gaze to confirm gesture commands in a vehicle
JPWO2019124158A1 (en) Information processing equipment, information processing methods, programs, display systems, and moving objects
KR102741059B1 (en) An artificial intelligence apparatus for determining path of user and method for the same
KR20190098102A (en) Artificial intelligence device for controlling external device
KR20210030155A (en) Robot and controlling method thereof
KR102331672B1 (en) Artificial intelligence device and method for determining user's location
US20250110623A1 (en) Techniques for controlling a device
US20250110479A1 (en) Techniques for controlling an area
US12524069B2 (en) Techniques for motion compensation
WO2025072869A1 (en) Techniques for configuring navigation of a device
US20250109945A1 (en) Techniques and user interfaces for providing navigation assistance
KR102777971B1 (en) Control system and method using gesture in vehicle
WO2025072896A1 (en) Systems and methods for navigating paths
US20250110634A1 (en) Techniques for providing controls
US20250110630A1 (en) User interfaces and techniques for displaying information
WO2025072856A1 (en) Techniques for controlling an area
WO2025072851A1 (en) Techniques for displaying content with a live video feed
WO2025072868A1 (en) Techniques for providing controls
WO2025072854A9 (en) Techniques for motion compensation

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24790773

Country of ref document: EP

Kind code of ref document: A1