AU2024200149B2 - Devices, methods, and graphical user interfaces for system-wide behavior for 3D models - Google Patents
- Publication number
- AU2024200149B2
- Authority
- AU
- Australia
- Prior art keywords
- user interface
- representation
- input
- cameras
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1637—Details related to the display arrangement, including those related to the mounting of the display in the housing
- G06F1/1643—Details related to the display arrangement, including those related to the mounting of the display in the housing the display being associated to a digitizer, e.g. laptops that can be used as penpads
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1686—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1694—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0485—Scrolling or panning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0486—Drag-and-drop
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/014—Force feedback applied to GUI
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04806—Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04808—Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/04—Architectural design, interior design
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Architecture (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
- Telephone Function (AREA)
- Controls And Circuits For Display Device (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Position Input By Displaying (AREA)
Abstract
A computer system having a display generation component and one or more input devices displays a representation of a virtual three-dimensional object in a user interface region. In response to an input for rotating the object, when the input corresponds to a request to rotate the object about a first axis, the object is rotated by an amount that is determined based on a magnitude of the input and is constrained by a limit on the movement restricting rotation by more than a threshold amount. When the input corresponds to a request to rotate the object about a second axis, the object is rotated relative to the second axis by an amount that is determined based on a magnitude of the input, wherein, for an input with a magnitude above a respective threshold, the device rotates the object by more than the threshold amount of rotation.
Description
Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models
[0001] This relates generally to electronic devices that display virtual objects, including but not limited to electronic devices that display virtual objects in a variety of contexts.
[0002] This application is related to International Application Number PCT/US2019/014791 (International Publication Number WO 2019/147699) filed on 23 January 2019, the contents of which are incorporated herein by reference in their entirety.
[0003] The development of computer systems for augmented reality has increased significantly in recent years. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices, such as touch-sensitive surfaces, for computer systems and other electronic computing devices are used to interact with virtual/augmented reality environments. Example touch-sensitive surfaces include touchpads, touch-sensitive remote controls, and touch-screen displays. Such surfaces are used to manipulate user interfaces and objects therein on a display. Example user interface objects include digital images, video, text, icons, and control elements such as buttons and other graphics.
[0004] But methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome, inefficient, and limited. For example, using a sequence of inputs to orient and position a virtual object in an augmented reality environment is tedious, creates a significant cognitive burden on a user, and detracts from the experience with the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy. This latter consideration is particularly important in battery-operated devices.
SUMMARY
[0005] Accordingly, there is a need for computer systems with improved methods and interfaces for interacting with virtual objects. Such methods and interfaces optionally complement or replace conventional methods for interacting with virtual objects. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user and produce a more efficient human-machine interface. For battery-operated devices, such methods and interfaces conserve power and increase the time between battery charges.
[0006] The above deficiencies and other problems associated with interfaces for interacting with virtual objects (e.g., user interfaces for augmented reality (AR) and related non-AR interfaces) may be reduced or eliminated by the disclosed computer systems. In some embodiments, the computer system includes a desktop computer. In some embodiments, the computer system is portable (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system includes a personal electronic device (e.g., a wearable electronic device, such as a watch). In some embodiments, the computer system has (and/or is in communication with) a touchpad. In some embodiments, the computer system has (and/or is in communication with) a touch-sensitive display (also known as a "touch screen" or "touch-screen display"). In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI in part through stylus and/or finger contacts and gestures on the touch-sensitive surface. In some embodiments, the functions optionally include game playing, image editing, drawing, presenting, word processing, spreadsheet making, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital videoing, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are, optionally, included in a non-transitory computer readable storage medium or other computer program product configured for execution by one or more processors.
[0007] According to a first aspect of the invention, there is provided a method, including: at a computer system having a display generation component, one or more input devices, and one or more cameras: displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras; detecting movement of the computer system that adjusts the field of view of the one or more cameras; and in response to detecting movement of the computer system that adjusts the field of view of the one or more cameras: adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted, and, in accordance with a determination that the movement of the computer system causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generating a first alert.
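In concrete terms, the alert branch of this aspect reduces to a small screen-space visibility check. The following is a minimal sketch, assuming the object's on-screen bounds and the displayed viewport are available as rectangles; the Rect and OutOfViewMonitor names, the fractional threshold, and the re-arming behavior are illustrative, not part of the claim:

```swift
// Hedged sketch (not the patent's implementation): estimate how much of the
// plane-anchored object's on-screen bounds remains inside the displayed camera
// viewport, and raise the "first alert" once more than a threshold fraction
// has moved off screen.
struct Rect {
    var x, y, width, height: Double
    var area: Double { max(width, 0) * max(height, 0) }

    // Overlap with another rectangle; zero-area when the rects are disjoint.
    func intersection(_ o: Rect) -> Rect {
        let x0 = max(x, o.x), y0 = max(y, o.y)
        let x1 = min(x + width, o.x + o.width), y1 = min(y + height, o.y + o.height)
        return Rect(x: x0, y: y0, width: x1 - x0, height: y1 - y0)
    }
}

struct OutOfViewMonitor {
    let threshold: Double       // e.g. 0.5 == alert when half the object is off screen
    private var alerted = false

    mutating func update(objectBounds: Rect, viewport: Rect, onAlert: () -> Void) {
        let visible = objectBounds.intersection(viewport).area
                      / max(objectBounds.area, .ulpOfOne)
        if 1 - visible > threshold {
            if !alerted { alerted = true; onAlert() }   // generate the first alert once
        } else {
            alerted = false                             // re-arm when back in view
        }
    }
}
```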
[0008] According to a second aspect of the invention, there is provided a computer system, including: a display generation component; one or more input devices; one or more cameras; one or more processors; and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras; detecting movement of the computer system that adjusts the field of view of the one or more cameras; and in response to detecting movement of the computer system that adjusts the field of view of the one or more cameras: adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted, and, in accordance with a determination that the movement of the computer system causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generating a first alert.
[0009] According to a third aspect of the invention, there is provided a computer program including instructions that, when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to: display, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras; detect movement of the computer system that adjusts the field of view of the one or more cameras; and in response to detecting movement of the computer system that adjusts the field of view of the one or more cameras: adjust display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted, and, in accordance with a determination that the movement of the computer system causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generate a first alert.
[0010] In accordance with some embodiments, a method is performed at a computer system having a display, a touch-sensitive surface, and one or more cameras. The method includes displaying a representation of a virtual object in a first user interface region on the display. The method also includes, while displaying the first representation of the virtual object in the first user interface region on the display, detecting a first input by a contact at a location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display. The method also includes, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact meets first criteria: displaying a second user interface region on the display, including replacing display of at least a portion of the first user interface region with the representation of a field of view of the one or more cameras, and continuously displaying the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region.
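The claim leaves the "first criteria" abstract. As a hedged illustration only, the sketch below treats them as a long-press-style test (a minimum hold time with bounded movement of the contact); the type names and thresholds are invented:

```swift
import Foundation

// Hedged sketch of one plausible "first criteria" gate: the contact must be
// held for at least minHold seconds while drifting no more than maxDrift
// points from its starting location.
struct ContactSample {
    let time: TimeInterval   // timestamp of the touch sample
    let x: Double, y: Double // on-screen location in points
}

func meetsFirstCriteria(_ samples: [ContactSample],
                        minHold: TimeInterval = 0.5,
                        maxDrift: Double = 10) -> Bool {
    guard let first = samples.first, let last = samples.last else { return false }
    let held = last.time - first.time
    let drift = hypot(last.x - first.x, last.y - first.y)
    return held >= minHold && drift <= maxDrift
}
```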
[0011] In accordance with some embodiments, a method is performed at a computer system having a display, a touch-sensitive surface, and one or more cameras. The method includes displaying a first representation of a virtual object in a first user interface region on the display. The method also includes, while displaying the first representation of the virtual object in the first user interface region on the display, detecting a first input by a first contact at a location on the touch-sensitive surface that corresponds to the first representation of the virtual object on the display. The method also includes, in response to detecting the first input by the first contact and in accordance with a determination that the input by the first contact meets first criteria, displaying the representation of the virtual object in a second user interface region that is different from the first user interface region. The method also includes, while displaying the second representation of the virtual object in the second user interface region, detecting a second input, and, in response to detecting the second input, in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region, changing a display property of the second representation of the virtual object within the second user interface region based on the second input; and, in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, displaying a third representation of the virtual object with a representation of a field of view of the one or more cameras.
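A sketch of how the second-input branch might be routed is shown below; the concrete gestures (pinch, rotate, a dedicated AR button) and the StagedObject type are assumptions, since the claim only distinguishes a manipulation request from a request to enter the augmented reality view:

```swift
// Hedged sketch of the second-input branching in a staging view.
struct StagedObject {
    var displayScale = 1.0   // a display property changed by manipulation
    var yawDegrees = 0.0     // another display property changed by manipulation
    func presentInCameraView() { print("switching to the camera-backed AR region…") }
}

enum SecondInput {
    case pinch(scale: Double)     // manipulate the object in the second region
    case rotate(degrees: Double)  // manipulate the object in the second region
    case tapARButton              // request the augmented reality environment
}

func handle(_ input: SecondInput, object: inout StagedObject) {
    switch input {
    case .pinch(let s):  object.displayScale *= s       // change a display property
    case .rotate(let d): object.yawDegrees += d         // change a display property
    case .tapARButton:   object.presentInCameraView()   // show the third representation
    }
}
```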
[0012] In accordance with some embodiments, a method is performed at a computer system having a display and a touch-sensitive surface. The method includes receiving a request to display a first user interface that includes a first item and, in response to the request to display the first user interface, displaying the first user interface with a representation of the first item. The method also includes, in accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, displaying a representation of the first item with a visual indication to indicate that the first item corresponds to a first respective virtual three-dimensional object. The method also includes, in accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, displaying the representation of the first item without the visual indication. The method also includes, after displaying the representation of the first item, receiving a request to display a second user interface that includes a second item. The method also includes, in response to the request to display the second user interface, displaying the second user interface with a representation of the second item. The method also includes, in accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, displaying a representation of the second item with the visual indication to indicate that the second item corresponds to a second respective virtual three-dimensional object. The method also includes, in accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, displaying the representation of the second item without the visual indication.
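One plausible reading of the "visual indication" is a badge driven by whether a 3D model is attached to the item, as in this hedged sketch (the Item shape and the .usdz convention are assumptions, not the patent's mechanism):

```swift
import Foundation

// Hedged sketch: an item carries an optional 3D model; the same check drives
// the indication in both the first and the second user interface.
struct Item {
    let title: String
    let modelURL: URL?   // e.g. a .usdz file; nil for items with no 3D object
}

func displayTitle(for item: Item) -> String {
    // Show the indication only when the item corresponds to a virtual 3D object.
    item.modelURL != nil ? "[3D] \(item.title)" : item.title
}
```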
[0013] In accordance with some embodiments, a method is performed at a computer system having a display generation component, one or more input devices, and one or more cameras. The method includes receiving a request to display a virtual object in a first user interface region that includes at least a portion of a field of view of the one or more cameras. The method also includes, in response to the request to display the virtual object in the first user interface region, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras that is included in the first user interface region, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located. Displaying the representation of the virtual object includes: in accordance with a determination that object-placement criteria are not met, wherein the object-placement criteria require that a placement location for the virtual object be identified in the field of view of the one or more cameras in order for the object-placement criteria to be met, displaying the representation of the virtual object with a first set of visual properties and with a first orientation that is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and in accordance with a determination that the object-placement criteria are met, displaying the representation of the virtual object with a second set of visual properties that are distinct from the first set of visual properties and with a second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras.
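The two display states can be summarized as a pure function of whether a placement location has been identified. In the sketch below, the specific property choices (0.6 opacity while unplaced, a screen-fixed orientation) are invented; the claim only requires the two sets of visual properties and orientations to be distinct:

```swift
// Hedged sketch of the pending-vs-placed display states.
struct VisualState {
    var opacity: Double
    var orientation: String   // "screen-fixed" vs. "aligned-to-plane"
}

func visualState(placementLocationFound: Bool) -> VisualState {
    if placementLocationFound {
        // Second set: opaque, oriented to the plane detected in the camera view.
        return VisualState(opacity: 1.0, orientation: "aligned-to-plane")
    } else {
        // First set: translucent, orientation independent of what the cameras see.
        return VisualState(opacity: 0.6, orientation: "screen-fixed")
    }
}
```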
[0014] In accordance with some embodiments, a method is performed at a computer system having a display generation component, one or more input devices, one or more cameras, and one or more attitude sensors for detecting changes in attitude of the device including the one or more cameras. The method includes receiving a request to display an augmented reality view of a physical environment in a first user interface region that includes a representation of a field of view of the one or more cameras. The method also includes, in response to receiving the request to display the augmented reality view of the physical environment, displaying the representation of the field of view of the one or more cameras and, in accordance with a determination that calibration criteria are not met for the augmented reality view of the physical environment, displaying a calibration user interface object that is dynamically animated in accordance with movement of the one or more cameras in the physical environment, wherein displaying the calibration user interface object includes: while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in attitude of the one or more cameras in the physical environment; and, in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment. The method also includes, while displaying the calibration user interface object that moves on the display in accordance with the detected change in attitude of the one or more cameras in the physical environment, detecting that the calibration criteria are met. The method also includes, in response to detecting that the calibration criteria are met, ceasing to display the calibration user interface object.
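As a rough illustration of this loop, the sketch below tilts a calibration object with each detected attitude change and hides it once enough movement has accumulated; treating accumulated attitude change as calibration progress is an assumption, not the claimed criteria:

```swift
// Hedged sketch of the calibration user interface object's lifecycle.
struct CalibrationUI {
    var tiltDegrees = 0.0   // the display parameter adjusted with attitude
    var progress = 0.0      // 0…1; "calibration criteria met" at 1 (invented heuristic)
    var isVisible = true

    mutating func attitudeDidChange(byDegrees delta: Double) {
        guard isVisible else { return }
        tiltDegrees += delta                              // animate with the movement
        progress = min(1.0, progress + abs(delta) / 90.0) // accumulate parallax
        if progress >= 1.0 { isVisible = false }          // cease displaying the object
    }
}
```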
[0015] In accordance with some embodiments, a method is performed at a computer system having a display generation component and one or more input devices including a touch-sensitive surface. The method includes displaying, by the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region. The method also includes, while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, detecting a first input that corresponds to a request to rotate the virtual three-dimensional object relative to a display to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object. The method also includes, in response to detecting the first input: in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis, rotating the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input and is constrained by a limit on the movement restricting rotation of the virtual three-dimensional object by more than a threshold amount of rotation relative to the first axis; and, in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis, rotating the virtual three-dimensional object relative to the second axis by an amount that is determined based on a magnitude of the first input, wherein, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation.
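A minimal sketch of this axis-dependent constraint follows; the axis mapping, the 30-degree limit, and the names (VirtualObjectRotation, apply) are illustrative assumptions, not the claimed implementation:

```swift
import Foundation

// Illustrative sketch of axis-dependent rotation constraints.
enum RotationAxis { case first, second }

struct VirtualObjectRotation {
    var aboutFirstAxis = 0.0   // radians
    var aboutSecondAxis = 0.0  // radians
    // Threshold amount of rotation relative to the first axis (assumed value).
    let firstAxisLimit = Double.pi / 6  // +/- 30 degrees

    // `magnitude` is derived from the input (e.g., swipe distance), with a
    // sign giving the rotation direction.
    mutating func apply(inputMagnitude magnitude: Double, about axis: RotationAxis) {
        switch axis {
        case .first:
            // Rotation about the first axis is constrained by a limit.
            let proposed = aboutFirstAxis + magnitude
            aboutFirstAxis = min(max(proposed, -firstAxisLimit), firstAxisLimit)
        case .second:
            // Rotation about the second axis is not clamped: a sufficiently
            // large input rotates the object by more than the threshold amount.
            aboutSecondAxis += magnitude
        }
    }
}
```

The asymmetry mirrors the description above: input magnitude drives rotation about both axes, but only rotation about the first axis is clamped to a threshold amount of rotation.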
[0016] In accordance with some embodiments, a method is performed at a computer system having a display generation component and a touch-sensitive surface. The method includes displaying, via the display generation component, a first user interface region that includes a user interface object that is associated with a plurality of object manipulation behaviors, including a first object manipulation behavior that is performed in response to inputs that meet first gesture-recognition criteria and a second object manipulation behavior that is performed in response to inputs that meet second gesture-recognition criteria. The method also includes, while displaying the first user interface region, detecting a first portion of an input directed to the user interface object, including detecting movement of one or more contacts across the touch-sensitive surface, and while the one or more contacts are detected on the touch-sensitive surface, evaluating movement of the one or more contacts with respect to both the first gesture-recognition criteria and the second gesture-recognition criteria. The method also includes, in response to detecting the first portion of the input, updating an appearance of the user interface object based on the first portion of the input, including: in accordance with a determination that the first portion of the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria, changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input and updating the second gesture-recognition criteria by increasing a threshold for the second gesture-recognition criteria; and in accordance with a determination that the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria, changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input and updating the first gesture-recognition criteria by increasing a threshold for the first gesture-recognition criteria.
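The following sketch illustrates one way this mutual threshold escalation could work, assuming two competing movement metrics (for example, translation distance and rotation angle of the contacts) and a doubling escalation factor; all names and values are hypothetical:

```swift
import Foundation

// Hedged sketch of mutual gesture-recognition threshold escalation.
final class ManipulationGestureArbiter {
    // Movement (in points or radians) required before each behavior begins.
    private var firstThreshold = 10.0   // e.g., drag-to-translate
    private var secondThreshold = 10.0  // e.g., two-finger rotate
    private(set) var activeBehavior: String?

    // Evaluates a portion of the input against both gesture-recognition
    // criteria; whichever criteria are met first wins, and the competing
    // behavior's threshold is increased so it is harder to trigger
    // accidentally mid-gesture.
    func evaluate(firstMetric: Double, secondMetric: Double) {
        guard activeBehavior == nil else { return }
        if firstMetric >= firstThreshold {
            activeBehavior = "first"
            secondThreshold *= 2  // raise the competing threshold
        } else if secondMetric >= secondThreshold {
            activeBehavior = "second"
            firstThreshold *= 2
        }
    }
}
```

Raising the losing behavior's threshold, rather than disabling it outright, preserves the ability to chain into the second manipulation later in the same gesture while biasing recognition toward the behavior the user began first.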
[0017] In accordance with some embodiments, a method is performed at a computer system having a display generation component, one or more input devices, one or more audio output generators, and one or more cameras. The method includes displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras. The method also includes detecting movement of the device that adjusts the field of view of the one or more cameras. The method also includes, in response to detecting movement of the device that adjusts the field of view of the one or more cameras: adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted, and, in accordance with a determination that the movement of the device causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generating, via the one or more audio output generators, a first audio alert.
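As a rough sketch of the alert condition, assuming the virtual object's footprint and the displayed field of view are approximated as screen-space rectangles, and assuming a half-visible threshold (both assumptions, along with all names, are illustrative only):

```swift
import Foundation

// Screen-space rectangle used to approximate visibility.
struct Rect {
    var x, y, width, height: Double
    var area: Double { width * height }
    func intersection(_ other: Rect) -> Rect {
        let x0 = max(x, other.x), y0 = max(y, other.y)
        let x1 = min(x + width, other.x + other.width)
        let y1 = min(y + height, other.y + other.height)
        return Rect(x: x0, y: y0, width: max(0, x1 - x0), height: max(0, y1 - y0))
    }
}

func updateAfterDeviceMovement(objectFrame: Rect,
                               displayedFieldOfView: Rect,
                               playFirstAudioAlert: () -> Void) {
    guard objectFrame.area > 0 else { return }
    let visibleFraction = objectFrame.intersection(displayedFieldOfView).area
        / objectFrame.area
    // Assumed criterion: alert when more than half of the virtual object has
    // moved outside the displayed portion of the field of view.
    if 1.0 - visibleFraction > 0.5 {
        playFirstAudioAlert()
    }
}
```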
[0018] In accordance with some embodiments, an electronic device includes a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, optionally one or more attitude sensors for detecting changes in attitude, one or more processors, and memory storing one or more programs; the one or more programs are configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of the operations of any of the methods described herein. In accordance with some embodiments, a computer readable storage medium has stored therein instructions, which, when executed by an electronic device with a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors, cause the device to perform or cause performance of the operations of any of the methods described herein. In accordance with some embodiments, a graphical user interface on an electronic device with a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, optionally one or more attitude sensors, a memory, and one or more processors to execute one or more programs stored in the memory includes one or more of the elements displayed in any of the methods described herein, which are updated in response to inputs, as described in any of the methods described herein. In accordance with some embodiments, an electronic device includes: a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors for detecting changes in attitude; and means for performing or causing performance of the operations of any of the methods described herein. In accordance with some embodiments, an information processing apparatus, for use in an electronic device with a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors for detecting changes in attitude, includes means for performing or causing performance of the operations of any of the methods described herein.
[0019] Thus, electronic devices with display generation components, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors to detect intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors, are provided with improved methods and interfaces for displaying virtual objects in a variety of contexts, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces may complement or replace conventional methods for displaying virtual objects in a variety of contexts.
[0020] For a better understanding of the various described embodiments, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
[0021] Figure 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display, in accordance with some embodiments.

[0022] Figure 1B is a block diagram illustrating example components for event handling, in accordance with some embodiments.

[0023] Figure 1C is a block diagram illustrating a tactile output module, in accordance with some embodiments.

[0024] Figure 2 illustrates a portable multifunction device having a touch screen, in accordance with some embodiments.

[0025] Figure 3 is a block diagram of an example multifunction device with a display and a touch-sensitive surface, in accordance with some embodiments.

[0026] Figure 4A illustrates an example user interface for a menu of applications on a portable multifunction device, in accordance with some embodiments.

[0027] Figure 4B illustrates an example user interface for a multifunction device with a touch-sensitive surface that is separate from the display, in accordance with some embodiments.
[0028] Figures 4C-4E illustrate examples of dynamic intensity thresholds, in accordance with some embodiments.

[0029] Figures 4F-4K illustrate a set of sample tactile output patterns, in accordance with some embodiments.

[0030] Figures 5A-5AT illustrate example user interfaces for displaying a representation of a virtual object while switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments.

[0031] Figures 6A-6AJ illustrate example user interfaces for displaying a first representation of a virtual object in a first user interface region, a second representation of the virtual object in the second user interface region, and a third representation of the virtual object with a representation of a field of view of one or more cameras, in accordance with some embodiments.

[0032] Figures 7A-7E, 7F1-7F2, 7G1-7G2, and 7H-7P illustrate example user interfaces for displaying an item with a visual indication to indicate that the item corresponds to a virtual three-dimensional object, in accordance with some embodiments.

[0033] Figures 8A-8E are flow diagrams of a process for displaying a representation of a virtual object while switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments.
[0034] Figures 9A-9D are flow diagrams of a process for displaying a first representation of a virtual object in a first user interface region, a second representation of the virtual object in the second user interface region, and a third representation of the virtual object with a representation of a field of view of one or more cameras, in accordance with some embodiments.

[0035] Figures 10A-10D are flow diagrams of a process for displaying an item with a visual indication to indicate that the item corresponds to a virtual three-dimensional object, in accordance with some embodiments.

[0036] Figures 11A-11V illustrate example user interfaces for displaying a virtual object with different visual properties depending on whether object-placement criteria are met, in accordance with some embodiments.

[0037] Figures 12A-12D, 12E-1, 12E-2, 12F-1, 12F-2, 12G-1, 12G-2, 12H-1, 12H-2, 12I-1, 12I-2, 12J, 12K-1, 12K-2, 12L-1, and 12L-2 illustrate example user interfaces for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device, in accordance with some embodiments.
[0038] Figures 13A-13M illustrate example user interfaces for constraining rotation of a virtual object about an axis, in accordance with some embodiments.

[0039] Figures 14A-14Z illustrate example user interfaces for, in accordance with a determination that a first threshold magnitude of movement is met for a first object manipulation behavior, increasing a second threshold magnitude of movement required for a second object manipulation behavior, in accordance with some embodiments.

[0040] Figures 14AA-14AD are flow diagrams illustrating operations for, in accordance with a determination that a first threshold magnitude of movement is met for a first object manipulation behavior, increasing a second threshold magnitude of movement required for a second object manipulation behavior, in accordance with some embodiments.

[0041] Figures 15A-15AI illustrate example user interfaces for generating an audio alert in accordance with a determination that movement of a device causes a virtual object to move outside of a displayed field of view of one or more device cameras, in accordance with some embodiments.

[0042] Figures 16A-16G are flow diagrams of a process for displaying a virtual object with different visual properties depending on whether object-placement criteria are met, in accordance with some embodiments.

[0043] Figures 17A-17D are flow diagrams of a process for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device, in accordance with some embodiments.

[0044] Figures 18A-18I are flow diagrams of a process for constraining rotation of a virtual object about an axis, in accordance with some embodiments.

[0045] Figures 19A-19H are flow diagrams of a process for, in accordance with a determination that a first threshold magnitude of movement is met for a first object manipulation behavior, increasing a second threshold magnitude of movement required for a second object manipulation behavior, in accordance with some embodiments.

[0046] Figures 20A-20F are flow diagrams of a process for generating an audio alert in accordance with a determination that movement of a device causes a virtual object to move outside of a displayed field of view of one or more device cameras, in accordance with some embodiments.
[0047] A virtual object is a graphical representation of a three-dimensional object in a virtual environment. Conventional methods of interacting with virtual objects to transition the virtual objects from being displayed in the context of an application user interface (e.g., a two-dimensional application user interface that does not display an augmented reality environment) to being displayed in the context of an augmented reality environment (e.g., an environment in which a view of the physical world is augmented with supplemental information that provides additional information to a user that is not available in the physical world) often require multiple separate inputs (e.g., a sequence of gestures and button presses, etc.) to achieve an intended outcome (e.g., adjusting the size, position, and/or orientation of the virtual object for a realistic or desired appearance in an augmented reality environment). Further, conventional input methods often involve a delay between receiving a request to display an augmented reality environment and displaying the augmented reality environment, due to the time required to activate one or more device cameras to capture a view of the physical world and/or the time required to analyze and characterize the view of the physical world (e.g., detecting planes and/or surfaces in the captured view of the physical world) in relation to the virtual objects that may be placed in the augmented reality environment. The embodiments herein provide an intuitive way for a user to display and/or interact with virtual objects in various contexts (e.g., by allowing a user to provide input to switch from displaying a virtual object in the context of an application user interface to displaying the virtual object in an augmented reality environment, by allowing a user to change display properties of a virtual object (e.g., in a three-dimensional staging environment) prior to displaying the virtual object in an augmented reality environment, by providing an indication that allows a user to readily identify virtual objects system-wide across multiple applications, by altering a visual property of an object while determining placement information for the object, by providing an animated calibration user interface object to indicate movement of a device needed for calibration, by constraining rotation of a displayed virtual object about an axis, by increasing a threshold magnitude of movement for a second object manipulation behavior when a threshold magnitude of movement is met for a first object manipulation behavior, and by providing an audio alert to indicate that a virtual object has moved out of a displayed field of view).
[0048] The systems, methods, and GUIs described herein improve user interface interactions with virtual/augmented reality environments in multiple ways. For example, they make it easier to display a virtual object in an augmented reality environment and, in response to different inputs, adjust the appearance of the virtual object for display in the augmented reality environment.
[0049] Below, Figures 1A-1C, 2, and 3 provide a description of example devices. Figures 4A-4B, 5A-5AT, 6A-6AJ, 7A-7P, 11A-11V, 12A-12L, 13A-13M, 14A-14Z, and 15A-15AI illustrate example user interfaces for displaying virtual objects in a variety of contexts. Figures 8A-8E illustrate a process for displaying a representation of a virtual object while switching from displaying a first user interface region to displaying a second user interface region. Figures 9A-9D illustrate a process for displaying a first representation of a virtual object in a first user interface region, a second representation of the virtual object in the second user interface region, and a third representation of the virtual object with a representation of a field of view of one or more cameras. Figures 10A-10D illustrate a process for displaying an item with a visual indication to indicate that an item corresponds to a virtual three-dimensional object. Figures 16A-16G illustrate a process for displaying a virtual object with different visual properties depending on whether object-placement criteria are met. Figures 17A-17D illustrate a process for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device. Figures 18A-18I illustrate a process for constraining rotation of a virtual object about an axis. Figures 14AA-14AD and 19A-19H illustrate a process for, in accordance with a determination that a first threshold magnitude of movement is met for a first object manipulation behavior, increasing a second threshold magnitude of movement required for a second object manipulation behavior. Figures 20A-20F illustrate a process for generating an audio alert in accordance with a determination that movement of a device causes a virtual object to move outside of a displayed field of view of one or more device cameras. The user interfaces in Figures 5A-5AT, 6A-6AJ, 7A-7P, 11A-11V, 12A-12L, 13A-13M, 14A-14Z, and 15A-15AI are used to illustrate the processes in Figures 8A-8E, 9A-9D, 10A-10D, 14AA-14AD, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F.
[0050] Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
[0051] It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact, unless the context clearly indicates otherwise.
[0052] The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "includes," "including," "comprises," and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0053] As used herein, the term "if" is, optionally, construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is, optionally, construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
[0054] Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Example embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch-screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
[0055] In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
[0056] The device typically supports a variety of applications, such as one or more of the following: a note taking application, a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
[0057] The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
[0058] Attention is now directed toward embodiments of portable devices with touch-sensitive displays. Figure 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes called a "touch screen" for convenience, and is sometimes simply called a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer readable storage mediums), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external port 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more intensity sensors 165 for detecting intensities of contacts on device 100 (e.g., a touch-sensitive surface such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
[0059] It should be appreciated that device 100 is only one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in Figure 1A are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application specific integrated circuits.
[0060] Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as CPU(s) 120 and the peripherals interface 118, is, optionally, controlled by memory controller 122.
[0061] Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU(s) 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data.
[0062] In some embodiments, peripherals interface 118, CPU(s) 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.
[0063] RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The wireless communication optionally uses any of a plurality of communications standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
[0064] Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212, Figure 2). The headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone).
[0065] I/O subsystem 106 couples input/output peripherals on device 100, such as touch-sensitive display system 112 and other input or control devices 116, with peripherals interface 118. I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input or control devices 116. The other input or control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller(s) 160 are, optionally, coupled with any (or none) of the following: a keyboard, infrared port, USB port, stylus, and/or a pointer device such as a mouse. The one or more buttons (e.g., 208, Figure 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206, Figure 2).
[0066] Touch-sensitive display system 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives and/or sends electrical signals from/to touch-sensitive display system 112. Touch-sensitive display system 112 displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output corresponds to user interface objects. As used herein, the term “affordance” refers to a user-interactive graphical user interface object (e.g., a graphical user interface object that is configured to respond to inputs directed toward the graphical user interface object). Examples of user-interactive graphical user interface objects include, without limitation, a button, slider, icon, selectable menu item, switch, hyperlink, or other user interface control.
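By way of illustration only (this sketch is not part of the specification, and the protocol and type names are hypothetical), the notion of an affordance as an on-screen object configured to respond to inputs directed toward it could be modeled in Swift roughly as follows:

    // Hypothetical sketch: an "affordance" is any on-screen object that is
    // configured to respond to inputs directed toward it.
    protocol Affordance {
        var frame: (x: Double, y: Double, width: Double, height: Double) { get }
        func respond(toInputAt point: (x: Double, y: Double))
    }

    struct SliderAffordance: Affordance {
        let frame = (x: 0.0, y: 0.0, width: 200.0, height: 44.0)
        func respond(toInputAt point: (x: Double, y: Double)) {
            // Map the horizontal input position to a 0...1 slider value.
            let value = min(max((point.x - frame.x) / frame.width, 0), 1)
            print("slider set to \(value)")
        }
    }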
[0067] Touch-sensitive display system 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch-sensitive display system 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch-sensitive display system 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch-sensitive display system 112. In some embodiments, a point of contact between touch-sensitive display system 112 and the user corresponds to a finger of the user or a stylus.
[0068] Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In some embodiments, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod Touch®, and iPad® from Apple Inc. of Cupertino, California.
[0069] Touch-sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen video resolution is in excess of 400 dpi (e.g., 500 dpi, 800 dpi, or greater). The user optionally makes contact with touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
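Paragraph [0069] does not specify how that translation is performed; purely as a hedged sketch (all names are illustrative, not the specification's), one simple approach reduces the sampled contact region to its centroid:

    // Illustrative sketch only: reduce a rough finger contact, sampled as
    // several points on the touch-sensitive surface, to one pointer position.
    struct TouchSample { var x: Double; var y: Double }

    func pointerPosition(for samples: [TouchSample]) -> (x: Double, y: Double)? {
        guard !samples.isEmpty else { return nil }
        let sumX = samples.reduce(0.0) { $0 + $1.x }
        let sumY = samples.reduce(0.0) { $0 + $1.y }
        return (sumX / Double(samples.count), sumY / Double(samples.count))
    }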
[0070] In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch-sensitive display system 112 or an extension of the touch-sensitive surface formed by the touch screen.
[0071] Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management and distribution of power in portable devices.
[0072] Device 100 optionally also includes one or more optical sensors 164. Figure 1A shows an optical sensor coupled with optical sensor controller 158 in I/O subsystem 106. Optical sensor(s) 164 optionally include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor(s) 164 receive light from the environment, projected through one or more lenses, and convert the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor(s) 164 optionally capture still images and/or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch-sensitive display system 112 on the front of the device, so that the touch screen is enabled for use as a viewfinder for still and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device so that the user's image is obtained (e.g., for selfies, for videoconferencing while the user views the other video conference participants on the touch screen, etc.).
[0073] Device 100 optionally also includes one or more contact intensity sensors 165. Figure 1A shows a contact intensity sensor coupled with intensity sensor controller 159 in I/O subsystem 106. Contact intensity sensor(s) 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor(s) 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch-screen display system 112, which is located on the front of device 100.
[0074] Device 100 optionally also includes one or more proximity sensors 166. Figure 1A shows proximity sensor 166 coupled with peripherals interface 118. Alternately, proximity sensor 166 is coupled with input controller 160 in I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
[0075] Device 100 optionally also includes one or more tactile output generators 167. Figure 1A shows a tactile output generator coupled with haptic feedback controller 161 in I/O subsystem 106. In some embodiments, tactile output generator(s) 167 include one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator(s) 167 receive tactile feedback generation instructions from haptic feedback module 133 and generate tactile outputs on device 100 that are capable of being sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch-sensitive display system 112, which is located on the front of device 100.
[0076] Device 100 optionally also includes one or more accelerometers 168. Figure 1A shows accelerometer 168 coupled with peripherals interface 118. Alternately, accelerometer 168 is, optionally, coupled with an input controller 160 in I/O subsystem 106. In some embodiments, information is displayed on the touch-screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
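As a rough sketch of the portrait/landscape analysis described above (the axis conventions and the dominance rule are assumptions, not taken from the specification):

    // Assumed convention: x runs across the device, y along its length, and
    // the accelerometer readings report the gravity vector in units of g.
    enum Orientation { case portrait, landscape }

    func orientation(ax: Double, ay: Double) -> Orientation {
        // The axis that gravity dominates indicates how the device is held.
        return abs(ay) >= abs(ax) ? .portrait : .landscape
    }

    print(orientation(ax: 0.05, ay: -0.99))  // portrait
    print(orientation(ax: 0.98, ay: 0.02))   // landscape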
[0077] In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, haptic feedback module (or set of instructions) 133, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 stores device/global internal state 157, as shown in Figures 1A and 3. Device/global internal state 157 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch-sensitive display system 112; sensor state, including information obtained from the device's various sensors and other input or control devices 116; and location and/or positional information concerning the device's location and/or attitude.
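A minimal sketch of the kinds of fields device/global internal state 157 is described as holding (the type and field names are illustrative, not the patent's):

    // Illustrative only: one plausible shape for device/global internal state.
    struct DeviceGlobalInternalState {
        var activeApplications: [String]       // active application state
        var displayRegions: [String: String]   // display state: region -> content
        var sensorReadings: [String: Double]   // sensor state
        var location: (latitude: Double, longitude: Double)?   // positional info
        var attitude: (pitch: Double, roll: Double, yaw: Double)?
    }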
[0078] Operating system 126 (e.g., iOS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
[0079] Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with the 30-pin connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with the Lightning connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California.
[0080] Contact/motion module 130 optionally detects contact with touch-sensitive display system 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact (e.g., by a finger or by a stylus), such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
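As a hedged sketch of the movement computation described in paragraph [0080] (the types, names, and smoothing-free arithmetic are assumptions, not the patent's implementation):

    // Illustrative sketch: speed, velocity, and acceleration derived from a
    // series of timestamped contact positions, per paragraph [0080].
    struct ContactData { var x: Double; var y: Double; var t: Double }  // t in seconds

    func velocity(from a: ContactData, to b: ContactData) -> (vx: Double, vy: Double) {
        let dt = max(b.t - a.t, 1e-6)  // guard against a zero time interval
        return ((b.x - a.x) / dt, (b.y - a.y) / dt)
    }

    func speed(from a: ContactData, to b: ContactData) -> Double {
        let v = velocity(from: a, to: b)
        return (v.vx * v.vx + v.vy * v.vy).squareRoot()  // magnitude only
    }

    // Acceleration is the change in velocity between successive segments.
    func acceleration(_ p0: ContactData, _ p1: ContactData, _ p2: ContactData)
        -> (ax: Double, ay: Double) {
        let v1 = velocity(from: p0, to: p1)
        let v2 = velocity(from: p1, to: p2)
        let dt = max(p2.t - p0.t, 1e-6) / 2
        return ((v2.vx - v1.vx) / dt, (v2.vy - v1.vy) / dt)
    }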
[0081] Contact/motion module 130 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift off) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift off) event. Similarly, tap, swipe, drag, and other gestures are optionally detected for a stylus by detecting a particular contact pattern for the stylus.
[0082] In some embodiments, detecting a finger tap gesture depends on the length of time between detecting the finger-down event and the finger-up event, but is independent of the intensity of the finger contact between detecting the finger-down event and the finger-up event. In some embodiments, a tap gesture is detected in accordance with a determination that the length of time between the finger-down event and the finger-up event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4 or 0.5 seconds), independent of whether the intensity of the finger contact during the tap meets a given intensity threshold (greater than a nominal contact-detection intensity threshold), such as a light press or deep press intensity threshold. Thus, a finger tap gesture can satisfy particular input criteria that do not require that the characteristic intensity of a contact satisfy a given intensity threshold in order for the particular input criteria to be met. For clarity, the finger contact in a tap gesture typically needs to satisfy a nominal contact-detection intensity threshold, below which the contact is not detected, in order for the finger-down event to be detected. A similar analysis applies to detecting a tap gesture by a stylus or other contact. In cases where the device is capable of detecting a finger or stylus contact hovering over a touch-sensitive surface, the nominal contact-detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch-sensitive surface.
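The duration-based tap criterion of paragraph [0082] can be sketched as follows (the threshold value is chosen from the example range given above; this is not an authoritative implementation):

    // Hedged sketch: a tap is recognized when the finger-up event follows the
    // finger-down event within a predetermined time, independent of intensity.
    let maxTapDuration = 0.3  // seconds; [0082] gives 0.1-0.5 s as examples

    func isTapGesture(fingerDownAt tDown: Double, fingerUpAt tUp: Double) -> Bool {
        // Intensity is deliberately not consulted; only duration matters here.
        return (tUp - tDown) < maxTapDuration
    }

    print(isTapGesture(fingerDownAt: 0.00, fingerUpAt: 0.12))  // true
    print(isTapGesture(fingerDownAt: 0.00, fingerUpAt: 0.80))  // false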
[0083] The same concepts apply in an analogous manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a depinch gesture, and/or a long press gesture are optionally detected based on the satisfaction of criteria that are either independent of intensities of contacts included in the gesture, or do not require that contact(s) that perform the gesture reach intensity thresholds in order to be recognized. For example, a swipe gesture is detected based on an amount of movement of one or more contacts; a pinch gesture is detected based on movement of two or more contacts towards each other; a depinch gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of the contact on the touch-sensitive surface with less than a threshold amount of movement. As such, the statement that particular gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met means that the particular gesture recognition criteria are capable of being satisfied if the contact(s) in the gesture do not reach the respective intensity threshold, and are also capable of being satisfied in circumstances where one or more of the contacts in the gesture do reach or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on a determination that the finger-down and finger-up event are detected within a predefined time period, without regard to whether the contact is above or below the respective intensity threshold during the predefined time period, and a swipe gesture is detected based on a determination that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the contact movement. Even in implementations where detection of a gesture is influenced by the intensity of contacts performing the gesture (e.g., the device detects a long press more quickly when the intensity of the contact is above an intensity threshold or delays detection of a tap input when the intensity of the contact is higher), the detection of those gestures does not require that the contacts reach a particular intensity threshold so long as the criteria for recognizing the gesture can be met in circumstances where the contact does not reach the particular intensity threshold (e.g., even if the amount of time that it takes to recognize the gesture changes).
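Purely as an illustrative sketch of the intensity-independent criteria enumerated in paragraph [0083] (the summary type and all thresholds are assumptions):

    // Illustrative classification by movement and duration only; intensity is
    // never consulted, matching the criteria described in [0083].
    enum Gesture { case swipe, pinch, depinch, longPress }

    struct GestureSample {
        var contactCount: Int
        var totalMovement: Double      // movement of the contact(s), in points
        var interContactDelta: Double  // change in distance between two contacts
        var duration: Double           // seconds the contact(s) persisted
    }

    func classify(_ s: GestureSample) -> Gesture? {
        let moveThreshold = 10.0, pinchThreshold = 10.0, pressDuration = 0.5
        if s.contactCount >= 2 && s.interContactDelta <= -pinchThreshold { return .pinch }
        if s.contactCount >= 2 && s.interContactDelta >= pinchThreshold { return .depinch }
        if s.totalMovement >= moveThreshold { return .swipe }
        if s.duration >= pressDuration && s.totalMovement < moveThreshold { return .longPress }
        return nil
    }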
[0084] Contact intensity thresholds, duration thresholds, and movement thresholds are, in some circumstances, combined in a variety of different combinations in order to create heuristics for distinguishing two or more different gestures directed to the same input element or region so that multiple different interactions with the same input element are enabled to provide a richer set of user interactions and responses. The statement that a particular set of gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met does not preclude the concurrent evaluation of other intensity-dependent gesture recognition criteria to identify other gestures that do have criteria that are met when a gesture includes a contact with an intensity above the respective intensity threshold. For example, in some circumstances, first gesture recognition criteria for a first gesture – which do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the first gesture recognition criteria to be met – are in competition with second gesture recognition criteria for a second gesture – which are dependent on the contact(s) reaching the respective intensity threshold. In such competitions, the gesture is, optionally, not recognized as meeting the first gesture recognition criteria for the first gesture if the second gesture recognition criteria for the second gesture are met first. For example, if a contact reaches the respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected rather than a swipe gesture. Conversely, if the contact moves by the predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected rather than a deep press gesture. Even in such circumstances, the first gesture recognition criteria for the first gesture still do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the first gesture recognition criteria to be met because if the contact stayed below the respective intensity threshold until an end of the gesture (e.g., a swipe gesture with a contact that does not increase to an intensity above the respective intensity threshold), the gesture would have been recognized by the first gesture recognition criteria as a swipe gesture. As such, particular gesture recognition criteria that do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met will (A) in some circumstances ignore the intensity of the contact with respect to the intensity threshold (e.g., for a tap gesture) and/or (B) in some circumstances still be dependent on the intensity of the contact with respect to the intensity threshold in the sense that the particular gesture recognition criteria (e.g., for a long press gesture) will fail if a competing set of intensity-dependent gesture recognition criteria (e.g., for a deep press gesture) recognize an input as corresponding to an intensity-dependent gesture before the particular gesture recognition criteria recognize a gesture corresponding to the input (e.g., for a long press gesture that is competing with a deep press gesture for recognition).
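The deep-press-versus-swipe competition in paragraph [0084] reduces to a "first threshold reached wins" rule; a minimal sketch under assumed thresholds (not the patent's implementation):

    // Hedged sketch: scan samples in time order; whichever threshold is
    // reached first claims the gesture, per the competition in [0084].
    enum CompetingGesture { case deepPress, swipe }

    func resolve(samples: [(intensity: Double, movement: Double)],
                 intensityThreshold: Double = 1.0,
                 movementThreshold: Double = 10.0) -> CompetingGesture? {
        for s in samples {
            if s.intensity >= intensityThreshold { return .deepPress }
            if s.movement >= movementThreshold { return .swipe }
        }
        return nil  // neither criterion met before the gesture ended
    }

Note that a contact staying below the intensity threshold for its whole duration can still produce a swipe, which is why the swipe criteria remain intensity-independent even while competing with an intensity-dependent recognizer.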
[0085] Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including without limitation text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations and the like.
[0086] In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
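A minimal sketch of the code-based graphics interface described in paragraph [0086] (the registry and request types are hypothetical):

    // Illustrative only: applications specify graphics by code, with optional
    // coordinate and property data; the module resolves codes to stored assets.
    struct GraphicRequest {
        var code: Int            // identifies a stored graphic
        var x: Double, y: Double // coordinate data
        var opacity: Double      // other graphic property data
    }

    let storedGraphics: [Int: String] = [1: "icon", 2: "softKey"]

    func screenImageData(for requests: [GraphicRequest]) -> [String] {
        requests.compactMap { (r: GraphicRequest) -> String? in
            guard let asset = storedGraphics[r.code] else { return nil }
            return "draw \(asset) at (\(r.x), \(r.y)) opacity \(r.opacity)"
        }
    }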
[0087] Haptic feedback module 133 includes various software components for generating instructions (e.g., instructions used by haptic feedback controller 161) to produce tactile outputs using tactile output generator(s) 167 at one or more locations on device 100 in response to user interactions with device 100.
[0088] Text input module 134, which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that needs text input).
[0089] GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing, to camera 143 as picture/video metadata, and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets).
[0090] Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
contacts module 137 (sometimes called an address book or contact list);
telephone module 138;
video conferencing module 139;
e-mail client module 140;
instant messaging (IM) module 141;
workout support module 142;
camera module 143 for still and/or video images;
image management module 144;
browser module 147;
calendar module 148;
widget modules 149, which optionally include one or more of: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;
widget creator module 150 for making user-created widgets 149-6;
search module 151;
video and music player module 152, which is, optionally, made up of a video player module and a music player module;
notes module 153;
map module 154; and/or
online video module 155.
[0091] Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
[0092] In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contacts module 137 includes executable instructions to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers and/or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference 139, e-mail 140, or IM 141; and so forth.
[0093] In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, telephone module 138 includes executable instructions to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in address book 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols and technologies.
[0094] In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephone module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
[0095] In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module 144, e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with camera module 143.
[0096] In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, Apple Push Notification Service (APNs) or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in a MMS and/or an Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).
[0097] In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and video and music player module 152, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (in sports devices and smart watches); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store and transmit workout data.
[0098] In conjunction with touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, and/or delete a still image or video from memory 102.
[0099] In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
[00100] In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
[00101] In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, e-mail client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to do lists, etc.) in accordance with user instructions.
[00102] In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
[00103] In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, the widget creator module 150 includes executable instructions to create widgets (e.g., turning a user-specified portion of a web page into a widget).
[00104] In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
[00105] In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present or otherwise play back videos (e.g., on touch-sensitive display system 112, or on an external display connected wirelessly or via external port 124). In some embodiments, device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
[00106] In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to do lists, and the like in accordance with user instructions.
[00107] In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 includes executable instructions to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data on stores and other points of interest at or near a particular location; and other location-based data) in accordance with user instructions.
[00108] In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, e-mail client module 140, and browser module 147, online video module 155 includes executable instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen 112, or on an external display connected wirelessly or via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video.
[00109] Each of the above identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are, optionally, combined or otherwise re-arranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 102 optionally stores additional modules and data structures not described above.
[00110] In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced.
[00111] The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. In such embodiments, a "menu button" is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.
[00112] Figure 1B is a block diagram illustrating example components for event handling in accordance with some embodiments. In some embodiments, memory 102 (in Figure 1A) or 370 (Figure 3) includes event sorter 170 (e.g., in operating system 126) and a respective application 136-1 (e.g., any of the aforementioned applications 136, 137-155, 380-390).
[00113] Event sorter 170 receives event information and determines the application 136-1 and application view 191 of application 136-1 to which to deliver the event information. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display system 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
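As a rough illustration of the routing just described, device/global state selects the active application and that application's internal state selects the current views. The Swift sketch below is schematic only; the function and type names are invented and are not the patent's interfaces:

```swift
// Illustrative-only routing; names and types are invented for this sketch.
struct EventInfo { let description: String }

struct ApplicationInternalState {
    var currentViews: [String]        // cf. application internal state 192
}

struct DeviceGlobalState {
    var activeApplication: String     // cf. device/global internal state 157
}

// Determine the application and views to which the event should be delivered.
func route(_ event: EventInfo,
           deviceState: DeviceGlobalState,
           appStates: [String: ApplicationInternalState]) -> (app: String, views: [String])? {
    let app = deviceState.activeApplication
    guard let state = appStates[app] else { return nil }
    return (app, state.currentViews)
}
```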
[00114] In some embodiments, application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
[00115] Event monitor 171 receives event information from peripherals interface 118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display system 112, as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110). Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display system 112 or a touch-sensitive surface.
[00116] In some embodiments, event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripheral interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
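The "significant event" condition can be read as a simple gate on input magnitude and duration. A minimal Swift sketch, with noiseThreshold and minimumDuration as assumed values rather than disclosed ones:

```swift
// Illustrative gate: transmit event information only for "significant" inputs.
struct RawInput {
    let magnitude: Double   // e.g., signal level of the input
    let duration: Double    // seconds the input persisted
}

let noiseThreshold = 0.2    // assumed value for illustration
let minimumDuration = 0.05  // assumed value for illustration

func isSignificant(_ input: RawInput) -> Bool {
    input.magnitude > noiseThreshold || input.duration > minimumDuration
}
```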
[00117] In some embodiments, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
[00118] Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views, when touch-sensitive display system 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
[00119] Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
[00120] Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (i.e., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
[00121] Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
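Paragraphs [00118] through [00121] can be pictured together as a recursive walk over the view tree: descend to the lowest view containing the touch point (the hit view), then treat that view and its ancestors as one possible set of actively involved views. The Swift sketch below uses invented View and Point types and assumes all frames share one coordinate space; it is an illustration, not the disclosed modules 172 and 173:

```swift
// Illustrative view hierarchy; frames are in one shared coordinate space.
struct Point { var x: Double; var y: Double }

final class View {
    let name: String
    let frame: (x: Double, y: Double, width: Double, height: Double)
    private(set) var subviews: [View] = []
    private(set) weak var superview: View?

    init(name: String, frame: (x: Double, y: Double, width: Double, height: Double)) {
        self.name = name
        self.frame = frame
    }

    func add(_ child: View) {
        child.superview = self
        subviews.append(child)
    }

    func contains(_ p: Point) -> Bool {
        p.x >= frame.x && p.x < frame.x + frame.width &&
            p.y >= frame.y && p.y < frame.y + frame.height
    }
}

// Hit view: the lowest view in the hierarchy containing the initial touch point.
// (First matching subview wins in this sketch; real systems define their own ordering.)
func hitView(in root: View, at point: Point) -> View? {
    guard root.contains(point) else { return nil }
    for child in root.subviews {
        if let hit = hitView(in: child, at: point) { return hit }
    }
    return root
}

// One reading of "actively involved views": the hit view plus all of its ancestors.
func activelyInvolvedViews(from hit: View) -> [View] {
    var chain: [View] = []
    var current: View? = hit
    while let view = current {
        chain.append(view)
        current = view.superview
    }
    return chain
}
```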
[00122] Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver module 182.
[00123] In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
[00124] In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application 136-1 inherits methods and other properties. In some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177 or GUI updater 178 to update the application internal state 192. Alternatively, one or more of the application views 191 includes one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
[00125] A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170, and identifies an event from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).
[00126] Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
[00127] Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event 187 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display system 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
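The event 1 (187-1) definition above, touch begin, lift-off, touch begin, lift-off, each within a predetermined phase, is naturally expressed as a small state machine. The Swift sketch below is a hedged illustration of such a recognizer; DoubleTapRecognizer, its states, and the maxPhase value are assumptions for this example, not the disclosed event recognizer 180:

```swift
// Illustrative double-tap definition: begin, end, begin, end, each within maxPhase.
enum SubEvent {
    case touchBegin(at: Double)   // timestamps in seconds
    case touchEnd(at: Double)
}

final class DoubleTapRecognizer {
    enum State { case possible, failed, recognized }
    private(set) var state: State = .possible
    private var lastTimestamp: Double?
    private var stepsMatched = 0
    let maxPhase = 0.3            // assumed "predetermined phase", not a disclosed value

    func handle(_ subEvent: SubEvent) {
        guard state == .possible else { return }   // cf. [00130]: later sub-events disregarded
        let timestamp: Double
        let isBegin: Bool
        switch subEvent {
        case .touchBegin(let t): timestamp = t; isBegin = true
        case .touchEnd(let t):   timestamp = t; isBegin = false
        }
        let expectBegin = stepsMatched % 2 == 0    // the sequence alternates begin/end
        let withinPhase = lastTimestamp.map { timestamp - $0 <= maxPhase } ?? true
        guard isBegin == expectBegin, withinPhase else {
            state = .failed                        // out of order or timed out
            return
        }
        lastTimestamp = timestamp
        stepsMatched += 1
        if stepsMatched == 4 { state = .recognized }   // two taps completed
    }
}
```

Feeding this recognizer touchBegin(at: 0), touchEnd(at: 0.1), touchBegin(at: 0.3), touchEnd(at: 0.4) leaves it in the recognized state; a gap longer than maxPhase or an out-of-order sub-event moves it to failed, after which further sub-events are ignored.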
[00128] In some embodiments, event definition 187 includes a definition of an event for a respective user-interface object. In some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display system 112, when a touch is detected on touch-sensitive display system 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test.
[00129] In some embodiments, the definition for a respective event 187 also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type.
[00130] When a respective event recognizer 180 determines that the series of sub-events does not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
[00131] In some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
[00132] In some embodiments, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
[00133] In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
[00134] In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video and music player module 152. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
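The division of labor in paragraph [00134] (application data, user-interface objects, and the GUI each updated by a distinct component) can be sketched as three small responsibilities. All names below are assumptions made for this Swift illustration; printing stands in for handing display information to a graphics layer:

```swift
// Illustrative split of handler responsibilities; all names are assumptions.
final class DataUpdater {
    private(set) var phoneNumbers: [String: String] = [:]
    func update(contact: String, number: String) {     // cf. updating contacts data
        phoneNumbers[contact] = number
    }
}

final class ObjectUpdater {
    private(set) var positions: [String: (x: Double, y: Double)] = [:]
    func move(object: String, to p: (x: Double, y: Double)) {   // create/update a UI object
        positions[object] = p
    }
}

final class GUIUpdater {
    func refresh(with positions: [String: (x: Double, y: Double)]) {
        // Stand-in for preparing display information for a graphics layer.
        for (name, p) in positions { print("redraw \(name) at (\(p.x), \(p.y))") }
    }
}
```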
[00135] In some embodiments, event handler(s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
[00136] It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc., on touch-pads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.
[00137] Figure 1C is a block diagram illustrating a tactile output module in accordance with some embodiments. In some embodiments, I/O subsystem 106 (e.g., haptic feedback controller 161 (Figure 1A) and/or other input controller(s) 160 (Figure 1A)) includes at least some of the example components shown in Figure 1C. In some embodiments, peripherals interface 118 includes at least some of the example components shown in Figure 1C.
[00138] In some embodiments, the tactile output module includes haptic feedback module 133. In some embodiments, haptic feedback module 133 aggregates and combines tactile outputs for user interface feedback from software applications on the electronic device (e.g., feedback that is responsive to user inputs that correspond to displayed user interfaces and alerts and other notifications that indicate the performance of operations or occurrence of events in user interfaces of the electronic device). Haptic feedback module 133 includes one or more of: waveform module 123 (for providing waveforms used for generating tactile outputs), mixer 125 (for mixing waveforms, such as waveforms in different channels), compressor 127 (for reducing or compressing a dynamic range of the waveforms), low-pass filter 129 (for filtering out high frequency signal components in the waveforms), and thermal controller 131 (for adjusting the waveforms in accordance with thermal conditions). In some embodiments, haptic feedback module 133 is included in haptic feedback controller 161 (Figure 1A). In some embodiments, a separate unit of haptic feedback module 133 (or a separate implementation of haptic feedback module 133) is also included in an audio controller (e.g., audio circuitry 110, Figure 1A) and used for generating audio signals. In some embodiments, a single haptic feedback module 133 is used for generating audio signals and generating waveforms for tactile outputs.
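The modules enumerated above act, in effect, as successive stages of a signal chain through which a waveform passes. As a rough sketch only (the type names and the placeholder stages below are invented for illustration and are not taken from this specification), such a chain could be composed as follows:

```swift
// Hypothetical sketch of a tactile-output signal chain; illustrative only.
typealias Waveform = [Float]

// Each stage transforms a waveform and hands it to the next stage.
typealias Stage = (Waveform) -> Waveform

// Compose stages left-to-right: mixer -> compressor -> low-pass -> thermal.
func pipeline(_ stages: [Stage]) -> Stage {
    { waveform in stages.reduce(waveform) { wf, stage in stage(wf) } }
}

// Placeholder stages standing in for mixer 125, compressor 127,
// low-pass filter 129, and thermal controller 131.
let clip: Stage = { wf in wf.map { max(-0.8, min(0.8, $0)) } }
let passthrough: Stage = { $0 }

let tactileChain = pipeline([passthrough, clip, passthrough, passthrough])
let output = tactileChain([0.0, 0.5, 1.2, -1.5])   // -> [0.0, 0.5, 0.8, -0.8]
```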
[00139] In some embodiments, haptic feedback module 133 also includes trigger module 121 (e.g., a software application, operating system, or other software module that determines a tactile output is to be generated and initiates the process for generating the corresponding tactile output). In some embodiments, trigger module 121 generates trigger signals for initiating generation of waveforms (e.g., by waveform module 123). For example, trigger module 121 generates trigger signals based on preset timing criteria. In some embodiments, trigger module 121 receives trigger signals from outside haptic feedback module 133 (e.g., in some embodiments, haptic feedback module 133 receives trigger signals from hardware input processing module 146 located outside haptic feedback module 133) and relays the trigger signals to other components within haptic feedback module 133 (e.g., waveform module 123) or software applications that trigger operations (e.g., with trigger module 121) based on activation of a user interface element (e.g., an application icon or an affordance within an application) or a hardware input device (e.g., a home button or an intensity-sensitive input surface, such as an intensity-sensitive touch screen). In some embodiments, trigger module 121 also receives tactile feedback generation instructions (e.g., from haptic feedback module 133, Figures 1A and 3). In some embodiments, trigger module 121 generates trigger signals in response to haptic feedback module 133 (or trigger module 121 in haptic feedback module 133) receiving tactile feedback instructions (e.g., from haptic feedback module 133, Figures 1A and 3).
[00140] Waveform module 123 receives trigger signals (e.g., from trigger module 121) as an input, and in response to receiving trigger signals, provides waveforms for generation of one or more tactile outputs (e.g., waveforms selected from a predefined set of waveforms designated for use by waveform module 123, such as the waveforms described in greater detail below with reference to Figures 4F-4G).
[00141] Mixer 125 receives waveforms (e.g., from waveform module 123) as an input, and mixes together the waveforms. For example, when mixer 125 receives two or more waveforms (e.g., a first waveform in a first channel and a second waveform that at least partially overlaps with the first waveform in a second channel), mixer 125 outputs a combined waveform that corresponds to a sum of the two or more waveforms. In some embodiments, mixer 125 also modifies one or more waveforms of the two or more waveforms to emphasize particular waveform(s) over the rest of the two or more waveforms (e.g., by increasing a scale of the particular waveform(s) and/or decreasing a scale of the rest of the waveforms). In some circumstances, mixer 125 selects one or more waveforms to remove from the combined waveform (e.g., the waveform from the oldest source is dropped when there are waveforms from more than three sources that have been requested to be output concurrently by tactile output generator 167).
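For illustration, a mixer of this kind could sum per-channel waveforms sample by sample and drop the waveform from the oldest source once more than three are active. The sketch below uses invented names and a fixed three-source limit purely as an example:

```swift
// Hypothetical mixer sketch: sums overlapping waveforms and, when more than
// three sources are queued, drops the waveform from the oldest source.
typealias Waveform = [Float]

func mix(_ sources: [Waveform], maxSources: Int = 3) -> Waveform {
    // Keep only the newest `maxSources` waveforms (oldest first in the array).
    let active = sources.suffix(maxSources)
    let length = active.map(\.count).max() ?? 0
    var combined = Waveform(repeating: 0, count: length)
    for wf in active {
        for (i, sample) in wf.enumerated() { combined[i] += sample }
    }
    return combined
}

let a: Waveform = [0.2, 0.2, 0.2]        // first channel
let b: Waveform = [0.0, 0.3, 0.3, 0.3]   // partially overlapping second channel
let mixed = mix([a, b])                  // -> [0.2, 0.5, 0.5, 0.3]
```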
[00142] Compressor 127 receives waveforms (e.g., a combined waveform from mixer 125) as an input, and modifies the waveforms. In some embodiments, compressor 127 reduces the waveforms (e.g., in accordance with physical specifications of tactile output generators 167 (Figure 1A) or 357 (Figure 3)) so that tactile outputs corresponding to the waveforms are reduced. In some embodiments, compressor 127 limits the waveforms, such as by enforcing a predefined maximum amplitude for the waveforms. For example, compressor 127 reduces amplitudes of portions of waveforms that exceed a predefined amplitude threshold while maintaining amplitudes of portions of waveforms that do not exceed the predefined amplitude threshold. In some embodiments, compressor 127 reduces a dynamic range of the waveforms. In some embodiments, compressor 127 dynamically reduces the dynamic range of the waveforms so that the combined waveforms remain within performance specifications of the tactile output generator 167 (e.g., force and/or moveable mass displacement limits).
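One simple way to picture the dynamic-range reduction is a gain that scales the whole waveform down whenever its peak would exceed a ceiling; the ceiling value below is invented for illustration:

```swift
// Hypothetical compressor sketch: if the combined waveform's peak exceeds a
// predefined ceiling, scale the whole waveform down so that it stays within
// the tactile output generator's performance limits.
typealias Waveform = [Float]

func compress(_ waveform: Waveform, ceiling: Float = 0.8) -> Waveform {
    let peak = waveform.map(abs).max() ?? 0
    guard peak > ceiling else { return waveform }   // already within limits
    let gain = ceiling / peak
    return waveform.map { $0 * gain }
}

let compressed = compress([0.4, 1.6, -0.8])   // peak 1.6 -> gain 0.5 -> [0.2, 0.8, -0.4]
```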
[00143] Low-pass filter 129 receives waveforms (e.g., compressed waveforms from compressor 127) as an input, and filters (e.g., smooths) the waveforms (e.g., removes or reduces high frequency signal components in the waveforms). For example, in some instances, compressor 127 includes, in compressed waveforms, extraneous signals (e.g., high frequency signal components) that interfere with the generation of tactile outputs and/or exceed performance specifications of tactile output generator 167 when the tactile outputs are generated in accordance with the compressed waveforms. Low-pass filter 129 reduces or removes such extraneous signals in the waveforms.
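A one-pole exponential smoother is one simple way to attenuate such high-frequency components; the smoothing coefficient below is an arbitrary illustrative value, not one taken from this specification:

```swift
// Hypothetical low-pass filter sketch: a one-pole smoother that attenuates
// high-frequency components (e.g., compression artifacts) in the waveform.
typealias Waveform = [Float]

func lowPass(_ waveform: Waveform, alpha: Float = 0.25) -> Waveform {
    var smoothed: Waveform = []
    var previous: Float = 0
    for sample in waveform {
        // Blend each sample with the running output; smaller alpha = heavier smoothing.
        previous += alpha * (sample - previous)
        smoothed.append(previous)
    }
    return smoothed
}

let smoothed = lowPass([0, 1, 0, 1, 0])   // high-frequency alternation is damped
```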
[00144] Thermal controller 131 receives waveforms (e.g., filtered waveforms from low-pass filter 129) as an input, and adjusts the waveforms in accordance with thermal conditions of device 100 (e.g., based on internal temperatures detected within device 100, such as the temperature of haptic feedback controller 161, and/or external temperatures detected by device 100). For example, in some cases, the output of haptic feedback controller 161 varies depending on the temperature (e.g., haptic feedback controller 161, in response to receiving the same waveforms, generates a first tactile output when haptic feedback controller 161 is at a first temperature and generates a second tactile output when haptic feedback controller 161 is at a second temperature that is distinct from the first temperature). For example, the magnitude (or the amplitude) of the tactile outputs may vary depending on the temperature. To reduce the effect of the temperature variations, the waveforms are modified (e.g., an amplitude of the waveforms is increased or decreased based on the temperature).
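One way to model that compensation is a temperature-dependent gain applied to the waveform. The linear correction model and reference values below are invented for illustration:

```swift
// Hypothetical thermal-compensation sketch: scale waveform amplitude to
// counteract temperature-dependent variation in actuator output.
typealias Waveform = [Float]

func thermallyAdjust(_ waveform: Waveform,
                     temperatureC: Float,
                     referenceC: Float = 25,
                     gainPerDegree: Float = 0.01) -> Waveform {
    // Example model: output weakens as the actuator heats up, so boost
    // amplitude above the reference temperature (and cut it below).
    let gain = 1 + gainPerDegree * (temperatureC - referenceC)
    return waveform.map { $0 * gain }
}

let adjusted = thermallyAdjust([0.5, -0.5], temperatureC: 35)   // gain 1.1 -> [0.55, -0.55]
```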
[00145] In some embodiments, haptic feedback module 133 (e.g., trigger module 121) is coupled to hardware input processing module 146. In some embodiments, other input controller(s) 160 in Figure 1A includes hardware input processing module 146. In some embodiments, hardware input processing module 146 receives inputs from hardware input device 145 (e.g., other input or control devices 116 in Figure 1A, such as a home button or an intensity-sensitive input surface, such as an intensity-sensitive touch screen). In some embodiments, hardware input device 145 is any input device described herein, such as touch-sensitive display system 112 (Figure 1A), keyboard/mouse 350 (Figure 3), touchpad 355 (Figure 3), one of other input or control devices 116 (Figure 1A), or an intensity-sensitive home button. In some embodiments, hardware input device 145 consists of an intensity-sensitive home button, and not touch-sensitive display system 112 (Figure 1A), keyboard/mouse 350 (Figure 3), or touchpad 355 (Figure 3). In some embodiments, in response to inputs from hardware input device 145 (e.g., an intensity-sensitive home button or a touch screen), hardware input processing module 146 provides one or more trigger signals to haptic feedback module 133 to indicate that a user input satisfying predefined input criteria, such as an input corresponding to a “click” of a home button (e.g., a “down click” or an “up click”), has been detected. In some embodiments, haptic feedback module 133 provides waveforms that correspond to the “click” of a home button in response to the input corresponding to the “click” of a home button, simulating a haptic feedback of pressing a physical home button.
[00146] In some embodiments, the tactile output module includes haptic feedback controller 161 (e.g., haptic feedback controller 161 in Figure 1A), which controls the generation of tactile outputs. In some embodiments, haptic feedback controller 161 is coupled to a plurality of tactile output generators, and selects one or more tactile output generators of the plurality of tactile output generators and sends waveforms to the selected one or more tactile output generators for generating tactile outputs. In some embodiments, haptic feedback controller 161 coordinates tactile output requests that correspond to activation of hardware input device 145 and tactile output requests that correspond to software events (e.g., tactile output requests from haptic feedback module 133) and modifies one or more waveforms of the two or more waveforms to emphasize particular waveform(s) over the rest of the two or more waveforms (e.g., by increasing a scale of the particular waveform(s) and/or decreasing a scale of the rest of the waveforms, such as to prioritize tactile outputs that correspond to activations of hardware input device 145 over tactile outputs that correspond to software events).
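One way to express that prioritization is to tag each request with its source and scale hardware-sourced waveforms up relative to software-sourced ones before mixing; the weights here are illustrative only:

```swift
// Hypothetical prioritization sketch: emphasize waveforms triggered by
// hardware inputs over those requested by software events before mixing.
enum RequestSource { case hardwareInput, softwareEvent }
typealias Waveform = [Float]

func prioritize(_ requests: [(source: RequestSource, waveform: Waveform)]) -> [Waveform] {
    requests.map { request in
        // Illustrative weights: boost hardware-triggered output, soften the rest.
        let scale: Float = (request.source == .hardwareInput) ? 1.0 : 0.5
        return request.waveform.map { $0 * scale }
    }
}

let scaled = prioritize([(.hardwareInput, [0.8]), (.softwareEvent, [0.8])])
// -> [[0.8], [0.4]]
```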
[00147] In some embodiments, as shown in Figure 1C, an output of haptic feedback controller 161 is coupled to audio circuitry of device 100 (e.g., audio circuitry 110, Figure 1A), and provides audio signals to audio circuitry of device 100. In some embodiments, haptic feedback controller 161 provides both waveforms used for generating tactile outputs and audio signals used for providing audio outputs in conjunction with generation of the tactile outputs. In some embodiments, haptic feedback controller 161 modifies audio signals and/or waveforms (used for generating tactile outputs) so that the audio outputs and the tactile outputs are synchronized (e.g., by delaying the audio signals and/or waveforms). In some embodiments, haptic feedback controller 161 includes a digital-to-analog converter used for converting digital waveforms into analog signals, which are received by amplifier 163 and/or tactile output generator 167.
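Synchronization by delay can be sketched as padding whichever signal would otherwise arrive early; the latency figure below is made up for the example:

```swift
// Hypothetical synchronization sketch: delay the audio signal so that audio
// and tactile outputs, which reach the user via paths with different
// latencies, line up in time.
typealias Waveform = [Float]

func delay(_ signal: Waveform, bySamples n: Int) -> Waveform {
    Waveform(repeating: 0, count: n) + signal
}

// Illustrative latency: suppose tactile output lags audio by 3 samples.
let audio: Waveform = [0.9, 0.9, 0.9]
let alignedAudio = delay(audio, bySamples: 3)   // -> [0, 0, 0, 0.9, 0.9, 0.9]
```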
[00148] In some embodiments, the tactile output module includes amplifier 163. In some embodiments, amplifier 163 receives waveforms (e.g., from haptic feedback controller 161) and amplifies the waveforms prior to sending the amplified waveforms to tactile output generator 167 (e.g., any of tactile output generators 167 (Figure 1A) or 357 (Figure 3)). For example, amplifier 163 amplifies the received waveforms to signal levels that are in accordance with physical specifications of tactile output generator 167 (e.g., to a voltage and/or a current required by tactile output generator 167 for generating tactile outputs so that the signals sent to tactile output generator 167 produce tactile outputs that correspond to the waveforms received from haptic feedback controller 161) and sends the amplified waveforms to tactile output generator 167. In response, tactile output generator 167 generates tactile outputs (e.g., by shifting a moveable mass back and forth in one or more dimensions relative to a neutral position of the moveable mass).
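The amplification step amounts to a linear gain chosen to map normalized samples onto the drive level the generator expects; the full-scale voltage below is an invented example value:

```swift
// Hypothetical amplifier sketch: scale normalized waveform samples to the
// drive level (e.g., volts) that the tactile output generator requires.
typealias Waveform = [Float]

func amplify(_ waveform: Waveform, fullScaleVolts: Float = 3.3) -> Waveform {
    // Map samples in [-1, 1] to [-fullScaleVolts, fullScaleVolts].
    waveform.map { $0 * fullScaleVolts }
}

let driveSignal = amplify([0.5, -1.0])   // -> [1.65, -3.3]
```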
[00149] In some embodiments, the tactile output module includes sensor 169, which is coupled to tactile output generator 167. Sensor 169 detects states or state changes (e.g., mechanical position, physical displacement, and/or movement) of tactile output generator 167 or one or more components of tactile output generator 167 (e.g., one or more moving parts, such as a membrane, used to generate tactile outputs). In some embodiments, sensor 169 is a magnetic field sensor (e.g., a Hall effect sensor) or other displacement and/or movement sensor. In some embodiments, sensor 169 provides information (e.g., a position, a displacement, and/or a movement of one or more parts in tactile output generator 167) to haptic feedback controller 161 and, in accordance with the information provided by sensor 169 about the state of tactile output generator 167, haptic feedback controller 161 adjusts the waveforms output from haptic feedback controller 161 (e.g., waveforms sent to tactile output generator 167, optionally via amplifier 163).
[00150] Figure 2 illustrates a portable multifunction device 100 having a touch screen (e.g., touch-sensitive display system 112, Figure 1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI) 200. In these embodiments, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward) and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap.
[00151] Device 100 optionally also includes one or more physical buttons, such as “home” or menu button 204. As described previously, menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch-screen display.
[00152] In some embodiments, device 100 includes the touch-screen display, menu button 204 (sometimes called home button 204), push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, Subscriber Identity Module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In some embodiments, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensities of contacts on touch-sensitive display system 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
[00153] Figure 3 is a block diagram of an example multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device 300 need not be portable. In some embodiments, device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child’s learning toy), a gaming system, or a control device (e.g., a home or industrial controller). Device 300 typically includes one or more processing units (CPU’s) 310, one or more network or other communications interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device 300 includes input/output (I/O) interface 330 comprising display 340, which is typically a touch-screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 for generating tactile outputs on device 300 (e.g., similar to tactile output generator(s) 167 described above with reference to Figure 1A), and sensors 359 (e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to contact intensity sensor(s) 165 described above with reference to Figure 1A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices remotely located from CPU(s) 310. In some embodiments, memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (Figure 1A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (Figure 1A) optionally does not store these modules.
[00154] Each of the above identified elements in Figure 3 is, optionally, stored in one or more of the previously mentioned memory devices. Each of the above identified modules corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are, optionally, combined or otherwise re-arranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 370 optionally stores additional modules and data structures not described above.
[00155] Attention is now directed towards embodiments of user interfaces (“UI”) that are, optionally, implemented on portable multifunction device 100.
[00156] Figure 4A illustrates an example user interface 400 for a menu of applications on portable multifunction device 100 in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device 300. In some embodiments, user interface 400 includes the following elements, or a subset or superset thereof:

• Signal strength indicator(s) for wireless communication(s), such as cellular and Wi-Fi signals;
• Time;
• a Bluetooth indicator;
• a Battery status indicator;
• Tray 408 with icons for frequently used applications, such as:
  o Icon 416 for telephone module 138, labeled “Phone,” which optionally includes an indicator 414 of the number of missed calls or voicemail messages;
  o Icon 418 for e-mail client module 140, labeled “Mail,” which optionally includes an indicator 410 of the number of unread e-mails;
  o Icon 420 for browser module 147, labeled “Browser;” and
  o Icon 422 for video and music player module 152, labeled “Music;” and
• Icons for other applications, such as:
  o Icon 424 for IM module 141, labeled “Messages;”
  o Icon 426 for calendar module 148, labeled “Calendar;”
  o Icon 428 for image management module 144, labeled “Photos;”
  o Icon 430 for camera module 143, labeled “Camera;”
  o Icon 432 for online video module 155, labeled “Online Video;”
  o Icon 434 for stocks widget 149-2, labeled “Stocks;”
  o Icon 436 for map module 154, labeled “Maps;”
  o Icon 438 for weather widget 149-1, labeled “Weather;”
  o Icon 440 for alarm clock widget 149-4, labeled “Clock;”
  o Icon 442 for workout support module 142, labeled “Workout Support;”
  o Icon 444 for notes module 153, labeled “Notes;” and
  o Icon 446 for a settings application or module, which provides access to settings for device 100 and its various applications 136.
[00157] It should be noted that the icon labels illustrated in Figure 4A are merely examples. For example, other labels are, optionally, used for various application icons. In some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon.
[00158] Figure 4B illustrates an example user interface on a device (e.g., device 300, Figure 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355, Figure 3) that is separate from the display 450. Although many of the examples that follow will be given with reference to inputs on touch screen display 112 (where the touch sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in Figure 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in Figure 4B) has a primary axis (e.g., 452 in Figure 4B) that corresponds to a primary axis (e.g., 453 in Figure 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in Figure 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in Figure 4B, 460 corresponds to 468 and 462 corresponds to 470). In this way, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in Figure 4B) are used by the device to manipulate the user interface on the display (e.g., 450 in Figure 4B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein.
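The correspondence between locations on a separate touch-sensitive surface and locations on the display reduces to scaling along the two primary axes; a minimal sketch with invented dimensions follows:

```swift
// Hypothetical mapping sketch: translate a contact on a separate
// touch-sensitive surface (e.g., 451) to the corresponding display
// location (e.g., 450) by scaling along each primary axis.
struct SurfaceSize { let width: Float; let height: Float }
struct Location { let x: Float; let y: Float }

func displayLocation(for contact: Location,
                     surface: SurfaceSize,
                     display: SurfaceSize) -> Location {
    Location(x: contact.x * display.width / surface.width,
             y: contact.y * display.height / surface.height)
}

// Illustrative dimensions only: the center of the surface maps to the
// center of the display.
let mapped = displayLocation(for: Location(x: 50, y: 40),
                             surface: SurfaceSize(width: 100, height: 80),
                             display: SurfaceSize(width: 1000, height: 800))
// -> Location(x: 500, y: 400)
```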
[00159] Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures, etc.), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse based input or a stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously.
[00160] As used herein, the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a “focus selector,” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in Figure 3 or touch-sensitive surface 451 in Figure 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch-screen display (e.g., touch-sensitive display system 112 in Figure 1A or the touch screen in Figure 4A) that enables direct interaction with user interface elements on the touch-screen display, a detected contact on the touch-screen acts as a “focus selector,” so that when an input (e.g., a press input by the contact) is detected on the touch-screen display at a location of a particular user interface element (e.g., a button, window, slider or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch-screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch-screen display) that is controlled by the user so as to communicate the user’s intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact). For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device).
[00161] As used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact or a stylus contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average or a sum) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be readily accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button).
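The combination of multiple force-sensor readings into one estimated contact force, mentioned above as a weighted average or a sum, can be sketched directly; the weights are invented for the example:

```swift
// Hypothetical sketch: combine readings from several force sensors into a
// single estimated contact intensity via a weighted average.
func estimatedIntensity(readings: [Float], weights: [Float]) -> Float {
    precondition(readings.count == weights.count, "one weight per sensor")
    let weightedSum = zip(readings, weights).map { $0.0 * $0.1 }.reduce(0, +)
    let totalWeight = weights.reduce(0, +)
    return totalWeight > 0 ? weightedSum / totalWeight : 0
}

// Sensors nearer the contact get larger weights (illustrative values).
let intensity = estimatedIntensity(readings: [0.2, 0.6, 0.4],
                                   weights: [0.2, 0.5, 0.3])   // -> 0.46
```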
[00162] In some embodiments, contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds is determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100). For example, a mouse "click" threshold of a trackpad or touch-screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch-screen display hardware. Additionally, in some implementations a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click "intensity" parameter).
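By way of illustration only, the following Swift sketch models software-defined intensity thresholds of this kind; the type and member names (IntensitySettings, applyClickIntensity) and the numeric values are assumptions for the example, not elements of the embodiments described above.

```swift
// Software-defined intensity thresholds: nothing is tied to physical actuator
// hardware, so each value can be adjusted in software. Names are illustrative.
struct IntensitySettings {
    var lightPressThreshold = 0.3   // normalized intensity units
    var deepPressThreshold = 0.7

    // Adjusts the whole set of thresholds at once, in the manner of a
    // system-level click "intensity" parameter.
    mutating func applyClickIntensity(_ factor: Double) {
        lightPressThreshold *= factor
        deepPressThreshold *= factor
    }
}

var settings = IntensitySettings()
settings.applyClickIntensity(1.25)   // the user prefers a firmer "click"
print(settings.lightPressThreshold)  // 0.375
```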
[00163] As used in the specification and claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). A characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, a value produced by low-pass filtering the intensity of the contact over a predefined period or starting at a predefined time, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first intensity threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second intensity threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more intensity thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation) rather than being used to determine whether to perform a first operation or a second operation.
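A few of the reductions named above, and the three-way threshold comparison from the example, can be sketched as follows; the function names and threshold values are illustrative assumptions.

```swift
// Several reductions of intensity samples into a characteristic intensity,
// followed by the first/second/third operation decision from the example.
enum CharacteristicIntensity {
    static func maximum(_ samples: [Double]) -> Double { samples.max() ?? 0 }

    static func mean(_ samples: [Double]) -> Double {
        samples.isEmpty ? 0 : samples.reduce(0, +) / Double(samples.count)
    }

    // Top-10-percentile value: the intensity below which 90% of samples fall.
    static func top10Percentile(_ samples: [Double]) -> Double {
        guard !samples.isEmpty else { return 0 }
        let sorted = samples.sorted()
        return sorted[Int(Double(sorted.count - 1) * 0.9)]
    }
}

// Below the first threshold -> first operation, between the two -> second
// operation, above the second threshold -> third operation.
func operation(for intensity: Double, first: Double, second: Double) -> String {
    if intensity <= first { return "first operation" }
    if intensity <= second { return "second operation" }
    return "third operation"
}

let samples = [0.1, 0.4, 0.9, 0.6, 0.3]
print(operation(for: CharacteristicIntensity.mean(samples),
                first: 0.3, second: 0.7))  // second operation
```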
[00164] In some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface may receive a continuous swipe contact transitioning from a start location and reaching an end location (e.g., a drag gesture), at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location may be based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm may be applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity.
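Two of the named smoothing algorithms can be sketched as follows, assuming the intensity samples arrive as an array of normalized values; the window size and smoothing factor are illustrative assumptions.

```swift
// An unweighted sliding average and exponential smoothing over a sequence of
// intensity samples. Window size and alpha are illustrative.
func slidingAverage(_ samples: [Double], window: Int) -> [Double] {
    guard window > 0 else { return samples }
    return samples.indices.map { i in
        let slice = samples[max(0, i - window + 1)...i]
        return slice.reduce(0, +) / Double(slice.count)
    }
}

func exponentialSmoothing(_ samples: [Double], alpha: Double) -> [Double] {
    var previous = samples.first ?? 0
    return samples.map { sample in
        previous = alpha * sample + (1 - alpha) * previous
        return previous
    }
}

// The narrow spike at the third sample is attenuated by both filters.
let noisy = [0.2, 0.25, 0.9, 0.3, 0.28]
print(slidingAverage(noisy, window: 3))
print(exponentialSmoothing(noisy, alpha: 0.4))
```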
[00165] The user interface figures described herein optionally include various intensity diagrams that show the current intensity of the contact on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD (e.g., that is at least initially higher than ITL), and/or one or more other intensity thresholds (e.g., an intensity threshold ITH that is lower than ITL)). This intensity diagram is typically not part of the displayed user interface, but is provided to aid in the interpretation of the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold IT0 below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.
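The relationship between these thresholds can be sketched as a simple classification; the numeric values below are illustrative assumptions chosen only to satisfy IT0 < ITL < ITD, and ITH is omitted for brevity.

```swift
let IT0 = 0.05   // contact-detection intensity threshold
let ITL = 0.30   // light press intensity threshold
let ITD = 0.70   // deep press intensity threshold

func band(for intensity: Double) -> String {
    switch intensity {
    case ..<IT0: return "no contact detected"
    case ..<ITL: return "contact detected: move focus selector only"
    case ..<ITD: return "light press operations"
    default:     return "deep press operations"
    }
}

print(band(for: 0.2))   // contact detected: move focus selector only
```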
[00166] In some embodiments, the response of the device to inputs detected by the device depends on criteria based on the contact intensity during the input. For example, for some "light press" inputs, the intensity of a contact exceeding a first intensity threshold during the input triggers a first response. In some embodiments, the response of the device to inputs detected by the device depends on criteria that include both the contact intensity during the input and time-based criteria. For example, for some "deep press" inputs, the intensity of a contact exceeding a second intensity threshold during the input, greater than the first intensity threshold for a light press, triggers a second response only if a delay time has elapsed between meeting the first intensity threshold and meeting the second intensity threshold. This delay time is typically less than 200 ms (milliseconds) in duration (e.g., 40, 100, or 120 ms, depending on the magnitude of the second intensity threshold, with the delay time increasing as the second intensity threshold increases). This delay time helps to avoid accidental recognition of deep press inputs. As another example, for some "deep press" inputs, there is a reduced-sensitivity time period that occurs after the time at which the first intensity threshold is met. During the reduced-sensitivity time period, the second intensity threshold is increased. This temporary increase in the second intensity threshold also helps to avoid accidental deep press inputs. For other deep press inputs, the response to detection of a deep press input does not depend on time-based criteria.
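A minimal sketch of the delay-time criterion for deep press recognition described above follows; the recognizer structure, thresholds, and the 100 ms delay are illustrative assumptions within the typical ranges stated.

```swift
import Foundation

// The second ("deep press") response fires only once the delay time has
// elapsed between meeting the first and the second intensity threshold.
struct DeepPressRecognizer {
    let firstThreshold = 0.3
    let secondThreshold = 0.7
    let delayTime = 0.1                    // 100 ms, within the typical < 200 ms
    var firstThresholdMetAt: TimeInterval? = nil

    mutating func update(intensity: Double, at time: TimeInterval) -> Bool {
        if intensity >= firstThreshold && firstThresholdMetAt == nil {
            firstThresholdMetAt = time     // when the first threshold was met
        }
        guard let start = firstThresholdMetAt else { return false }
        return intensity >= secondThreshold && time - start >= delayTime
    }
}

var recognizer = DeepPressRecognizer()
print(recognizer.update(intensity: 0.8, at: 0.00))  // false: delay not yet met
print(recognizer.update(intensity: 0.8, at: 0.12))  // true: delay has elapsed
```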
[00167] In some embodiments, one or more of the input intensity thresholds and/or the corresponding outputs vary based on one or more factors, such as user settings, contact motion, input timing, application running, rate at which the intensity is applied, number of concurrent inputs, user history, environmental factors (e.g., ambient noise), focus selector position, and the like. Example factors are described in U.S. Patent Application Serial Nos. 14/399,606 and 14/624,296, which are incorporated by reference herein in their entireties.
[00168] For example, Figure 4C illustrates a dynamic intensity threshold 480 that changes over time based in part on the intensity of touch input 476 over time. Dynamic intensity threshold 480 is a sum of two components, first component 474 that decays over time after a predefined delay time p1 from when touch input 476 is initially detected, and second component 478 that trails the intensity of touch input 476 over time. The initial high intensity threshold of first component 474 reduces accidental triggering of a "deep press" response, while still allowing an immediate "deep press" response if touch input 476 provides sufficient intensity. Second component 478 reduces unintentional triggering of a "deep press" response by gradual intensity fluctuations in a touch input. In some embodiments, when touch input 476 satisfies dynamic intensity threshold 480 (e.g., at point 481 in Figure 4C), the "deep press" response is triggered.
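The two-component threshold of Figure 4C can be sketched as the sum of a decaying term and a trailing term; the decay rate, the delay p1, and the trailing factor below are illustrative assumptions, not values taken from the figure.

```swift
import Foundation

// Dynamic threshold = first component (constant until p1, then decaying)
// + second component (trailing the recent touch intensity).
struct DynamicThreshold {
    let initialCeiling = 1.0   // high initial value of the first component
    let decayRate = 2.0        // exponential decay rate after p1
    let p1 = 0.08              // predefined delay time, in seconds
    let trailFactor = 0.5      // how closely the second component trails input

    func value(at t: TimeInterval, recentIntensity: Double) -> Double {
        let first = t <= p1
            ? initialCeiling
            : initialCeiling * exp(-decayRate * (t - p1))
        let second = trailFactor * recentIntensity
        return first + second
    }
}

let threshold = DynamicThreshold()
// Early on the threshold is high, so only a very intense press triggers an
// immediate "deep press"; later the first component has decayed away.
print(threshold.value(at: 0.05, recentIntensity: 0.4))  // 1.2
print(threshold.value(at: 0.50, recentIntensity: 0.4))  // ≈ 0.63
```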
[00169] Figure 4D illustrates another dynamic intensity threshold 486 (e.g., intensity threshold ID). Figure 4D also illustrates two other intensity thresholds: a first intensity threshold IH and a second intensity threshold IL. In Figure 4D, although touch input 484 satisfies the first intensity threshold IH and the second intensity threshold IL prior to time p2, no response is provided until delay time p2 has elapsed at time 482. Also in Figure 4D, dynamic intensity threshold 486 decays over time, with the decay starting at time 488 after a predefined delay time p1 has elapsed from time 482 (when the response associated with the second intensity threshold IL was triggered). This type of dynamic intensity threshold reduces accidental triggering of a response associated with the dynamic intensity threshold ID immediately after, or concurrently with, triggering a response associated with a lower intensity threshold, such as the first intensity threshold IH or the second intensity threshold IL.
[00170] Figure 4E illustrates yet another dynamic intensity threshold 492 (e.g., intensity threshold ID). In Figure 4E, a response associated with the intensity threshold IL is triggered after the delay time p2 has elapsed from when touch input 490 is initially detected. Concurrently, dynamic intensity threshold 492 decays after the predefined delay time p1 has elapsed from when touch input 490 is initially detected. So a decrease in intensity of touch input 490 after triggering the response associated with the intensity threshold IL, followed by an increase in the intensity of touch input 490, without releasing touch input 490, can trigger a response associated with the intensity threshold ID (e.g., at time 494) even when the intensity of touch input 490 is below another intensity threshold, for example, the intensity threshold IL.
[00171] An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold ITL to an intensity between the light press intensity threshold ITL and the deep press intensity threshold ITD is sometimes referred to as a "light press" input. An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold ITD to an intensity above the deep press intensity threshold ITD is sometimes referred to as a "deep press" input. An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold IT0 to an intensity between the contact-detection intensity threshold IT0 and the light press intensity threshold ITL is sometimes referred to as detecting the contact on the touch-surface. A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold IT0 to an intensity below the contact-detection intensity threshold IT0 is sometimes referred to as detecting liftoff of the contact from the touch-surface. In some embodiments IT0 is zero. In some embodiments, IT0 is greater than zero. In some illustrations a shaded circle or oval is used to represent intensity of a contact on the touch-sensitive surface. In some illustrations, a circle or oval without shading is used to represent a respective contact on the touch-sensitive surface without specifying the intensity of the respective contact.
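These transition definitions can be sketched as an edge classifier over successive characteristic intensity values; the default threshold values reuse the illustrative assumptions from the earlier sketch.

```swift
enum IntensityEvent {
    case contactDetected, lightPress, deepPress, liftoff, none
}

// Classifies the transition between two successive intensity values, with
// the highest threshold crossing taking precedence.
func event(previous: Double, current: Double,
           it0: Double = 0.05, itl: Double = 0.3, itd: Double = 0.7) -> IntensityEvent {
    switch (previous, current) {
    case let (p, c) where p < itd && c >= itd: return .deepPress
    case let (p, c) where p < itl && c >= itl: return .lightPress
    case let (p, c) where p < it0 && c >= it0: return .contactDetected
    case let (p, c) where p >= it0 && c < it0: return .liftoff
    default: return .none
    }
}

print(event(previous: 0.10, current: 0.40))  // lightPress
print(event(previous: 0.40, current: 0.80))  // deepPress
print(event(previous: 0.20, current: 0.00))  // liftoff
```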
[00172] In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. In some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., the respective operation is performed on a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input).
[00173] In some embodiments, the device employs intensity hysteresis to avoid accidental inputs sometimes termed "jitter," where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input). Similarly, in some embodiments, the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances).
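A sketch of hysteresis-based press detection of this kind follows, using the 75% proportion mentioned as one of the examples above; the detector structure and numeric values are illustrative assumptions.

```swift
// The press begins when intensity rises to the press-input threshold and ends
// only when it falls below a lower hysteresis threshold, suppressing "jitter".
struct HysteresisPressDetector {
    let pressThreshold = 0.5
    let hysteresisThreshold = 0.375        // 75% of the press-input threshold
    private(set) var isPressed = false

    // Returns the detected edge, if any, for the latest intensity sample.
    mutating func update(intensity: Double) -> String? {
        if !isPressed && intensity >= pressThreshold {
            isPressed = true
            return "down stroke"
        }
        if isPressed && intensity < hysteresisThreshold {
            isPressed = false
            return "up stroke"
        }
        return nil
    }
}

var detector = HysteresisPressDetector()
// The dip to 0.45 stays inside the hysteresis band, so exactly one
// "down stroke" and one "up stroke" are reported.
for intensity in [0.2, 0.6, 0.45, 0.6, 0.3] {
    if let edge = detector.update(intensity: intensity) { print(edge) }
}
```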
[00174] For ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold. As described above, in some embodiments, the triggering of these responses also depends on time-based criteria being met (e.g., a delay time has elapsed between a first intensity threshold being met and a second intensity threshold being met).
[00175] As used in the specification and claims, the term "tactile output" refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a "down click" or "up click" of a physical actuator button. In some cases, a user will feel a tactile sensation such as a "down click" or "up click" even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an "up click," a "down click," "roughness"), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user. Using tactile outputs to provide haptic feedback to a user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00176] In some embodiments, a tactile output pattern specifies characteristics of a tactile output, such as the amplitude of the tactile output, the shape of a movement waveform of the tactile output, the frequency of the tactile output, and/or the duration of the tactile output.
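These four characteristics can be sketched as a simple value type; the type and member names are illustrative assumptions, and the FullTap, MiniTap, and MicroTap waveform shapes referenced in the Waveform cases are described below with reference to Figures 4F-4H.

```swift
enum Waveform { case fullTap, miniTap, microTap }

struct TactileOutputPattern {
    var waveform: Waveform   // shape of the movement waveform
    var frequency: Double    // characteristic frequency, in Hz
    var amplitude: Double    // normalized peak displacement of the mass
    var duration: Double     // seconds, including start and end buffers
}

// A MiniTap at 150Hz with the 12.8 ms duration quoted later for Figure 4G.
let miniTap150 = TactileOutputPattern(waveform: .miniTap, frequency: 150,
                                      amplitude: 1.0, duration: 0.0128)
print(miniTap150.frequency)
```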
[00177] When tactile outputs with different tactile output patterns are generated by a device (e.g., via one or more tactile output generators that move a moveable mass to generate tactile outputs), the tactile outputs may invoke different haptic sensations in a user holding or touching the device. While the sensation of the user is based on the user's perception of the tactile output, most users will be able to identify changes in waveform, frequency, and amplitude of tactile outputs generated by the device. Thus, the waveform, frequency, and amplitude can be adjusted to indicate to the user that different operations have been performed. As such, tactile outputs with tactile output patterns that are designed, selected, and/or engineered to simulate characteristics (e.g., size, material, weight, stiffness, smoothness, etc.); behaviors (e.g., oscillation, displacement, acceleration, rotation, expansion, etc.); and/or interactions (e.g., collision, adhesion, repulsion, attraction, friction, etc.) of objects in a given environment (e.g., a user interface that includes graphical features and objects, a simulated physical environment with virtual boundaries and virtual objects, a real physical environment with physical boundaries and physical objects, and/or a combination of any of the above) will, in some circumstances, provide helpful feedback to users that reduces input errors and increases the efficiency of the user's operation of the device. Additionally, tactile outputs are, optionally, generated to correspond to feedback that is unrelated to a simulated physical characteristic, such as an input threshold or a selection of an object. Such tactile outputs will, in some circumstances, provide helpful feedback to users that reduces input errors and increases the efficiency of the user's operation of the device.
[00178] In some embodiments, a tactile output with a suitable tactile output pattern serves as a cue for the occurrence of an event of interest in a user interface or behind the scenes in a device. Examples of the events of interest include activation of an affordance (e.g., a real or virtual button, or toggle switch) provided on the device or in a user interface, success or failure of a requested operation, reaching or crossing a boundary in a user interface, entry into a new state, switching of input focus between objects, activation of a new mode, reaching or crossing an input threshold, detection or recognition of a type of input or gesture, etc. In some embodiments, tactile outputs are provided to serve as a warning or an alert for an impending event or outcome that would occur unless a redirection or interruption input is timely detected. Tactile outputs are also used in other contexts to enrich the user experience, improve the accessibility of the device to users with visual or motor difficulties or other accessibility needs, and/or improve efficiency and functionality of the user interface and/or the device. Tactile outputs are optionally accompanied with audio outputs and/or visible user interface changes, which further enhance a user's experience when the user interacts with a user interface and/or the device, facilitate better conveyance of information regarding the state of the user interface and/or the device, and reduce input errors and increase the efficiency of the user's operation of the device.
[00179] Figures 4F-4H provide a set of sample tactile output patterns that may be used, either individually or in combination, either as is or through one or more transformations (e.g., modulation, amplification, truncation, etc.), to create suitable haptic feedback in various scenarios and for various purposes, such as those mentioned above and those described with respect to the user interfaces and methods discussed herein. This example of a palette of tactile outputs shows how a set of three waveforms and eight frequencies can be used to produce an array of tactile output patterns. In addition to the tactile output patterns shown in these figures, each of these tactile output patterns is optionally adjusted in amplitude by changing a gain value for the tactile output pattern, as shown, for example, for FullTap 80Hz, FullTap 200Hz, MiniTap 80Hz, MiniTap 200Hz, MicroTap 80Hz, and MicroTap 200Hz in Figures 4I-4K, which are each shown with variants having a gain of 1.0, 0.75, 0.5, and 0.25. As shown in Figures 4I-4K, changing the gain of a tactile output pattern changes the amplitude of the pattern without changing the frequency of the pattern or changing the shape of the waveform. In some embodiments, changing the frequency of a tactile output pattern also results in a lower amplitude as some tactile output generators are limited by how much force can be applied to the moveable mass, and thus higher frequency movements of the mass are constrained to lower amplitudes to ensure that the acceleration needed to create the waveform does not require force outside of an operational force range of the tactile output generator (e.g., the peak amplitudes of the FullTap at 230Hz, 270Hz, and 300Hz are lower than the amplitudes of the FullTap at 80Hz, 100Hz, 125Hz, and 200Hz).
[00180] Figures 4F-4K show tactile output patterns that have a particular waveform. The waveform of a tactile output pattern represents the pattern of physical displacements relative to a neutral position (e.g., xzero) versus time that a moveable mass goes through to generate a tactile output with that tactile output pattern. For example, a first set of tactile output patterns shown in Figure 4F (e.g., tactile output patterns of a "FullTap") each have a waveform that includes an oscillation with two complete cycles (e.g., an oscillation that starts and ends in a neutral position and crosses the neutral position three times). A second set of tactile output patterns shown in Figure 4G (e.g., tactile output patterns of a "MiniTap") each have a waveform that includes an oscillation that includes one complete cycle (e.g., an oscillation that starts and ends in a neutral position and crosses the neutral position one time). A third set of tactile output patterns shown in Figure 4H (e.g., tactile output patterns of a "MicroTap") each have a waveform that includes an oscillation that includes one half of a complete cycle (e.g., an oscillation that starts and ends in a neutral position and does not cross the neutral position). The waveform of a tactile output pattern also includes a start buffer and an end buffer that represent the gradual speeding up and slowing down of the moveable mass at the start and at the end of the tactile output. The example waveforms shown in Figures 4F-4K include xmin and xmax values which represent the maximum and minimum extent of movement of the moveable mass. For larger electronic devices with larger moveable masses, there may be larger or smaller minimum and maximum extents of movement of the mass. The examples shown in Figures 4F-4K describe movement of a mass in one dimension; however, similar principles would also apply to movement of a moveable mass in two or three dimensions.
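A sketch of the three waveform shapes as pure sinusoids follows, parameterized by the cycle counts given above (2, 1, and 0.5); gain scales amplitude without changing frequency or shape, as Figures 4I-4K describe. Start and end buffers and generator dynamics are omitted, and the sample rate is an illustrative assumption.

```swift
import Foundation

// Displacement of the moveable mass relative to the neutral position, as a
// pure sinusoid: 2 cycles (FullTap), 1 cycle (MiniTap), 0.5 cycle (MicroTap).
func displacementSamples(cycles: Double, frequency: Double, gain: Double,
                         sampleRate: Double = 8000) -> [Double] {
    let count = Int(cycles / frequency * sampleRate)
    return (0..<count).map { i in
        gain * sin(2 * Double.pi * frequency * Double(i) / sampleRate)
    }
}

// A MicroTap (half a cycle) never crosses the neutral position: no sample is
// negative, matching the description of Figure 4H.
let micro = displacementSamples(cycles: 0.5, frequency: 150, gain: 0.75)
print(micro.allSatisfy { $0 >= 0 })  // true
```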
[00181]	As shown in Figures 4F-4K, each tactile output pattern also has a corresponding characteristic frequency that affects the "pitch" of a haptic sensation that is felt by a user from a tactile output with that characteristic frequency. For a continuous tactile output, the characteristic frequency represents the number of cycles that are completed within a given period of time (e.g., cycles per second) by the moveable mass of the tactile output generator. For a discrete tactile output, a discrete output signal (e.g., with 0.5, 1, or 2 cycles) is generated, and the characteristic frequency value specifies how fast the moveable mass needs to move to generate a tactile output with that characteristic frequency. As shown in Figures 4F-4H, for each type of tactile output (e.g., as defined by a respective waveform, such as FullTap, MiniTap, or MicroTap), a higher frequency value corresponds to faster movement(s) by the moveable mass, and hence, in general, a shorter time to complete the tactile output (e.g., including the time to complete the required number of cycle(s) for the discrete tactile output, plus a start and an end buffer time). For example, a FullTap with a characteristic frequency of 80Hz takes longer to complete than a FullTap with a characteristic frequency of 100Hz (e.g., 35.4ms vs. 28.3ms in Figure 4F). In addition, for a given frequency, a tactile output with more cycles in its waveform at a respective frequency takes longer to complete than a tactile output with fewer cycles in its waveform at the same respective frequency. For example, a FullTap at 150Hz takes longer to complete than a MiniTap at 150Hz (e.g., 19.4ms vs. 12.8ms), and a MiniTap at 150Hz takes longer to complete than a MicroTap at 150Hz (e.g., 12.8ms vs. 9.4ms). However, for tactile output patterns with different frequencies this rule may not apply (e.g., tactile outputs with more cycles but a higher frequency may take a shorter amount of time to complete than tactile outputs with fewer cycles but a lower frequency, and vice versa). For example, at 300Hz, a FullTap takes as long as a MiniTap (e.g., 9.9 ms).
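The timing relationship described above can be illustrated with a brief sketch; the type name and buffer value below are assumptions for illustration only, not the device's actual implementation (the buffer is simplified to a single fixed value, whereas the figures show frequency-dependent start and end buffers):

```swift
// Illustrative sketch: a discrete tactile output's completion time derived
// from its cycle count and characteristic frequency, plus buffer time.
struct TactileOutputPattern {
    let cycles: Double        // e.g., FullTap = 2, MiniTap = 1, MicroTap = 0.5
    let frequencyHz: Double   // characteristic frequency of the moveable mass
    let bufferSeconds: Double // combined start and end buffer time (assumed fixed here)

    // Time to complete the required cycle(s) plus the start/end buffers.
    var duration: Double { cycles / frequencyHz + bufferSeconds }
}

let fullTap80  = TactileOutputPattern(cycles: 2, frequencyHz: 80,  bufferSeconds: 0.008)
let fullTap100 = TactileOutputPattern(cycles: 2, frequencyHz: 100, bufferSeconds: 0.008)
// For the same waveform, a higher characteristic frequency completes sooner,
// consistent with the 80Hz vs. 100Hz FullTap comparison above.
assert(fullTap100.duration < fullTap80.duration)
```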
[00182]	As shown in Figures 4F-4K, a tactile output pattern also has a characteristic amplitude that affects the amount of energy that is contained in a tactile signal, or a "strength" of a haptic sensation that may be felt by a user through a tactile output with that characteristic amplitude. In some embodiments, the characteristic amplitude of a tactile output pattern refers to an absolute or normalized value that represents the maximum displacement of the moveable mass from a neutral position when generating the tactile output. In some embodiments, the characteristic amplitude of a tactile output pattern is adjustable, e.g., by a fixed or dynamically determined gain factor (e.g., a value between 0 and 1), in accordance with various conditions (e.g., customized based on user interface contexts and behaviors) and/or preconfigured metrics (e.g., input-based metrics, and/or user-interface-based metrics). In some embodiments, an input-based metric (e.g., an intensity-change metric or an input-speed metric) measures a characteristic of an input (e.g., a rate of change of a characteristic intensity of a contact in a press input or a rate of movement of the contact across a touch-sensitive surface) during the input that triggers generation of a tactile output. In some embodiments, a user-interface-based metric (e.g., a speed-across-boundary metric) measures a characteristic of a user interface element (e.g., a speed of movement of the element across a hidden or visible boundary in a user interface) during the user interface change that triggers generation of the tactile output. In some embodiments, the characteristic amplitude of a tactile output pattern may be modulated by an "envelope," and the peaks of adjacent cycles may have different amplitudes, where one of the waveforms shown above is further modified by multiplication by an envelope parameter that changes over time (e.g., from 0 to 1) to gradually adjust the amplitude of portions of the tactile output over time as the tactile output is being generated.
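A minimal sketch of the gain-and-envelope modulation described above follows; the function and closure names are assumptions for illustration, not an actual device API:

```swift
// Illustrative sketch: a waveform sample's amplitude is scaled by a gain
// factor (a value between 0 and 1) and by a time-varying envelope parameter.
func modulatedSample(base: Double,
                     gain: Double,
                     envelope: (Double) -> Double,
                     at t: Double) -> Double {
    let clampedGain = min(max(gain, 0), 1)   // keep the gain factor in [0, 1]
    return base * clampedGain * envelope(t)  // envelope varies over time
}

// Example: a linear fade-in envelope that ramps from 0 to 1 over 25 ms.
let fadeIn: (Double) -> Double = { t in min(max(t / 0.025, 0), 1) }
let sample = modulatedSample(base: 1.0, gain: 0.8, envelope: fadeIn, at: 0.010)
// sample == 1.0 * 0.8 * 0.4 == 0.32
```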
[00183]	Although only specific frequencies, amplitudes, and waveforms are represented in the sample tactile output patterns in Figures 4F-4K for illustrative purposes, tactile output patterns with other frequencies, amplitudes, and waveforms may be used for similar purposes. For example, waveforms that have between 0.5 and 4 cycles can be used. Other frequencies in the range of 60Hz-400Hz may be used as well.
[00184]	Attention is now directed towards embodiments of user interfaces ("UI") and associated processes that may be implemented on an electronic device, such as portable multifunction device 100 or device 300, with a display, a touch-sensitive surface, (optionally) one or more tactile output generators for generating tactile outputs, and (optionally) one or more sensors to detect intensities of contacts with the touch-sensitive surface.
[00185]	Figures 5A-5AT illustrate example user interfaces for displaying a representation of a virtual object while switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.
[00186]	Figure 5A illustrates a real-world context in which the user interfaces described with regard to Figures 5B-5AT are used.
[00187]	Figure 5A illustrates physical space 5002 in which a table 5004 is located. Device 100 is held by a user in the user's hand 5006.
[00188]	Figure 5B illustrates a messaging user interface 5008, displayed on display 112. The messaging user interface 5008 includes a message bubble 5010 that includes a received text message 5012, a message bubble 5014 that includes a sent text message 5016, a message bubble 5018 that includes a virtual object (e.g., virtual chair 5020) received in a message, and a virtual object indicator 5022 to indicate that the virtual chair 5020 is an object that is viewable in an augmented reality view (e.g., within a representation of the field of view of one or more cameras of device 100). Messaging user interface 5008 also includes a message input region 5024 that is configured to display message input.
[00189]	Figures 5C-5G illustrate an input that causes a portion of the messaging user interface 5008 to be replaced by a field of view of one or more cameras of device 100. In Figure 5C, a contact 5026 with touch screen 112 of device 100 is detected. A characteristic intensity of the contact is above a contact detection intensity threshold IT0 and below a hint press intensity threshold ITH, as illustrated by intensity level meter 5028. In Figure 5D, an increase in the characteristic intensity of the contact 5026 above the hint press intensity threshold ITH, as illustrated by intensity level meter 5028, has caused the area of message bubble 5018 to increase, the size of the virtual chair 5020 to increase, and messaging user interface 5008 to begin to be blurred behind message bubble 5018 (e.g., to provide visual feedback to the user of the effect of increasing the characteristic intensity of the contact). In Figure 5E, an increase in the characteristic intensity of the contact 5026 above the light press intensity threshold ITL, as illustrated by intensity level meter 5028, has caused message bubble 5018 to be replaced by a platter 5030, the size of the virtual chair 5020 to increase further, and increased blurring of messaging user interface 5008 behind platter 5030. In Figure 5F, an increase in the characteristic intensity of the contact 5026 above the deep press intensity threshold ITD, as illustrated by intensity level meter 5028, causes tactile output generators 167 of the device 100 to output a tactile output (as illustrated at 5032) to indicate that criteria have been met for replacing a portion of the messaging user interface 5008 with a field of view of one or more cameras of device 100.
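The threshold-driven progression in Figures 5C-5F can be sketched as a pure function of the contact's current characteristic intensity; the enumeration and function names below are assumptions for illustration:

```swift
// Illustrative sketch: mapping a contact's characteristic intensity to the
// interface states shown in Figures 5C-5F, relative to the thresholds
// IT0 (contact detection), ITH (hint), ITL (light press), and ITD (deep press).
enum PressState { case none, detected, hint, lightPress, deepPress }

func pressState(intensity: Double,
                it0: Double, itH: Double, itL: Double, itD: Double) -> PressState {
    switch intensity {
    case ..<it0: return .none        // no contact detected
    case ..<itH: return .detected    // Figure 5C
    case ..<itL: return .hint        // Figure 5D: bubble grows, background blurs
    case ..<itD: return .lightPress  // Figure 5E: bubble replaced by platter 5030
    default:     return .deepPress   // Figure 5F: tactile output 5032, camera view
    }
}
```

Because the state in this sketch is a pure function of the current intensity, lowering the intensity before ITD is crossed naturally restores the earlier interface state, consistent with the reversibility described in the next paragraph.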
[00190]	In some embodiments, before the characteristic intensity of the contact 5026 reaches the deep press intensity threshold ITD, as illustrated in Figure 5F, the progression illustrated in Figures 5C-5E is reversible. For example, reducing the characteristic intensity of the contact 5026 after the increases illustrated in Figures 5D and/or 5E will cause the interface state that corresponds to the decreased intensity level of the contact 5026 to be displayed (e.g., the interface as shown in Figure 5E is shown in accordance with a determination that the reduced characteristic intensity of the contact is above the light press intensity threshold ITL, the interface as shown in Figure 5D is shown in accordance with a determination that the reduced characteristic intensity of the contact is above the hint press intensity threshold ITH, and the interface as shown in Figure 5C is shown in accordance with a determination that the reduced characteristic intensity of the contact is below the hint press intensity threshold ITH). In some embodiments, reducing the characteristic intensity of the contact 5026 after the increases illustrated in Figures 5D and/or 5E will cause the interface as shown in Figure 5C to be redisplayed.
[00191]	Figures 5F-5J illustrate an animated transition during which a portion of the messaging user interface is replaced with the field of view of one or more cameras (hereinafter "the camera(s)") of device 100. From Figure 5F to Figure 5G, contact 5026 has lifted off of touch screen 112 and the virtual chair 5020 has rotated toward its final position in Figure 5I. In Figure 5G, the field of view 5034 of the camera(s) has begun to fade into view in platter 5030 (as indicated by the dotted lines). In Figure 5H, the field of view 5034 of the camera(s) (e.g., showing a view of physical space 5002 as captured by the camera(s)) has completed fading into view in platter 5030. From Figure 5H to Figure 5I, the virtual chair 5020 has continued its rotation toward its final position in Figure 5I. In Figure 5I, the tactile output generators 167 have output a tactile output (as illustrated at 5036) to indicate that at least one plane (e.g., a floor surface 5038) has been detected in the field of view 5034 of the camera(s). The virtual chair 5020 is placed on the detected plane (e.g., in accordance with a determination by device 100 that the virtual object is configured to be placed in an upright orientation on a detected horizontal surface, such as floor surface 5038). The size of the virtual chair 5020 is continuously adjusted on display 112 as the portion of the messaging user interface is transformed into a representation of the field of view 5034 of the camera(s) on display 112. For example, the scale of the virtual chair 5020 relative to the physical space 5002 as shown in the field of view 5034 of the camera(s) is determined based on a predefined "real world" size of the virtual chair 5020 and/or a detected size of objects (such as table 5004) in the field of view 5034 of the camera(s). In Figure 5J, the virtual chair 5020 is displayed at its final position with a predefined orientation relative to the detected floor surface in the field of view 5034 of the camera(s). In some embodiments, the initial landing position of the virtual chair 5020 is a predefined position relative to the detected plane in the field of view of the camera(s), such as in the center of an unoccupied region of the detected plane. In some embodiments, the initial landing position of the virtual chair 5020 is determined in accordance with a lift-off position of the contact 5026 (e.g., the lift-off position of the contact 5026 may be different from the initial touch-down location of the contact 5026 due to movement of contact 5026 across the touch screen 112 after the criteria for transitioning to the augmented reality environment have been met in Figure 5F).
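The initial-landing-position behavior can be sketched as follows; the types and the lowest-plane selection rule are assumptions for illustration (a real implementation might instead pick the plane nearest the lift-off position):

```swift
// Illustrative sketch: snapping a virtual object upright onto a detected
// horizontal plane, e.g., the center of an unoccupied region of a floor plane.
struct DetectedPlane {
    let elevation: Double                         // plane height in world space
    let unoccupiedCenter: (x: Double, z: Double)  // center of an unoccupied region
}

func initialLandingPosition(planes: [DetectedPlane]) -> (x: Double, y: Double, z: Double)? {
    // Place the object on the lowest detected plane (e.g., the floor surface).
    guard let floor = planes.min(by: { $0.elevation < $1.elevation }) else {
        return nil   // no plane detected yet; defer placement
    }
    return (floor.unoccupiedCenter.x, floor.elevation, floor.unoccupiedCenter.z)
}
```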
[00192]	Figures 5K-5L illustrate movement (e.g., by the user's hand 5006) of device 100 that adjusts the field of view 5034 of the camera(s). As the device 100 is moved relative to physical space 5002, the displayed field of view 5034 of the camera(s) changes and virtual chair 5020 remains affixed to the same position and orientation relative to floor surface 5038 in the displayed field of view 5034 of the camera(s).
[00193]	Figures 5M-5Q illustrate an input that causes movement of virtual chair 5020 across floor surface 5038 in the displayed field of view 5034 of the camera(s). In Figure 5N, a contact 5040 with touch screen 112 of device 100 is detected at a location that corresponds to virtual chair 5020. In Figures 5N-5O, as the contact 5040 moves along a path indicated by arrow 5042, virtual chair 5020 is dragged by the contact 5040. As the virtual chair 5020 is moved by contact 5040, the size of the virtual chair 5020 changes to maintain the scale of the virtual chair 5020 relative to physical space 5002 as shown in the field of view 5034 of the camera(s). For example, in Figures 5N-5P, as virtual chair 5020 moves from the foreground of the field of view 5034 of the camera(s) to a position that is further from device 100 and closer to table 5004 in the field of view 5034 of the camera(s), the size of the virtual chair 5020 decreases (e.g., such that the scale of the chair relative to table 5004 in the field of view 5034 of the camera(s) is maintained). Additionally, as the virtual chair 5020 is moved by contact 5040, planes identified in the field of view 5034 of the camera(s) are highlighted. For example, floor plane 5038 is highlighted in Figure 5O. In Figures 5O-5P, as the contact 5040 moves along a path indicated by arrow 5044, virtual chair 5020 continues to be dragged by the contact 5040. In Figure 5Q, the contact 5040 has lifted off of touch screen 112. In some embodiments, as shown in Figures 5N-5Q, the movement path of the virtual chair 5020 is constrained by the floor surface 5038 in the field of view 5034 of the cameras, as if the virtual chair 5020 is dragged across the floor surface 5038 by the contact 5040. In some embodiments, contact 5040 as described with regard to Figures 5N-5P is a continuation of contact 5026 as described with regard to Figures 5C-5F (e.g., there is no lift-off of contact 5026, and the same contact that causes the portion of messaging user interface 5008 to be replaced by the field of view 5034 of the camera(s) also drags the virtual chair 5020 in the field of view 5034 of the camera(s)).
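The shrinking of the chair as it moves away from device 100 follows from perspective projection; the following is a hedged sketch using a pinhole-camera simplification, with made-up numbers (the focal length and object size are not taken from the figures):

```swift
// Illustrative sketch: on-screen size falls off linearly with distance from
// the camera, so a dragged object shrinks as it moves toward the background.
func displayedHeight(realWorldHeightMeters h: Double,
                     distanceMeters d: Double,
                     focalLengthPixels f: Double) -> Double {
    return h * f / d   // pinhole projection of an upright object
}

// A 0.9 m chair at 2 m spans 675 px; dragged back to 3 m it spans 450 px,
// keeping its scale relative to the physical space consistent.
let nearPx = displayedHeight(realWorldHeightMeters: 0.9, distanceMeters: 2, focalLengthPixels: 1500)
let farPx  = displayedHeight(realWorldHeightMeters: 0.9, distanceMeters: 3, focalLengthPixels: 1500)
```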
[00194]	Figures 5Q-5U illustrate an input that causes movement of virtual chair 5020 from floor surface 5038 to a different plane (e.g., table surface 5046) detected in the field of view 5034 of the camera(s). In Figure 5R, a contact 5048 with touch screen 112 of device 100 is detected at a location that corresponds to virtual chair 5020. In Figures 5R-5S, as the contact 5048 moves along a path indicated by arrow 5050, virtual chair 5020 is dragged by the contact 5048. As the virtual chair 5020 is moved by contact 5048, the size of the virtual chair 5020 changes to maintain the scale of the virtual chair 5020 relative to the physical space 5002 as shown in the field of view 5034 of the camera(s). Additionally, as the virtual chair 5020 is moved by contact 5048, table surface plane 5046 is highlighted (e.g., as shown in Figure 5S). In Figures 5S-5T, as the contact 5048 moves along a path indicated by arrow 5052, virtual chair 5020 continues to be dragged by the contact 5048. In Figure 5U, the contact 5048 has lifted off of touch screen 112, and virtual chair 5020 is placed on the table surface plane 5046 in an upright orientation facing the same direction as before.
[00195]	Figures 5U-5AD illustrate an input that drags the virtual chair 5020 to the edge of touch screen display 112, which causes the field of view 5034 of the camera(s) to cease to be displayed. In Figure 5V, a contact 5054 with touch screen 112 of device 100 is detected at a location that corresponds to virtual chair 5020. In Figures 5V-5W, as the contact 5054 moves along a path indicated by arrow 5056, virtual chair 5020 is dragged by the contact 5054. In Figures 5W-5X, as the contact 5054 moves along a path indicated by arrow 5058, virtual chair 5020 continues to be dragged by the contact 5054 to a position shown in Figure 5X.
[00196]	The input by contact 5054 illustrated in Figures 5U-5X causes a transition, as shown in Figures 5Y-5AD, from displaying the field of view 5034 of the camera(s) in platter 5030 to ceasing to display the field of view 5034 of the camera(s) and returning to fully displaying the messaging user interface 5008. In Figure 5Y, the field of view 5034 of the camera(s) begins to fade out in platter 5030. In Figures 5Y-5Z, platter 5030 transitions to message bubble 5018. In Figure 5Z, the field of view 5034 of the camera(s) is no longer displayed. In Figure 5AA, messaging user interface 5008 ceases to be blurred and the size of message bubble 5018 returns to the original size of message bubble 5018 (e.g., as shown in Figure 5B).
[00197]	Figures 5AA-5AD illustrate an animated transition of virtual chair 5020 that occurs as virtual chair 5020 moves from the position that corresponds to contact 5054 in Figure 5AA to the original position of virtual chair 5020 in messaging user interface 5008 (e.g., as shown in Figure 5B). In Figure 5AB, contact 5054 has lifted off of touch screen 112. In Figures 5AB-5AC, virtual chair 5020 gradually increases in size and rotates toward its final position in Figure 5AD.
[00198]	In Figures 5B-5AD, the virtual chair 5020 has substantially the same three-dimensional appearance within the messaging user interface 5008 and within the displayed field of view 5034 of the camera(s), and the virtual chair 5020 maintains that same three-dimensional appearance during the transition from displaying the messaging user interface 5008 to displaying the field of view 5034 of the camera(s) and during the reverse transition. In some embodiments, the representation of virtual chair 5020 has a different appearance in the application user interface (e.g., the messaging user interface) than in the augmented reality environment (e.g., in the displayed field of view of the camera(s)). For example, the virtual chair 5020 optionally has a two-dimensional or more stylized look in the application user interface, while having a three-dimensional and more realistic and textured look in the augmented reality environment; and the intermediate appearances of the virtual chair 5020 during the transition between displaying the application user interface and displaying the augmented reality environment are a series of interpolated appearances between the two-dimensional look and the three-dimensional look of the virtual chair 5020.
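The interpolated intermediate appearances can be sketched as a blend between two parameter sets; the parameter names below are assumptions chosen for illustration, not the actual rendering parameters:

```swift
// Illustrative sketch: producing intermediate appearances between the
// stylized in-application look and the realistic augmented-reality look.
struct Appearance {
    var extrusion: Double       // 0 = flat, two-dimensional; 1 = fully 3D
    var textureDetail: Double   // 0 = stylized; 1 = realistic and textured
}

func interpolate(from a: Appearance, to b: Appearance, progress: Double) -> Appearance {
    let t = min(max(progress, 0), 1)   // clamp transition progress to [0, 1]
    return Appearance(
        extrusion: a.extrusion + (b.extrusion - a.extrusion) * t,
        textureDetail: a.textureDetail + (b.textureDetail - a.textureDetail) * t
    )
}
```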
[00199]	Figure 5AE illustrates an Internet browser user interface 5060. The Internet browser user interface 5060 includes a URL/search input region 5062 that is configured to display a URL/search input for a web browser, and browser controls 5064 (e.g., navigation controls including a back button and a forward button, a share control for displaying a sharing interface, a bookmark control for displaying a bookmarks interface, and a tabs control for displaying a tabs interface). Internet browser user interface 5060 also includes web objects 5066, 5068, 5070, 5072, 5074, and 5076. In some embodiments, a respective web object includes a link, such that in response to a tap input on the respective web object, a linked Internet location that corresponds to the web object is displayed in the Internet browser user interface 5060 (e.g., replacing display of the respective web object). Web objects 5066, 5068, and 5072 include two-dimensional representations of three-dimensional virtual objects, as indicated by virtual object indicators 5078, 5080, and 5082, respectively. Web objects 5070, 5074, and 5076 include two-dimensional images (but the two-dimensional images of web objects 5070, 5074, and 5076 do not correspond to three-dimensional virtual objects, as indicated by the absence of the virtual object indicators). The virtual object that corresponds to web object 5068 is a lamp object 5084.
[00200]	Figures 5AF-5AH illustrate an input that causes a portion of the Internet browser user interface 5060 to be replaced by the field of view 5034 of the camera(s). In Figure 5AF, a contact 5086 with touch screen 112 of device 100 is detected. A characteristic intensity of the contact is above a contact detection intensity threshold IT0 and below a hint press intensity threshold ITH, as illustrated by intensity level meter 5028. In Figure 5AG, an increase in the characteristic intensity of the contact 5086 above the light press intensity threshold ITL, as illustrated by intensity level meter 5028, has caused the field of view 5034 of the camera(s) to be displayed in web object 5068 (e.g., overlaid by virtual lamp 5084). In Figure 5AH, an increase in the characteristic intensity of the contact 5086 above the deep press intensity threshold ITD, as illustrated by intensity level meter 5028, causes the field of view 5034 of the camera(s) to replace a larger portion of Internet browser user interface 5060 (e.g., leaving only URL/search input region 5062 and browser controls 5064), and tactile output generators 167 of the device 100 output a tactile output (as illustrated at 5088) to indicate that criteria have been met for replacing a portion of the Internet browser user interface 5060 with the field of view 5034 of the camera(s). In some embodiments, in response to the input described with regard to Figures 5AF-5AH, the field of view 5034 of the camera(s) fully replaces Internet browser user interface 5060 on touch screen display 112.
[00201]	Figures 5AI-5AM illustrate an input that causes movement of virtual lamp 5084. In Figures 5AI-5AJ, as the contact 5086 moves along a path indicated by arrow 5090, virtual lamp 5084 is dragged by the contact 5086. As the virtual lamp 5084 is moved by contact 5086, the size of the virtual lamp 5084 is unchanged and the path of the virtual lamp 5084 is optionally unconstrained by the structure of the physical space captured in the field of view of the camera(s). As the virtual lamp 5084 is moved by contact 5086, planes identified in the field of view 5034 of the camera(s) are highlighted. For example, floor plane 5038 is highlighted in Figure 5AJ, as the virtual lamp 5084 moves over the floor plane 5038. In Figures 5AJ-5AK, as the contact 5086 moves along a path indicated by arrow 5092, virtual lamp 5084 continues to be dragged by the contact 5086. In Figures 5AK-5AL, as the contact 5086 moves along a path indicated by arrow 5094, virtual lamp 5084 continues to be dragged by the contact 5086, floor plane 5038 ceases to be highlighted, and table surface 5046 is highlighted as the virtual lamp 5084 moves over the table 5004. In Figure 5AM, the contact 5086 has lifted off of touch screen 112. When the contact 5086 has lifted off, the size of the virtual lamp 5084 is adjusted to have a correct scale relative to table 5004 in the field of view 5034 of the camera(s) and the virtual lamp 5084 is placed in an upright orientation on the table surface 5046 in the field of view 5034 of the cameras.
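The floor-then-table highlighting as the lamp is dragged can be sketched as a lookup of the detected plane beneath the object's current position; the types and axis-aligned extents below are simplifying assumptions:

```swift
// Illustrative sketch: while a virtual object is dragged, highlight whichever
// detected plane lies beneath it and unhighlight the rest (Figures 5AJ-5AL).
struct PlaneExtent {
    let name: String
    let minX: Double, maxX: Double   // horizontal extent along x
    let minZ: Double, maxZ: Double   // horizontal extent along z
}

func planeUnderObject(x: Double, z: Double, planes: [PlaneExtent]) -> PlaneExtent? {
    // Return the first detected plane whose extent contains the object's
    // position; the caller highlights it and clears highlighting elsewhere.
    return planes.first { x >= $0.minX && x <= $0.maxX && z >= $0.minZ && z <= $0.maxZ }
}
```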
[00202]	Figures 5AM-5AQ illustrate an input that drags the virtual lamp 5084 to the edge of touch screen display 112, which causes the field of view 5034 of the camera(s) to cease to be displayed and the Internet browser user interface 5060 to be restored. In Figure 5AN, a contact 5096 with touch screen 112 of device 100 is detected at a location that corresponds to virtual lamp 5084. In Figures 5AN-5AO, as the contact 5096 moves along a path indicated by arrow 5098, the virtual lamp 5084 is dragged by the contact 5096. In Figures 5AO-5AP, as the contact 5096 moves along a path indicated by arrow 5100, the virtual lamp 5084 continues to be dragged by the contact 5096 to a position shown in Figure 5AP. In Figure 5AQ, the contact 5096 has lifted off of touch screen 112.
[00203]	The input by contact 5096 illustrated in Figures 5AM-5AP causes a transition, as shown in Figures 5AQ-5AT, from displaying the field of view 5034 of the camera(s) to ceasing to display the field of view 5034 of the camera(s) and returning to fully displaying the Internet browser user interface 5060. In Figure 5AR, the field of view 5034 of the camera(s) begins to fade out (as indicated by the dotted lines). In Figures 5AR-5AT, virtual lamp 5084 increases in size and moves toward its original position in the Internet browser user interface 5060. In Figure 5AS, the field of view 5034 of the camera(s) is no longer displayed and the Internet browser user interface 5060 begins to fade in (as indicated by the dotted lines). In Figure 5AT, the Internet browser user interface 5060 is fully displayed and virtual lamp 5084 has returned to its original size and location within Internet browser user interface 5060.
[00204]	Figures 6A-6AJ illustrate example user interfaces for displaying a first representation of a virtual object in a first user interface region, a second representation of the virtual object in a second user interface region, and a third representation of the virtual object with a representation of a field of view of one or more cameras, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.
[00205]	Figure 6A illustrates a messaging user interface 5008 that includes a message bubble 5010 that includes a received text message 5012, a message bubble 5014 that includes a sent text message 5016, and a message bubble 5018 that includes a virtual object (e.g., virtual chair 5020) received in a message and a virtual object indicator 5022 to indicate that the virtual chair 5020 is an object that is viewable in an augmented reality view (e.g., within a displayed field of view of one or more cameras of device 100). Messaging user interface 5008 is described in further detail with regard to Figure 5B.
[00206]	Figures 6B-6C illustrate an input that causes rotation of the virtual chair 5020. In Figure 6B, a contact 6002 with touch screen 112 of device 100 is detected. The contact 6002 moves across touch screen 112 along a path indicated by arrow 6004. In Figure 6C, in response to the movement of the contact, messaging user interface 5008 is scrolled upward (causing message bubble 5010 to scroll off of the display, causing message bubbles 5014 and 5018 to scroll upward, and revealing an additional message bubble 6005) and virtual chair 5020 is rotated (e.g., tilted upward). The magnitude and direction of the rotation of virtual chair 5020 correspond to the movement of contact 6002 along the path indicated by arrow 6004. In Figure 6D, the contact 6002 has lifted off of touch screen 112. In some embodiments, this rotational behavior of the virtual chair 5020 within the message bubble 5018 is used as an indication that the virtual chair 5020 is a virtual object that is viewable in an augmented reality environment including the field of view of the camera(s) of the device 100.
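The correspondence between the contact's movement and the object's rotation can be sketched with a simple linear mapping; the scale factor is an assumption for illustration:

```swift
// Illustrative sketch: the tilt applied to the virtual object tracks the
// vertical component of the contact's movement, so both the magnitude and
// the direction of the rotation correspond to the drag along arrow 6004.
func tiltDegrees(forVerticalDragPoints dy: Double,
                 degreesPerPoint: Double = 0.5) -> Double {
    return dy * degreesPerPoint   // upward drag (positive dy) tilts upward
}

let tilt = tiltDegrees(forVerticalDragPoints: 40)   // a 40 pt drag -> 20 degrees
```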
[00207]	Figures 6E-6L illustrate an input that causes the messaging user interface 5008 to be replaced by a staging user interface 6010 and that subsequently changes the orientation of virtual chair 5020. In Figure 6E, a contact 6006 with touch screen 112 of device 100 is detected. A characteristic intensity of the contact is above a contact detection intensity threshold IT0 and below a hint press intensity threshold ITH, as illustrated by intensity level meter 5028. In Figure 6F, an increase in the characteristic intensity of the contact 6006 above the hint press intensity threshold ITH, as illustrated by intensity level meter 5028, has caused the area of message bubble 5018 to increase, the size of the virtual chair 5020 to increase, and messaging user interface 5008 to begin to be blurred behind message bubble 5018 (e.g., to provide visual feedback to the user of the effect of increasing the characteristic intensity of the contact). In Figure 6G, an increase in the characteristic intensity of the contact 6006 above the light press intensity threshold ITL, as illustrated by intensity level meter 5028, has caused message bubble 5018 to be replaced by a platter 6008, the size of the virtual chair 5020 to increase further, and increased blurring of messaging user interface 5008 behind platter 6008. In Figure 6H, an increase in the characteristic intensity of the contact 6006 above the deep press intensity threshold ITD, as illustrated by intensity level meter 5028, causes messaging user interface 5008 to cease to be displayed and initiates fade-in (indicated by dotted lines) of staging user interface 6010. Additionally, the increase in the characteristic intensity of the contact 6006 above the deep press intensity threshold ITD, as illustrated in Figure 6H, causes tactile output generators 167 of the device 100 to output a tactile output (as illustrated at 6012) to indicate that criteria have been met for replacing the messaging user interface 5008 with the staging user interface 6010.
[00208] In some embodiments, before the characteristic intensity of the contact 6006 reaches the deep press intensity threshold ITD, as illustrated in Figure 6H, the progression illustrated in Figures 6E-6G is reversible. For example, reducing the characteristic intensity of the contact 6006 after the increases illustrated in Figure 6F and/or 6G will cause the interface state that corresponds to the decreased intensity level of the contact 6006 to be displayed (e.g., the interface as shown in Figure 6G is shown in accordance with a determination that the reduced characteristic intensity of the contact is above the light press intensity threshold ITL, the interface as shown in Figure 6F is shown in accordance with a determination that the reduced characteristic intensity of the contact is above the hint press intensity threshold ITH, and the interface as shown in Figure 6E is shown in accordance with a determination that the reduced characteristic intensity of the contact is below the hint press intensity threshold ITH). In some embodiments, reducing the characteristic intensity of the contact 6006 after the increases illustrated in Figures 6F and/or 6G will cause the interface as shown in Figure 6E to be redisplayed.
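The intensity-driven progression of paragraphs [00207]-[00208] and its reversibility can be summarized algorithmically. The following Swift sketch is illustrative only and is not part of the described embodiments; the state names and normalized threshold values are hypothetical stand-ins for IT0, ITH, ITL, and ITD:

    import CoreGraphics

    // Hypothetical sketch: the interface state tracks the contact's characteristic
    // intensity, and remains reversible until the deep press threshold commits the
    // transition to the staging user interface.
    enum PreviewState { case initial, hint, platter, staging }

    final class PreviewTransitionController {
        // Normalized threshold values (hypothetical): IT_H, IT_L, IT_D.
        let hintThreshold: CGFloat = 0.25
        let lightThreshold: CGFloat = 0.5
        let deepThreshold: CGFloat = 0.75

        private(set) var state: PreviewState = .initial
        private var committed = false

        // Called whenever the contact's characteristic intensity changes.
        func update(intensity: CGFloat) {
            guard !committed else { return }       // past IT_D: no longer reversible
            if intensity >= deepThreshold {
                state = .staging                   // replace UI; emit tactile output
                committed = true
            } else if intensity >= lightThreshold {
                state = .platter                   // message bubble replaced by platter
            } else if intensity >= hintThreshold {
                state = .hint                      // bubble grows, background blurs
            } else {
                state = .initial                   // fully reversible below IT_H
            }
        }
    }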
[00209] In Figure 6I, staging user interface 6010 is displayed. Staging user interface 6010 includes stage 6014 on which virtual chair 5020 is displayed. From Figures 6H-6I, virtual chair 5020 is animated to indicate the transition from a position of virtual chair 5020 in Figure 6H to a position of virtual chair 5020 in Figure 6I. For example, virtual chair 5020 is rotated to a predefined position, orientation, and/or distance relative to stage 6014 (e.g., such that the virtual chair appears to be supported by stage 6014). Staging user interface 6010 also includes back control 6016 that, when activated (e.g., by a tap input at a location that corresponds to back control 6016), causes the previously displayed user interface (e.g., messaging user interface 5008) to be redisplayed. Staging user interface 6010 also includes toggle control 6018 that indicates a current display mode (e.g., the current display mode is a staging user interface mode, as indicated by the highlighted “3D” indicator) and that, when activated, causes transition to a selected display mode. For example, while the staging user interface 6010 is displayed, a tap input by a contact at a location that corresponds to toggle control 6018 (e.g., a location that corresponds to a portion of toggle control 6018 that includes the text “World”) causes the staging user interface 6010 to be replaced by a field of view of the camera(s). Staging user interface 6010 also includes share control 6020 (e.g., for displaying a sharing interface).
[00210] Figures 6J-6L illustrate rotation of virtual chair 5020 relative to stage 6014 caused by movement of contact 6006. In Figures 6J-6K, as the contact 6006 moves along a path indicated by arrow 6022, virtual chair 5020 is rotated (e.g., about a first axis that is perpendicular to the movement of the contact 6006). In Figures 6K-6L, as the contact 6006 moves along a path indicated by arrow 6024, and subsequently along a path indicated by arrow 6025, virtual chair 5020 is rotated (e.g., about a second axis that is perpendicular to the movement of the contact 6006). In Figure 6M, the contact 6006 has lifted off of touch screen 112. In some embodiments, as shown in Figures 6J-6L, the rotation of virtual chair 5020 is constrained by the surface of the stage 6014. For example, at least one leg of the virtual chair 5020 remains in contact with the surface of the stage 6014 during the rotation(s) of the virtual chair. In some embodiments, the surface of the stage 6014 serves as a frame of reference for the free rotation and vertical translation of the virtual chair 5020 without placing particular constraints on the movement of the virtual chair 5020.
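One way to realize the described rotation, sketched below in Swift (illustrative only; the constants and the function name are hypothetical), is to map each component of the drag translation to a rotation about the axis perpendicular to that component:

    import UIKit

    // Hypothetical mapping from a pan translation to rotations about axes
    // perpendicular to the contact's movement (Figures 6J-6L): horizontal
    // movement yaws the object about the vertical axis; vertical movement
    // pitches it about the horizontal axis.
    func stagingRotation(for translation: CGPoint,
                         pointsPerRadian: CGFloat = 200) -> (pitch: CGFloat, yaw: CGFloat) {
        let yaw = translation.x / pointsPerRadian
        // Clamp the pitch so the object stays plausibly supported by the stage.
        let pitch = max(-.pi / 4, min(.pi / 4, translation.y / pointsPerRadian))
        return (pitch, yaw)
    }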
[00211] Figures 6N-6O illustrate an input that adjusts the displayed size of virtual chair 5020. In Figure 6N, a first contact 6026 and a second contact 6030 with touch screen 112 are detected. First contact 6026 moves along a path indicated by arrow 6028 and, simultaneously with the movement of first contact 6026, second contact 6030 moves along a path indicated by arrow 6032. In Figures 6N-6O, as the first contact 6026 and the second contact 6030 move along the paths indicated by arrows 6028 and 6032, respectively (e.g., in a depinch gesture), a displayed size of virtual chair 5020 increases. In Figure 6P, first contact 6026 and second contact 6030 have lifted off of touch screen 112 and virtual chair 5020 maintains the increased size after the lift-off of contacts 6026 and 6030.
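A plausible handler for this behavior, sketched in Swift using UIKit's standard pinch recognizer (the class and method names other than UIPinchGestureRecognizer are hypothetical):

    import UIKit

    // Hypothetical sketch: a depinch adjusts the displayed size of the object
    // (Figures 6N-6O), and the new size persists after lift-off (Figure 6P).
    final class StagingScaleHandler: NSObject {
        private(set) var persistedScale: CGFloat = 1.0

        @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
            switch gesture.state {
            case .changed:
                applyScale(persistedScale * gesture.scale)  // live scale while contacts move
            case .ended, .cancelled:
                persistedScale *= gesture.scale             // keep the size after lift-off
            default:
                break
            }
        }

        private func applyScale(_ scale: CGFloat) {
            // Update the rendered virtual object's transform here.
        }
    }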
[00212] Figures 6Q-6U illustrate an input that causes the staging user interface 6010 to be replaced by a field of view 6036 of one or more cameras of device 100. In Figure 6Q, a contact 6034 with touch screen 112 of device 100 is detected. A characteristic intensity of the contact is above a contact detection intensity threshold IT0 and below a hint press intensity threshold ITH, as illustrated by intensity level meter 5028. In Figure 6R, an increase in the characteristic intensity of the contact 6034 above the hint press intensity threshold ITH, as illustrated by intensity level meter 5028, has caused staging user interface 6010 to begin to be blurred behind virtual chair 5020 (as indicated by the dotted lines). In Figure 6S, an increase in the characteristic intensity of the contact 6034 above the light press intensity threshold ITL, as illustrated by intensity level meter 5028, has caused staging user interface 6010 to cease to be displayed and initiates fade-in (indicated by dotted lines) of the field of view 6036 of the camera(s). In Figure 6T, an increase in the characteristic intensity of the contact 6034 above the deep press intensity threshold ITD, as illustrated by intensity level meter 5028, causes the field of view 6036 of the camera(s) to be displayed. Additionally, the increase in the characteristic intensity of the contact 6034 above the deep press intensity threshold ITD, as illustrated in Figure 6T, causes tactile output generators 167 of the device 100 to output a tactile output (as illustrated at 6038) to indicate that criteria have been met for replacing display of the staging user interface 6010 with display of field of view 6036 of the camera(s). In Figure 6U, the contact 6034 has lifted off of touch screen 112. In some embodiments, before the characteristic intensity of the contact 6034 reaches the deep press intensity threshold ITD, as illustrated in Figure 6T, the progression illustrated in Figures 6Q-6T is reversible. For example, reducing the characteristic intensity of the contact 6034 after the increases illustrated in Figure 6R and/or 6S will cause the interface state that corresponds to the decreased intensity level of the contact 6034 to be displayed.
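On iOS, the tactile output emitted when the deep press criteria are met could be approximated with UIKit's feedback generators; the wrapper below is a hypothetical sketch, not an implementation of the patent's tactile output generators 167:

    import UIKit

    // Hypothetical sketch: fire one tactile output (such as 6012 or 6038) when
    // the contact's intensity first crosses the deep press threshold.
    final class TransitionFeedback {
        private let generator = UIImpactFeedbackGenerator(style: .medium)
        private var hasFired = false

        func intensityDidChange(_ intensity: CGFloat, deepThreshold: CGFloat) {
            if intensity >= deepThreshold && !hasFired {
                generator.impactOccurred()   // tactile output at the commit point
                hasFired = true              // fire once per transition
            }
        }
    }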
[00213] From Figures 6Q-6U, virtual chair 5020 is placed on a detected plane (e.g., in accordance with a determination by device 100 that the virtual chair 5020 is configured to be placed in an upright orientation on a detected horizontal surface, such as floor surface 5038) and the size of virtual chair 5020 is adjusted (e.g., the scale of the virtual chair 5020 relative to the physical space 5002 as shown in the field of view 6036 of the camera(s) is determined based on a defined “real world” size of the virtual chair 5020 and/or a detected size of objects (such as table 5004) in the field of view 6036 of the camera(s)). The orientation of virtual chair 5020 caused by rotation of virtual chair 5020 while the staging interface 6010 was displayed (e.g., as described with regard to Figures 6J-6K) is maintained as the virtual chair 5020 transitions from staging user interface 6010 to the field of view 6036 of the camera(s). For example, the orientation of virtual chair 5020 relative to floor surface 5038 is the same as the final orientation of virtual chair 5020 relative to the surface of the stage 6014. In some embodiments, the adjustment to the size of virtual object 5020 in the staging user interface is taken into account when the size of virtual chair 5020 is adjusted in the field of view 6036 relative to the size of physical space 5002.
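The size determination can be expressed as a simple computation. The sketch below uses hypothetical names; a real implementation would obtain the points-per-meter factor from the AR framework's projection of the detected plane at the placement distance:

    import CoreGraphics

    // Hypothetical sketch: deriving the on-screen size of a virtual object from
    // its defined real-world size, the detected plane's projected scale, and the
    // user's depinch adjustment carried over from the staging user interface.
    struct PlacementScaler {
        let realWorldHeightMeters: CGFloat   // defined "real world" size of the object
        let stagingScaleFactor: CGFloat      // adjustment made in the staging interface

        func displayHeightPoints(pointsPerMeter: CGFloat) -> CGFloat {
            realWorldHeightMeters * pointsPerMeter * stagingScaleFactor
        }
    }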
[00214] Figures 6V-6Y illustrate an input that causes the field of view 6036 of the camera(s) to be replaced by the staging user interface 6010. In Figure 6V, an input (e.g., a tap input) by contact 6040 is detected at a location that corresponds to toggle control 6018 (e.g., a location that corresponds to a portion of toggle control 6018 that includes the text “3D”). In Figures 6W-6Y, in response to the input by contact 6040, the field of view 6036 of the camera(s) fades out (as indicated by the dotted lines in Figure 6W), the staging user interface 6010 fades in (as indicated by the dotted lines in Figure 6X), and staging user interface 6010 is fully displayed (as shown in Figure 6Y). From Figures 6V-6Y, the size of virtual chair 5020 is adjusted and the position of virtual chair 5020 changes (e.g., to return virtual chair 5020 to a predefined position and size for the staging user interface).
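The fade-out/fade-in transition could be implemented with a standard UIKit animation; the helper below is a hypothetical sketch in which the virtual object lives in a separate view layered above both regions, so it remains continuously displayed across the switch:

    import UIKit

    // Hypothetical sketch of the crossfade in Figures 6W-6Y: the camera feed
    // fades out while the staging interface fades in underneath the object.
    func crossfade(from cameraView: UIView, to stagingView: UIView,
                   duration: TimeInterval = 0.3) {
        stagingView.alpha = 0
        stagingView.isHidden = false
        UIView.animate(withDuration: duration, animations: {
            cameraView.alpha = 0       // camera feed fades out (Figure 6W)
            stagingView.alpha = 1      // staging interface fades in (Figure 6X)
        }, completion: { _ in
            cameraView.isHidden = true // staging fully displayed (Figure 6Y)
        })
    }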
[00215] Figures 6Z-6AC illustrate an input that causes the staging user interface 6010 to be replaced by the messaging user interface 5008. In Figure 6Z, an input (e.g., a tap input) by contact 6042 is detected at a location that corresponds to back control 6016. In Figures 6AA-6AC, in response to the input by contact 6042, the staging user interface 6010 fades out (as indicated by the dotted lines in Figure 6AA), the messaging user interface 5008 fades in (as indicated by the dotted lines in Figure 6AB), and messaging user interface 5008 is fully displayed (as shown in Figure 6AC). From Figures 6Z-6AB, the size, orientation, and position of virtual chair 5020 are continuously adjusted on the display (e.g., to return virtual chair 5020 to a predefined position, size, and orientation for the messaging user interface 5008).
[00216] Figures 6AD-6AJ illustrate an input that causes the messaging user interface 5008 to be replaced by the field of view 6036 of the camera(s) (e.g., bypassing display of the staging user interface 6010). In Figure 6AD, a contact 6044 is detected at a location that corresponds to virtual chair 5020. The input by contact 6044 includes a long touch gesture (during which the contact 6044 is maintained at the location on the touch-sensitive surface that corresponds to the representation of the virtual object 5020 with less than a threshold amount of movement for at least a predefined threshold amount of time) followed by an upward swipe gesture (that drags the virtual chair 5020 upward). As shown in Figures 6AD-6AE, the virtual chair 5020 is dragged upward as the contact 6044 moves along a path indicated by arrow 6046. In Figure 6AE, messaging user interface 5008 fades out behind virtual chair 5020. As shown in Figures 6AE-6AF, the virtual chair 5020 continues to be dragged upward as the contact 6044 moves along a path indicated by arrow 6048. In Figure 6AF, the field of view 6036 of the camera(s) fades in behind virtual chair 5020. In Figure 6AG, in response to the input by contact 6044 that includes the long touch gesture followed by the upward swipe gesture, the field of view 6036 of the camera(s) is fully displayed. In Figure 6AH, the contact 6044 lifts off of touch screen 112. In Figures 6AH-6AJ, in response to the lift off of the contact 6044, the virtual chair 5020 is released (e.g., because the virtual chair 5020 is no longer restrained or dragged by the contact) and drops to a plane (e.g., the floor surface 5038, in accordance with a determination that a horizontal (floor) surface corresponds to the virtual chair 5020). Additionally, as illustrated in Figure 6AJ, tactile output generators 167 of the device 100 output a tactile output (as illustrated at 6050) to indicate that the virtual chair 5020 has landed on the floor surface 5038.
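The long-touch-then-swipe input maps naturally onto a long-press recognizer whose parameters encode the movement and duration thresholds described above; the values below are hypothetical:

    import UIKit

    // Hypothetical wiring for the gesture of Figures 6AD-6AJ. A long-press
    // recognizer already encodes "held with less than a threshold amount of
    // movement for at least a predefined threshold amount of time".
    func makeDragOutGesture(target: Any, action: Selector) -> UILongPressGestureRecognizer {
        let gesture = UILongPressGestureRecognizer(target: target, action: action)
        gesture.minimumPressDuration = 0.5   // predefined threshold amount of time
        gesture.allowableMovement = 10       // threshold amount of movement (points)
        return gesture
    }

    // In the handler, .began corresponds to recognition of the long touch; while
    // the gesture is in .changed, location(in:) tracks the upward swipe that drags
    // the virtual object; .ended releases the object so it drops to a detected plane.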
[00217] Figures 7A-7P illustrate example user interfaces for displaying an item with a visual indication to indicate that the item corresponds to a virtual three-dimensional object, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.
[00218] Figure 7A illustrates an input detected while a user interface 400 for a menu of applications is displayed. The input corresponds to a request to display a first user interface (e.g., Internet browser user interface 5060). In Figure 7A, an input (e.g., a tap input) by contact 7000 is detected at a location that corresponds to icon 420 for browser module 147. In response to the input, Internet browser user interface 5060 is displayed, as shown in Figure 7B.
[00219] Figure 7B illustrates Internet browser user interface 5060 (e.g., as described in detail with regard to Figure 5AE). The Internet browser user interface 5060 includes web objects 5066, 5068, 5070, 5072, 5074, and 5076. Web objects 5066, 5068, and 5072 include two-dimensional representations of three-dimensional virtual objects, as indicated by virtual object indicators 5078, 5080, and 5082, respectively. Web objects 5070, 5074, and 5076 include two-dimensional images (but the two-dimensional images of web objects 5070, 5074, and 5076 do not correspond to three-dimensional virtual objects, as indicated by the absence of the virtual object indicators).
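The presence or absence of the indicator reduces to whether an item is backed by a three-dimensional model. A minimal sketch, with a hypothetical item model:

    import Foundation

    // Hypothetical item model: a virtual object indicator (such as 5078, 5080,
    // and 5082) is shown only for items backed by a 3D model that is viewable
    // in an augmented reality environment.
    struct DisplayedItem {
        let imageName: String
        let virtualModelURL: URL?   // non-nil when a 3D model backs the item

        var showsVirtualObjectIndicator: Bool { virtualModelURL != nil }
    }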
[00220] Figures 7C-7D illustrate an input that causes translation (e.g., scrolling) of Internet browser user interface 5060. In Figure 7B, a contact 7002 with touch screen 112 is detected. In Figures 7C-7D, as the contact 7002 moves along a path indicated by arrow 7004, web objects 5066, 5068, 5070, 5072, 5074, and 5076 scroll upward, revealing additional web objects 7003 and 7005. Additionally, as the contact 7002 moves along the path indicated by arrow 7004, virtual objects in the web objects 5066, 5068, and 5072 that include virtual object indicators 5078, 5080, and 5082, respectively, rotate (e.g., tilt upward) in accordance with the (upward vertical) direction of the input. For example, virtual lamp 5084 tilts upward from a first orientation in Figure 7C to a second orientation in Figure 7D. The two-dimensional images of web objects 5070, 5074, and 5076 do not rotate as the contact scrolls the Internet browser user interface 5060. In Figure 7E, the contact 7002 has lifted off of touch screen 112. In some embodiments, the rotation behavior of the objects depicted in web objects 5066, 5068, and 5072 is used as a visual indication that these web objects have corresponding three-dimensional virtual objects that are viewable in an augmented reality environment, while the absence of such rotational behavior of the objects depicted in web objects 5070, 5074, and 5076 is used as a visual indication that these web objects do not have corresponding three-dimensional virtual objects that are viewable in an augmented reality environment.
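A sketch of the scroll-driven tilt follows; the scaling constants and the perspective term are hypothetical, and the tilt is applied only to items that have corresponding three-dimensional virtual objects:

    import UIKit

    // Hypothetical sketch of Figures 7C-7D: tilt 3D-backed items in accordance
    // with the scroll direction; plain two-dimensional images do not rotate.
    func applyScrollTilt(to itemView: UIView, hasVirtualObject: Bool,
                         scrollDelta: CGFloat) {
        guard hasVirtualObject else { return }            // 2D images stay flat
        let tilt = max(-0.3, min(0.3, scrollDelta / 500)) // clamped tilt, radians
        var transform = CATransform3DIdentity
        transform.m34 = -1 / 500                          // perspective term
        itemView.layer.transform = CATransform3DRotate(transform, tilt, 1, 0, 0)
    }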
[00221] Figures 7F-7G illustrate a parallax effect in which the virtual objects rotate on the display in response to a change in the orientation of the device 100 relative to the physical world.
[00222] Figure 7F1 illustrates a device 100 held by user 7006 in user’s hand 5006 such that the device 100 has a substantially vertical orientation. Figure 7F2 illustrates Internet browser user interface 5060 as displayed by device 100 when the device 100 is in the orientation illustrated in Figure 7F1.
[00223] Figure 7G1 illustrates a device 100 held by user 7006 in user’s hand 5006 such that the device 100 has a substantially horizontal orientation. Figure 7G2 illustrates Internet browser user interface 5060 as displayed by device 100 when the device 100 is in the orientation illustrated in Figure 7G1. From 7F2 to 7G2, virtual objects in the web objects 5066, 5068, and 5072 that include virtual object indicators 5078, 5080, and 5082, respectively, rotate (e.g., tilt upward) in accordance with the change in orientation of the device. For example, virtual lamp 5084 tilts upward from a first orientation in Figure 7F2 to a second orientation in Figure 7G2, in accordance with a concurrent change in device orientation in the physical space. The two-dimensional images of web objects 5070, 5074, and 5076 do not rotate as the orientation of the device changes. In some embodiments, the rotation behavior of the objects depicted in web objects 5066, 5068, and 5072 is used as a visual indication that these web objects have corresponding three-dimensional virtual objects that are viewable in an augmented reality environment, while the absence of such rotational behavior of the objects depicted in web objects 5070, 5074, and 5076 is used as a visual indication that these web objects do not have corresponding three-dimensional virtual objects that are viewable in an augmented reality environment.
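The device-orientation-driven parallax could be driven from the motion sensors; the sketch below uses Core Motion's standard device-motion API, but the update handler, the view wiring, and the scaling factor are hypothetical:

    import CoreMotion
    import UIKit

    // Hypothetical sketch of the parallax effect of Figures 7F1-7G2: tilt a
    // 3D-backed item about the horizontal axis as the device pitches from a
    // substantially vertical toward a substantially horizontal orientation.
    final class ParallaxDriver {
        private let motionManager = CMMotionManager()

        func start(updating view: UIView) {
            guard motionManager.isDeviceMotionAvailable else { return }
            motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
            motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
                guard let attitude = motion?.attitude else { return }
                var t = CATransform3DIdentity
                t.m34 = -1 / 500   // perspective term (hypothetical)
                view.layer.transform =
                    CATransform3DRotate(t, CGFloat(attitude.pitch) * 0.2, 1, 0, 0)
            }
        }

        func stop() { motionManager.stopDeviceMotionUpdates() }
    }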
[00224] Figures 7H-7L illustrate input that corresponds to a request to display a second user interface (e.g., messaging user interface 5008). In Figure 7H, a contact 7008 is detected at a location that corresponds to the lower edge of display 112. In Figures 7H-7I, the contact 7008 moves upward along a path indicated by arrow 7010. In Figures 7I-7J, the contact 7008 continues to move upward along a path indicated by arrow 7012. In Figures 7H-7J, as contact 7008 moves upward from the lower edge of display 112, the size of Internet browser user interface 5060 decreases, as shown in Figure 7I, and in Figure 7J, a multitasking user interface 7012 is displayed (e.g., in response to the upward edge swipe gesture by contact 7008). The multitasking user interface 7012 is configured to allow selection of an interface from among various applications with retained states (e.g., the retained state is the last state of a respective application when the respective application was the foreground application executed on the device), and various control interfaces (e.g., control center user interface 7014, Internet browser user interface 5060, and messaging user interface 5008, as illustrated in Figure 7J). In Figure 7K, contact 7008 lifts off of touch screen 112. In Figure 7L, an input (e.g., a tap input) by contact 7016 is detected at a location that corresponds to messaging user interface 5008. In response to the input by contact 7016, messaging user interface 5008 is displayed, as illustrated in Figure 7M.
[00225] Figure 7M illustrates a messaging user interface 5008 (e.g., as described in further detail with regard to Figure 5B) that includes a message bubble 5018 that includes a virtual object (e.g., virtual chair 5020) received in a message and a virtual object indicator 5022 to indicate that the virtual chair 5020 is a virtual three-dimensional object (e.g., an object that is viewable in an augmented reality view and/or an object that is viewable from different angles). Messaging user interface 5008 also includes message bubble 6005 that includes a sent text message and message bubble 7018 that includes a received text message that includes an emoji 7020. Emoji 7020 is a two-dimensional image that does not correspond to a virtual three-dimensional object. For this reason, emoji 7020 is displayed without a virtual object indicator.
[00226] Figure 7N illustrates a map user interface 7022 that includes a map 7024, point of interest information region 7026 for a first point of interest, and point of interest information region 7032 for a second point of interest. For example, the first point of interest and the second point of interest are search results within or proximate to an area shown by map 7024 that correspond to a search entry “Apple” in search input region 7025. In first point of interest information region 7026, a first point of interest object 7028 is displayed with a virtual object indicator 7030 to indicate that the first point of interest object 7028 is a virtual three-dimensional object. In second point of interest information region 7032, a second point of interest object 7034 is displayed without a virtual object indicator because second point of interest object 7034 does not correspond to a virtual three-dimensional object viewable in an augmented reality view.
[00227] Figure 7O illustrates a file management user interface 7036 that includes file management controls 7038, a file management search input region 7040, file information region 7042 for a first file (e.g., a portable document format (PDF) file), file information region 7044 for a second file (e.g., a photograph file), file information region 7046 for a third file (e.g., a virtual chair object), and file information region 7048 for a fourth file (e.g., a PDF file). The third file information region 7046 includes a virtual object indicator 7050 displayed adjacent to a file preview object 7045 of file information region 7046 to indicate that the third file corresponds to a virtual three-dimensional object. First file information region 7042, second file information region 7044, and fourth file information region 7048 are displayed without virtual object indicators because the files that correspond to these file information regions do not have corresponding virtual three-dimensional objects that are viewable in an augmented reality environment.
[00228] Figure 7P illustrates an e-mail user interface 7052 that includes e-mail navigation controls 7054, e-mail information region 7056, and e-mail content region 7058 that includes a representation of a first attachment 7060 and a representation of a second attachment 7062. The representation of the first attachment 7060 includes a virtual object indicator 7064 to indicate that the first attachment is a virtual three-dimensional object viewable in an augmented reality environment. Second attachment 7062 is displayed without a virtual object indicator because the second attachment is not a virtual three-dimensional object viewable in an augmented reality environment.
[00229] Figures 8A-8E are flow diagrams illustrating method 800 of displaying a representation of a virtual object while switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments. Method 800 is performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) with a display, a touch-sensitive surface, and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite from the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 800 are, optionally, combined and/or the order of some operations is, optionally, changed.
[00230] Method 800 relates to detecting an input by a contact at a touch-sensitive surface of a device that displays a representation of a virtual object in a first user interface region. In response to the input, the device uses criteria to determine whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with a field of view of one or more cameras of the device. Using criteria to determine whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with the field of view of the one or more cameras enables the performance of multiple different types of operations in response to an input. Enabling the performance of multiple different types of operations in response to an input (e.g., by replacing display of at least a portion of the user interface with a field of view of one or more cameras or maintaining display of the first user interface region without replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras) increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00231] The device displays (802) a representation of a virtual object (e.g., a graphical representation of a three-dimensional object, such as virtual chair 5020, virtual lamp 5084, shoes, furniture, hand tools, decorations, people, an emoji, a game character, virtual furniture, etc.) in a first user interface region (e.g., a two-dimensional graphic user interface or a portion thereof (e.g., a browsable list of furniture images, an image containing one or more selectable objects, etc.)) on the display 112. For example, the first user interface region is messaging user interface 5008 as shown in Figure 5B or Internet browser user interface 5060 as shown in Figure 5AE. In some embodiments, the first user interface region includes a background other than an image of a physical environment surrounding the device (e.g., the background of the first user interface region is a preselected background color/pattern, or a background image that is distinct from an output image concurrently captured by the one or more cameras and distinct from live content in a field of view of the one or more cameras).
[00232] While displaying the first representation of the virtual object in the first user interface region on the display, the device detects (804) a first input by a contact at a location on the touch-sensitive surface 112 that corresponds to the representation of the virtual object on the display (e.g., the contact is detected on the first representation of the virtual object on the touch-screen display, or the contact is detected on an affordance that is concurrently displayed in the first user interface region with the first representation of the virtual object and that is configured to trigger display of an AR view of the virtual object when invoked by the contact). For example, the first input is an input by contact 5026 as described with regard to Figures 5C-5F or an input by contact 5086 as described with regard to Figures 5AF-5AL.
[00233] In response to detecting the first input by the contact (806), in accordance with a determination that the first input by the contact meets first (e.g., AR-trigger) criteria (e.g., the AR-trigger criteria are criteria configured to identify a swipe input, a touch-hold input, a press input, a tap input, a hard press with an intensity above a predefined intensity threshold, or another type of predefined input gesture, that is associated with triggering the activation of the camera(s), display of an augmented reality (AR) view of the physical environment surrounding the device, placement of a three-dimensional representation of the virtual object inside the augmented reality view of the physical environment, and/or a combination of two or more of the above actions): the device displays a second user interface region on the display, including replacing display of at least a portion of the first user interface region with the representation of a field of view of the one or more cameras, and the device continuously displays the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region. For example, the second user interface region on the display is the field of view 5034 of the camera(s) in platter 5030 as described with regard to Figure 5H, or the field of view 5034 of the camera(s) as described with regard to Figure 5AH. In Figures 5C-5I, in accordance with a determination that an input by contact 5026 has a characteristic intensity that increases above a deep press intensity threshold ITD, virtual chair object 5020 is continuously displayed while switching from displaying the first user interface region (messaging user interface 5008) to displaying the second user interface region that replaces display of a portion of messaging user interface 5008 with the field of view 5034 of the camera(s) in platter 5030. In Figures 5AF-5AH, in accordance with a determination that an input by contact 5086 has a characteristic intensity that increases above a deep press intensity threshold ITD, virtual lamp object 5084 is continuously displayed while switching from displaying the first user interface region (Internet browser user interface 5060) to displaying the second user interface region that replaces display of a portion of Internet browser user interface 5060 with the field of view 5034 of the camera(s).
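Purely as an illustrative aside (not part of the claimed embodiments), the continuous-display behavior of operation (806) might be sketched in Swift roughly as follows, using public ARKit/UIKit APIs as an analogue for the device's internal implementation; the class and property names here are hypothetical:

    import UIKit
    import ARKit

    final class ObjectTransitionController: UIViewController {
        // 2D representation shown in the first user interface region;
        // assumed to already be in this controller's view hierarchy.
        let virtualObjectView = UIImageView()
        private var arView: ARSCNView?

        // Called once the AR-trigger criteria are met: insert the camera
        // feed *behind* the object view so the representation of the
        // virtual object stays continuously displayed while the first
        // region is being replaced.
        func enterAugmentedRealityView() {
            guard arView == nil else { return }
            let cameraView = ARSCNView(frame: view.bounds)
            view.insertSubview(cameraView, belowSubview: virtualObjectView)
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = [.horizontal, .vertical]
            cameraView.session.run(configuration)
            arView = cameraView
        }
    }

Layering the camera feed below the existing object view is one simple way to guarantee the object never disappears during the region swap.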
[00234] In some embodiments, continuously displaying the representation of the virtual object includes maintaining display of the representation of the virtual object or displaying an animated transition of the first representation of the virtual object changing into a second representation of the virtual object (e.g., a view of the virtual object in a different size, from a different viewing angle, in a different rendering style, or at a different location on the display). In some embodiments, the field of view 5034 of the one or more cameras displays a live image of the physical environment 5002 surrounding the device which is updated in real-time when the device's position and orientation change relative to the physical environment (e.g., as illustrated at Figures 5K-5L). In some embodiments, the second user interface region completely replaces the first user interface region on the display.
[00235] In some embodiments, the second user interface region overlays a portion of the first user interface region (e.g., a portion of the first user interface region is shown along an edge or around the borders of the display). In some embodiments, the second user interface region pops up next to the first user interface region. In some embodiments, the background within the first user interface region is replaced with content of the field of view 5034 of the camera(s). In some embodiments, the device displays an animated transition that shows the virtual object moving and rotating (e.g., as illustrated at Figures 5E-5I) from a first orientation as shown in the first user interface region to a second orientation (e.g., an orientation that is predefined relative to a current orientation of a portion of the physical environment that is captured in the field of view of the one or more cameras). For example, the animation includes a transition from displaying a two-dimensional representation of the virtual object while displaying the first user interface region to displaying a three-dimensional representation of the virtual object while displaying the second user interface region. In some embodiments, a three-dimensional representation of the virtual object has an anchor plane that is predefined based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface (e.g., the first user interface region). When transitioning to the augmented reality view (e.g., the second user interface region), the three-dimensional representation of the virtual object is moved, resized, and reoriented from the original location of the virtual object on the display to a new location on the display (e.g., to the center of the augmented reality view, or another predefined location in the augmented reality view), and during the movement or at the end of the movement, the three-dimensional representation of the virtual object is reoriented such that the three-dimensional representation of the virtual object is at a predefined position and/or orientation relative to a predefined plane identified in the field of view of the one or more cameras (e.g., a physical surface, such as a vertical wall or horizontal floor surface that can serve as a support plane for the three-dimensional representation of the virtual object).
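The animated hand-off from a two-dimensional representation to a three-dimensional one could be sketched as follows; the helper name, its parameters, and the 0.35-second duration are illustrative assumptions rather than anything specified above:

    import UIKit

    // Hypothetical helper: animate a flat representation of a virtual
    // object toward a predefined drop frame inside the camera region,
    // then let the caller swap in the anchored 3D representation.
    func animateHandOff(of objectView: UIView,
                        to dropFrame: CGRect,
                        thenShowThreeD swapIn: @escaping () -> Void) {
        UIView.animate(withDuration: 0.35, animations: {
            objectView.frame = dropFrame    // move/resize toward the AR view
        }, completion: { _ in
            objectView.isHidden = true      // 2D representation goes away...
            swapIn()                        // ...3D representation takes over
        })
    }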
[00236] In some embodiments, the first criteria include (808) criteria that are satisfied when (e.g., in accordance with a determination that) the contact is maintained at the location on the touch-sensitive surface that corresponds to the representation of the virtual object with less than a threshold amount of movement for at least a predefined amount of time (e.g., a long press time threshold). In some embodiments, in accordance with a determination that the contact satisfies criteria for recognizing another type of gesture (e.g., a tap), the device performs another predefined function other than triggering the AR user interface while maintaining display of the virtual object. Determining whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with the field of view of the camera(s), depending on whether the contact is maintained at a location on the touch-sensitive surface that corresponds to the representation of the virtual object with less than a threshold amount of movement for at least a predefined amount of time, enables the performance of multiple different types of operations in response to an input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
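A touch-hold criterion of this kind maps naturally onto UIKit's long-press recognizer. The sketch below is a minimal illustration; the 0.5-second duration and 10-point movement allowance stand in for the unspecified thresholds above:

    import UIKit

    // Attach a touch-hold AR-trigger criterion to the view showing the
    // virtual object (the function name is illustrative).
    func installTouchHoldTrigger(on objectView: UIView,
                                 target: Any, action: Selector) {
        let longPress = UILongPressGestureRecognizer(target: target,
                                                     action: action)
        longPress.minimumPressDuration = 0.5  // predefined amount of time
        longPress.allowableMovement = 10      // threshold movement, in points
        objectView.addGestureRecognizer(longPress)
    }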
[00237] In some embodiments, the first criteria include (810) criteria that are satisfied when (e.g., in accordance with a determination that) a characteristic intensity of the contact increases above a first intensity threshold (e.g., a light press intensity threshold ITL or a deep press intensity threshold ITD). For example, as described with regard to Figures 5C-5F, criteria are satisfied when a characteristic intensity of the contact 5026 increases above deep press intensity threshold ITD, as indicated by intensity level meter 5028. In some embodiments, in accordance with a determination that the contact satisfies criteria for recognizing another type of gesture (e.g., a tap), the device performs another predefined function other than triggering the AR user interface while maintaining display of the virtual object. In some embodiments, the first criteria require that the first input is not a tap input (e.g., the input has a duration between touch-down of the contact and lift-off of the contact that is greater than a tap time threshold). Determining whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with the field of view of the camera(s), depending on whether a characteristic intensity of the contact increases above a first intensity threshold, enables the performance of multiple different types of operations in response to an input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
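On hardware that reports per-touch force, an intensity criterion like (810) could be approximated with UIKit's public force API; the 0.8 fraction below is an illustrative stand-in for a deep press intensity threshold, not a value from this specification:

    import UIKit

    // Returns true when the touch's characteristic intensity exceeds a
    // normalized threshold standing in for IT_D.
    func meetsDeepPressCriteria(_ touch: UITouch) -> Bool {
        guard touch.maximumPossibleForce > 0 else { return false } // no force hardware
        return touch.force / touch.maximumPossibleForce > 0.8
    }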
[00238] In some embodiments, the first criteria include (812) criteria that are satisfied when (e.g., in accordance with a determination that) a movement of the contact meets predefined movement criteria (e.g., the contact moves across the touch-sensitive surface beyond a predefined threshold position (e.g., a position that corresponds to a boundary of the first user interface region, a position that is a threshold distance away from the original position of the contact, etc.), the contact moves with a speed greater than a predefined threshold speed, the movement of the contact ends with a press input, etc.). In some embodiments, the representation of the virtual object is dragged by the contact during an initial portion of the movement of the contact, and the virtual object stops moving with the contact when the movement of the contact is about to meet the predefined movement criteria to indicate that the first criteria are about to be met; and if the movement of the contact continues and the predefined movement criteria are met by the continued movement of the contact, the transition to display the second user interface region and display the virtual object within the augmented reality view is started. In some embodiments, when the virtual object is dragged during the initial portion of the first input, the object size and viewing perspective do not change, and once the augmented reality view is displayed, and the virtual object is dropped into position in the augmented reality view, the virtual object is displayed with a size and viewing perspective that is dependent on the physical location represented by the drop-off location of the virtual object in the augmented reality view. Determining whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with the field of view of the camera(s), depending on whether movement of a contact meets predefined movement criteria, enables the performance of multiple different types of operations in response to an input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
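A movement criterion like (812) reduces to comparing a drag's travel and speed against thresholds. The sketch below is illustrative; the 100-point distance and 800-points-per-second speed are assumptions:

    import CoreGraphics

    // True when the drag has traveled past a threshold distance from its
    // starting point or is moving faster than a threshold speed.
    func meetsMovementCriteria(start: CGPoint,
                               current: CGPoint,
                               velocity: CGPoint) -> Bool {
        let dx = current.x - start.x, dy = current.y - start.y
        let distance = (dx * dx + dy * dy).squareRoot()
        let speed = (velocity.x * velocity.x + velocity.y * velocity.y).squareRoot()
        return distance > 100 || speed > 800   // points, points/second
    }

In practice the three points might come from a UIPanGestureRecognizer's translation(in:) and velocity(in:) values.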
[00239] In some embodiments, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact has met the first criteria, the device outputs (814), with one or more tactile output generators 167, a tactile output to indicate satisfaction of the first criteria by the first input (e.g., a tactile output 5032 as described with regard to Figure 5F or a tactile output 5088 as described with regard to Figure 5AH). In some embodiments, the haptic is generated before the field of view of the one or more cameras appears on the display. For example, the haptic indicates the satisfaction of the first criteria which trigger the activation of the one or more camera(s) and subsequent plane detection in the field of view of the one or more camera(s). Since it takes time for the cameras to be activated and the field of view to become available for display, the haptic serves as a non-visual signal to the user that the device has detected the necessary input, and will present the augmented reality user interface as soon as the device is ready.
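As a public-API analogue for tactile output generators 167, the early haptic could be sketched with UIKit's feedback generators; the type and method names below are hypothetical:

    import UIKit

    // Emit a tactile output the moment the AR-trigger criteria are met,
    // before the camera feed becomes available on the display.
    final class ARTriggerFeedback {
        private let generator = UIImpactFeedbackGenerator(style: .medium)

        func inputLooksPromising() {
            generator.prepare()          // spin up the Taptic Engine early
        }

        func criteriaSatisfied() {
            generator.impactOccurred()   // non-visual "AR view is coming" cue
            // ...camera activation and plane detection follow...
        }
    }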
[00240] Outputting a tactile output to indicate satisfaction of criteria (e.g., for replacing display of at least a portion of a user interface with a field of view of the camera(s)) provides the user with feedback to indicate that the provided input satisfies the criteria. Providing improved tactile feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00241] In some embodiments, in response to detecting at least an initial portion of the first input (e.g., including detecting the contact, or detecting an input by the contact that meets respective predefined criteria without meeting the first criteria, or detecting an input that meets the first criteria), the device analyzes (816) the field of view of the one or more cameras to detect one or more planes (e.g., floor surface 5038, table surface 5046, wall, etc.) in the field of view of the one or more cameras. In some embodiments, the one or more cameras are activated in response to detecting at least the initial portion of the first input, and the plane detection is initiated at the same time when the camera(s) are activated. In some embodiments, display of the field of view of the one or more cameras is delayed after activation of the one or more cameras (e.g., from the time the one or more cameras are activated until the time that at least one plane is detected in the field of view of the camera(s)). In some embodiments, display of the field of view of the one or more cameras is initiated at the time that the one or more cameras are activated, and the plane detection is completed after the field of view is already visible on the display (e.g., in the second user interface region). In some embodiments, after detecting a respective plane in the field of view of the one or more cameras, the device determines a size and/or position of the representation of the virtual object based on a relative position of the respective plane to the field of view of the one or more cameras. In some embodiments, as the electronic device is moved, the size and/or position of the representation of the virtual object is updated as the position of the field of view of the one or more cameras changes relative to the respective plane (e.g., as described with regard to Figures 5K-5L). Determining a size and/or position of the representation of the virtual object based on the position of the respective plane detected in the field of view of the camera(s) (e.g., without requiring further user input to size and/or position the virtual object relative to the field of view of the camera(s)) enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
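In ARKit terms, camera activation and plane analysis can be started in one step; this sketch uses real ARKit calls but is only an analogue for the behavior described, not the claimed implementation:

    import ARKit

    // Activate the camera(s) and begin analyzing the field of view for
    // horizontal and vertical planes in a single step.
    func startCamerasAndPlaneDetection(in session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        session.run(configuration)   // camera activation + plane analysis together
    }

Each plane the session finds is then delivered as an ARPlaneAnchor, whose center and extent can drive the size and position chosen for the representation of the virtual object.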
[00242] In some embodiments, analyzing the field of view of the one or more cameras to detect the one or more planes in the field of view of the one or more cameras is initiated (818) in response to detection of the contact at the location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display (e.g., in response to detection of the contact 5026 at the location on touch screen 112 that corresponds to virtual chair 5020). For example, the activation of the cameras and the detection of the planes in the field of view of the camera(s) are started before the first criteria are met by the first input (e.g., before the characteristic intensity of the contact 5026 increases above the deep press intensity threshold ITD, as described with regard to Figure 5F) and before the second user interface region is displayed. By starting the plane detection upon detection of any interaction with the virtual object, the plane detection can be completed before the AR trigger criteria are met, and therefore, there would be no visible delay to the user in seeing the virtual object transition into the augmented reality view when the AR trigger criteria are met by the first input. Initiating analysis to detect one or more planes in the field of view of the camera(s) in response to detection of the contact at the location of the representation of the virtual object (e.g., without requiring further user input to initiate analysis of the field of view of the camera(s)) enhances the efficacy of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
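The eager-start variant of (818) could be sketched by warming up a tracking session on touch-down; the subclass name and the idea of a dedicated warm-up session are assumptions for illustration only:

    import UIKit
    import ARKit

    // Begin camera activation and plane detection as soon as the contact
    // touches the virtual object, before any AR-trigger criteria are
    // evaluated.
    final class VirtualObjectView: UIView {
        let warmUpSession = ARSession()

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            super.touchesBegan(touches, with: event)
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = [.horizontal, .vertical]
            warmUpSession.run(configuration)   // warm start, hidden from the user
        }
    }

The point of the sketch is only that plane analysis begins on contact; by the time the trigger criteria are met, detected planes are often already available.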
[00243] In some embodiments, analyzing the field of view of the one or more cameras to detect the one or more planes in the field of view of the one or more cameras is initiated (820) in response to detecting that the first criteria are met by the first input by the contact (e.g., in response to detecting that the characteristic intensity of the contact 5026 increases above the deep press intensity threshold ITD, as described with regard to Figure 5F). For example, the activation of the cameras and the detection of the planes in the field of view of the camera(s) are started when the first criteria are met by the first input, and the field of view of the camera is displayed before the plane detection is completed. By starting the camera activation and plane detection upon satisfaction of the AR trigger criteria, the cameras and plane detection are not unnecessarily activated and kept going, which conserves battery power and extends battery life and camera life.
[00244] In some embodiments, analyzing the field of view of the one or more cameras to detect the one or more planes in the field of view of the one or more cameras is initiated (822) in response to detecting that an initial portion of the first input meets plane-detection trigger criteria without meeting the first criteria. For example, the activation of the cameras and the detection of the planes in the field of view of the camera(s) are started when some criteria (e.g., criteria that are less stringent than the AR-trigger criteria) are met by an initial portion of the first input, and the field of view of the camera is optionally displayed before the plane detection is completed. By starting the camera activation and plane detection after satisfaction of certain criteria rather than upon detection of the contact, the cameras and plane detection are not unnecessarily activated and kept going, which conserves battery power and extends battery life and camera life. By starting the camera activation and plane detection before satisfaction of the AR trigger criteria, delay (due to camera activation and plane detection) is reduced for displaying the virtual object transition into the augmented reality view when the AR trigger criteria are met by the first input.
[00245] In some embodiments, the device displays (824) the representation of the virtual object in the second user interface region in a respective manner such that the virtual object (e.g., virtual chair 5020) is oriented at a predefined angle relative to a respective plane (e.g., such that there is no distance (or minimal distance) separating the undersides of the four legs of the virtual chair 5020 from floor surface 5038) that is detected in the field of view 5034 of the one or more cameras. For example, the orientation and/or position of the virtual object relative to a respective plane is predefined based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface (e.g., the respective plane corresponds to a horizontal physical surface that can serve as a support surface for the three-dimensional representation of the virtual object in the augmented reality view (e.g., a horizontal table surface to support a vase), or the respective plane is a vertical physical surface that can serve as a support surface for the three-dimensional representation of the virtual object in the augmented reality view (e.g., a vertical wall to hang a virtual picture frame)). In some embodiments, the orientation and/or position of the virtual object is defined by a respective surface or boundary (e.g., the bottom surface, bottom boundary points, side surface, and/or side boundary points) of the virtual object. In some embodiments, an anchor plane that corresponds to the respective plane is a property in a set of properties of the virtual object, and is specified in accordance with the nature of a physical object that the virtual object is supposed to represent. In some embodiments, the virtual object is placed at a predefined orientation and/or position relative to multiple planes detected in the field of view of the one or more cameras (e.g., multiple respective sides of the virtual object are associated with respective planes detected in the field of view of the camera(s)). In some embodiments, if the orientation and/or position predefined for the virtual object is defined relative to a horizontal bottom plane of the virtual object, the bottom plane of the virtual object is displayed on a floor plane detected in the field of view of the camera(s) (e.g., the horizontal bottom plane of the virtual object is parallel to the floor plane with zero distance from the floor plane). In some embodiments, if the orientation and/or position predefined for the virtual object is defined relative to a vertical back plane of the virtual object, the back surface of the virtual object is placed against a wall plane detected in the field of view of the one or more cameras (e.g., the vertical back plane of the virtual object is parallel to the wall plane with zero distance from the wall plane). In some embodiments, the virtual object is placed at a fixed distance relative to a respective plane and/or at an angle other than zero or right angles relative to the respective plane. Displaying a representation of a virtual object relative to a plane detected in a field of view of the camera(s) (e.g., without requiring further user input to display the virtual object relative to a plane in the field of view of the camera(s)) enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
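Resting an object's anchor plane on a detected plane with zero separation, as described above, could be sketched with SceneKit and ARKit as follows; the helper name is hypothetical:

    import ARKit
    import SceneKit

    // Once a suitable plane anchor exists, rest the virtual object's
    // anchor plane directly on it (zero separation between the object's
    // underside and the detected surface).
    func place(_ objectNode: SCNNode, on plane: ARPlaneAnchor, in scene: SCNScene) {
        let holder = SCNNode()
        holder.simdTransform = plane.transform   // adopt the detected plane's pose
        // Offset by the plane's center so the object sits on the detected
        // surface rather than at the anchor's origin.
        objectNode.simdPosition = plane.center
        holder.addChildNode(objectNode)
        scene.rootNode.addChildNode(holder)
    }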
[00246] In some embodiments, in response to detecting the respective plane in the field of view of the one or more cameras, the device outputs (826), with the one or more tactile output generators 167, a tactile output to indicate the detection of the respective plane in the field of view of the one or more cameras. In some embodiments, a respective tactile output is generated for each plane (e.g., floor surface 5038 and/or table surface 5046) that is detected in the field of view of the camera(s). In some embodiments, the tactile output is generated when plane detection is completed. In some embodiments, the tactile output is accompanied by visual indication (e.g., a momentary highlighting of the field of view plane that has been detected) of the field of view plane in the field of view shown in the second user interface portion. Outputting a tactile output to indicate detection of a plane in a field of view of the camera(s) provides the user with feedback to indicate that the plane has been detected. Providing improved tactile feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing unnecessary additional inputs for placing the virtual object), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
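Emitting one tactile output per detected plane maps onto ARKit's anchor callbacks; this sketch again uses UIKit feedback as a public-API stand-in for tactile output generators 167, and the class name is hypothetical:

    import ARKit
    import UIKit

    // Emit one tactile output per plane the session detects in the field
    // of view.
    final class PlaneDetectionFeedback: NSObject, ARSessionDelegate {
        private let generator = UIImpactFeedbackGenerator(style: .light)

        func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
            for anchor in anchors where anchor is ARPlaneAnchor {
                generator.impactOccurred()   // one tap per detected plane
            }
        }
    }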
[00247] In some embodiments, while switching from displaying the first user interface region to displaying the second user interface region, the device displays (828) an animation as the representation of the virtual object transitions (e.g., moves, rotates, resizes, and/or is re-rendered in a different style, etc.) into the second user interface region to a predefined position relative to the respective plane (e.g., as illustrated at Figures 5F-5I) and, in conjunction with displaying the representation of the virtual object at the predefined angle relative to the respective plane (e.g., at a predefined orientation and/or position relative to the respective plane, and its size, rotation angle, and appearance reaching a final state to be shown in the augmented reality view), the device outputs, with the one or more tactile output generators 167, a tactile output to indicate display of the virtual object at the predefined angle relative to the respective plane in the second user interface region. For example, as illustrated in Figure 5I, the device outputs tactile output 5036 in conjunction with displaying virtual chair 5020 at a predefined angle relative to floor surface 5038. In some embodiments, the tactile output that is generated is configured to have characteristics (e.g., frequency, number of cycles, modulation, amplitude, accompanying audio waves, etc.) reflecting the weight (e.g., heavy vs. light), material (e.g., metal, cotton, wood, marble, liquid, rubber, glass), size (e.g., large vs. small), shape (e.g., thin vs. thick, long vs. short, round vs. spiky, etc.), elasticity (e.g., bouncy vs. stiff), nature (e.g., playful vs. solemn, gentle vs. forceful, etc.), and other properties of the virtual object or the physical object represented by the virtual object. For example, the tactile output uses one or more of the tactile output patterns illustrated at Figures 4F-4K. In some embodiments, a preset profile including one or more changes to one or more characteristics over time corresponds to a virtual object (e.g., an emoji). For example, a "bouncing" tactile output profile is provided for a "smiley" emoji virtual object. Outputting the tactile output to indicate placement of the representation of the virtual object relative to the respective plane provides the user with feedback to indicate that the representation of the virtual object has been automatically placed relative to the respective plane. Providing improved tactile feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing unnecessary additional inputs for placing the virtual object), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
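The settle animation plus end-of-movement tactile output could be sketched with a SceneKit transaction whose completion block fires the haptic; the function name and 0.3-second duration are illustrative:

    import SceneKit
    import UIKit

    // Animate the object into its predefined pose relative to the
    // detected plane, and fire the tactile output exactly when the
    // animation reaches its final state.
    func settle(_ objectNode: SCNNode,
                into finalTransform: SCNMatrix4,
                feedback: UIImpactFeedbackGenerator) {
        SCNTransaction.begin()
        SCNTransaction.animationDuration = 0.3
        SCNTransaction.completionBlock = {
            feedback.impactOccurred()           // tactile "settled" cue at the end
        }
        objectNode.transform = finalTransform   // move/rotate/resize to final pose
        SCNTransaction.commit()
    }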
[00248] In some embodiments (830), the tactile output has a tactile output profile that corresponds to a characteristic (e.g., a simulated physical property such as size, density, mass, and/or material) of the virtual object. In some embodiments, the tactile output profile has characteristics (e.g., frequency, number of cycles, modulation, amplitude, accompanying audio waves, etc.) that vary based on one or more characteristics (e.g., weight, material, size, shape, and/or elasticity) of the virtual object. For example, the tactile output uses one or more of the tactile output patterns illustrated at Figures 4F-4K. In some embodiments, the amplitude and/or duration of the tactile output is increased as the size, weight, and/or mass of the virtual object increases. In some embodiments, a tactile output pattern is selected based on a virtual material of which the virtual object is composed. Outputting a tactile output with a profile that corresponds to a characteristic of the virtual object provides the user with feedback to indicate information about the characteristic of the virtual object. Providing improved tactile feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs, reducing unnecessary additional inputs for placing the virtual object, and providing sensory information that allows a user to perceive the characteristic of the virtual object without cluttering the user interface with displayed information about the characteristic), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
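A tactile output profile that varies with a simulated physical property could be sketched with Core Haptics; the mass-to-intensity mapping below is an invented example of such a profile, and the engine is assumed to have already been created and started (try engine.start()):

    import CoreHaptics

    // Build a one-shot haptic whose intensity is derived from a simulated
    // physical property (mass) of the virtual object.
    func playPlacementHaptic(massKilograms: Float,
                             engine: CHHapticEngine) throws {
        let intensity = CHHapticEventParameter(
            parameterID: .hapticIntensity,
            value: min(1.0, massKilograms / 50.0))   // heavier => stronger
        let sharpness = CHHapticEventParameter(
            parameterID: .hapticSharpness,
            value: 0.3)                              // soft, "thuddy" landing
        let event = CHHapticEvent(eventType: .hapticTransient,
                                  parameters: [intensity, sharpness],
                                  relativeTime: 0)
        let pattern = try CHHapticPattern(events: [event], parameters: [])
        try engine.makePlayer(with: pattern).start(atTime: 0)
    }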
[00249] In some embodiments, while displaying the representation of the virtual object in the second user interface region, the device detects (832) movement of the device (e.g., lateral movement and/or rotation of the device) that adjusts the field of view 5034 of the one or more cameras (e.g., as illustrated in Figures 5K-5L), and, in response to detecting movement of the device, the device adjusts the representation of the virtual object (e.g., virtual chair 5020) in the second user interface region in accordance with a fixed spatial relationship (e.g., orientation and/or position) between the virtual object and the respective plane (e.g., floor surface 5038) in the field of view of the one or more cameras (e.g., the virtual object is displayed with an orientation and a position on the display such that a fixed angle between the representation of the virtual object and the plane is maintained (e.g., the virtual object appears to stay at a fixed location on the plane or roll along the field of view plane)) as the field of view of the one or more cameras is adjusted. For example, in Figures 5K-5L, the virtual chair 5020 in the second user interface region that includes the field of view 5034 of the camera(s) maintains a fixed orientation and position relative to the floor surface 5038 as the device 100 is moved. In some embodiments, the virtual object appears stationary and unchanged relative to the surrounding physical environment 5002; that is, the representation of the virtual object changes in size, position, and/or orientation on the display as the device position and/or orientation is changed, as the field of view of the one or more cameras changes when the device moves relative to the surrounding physical environment. Adjusting the representation of a virtual object in accordance with the fixed relationship between the virtual object and a respective plane (e.g., without requiring further user input to maintain a position of the virtual object relative to the respective plane) enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
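The fixed spatial relationship between the virtual object and the detected plane is what a world-tracking AR framework provides once the object is parented to a world-space anchor. The following Swift/ARKit fragment is a minimal sketch of that idea, not the disclosed implementation.

```swift
import ARKit
import SceneKit

/// Parent the object to a node that carries the detected plane's world pose.
/// World tracking then moves only the camera as the device moves, so the
/// object appears stationary relative to the physical floor while its
/// on-screen size, position, and orientation change with the field of view.
func place(_ objectNode: SCNNode, on planeAnchor: ARPlaneAnchor,
           in sceneView: ARSCNView) {
    let anchorNode = SCNNode()
    anchorNode.simdTransform = planeAnchor.transform  // world pose of the plane
    objectNode.simdPosition = planeAnchor.center      // rest at the plane's center
    anchorNode.addChildNode(objectNode)
    sceneView.scene.rootNode.addChildNode(anchorNode)
}
```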
[00250] In some embodiments (e.g., at a time that corresponds to replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras), the device displays (834) an animation (e.g., movement, rotation about one or more axes, and/or scaling) as the representation of the virtual object (e.g., virtual chair 5020) is continuously displayed while switching from displaying the first user interface region to displaying the second user interface region (e.g., as illustrated in Figures 5F-5I). For example, the animation includes a transition from displaying a two-dimensional representation of the virtual object while displaying the first user interface region to displaying a three-dimensional representation of the virtual object while displaying the second user interface region. In some embodiments, a three-dimensional representation of the virtual object has an orientation that is predefined relative to a current orientation of a portion of the physical environment captured in the field of view of the one or more cameras. In some embodiments, when transitioning to the augmented reality view, a representation of the virtual object is moved, resized, and reoriented from an initial location on the display to a new location on the display (e.g., the center of the augmented reality view, or another predefined location in the augmented reality view), and, during the movement or at the end of the movement, is reoriented such that the virtual object is at a fixed angle relative to a plane (e.g., a physical surface, such as a vertical wall or horizontal floor surface that can support a representation of the virtual object) detected in the field of view of the camera(s). In some embodiments, the lighting of the virtual object and/or a shadow cast by the virtual object are adjusted as the animated transition occurs (e.g., to match ambient lighting detected in the field of view of the one or more cameras). Displaying an animation of the representation of the virtual object while switching from displaying the first user interface region to the second user interface region provides the user with feedback to indicate that the first input meets the first criteria. Providing improved feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
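A hedged sketch of the continuity animation: the same SceneKit node stays on screen while its pose is animated from the layout-derived pose toward the plane-derived pose. The target values are assumed to have been computed from the detected plane.

```swift
import SceneKit

/// Animate the continuously displayed representation from its 2D-layout pose
/// to a pose at a fixed angle to the detected plane (an implicit SceneKit
/// transaction animates position, orientation, and scale together).
func animateIntoARView(_ node: SCNNode,
                       targetPosition: SCNVector3,
                       targetOrientation: SCNQuaternion) {
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.35
    SCNTransaction.animationTimingFunction =
        CAMediaTimingFunction(name: .easeInEaseOut)
    node.position = targetPosition        // move toward the plane location
    node.orientation = targetOrientation  // settle at a fixed angle to it
    node.scale = SCNVector3(1, 1, 1)      // restore real-world scale
    SCNTransaction.commit()
}
```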
[00251] In some embodiments, while displaying the second user interface region on the display, the device detects (836) a second input by a second contact (e.g., contact 5040), wherein the second input includes (optionally, a press or touch input by the second contact to select the representation of the virtual object and) movement of the second contact along a first path across the display (e.g., as illustrated in Figures 5N-5P) and, in response to detecting the second input by the second contact, the device moves the representation of the virtual object (e.g., virtual chair 5020) in the second user interface region along a second path that corresponds to (e.g., is the same as, or is constrained by) the first path. In some embodiments, the second contact is distinct from the first contact and is detected after lift-off of the first contact (e.g., as illustrated by contact 5040 in Figures 5N-5P, which is detected after the lift-off of contact 5026 in Figures 5C-5F). In some embodiments, the second contact is the same as the first contact that is continuously maintained on the touch-sensitive surface (e.g., as illustrated by the input by contact 5086, which meets AR-trigger criteria and then moves across touch screen 112 to move virtual lamp 5084). In some embodiments, a swipe input on the virtual object rotates the virtual object, while the movement of the virtual object is optionally constrained by the plane in the field of view of the camera(s) (e.g., the swipe input rotates a representation of a chair on a floor plane in the field of view of the camera(s)). Moving the representation of the virtual object in response to detecting an input provides the user with feedback to indicate that the displayed position of the virtual object is movable in response to user input. Providing improved feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
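One way to read "a second path that corresponds to (and may be constrained by) the first path" is a per-update hit test of the finger location against detected planes. The sketch below assumes an ARSCNView and a previously selected node; neither name comes from the disclosure.

```swift
import UIKit
import ARKit

final class DragController {
    weak var sceneView: ARSCNView?
    var selectedNode: SCNNode?   // assumed to be set when the object is picked up

    /// Each pan update maps the screen-space path of the contact to a
    /// world-space path across an existing plane, constraining the movement
    /// of the virtual object to that plane.
    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let sceneView, let node = selectedNode else { return }
        let location = gesture.location(in: sceneView)
        if let hit = sceneView.hitTest(location,
                                       types: .existingPlaneUsingExtent).first {
            let t = hit.worldTransform.columns.3
            node.simdWorldPosition = SIMD3<Float>(t.x, t.y, t.z)
        }
    }
}
```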
[00252] In some embodiments, the device adjusts (838) a size of the representation of the virtual object (e.g., based on a virtual distance from the representation of the virtual object to the user, to maintain an accurate perspective of the virtual object in the field of view) as the representation of the virtual object moves along the second path based on the movement of the contact and a respective plane that corresponds to the virtual object. For example, in Figures 5N-5P, the size of virtual chair 5020 decreases as the virtual chair moves deeper into the field of view 5034 of the camera(s), away from device 100 and toward table 5004. Adjusting the size of the representation of the virtual object as the representation of the virtual object moves along the second path based on the movement of the contact and the plane that corresponds to the virtual object (e.g., without requiring further user input to adjust a size of the representation of the virtual object to maintain the representation of the virtual object at a realistic size relative to the environment in the field of view of the camera(s)) enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
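The size adjustment follows directly from perspective projection: for a pinhole camera, on-screen extent is proportional to world size divided by camera-space depth, which is why the chair shrinks as it moves toward the table. A one-function sketch of that relationship:

```swift
/// Approximate on-screen extent (in pixels) of an object of world size
/// `worldSize` (meters) at camera-space depth `depth` (meters), for a camera
/// with focal length `focalLengthPixels`. Keeping worldSize fixed while
/// depth grows makes the projected extent shrink automatically.
func projectedExtent(worldSize: Float, depth: Float,
                     focalLengthPixels: Float) -> Float {
    precondition(depth > 0, "object must be in front of the camera")
    return focalLengthPixels * worldSize / depth
}
```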
[00253] In some embodiments, the device maintains (840) a first size of the representation of the virtual object (e.g., virtual lamp 5084) as the representation of the virtual object moves along the second path (e.g., as illustrated in Figures 5AI-5AL), the device detects termination of the second input by the second contact (e.g., including detecting lift-off of the second contact, as illustrated in Figures 5AL-5AM), and, in response to detecting the termination of the second input by the second contact, the device places the representation of the virtual object at a drop-off location (e.g., on table surface 5046) in the second user interface region and displays the representation of the virtual object at the drop-off location in the second user interface region with a second size that is distinct from the first size (e.g., the size of virtual lamp 5084 in Figure 5AM, after termination of the input by contact 5086, is distinct from the size of virtual lamp 5084 in Figure 5AL, prior to termination of the input by contact 5086). For example, the object does not change its size and viewing perspective while being dragged by the contact, and, when the object is dropped at its final location in the augmented reality view, the object is displayed with a size and viewing perspective determined based on a physical location in the physical environment that corresponds to the drop-off location of the virtual object shown in the field of view of the camera(s), such that, in accordance with a determination that the drop-off location is a first location in the field of view of the camera(s), the object has a second size, and, in accordance with a determination that the drop-off location is a second location in the field of view of the camera(s), the object has a third size that is different from the second size, wherein the second and third sizes are selected based on the distance of the drop-off location from the one or more cameras. Displaying the representation of the virtual object with a changed size in response to detecting termination of the second input that moves the virtual object (e.g., without requiring further user input to adjust a size of the virtual object to maintain the virtual object at a realistic size relative to the environment in the field of view of the camera(s)) enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
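A sketch of the drop-off behavior under stated assumptions: the node holds a constant display size during the drag (hypothetical per-frame scaling, not shown) and is restored to its real-world scale on release, so its final on-screen size depends on the distance of the drop-off location from the camera.

```swift
import ARKit
import SceneKit

/// On lift-off, place the object at the hit location on the plane and restore
/// its world scale; the perspective projection then yields a second or third
/// displayed size depending on how far away the drop-off location is.
func finishDrag(_ node: SCNNode, at dropHit: ARHitTestResult) {
    let p = dropHit.worldTransform.columns.3
    node.simdWorldPosition = SIMD3<Float>(p.x, p.y, p.z)

    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.2
    node.scale = SCNVector3(1, 1, 1)   // real-world scale replaces drag scale
    SCNTransaction.commit()
}
```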
[00254] In some embodiments, in accordance with a determination that the movement of the second contact along the first path across the display meets second criteria (e.g., at the end of the first path, the contact is within a threshold distance of, or outside of, an edge (e.g., bottom edge, top edge, and/or side edge) of the display or an edge of the second user interface region), the device (842): ceases to display the second user interface region including the representation of the field of view of the one or more cameras, and redisplays the (full) first user interface region with the representation of the virtual object (e.g., if a portion of the first user interface region was previously displayed concurrently with the second user interface region, the device displays the full first user interface region after the second user interface region is no longer displayed). For example, in response to movement of contact 5054 that drags virtual chair 5020 to the edge of touch screen 112, as illustrated in Figures 5V-5X, the field of view 5034 of the camera(s) ceases to be displayed and the full messaging user interface 5008 is redisplayed, as illustrated in Figures 5Y-5AD. In some embodiments, as the contact approaches an edge of the display or the edge of the second user interface region, the second user interface region fades out (e.g., as illustrated at Figures 5X-5Y) and/or the undisplayed or blocked portion of the first user interface region fades in (e.g., as illustrated at Figures 5Z-5AA). In some embodiments, the gesture for transitioning from the non-AR view (e.g., the first user interface region) to the AR view (e.g., the second user interface region) and the gesture for transitioning from the AR view to the non-AR view are the same. For example, a drag gesture on the virtual object beyond a threshold position in the currently displayed user interface (e.g., within a threshold distance of a boundary of the currently displayed user interface region, or beyond a boundary of the currently displayed user interface region) causes the transition from the currently displayed user interface region to the counterpart user interface region (e.g., from displaying the first user interface region to displaying the second user interface region, or, alternatively, from displaying the second user interface region to displaying the first user interface region). In some embodiments, the visual indication (e.g., fading out the currently displayed user interface region and fading in the counterpart user interface) is shown before the first/second criteria are met, and is reversible if the input continues and the first/second criteria are not met before termination of the input (e.g., lift-off of the contact) is detected. Redisplaying a first user interface in response to detecting an input that meets input criteria provides additional control options without cluttering the second user interface with additional displayed controls (e.g., controls for displaying the first user interface from the second user interface). Providing additional control options without cluttering the second user interface with additional displayed controls enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
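The edge-based second criteria could be checked as below; the 44-point threshold and the view names are assumptions, and the cross-fade mirrors the described fade-out/fade-in behavior.

```swift
import UIKit

/// If the contact ends near a display edge, fade out the AR (second) region
/// and fade the full first user interface region back in; otherwise leave
/// the AR view in place.
func dragDidEnd(at point: CGPoint, in container: UIView,
                arRegion: UIView, firstRegion: UIView) {
    let threshold: CGFloat = 44
    let nearEdge = point.x < threshold
        || point.y < threshold
        || point.x > container.bounds.maxX - threshold
        || point.y > container.bounds.maxY - threshold
    guard nearEdge else { return }          // criteria not met

    UIView.animate(withDuration: 0.25) {
        arRegion.alpha = 0                  // second region fades out
        firstRegion.alpha = 1               // first region fades in
    } completion: { _ in
        arRegion.removeFromSuperview()
    }
}
```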
[00255] In some embodiments, at a time that corresponds to redisplaying the first user interface region, the device displays (844) an animated transition (e.g., movement, rotation about one or more axes, and/or scaling) from displaying the representation of the virtual object in the second user interface region to displaying the representation of the virtual object in the first user interface region (e.g., as illustrated by the animation of virtual chair 5020 in Figures 5AB-5AD). Displaying an animated transition from displaying the representation of the virtual object in the second user interface to displaying the representation of the virtual object in the first user interface (e.g., without requiring further user input to reposition the virtual object in the first user interface) enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
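For the reverse transition, a minimal sketch is to fly a snapshot of the representation back to its slot in the first user interface region; targetFrame is assumed to be computed from that region's layout.

```swift
import UIKit

/// Animate the representation back into the first user interface region.
func animateReturn(of objectView: UIView, to targetFrame: CGRect) {
    UIViewPropertyAnimator(duration: 0.3, curve: .easeInOut) {
        objectView.frame = targetFrame
        objectView.transform = .identity   // undo any AR-view rotation/scale
    }.startAnimation()
}
```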
[00256] In some embodiments, as the second contact moves along the first path, the device changes (846) a visual appearance of one or more respective planes (e.g., highlighting, marking, outlining, and/or otherwise visually altering the appearance of the one or more planes) identified in the field of view of the one or more cameras that corresponds to a current location of the contact. For example, as contact 5040 drags virtual chair 5020 along a path as illustrated by arrows 5042 and 5044 in Figures 5O-5P, floor surface 5038 is highlighted (e.g., in comparison with Figure 5M, prior to movement of the contact). In some embodiments, in accordance with a determination that the contact is at a location that corresponds to a first plane detected in the field of view of the camera(s), the first plane is highlighted. In accordance with a determination that the contact has moved to a location that corresponds to a second plane detected in the field of view of the camera(s) (e.g., as illustrated in Figures 5S-5U), the first plane (e.g., floor surface 5038) ceases to be highlighted and the second plane (e.g., table surface 5046) is highlighted. In some embodiments, multiple planes are highlighted at the same time. In some embodiments, a first plane of multiple visually altered planes is visually altered in a manner that is distinct from the manner in which the other planes are visually altered, to indicate that the contact is at a location that corresponds to the first plane. Changing the visual appearance of the one or more respective planes identified in the field of view of the camera(s) provides the user with feedback to indicate that a plane (e.g., relative to which the virtual object may be positioned) has been identified. Providing improved visual feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
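Plane highlighting under the contact can be sketched by hit-testing the touch location and restyling the plane visualizations; the per-anchor node map and the colors are assumptions, not the disclosed implementation.

```swift
import ARKit
import SceneKit

/// Emphasize the plane under the current contact location and reset all
/// other detected planes, matching the highlight behavior described above.
func highlightPlane(under location: CGPoint, in sceneView: ARSCNView,
                    planeNodes: [ARPlaneAnchor: SCNNode]) {
    let hitAnchor = sceneView.hitTest(location,
                                      types: .existingPlaneUsingExtent)
        .first?.anchor as? ARPlaneAnchor

    for (anchor, node) in planeNodes {
        let highlighted = (anchor == hitAnchor)
        node.geometry?.firstMaterial?.diffuse.contents =
            highlighted ? UIColor.systemBlue.withAlphaComponent(0.4)
                        : UIColor.white.withAlphaComponent(0.1)
    }
}
```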
[00257] In some embodiments, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact meets third (e.g., staging user interface display) criteria (e.g., the staging user interface display criteria are criteria configured to identify a swipe input, a touch-hold input, a press input, a tap input, or a hard press with an intensity above a predefined intensity threshold), the device displays (848) a third user interface region on the display (e.g., including a 3D model of the virtual object that replaces a 2D image of the virtual object), including replacing display of at least a portion of the first user interface region. In some embodiments, while displaying a staging user interface (e.g., staging user interface 6010 as described with regard to Figure 6I), the device updates the appearance of the representation of the virtual object based on inputs detected that correspond to the staging user interface (e.g., as described in greater detail below with reference to method 900). In some embodiments, when another input is detected while the virtual object is displayed in the staging user interface and the input meets the criteria for transitioning to displaying the second user interface region, the device replaces display of the staging user interface with the second user interface region while continuously displaying the virtual object. More details are described with respect to method 900. Displaying the third user interface in accordance with the determination that the first input meets the third criteria provides additional control options without cluttering the first user interface with additional displayed controls (e.g., controls for displaying the third user interface from the first user interface). Providing additional control options without cluttering the first user interface with additional displayed controls enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
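A sketch of intensity-based routing between the two outcomes described in paragraphs [00257] and [00258]; the 0.75 threshold and the presentation hook are assumptions, and lighter inputs are simply left to the scroll view.

```swift
import UIKit

func showStagingUserInterface(for itemView: UIView) {
    // Hypothetical hook: present the staging user interface (e.g., 6010).
}

/// Route a touch by its normalized force: a hard press meets the third
/// (staging) criteria; anything lighter maintains the first user interface.
func handleForceChange(of touch: UITouch, on itemView: UIView) {
    guard touch.maximumPossibleForce > 0 else { return }  // no force sensing
    let normalizedForce = touch.force / touch.maximumPossibleForce
    if normalizedForce > 0.75 {
        showStagingUserInterface(for: itemView)
    }
    // Otherwise fall through: scrolling and taps behave normally and the
    // first user interface region stays displayed.
}
```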
[00258] In some embodiments, in accordance with a determination that the first input (e.g., a swipe input that corresponds to scrolling the first user interface region, or a tap input that corresponds to a request to display a web page or email corresponding to content in the first user interface region) by the contact does not meet the first (e.g., AR-trigger) criteria, the device maintains (850) display of the first user interface region without replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras (e.g., as described with regard to Figures 6B-6C). Using the first criteria to determine whether to maintain display of the first user interface region or to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface region with the field of view of one or more cameras enables the performance of multiple different types of operations in response to an input. Enabling the performance of multiple different types of operations in response to an input (e.g., by replacing display of at least a portion of the user interface with a field of view of one or more cameras or maintaining display of the first user interface region without replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras) increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00259] It should be understood that the particular order in which the operations in Figures 8A-8E have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 900 and 1000) are also applicable in an analogous manner to method 800 described above with respect to Figures 8A-8E. For example, the contacts, inputs, virtual objects, user interface regions, intensity thresholds, tactile outputs, fields of view, movements, and/or animations described above with reference to method 800 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, intensity thresholds, tactile outputs, fields of view, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 900, 1000, 16000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
[00260] Figures 9A-9D are flow diagrams illustrating method 900 of displaying a first representation of a virtual object in a first user interface region, a second representation of the virtual object in a second user interface region, and a third representation of the virtual object with a representation of a field of view of one or more cameras, in accordance with some embodiments. Method 900 is performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) with a display, a touch-sensitive surface, and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite from the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 900 are, optionally, combined and/or the order of some operations is, optionally, changed.
[00261] As described below, method 900 relates to detecting input by a contact at a touch-sensitive surface of a device that displays a representation of a virtual object in a first user interface (e.g., a two-dimensional graphic user interface). In response to a first input, the device uses criteria to determine whether to display a second representation of the virtual object in a second user interface (e.g., a staging user interface in which a three-dimensional representation of the virtual object can be moved, resized, and/or reoriented). While displaying the second representation of the virtual object in the second user interface, in response to a second input, the device either changes a display property of the second representation of the virtual object based on the second input or displays a third representation of the virtual object in a third user interface that includes a field of view of one or more cameras of the device. Enabling the performance of multiple different types of operations in response to an input (e.g., by changing a display property of a virtual object or displaying the virtual object in a third user interface) increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00262] The device displays (902) a first representation of a virtual object (e.g., a graphical representation of a three-dimensional object, such as virtual chair 5020, virtual lamp 5084, shoes, furniture, hand tools, decorations, people, an emoji, a game character, virtual furniture, etc.) in a first user interface region (e.g., a two-dimensional graphic user interface or a portion thereof (e.g., a browsable list of furniture images, an image containing one or more selectable objects, etc.)) on the display 112. For example, the first user interface region is messaging user interface 5008 as shown in Figure 6A. In some embodiments, the first user interface region includes a background other than an image of a physical environment surrounding the device (e.g., the background of the first user interface region is a preselected background color/pattern, or a background image that is distinct from an output image concurrently captured by the one or more cameras and distinct from live content in a field of view of the one or more cameras).
[00263] While displaying the first representation of the virtual object in the first user interface region on the display, the device detects (904) a first input by a first contact at a location on the touch-sensitive surface that corresponds to the first representation of the virtual object on the display (e.g., the first contact is detected on the first representation of the virtual object on a touch-screen display, or the first contact is detected on an affordance (e.g., toggle control 6018) that is concurrently displayed in the first user interface region with the first representation of the virtual object and that is configured to trigger display of an AR view (e.g., field of view 6036 of the camera(s)) and/or a staging user interface 6010 that includes a representation of the virtual object (e.g., virtual chair 5020) when invoked by the first contact). For example, the first input is an input by contact 6006 as described with regard to Figures 6E-6I.
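Detecting the first input "at a location that corresponds to the first representation" reduces, in a sketch, to attaching a recognizer to the representation's view; the 0.5-second hold below is an assumed touch-hold criterion, not a value from the disclosure.

```swift
import UIKit

/// Attach a touch-hold recognizer to the 2D representation of the virtual
/// object in the first user interface region.
func attachPickupRecognizer(to objectImageView: UIImageView,
                            target: Any, action: Selector) {
    objectImageView.isUserInteractionEnabled = true  // image views default to off
    let press = UILongPressGestureRecognizer(target: target, action: action)
    press.minimumPressDuration = 0.5                 // assumed hold criterion
    objectImageView.addGestureRecognizer(press)
}
```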
[00264] In response to detecting the first input by the first contact and in accordance with a determination that the first input by the first contact meets first (e.g., staging-trigger) criteria (e.g., the staging-trigger criteria are criteria configured to identify a swipe input, a touch-hold input, a press input, a tap input, touch down of a contact, initial movement of a contact, or another type of predefined input gesture that is associated with triggering the activation of the camera(s) and/or detection of field of view planes in a field of view of the camera(s)), the device displays (906) a second representation of the virtual object in a second user interface region that is different from the first user interface region (e.g., the second user interface region is a staging user interface 6010 that does not include the field of view of the camera(s) and that includes a simulated three-dimensional space in which a three-dimensional representation of the virtual object may be manipulated (e.g., rotated and moved) in response to user input). For example, in Figures 6E-6H, in accordance with a determination that an input by contact 6006 has a characteristic intensity that increases above a deep press intensity threshold ITD, virtual chair object 5020 is displayed in a staging user interface 6010 (e.g., as shown in Figure 6I) that is distinct from the messaging user interface 5008 (e.g., as shown in Figure 6E).
[00265] In some embodiments, in response to detecting the first input and in accordance with a determination that the first input meets the staging trigger criteria, the device displays a first animated transition that shows a three-dimensional representation of the virtual object being moved and reoriented from a first orientation as shown in the first user interface region (e.g., a first orientation of virtual chair 5020 as shown in messaging user interface 5008 in Figure 6E) to a second orientation that is determined based on a virtual plane on the display that is oriented independent of a current orientation of the device relative to the physical environment surrounding the device (e.g., a second orientation of virtual chair 5020 determined based on stage plane 6014, as shown in Figure 6I). For example, the three-dimensional representation of the virtual object has a predefined orientation and/or distance from a plane (e.g., based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface) and, when transitioning to the staging view (e.g., staging user interface 6010), the three-dimensional representation is moved, resized, and reoriented from the original location of the virtual object on the display to a new location on the display (e.g., the center of the virtual stage 6014), and, during the movement or at the end of the movement, the three-dimensional representation is reoriented such that the virtual object is at a fixed angle relative to a predefined staging virtual plane 6014 that is defined independent of the physical environment surrounding the device.
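One way to picture the end state of this transition is a transform that centers the object on the stage plane and tilts it to a fixed angle relative to that plane. The following sketch is an assumption for illustration, not the patented implementation; StagePlane and the parameter names are hypothetical.

```swift
import simd

struct StagePlane {
    var center: SIMD3<Float>   // stand-in for the center of virtual stage 6014
}

func stagingTransform(for plane: StagePlane, fixedTiltRadians: Float) -> simd_float4x4 {
    // Fixed angle relative to the staging plane (here a rotation about the x-axis),
    // defined on the display and independent of the device's physical orientation.
    let tilt = simd_quatf(angle: fixedTiltRadians, axis: SIMD3<Float>(1, 0, 0))
    var transform = simd_float4x4(tilt)
    // Move the representation to the center of the virtual stage.
    transform.columns.3 = SIMD4<Float>(plane.center.x, plane.center.y, plane.center.z, 1)
    return transform
}
```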
[00266] While displaying the second representation of the virtual object in the second user interface region, the device detects (908) a second input (e.g., an input by contact 6034 as illustrated at Figures 6Q-6T). In some embodiments, detecting the second input includes detecting one or more second contacts at a location on the touch-screen corresponding to the second representation of the virtual object, detecting a second contact on an affordance that is configured to trigger display of an augmented reality view of the physical environment surrounding the device when invoked by the second contact, detecting movement of the second contact(s), and/or detecting lift-off of the second contact(s). In some embodiments, the second input is a continuation of the first input by the same contact (e.g., the second input is an input by contact 6034 as illustrated at Figures 6Q-6T following the first input by contact 6006 as illustrated at Figures 6E-6I (e.g., with no liftoff of the contact)), or a separate input with a completely different contact (e.g., the second input is an input by contact 6034 as illustrated at Figures 6Q-6T following the first input by contact 6006 as illustrated at Figures 6E-6I (e.g., with a liftoff of the contact)), or a continuation of the input with an additional contact (e.g., the second input is the input by contact 6006 as illustrated at Figures 6J-6L following the first input by contact 6006 as illustrated at Figures 6E-6I). For example, the second input may be a continuation of a swipe input, a second tap input, a second press input, a press input that followed the first input, a second touch-hold input, a sustained touch that continues from the first input, etc.
[00267] In response to detecting the second input (910): in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region (e.g., without transitioning to the augmented reality view), the device changes a display property of the second representation of the virtual object within the second user interface region based on the second input, and in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, the device displays a third representation of the virtual object with a representation of a field of view of the one or more cameras (e.g., the device displays a third user interface that includes a field of view 6036 of the one or more cameras and places a three-dimensional representation of the virtual object (e.g., virtual chair 5020) on a virtual plane (e.g., floor surface 5038) detected within the field of view of the camera(s) that corresponds to a physical plane (e.g., the floor) in the physical environment 5002 surrounding the device).
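The two-way branch on the second input can be summarized as follows. This sketch is illustrative only; VirtualObject, SecondInput, and presentAugmentedRealityView(with:) are hypothetical placeholders rather than the patented method.

```swift
struct VirtualObject {
    var scale: Float = 1
    var yawRadians: Float = 0
}

enum SecondInput {
    case pinch(scaleDelta: Float)   // manipulate within the staging UI
    case swipe(yawDelta: Float)     // manipulate within the staging UI
    case deepPress                  // request the augmented reality view (e.g., contact 6034)
}

func presentAugmentedRealityView(with object: VirtualObject) {
    // Placeholder: display the third representation with the camera field of view.
}

func handleSecondInput(_ input: SecondInput, on object: inout VirtualObject) {
    switch input {
    case .pinch(let scaleDelta):
        object.scale *= scaleDelta          // change a display property (size)
    case .swipe(let yawDelta):
        object.yawRadians += yawDelta       // change a display property (rotation)
    case .deepPress:
        presentAugmentedRealityView(with: object)
    }
}
```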
[00268] In some embodiments, the second input that corresponds to a request to manipulate the virtual object in the second user interface region is a pinch or swipe by the second contact(s) at a location on the touch-sensitive surface that corresponds to the second representation of the virtual object in the second user interface region. For example, the second input is an input by contact 6006 as illustrated at Figures 6J-6L or an input by contacts 6026 and 6030 as illustrated at Figures 6N-6O.
[00269] In some embodiments, the second input that corresponds to a request to display the virtual object in an augmented reality environment is a tap input, a press input, or a touch-hold or press input followed by a drag input, at or from a location on the touch-sensitive surface that corresponds to the representation of the virtual object in the second user interface region. For example, the second input is a deep press input by contact 6034 as illustrated at Figures 6Q-6T.
[00270] In some embodiments, changing a display property of the second representation of the virtual object within the second user interface region based on the second input includes rotating about one or more axes (e.g., via a vertical and/or horizontal swipe), resizing (e.g., a pinch to resize), tilting about one or more axes (e.g., by tilting the device), changing a perspective (e.g., by moving the device horizontally, which in some embodiments is used for the analysis of the field of view of the one or more cameras to detect one or more field of view planes), and/or changing a color of the representation of the virtual object. For example, changing a display property of the second representation of the virtual object includes rotating the virtual chair 5020 in response to a horizontal swipe gesture by contact 6006 as illustrated in Figures 6J-6K, rotating the virtual chair 5020 in response to a diagonal swipe gesture by contact 6006 as illustrated in Figures 6K-6L, or increasing the size of virtual chair 5020 in response to a depinch gesture by contacts 6026 and 6030 as illustrated in Figures 6N-6O. In some embodiments, the amount by which the display property of the second representation of the virtual object is changed is correlated with an amount by which a property of the second input changes (e.g., distance or speed of movement by the contact(s), intensity of contact, duration of contact, etc.).
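The correlation between gesture magnitude and property change might look like the following sketch, where the gain constants (e.g., radiansPerPoint) are illustrative assumptions and not values from the specification.

```swift
struct StagedObject {
    var yawRadians: Float = 0
    var scale: Float = 1
}

let radiansPerPoint: Float = .pi / 180      // assumed gain: 1 degree of yaw per point of travel

func applySwipe(distance: Float, to object: inout StagedObject) {
    // Rotation changes in proportion to the swipe's travel distance.
    object.yawRadians += distance * radiansPerPoint
}

func applyDepinch(scaleFactor: Float, to object: inout StagedObject) {
    // Size changes in proportion to the pinch spread, e.g., contacts 6026 and 6030.
    object.scale *= scaleFactor
}
```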
[00271] In some embodiments, in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment (e.g., in the field of view 6036 of the one or more cameras, as described with regard to Figure 6T), the device displays a second animated transition that shows the three-dimensional representation of the virtual object being reoriented from the respective orientation relative to the virtual plane on the display (e.g., the orientation of virtual chair 5020 shown in Figure 6R) to a third orientation that is determined based on the current orientation of the portion of the physical environment captured in the field of view of the one or more cameras (e.g., the orientation of virtual chair 5020 shown in Figure 6T). For example, the three-dimensional representation of the virtual object is reoriented such that the three-dimensional representation of the virtual object is at a fixed angle relative to a predefined plane (e.g., floor surface 5038) identified in the live image of the physical environment 5002 (e.g., a physical surface, such as a vertical wall or horizontal floor surface that can support the three-dimensional representation of the virtual object) captured in the field of view of the camera(s). In some embodiments, the orientation of the virtual object in the augmented reality view is constrained by the orientation of the virtual object in the staging user interface in at least one aspect. For example, the rotational angle of the virtual object around at least one axis of a three-dimensional coordinate system is maintained when transitioning the virtual object from the staging user interface to the augmented reality view (e.g., as described with regard to Figures 6Q-6U, a rotation of virtual chair 5020 as described with regard to Figures 6J-6K is maintained). In some embodiments, a source of light cast on the representation of the virtual object in the second user interface region is a virtual light source. In some embodiments, the third representation of the virtual object in the third user interface region is illuminated by a real world light source (e.g., as detected in and/or determined from the field of view of the one or more cameras).
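A sketch of preserving one rotational axis while re-anchoring to the detected plane is shown below; the function and parameter names are assumptions made for illustration.

```swift
import simd

func arTransform(planeAnchor: SIMD3<Float>, stagingYawRadians: Float) -> simd_float4x4 {
    // Keep the rotation about the vertical axis that the user applied in the staging view.
    let yaw = simd_quatf(angle: stagingYawRadians, axis: SIMD3<Float>(0, 1, 0))
    var transform = simd_float4x4(yaw)
    // Sit the object on the plane detected in the camera feed (e.g., floor surface 5038),
    // at a fixed angle relative to that plane.
    transform.columns.3 = SIMD4<Float>(planeAnchor.x, planeAnchor.y, planeAnchor.z, 1)
    return transform
}
```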
[00272] In some embodiments, the first criteria include (912) criteria that are satisfied when (e.g., in accordance with a determination that) the first input includes a tap input by the first contact at a location on the touch-sensitive surface that corresponds to a virtual object indicator 5022 (e.g., an indicator, such as an icon, displayed overlaying and/or adjacent to the representation of the virtual object on the display). For example, the virtual object indicator 5022 provides an indication that the virtual object to which it corresponds is viewable in a staging view (e.g., staging user interface 6010) and an augmented reality view (e.g., field of view 6036 of the camera(s)) (e.g., as described in greater detail below with reference to method 1000). Determining whether to display the second representation of the virtual object in the second user interface region, depending on whether the first input includes a tap input, enables the performance of multiple different types of operations in response to the first input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00273] In some embodiments, the first criteria include (914) criteria that are satisfied when (e.g., in accordance with a determination that) the first contact is maintained at the location on the touch-sensitive surface that corresponds to the first representation of the virtual object with less than a threshold amount of movement for at least a predefined threshold amount of time (e.g., a long press time threshold). For example, the first criteria are met by a touch-hold input. In some embodiments, the first criteria include a criterion that requires a movement of the first contact after the first contact has been maintained at the location on the touch-sensitive surface that corresponds to the representation of the virtual object with less than the threshold amount of movement for at least the predefined threshold amount of time, in order for the criterion to be met. For example, the first criteria are met by a touch-hold input followed by a drag input. Determining whether to display the second representation of the virtual object in the second user interface region, depending on whether the contact is maintained at a location on a touch-sensitive surface that corresponds to the representation of the virtual object with less than a threshold amount of movement for at least a predefined amount of time, enables the performance of multiple different types of operations in response to the first input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
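The touch-hold criterion (little movement for at least a hold duration, optionally followed by a drag) can be sketched as below; the threshold values and names are assumptions, not values from the specification.

```swift
import Foundation

struct TouchHoldCriteria {
    var maxMovement: Double = 4.0            // assumed drift tolerance, in points
    var minDuration: TimeInterval = 0.5      // assumed long-press time threshold
    var requiresDragAfterHold: Bool = false  // optional follow-on drag criterion
}

func meetsTouchHoldCriteria(movementDuringHold: Double,
                            holdDuration: TimeInterval,
                            draggedAfterHold: Bool,
                            criteria: TouchHoldCriteria) -> Bool {
    // The contact must stay nearly still for at least the hold duration...
    guard movementDuringHold < criteria.maxMovement,
          holdDuration >= criteria.minDuration else { return false }
    // ...and, in some embodiments, must then move (touch-hold followed by drag).
    return criteria.requiresDragAfterHold ? draggedAfterHold : true
}
```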
[00274] In some embodiments, the first criteria include (916) criteria that are satisfied when (e.g., in accordance with a determination that) a characteristic intensity of the first contact increases above a first intensity threshold (e.g., a deep press intensity threshold ITD). For example, as described with regard to Figures 6Q-6T, criteria are satisfied when a characteristic intensity of the contact 6034 increases above deep press intensity threshold ITD, as indicated by intensity level meter 5028. In some embodiments, in accordance with a determination that the contact satisfies criteria for recognizing another type of gesture (e.g., a tap), the device performs another predefined function other than triggering the second (e.g., staging) user interface while maintaining display of the virtual object. In some embodiments, the first criteria require that the first input is not a tap input (e.g., a hard tap input with an intensity reaching above the threshold intensity before lift-off of the contact is detected within a tap time threshold of initial touch-down of the contact). In some embodiments, the first criteria include a criterion that requires a movement of the first contact after the intensity of the first contact has exceeded the first intensity threshold, in order for the criterion to be met. For example, the first criteria are met by a press input followed by a drag input. Determining whether to display the virtual object in a second user interface region, depending on whether a characteristic intensity of a contact increases above a first intensity threshold, enables the performance of multiple different types of operations in response to the first input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00275] In some embodiments, in response to detecting the first input by the first contact and in accordance with a determination that the first input by the first contact meets second criteria (e.g., interface-scroll criteria), wherein the second criteria require that the first input includes movement of the first contact in a direction across the touch-sensitive surface for more than a threshold distance (e.g., the second criteria are met by a swipe gesture, such as a vertical or horizontal swipe gesture), the device scrolls (918) the first user interface region (and the representation of the virtual object) in a direction that corresponds to the direction of movement of the first contact (e.g., the first criteria are not met and displaying the representation of the virtual object in the second user interface region is forgone). For example, as described with regard to Figures 6B-6C, an upward vertical swipe gesture by contact 6002 causes the messaging user interface 5008 and the virtual chair 5020 to scroll upward. In some embodiments, the first criteria also require that the first input includes movement of the first contact for more than a threshold distance in order for the first criteria to be met, and the device determines whether the first input meets the first criteria (e.g., staging-trigger criteria) or the second criteria (e.g., interface-scrolling criteria) based on whether an initial portion of the first input meets object-selection criteria (e.g., a touch-hold or press on the representation of the virtual object). In some embodiments, the second criteria are met by a swipe input that is initiated at a touch-location outside of the location of the virtual object and the AR icon of the virtual object. Determining whether to scroll the first user interface region in response to the first input, depending on whether the first input meets the second criteria, enables the performance of multiple different types of operations in response to the first input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
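When both paths involve movement past a threshold, the initial portion of the input decides between them, which might be expressed as in the following sketch (names are hypothetical, for illustration only):

```swift
enum SwipeOutcome {
    case stageObject        // initial portion selected the object (touch-hold/press on it)
    case scrollInterface    // swipe began outside the object and its AR icon
}

func resolveSwipe(initialPortionSelectedObject: Bool,
                  movement: Double,
                  distanceThreshold: Double) -> SwipeOutcome? {
    // Until the contact travels past the threshold, neither criteria set is met.
    guard movement > distanceThreshold else { return nil }
    return initialPortionSelectedObject ? .stageObject : .scrollInterface
}
```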
[00276] In some embodiments, in response to detecting the first input by the first contact and in accordance with a determination that the first input by the first contact meets third (e.g., AR-trigger) criteria, the device displays (920) the third representation of the virtual object with the representation of the field of view of the one or more cameras. For example, as described with regard to Figures 6AD-6AG, a long touch input by contact 6044 followed by an upward drag input by contact 6044 that drags virtual chair 5020 causes the virtual chair 5020 to be displayed with the field of view 6036 of the camera(s).
[00277] In some embodiments, the third criteria include, e.g., criteria that are satisfied in accordance with a determination that: the one or more cameras are in an active state, the device orientation falls within a defined range (e.g., from a defined origin orientation, a defined angle of rotation about one or more axes), the input by the contact includes a selection input (e.g., a long touch) followed by a drag input (movement of the contact that moves the virtual object on the display (e.g., to within a predetermined distance from the edge of the display)), the characteristic intensity of the contact increases above an AR-trigger intensity threshold (e.g., a light-press threshold ITL or a deep-press threshold ITD), a duration of the contact increases above an AR-trigger duration threshold (e.g., a long press threshold), and/or a distance traversed by the contact increases above an AR-trigger distance threshold (e.g., a long swipe threshold). In some embodiments, a control (e.g., toggle control 6018) for displaying the representation of the virtual object in the second user interface region (e.g., staging user interface 6010) is displayed in the user interface (e.g., the third user interface region that replaces at least a portion of the second user interface region) that includes the representation of the virtual object and the field of view 6036 of the one or more cameras.
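One possible combination of these AR-trigger conditions is sketched below. Which subset applies is an embodiment choice, and every threshold and field name here is an illustrative assumption.

```swift
import Foundation

struct ARTriggerState {
    var camerasActive: Bool
    var contactDuration: TimeInterval    // compared to an AR-trigger duration threshold
    var dragDistance: Double             // compared to an AR-trigger distance threshold
    var distanceToDisplayEdge: Double    // how close the object was dragged to the edge
}

func meetsThirdCriteria(_ state: ARTriggerState) -> Bool {
    state.camerasActive
        && state.contactDuration > 0.5        // assumed long-press threshold, seconds
        && state.dragDistance > 50            // assumed long-swipe threshold, points
        && state.distanceToDisplayEdge < 40   // assumed edge-proximity threshold, points
}
```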
[00278] In some embodiments, when transitioning directly from the first user interface region (e.g., the non-AR, non-staging, touch-screen UI view) to the third user interface region (e.g., the augmented reality view), the device displays an animated transition that shows the three-dimensional representation of the virtual object being reoriented from the respective orientation represented in the touch-screen UI (e.g., the non-AR, non-staging view) on the display to an orientation that is predefined relative to the current orientation of the portion of the physical environment captured in the field of view of the one or more cameras. For example, as shown in Figures 6AD-6AJ, when transitioning directly from a first user interface region (e.g., messaging user interface 5008, as shown in Figure 6AD) to the third user interface region (e.g., the augmented reality user interface that includes the field of view 6036 of the camera(s), as shown in Figure 6AJ), virtual chair 5020 changes from a first orientation as shown in Figures 6AD-6AH to a predefined orientation relative to floor surface 5038 in physical environment 5002 as captured in the field of view 6036 of the camera(s) (e.g., as shown in Figure 6AJ). For example, the three-dimensional representation of the virtual object is reoriented such that the three-dimensional representation of the virtual object is at a fixed angle relative to a predefined plane identified in the live images of the physical environment 5002 (e.g., a physical surface, such as a vertical wall or horizontal floor surface (e.g., floor surface 5038) that can support the three-dimensional representation of the virtual object). Determining whether to display the third representation of the virtual object with the field of view of the camera(s) in response to the first input, depending on whether the first input meets third criteria, enables the performance of multiple different types of operations in response to the first input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00279] In some embodiments, in response to detecting the first input by the first contact, the device determines (922), by the one or more device orientation sensors, a current device orientation of the device (e.g., an orientation relative to the physical environment surrounding the device) and the third criteria (e.g., AR trigger criteria) require that the current device orientation be within a first range of orientations in order for the third criteria to be met (e.g., the third criteria are met when the angle between the device and the ground is below a threshold angle, indicating that the device is sufficiently parallel to the ground (to bypass the interstitial state)). In some embodiments, the first criteria (e.g., staging trigger criteria) require that the current device orientation be within a second range of orientations in order for the first criteria to be met (e.g., the first criteria are met when the angle between the device and the ground is within a threshold value of 90 degrees, indicating that the device is sufficiently upright relative to the ground to go to the interstitial state first). Determining whether to display the third representation of the virtual object with a field of view of the camera(s) in response to the first input, depending on whether the device orientation is within a range of orientations, enables the performance of multiple different types of operations in response to the first input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
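The orientation-based routing might be expressed as below; the 30-degree tolerances are illustrative assumptions rather than values from the specification.

```swift
enum FirstInputDestination {
    case augmentedRealityView   // device roughly parallel to the ground
    case stagingUserInterface   // device roughly upright relative to the ground
    case noTransition
}

func destination(forDeviceTiltDegrees tilt: Double) -> FirstInputDestination {
    if tilt < 30 {
        // Sufficiently parallel to the ground: bypass the interstitial state.
        return .augmentedRealityView
    } else if abs(tilt - 90) < 30 {
        // Sufficiently upright: show the staging (interstitial) state first.
        return .stagingUserInterface
    }
    return .noTransition
}
```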
[00280] In some embodiments, at least one display property (e.g., size, shape, respective angles around the yaw, pitch, and roll axes, etc.) of the second representation of the virtual object is applied (924) to the third representation of the virtual object. For example, as described with regard to Figures 6Q-6U, a rotation of the second representation of virtual chair 5020 applied in the staging user interface 6010, as described with regard to Figures 6J-6K, is maintained when the third representation of the virtual chair 5020 is displayed in the augmented reality view that includes the field of view 6036 of the camera(s) (e.g., as shown in Figure 6U). In some embodiments, the orientation of the virtual object in the augmented reality view is constrained by the orientation of the virtual object in the staging user interface in at least one aspect. For example, the rotational angle of the virtual object around at least one axis (e.g., yaw, pitch, or roll axis) of a predefined three-dimensional coordinate system is maintained when transitioning the virtual object from the staging view to the augmented reality view. In some embodiments, the at least one display property of the second representation of the virtual object is only applied to the third representation of the virtual object if the second representation of the virtual object has been manipulated in some way (e.g., changed in size, shape, texture, orientation, etc.) by user input. In other words, the changes made in the staging view are maintained when the object is shown in the augmented reality view or used to constrain the appearance of the object in the augmented reality view in one or more ways. Applying at least one display property of the second representation of the virtual object to the third representation of the virtual object (e.g., without requiring further user input to apply the same display property to the second representation of the virtual object and the third representation of the virtual object) enhances the operability of the device (e.g., by allowing the user to apply a rotation to the second virtual object while a large version of the virtual object is displayed in the second user interface and applying the rotation to the third representation of the virtual object displayed with the representation of the field of view of the one or more cameras), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00281] In some embodiments, in response to detecting at least an initial portion of the first input by the first contact (926) (e.g., including detecting the first contact, or detecting an input by the first contact that meets respective predefined criteria without meeting the first criteria, or detecting an input that meets the first criteria): the device activates the one or more cameras (e.g., activating the camera(s) without immediately displaying the field of view of the camera(s) on the display) and the device analyzes the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras. In some embodiments, displaying the field of view 6036 of the one or more cameras is delayed after activating the one or more cameras (e.g., until the second input that corresponds to the request to display the virtual object in an augmented reality environment is detected, until at least one field of view plane is detected, or until a field of view plane that corresponds to an anchor plane defined for the virtual object is detected). In some embodiments, the field of view 6036 of the one or more cameras is displayed at a time that corresponds to (e.g., at the same time as) activation of the one or more cameras. In some embodiments, the field of view 6036 of the one or more cameras is displayed before a plane is detected in the field of view of the one or more cameras (e.g., the field of view of the one or more cameras is displayed in response to detecting the first input by the contact and in accordance with the determination). Activating the camera(s) and analyzing the field of view of the camera(s) to detect one or more field of view planes in response to detecting an initial portion of the first input (e.g., prior to displaying the third representation of the virtual object with the representation of the field of view of the one or more cameras) enhances the efficiency of the device (e.g., by reducing the amount of time required to determine a position and/or orientation of the third representation of the virtual object relative to a respective plane in the field of view of the camera(s)), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
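The warm-up pattern (activate and analyze early, defer display) can be sketched as follows. The class shape and method names are assumptions made for illustration, not the patented implementation.

```swift
final class ARWarmup {
    private(set) var cameraActive = false
    private(set) var detectedPlanes: [String] = []

    func onInitialInputPortion() {
        cameraActive = true           // activate without displaying the field of view
        analyzeFramesForPlanes()
    }

    private func analyzeFramesForPlanes() {
        // Placeholder: a real implementation would run plane estimation on
        // camera frames and append planes as they are found.
        detectedPlanes.append("floor")
    }

    func shouldDisplayFieldOfView(arRequested: Bool) -> Bool {
        // Display is deferred, e.g., until the AR request arrives or at
        // least one field of view plane has been detected.
        arRequested || !detectedPlanes.isEmpty
    }
}
```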
[00282] In some embodiments, in response to detecting a respective plane (e.g., floor surface 5038) in the field of view of the one or more cameras, the device outputs (928), with one or more tactile output generators 167, a tactile output to indicate the detection of a respective plane in the field of view of the one or more cameras. In some embodiments, the field of view 6036 can be shown before the field of view plane is identified. In some embodiments, additional user interface controls and/or icons are overlaid on the real-world image in the field of view after at least one field of view plane is detected or after all of the field of view planes are identified. Outputting a tactile output to indicate detection of a plane in a field of view of the camera(s) provides the user with feedback to indicate that the plane has been detected. Providing improved tactile feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing unnecessary additional inputs for placing the virtual object), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
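By way of illustration only, the tactile output described in paragraph [00282] could be produced along the following lines on a touch-screen device; the class name, the single-shot flag, and the choice of feedback generator are assumptions of this sketch, not part of the described method.

import ARKit
import UIKit

// Illustrative sketch: emit a tactile output the first time a plane
// (e.g., a floor surface) is detected in the field of view of the
// camera(s). All names here are hypothetical.
final class PlaneDetectionHaptics: NSObject, ARSessionDelegate {
    private let feedback = UIImpactFeedbackGenerator(style: .medium)
    private var hasSignaledPlane = false

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        // Fire once, on the first plane anchor reported by the session.
        guard !hasSignaledPlane,
              anchors.contains(where: { $0 is ARPlaneAnchor }) else { return }
        hasSignaledPlane = true
        feedback.impactOccurred() // the tactile output indicating plane detection
    }
}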
[00283] In some embodiments, a size of the third representation of the virtual object on the display is determined (930) based on a simulated real-world size of the virtual object and a distance between the one or more cameras and a location in the field of view 6036 of the one or more cameras with which the third representation of the virtual object (e.g., virtual chair 5020) has a fixed spatial relationship (e.g., a plane, such as floor surface 5038, to which the virtual object is attached). In some embodiments, the size of the third representation of the virtual object is constrained such that the scale of the size of the third representation of the virtual object relative to the field of view of the one or more cameras is maintained. In some embodiments, one or more physical dimension parameters (e.g., length, width, depth, and/or radius) are defined for a virtual object. In some embodiments, in the second user interface (e.g., the staging user interface), the virtual object is unconstrained by its defined physical dimension parameters (e.g., the size of the virtual object is changeable in response to user input). In some embodiments, the third representation of the virtual object is constrained by its defined dimension parameters. When user input is detected to change the location of the virtual object in the augmented reality view relative to the physical environment represented in the field of view, or when user input is detected to change the zoom level of the field of view, or when user input is detected to move relative to the physical environment surrounding the device, the appearance of the virtual object (e.g., size, viewing perspective) will change in a manner that is constrained by a fixed spatial relationship between the virtual object and the physical environment (e.g., as represented by the fixed spatial relationship
between the anchor plane of the virtual object and the corresponding plane in the augmented reality environment) and a fixed scale based on predefined dimensional parameters of the virtual object and the actual dimensions of the physical environment. Determining a size of the third representation of the virtual object based on a simulated real-world size of the virtual object and a distance between the one or more cameras and a location in the field of view of the camera(s) (e.g., without requiring further user input to resize the third representation of the virtual object to simulate a real-world size of the virtual object) enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
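As a worked illustration of the scaling rule above (and only that; the focal-length parameter and function name are assumptions of this sketch), under a pinhole camera model the on-screen extent of the third representation is proportional to the simulated real-world size and inversely proportional to the distance to the camera(s):

import CoreGraphics

// Sketch: on-screen size grows with simulated real-world size and
// shrinks with distance from the camera(s). `focalLengthPixels`
// stands in for the camera intrinsics.
func projectedSize(realWorldSizeMeters: CGFloat,
                   distanceToCameraMeters: CGFloat,
                   focalLengthPixels: CGFloat) -> CGFloat {
    precondition(distanceToCameraMeters > 0)
    return focalLengthPixels * realWorldSizeMeters / distanceToCameraMeters
}

// Example: a 1 m tall virtual chair seen from 2 m with a 1,500 px focal
// length spans about 750 px; at 4 m the same chair spans about 375 px,
// so the fixed real-world scale is preserved as the camera moves.
let spanAtTwoMeters = projectedSize(realWorldSizeMeters: 1.0,
                                    distanceToCameraMeters: 2.0,
                                    focalLengthPixels: 1500)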
[00284] In some embodiments, the second input that corresponds to the request to display the virtual object in an augmented reality environment includes (932) an input that (selects and) drags the second representation of the virtual object (e.g., by a distance that increases above a distance threshold, beyond a defined boundary, and/or to a location that is within a threshold distance of an edge (e.g., bottom edge, top edge, and/or side edge) of the display or the second user interface region). Displaying the third representation of the virtual object with the representation of the field of view of the camera(s) in response to detecting the second input that corresponds to the request to display the virtual object in an augmented reality environment provides additional control options without cluttering the second user interface with additional displayed controls (e.g., controls for displaying the augmented reality environment from the second user interface). Providing additional control options without cluttering the second user interface with additional displayed controls enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
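One possible reading of the drag criteria above, sketched in code; the threshold values and the transition hook are illustrative assumptions, not values taken from the description.

import UIKit

// Sketch: treat a drag of the second representation as a request to
// display the augmented reality environment once the touch travels
// beyond a distance threshold or comes within a margin of a display
// edge. `enterAugmentedRealityView()` is a hypothetical hook.
final class DragToARController {
    private let distanceThreshold: CGFloat = 120
    private let edgeMargin: CGFloat = 44

    func handlePan(_ gesture: UIPanGestureRecognizer, in view: UIView) {
        let translation = gesture.translation(in: view)
        let location = gesture.location(in: view)
        let draggedFar = hypot(translation.x, translation.y) > distanceThreshold
        let nearEdge = location.y < edgeMargin
            || location.y > view.bounds.maxY - edgeMargin
        if draggedFar || nearEdge {
            enterAugmentedRealityView()
        }
    }

    private func enterAugmentedRealityView() { /* present the camera view */ }
}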
[00285] In some embodiments, while displaying the second representation of the virtual object in the second user interface region (e.g., staging user interface 6010 as shown in Figure 6Z), the device detects (934) a fourth input that meets respective criteria for redisplaying the first user interface region (e.g., a tap, hard press, or touch-hold and drag input at a location on the touch-sensitive surface that corresponds to the second representation of the virtual object or another location on the touch-sensitive surface (e.g., a bottom or edge of the second user interface region), and/or an input at a location on the touch-sensitive surface that corresponds to a control for returning to the first user interface region), and, in response to detecting the fourth input, the device ceases to display the second representation of the virtual object in the second user interface region and the device redisplays the first representation of the virtual object in the first user interface region. For example, as shown in Figures 6Z-6AC, in response to an input by contact 6042 at a location that corresponds to back control 6016 displayed in staging user interface 6010, the device ceases to display the second representation of virtual chair 5020 in the second user interface region (e.g., staging user interface 6010) and the device redisplays the first representation of the virtual chair 5020 in the first user interface region (e.g., messaging user interface 5008). In some embodiments, the first representation of the virtual object is displayed in the first user interface region with the same appearance, location, and/or orientation as those shown before the transition to the staging view and/or the augmented reality view. For example, in Figure 6AC, virtual chair 5020 is displayed in messaging user interface 5008 with the same orientation as virtual chair 5020 displayed in the messaging user interface 5008 in Figure 6A. In some embodiments, the device continuously displays the virtual object on the screen when transitioning back to displaying the virtual object in the first user interface region. For example, in Figures 6Z-6AC, virtual chair 5020 is continuously displayed during the transition from displaying staging user interface 6010 to displaying messaging user interface 5008. Determining whether to redisplay the first representation of the virtual object in the first user interface depending on whether a fourth input detected while displaying the second representation of the virtual object in the second user interface meets criteria for redisplaying the first user interface enables the performance of multiple different types of operations in response to the fourth input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00286] In some embodiments, while displaying the third representation of the virtual object with the representation of the field of view 6036 of the one or more cameras (e.g., as shown in Figure 6U), the device detects (936) a fifth input that meets respective criteria for redisplaying the second user interface region (e.g., a tap, hard press, or touch and drag input at a location on the touch-sensitive surface that corresponds to the third representation of the virtual object or another location on the touch-sensitive surface, and/or an input at a location on the touch-sensitive surface that corresponds to a control for returning to displaying the second user interface region), and, in response to detecting the fifth input, the device ceases to display the third representation of the virtual object and the representation of the field of view of the one or more cameras and redisplays the second representation of the virtual object in the second user interface region. For example, as shown in Figures 6V-6Y, in response to an input by contact 6040 at a location that corresponds to toggle control 6018 displayed in the third user interface that includes the field of view 6036 of the camera(s), the device ceases to display the field of view 6036 of the camera(s) and redisplays the staging user interface 6010. In some embodiments, the second representation of the virtual object is displayed in the second user interface region with the same orientation as that shown in the augmented reality view. In some embodiments, the device continuously displays the virtual object on the screen when transitioning back to displaying the virtual object in the second user interface region. For example, in Figures 6V-6Y, virtual chair 5020 is continuously displayed during the transition from displaying field of view 6036 of the camera(s) to displaying staging user interface 6010. Determining whether to redisplay the second representation of the virtual object in the second user interface, depending on whether a fifth input detected while displaying the third representation of the virtual object with the field of view of the camera(s) meets criteria for redisplaying the second user interface, enables the performance of multiple different types of operations in response to the fifth input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00287] In some embodiments, while displaying the third representation of the virtual object with the representation of the field of view 6036 of the one or more cameras, the device detects (938) a sixth input that meets respective criteria for redisplaying the first user interface region (e.g., messaging user interface 5008), and, in response to detecting the sixth input, the device ceases to display the third representation of the virtual object (e.g., virtual chair 5020) and the representation of the field of view 6036 of the one or more cameras (e.g., as shown in Figure 6U) and the device redisplays the first representation of the virtual object in the first user interface region (e.g., as shown in Figure 6AC). In some embodiments, the sixth input is, e.g., a tap, hard press, or touch and drag input at a location on the touch-sensitive surface that corresponds to the third representation of the virtual object or another location on the touch-sensitive surface, and/or an input at a location on the touch-sensitive surface that corresponds to a control for returning to displaying the first user interface region. In some embodiments, the first representation of the virtual object is displayed in the first user interface region with the same appearance and location as those shown before the transition to the staging view and/or the augmented reality view. In some embodiments, the device continuously displays the virtual object on the screen when transitioning back to displaying the virtual object in the first user interface region. Determining whether to redisplay the first representation of the virtual object in the first user interface, depending on whether a sixth input detected while displaying the third representation of the virtual object with the field of view of the camera(s) meets criteria for redisplaying the first user interface, enables the performance of multiple different types of operations in response to the sixth input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
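Paragraphs [00284]-[00287] together describe movement among three display contexts. Purely as an illustrative summary (the enum cases and action names are inventions of this sketch, not terminology from the description), the transitions can be written as a small state machine:

// Illustrative state machine for the three display contexts: the first
// user interface (e.g., a messaging view), the staging view, and the
// augmented reality view including the camera(s) field of view.
enum ObjectPresentation {
    case firstUserInterface   // 2D representation in the host app
    case staging              // free-rotate/scale 3D representation
    case augmentedReality     // anchored to a plane in the camera feed
}

enum UserAction { case backControl, toggleControl, dragToAR, stageObject }

func nextState(from state: ObjectPresentation,
               on action: UserAction) -> ObjectPresentation {
    switch (state, action) {
    case (.firstUserInterface, .stageObject): return .staging
    case (.staging, .dragToAR):               return .augmentedReality
    case (.staging, .backControl):            return .firstUserInterface
    case (.augmentedReality, .toggleControl): return .staging
    case (.augmentedReality, .backControl):   return .firstUserInterface
    default:                                  return state
    }
}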
[00288] In some embodiments, in response to detecting the first input by the first contact and in accordance with a determination that the input by the first contact meets the first criteria, the device continuously displays (940) the virtual object when transitioning from displaying the first user interface region (e.g., messaging user interface 5008) to displaying the second user interface region (e.g., staging user interface 6010), including displaying an animation (e.g., movement, rotation about one or more axes, and/or scaling) of the first representation of the virtual object in the first user interface region transforming into the second representation of the virtual object in the second user interface region. For example, in Figures 6E-6I, virtual chair 5020 is continuously displayed and animated (e.g., the orientation of virtual chair 5020 changes) during the transition from displaying messaging user interface 5008 to displaying staging user interface 6010. In some embodiments, the virtual object has a defined orientation, position, and/or distance relative to a plane in the field of view of the camera(s) (e.g., that is defined based on the shape and orientation of the first representation of the virtual object as shown in the first user interface region) and, when transitioning to the second user interface region, the first representation of the virtual object moves, resizes, and/or reorients to the second representation of the virtual object at a new location on the display (e.g., the center of a virtual staging plane in the second user interface region), and during the movement or at the end of the movement, the virtual object is reoriented such that the virtual object is at a predetermined angle relative to the predefined virtual staging plane, which is defined independently of the physical environment surrounding the device. Displaying an animation as the first representation of the virtual object in the first user interface transforms into the second representation of the virtual object in the second user interface provides the user with feedback to indicate that the first input meets the first criteria. Providing improved feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
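A minimal sketch of such a continuity animation, using a 2D affine transform as a stand-in for the full 3D movement, rotation, and scaling described above; the duration, scale factor, and angle are illustrative placeholders.

import UIKit

// Sketch: the first representation moves, scales, and reorients into
// the second representation rather than being swapped out abruptly.
func animateIntoStaging(_ objectView: UIView, stagingCenter: CGPoint) {
    UIView.animate(withDuration: 0.35, delay: 0, options: [.curveEaseInOut]) {
        objectView.center = stagingCenter
        // Settle the object at a predetermined angle relative to the
        // virtual staging plane (values here are placeholders).
        objectView.transform = CGAffineTransform(scaleX: 1.5, y: 1.5)
            .rotated(by: .pi / 12)
    }
}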
[00289] In some embodiments, in response to detecting the second input by the second contact and in accordance with a determination that the second input by the second contact corresponds to the request to display the virtual object in the augmented reality environment, the device continuously displays (942) the virtual object when transitioning from displaying the second user interface region (e.g., staging user interface 6010) to displaying a third user interface region including the field of view 6036 of the one or more cameras, including displaying an animation (e.g., movement, rotation about one or more axes, and/or scaling) of the second representation of the virtual object in the second user interface region transforming into the third representation of the virtual object in the third user interface region including the field of view of the one or more cameras. For example, in Figures 6Q-6U, virtual chair 5020 is continuously displayed and animated (e.g., the position and size of virtual chair 5020 change) during the transition from displaying staging user interface 6010 to displaying the field of view 6036 of the camera(s). In some embodiments, the virtual object is reoriented such that the virtual object is at a predefined orientation, position, and/or distance relative to a field of view plane detected in the field of view of the one or more cameras (e.g., a physical surface, such as a vertical wall or horizontal floor surface, that can support the three-dimensional representation of the user interface object). Displaying an animation as the second representation of the virtual object in the second user interface transforms into the third representation of the virtual object in the third user interface provides the user with feedback to indicate that the second input corresponds to the request to display the virtual object in the augmented reality environment. Providing improved visual feedback to the user enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
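A hedged sketch of the handoff into the camera view: once a supporting plane is detected, the object's node is given a fixed spatial relationship to that plane. The node, the SceneKit setup, and the simplification of ignoring the plane's rotation when applying its center offset are all assumptions of this sketch.

import ARKit
import SceneKit

// Sketch: place the virtual object at a predefined position relative to
// a detected field of view plane (e.g., a horizontal floor surface).
func place(_ objectNode: SCNNode,
           on planeAnchor: ARPlaneAnchor,
           in sceneView: ARSCNView) {
    // Adopt the plane anchor's transform, then offset to the plane's
    // center (ignoring the plane's rotation for brevity).
    objectNode.simdTransform = planeAnchor.transform
    objectNode.simdPosition += planeAnchor.center
    sceneView.scene.rootNode.addChildNode(objectNode)
}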
[00290] It should be understood that the particular order in which the operations in Figures 9A-9D have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000) are also applicable in an analogous manner to method 900 described above with respect to Figures 9A-9D. For example, contacts, inputs, virtual objects, user interface regions, intensity thresholds, fields of view, tactile outputs, movements, and/or animations described above with reference to method 900 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, intensity thresholds, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
[00291] Figures 10A-10D are flow diagrams illustrating method 1000 of displaying an item with a visual indication to indicate that the item corresponds to a virtual three-dimensional object, in accordance with some embodiments. Method 1000 is performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) having a display and a touch-sensitive surface (e.g., a touch-screen display that serves both as the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 1000 are, optionally, combined and/or the order of some operations is, optionally, changed.
[00292] As described below, method 1000 relates to displaying items in first and second user interfaces. Each item is displayed either with a visual indication to indicate that the item corresponds to a virtual three-dimensional object or without the visual indication, depending on whether an item corresponds to a respective virtual three-dimensional object. Providing an indication to the user of whether an item is a virtual three-dimensional object increases the efficiency with which the user is able to perform operations on the first item (e.g., by helping the user to provide appropriate inputs depending on whether the item is or is not a virtual three-dimensional object), thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00293] The device receives (1002) a request to display a first user interface that includes a first item (e.g., an icon, a thumbnail image, an image, an emoji, an attachment, a sticker, an app icon, an avatar, etc.). For example, in some embodiments, the request is an input (e.g., as described with regard to Figure 7A) for opening a user interface (e.g., Internet browser user interface 5060, as illustrated at Figure 7B) for displaying a representation of the first item in a predefined environment associated with the first item. The predefined environment is, optionally, a user interface of an application (e.g., an email application, a messaging application, a browser application, a word processing application, an e-reader application, etc.) or a system user interface (e.g., a lock screen, a notification interface, a suggestion interface, a control panel user interface, a home screen user interface, etc.).
[00294] In response to the request to display the first user interface, the device displays (1004) the first user interface (e.g., Internet browser user interface 5060, as illustrated at Figure 7B) with a representation of the first item. In accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, the device displays the representation of the first item with a visual indication to indicate that the first item corresponds to a first respective virtual three-dimensional object (e.g., an image, such as an icon and/or background panel, displayed at a location that corresponds to the representation of the first item; an outline; and/or text). In accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, the device displays the representation of the first item without the visual indication. For example, in Internet browser user interface 5060, as illustrated at Figure 7B, web object 5068 (including a representation of virtual three-dimensional lamp object 5084) is displayed with a visual indication (virtual object indicator 5080) to indicate that virtual lamp 5084 is a virtual three-dimensional object, and web object 5074 is displayed without a visual object indicator because web object 5074 does not include an item that corresponds to a virtual three-dimensional object.
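The per-item determination above amounts to a simple conditional at render time. A minimal sketch follows; the `Item` model, its `has3DModel` flag, and the badge view are hypothetical stand-ins for whatever metadata marks an item as backed by a three-dimensional model.

import UIKit

// Sketch: items backed by a virtual three-dimensional object get a
// badge (the "virtual object indicator"); other items are rendered
// without it.
struct Item {
    let thumbnail: UIImage
    let has3DModel: Bool   // e.g., a 3D model attachment is available
}

func configure(_ imageView: UIImageView, badge: UIImageView, for item: Item) {
    imageView.image = item.thumbnail
    badge.isHidden = !item.has3DModel   // show indicator only for 3D items
}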
[00295] After displaying the representation of the first item, the device receives (1006) a request (e.g., an input as described with regard to Figures 7H-7L) to display a second user interface (e.g., messaging user interface 5008, as illustrated at Figure 7M) that includes a second item (e.g., an icon, a thumbnail image, an image, an emoji, an attachment, a sticker, an app icon, an avatar, etc.). The second item is distinct from the first item and the second user interface is distinct from the first user interface. For example, in some embodiments, the request is another input for opening a user interface for displaying a representation of the second item in a predefined environment associated with the second item. The predefined environment is, optionally, a user interface of an application other than the application used for showing the first item (e.g., an email application, a messaging application, a browser application, a word processing application, an e-reader application, etc.) or a system user interface other than the system user interface used for showing the first item (e.g., a lock screen, a notification interface, a suggestion interface, a control panel user interface, a home screen user interface, etc.).
[00296] In response to the request to display the second user interface, the device displays (1008) the second user interface (e.g., messaging user interface 5008, as illustrated at Figure 7M) with a representation of the second item. In accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, the device displays the representation of the second item with the visual indication (e.g., the same visual indication that indicates that the first item corresponds to a virtual three-dimensional object) to indicate that the second item corresponds to a second respective virtual three-dimensional object. In accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, the device displays the representation of the second item without the visual indication. For example, in messaging user interface 5008, as illustrated at Figure 7M, virtual three-dimensional chair object 5020 is displayed with a visual indication (virtual object indicator 5022) to indicate that virtual chair 5020 is a virtual three-dimensional object, and emoji 7020 is displayed without a visual object indicator because emoji 7020 does not include an item that corresponds to a virtual three-dimensional object.
[00297] In some embodiments, displaying the representation of the first item (e.g., virtual lamp 5084) with the visual indication (e.g., virtual object indicator 5080) to indicate that the first item corresponds to a first respective virtual three-dimensional object includes (1010): in response to detecting a movement of the device that results in a change from a first device orientation to a second device orientation (e.g., as detected by orientation sensors, such as one or more accelerometers 168 of device 100), displaying movement of the first item (e.g., tilting of the first item and/or movement of the first item relative to the first user interface) that corresponds to the change from the first device orientation to the second device orientation. For example, the first device orientation is an orientation of device 100 as illustrated in Figure 7F1 and the second device orientation is an orientation of device 100 as illustrated in Figure 7G1. In response to the movement illustrated in Figure 7F1 to Figure 7G1, the first item (e.g., virtual lamp 5084) tilts (e.g., as illustrated at Figure 7F2 to Figure 7G2). In some embodiments, if the second object corresponds to a virtual three-dimensional object, the second object also responds to detecting movement of the device in the manner described above (e.g., to indicate that the second object also corresponds to a virtual three-dimensional object).
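A sketch of one way to drive such tilting from the device's motion sensors; the update rate, the gain applied to the roll angle, and the class name are assumptions of this sketch rather than values from the description.

import CoreMotion
import UIKit

// Sketch: as the device rotates, tilt the indicated item by a fraction
// of the device's roll so it reads as three-dimensional.
final class TiltResponder {
    private let motion = CMMotionManager()

    func begin(tilting view: UIView) {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { data, _ in
            guard let attitude = data?.attitude else { return }
            // Map a slice of the device roll onto the item's rotation.
            view.transform = CGAffineTransform(rotationAngle:
                CGFloat(attitude.roll) * 0.1)
        }
    }
}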
[00298] Displaying movement of the first item that corresponds to the change from the first device orientation to the second device orientation provides visual feedback to the user indicating behavior of the virtual three-dimensional object. Providing improved visual feedback to the user enhances the operability of the device (e.g., by allowing the user to view the virtual three-dimensional object from different orientations without needing to provide further input), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00299]
[00299] In some embodiments, displaying the representation of the first item with the visual indication to indicate that the first item corresponds to a first respective virtual three-dimensional object includes (1012): in response to detecting a first input by a first contact (e.g., a swipe input on the first user interface in a first direction, or a touch-hold input on a scroll button on an end of a scroll bar) that scrolls the first user interface while the representation of the first item is displayed in the first user interface: the device translates the representation of the first item on the display in accordance with scrolling of the first user interface (e.g., moving an anchor position of the first item by a distance based on the amount of scrolling made to the first user interface and in a direction opposite of the scrolling (e.g., when the first user interface is dragged upward by a contact moving across the touch-sensitive surface, the representation of the first item moves upward on the display with the first user interface)) and the device rotates the representation of the first item relative to a plane defined by the first user interface (or the display) in accordance with a direction in which the first user interface is scrolled. For example, as illustrated in Figures 7C-7D, in response to detecting an input by contact 7002 that scrolls Internet browser user interface 5060 while a representation of virtual lamp 5084 is displayed in Internet browser user interface 5060, virtual lamp 5084 is translated in accordance with the scrolling of Internet browser user interface 5060 and virtual lamp 5084 is rotated relative to display 112 in accordance with a direction of the path of movement of contact 7002. In some embodiments, in accordance with a determination that the first user interface is dragged upward, the representation of the first item moves upward with the first user interface, and the viewing perspective of the first item as shown on the first user interface changes as if the user is looking at the first item from a different viewing angle (e.g., a lower angle). In some embodiments, in accordance with a determination that the second user interface is dragged upward, the representation of the second item moves upward with the second user interface, and the viewing perspective of the second item as shown on the second user interface changes as if the user is looking at the second item from a different viewing angle (e.g., a lower angle).
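By way of illustration only, the scroll-linked translation and rotation described in paragraph [00299] might be approximated in application code as follows. This is a minimal Swift/UIKit sketch, not the disclosed implementation; the names ObjectParallaxController, tiltFactor, and maxTilt, and the specific tilt constants, are illustrative assumptions.

    import UIKit

    // Sketch of paragraph [00299]: as the surrounding user interface scrolls,
    // the item's representation is translated with the content (it is a
    // subview of the scroll view) and tilted about a horizontal axis in
    // proportion to the scroll direction, as if viewed from a changing angle.
    final class ObjectParallaxController: NSObject, UIScrollViewDelegate {
        let objectView: UIView            // representation of the first item
        var tiltFactor: CGFloat = 0.002   // radians of tilt per point scrolled
        var maxTilt: CGFloat = .pi / 8

        init(objectView: UIView) {
            self.objectView = objectView
            super.init()
        }

        func scrollViewDidScroll(_ scrollView: UIScrollView) {
            // Translation happens automatically with the scrolled content;
            // here we add the direction-dependent rotation relative to the
            // plane defined by the display.
            let tilt = max(-maxTilt, min(maxTilt, scrollView.contentOffset.y * tiltFactor))
            var transform = CATransform3DIdentity
            transform.m34 = -1.0 / 500.0  // perspective so the tilt reads as 3D
            objectView.layer.transform = CATransform3DRotate(transform, tilt, 1, 0, 0)
        }
    }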
[00300] Displaying movement of an item, where the movement corresponds to a change from a first device orientation to a second device orientation, provides visual feedback to the user indicating the change in device orientation. Providing improved visual feedback to the user enhances the operability of the device (e.g., by allowing the user to view the virtual three-dimensional object from different orientations without needing to provide further input), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00301] In some embodiments, while displaying the representation of the first item (e.g., lamp object 5084) with the visual indication (e.g., visual object indicator 5080) in the first user interface (e.g., Internet browser user interface 5060, as illustrated at Figure 7B), the device displays (1014) a representation of a third item, wherein the representation of the third item is displayed without the visual indication in order to indicate that the third item does not correspond to a virtual three-dimensional object (e.g., the third item does not correspond to any three-dimensional object that can be rendered in an augmented reality environment). For example, in Internet browser user interface 5060, as illustrated at Figure 7B, web objects 5074, 5070, and 5076 are displayed without visual object indicators because web objects 5074, 5070, and 5076 do not correspond to virtual three-dimensional objects.
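By way of illustration only, the conditional indicator of paragraphs [00299]-[00301] might be realized as follows. This is a minimal Swift sketch under stated assumptions; DisplayableItem, hasThreeDimensionalModel, and the "cube" glyph are illustrative names that do not appear in the disclosed embodiments.

    import UIKit

    // Sketch of paragraph [00301]: items backed by a virtual 3D object are
    // shown with the indicator; items without an AR-renderable model are not.
    struct DisplayableItem {
        let thumbnail: UIImage
        let hasThreeDimensionalModel: Bool  // true only if a 3D model exists
    }

    final class ItemCell: UICollectionViewCell {
        let imageView = UIImageView()
        let indicatorView = UIImageView(image: UIImage(systemName: "cube"))

        func configure(with item: DisplayableItem) {
            imageView.image = item.thumbnail
            // The indicator is displayed only when the item corresponds to a
            // virtual 3D object, signaling that AR-specific inputs apply.
            indicatorView.isHidden = !item.hasThreeDimensionalModel
        }
    }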
[00302] Displaying, in the first user interface, a first item with a visual indication to indicate that the first item is a virtual three-dimensional object and a third item that is displayed without the visual indication increases the efficiency with which the user is able to perform operations using the first user interface (e.g., by helping the user to provide appropriate inputs depending on whether an item with which the user is interacting is or is not a virtual three-dimensional object), thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00303] In some embodiments, while displaying the representation of the second item (e.g., virtual chair 5020) with the visual indication (e.g., virtual object indicator 5022) in the second user interface (e.g., messaging user interface 5008, as illustrated in Figure 7M), the device displays (1016) a representation of a fourth item (e.g., emoji 7020), wherein the representation of the fourth item is displayed without the visual indication in order to indicate that the fourth item does not correspond to a respective virtual three-dimensional object.
[00304] Displaying, in the second user interface, a second item with a visual indication to indicate that the second item is a virtual three-dimensional object and a fourth item that is displayed without the visual indication increases the efficiency with which the user is able to perform operations using the second user interface (e.g., by helping the user to provide appropriate inputs depending on whether an item with which the user is interacting is or is not a virtual three-dimensional object), thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00305] In some embodiments (1018), the first user interface (e.g., Internet browser user interface 5060, as illustrated at Figure 7B) corresponds to a first application (e.g., an Internet browser application), the second user interface (e.g., messaging user interface 5008, as illustrated in Figure 7M) corresponds to a second application (e.g., a messaging application) that is distinct from the first application, and the representation of the first item (e.g., lamp object 5084) displayed with the visual indication (e.g., virtual object indicator 5080) and the representation of the second item (e.g., virtual chair 5020) displayed with the visual indication (e.g., virtual object indicator 5022) share a predefined set of visual characteristics and/or behavioral characteristics (e.g., use the same indicator icon, have the same texture or rendering style, and/or exhibit the same behavior when invoked by a predefined type of input). For example, the icons for virtual object indicator 5080 and virtual object indicator 5022 include the same symbol.
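By way of illustration only, the system-wide consistency described in paragraph [00305] might be achieved by centralizing the indicator's definition so that every application renders it identically. This is a minimal Swift sketch; SystemObjectIndicator and its constants are illustrative assumptions, not part of the disclosure.

    import UIKit

    // Sketch of paragraph [00305]: a single shared definition of the
    // indicator's visual characteristics, consumed by each application's
    // user interface (browser, messages, mail, files, maps).
    enum SystemObjectIndicator {
        static let symbolName = "cube"             // shared glyph
        static let tintColor = UIColor.systemBlue  // shared tint

        static func makeBadge() -> UIImageView {
            let badge = UIImageView(image: UIImage(systemName: symbolName))
            badge.tintColor = tintColor
            return badge
        }
    }

    // Usage (illustrative): each application asks the shared component for
    // its badge, so the indicators share the predefined characteristics.
    //   let browserBadge   = SystemObjectIndicator.makeBadge()
    //   let messagingBadge = SystemObjectIndicator.makeBadge()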
[00306] Displaying the first item with the visual indication in the first user interface of a first application and displaying the second item with the visual indication in the second user interface of a second application such that the visual indications of the first item and the second item share a predefined set of visual characteristics and/or behavioral characteristics increases the efficiency with which the user is able to perform operations using the second user interface (e.g., by helping the user to provide appropriate inputs depending on whether an item with which the user is interacting is or is not a virtual three-dimensional object), thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00307] In some embodiments, the first user interface is (1020) an Internet browser application user interface (e.g., Internet browser user interface 5060, as illustrated at Figure 7B) and the first item is an element of a web page (e.g., the first item is represented in the web page as an embedded image, a hyperlink, an applet, an emoji, an embedded media object, etc.). For example, the first item is virtual lamp object 5084 of web object 5068.
[00308] Displaying a web page element with a visual indication indicating that the web page element is a virtual three-dimensional object increases the efficiency with which the user is able to perform operations using an Internet browser application (e.g., by helping the user to provide appropriate inputs depending on whether a web page element with which the user is interacting is or is not a virtual three-dimensional object), thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00309] In some embodiments, the first user interface is (1022) an e-mail application user interface (e.g., e-mail user interface 7052, as illustrated in Figure 7P) and the first item is an attachment (e.g., attachment 7060) to an e-mail.
[00310] Displaying an e-mail attachment with a visual indication indicating that the e-mail attachment is a virtual three-dimensional object increases the efficiency with which the user is able to perform operations using an e-mail application user interface (e.g., by helping the user to provide appropriate inputs depending on whether an e-mail attachment with which the user is interacting is or is not a virtual three-dimensional object), thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00311] In some embodiments, the first user interface is (1024) a messaging application user interface (e.g., messaging user interface 5008, as illustrated in Figure 7M) and the first item is an attachment or an element (e.g., virtual chair 5020) in a message (e.g., the first item is an image, a hyperlink, a mini program, an emoji, a media object, etc.).
[00312] Displaying a message attachment or element with a visual indication indicating that the message attachment or element is a virtual three-dimensional object increases the efficiency with which the user is able to perform operations using a messaging user interface (e.g., by helping the user to provide appropriate inputs depending on whether a message attachment or element with which the user is interacting is or is not a virtual three-dimensional object), thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00313] In some embodiments, the first user interface is (1026) a file management application user interface (e.g., file management user interface 7036, as illustrated in Figure 7O) and the first item is a file preview object (e.g., file preview object 7045 in file information region 7046).
[00314] Displaying a file preview object with a visual indication indicating that the file preview object is a virtual three-dimensional object increases the efficiency with which the user is able to perform operations using a file management application user interface (e.g., by helping the user to provide appropriate inputs depending on whether a file preview object with which the user is interacting is or is not a virtual three-dimensional object), thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00315] In some embodiments, the first user interface is (1028) a map application user interface (e.g., map application user interface 7024) and the first item is a representation of a point of interest (e.g., point of interest object 7028) in a map (e.g., a three-dimensional representation of a feature that corresponds to a location on the map (e.g., including three-dimensional representations of terrain and/or structures that correspond to the location on the map) or a control that, when actuated, causes display of a three-dimensional representation of a map).
[00316] Displaying a representation of a point of interest in a map with a visual indication indicating that the representation of the point of interest is a virtual three-dimensional object increases the efficiency with which the user is able to perform operations using a map application user interface (e.g., by helping the user to provide appropriate inputs depending on whether a representation of the point of interest with which the user is interacting is or is not a virtual three-dimensional object), thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00317] In some embodiments, the visual indication that the first item corresponds to a respective virtual three-dimensional object includes (1030) an animation of the first item that occurs without requiring an input directed to the representation of the respective three-dimensional object (e.g., a continuous movement or changing visual effect applied to the first item (e.g., sparkling, shimmering, etc.) over time).
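By way of illustration only, the input-independent "shimmering" effect mentioned in paragraph [00317] might be implemented as follows. This is a minimal Swift sketch; the gradient-based technique and the specific timing values are illustrative assumptions, since the disclosure only requires some continuously changing visual effect.

    import UIKit

    // Sketch of paragraph [00317]: a highlight sweeps across the item's
    // representation indefinitely, starting as soon as the item is displayed
    // and requiring no input directed at it.
    func addShimmer(to view: UIView) {
        let gradient = CAGradientLayer()
        gradient.frame = view.bounds
        gradient.colors = [UIColor.clear.cgColor,
                           UIColor.white.withAlphaComponent(0.6).cgColor,
                           UIColor.clear.cgColor]
        gradient.startPoint = CGPoint(x: 0, y: 0.5)
        gradient.endPoint = CGPoint(x: 1, y: 0.5)
        gradient.locations = [0.0, 0.5, 1.0]
        view.layer.addSublayer(gradient)

        let sweep = CABasicAnimation(keyPath: "locations")
        sweep.fromValue = [-0.5, -0.25, 0.0]
        sweep.toValue = [1.0, 1.25, 1.5]
        sweep.duration = 1.8
        sweep.repeatCount = .infinity
        gradient.add(sweep, forKey: "shimmer")
    }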
[00318] Displaying an animation of the first item that occurs without input directed to the representation of the respective three-dimensional object enhances the operability of the device (e.g., by reducing the number of inputs needed for a user to view three-dimensional aspects of the first item), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00319] In some embodiments, while displaying the representation of the second item (e.g., virtual chair 5020) with the visual indication (e.g., virtual object indicator 5022) to indicate that the second item corresponds to a respective virtual three-dimensional object, the device detects (1032) a second input by a second contact at a location on the touch-sensitive surface that corresponds to the representation of the second item (e.g., an input as described with regard to Figures 5C-5F), and, in response to detecting the second input by the second contact and in accordance with a determination that the second input by the second contact meets first (e.g., AR-trigger) criteria, the device displays a third user interface region on the display, including replacing display of at least a portion of the second user interface (e.g., messaging user interface 5008) with a representation of a field of view 5036 of the one or more cameras (e.g., described with regard to Figures 5F-5I) and continuously displaying the second virtual three-dimensional object while switching from displaying the second user interface to displaying the third user interface region (e.g., as described in greater detail herein with reference to method 800). In some embodiments, the device displays an animation as the representation of the virtual object is continuously displayed while the portion of the second user interface is replaced with the representation of the field of view of the one or more cameras (e.g., as described in greater detail herein with reference to operation 834).
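By way of illustration only, the AR-trigger behavior of paragraph [00319] might look like the following in application code. This is a minimal Swift/ARKit sketch under stated assumptions: a long press stands in for the disclosed AR-trigger criteria, and ItemViewController, objectView, and the 0.5-second threshold are illustrative.

    import UIKit
    import ARKit

    // Sketch of paragraph [00319]: when an input on the item's representation
    // meets the trigger criteria, part of the user interface is replaced with
    // the camera's field of view while the object stays continuously visible.
    final class ItemViewController: UIViewController {
        let objectView = UIImageView()  // 2D representation of the virtual object
        let arView = ARSCNView()        // field of view of the one or more cameras

        override func viewDidLoad() {
            super.viewDidLoad()
            view.addSubview(objectView)
            let press = UILongPressGestureRecognizer(target: self,
                                                     action: #selector(handlePress(_:)))
            press.minimumPressDuration = 0.5  // illustrative trigger criterion
            objectView.isUserInteractionEnabled = true
            objectView.addGestureRecognizer(press)
        }

        @objc private func handlePress(_ gesture: UILongPressGestureRecognizer) {
            guard gesture.state == .began else { return }
            // Insert the camera feed below the object view so the object
            // remains displayed throughout the transition.
            arView.frame = view.bounds
            view.insertSubview(arView, belowSubview: objectView)
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = [.horizontal]
            arView.session.run(configuration)
        }
    }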
[00320] Using the first criteria to determine whether to display the third user interface region enables the performance of multiple different types of operations in response to the second input. Enabling the performance of multiple different types of operations in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00321] In some embodiments (e.g., as described in greater detail herein with reference to method 900), while displaying the second item (e.g., virtual chair 5020) with the visual indication (e.g., virtual object indicator 5022) to indicate that the second item corresponds to the respective virtual three-dimensional object, the device detects (1034) a third input by a third contact at a location on the touch-sensitive surface that corresponds to the representation of the second item (e.g., an input as described with regard to Figures 6E-6I), and, in response to detecting the third input by the third contact and in accordance with a determination that the third input by the third contact meets first (e.g., staging-trigger) criteria, the device displays the second virtual three-dimensional object in a fourth user interface that is different from the second user interface (e.g., a staging user interface 6010 as described in greater detail with reference to method 900). In some embodiments, while displaying the second virtual three-dimensional object in the fourth user interface (e.g., staging user interface 6010, as illustrated at Figure 6I), the device detects a fourth input and, in response to detecting the fourth input: in accordance with a determination that the fourth input corresponds to a request to manipulate the second virtual three-dimensional object in the fourth user interface, the device changes a display property of the second virtual three-dimensional object within the fourth user interface based on the fourth input (e.g., as described with regard to Figures 6J-6M and/or as described with regard to Figures 6N-6P), and, in accordance with a determination that the fourth input corresponds to a request to display the second virtual object in an augmented reality environment (e.g., a tap input, a press input, or a touch-hold or press input followed by a drag input, at or from a location on the touch-sensitive surface that corresponds to the representation of the virtual object in the second user interface region), the device displays the second virtual three-dimensional object with a representation of a field of view of the one or more cameras (e.g., as described with regard to Figures 6Q-6U).
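By way of illustration only, the two-way branching of paragraph [00321] might be dispatched as follows. This is a minimal Swift sketch; StagingInput, StagingViewController, objectYaw, and the drag-to-rotation factor are illustrative assumptions standing in for the disclosed staging-trigger criteria.

    import UIKit

    // Sketch of paragraph [00321]: in the staging user interface, one input
    // manipulates a display property of the object (here, its yaw), while
    // another requests display in the AR environment.
    enum StagingInput {
        case manipulate(dragDelta: CGPoint)
        case requestAugmentedReality
    }

    final class StagingViewController: UIViewController {
        var objectYaw: CGFloat = 0  // display property changed while staging

        func handle(_ input: StagingInput) {
            switch input {
            case .manipulate(let dragDelta):
                // Request to manipulate the object within the staging
                // interface: change a display property based on the input.
                objectYaw += dragDelta.x * 0.01
            case .requestAugmentedReality:
                // Request to display the object in an AR environment:
                // present the interface that includes the cameras' field of
                // view (see the sketch following paragraph [00319]).
                presentCameraUserInterface()
            }
        }

        func presentCameraUserInterface() { /* transition to camera view */ }
    }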
[00322] While displaying the second three-dimensional object in a fourth user interface (e.g., a staging user interface 6010), in response to the fourth input, the device either changes a display property of the second three-dimensional object based on the fourth input or displays the second three-dimensional object with a representation of a field of view of one or more cameras of the device. Enabling the performance of multiple different types of operations in response to an input (e.g., by changing a display property of the second three-dimensional object or displaying the second three-dimensional object with a representation of a field of view of one or more cameras of the device) increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00323] It should be understood that the particular order in which the operations in Figures 10A-10D have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000) are also applicable in an analogous manner to method 1000 described above with respect to Figures 10A-10D. For example, the contacts, inputs, virtual objects, user interfaces, user interface regions, fields of view, movements, and/or animations described above with reference to method 1000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interfaces, user interface regions, fields of view, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
[00324] Figures 11A-11V illustrate example user interfaces for displaying a virtual object with different visual properties depending on whether object-placement criteria are met. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.
[00325] Figures 11A-11E illustrate input to display a virtual object in a staging view. For example, the input is detected while a two-dimensional (e.g., thumbnail) representation of a three-dimensional object is displayed in a user interface (e.g., an e-mail user interface 7052, a file management user interface 7036, a map user interface 7022, a messaging user interface 5008, an Internet browser user interface 5060, or a third-party application user interface).
[00326] In Figure 11A, an Internet browser user interface 5060 includes a two-dimensional representation of three-dimensional virtual object 11002 (a chair). An input (e.g., a tap input) by contact 11004 is detected at a location that corresponds to the virtual object 11002. In response to the tap input, display of the Internet browser user interface 5060 is replaced by display of a staging user interface 6010.
[00327] Figures 11B-11E illustrate a transition that occurs as Internet browser user interface 5060 is replaced by display of a staging user interface 6010. In some embodiments, virtual object 11002 gradually fades into view and/or controls of staging user interface 6010 (e.g., back control 6016, toggle control 6018, and/or share control 6020) gradually fade into view during the transition. For example, controls of staging user interface 6010 fade into view after virtual object 11002 fades into view (e.g., to delay display of the controls during a period of time required for a three-dimensional representation of virtual object 11002 to be rendered on the display). In some embodiments, the "fading in" of virtual object 11002 includes displaying a low-resolution, two-dimensional, and/or holographic version of virtual object 11002 followed by displaying the final three-dimensional representation of virtual object 11002. Figures 11B-11D illustrate gradual fading-in of virtual object 11002. In Figure 11D, a shadow 11006 of virtual object 11002 is displayed. Figures 11D-11E illustrate gradual fading-in of controls 6016, 6018, and 6020.
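By way of illustration only, the staged fade-in described in paragraph [00327] might be sequenced as follows. This is a minimal Swift/UIKit sketch; the durations and the runStagingTransition name are illustrative assumptions.

    import UIKit

    // Sketch of paragraph [00327]: the object's representation fades in
    // first; the staging controls fade in only afterwards (e.g., to cover
    // the time needed to render the 3D model).
    func runStagingTransition(objectView: UIView, controls: [UIView]) {
        objectView.alpha = 0
        controls.forEach { $0.alpha = 0 }

        UIView.animate(withDuration: 0.35, animations: {
            objectView.alpha = 1  // fade the object in first
        }, completion: { _ in
            UIView.animate(withDuration: 0.25) {
                controls.forEach { $0.alpha = 1 }  // then fade in the controls
            }
        })
    }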
[00328] Figures 11F-11G illustrate an input that causes the three-dimensional representation of virtual object 11002 to be displayed in a user interface that includes field of view 6036 of one or more cameras of device 100. In Figure 11F, an input by a contact 11008 is detected at a location that corresponds to toggle control 6018. In response to the input, display of staging user interface 6010 is replaced by display of the user interface that includes field of view 6036 of the camera(s), as shown in Figure 11G.
[00329] As illustrated in Figures 11G-11H, when field of view 6036 of the camera(s) is initially displayed, a translucent representation of a virtual object may be displayed (e.g., when a plane that corresponds to the virtual object has not been detected in field of view 6036 of the camera(s)).
[00330] Figures 11G-11H illustrate a translucent representation of virtual object 11002 displayed in the user interface that includes field of view 6036 of the camera(s). The translucent representation of virtual object 11002 is displayed at a fixed position relative to display 112. For example, from Figure 11G to Figure 11H, as device 100 is moved relative to physical environment 5002 (as indicated by, e.g., the changed position of table 5004 in field of view 6036 of the camera(s)), virtual object 11002 remains at a fixed position relative to display 112.
[00331] In some embodiments, in accordance with a determination that a plane that corresponds to a virtual object has been detected in field of view 6036 of the camera(s), the virtual object is placed on the detected plane.
[00332] In Figure 11I, a plane that corresponds to virtual object 11002 has been detected in field of view 6036 of the camera(s) and virtual object 11002 is placed on the detected plane. The device has generated a tactile output, as illustrated at 11010 (e.g., to indicate that at least one plane (e.g., a floor surface 5038) has been detected in the field of view 6036 of the camera(s)). When the virtual object 11002 is placed at a position relative to a plane detected in field of view 6036 of the camera(s), virtual object 11002 remains at a fixed position relative to physical environment 5002 captured by the one or more cameras. From Figure 11I to Figure 11J, as device 100 is moved relative to physical environment 5002 (as indicated by, e.g., the changed position of table 5004 in displayed field of view 6036 of the camera(s)), virtual object 11002 remains at a fixed position relative to the physical environment 5002.
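By way of illustration only, the behavior of paragraphs [00329]-[00332] (translucent and screen-fixed until a plane is found, then anchored to the physical environment with a tactile output) might be sketched as follows in Swift/ARKit. PlacementController and objectNode are illustrative names; the opacity values are assumptions.

    import ARKit
    import UIKit

    // Sketch of paragraphs [00329]-[00332]: the object is translucent until a
    // horizontal plane is detected; it is then anchored to the plane (fixed
    // relative to the physical environment) and a tactile output is generated.
    final class PlacementController: NSObject, ARSCNViewDelegate {
        let arView: ARSCNView
        let objectNode: SCNNode
        private let haptics = UIImpactFeedbackGenerator(style: .medium)

        init(arView: ARSCNView, objectNode: SCNNode) {
            self.arView = arView
            self.objectNode = objectNode
            super.init()
            arView.delegate = self
            objectNode.opacity = 0.5  // translucent: no plane detected yet
        }

        func renderer(_ renderer: SCNSceneRenderer,
                      didAdd node: SCNNode, for anchor: ARAnchor) {
            guard let plane = anchor as? ARPlaneAnchor else { return }
            DispatchQueue.main.async {
                // Attach the object to the plane's node: from here on its
                // position is fixed relative to the physical environment.
                self.objectNode.simdPosition = simd_float3(plane.center.x, 0, plane.center.z)
                node.addChildNode(self.objectNode)
                self.objectNode.opacity = 1.0
                self.haptics.impactOccurred()  // tactile output on detection
            }
        }
    }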
[00333] In some embodiments, while field of view 6036 of the camera(s) is displayed, controls (e.g., back control 6016, toggle control 6018, and/or share control 6020) cease to be displayed (e.g., in accordance with a determination that a period of time has passed during which no input has been received). In Figures 11J-11L, controls 6016, 6018, and 6020 gradually fade out (e.g., as shown in Figure 11K), increasing the portion of display 112 in which field of view 6036 of the camera(s) is displayed (e.g., as shown in Figure 11L).
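By way of illustration only, the inactivity fade-out of paragraph [00333] might be implemented with a simple timer, as in the following Swift sketch. ControlFader and the 3-second timeout are illustrative assumptions.

    import UIKit

    // Sketch of paragraph [00333]: after a period with no input, the on-screen
    // controls fade out, giving more of the display to the camera's field of
    // view; any subsequent input re-displays them.
    final class ControlFader {
        let controls: [UIView]
        private var timer: Timer?

        init(controls: [UIView]) { self.controls = controls }

        func inputReceived() {
            // Re-display controls on any input and restart the inactivity timer.
            controls.forEach { $0.alpha = 1 }
            timer?.invalidate()
            timer = Timer.scheduledTimer(withTimeInterval: 3.0, repeats: false) { _ in
                UIView.animate(withDuration: 0.5) {
                    self.controls.forEach { $0.alpha = 0 }
                }
            }
        }
    }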
[00334] Figures 11M-11S illustrate input for manipulating virtual object 11002 when it is displayed in the user interface that includes field of view 6036 of the camera(s).
[00335] In Figures 11M-11N, an input (e.g., a de-pinch gesture) by contacts 11012 and 11014 for changing the simulated physical size of virtual object 11002 is detected. In response to detection of an input, controls 6016, 6018, and 6020 are re-displayed. As contact 11012 moves along a path indicated by arrow 11016 and contact 11014 moves along a path indicated by arrow 11018, the size of virtual object 11002 increases.
[00336] In Figures 11N-11P, an input (e.g., a pinch gesture) by contacts 11012 and 11014 for changing the simulated physical size of virtual object 11002 is detected. As contact 11012 moves along a path indicated by arrow 11020 and contact 11014 moves along a path indicated by arrow 11022, the size of virtual object 11002 decreases (as shown in Figures 11N-11O and 11O-11P). As illustrated in Figure 11O, when the size of virtual object 11002 is adjusted to its original size relative to physical environment 5002 (e.g., the size of virtual object 11002 when initially placed on the detected plane in the physical environment 5002, as shown in Figure 11I), a tactile output (as illustrated at 11024) occurs (e.g., to provide feedback indicating that the virtual object 11002 has returned to its original size). In Figure 11Q, contacts 11012 and 11014 have lifted off touch-screen display 112.
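By way of illustration only, pinch-driven scaling with a tactile output as the object passes back through its original size, as described in paragraphs [00335]-[00336], might be sketched as follows in Swift/SceneKit. ScaleController, baseScale, and the crossing test are illustrative assumptions.

    import UIKit
    import SceneKit

    // Sketch of paragraphs [00335]-[00336]: pinch/de-pinch gestures change the
    // object's simulated physical size; a tactile output is generated when the
    // scale crosses back through the original (1.0) placement size.
    final class ScaleController: NSObject {
        let objectNode: SCNNode
        var baseScale: Float = 1.0
        private let haptics = UIImpactFeedbackGenerator(style: .light)

        init(objectNode: SCNNode) {
            self.objectNode = objectNode
            super.init()
        }

        @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
            let previous = objectNode.scale.x
            let newScale = baseScale * Float(gesture.scale)
            objectNode.scale = SCNVector3(newScale, newScale, newScale)
            // Tactile output when the simulated size passes through the
            // original scale at which the object was first placed.
            if (previous - 1.0) * (newScale - 1.0) <= 0 {
                haptics.impactOccurred()
            }
            if gesture.state == .ended { baseScale = newScale }
        }
    }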
[00337] In Figure 11R, an input (e.g., a double tap input) for returning virtual object 11002 to its original size relative to physical environment 5002 is detected. The input is detected at a location that corresponds to virtual object 11002, as indicated by contact 11026. In response to the input, the virtual object 11002 is adjusted from the reduced size, illustrated in Figure 11R, to the original size of virtual object 11002, as indicated in Figure 11S. As illustrated in Figure 11S, when the size of virtual object 11002 is adjusted to its original size relative to physical environment 5002, a tactile output (as illustrated at 11028) occurs (e.g., to provide feedback indicating that the virtual object 11002 has returned to its original size).
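By way of illustration only, the double-tap reset of paragraph [00337] could extend the ScaleController sketch above (same file assumed); the animation duration and feedback style are illustrative assumptions.

    import UIKit
    import SceneKit

    // Sketch of paragraph [00337]: a double tap on the object animates it back
    // to its original size and generates a tactile output when restored.
    extension ScaleController {
        @objc func handleDoubleTap(_ gesture: UITapGestureRecognizer) {
            guard gesture.state == .ended else { return }
            SCNTransaction.begin()
            SCNTransaction.animationDuration = 0.3
            objectNode.scale = SCNVector3(1, 1, 1)  // restore original size
            baseScale = 1.0
            SCNTransaction.completionBlock = {
                // Feedback: the object has returned to its original size.
                UIImpactFeedbackGenerator(style: .light).impactOccurred()
            }
            SCNTransaction.commit()
        }
    }

    // Usage (illustrative): attach with a recognizer configured for double
    // taps, i.e. a UITapGestureRecognizer with numberOfTapsRequired = 2.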
[00338] In Figure 11T, an input by a contact 11030 is detected at a location that corresponds to toggle control 6018. In response to the input, display of the user interface that includes field of view 6036 of the camera(s) is replaced by staging user interface 6010, as shown in Figure 11U.
[00339] In Figure 11U, an input by a contact 11032 is detected at a location that corresponds to back control 6016. In response to the input, display of staging user interface 6010 is replaced by Internet browser user interface 5060, as shown in Figure 11V.
[00340] Figures 12A-12L illustrate example user interfaces for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.
[00341] In accordance with some embodiments, when a request is received to display a virtual object in a user interface that includes a field of view of one or more cameras, but additional data is needed for calibration of the device, a calibration user interface object is displayed.
[00342] Figure 12A illustrates input that requests to display a virtual object 11002 in a user interface that includes a field of view 6036 of one or more cameras. An input by a contact 12002 is detected at a location that corresponds to toggle control 6018. In response to the input, display of staging user interface 6010 is replaced by display of the user interface that includes field of view 6036 of the camera(s), as shown in Figure 12B. A translucent representation of virtual object 11002 is displayed in the user interface that includes field of view 6036 of the camera(s). While calibration is needed (e.g., because a plane that corresponds to virtual object 11002 has not been detected in the field of view 6036 of the camera(s)), the field of view 6036 of the camera(s) is blurred (e.g., to emphasize behavior of prompts and/or a calibration object, as described below).
[00343] Figures 12B-12D illustrate an animated image and text that prompt the user to move the device (e.g., displayed in accordance with a determination that calibration is needed). The animated image includes a representation 12004 of the device 100, arrows 12006 and 12008 that indicate that side-to-side movement of device 100 is needed, and a representation 12010 of a plane (e.g., to indicate that device 100 must move relative to a plane in order to detect a plane that corresponds to virtual object 11002). Text prompt 12012 provides information regarding movement of device 100 needed for calibration. In Figures 12B-12C and 12C-12D, representation 12004 of device 100 and arrows 12006 and 12008 are adjusted relative to representation 12010 of the plane to provide an indication of movement of device 100 needed for calibration. From Figure 12C to Figure 12D, device 100 is moved relative to physical environment 5002 (as indicated by, e.g., the changed position of table 5004 in field of view 6036 of the camera(s)). As a result of detection of the movement of device 100, a calibration user interface object 12014 (an outline of a cube) is displayed, as indicated in Figure 12E-1.
[00344] Figures 12E-1 to 12I-1 illustrate behavior of calibration user interface object 12014 that corresponds to movement of device 100 relative to physical environment 5002, as illustrated in Figures 12E-2 to 12I-2, respectively. Calibration user interface object 12014 is animated (e.g., the outline of the cube rotates) in response to movement (e.g., lateral movement) of device 100 (e.g., to provide feedback to the user regarding movement that is helpful for calibration). In Figure 12E-1, calibration user interface object 12014 is shown with a first angle of rotation in the user interface that includes field of view 6036 of the camera(s) of device 100. In Figure 12E-2, device 100 is shown, held by the user's hands 5006, at a first position relative to physical environment 5002. From Figure 12E-2 to Figure 12F-2, the device 100 has moved laterally (to the right) relative to physical environment 5002. As a result of the movement, the field of view 6036 of the camera(s) as displayed by device 100 is updated and calibration user interface object 12014 has rotated (relative to its position in Figure 12E-1), as shown in Figure 12F-1. From Figure 12F-2 to Figure 12G-2, the device 100 has continued its rightward movement relative to physical environment 5002. As a result of the movement, the field of view 6036 of the camera(s) as displayed by device 100 is again updated and calibration user interface object 12014 is further rotated, as shown in Figure 12G-1. From Figure 12G-2 to Figure 12H-2, the device 100 has moved upward relative to physical environment 5002. As a result of the movement, the field of view 6036 of the camera(s) as displayed by device 100 is updated. As illustrated in Figure 12G-1 to Figure 12H-1, calibration user interface object 12014 does not rotate in response to the upward movement of the device illustrated in Figure 12G-2 to Figure 12H-2 (e.g., to provide an indication to the user that vertical movement of the device is not contributing to the calibration). From Figure 12H-2 to Figure 12I-2, the device 100 has moved further rightward relative to physical environment 5002. As a result of the movement, the field of view 6036 of the camera(s) as displayed by device 100 is again updated and calibration user interface object 12014 is rotated, as shown in Figure 12I-1.
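By way of illustration only, this direction-selective calibration animation can be sketched as follows: lateral device movement advances the cube's rotation, while vertical movement is deliberately ignored. This is a minimal Swift sketch; the CalibrationCube type and the rotation gain are assumptions, not values from the specification.

```swift
import simd

// Sketch of the calibration-object behavior above: the cube outline
// (object 12014) rotates in proportion to lateral device movement and
// ignores vertical movement, signaling which movement helps calibration.
struct CalibrationCube {
    /// Current rotation of the cube outline, in radians.
    private(set) var angle: Float = 0
    /// Radians of cube rotation per meter of lateral device movement (assumed gain).
    let radiansPerMeter: Float = .pi / 2

    /// Feeds one frame of device movement (in world coordinates) into the
    /// calibration animation. Only the horizontal (x/z) component rotates
    /// the cube; vertical (y) movement produces no rotation.
    mutating func update(deviceDisplacement d: SIMD3<Float>) {
        let lateral = simd_length(SIMD2<Float>(d.x, d.z))
        angle += lateral * radiansPerMeter
    }
}
```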
[00345] In Figure 12J, the movement of device 100 (e.g., as illustrated in Figures 12E-12I) has satisfied the required calibration (e.g., and a plane that corresponds to virtual object 11002 has been detected in the field of view 6036 of the camera(s)). Virtual object 11002 is placed on the detected plane and the field of view 6036 of the camera(s) ceases to be blurred. Tactile output generators output a tactile output (as illustrated at 12016) to indicate that the plane (e.g., a floor surface 5038) has been detected in the field of view 6036 of the camera(s). The floor surface 5038 is highlighted to provide an indication of the plane that has been detected.
[00346] When the virtual object 11002 has been placed at a position relative to a plane detected in field of view 6036 of the camera(s), virtual object 11002 remains at a fixed position relative to physical environment 5002 captured by the one or more cameras. As device 100 is moved relative to physical environment 5002 (as shown in Figure 12K-2 to Figure 12L-2), virtual object 11002 remains at a fixed position relative to the physical environment 5002 (as shown in Figure 12K-1 to 12L-1).
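By way of illustration only, this world-locked placement can be sketched by storing a fixed world-space position and recomputing only the view transform as the device moves. The Swift below is a minimal sketch under assumed matrix conventions; the type and method names are hypothetical.

```swift
import simd

// Sketch of the world-locked placement above: once placed on a detected
// plane, the virtual object stores a fixed world-space position; device
// movement changes only the camera's view matrix, so the object appears
// stationary relative to the physical environment.
struct AnchoredObject {
    /// Fixed position in world coordinates, set once at placement time.
    let worldPosition: SIMD4<Float>

    /// Recomputed every frame from the camera's current pose; the world
    /// position itself is never modified by device movement.
    func positionInCameraSpace(viewMatrix: simd_float4x4) -> SIMD4<Float> {
        viewMatrix * worldPosition
    }
}
```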
[00347] Figures 13A-13M illustrate example user interfaces for constraining rotation of a virtual object about an axis. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.
[00348] In Figure 13A, virtual object 11002 is shown in staging user interface 6010. An x-axis, y-axis, and z-axis are shown relative to the virtual object 11002.
[00349] Figures 13B-13C illustrate input to rotate virtual object 11002 about the y-axis indicated in Figure 13A. In Figure 13B, an input by contact 13002 is detected at a location that corresponds to virtual object 11002. The input moves by a distance d1 along a path indicated by arrow 13004. As the input moves along the path, the virtual object 11002 rotates about the y-axis (e.g., by 35 degrees) to a position indicated in Figure 13C. In the staging user interface 6010, shadow 13006 that corresponds to virtual object 11002 is displayed. From Figure 13B to Figure 13C, shadow 13006 changes in accordance with the changed position of virtual object 11002.
[00350] After contact 13002 lifts off of touch screen 112, virtual object 11002 continues rotating, as shown in Figures 13C-13D (e.g., in accordance with the "momentum" imparted by the movement of contact 13002, to provide the impression that the virtual object 11002 behaves like a physical object).
[00351] Figures 13E-13F illustrate input to rotate the virtual object 11002 about the x-axis indicated in Figure 13A. In Figure 13E, an input by contact 13008 is detected at a location that corresponds to virtual object 11002. The input moves by a distance d1 along a path indicated by arrow 13010. As the input moves along the path, the virtual object 11002 rotates about the x-axis (e.g., by five degrees) to a position indicated in Figure 13F. Although contact 13008 moves by the same distance d1 along the x-axis in Figures 13E-13F that contact 13002 moved from 13B-13C, the angle of rotation of virtual object 11002 about the x-axis in Figures 13E-13F is less than the angle of rotation of virtual object 11002 about the y-axis in Figures 13B-13C.
[00352] Figures 13F-13G illustrate further input to rotate the virtual object 11002 about the x-axis indicated in Figure 13A. In Figure 13F, contact 13008 continues its movement, moving by a distance d2 (greater than distance d1) along a path indicated by arrow 13012. As the input moves along the path, the virtual object 11002 rotates about the x-axis (by 25 degrees) to a position indicated in Figure 13G. As illustrated in Figures 13E-13G, movement of contact 13008 by a distance d1 + d2 causes virtual object 11002 to rotate 30 degrees about the x-axis, whereas in Figures 13B-13C, movement of contact 13002 by a distance d1 causes virtual object 11002 to rotate 35 degrees about the y-axis.
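By way of illustration only, the axis-dependent sensitivity implied by these figures (distance d1 yields 35 degrees about the y-axis, while the larger distance d1 + d2 yields only 30 degrees about the x-axis) can be sketched as a per-axis gain. The Swift below is illustrative; the gain values are back-computed assumptions, not constants from the specification.

```swift
// Sketch of axis-dependent rotation sensitivity: the same drag distance
// produces more rotation about the y-axis than about the x-axis.
enum RotationAxis {
    case x, y

    /// Degrees of object rotation per point of drag distance (assumed gains).
    var degreesPerPoint: Float {
        switch self {
        case .y: return 0.35   // e.g., a 100-point drag -> 35 degrees
        case .x: return 0.10   // the same drag rotates the object less
        }
    }
}

func rotationAngle(dragDistance: Float, about axis: RotationAxis) -> Float {
    dragDistance * axis.degreesPerPoint
}
```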
[00353] After contact 13008 lifts off of touch screen 112, virtual object 11002 rotates in a direction opposite to the direction of rotation caused by the movement of contact 13008, as shown in Figures 13G-13H (e.g., to indicate that movement of contact 13008 caused an amount of rotation of virtual object 11002 that reached beyond a rotation limit).
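By way of illustration only, this limit-and-snap-back behavior can be sketched as allowing overshoot past the rotation limit while the contact is down and clamping to the limit on liftoff. A minimal Swift sketch; the limit value and all names are assumptions (the specification does not give a numeric limit).

```swift
// Sketch of the rotation-limit behavior above: overshoot is permitted
// during the gesture so it stays responsive; on liftoff the object
// rotates back so it settles at the limit (cf. Figures 13G-13H).
struct ConstrainedRotation {
    var angle: Float = 0   // degrees about the constrained axis
    let limit: Float = 25  // assumed rotation limit

    /// Applies a rotation delta while the contact is down; overshoot past
    /// the limit is allowed.
    mutating func apply(delta: Float) {
        angle += delta
    }

    /// On liftoff, snaps back to the limit if the gesture overshot it,
    /// producing the reverse rotation shown in the figures.
    mutating func settleAfterLiftoff() {
        angle = min(max(angle, -limit), limit)
    }
}
```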
[00354] In Figures 13G-13I, shadow 13006 is not shown (e.g., because virtual object 11002 does not cast a shadow when the object is viewed from below).
[00355] In Figure 13I, an input (e.g., a double tap input) is detected for returning virtual object 11002 to a perspective with which it was originally displayed (e.g., as indicated in Figure 13A). The input occurs at a location that corresponds to virtual object 11002, as indicated by contact 13014. In response to the input, virtual object 11002 is rotated about the x-axis (to reverse the rotation that occurred from Figures 13E-13H) and about the y-axis (to reverse the rotation that occurred from Figures 13B-13D). In Figure 13J, the input by contact 13014 has caused virtual object 11002 to return to the originally displayed perspective.
[00356] In some embodiments, input for adjusting the size of virtual object 11002 is received while staging user interface 6010 is displayed. For example, an input to adjust the size of virtual object 11002 is a de-pinch gesture (e.g., as described with regard to Figures 6N-6O) to increase the size of virtual object 11002 or a pinch gesture to decrease the size of virtual object 11002.
[00357] In Figure 13J, an input is received to replace display of staging user interface 6010 by display of a user interface that includes field of view 6036 of the camera(s). An input by contact 13016 is detected at a location that corresponds to toggle control 6018. In response to the input, display of staging user interface 6010 is replaced by a user interface that includes field of view 6036 of the camera(s), as shown in Figure 13K.
[00358] In Figure 13K, virtual object 11002 is displayed in a user interface that includes field of view 6036 of the camera(s). A tactile output occurs (as illustrated at 13018) to indicate that a plane that corresponds to virtual object 11002 has been detected in field of view 6036 of the camera(s). The angle of rotation of virtual object 11002 in the user interface that includes field of view 6036 of the camera(s) corresponds to the angle of rotation of virtual object 11002 in staging user interface 6010.
[00359] When the user interface that includes field of view 6036 of the camera(s) is displayed, an input that includes lateral movement causes lateral movement of virtual object 11002 in the user interface that includes field of view 6036 of the camera(s), as illustrated at Figures 13L-13M. In Figure 13L, a contact 13020 is detected at a location that corresponds to virtual object 11002 and the contact moves along a path indicated by arrow 13022. As the contact moves, virtual object 11002 moves along a path that corresponds to movement of contact 13020 from a first position (as shown in Figure 13L) to a second position (as shown in Figure 13M).
[00360] In some embodiments, input provided when the user interface that includes field of view 6036 of the camera(s) is displayed can cause movement of virtual object 11002 from a first plane (e.g., floor plane 5038) to a second plane (e.g., table surface plane 5046), as described with regard to Figures 5AJ-5AM.
[00361] Figures 14A-14Z illustrate example user interfaces for, in accordance with a determination that a first threshold magnitude of movement is met for a first object manipulation behavior, increasing a second threshold magnitude of movement required for a second object manipulation behavior. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 14AA-14AD, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.
[00362] In Figure 14A, virtual object 11002 is displayed in a user interface that includes field of view 6036 of the camera(s). As described further with regard to Figures 14B-14Z, translation movement meter 14002, scaling movement meter 14004, and rotation movement meter 14006 are used to indicate respective magnitudes of movement that correspond to object manipulation behaviors (e.g., a translation operation, a scaling operation, and/or a rotation operation). Translation movement meter 14002 indicates a magnitude of lateral (e.g., leftward or rightward) movement of a set of contacts on touch screen display 112. Scaling movement meter 14004 indicates a magnitude of increasing or decreasing distance between respective contacts in a set of contacts (e.g., a magnitude of a pinch or de-pinch gesture) on touch screen display 112. Rotation movement meter 14006 indicates a magnitude of rotational movement of a set of contacts on touch screen display 112.
[00363] Figures 14B-14E illustrate an input for rotating virtual object 11002 in the user interface that includes field of view 6036 of the one or more cameras. The input for rotating virtual object 11002 includes a gesture in which a first contact 14008 moves rotationally in a clockwise direction along a path indicated by arrow 14010 and a second contact 14012 moves rotationally in a clockwise direction along a path indicated by arrow 14014. In Figure 14B, contacts 14008 and 14012 with touch screen 112 are detected. In Figure 14C, contact 14008 has moved along a path indicated by arrow 14010 and contact 14012 has moved along a path indicated by arrow 14014. Because in Figure 14C a magnitude of rotational movement of contact 14008 and contact 14012 has not reached threshold RT, virtual object 11002 has not yet rotated in response to the input. In Figure 14D, a magnitude of rotational movement of contact 14008 and contact 14012 has increased above threshold RT and virtual object 11002 has rotated (relative to the position of virtual object 11002 shown in Figure 14B) in response to the input. When the magnitude of rotational movement increases above threshold RT, the required magnitude of movement for scaling virtual object 11002 is increased (e.g., the scaling threshold ST has increased from ST to ST', as indicated at scaling movement meter 14004) and the required magnitude of movement for translating virtual object 11002 is increased (e.g., the translation threshold TT has increased from TT to TT', as indicated at translation movement meter 14002). In Figure 14E, contact 14008 and contact 14012 have continued to move along the rotational paths indicated by arrows 14010 and 14014, respectively, and virtual object 11002 has continued to rotate in response to the input. In Figure 14F, contacts 14008 and 14012 have lifted off of touch screen 112.
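By way of illustration only, the threshold escalation described above can be sketched as a small mutable structure: when one manipulation (here rotation) first crosses its threshold, the thresholds of the other two are raised. All values and the escalation factor in this Swift sketch are placeholders, not figures from the specification.

```swift
import CoreGraphics

// Sketch of the threshold escalation above: once rotation exceeds RT,
// TT and ST are raised (to TT' and ST'), making it harder to trigger
// translation or scaling accidentally during the same gesture.
struct ManipulationThresholds {
    var translation: CGFloat = 20   // TT (placeholder value)
    var scaling: CGFloat = 20       // ST (placeholder value)
    var rotation: CGFloat = 0.2     // RT, in radians (placeholder value)

    /// Called when rotation first exceeds RT: TT -> TT' and ST -> ST'.
    mutating func escalateAfterRotationBegan(factor: CGFloat = 2.5) {
        translation *= factor   // TT'
        scaling *= factor       // ST'
    }
}
```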
[00364] Figures 14G-14I illustrate an input for scaling (e.g., increasing the size of) virtual object 11002 in the user interface that includes field of view 6036 of the one or more cameras. The input for increasing the size of virtual object 11002 includes a gesture in which a first contact 14016 moves along a path indicated by arrow 14018 and a second contact 14020 moves along a path indicated by arrow 14022 (e.g., such that a distance between contact 14016 and contact 14020 increases). In Figure 14G, contacts 14016 and 14020 with touch screen 112 are detected. In Figure 14H, contact 14016 has moved along a path indicated by arrow 14018 and contact 14020 has moved along a path indicated by arrow 14022. Because in Figure 14H a magnitude of movement of contact 14016 away from contact 14020 has not reached threshold ST, the size of virtual object 11002 has not yet been adjusted in response to the input. In Figure 14I, a magnitude of scaling movement of contact 14016 and contact 14020 has increased above threshold ST and the size of virtual object 11002 has been increased (relative to the size of virtual object 11002 shown in Figure 14H) in response to the input. When the magnitude of scaling movement increases above threshold ST, the required magnitude of movement for rotating virtual object 11002 is increased (e.g., the rotation threshold RT has increased from RT to RT', as indicated at rotation movement meter 14006) and the required magnitude of movement for translating virtual object 11002 is increased (e.g., the translation threshold TT has increased from TT to TT', as indicated at translation movement meter 14002). In Figure 14J, contacts 14016 and 14020 have lifted off of touch screen 112.
[00365] Figures 14K-14M illustrate an input for translating virtual object 11002 (e.g., moving virtual object 11002 to the left) in the user interface that includes field of view 6036 of the one or more cameras. The input for moving virtual object 11002 includes a gesture in which a first contact 14024 moves along a path indicated by arrow 14026 and a second contact 14028 moves along a path indicated by arrow 14030 (e.g., such that contact 14024 and contact 14028 both move leftward). In Figure 14K, contacts 14024 and 14028 with touch screen 112 are detected. In Figure 14L, contact 14024 has moved along a path indicated by arrow 14026 and contact 14028 has moved along a path indicated by arrow 14030. Because in Figure 14L a magnitude of leftward movement of contacts 14024 and 14028 has not reached threshold TT, virtual object 11002 has not yet been moved in response to the input. In Figure 14M, a magnitude of leftward movement of contact 14024 and contact 14028 has increased above threshold TT and the virtual object 11002 has been moved in the direction of the movement of contacts 14024 and 14028. When the magnitude of translational movement increases above threshold TT, the required magnitude of movement for scaling virtual object 11002 is increased (e.g., the scaling threshold ST has increased from ST to ST', as indicated at scaling movement meter 14004) and the required magnitude of movement for rotating virtual object 11002 is increased (e.g., the rotation threshold RT has increased from RT to RT', as indicated at rotation movement meter 14006). In Figure 14N, contacts 14024 and 14028 have lifted off of touch screen 112.
[00366] Figures 14O-14Z illustrate an input that includes gestures for translating virtual object 11002 (e.g., moving virtual object 11002 to the right), scaling virtual object 11002 (e.g., increasing the size of virtual object 11002), and rotating virtual object 11002. In Figure 14O, contacts 14032 and 14036 with touch screen 112 are detected. In Figures 14O-14P, contact 14032 moves along a path indicated by arrow 14034 and contact 14036 moves along a path indicated by arrow 14038. A magnitude of rightward movement of contacts 14032 and 14036 has increased above threshold TT and the virtual object 11002 has been moved in the direction of the movement of contacts 14032 and 14036. As a result of the satisfaction of threshold TT by movement of contacts 14032 and 14036, the required magnitude of movement for scaling virtual object 11002 is increased to ST' and the required magnitude of movement for rotating virtual object 11002 is increased to RT'. After the threshold TT has been satisfied (as indicated by the high water mark 14043 shown at translation movement meter 14002 in Figure 14Q), any lateral movement of contacts 14032 and 14036 will cause lateral movement of virtual object 11002.
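By way of illustration only, the "high water mark" can be sketched as a latch: once the (possibly escalated) threshold has been crossed, the corresponding manipulation stays unlocked for the remainder of the input and applies directly, with no further threshold test. A minimal Swift sketch with assumed names.

```swift
import CoreGraphics

// Sketch of the high-water-mark behavior above, shown for translation:
// before the threshold is crossed, lateral movement accumulates without
// moving the object; after it is crossed (high water mark 14043), every
// lateral increment moves the object freely.
struct ManipulationLatch {
    private(set) var translationUnlocked = false

    /// Returns the translation to apply for this increment of lateral movement.
    mutating func translation(forAccumulated magnitude: CGFloat,
                              increment: CGFloat,
                              threshold: CGFloat) -> CGFloat {
        if !translationUnlocked && magnitude >= threshold {
            translationUnlocked = true   // latch: stays unlocked for the input
        }
        return translationUnlocked ? increment : 0
    }
}
```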
[00367] In Figures 14Q-14R, contact 14032 moves along a path indicated by arrow 14040 and contact 14036 moves along a path indicated by arrow 14042. In Figure 14R, the magnitude of movement of contact 14032 away from contact 14036 has exceeded the original scaling threshold ST, but has not reached the increased scaling threshold ST'. When the increased scaling movement threshold ST' is in effect, scaling does not occur until the magnitude of movement of contact 14032 away from contact 14036 increases above the increased scaling movement threshold ST', so the size of virtual object 11002 has not been changed from Figure 14Q-14R. In Figures 14R-14S, the distance between contact 14032 and contact 14036 continues to increase as contact 14032 moves along a path indicated by arrow 14044 and contact 14036 moves along a path indicated by arrow 14046. In Figure 14S, the magnitude of movement of contact 14032 away from contact 14036 has exceeded the increased scaling threshold ST' and the size of virtual object 11002 has increased. After the threshold ST' has been satisfied (as indicated by the high water mark 14047 shown at scaling movement meter 14004 in Figure 14T), any scaling movement of contacts 14032 and 14036 will cause scaling of virtual object 11002.
[00368] In Figures 14T-14U, contact 14032 moves along a path indicated by arrow 14048 and contact 14036 moves along a path indicated by arrow 14050. Because the threshold TT has been satisfied (as indicated by the high water mark 14043 shown at translation movement meter 14002), virtual object 11002 moves freely in the direction of the lateral movement of contacts 14032 and 14036.
[00369] In Figures 14V-14W, contact 14032 moves along a path indicated by arrow 14052 and contact 14036 moves along a path indicated by arrow 14054. The movement of the contacts 14032 and 14036 includes translational movement (leftward movement of contacts 14032 and 14036) and scaling movement (movement that decreases the distance between contact 14032 and contact 14036 (e.g., a pinch gesture)). Because the translation threshold TT has been satisfied (as indicated by the high water mark 14043 shown at translation movement meter 14002), virtual object 11002 moves freely in the direction of the lateral movement of contacts 14032 and 14036, and because the increased scaling threshold ST' has been satisfied (as indicated by the high water mark 14047 shown at scaling movement meter 14004), virtual object 11002 scales freely in response to the movement of contact 14032 toward contact 14036. From Figure 14V to 14W, the size of virtual object 11002 has decreased and virtual object 11002 has moved leftward in response to the movement of contact 14032 along the path indicated by arrow 14052 and the movement of contact 14036 along the path indicated by arrow 14054.
[00370] In Figures 14X-14Z, contact 14032 moves rotationally in a counterclockwise direction along a path indicated by arrow 14056 and contact 14036 moves rotationally in a counterclockwise direction along a path indicated by arrow 14058. In Figure 14Y, the magnitude of rotational movement of contact 14032 and contact 14036 has exceeded the original rotation threshold RT, but has not reached the increased rotation threshold RT'. When the increased rotational movement threshold RT' is in effect, rotation of virtual object 11002 does not occur until the magnitude of rotational movement of contacts 14032 and 14036 increases above the increased rotational movement threshold RT', so virtual object 11002 has not rotated from Figure 14X-14Y. In Figures 14Y-14Z, contact 14032 and contact 14036 continue to move rotationally in a counterclockwise direction as contact 14032 moves along a path indicated by arrow 14060 and contact 14036 moves along a path indicated by arrow 14062. In Figure 14Z, the magnitude of rotational movement of contact 14032 and contact 14036 has exceeded the increased rotation threshold RT' and the virtual object 11002 has rotated in response to the input.
[00371] Figures 14AA-14AD are flow diagrams illustrating operations for, in accordance with a determination that a first threshold magnitude of movement is met for a first object manipulation behavior, increasing a second threshold magnitude of movement required for a second object manipulation behavior. The operations described with regard to Figures 14AA-14AD are performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) having a display generation component (e.g., a display, a projector, a heads up display or the like) and a touch-sensitive surface (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface). Some operations described with regard to Figures 14AA-14AD are, optionally, combined and/or the order of some operations is, optionally, changed.
[00372] At operation 14066, a first portion of a user input that includes movement of one or more contacts is detected. At operation 14068, it is determined whether the movement of the one or more contacts (e.g., at a location that corresponds to a virtual object 11002) increases above an object rotation threshold (e.g., rotation threshold RT indicated by rotation movement meter 14006). In accordance with a determination that the movement of the one or more contacts increases above an object rotation threshold (e.g., as described with regard to Figures 14B-14D), the flow proceeds to operation 14070. In accordance with a determination that the movement of the one or more contacts does not increase above an object rotation threshold, the flow proceeds to operation 14074.
[00373] At operation 14070, the object (e.g., virtual object 11002) is rotated based on the first portion of the user input (e.g., as described with regard to Figures 14B-14D). At operation 14072, an object translation threshold is increased (e.g., from TT to TT', as described with regard to Figure 14D), and an object scaling threshold is increased (e.g., from ST to ST', as described with regard to Figure 14D). Flow proceeds from operation 14072 to operation 14086 of Figure 14AB, as indicated at A.
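By way of illustration only, the branch structure of operations 14066-14084, including the translation and scaling branches described in the paragraphs that follow, can be collapsed into a single sketch: the first portion of the input is tested against the rotation, translation, and scaling thresholds in turn, the first behavior whose threshold is exceeded is performed, and the other two thresholds are increased. The Swift below is illustrative; the names and the escalation factor are assumptions.

```swift
import CoreGraphics

// Sketch of the decision flow in Figures 14AA-14AD: classify the first
// portion of the input and escalate the thresholds of the two behaviors
// that did not win.
struct Thresholds { var rotation, translation, scaling: CGFloat }
struct Magnitudes { var rotation, translation, scaling: CGFloat }
enum Outcome { case rotate, translate, scale, none }

func processFirstPortion(_ m: Magnitudes,
                         thresholds t: inout Thresholds,
                         escalation factor: CGFloat = 2.5) -> Outcome {
    if m.rotation > t.rotation {            // operations 14068/14070/14072
        t.translation *= factor             // TT -> TT'
        t.scaling *= factor                 // ST -> ST'
        return .rotate
    }
    if m.translation > t.translation {      // operations 14074/14076/14078
        t.rotation *= factor                // RT -> RT'
        t.scaling *= factor                 // ST -> ST'
        return .translate
    }
    if m.scaling > t.scaling {              // operations 14080/14082/14084
        t.rotation *= factor                // RT -> RT'
        t.translation *= factor             // TT -> TT'
        return .scale
    }
    return .none                            // no threshold exceeded yet
}
```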
[00374] At operation 14074, it is determined whether the movement of the one or more contacts (e.g., at a location that corresponds to a virtual object 11002) increases above an object translation threshold (e.g., translation threshold TT indicated by translation movement meter 14002). In accordance with a determination that the movement of the one or more contacts increases above an object translation threshold (e.g., as described with regard to Figures 14K-14M), the flow proceeds to operation 14076. In accordance with a determination that the movement of the one or more contacts does not increase above an object translation threshold, the flow proceeds to operation 14080.
[00375] At operation 14076, the object (e.g., virtual object 11002) is translated based on the first portion of the user input (e.g., as described with regard to Figures 14K-14M). At operation 14078, an object rotation threshold is increased (e.g., from RT to RT', as described with regard to Figure 14M) and an object scaling threshold is increased (e.g., from ST to ST', as described with regard to Figure 14M). Flow proceeds from operation 14078 to operation 14100 of Figure 14AC, as indicated at B.
[00376] At operation 14080, it is determined whether the movement of the one or more contacts (e.g., at a location that corresponds to a virtual object 11002) increases above an object scaling threshold (e.g., scaling threshold ST indicated by scaling movement meter 14004). In accordance with a determination that the movement of the one or more contacts increases above an object scaling threshold (e.g., as described with regard to Figures 14G-14I), the flow proceeds to operation 14082. In accordance with a determination that the movement of the one or more contacts does not increase above an object scaling threshold, the flow proceeds to operation 14085.
[00377] At operation 14082, the object (e.g., virtual object 11002) is scaled based on the first portion of the user input (e.g., as described with regard to Figures 14G-14I). At operation 14084, an object rotation threshold is increased (e.g., from RT to RT', as described with regard to Figure 14I) and an object translation threshold is increased (e.g., from TT to TT', as described with regard to Figure 14I). Flow proceeds from operation 14084 to operation 14114 of Figure 14AD, as indicated at C.
[00378] At operation 14085, an additional portion of user input that includes movement of the one or more contacts is detected. Flow proceeds from operation 14085 to operation 14066.
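Paragraphs [00372]-[00378] describe a first-gesture-wins disambiguation scheme: the first manipulation whose movement exceeds its threshold is applied, and the thresholds for the other two manipulations are raised so that incidental movement does not trigger them accidentally. The Swift sketch below illustrates this pattern under stated assumptions; the names (GestureKind, Thresholds, ObjectManipulationRecognizer) and the threshold values are hypothetical and are not taken from the disclosure.

    // A minimal sketch, with assumed names and values, of the threshold logic of
    // Figures 14AA-14AD. The caller accumulates per-gesture magnitudes for each
    // portion of the user input and passes them in.
    enum GestureKind { case rotation, translation, scaling }

    struct Thresholds {
        var rotation: Double = 10.0     // RT, in degrees (assumed value)
        var translation: Double = 20.0  // TT, in points (assumed value)
        var scaling: Double = 0.1       // ST, as a pinch-distance ratio (assumed value)

        // Once one manipulation begins, raise the other two thresholds
        // (RT -> RT', TT -> TT', ST -> ST'), as in operations 14072, 14078, and 14084.
        mutating func raiseAllExcept(_ kind: GestureKind) {
            if kind != .rotation { rotation *= 2 }
            if kind != .translation { translation *= 2 }
            if kind != .scaling { scaling *= 2 }
        }
    }

    final class ObjectManipulationRecognizer {
        private var thresholds = Thresholds()
        private var active: Set<GestureKind> = []

        // Called once per portion of the user input (operations 14066, 14086, 14100, 14114).
        func handle(rotation: Double, translation: Double, scaling: Double) {
            check(.rotation, magnitude: rotation, against: thresholds.rotation)
            check(.translation, magnitude: translation, against: thresholds.translation)
            check(.scaling, magnitude: scaling, against: thresholds.scaling)
        }

        private func check(_ kind: GestureKind, magnitude: Double, against threshold: Double) {
            if active.contains(kind) {
                apply(kind, magnitude)            // threshold already met: manipulate freely
            } else if magnitude > threshold {     // e.g., operations 14068, 14074, 14080
                let isFirstManipulation = active.isEmpty
                active.insert(kind)
                apply(kind, magnitude)            // e.g., operations 14070, 14076, 14082
                if isFirstManipulation {
                    thresholds.raiseAllExcept(kind)
                }
            }
        }

        private func apply(_ kind: GestureKind, _ magnitude: Double) {
            // Rotate, translate, or scale virtual object 11002 here (not shown).
        }
    }

On this reading, the free manipulation of operations 14090, 14104, and 14118 below corresponds to the already-active fast path, while operations 14092-14098, 14106-14112, and 14120-14126 correspond to the elevated RT', TT', and ST' comparisons.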
[00379] In Figure 14AB, at operation 14086, an additional portion of user input that includes movement of the one or more contacts is detected. Flow proceeds from operation 14086 to operation 14088.
[00380] At operation 14088, it is determined whether the movement of the one or more contacts is rotation movement. In accordance with a determination that the movement of the one or more contacts is rotation movement, the flow proceeds to operation 14090. In accordance with a determination that the movement of the one or more contacts is not rotation movement, the flow proceeds to operation 14092.
[00381] At operation 14090, the object (e.g., virtual object 11002) is rotated based on the additional portion of the user input (e.g., as described with regard to Figures 14D-14E). Because the rotation threshold was previously met, the object rotates freely in accordance with additional rotation input.
[00382] At operation 14092, it is determined whether the movement of the one or more contacts increases above an increased object translation threshold (e.g., translation threshold TT' indicated by translation movement meter 14002 in Figure 14D). In accordance with a determination that the movement of the one or more contacts increases above the increased object translation threshold, the flow proceeds to operation 14094. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object translation threshold, the flow proceeds to operation 14096.
[00383] At operation 14094, the object (e.g., virtual object 11002) is translated based on the additional portion of the user input.
[00384] At operation 14096, it is determined whether the movement of the one or more contacts increases above an increased object scaling threshold (e.g., scaling threshold ST' indicated by scaling movement meter 14004 in Figure 14D). In accordance with a determination that the movement of the one or more contacts increases above the increased object scaling threshold, the flow proceeds to operation 14098. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object scaling threshold, the flow returns to operation 14086.
[00385] At operation 14098, the object (e.g., virtual object 11002) is scaled based on the additional portion of the user input.
[00386] In Figure 14AC, at operation 14100, an additional portion of user input that includes movement of the one or more contacts is detected. Flow proceeds from operation 14100 to operation 14102.
[00387] At operation 14102, it is determined whether the movement of the one or more contacts is translation movement. In accordance with a determination that the movement of the one or more contacts is translation movement, the flow proceeds to operation 14104. In accordance with a determination that the movement of the one or more contacts is not translation movement, the flow proceeds to operation 14106.
[00388] At operation 14104, the object (e.g., virtual object 11002) is translated based on the additional portion of the user input. Because the translation threshold was previously met, the object translates freely in accordance with additional translation input.
[00389] At operation 14106, it is determined whether the movement of the one or more contacts increases above an increased object rotation threshold (e.g., rotation threshold RT' indicated by rotation movement meter 14006 in Figure 14M). In accordance with a determination that the movement of the one or more contacts increases above the increased object rotation threshold, the flow proceeds to operation 14108. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object rotation threshold, the flow proceeds to operation 14110.
[00390] At operation 14108, the object (e.g., virtual object 11002) is rotated based on the additional portion of the user input.
[00391] At operation 14110, it is determined whether the movement of the one or more contacts increases above an increased object scaling threshold (e.g., scaling threshold ST' indicated by scaling movement meter 14004 in Figure 14M). In accordance with a determination that the movement of the one or more contacts increases above the increased object scaling threshold, the flow proceeds to operation 14112. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object scaling threshold, the flow returns to operation 14100.
[00392] At operation 14112, the object (e.g., virtual object 11002) is scaled based on the additional portion of the user input.
[00393] In Figure 14AD, at operation 14114, an additional portion of user input that includes movement of the one or more contacts is detected. Flow proceeds from operation 14114 to operation 14116.
[00394] At operation 14116, it is determined whether the movement of the one or more contacts is scaling movement. In accordance with a determination that the movement of the one or more contacts is scaling movement, the flow proceeds to operation 14118. In accordance with a determination that the movement of the one or more contacts is not scaling movement, the flow proceeds to operation 14120.
[00395] At operation 14118, the object (e.g., virtual object 11002) is scaled based on the additional portion of the user input. Because the scaling threshold was previously met, the object scales freely in accordance with additional scaling input.
[00396] At operation 14120, it is determined whether the movement of the one or more contacts increases above an increased object rotation threshold (e.g., rotation threshold RT' indicated by rotation movement meter 14006 in Figure 14I). In accordance with a determination that the movement of the one or more contacts increases above the increased object rotation threshold, the flow proceeds to operation 14122. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object rotation threshold, the flow proceeds to operation 14124.
[00397] At operation 14122, the object (e.g., virtual object 11002) is rotated based on the additional portion of the user input.
[00398] At operation 14124, it is determined whether the movement of the one or more contacts increases above an increased object translation threshold (e.g., translation threshold TT' indicated by translation movement meter 14002 in Figure 14I). In accordance with a determination that the movement of the one or more contacts increases above the increased object translation threshold, the flow proceeds to operation 14126. In accordance with a determination that the movement of the one or more contacts does not increase above the increased object translation threshold, the flow returns to operation 14114.
[00399] Figures 15A-15AI illustrate example user interfaces for generating an audio alert in accordance with a determination that movement of a device causes a virtual object to move outside of a displayed field of view of one or more device cameras. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451
while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.
[00400] Figures 15A-15AI illustrate user interfaces and device operations that occur when an accessibility feature is active. In some embodiments, the accessibility feature includes a mode in which a decreased number of inputs or alternative inputs are usable for accessing device features (e.g., to increase the ease of accessing device features for users with limited ability to provide the input gestures described above). For example, the accessibility mode is a switch control mode in which a first input gesture (e.g., a swipe input) is used to advance or reverse through available device operations, and a selection input (e.g., a double tap input) is used to perform a currently indicated operation. As the user interacts with the device, audio alerts are generated (e.g., to provide feedback to the user to indicate that an operation has been performed, to indicate a current display status of a virtual object 11002 relative to a staging user interface or a field of view of one or more cameras of the device, etc.).
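As a rough illustration, the scanning behavior described above can be modeled as a cursor over an ordered list of announceable operations. The following Swift sketch uses assumed names (AccessibleOperation, SwitchControlScanner) and a print placeholder for speech output; it is not the disclosure's implementation.

    // A minimal sketch of switch-control scanning: a swipe advances or reverses
    // the selection, a double tap performs the highlighted operation, and each
    // change of selection is announced.
    struct AccessibleOperation {
        let announcement: String    // e.g., "selected: tilt up button"
        let perform: () -> Void
    }

    final class SwitchControlScanner {
        private let operations: [AccessibleOperation]
        private var index = 0

        init(operations: [AccessibleOperation]) { self.operations = operations }

        // Rightward swipe: forward == true; leftward swipe: forward == false.
        func handleSwipe(forward: Bool) {
            guard !operations.isEmpty else { return }
            index = (index + (forward ? 1 : operations.count - 1)) % operations.count
            announce(operations[index].announcement)
        }

        // Double tap performs the currently indicated operation.
        func handleDoubleTap() {
            guard operations.indices.contains(index) else { return }
            operations[index].perform()
        }

        private func announce(_ text: String) {
            print("audio alert: \(text)")   // placeholder for the device's speech output
        }
    }

With the operations ordered as in Figures 15B-15P (share, tilt up, tilt down, rotate clockwise, and so on), each rightward swipe in the figures maps to handleSwipe(forward: true) and each double tap to handleDoubleTap().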
[00401] In Figure 15A, a messaging user interface 5008 includes a two-dimensional representation of three-dimensional virtual object 11002. A selection cursor 15001 is shown surrounding three-dimensional virtual object 11002 (e.g., to indicate that a currently selected operation is an operation that will be performed on virtual object 11002). An input (e.g., a double tap input) by contact 15002 is detected for performing the currently indicated operation (e.g., displaying a three-dimensional representation of virtual object 11002 in staging user interface 6010). In response to the input, display of the messaging user interface 5008 is replaced by display of a staging user interface 6010, as shown in Figure 15B.
[00402] In Figure 15B, virtual object 11002 is displayed in staging user interface 6010. An audio alert is generated (e.g., by device speaker 111), as indicated at 15008, to indicate a status of the device. For example, the audio alert 15008 includes an announcement, "chair is now shown in the staging view," as indicated at 15010.
[00403] In Figure 15B, a selection cursor 15001 is shown surrounding share control 6020 (e.g., to indicate that a currently selected operation is a share operation). An input (e.g., a rightward swipe along a path indicated by arrow 15006) by contact 15004 is detected. In response to the input, the selected operation is advanced to a next operation.
[00404] In Figure 15C, a tilt up control 15012 is displayed (e.g., to indicate that a currently selected operation is an operation for tilting the displayed virtual object 11002
upward). An audio alert is generated, as indicated at 15014, to indicate a status of the device. For example, the audio alert includes an announcement, "selected: tilt up button," as indicated at 15016. An input (e.g., a rightward swipe along a path indicated by arrow 15020) by contact 15018 is detected. In response to the input, the selected operation is advanced to a next operation.
[00405] In Figure 15D, a tilt down control 15022 is displayed (e.g., to indicate that a currently selected operation is an operation for tilting the displayed virtual object 11002 downward). An audio alert is generated, as indicated at 15024, to indicate a status of the device. For example, the audio alert includes an announcement, "selected: tilt down button," as indicated at 15026. An input (e.g., a double tap input) by contact 15028 is detected. In response to the input, the selected operation is performed (e.g., the virtual object 11002 is tilted downward in the staging view).
[00406] In Figure 15E, the virtual object 11002 is tilted downward in the staging view. An audio alert is generated, as indicated at 15030, to indicate a status of the device. For example, the audio alert includes an announcement, "Chair tilted five degrees down. Chair is now tilted 10 degrees toward the screen," as indicated at 15032.
[00407] In Figure 15F, an input (e.g., a rightward swipe along a path indicated by arrow 15036) by contact 15034 is detected. In response to the input, the selected operation is advanced to a next operation.
[00408] In Figure 15G, a rotate clockwise control 15038 is displayed (e.g., to indicate that a currently selected operation is an operation for rotating the displayed virtual object 11002 clockwise). Audio alert 15040 includes an announcement, "selected: rotate clockwise button," as indicated at 15042. An input (e.g., a rightward swipe along a path indicated by arrow 15046) by contact 15044 is detected. In response to the input, the selected operation is advanced to a next operation.
[00409] In Figure 15H, a rotate counterclockwise control 15048 is displayed (e.g., to indicate that a currently selected operation is an operation for rotating the displayed virtual object 11002 counterclockwise). An audio alert 15050 includes an announcement, "selected: rotate counterclockwise button," as indicated at 15052. An input (e.g., a double tap input) by contact 15054 is detected. In response to the input, the selected operation is performed (e.g., the virtual object 11002 is rotated counterclockwise in the staging view, as indicated in Figure 15I).
[00410] In Figure 15I, audio alert 15056 includes an announcement, "Chair rotated by five degrees counterclockwise. Chair is now rotated five degrees away from the screen," as indicated at 15058.
[00411] In Figure 15J, an input (e.g., a rightward swipe along a path indicated by arrow 15062) by contact 15060 is detected. In response to the input, the selected operation is advanced to a next operation.
[00412] In Figure 15K, a zoom control 15064 is displayed (e.g., to indicate that a currently selected operation is an operation for zooming the displayed virtual object 11002). Audio alert 15066 includes an announcement, "scale: adjustable," as indicated at 15068. The keyword "adjustable" in conjunction with a control name in the announcement indicates that a swipe input (e.g., a vertical swipe input) is usable to operate the control. For example, an upward swipe input is provided by contact 5070 as it moves upward along a path indicated by arrow 5072. In response to the input, the zoom operation is performed (e.g., the size of virtual object 11002 is increased, as indicated in Figures 15K-15L).
[00413] In Figure 15L, audio alert 15074 includes an announcement, "Chair is now adjusted to 150 percent of original size," as indicated at 15076. An input for reducing the size of virtual object 11002 (e.g., a downward swipe input) is provided by contact 5078 that moves downward along a path indicated by arrow 5078. In response to the input, the zoom operation is performed (e.g., the size of virtual object 11002 is decreased, as indicated in Figures 15L-15M).
[00414] In Figure 15M, audio alert 15082 includes an announcement, "Chair is now adjusted to 100 percent of original size," as indicated at 15084. Because the size of virtual object 11002 is adjusted to its originally displayed size in staging view 6010, a tactile output (as illustrated at 15086) occurs (e.g., to provide feedback indicating that the virtual object 11002 has returned to its original size).
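The adjustable control behavior of Figures 15K-15M reduces to three steps: a vertical swipe changes the scale, the new percentage is announced, and a tactile output marks the return to the original size. A minimal Swift sketch follows; the clamp limits and snap tolerance are assumptions, not values from the disclosure.

    // Scale 1.0 represents the originally displayed size (100 percent).
    final class AdjustableScaleControl {
        private(set) var scale: Double = 1.0

        func handleVerticalSwipe(delta: Double,
                                 announce: (String) -> Void,
                                 tactileOutput: () -> Void) {
            scale = max(0.25, min(4.0, scale + delta))   // assumed clamp limits
            if abs(scale - 1.0) < 0.005 {                // assumed snap tolerance
                scale = 1.0        // snap exactly back to the original size
                tactileOutput()    // feedback that the object is at 100 percent again
            }
            let percent = Int((scale * 100).rounded())
            announce("Chair is now adjusted to \(percent) percent of original size")
        }
    }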
[00415] In Figure 15N, an input (e.g., a rightward swipe along a path indicated by arrow 15090) by contact 15088 is detected. In response to the input, the selected operation is advanced to a next operation.
[00416] In Figure 15O, a selection cursor 15001 is shown surrounding back control 6016 (e.g., to indicate that a currently selected operation is an operation for returning to the previous user interface). Audio alert 15092 includes an announcement, "selected: return
button," as indicated at 15094. An input (e.g., a rightward swipe along a path indicated by arrow 15098) by contact 15096 is detected. In response to the input, the selected operation is advanced to a next operation.
[00417] In Figure 15P, a selection cursor 15001 is shown surrounding toggle control 6018 (e.g., to indicate that a currently selected operation is an operation for toggling between display of staging user interface 6010 and display of a user interface that includes field of view 6036 of the camera(s)). Audio alert 15098 includes an announcement, "selected: world view/staging view toggle," as indicated at 50100. An input (e.g., a double tap input) by contact 15102 is detected. In response to the input, display of staging user interface 6010 is replaced by display of a user interface that includes field of view 6036 of the camera(s) (as indicated in Figure 15Q).
[00418] Figures 15Q-15T illustrate a calibration sequence that occurs when field of view 6036 of the camera(s) is displayed (e.g., because a plane that corresponds to virtual object 11002 has not yet been detected in field of view 6036 of the camera(s)). During the calibration sequence, a translucent representation of virtual object 11002 is displayed, field of view 6036 of the camera(s) is blurred, and a prompt that includes an animated image (including representation 12004 of device 100 and representation 12010 of a plane) is displayed to prompt the user to move the device. In Figure 15Q, audio alert 15102 includes an announcement, "move the device to detect a plane," as indicated at 50104. From Figure 15Q to Figure 15R, device 100 is moved relative to physical environment 5002 (as indicated by, e.g., the changed position of table 5004 in field of view 6036 of the camera(s)). As a result of detection of the movement of device 100, a calibration user interface object 12014 is displayed, as indicated in Figure 15S.
[00419] In Figure 15S, audio alert 15106 includes an announcement, "move the device to detect a plane," as indicated at 50108. In Figures 15S-15T, calibration user interface object 12014 rotates as device 100 is moved relative to physical environment 5002 (as indicated by, e.g., the changed position of table 5004 in field of view 6036 of the camera(s)). In Figure 15T, sufficient motion has occurred for a plane that corresponds to virtual object 11002 to be detected in field of view 6036 of the camera(s), and audio alert 15110 includes an announcement, "plane detected," as indicated at 50112. In Figures 15U-15V, the translucency of virtual object 11002 is reduced and virtual object 11002 is placed on the detected plane.
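A simplified state model for this calibration sequence is sketched below in Swift; the state names, motion metric, and threshold are assumptions for illustration, not the disclosure's implementation.

    // The object stays translucent and the camera feed blurred until enough
    // device movement has accumulated for a plane to be detected.
    enum PlacementPhase {
        case prompting        // animated "move the device" prompt is shown
        case calibrating      // calibration object 12014 rotates with movement
        case planeDetected    // "plane detected" announced; object is placed
    }

    final class PlacementCalibrator {
        private(set) var phase: PlacementPhase = .prompting
        private var accumulatedMotion = 0.0
        private let requiredMotion = 1.0   // assumed units of device movement

        func deviceMoved(by amount: Double, announce: (String) -> Void) {
            accumulatedMotion += amount
            switch phase {
            case .prompting:
                phase = .calibrating         // movement detected: show object 12014
            case .calibrating where accumulatedMotion >= requiredMotion:
                phase = .planeDetected
                announce("plane detected")   // then reduce translucency and place
            default:
                break
            }
        }
    }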
[00420] In Figure 15V, audio alert 15114 includes an announcement, "chair is now projected in the world, 100 percent visible, occupying 10 percent of the screen," as indicated at 50116. Tactile output generators output a tactile output (as illustrated at 15118) to indicate that virtual object 11002 has been placed on a plane. Virtual object 11002 is displayed at a fixed position relative to physical environment 5002.
[00421] In Figures 15V-15W, device 100 is moved relative to physical environment 5002 (as indicated by, e.g., the changed position of table 5004 in field of view 6036 of the camera(s)) such that virtual object 11002 is no longer visible in field of view 6036 of the camera(s). As a result of the movement of virtual object 11002 out of field of view 6036 of the camera(s), audio alert 15122 includes an announcement, "chair is not on the screen," as indicated at 50124.
[00422] In Figures 15W-15X, device 100 has moved relative to physical environment 5002 such that virtual object 11002 is again visible in field of view 6036 of the camera(s) in Figure 15X. As a result of the movement of virtual object 11002 into field of view 6036 of the camera(s), audio alert 15118 is generated, including an announcement, "chair is now projected in the world, 100 percent visible, occupying 10 percent of the screen," as indicated at 50120.
[00423] In Figures 15X-15Y, device 100 has moved relative to physical environment 5002 (e.g., such that device 100 is "closer" to virtual object 11002 as projected in field of view 6036 of the camera(s) and such that virtual object 11002 is partially visible in field of view 6036 of the camera(s) in Figure 15Y). As a result of the movement of virtual object 11002 partially out of the field of view 6036 of the camera(s), audio alert 15126 includes an announcement, "chair is 90 percent visible, occupying 20 percent of the screen," as indicated at 50128.
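The announcements in Figures 15V-15Y combine two quantities: the fraction of the object that lies within the displayed field of view, and the fraction of the screen that the visible portion occupies. The Swift sketch below shows how such a string might be composed; treating the object's projection as a CGRect is an assumed simplification of the underlying 3D visibility computation.

    import CoreGraphics

    // Composes a status announcement from the object's projected screen-space frame.
    func visibilityAnnouncement(objectFrame: CGRect, screenBounds: CGRect) -> String {
        let visible = objectFrame.intersection(screenBounds)
        guard !visible.isNull, objectFrame.width > 0, objectFrame.height > 0 else {
            return "chair is not on the screen"
        }
        let visibleArea = visible.width * visible.height
        let objectArea = objectFrame.width * objectFrame.height
        let screenArea = screenBounds.width * screenBounds.height
        let percentVisible = Int((100 * visibleArea / objectArea).rounded())
        let percentOfScreen = Int((100 * visibleArea / screenArea).rounded())
        return "chair is \(percentVisible) percent visible, occupying \(percentOfScreen) percent of the screen"
    }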
[00424] In some embodiments, an input provided at a location that corresponds to virtual object 11002 causes an audio message that includes verbal information about virtual object 11002 to be provided. In contrast, when an input is provided at a location that is away from virtual object 11002 and controls, an audio message that includes verbal information about virtual object 11002 is not provided. In Figure 15Z, an audio output 15130 (e.g., a "click" or "buzz") occurs to indicate that contact 15132 is detected at a location that does not correspond to a location of a control or a virtual object 11002 in the user interface. In Figure 15AA, an input by contact 15134 is detected at a location that corresponds to a location of
virtual object 11002. In response to the input, an audio alert 15136 that corresponds to virtual object 11002 (e.g., indicating the status of virtual object 11002) is generated, including an announcement, "chair is 90 percent visible, occupying 20 percent of the screen," as indicated at 50138.
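A short Swift sketch of this location-dependent feedback follows (the hit-testing against flat frames and all names are assumptions): a tap on the object speaks its status, while a tap on empty space yields only the non-verbal click or buzz.

    import CoreGraphics

    func handleTap(at point: CGPoint,
                   objectFrame: CGRect,
                   controlFrames: [CGRect],
                   objectStatus: String,
                   speak: (String) -> Void,
                   click: () -> Void) {
        if objectFrame.contains(point) {
            speak(objectStatus)   // e.g., "chair is 90 percent visible, occupying 20 percent of the screen"
        } else if !controlFrames.contains(where: { $0.contains(point) }) {
            click()               // neither the object nor a control: non-verbal output only
        }
        // A tap that lands on a control is handled by that control (not shown).
    }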
[00425] Figures 15AB-15AI illustrate input for selection and performance of operations in a switch control mode while a user interface that includes field of view 6036 of the camera(s) is displayed.
[00426] In Figure 15AB, an input (e.g., a rightward swipe along a path indicated by arrow 15142) by contact 15140 is detected. In response to the input, an operation is selected, as indicated in Figure 15AC.
[00427] In Figure 15AC, a rightward lateral movement control 15144 is displayed (e.g., to indicate that a currently selected operation is an operation for moving virtual object 11002 to the right). Audio alert 15146 includes an announcement, "selected: move right button," as indicated at 15148. An input (e.g., a double tap input) by contact 15150 is detected. In response to the input, the selected operation is performed (e.g., the virtual object 11002 is moved to the right in the field of view 6036 of the camera(s), as indicated in Figure 15AD).
[00428] In Figure 15AD, the movement of virtual object 11002 is reported by audio alert 15152 that includes an announcement, "chair is 100 percent visible, occupying 30 percent of the screen," as indicated at 15154.
[00429] In Figure 15AE, an input (e.g., a rightward swipe along a path indicated by arrow 15158) by contact 15156 is detected. In response to the input, the selected operation is advanced to a next operation.
[00430] In Figure 15AF, a leftward lateral movement control 15160 is displayed (e.g., to indicate that a currently selected operation is an operation for moving virtual object 11002 to the left). An audio alert 15162 includes an announcement, "selected: move left," as indicated at 15164. An input (e.g., a rightward swipe along a path indicated by arrow 15168) by contact 15166 is detected. In response to the input, the selected operation is advanced to a next operation.
[00431] In Figure 15AG, a clockwise rotation control 15170 is displayed (e.g., to indicate that a currently selected operation is an operation for rotating virtual object 11002
clockwise). An audio alert 15172 includes an announcement, "selected: rotate clockwise," as indicated at 15174. An input (e.g., a rightward swipe along a path indicated by arrow 15178) by contact 15176 is detected. In response to the input, the selected operation is advanced to a next operation.
[00432] In Figure 15AH, a counterclockwise rotation control 15180 is displayed (e.g., to indicate that a currently selected operation is an operation for rotating virtual object 11002 counterclockwise). An audio alert 15182 includes an announcement, "selected: rotate counterclockwise," as indicated at 15184. An input (e.g., a double tap input) by contact 15186 is detected. In response to the input, the selected operation is performed (e.g., the virtual object 11002 is rotated counterclockwise as indicated in Figure 15AI).
[00433] In Figure 15AI, an audio alert 15190 includes an announcement, "Chair rotated by five degrees counterclockwise. Chair is now rotated by zero degrees relative to the screen," as indicated at 15164.
[00434] In some embodiments, a reflection is generated on at least one surface (e.g., an underside surface) of an object (e.g., virtual object 11002). The reflection is generated using image data captured by one or more cameras of device 100. For example, the reflection is based on at least a portion of the captured image data (e.g., an image, a set of images, and/or video) that corresponds to a horizontal plane (e.g., floor plane 5038) detected in the field of view 6036 of the one or more cameras. In some embodiments, generating the reflection includes generating a spherical model that includes the captured image data (e.g., by mapping captured image data onto a model of a virtual sphere).
[00435] In some embodiments, a reflection generated on a surface of an object includes a reflection gradient (e.g., such that a portion of a surface that is closer to a plane has a higher magnitude of reflectivity than a portion of a surface that is further from the plane). In some embodiments, a magnitude of reflectivity of a reflection generated on a surface of an object is based on a reflectivity value of a texture that corresponds to the surface. For example, no reflection is generated at a non-reflective portion of the surface.
[00436] In some embodiments, the reflection is adjusted over time. For example, the reflection is adjusted as input is received for moving and/or scaling the object (e.g., as the object moves, a reflection of the object is adjusted to reflect a portion of the plane that is at a location that corresponds to the object). In some embodiments, the reflection is not adjusted when the object is rotated (e.g., around the z-axis).
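One plausible way to realize such a gradient is to attenuate reflectivity with a surface point's height above the detected plane and mask it by the texture's own reflectivity; the linear falloff and the names in the Swift sketch below are assumptions, not the disclosure's formula.

    // Returns the reflection intensity for a point on the object's surface:
    // zero for non-reflective texture regions, decreasing linearly with height
    // above the plane until it reaches zero at falloffDistance.
    func reflectionIntensity(heightAbovePlane: Double,
                             falloffDistance: Double,
                             textureReflectivity: Double) -> Double {
        guard textureReflectivity > 0, falloffDistance > 0 else { return 0 }
        let gradient = max(0.0, 1.0 - heightAbovePlane / falloffDistance)
        return gradient * textureReflectivity
    }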
[00437] In some embodiments, prior to displaying the object at a determined location (e.g., on a plane detected in the field of view 6036 of the camera(s) that corresponds to the object), no reflection is generated on the surface of the object. For example, no reflection is generated on a surface of an object when a translucent representation of the object is displayed (e.g., as described with regard to Figures 11G-11H) and/or when calibration is being performed (e.g., as described with regard to Figures 12B-12I).
[00438] In some embodiments, a reflection of an object is generated on one or more planes detected in the field of view 6036 of the camera(s). In some embodiments, no reflection of the object is generated in the field of view 6036 of the camera(s).
[00439] Figures 16A-16G are flow diagrams illustrating method 16000 of displaying a virtual object in a user interface that includes a field of view of one or more cameras using different visual properties depending on whether object-placement criteria are met. Method 16000 is performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) having a display generation component (e.g., a display, a projector, a heads up display, or the like), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface), and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite from the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 16000 are, optionally, combined and/or the order of some operations is, optionally, changed.
[00440] The device receives (16002) (e.g., while a staging user interface that includes a movable representation of a virtual object is displayed, and before the field of view of the cameras is displayed) a request to display a virtual object (e.g., a representation of a three-dimensional model) in a first user interface region (e.g., an augmented reality viewer interface) that includes at least a portion of a field of view of the one or more cameras (e.g., the request is an input by a contact that is detected on a representation of the virtual object on a touch-screen display, or the contact is detected on an affordance (e.g., a tap on the “AR view” or “world view” button) that is concurrently displayed with a representation of the virtual object and that is configured to trigger display of an AR view when invoked by the first contact). For example, the request is an input to display virtual object 11002 in field of view 6036 of the one or more cameras, as described with regard to Figure 11F.
[00441] In response to the request to display the virtual object in the first user interface region (e.g., the request to display the virtual object in a view of the physical environment surrounding the device), the device displays (16004), via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras that is included in the first user interface region (e.g., the field of view of the one or more cameras is displayed in response to the request to display the virtual object in the first user interface region), wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located. For example, as described with regard to Figure 11G, virtual object 11002 is displayed in field of view 6036 of the one or more cameras that is a view of physical environment 5002 in which the one or more cameras are located. Displaying the representation of the virtual object includes, in accordance with a determination that object-placement criteria are not met, wherein the object-placement criteria require that a placement location (e.g., a plane) for the virtual object be identified in the field of view of the one or more cameras in order for the object-placement criteria to be met (e.g., the object-placement criteria are not met when the device has not identified a location or plane for placing the virtual object relative to the field of view of the one or more cameras in the first user interface region (e.g., plane identification is still in progress, or there is not enough image data to identify the plane)), displaying the representation of the virtual object with a first set of visual properties (e.g., at a first translucency level, or a first brightness level, or a first saturation level, etc.) and with a first orientation that is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras (e.g., the virtual object floats above the cameras' field of view with an orientation that is relative to a predefined plane independent of the physical environment (e.g., the orientation set in the staging view), and independent of the changes occurring in the camera's field of view (e.g., changes due to movement of the device relative to the physical environment)). For example, in Figures 11G-11H, because a placement location for virtual object 11002 has not been identified in field of view 6036 of the cameras, a translucent version of virtual object 11002 is displayed. As the device moves (as shown from Figure 11G to Figure 11H), the orientation of virtual object 11002 is unchanged. In some embodiments, the object-placement criteria include a requirement that the field of view is stable and provides a stationary view of the physical environment (e.g., the camera moves less than a threshold amount during at least a threshold amount of time, and/or at least a predetermined amount of time has elapsed since the request was received, and/or the camera has been calibrated for plane detection with sufficient prior movement of the device). In accordance with a determination that the object-placement criteria are met (e.g., the object-placement criteria are met when the device has identified a location or plane for placing the virtual object relative to the field of view of the one or more cameras in the first user interface region), the device displays the representation of the virtual object with a second set of visual properties (e.g., at a second translucency level, or a second brightness level, or a second saturation level, etc.) that are distinct from the first set of visual properties and with a second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras. For example, in Figure 11I, because a placement location for virtual object 11002 has been identified (e.g., a plane that corresponds to the floor surface 5038 in physical environment 5002) in field of view 6036 of the cameras, a non-translucent version of virtual object 11002 is displayed. The orientation (e.g., the position on touch screen display 112) of virtual object 11002 has changed from the first orientation shown in Figure 11H to the second orientation shown in Figure 11I. As the device moves (as shown from Figure 11I to Figure 11J), the orientation of virtual object 11002 changes (because virtual object 11002 is now displayed at a fixed orientation relative to physical environment 5002). Displaying a virtual object with a first set of visual properties or a second set of visual properties, depending on whether object-placement criteria are met, provides visual feedback to the user (e.g., to indicate that a request to display the virtual object has been received, but that additional time and/or calibration information is needed for placing the virtual object in the field of view of the one or more cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and avoid attempting to provide input for manipulating the virtual object prior to placement of the object at the second orientation that corresponds to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
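The two display states described in this paragraph can be summarized in a short sketch. The Swift code below is illustrative only; the type names, property values, and translucency figures are assumptions introduced for exposition and are not part of the disclosed embodiments.

import simd

struct VisualProperties {
    var translucency: Float   // 1.0 = fully translucent, 0.0 = opaque
    var brightness: Float
    var saturation: Float
}

enum ObjectPresentation {
    // First state: orientation fixed relative to the display, independent of
    // which portion of the physical environment the cameras currently capture.
    case screenFixed(VisualProperties)
    // Second state: orientation tied to a plane detected in the camera feed.
    case planeAnchored(VisualProperties, planeTransform: simd_float4x4)
}

// Chooses a presentation depending on whether placement criteria are met
// and a plane has been identified.
func presentation(planeTransform: simd_float4x4?,
                  placementCriteriaMet: Bool) -> ObjectPresentation {
    if placementCriteriaMet, let plane = planeTransform {
        return .planeAnchored(
            VisualProperties(translucency: 0.0, brightness: 1.0, saturation: 1.0),
            planeTransform: plane)
    }
    return .screenFixed(
        VisualProperties(translucency: 0.8, brightness: 1.0, saturation: 1.0))
}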
[00442] In some embodiments, the device detects (16006) that the object-placement criteria are met while the representation of the virtual object is displayed with the first set of visual properties and the first orientation (e.g., a plane for placing the virtual object is identified while the virtual object is suspended in the translucent state over a view of the physical environment surrounding the device). Detecting that object-placement criteria are met while the virtual object is displayed with a first set of visual properties (e.g., in a translucent state), without requiring further user input for initiating detection of object-placement criteria, reduces the number of inputs required for object placement. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00443] In some embodiments, in response to detecting that the object-placement criteria are met, the device displays (16008), via the display generation component, an animated transition showing the representation of the virtual object moving (e.g., rotating, scaling, translating, and/or a combination of the above) from the first orientation to the second orientation and changing from having the first set of visual properties to having the second set of visual properties. For example, once the plane for placing the virtual object is identified in the camera's field of view, the virtual object is placed onto that plane with the visible adjustment of its orientation, size, and translucency (and the like). Displaying an animated transition from the first orientation to the second orientation (e.g., without requiring further user input to reorient the virtual object in the first user interface) reduces the number of inputs required for object placement. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
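An animated transition of this kind is commonly implemented by interpolating each animated quantity against a progress value. The sketch below is a hypothetical illustration of that pattern, not the disclosed implementation: orientation is spherically interpolated, while position, scale, and translucency are linearly interpolated over progress t in [0, 1].

import simd

struct TransitionState {
    var orientation: simd_quatf
    var position: SIMD3<Float>
    var scale: Float
    var translucency: Float
}

// Interpolates between the floating (first) state and the plane-anchored
// (second) state as the animation progresses.
func interpolate(from a: TransitionState, to b: TransitionState,
                 t: Float) -> TransitionState {
    TransitionState(
        orientation: simd_slerp(a.orientation, b.orientation, t),
        position: simd_mix(a.position, b.position, SIMD3<Float>(repeating: t)),
        scale: a.scale + (b.scale - a.scale) * t,
        translucency: a.translucency + (b.translucency - a.translucency) * t)
}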
[00444] In some embodiments, detecting that the object-placement criteria are met includes one or more of (16010): detecting that a plane has been identified in the field of view of the one or more cameras; detecting less than a threshold amount of movement between the device and the physical environment for at least a threshold amount of time (e.g., leading to a substantially stationary view of the physical environment in the camera's field of view); and detecting that at least a predetermined amount of time has elapsed since receiving the request for displaying the virtual object in the first user interface region. Detecting that the object-placement criteria are met (e.g., by detecting a plane in the field of view of the one or more cameras without requiring user input to detect the plane) reduces the number of inputs required for object placement. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
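The three conditions listed above can be checked with a simple predicate. The sketch below treats them as a disjunction for illustration (the text says "one or more of"); the field names and threshold values are hypothetical assumptions, not values from the disclosure.

import Foundation

struct PlacementContext {
    var planeIdentified: Bool
    var recentDeviceMovement: Float       // e.g., meters moved in the window
    var stationaryDuration: TimeInterval  // how long movement stayed small
    var timeSinceRequest: TimeInterval
}

// Returns true when any of the example object-placement criteria is satisfied.
func objectPlacementCriteriaMet(_ ctx: PlacementContext,
                                movementThreshold: Float = 0.02,
                                stationaryThreshold: TimeInterval = 0.5,
                                timeoutThreshold: TimeInterval = 2.0) -> Bool {
    let stable = ctx.recentDeviceMovement < movementThreshold
        && ctx.stationaryDuration >= stationaryThreshold
    return ctx.planeIdentified
        || stable
        || ctx.timeSinceRequest >= timeoutThreshold
}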
[00445] In some embodiments, the device detects (16012) first movement of the one or more cameras (e.g., rotation and/or translation of the device relative to the physical environment around the device) while the representation of the virtual object is displayed with the first set of visual properties and the first orientation (e.g., while the virtual object is suspended in the translucent state over a view of the physical environment surrounding the device) over a first portion of the physical environment (e.g., the first portion of the physical environment is visible to the user through the translucent virtual object) captured in the field of view of the one or more cameras. For example, in Figures 11G-11H, the one or more cameras move (as indicated by, e.g., the changed position of table 5004 in field of view 6036 of the cameras) while a translucent representation of virtual object 11002 is displayed. The walls and table of the physical environment, as captured in field of view 6036 of the cameras and displayed in the user interface, are visible through the translucent virtual object 11002. In response to detecting the first movement of the one or more cameras, the device displays (16014) the representation of the virtual object with the first set of visual properties and the first orientation over a second portion of the physical environment captured in the field of view of the one or more cameras, wherein the second portion of the physical environment is distinct from the first portion of the physical environment. For example, while the translucent version of the virtual object is displayed hovering over the physical environment shown in the field of view of the camera, the view of the physical environment within the field of view of the camera shifts and scales (e.g., behind the translucent virtual object) when the device moves relative to the physical environment. Therefore, during the movement of the device, the translucent version of the virtual object becomes overlaid on top of different portions of the physical environment represented in the field of view, as a result of the translation and scaling of the view of the physical environment within the field of view of the camera. For example, in Figure 11H, field of view 6036 of the cameras displays a second portion of physical environment 5002 that is distinct from the first portion of physical environment 5002 displayed in Figure 11G. The orientation of the translucent representation of virtual object 11002 does not change as the movement of the one or more cameras occurs in Figures 11G-11H. Displaying the virtual object with a first orientation in response to detecting movement of the one or more cameras provides visual feedback to the user (e.g., to indicate that the virtual object has not yet been placed at a fixed position relative to the physical environment and thus does not move as the portion of the physical environment captured in the field of view of the one or more cameras changes in accordance with movement of the one or more cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to avoid attempting to provide input for manipulating the virtual object prior to placement of the object at the second orientation that corresponds to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00446] In some embodiments, the device detects (16016) second movement of the one or more cameras (e.g., rotation and/or translation of the device relative to the physical environment around the device) while the representation of the virtual object is displayed with the second set of visual properties and the second orientation (e.g., after the object-placement criteria have been met and the virtual object has been placed on a plane detected in the physical environment in the field of view of the cameras) over a third portion of the physical environment (e.g., a direct view of the third portion of the physical environment (e.g., a portion of the detected plane that supports the virtual object) is blocked by the virtual object) captured in the field of view of the one or more cameras. For example, in Figures 11I-11J, the one or more cameras move (as indicated by, e.g., the changed position of table 5004 in field of view 6036 of the cameras) while a non-translucent representation of virtual object 11002 is displayed. In response to detecting the second movement of the device, the device maintains (16018) display of the representation of the virtual object with the second set of visual properties and the second orientation over the third portion of the physical environment captured in the field of view of the one or more cameras, while the physical environment as captured in the field of view of the one or more cameras moves (e.g., shifts and scales) in accordance with the second movement of the device, and the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the one or more cameras. For example, after the non-translucent version of the virtual object is dropped at a resting location on a plane detected in the physical environment shown in the field of view of the camera, the virtual object's location and orientation are fixed relative to the physical environment within the field of view of the camera, and the virtual object will shift and scale with the physical environment in the field of view of the cameras as the device moves relative to the physical environment (e.g., the non-translucent representation of virtual object 11002 remains fixed at an orientation relative to the floor plane in physical environment 5002 as the movement of the one or more cameras occurs in Figures 11I-11J). Maintaining display of the virtual object at the second orientation in response to detecting movement of the one or more cameras provides visual feedback to the user (e.g., to indicate that the virtual object has been placed at a fixed position relative to the physical environment and thus moves as the portion of the physical environment captured in the field of view of the one or more cameras changes in accordance with movement of the one or more cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs for a virtual object that has been placed at the second orientation that corresponds to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00447] In some embodiments, in accordance with a determination that the object-placement criteria are met (e.g., the object-placement criteria are met when the device has identified a location or plane for placing the virtual object relative to the field of view of the one or more cameras in the first user interface region), the device generates (16020) (e.g., with one or more tactile output generators of the device) a tactile output in conjunction with displaying the representation of the virtual object with the second set of visual properties (e.g., at a reduced translucency level, or a higher brightness level, or a higher saturation level, etc.) and with the second orientation that corresponds to the plane in the physical environment detected in the field of view of the one or more cameras (e.g., the generation of the tactile output is synchronized with the completion of the transition to the non-translucent appearance of the virtual object and the completion of the rotation and translation of the virtual object to settle at the drop location on the plane detected in the physical environment). For example, as shown in Figure 11I, a tactile output as indicated at 11010 is generated in conjunction with displaying the non-translucent representation of virtual object 11002 attached to a plane (e.g., floor surface 5038) that corresponds to virtual object 11002. Generating a tactile output in accordance with a determination that object-placement criteria are met provides the user with improved tactile feedback (e.g., indicating that the operation to place the virtual object was successfully executed). Providing improved feedback to the user enhances the operability of the device (e.g., by providing sensory information that allows a user to perceive that object-placement criteria have been met without cluttering the user interface with displayed information) and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
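On iOS, one plausible way (not necessarily the disclosed one) to produce such a tactile output is UIKit's feedback generator API. The wrapper below is a minimal sketch; the class name and call sites are hypothetical, and the timing mirrors the text: the haptic fires together with the switch to the opaque, plane-anchored appearance.

import UIKit

final class PlacementFeedback {
    private let generator = UIImpactFeedbackGenerator(style: .medium)

    // Call shortly before placement is expected, to reduce haptic latency.
    func prepare() { generator.prepare() }

    // Call when the object settles onto the detected plane, synchronized with
    // the completion of the visual transition.
    func objectDidSettleOnPlane() { generator.impactOccurred() }
}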
[00448] In some embodiments, while displaying the representation of the virtual object with the second set of visual properties and with the second orientation that corresponds to the plane in the physical environment detected in the field of view of the one or more cameras, the device receives (16022) an update regarding at least a location or an orientation of the plane in the physical environment detected in the field of view of the one or more cameras (e.g., the updated plane location and orientation is a result of a more accurate calculation based on additional data accumulated after the initial plane detection result was used to place the virtual object, or of more time-consuming computation methods (e.g., fewer approximations, etc.)). In response to receiving the update regarding at least the location or the orientation of the plane in the physical environment detected in the field of view of the one or more cameras, the device adjusts (16024) at least a location and/or an orientation of the representation of the virtual object in accordance with the update (e.g., gradually moving (e.g., translating and rotating) the virtual object closer to the updated plane). Adjusting a location and/or an orientation of a virtual object in response to receiving an update regarding a plane in the physical environment (e.g., without requiring user input for placing a virtual object relative to a plane) reduces the number of inputs needed to adjust the virtual object. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
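The "gradually moving" behavior suggests a per-frame easing of the object's pose toward the refined plane estimate rather than a single jump. The sketch below is one assumed way to do that, with hypothetical names and an invented easing fraction; rotation is slerped and translation lerped.

import simd

// Eases the object's current pose a fraction of the way toward the updated
// plane pose each frame, so refinements appear as a smooth adjustment.
func easedPose(current: simd_float4x4,
               updatedPlanePose: simd_float4x4,
               perFrameFraction: Float = 0.1) -> simd_float4x4 {
    // Interpolate the rotational part with a quaternion slerp.
    let q = simd_slerp(simd_quatf(current),
                       simd_quatf(updatedPlanePose),
                       perFrameFraction)

    // Interpolate the translation column linearly.
    let t = simd_mix(current.columns.3,
                     updatedPlanePose.columns.3,
                     simd_float4(repeating: perFrameFraction))

    var result = simd_float4x4(q)
    result.columns.3 = t
    return result
}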
[00449] In some embodiments, the first set of visual properties includes (16026) a first size and a first translucency level (e.g., before being dropped into the AR view, the object has a fixed size relative to the display and a fixed high translucency level) and the second set of visual properties includes (16028) a second size that is distinct from the first size (e.g., once dropped in the AR view, the object is displayed with a simulated physical size in relation to the size and a drop location in the physical environment), and a second translucency level that is lower than (e.g., more opaque than) the first translucency level (e.g., the object is no longer translucent in the AR view). For example, in Figure 11H, a translucent representation of virtual object 11002 is shown with a first size, and in Figure 11I, a non-translucent representation of virtual object 11002 is shown with a second (smaller) size. Displaying a virtual object with a first size and a first translucency level or a second size and a second translucency level, depending on whether object-placement criteria are met, provides visual feedback to the user (e.g., to indicate that a request to display the virtual object has been received, but that additional time and/or calibration information is needed for placing the virtual object in the field of view of the one or more cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and avoid attempting to provide input for manipulating the virtual object prior to placement of the object at the second orientation that corresponds to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00450] In some embodiments, the request to display the virtual object in the first user interface region (e.g., the AR view) that includes at least a portion of the field of view of the one or more cameras is received (16030) while the virtual object is displayed in a respective user interface (e.g., a staging user interface) that does not include at least a portion of the field of view of the one or more cameras (e.g., the virtual object is oriented relative to a virtual stage that has an orientation that is independent of the physical environment of the device). The first orientation corresponds to an orientation of the virtual object while the virtual object is displayed in the respective user interface at a time when the request is received. For example, as described with regard to Figure 11F, a request to display virtual object 11002 in a user interface that includes field of view 6036 of the cameras is received while staging user interface 6010 (that does not include the field of view of the cameras) is displayed. The orientation of virtual object 11002 in Figure 11G, in which virtual object 11002 is displayed in a user interface that includes field of view 6036 of the cameras, corresponds to the orientation of virtual object 11002 in Figure 11F, in which virtual object 11002 is displayed in staging user interface 6010. Displaying the virtual object in a first user interface (e.g., a displayed augmented reality view) with an orientation that corresponds to an orientation of the virtual object as displayed in a (previously displayed) interface (e.g., a staging user interface) provides visual feedback to the user (e.g., to indicate that object manipulation input provided while the staging user interface is displayed can be used to establish an orientation of the object in the AR view). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and avoid attempting to provide input for manipulating the virtual object prior to placement of the object at the second orientation that corresponds to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00451] In some embodiments, the first orientation corresponds to (16032) a predefined orientation (e.g., a default orientation such as the orientation that the virtual object is displayed at when it is first displayed in a respective user interface that does not include at least a portion of the field of view of the one or more cameras). Displaying the virtual object in a first user interface (e.g., a displayed augmented reality view) with a first set of visual properties and with a predefined orientation reduces power usage and improves battery life of the device (e.g., by allowing a pre-generated translucent representation of the virtual object to be displayed rather than rendering a translucent representation in accordance with an orientation established in a staging user interface).
[00452] In some embodiments, while displaying the virtual object in the first user interface region (e.g., the AR view) with the second set of visual properties and the second orientation that corresponds to the plane in the physical environment detected in the field of view of the one or more cameras, the device detects (16034) a request to change a simulated physical size of the virtual object (e.g., as a result of a scaling input (e.g., a pinch or de-pinch gesture directed to the virtual object)) from a first simulated physical size to a second simulated physical size (e.g., from 80% of the default size to 120% of the default size, or vice versa) relative to the physical environment captured in the field of view of the one or more cameras. For example, an input to decrease the simulated physical size of the virtual object 11002 is a pinch gesture as described with regard to Figures 11N-11P. In response to detecting the request to change the simulated physical size of the virtual object, the device gradually changes (16036) a displayed size of the representation of the virtual object in the first user interface region in accordance with a gradual change of the simulated physical size of the virtual object from the first simulated physical size to the second simulated physical size (e.g., the displayed size of the virtual object grows or shrinks while the displayed size of the physical environment captured in the field of view of the one or more cameras remains unchanged) and, during the gradual change of the displayed size of the representation of the virtual object in the first user interface region, in accordance with a determination that the simulated physical size of the virtual object has reached a predefined simulated physical size (e.g., 100% of the default size), the device generates a tactile output to indicate that the simulated physical size of the virtual object has reached the predefined simulated physical size. For example, as described with regard to Figures 11N-11P, the displayed size of the representation of virtual object 11002 gradually decreases in response to the pinch gesture input. In Figure 11O, when the displayed size of the representation of virtual object 11002 reaches 100% of the size of virtual object 11002 (e.g., the size of virtual object 11002 as originally displayed in the user interface that includes the field of view 6036 of the one or more cameras, as indicated in Figure 11I), a tactile output is generated, as indicated at 11024. Generating a tactile output in accordance with a determination that the simulated physical size of the virtual object has reached a predefined simulated physical size provides the user with feedback (e.g., indicating that no further input is needed to return the simulated size of the virtual object to the predefined size). Providing improved tactile feedback enhances the operability of the device (e.g., by providing sensory information that allows a user to perceive that the predefined simulated physical size of the virtual object has been reached without cluttering the user interface with displayed information), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
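By way of illustration only, the following is a minimal Swift/SceneKit sketch of this pinch-to-scale behavior with a tactile output at the predefined size. The class name ObjectScaleController, the node property, and the 80%-200% clamping range are illustrative assumptions, not details taken from this disclosure.

    import UIKit
    import SceneKit

    // Illustrative sketch: gradually change the displayed size of a virtual
    // object with a pinch gesture, and generate a tactile output when the
    // simulated physical size crosses the predefined size (100% of default).
    final class ObjectScaleController {
        let virtualObjectNode: SCNNode
        var currentScale: Float = 1.0            // 1.0 == 100% of the default size
        private var scaleAtGestureStart: Float = 1.0
        private let feedbackGenerator = UIImpactFeedbackGenerator(style: .medium)

        init(node: SCNNode) {
            self.virtualObjectNode = node
        }

        @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
            switch gesture.state {
            case .began:
                scaleAtGestureStart = currentScale
                feedbackGenerator.prepare()      // reduce tactile-output latency
            case .changed:
                let previousScale = currentScale
                // Assumed clamping range of 80%-200% of the default size.
                currentScale = min(max(scaleAtGestureStart * Float(gesture.scale), 0.8), 2.0)
                virtualObjectNode.scale = SCNVector3(currentScale, currentScale, currentScale)
                // Tactile output when the scale reaches or crosses the predefined size.
                let crossedPredefinedSize =
                    (previousScale < 1.0 && currentScale >= 1.0) ||
                    (previousScale > 1.0 && currentScale <= 1.0)
                if crossedPredefinedSize {
                    feedbackGenerator.impactOccurred()
                }
            default:
                break
            }
        }
    }

In use, a UIPinchGestureRecognizer attached to the AR view would have handlePinch(_:) as its action; only the object's node is scaled, so the camera feed behind it is unaffected.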
[00453] In some embodiments, while displaying the virtual object in the first user interface region (e.g., the AR view) at the second simulated physical size of the virtual object (e.g., 120% of the default size, or 80% of the default size, as a result of a scaling input (e.g., a pinch or de-pinch gesture directed to the virtual object)) that is distinct from the predefined simulated physical size, the device detects (16038) a request to return the virtual object to the predefined simulated physical size (e.g., detecting a tap or double tap input on the touch-screen (e.g., on the virtual object, or alternatively, outside of the virtual object)). For example, after a pinch input has caused a reduction in size of virtual object 11002 (as described with regard to Figures 11N-11P), a double tap input is detected at a location that corresponds to the virtual object 11002 (as described with regard to Figure 11R). In response to detecting the request to return the virtual object to the predefined simulated physical size, the device changes (16040) the displayed size of the representation of the virtual object in the first user interface region in accordance with a change of the simulated physical size of the virtual object to the predefined simulated physical size (e.g., the displayed size of the virtual object grows or shrinks while the displayed size of the physical environment captured in the field of view of the one or more cameras remains unchanged). For example, in response to the double tap input described with regard to Figure 11R, the size of the virtual object 11002 returns to the size of virtual object 11002 as displayed in Figure 11I (the size of virtual object 11002 as originally displayed in the user interface that includes the field of view 6036 of the one or more cameras). In some embodiments, in accordance with a determination that the simulated physical size of the virtual object has reached the predefined simulated physical size (e.g., 100% of the default size), the device generates a tactile output to indicate that the simulated physical size of the virtual object has reached the predefined simulated physical size. Changing the displayed size of a virtual object to a predefined size in response to detecting a request to return the virtual object to the predefined simulated physical size (e.g., by providing an option to adjust a displayed size precisely to a predefined simulated physical size, rather than requiring the user to estimate when input provided to adjust the display size is sufficient to display the virtual object at the predefined simulated physical size) reduces the number of inputs needed to display the object with a predefined size. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
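Continuing the hypothetical controller sketched above, a double-tap handler that returns the object to the predefined simulated physical size could look like the following; the 0.25-second animation duration is an assumption.

    import UIKit
    import SceneKit

    // Illustrative sketch: return the virtual object to the predefined
    // simulated physical size (100% of the default size) on a double tap,
    // animating the displayed size while the camera view stays unchanged.
    extension ObjectScaleController {
        @objc func handleDoubleTap(_ gesture: UITapGestureRecognizer) {
            guard gesture.state == .ended, currentScale != 1.0 else { return }
            SCNTransaction.begin()
            SCNTransaction.animationDuration = 0.25   // assumed duration
            virtualObjectNode.scale = SCNVector3(1.0, 1.0, 1.0)
            SCNTransaction.commit()
            currentScale = 1.0
            // Tactile output indicating the predefined size has been reached.
            UIImpactFeedbackGenerator(style: .medium).impactOccurred()
        }
    }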
[00454] In some embodiments, the device selects the plane for setting the second orientation of the representation of the virtual object with the second set of visual properties in accordance with a respective position and orientation of the one or more cameras relative to the physical environment (e.g., a current position and orientation at the time when the object-placement criteria are met), wherein selecting the plane includes (16042): in accordance with a determination that the object-placement criteria were met when the representation of the virtual object was displayed over a first portion of the physical environment (e.g., the base of the translucent object is overlapping with a plane in the first portion of the physical environment) captured in the field of view of the one or more cameras (e.g., as a result of the device pointing in a first direction in the physical environment), selecting a first plane of multiple planes detected in the physical environment in the field of view of the one or more cameras (e.g., in accordance with a greater proximity between the object's base and the first plane on the display, and the greater proximity between the first plane and the first portion of the physical environment in the physical world) as the plane for setting the second orientation of the representation of the virtual object with the second set of visual properties; and in accordance with a determination that the object-placement criteria were met when the representation of the virtual object was displayed over a second portion of the physical environment (e.g., the base of the translucent object is overlapping with a plane in the second portion of the physical environment) captured in the field of view of the one or more cameras (e.g., as a result of the device pointing in a second direction in the physical environment), selecting a second plane of the multiple planes detected in the physical environment in the field of view of the one or more cameras (e.g., in accordance with a greater proximity between the object's base and the second plane on the display, and the greater proximity between the second plane and the second portion of the physical environment in the physical world) as the plane for setting the second orientation of the representation of the virtual object with the second set of visual properties, wherein the first portion of the physical environment is distinct from the second portion of the physical environment, and the first plane is distinct from the second plane. Selecting a first plane or a second plane as a plane relative to which a virtual object will be set (e.g., without requiring user input to designate which of many detected planes will be the plane relative to which the virtual object is set) reduces the number of inputs needed to select a plane. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
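One plausible way to select a placement plane without additional user input, sketched here with ARKit's raycasting API, is to cast a ray from the on-screen position of the object's base and take the nearest detected plane; the function name and parameters are illustrative, not part of this disclosure.

    import ARKit

    // Illustrative sketch: select, without user input, the detected plane
    // that lies under the virtual object's base at the moment the
    // object-placement criteria are met. Which plane is returned depends on
    // the portion of the physical environment the cameras are pointed at.
    func selectPlacementPlane(in sceneView: ARSCNView,
                              under objectBaseScreenPoint: CGPoint) -> ARPlaneAnchor? {
        // Ray from the display location of the object's base, limited to
        // planes already detected in the field of view of the cameras.
        guard let query = sceneView.raycastQuery(from: objectBaseScreenPoint,
                                                 allowing: .existingPlaneGeometry,
                                                 alignment: .any) else {
            return nil
        }
        // Results are ordered by distance; the first is the closest plane.
        return sceneView.session.raycast(query).first?.anchor as? ARPlaneAnchor
    }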
[00455] In some embodiments, the device displays (16044) a snapshot affordance (e.g., a camera shutter button) concurrently with displaying the virtual object in the first user interface region (e.g., the AR view) with the second set of visual properties and the second orientation. In response to activation of the snapshot affordance, the device captures (16046) a snapshot image including a current view of the representation of the virtual object at a placement location in the physical environment in the field of view of the one or more cameras, with the second set of visual properties and the second orientation that corresponds to the plane in the physical environment detected in the field of view of the one or more cameras. Displaying a snapshot affordance for capturing a snapshot image of a current view of an object reduces the number of inputs needed to capture a snapshot image of an object. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
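A snapshot affordance of this kind can be approximated with ARSCNView's snapshot() method, which returns the composited camera feed plus rendered virtual content; the button wiring and the handleSnapshot closure below are illustrative assumptions.

    import ARKit
    import UIKit

    // Illustrative sketch: a shutter-button action that captures a snapshot
    // image of the current AR view, including the virtual object at its
    // current placement, orientation, and visual properties.
    final class SnapshotController {
        let sceneView: ARSCNView
        let handleSnapshot: (UIImage) -> Void    // e.g., save or share the image

        init(sceneView: ARSCNView, handleSnapshot: @escaping (UIImage) -> Void) {
            self.sceneView = sceneView
            self.handleSnapshot = handleSnapshot
        }

        @objc func shutterButtonTapped() {
            handleSnapshot(sceneView.snapshot())  // composited camera + virtual content
        }
    }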
[00456] In some embodiments, the device displays (16048) one or more control affordances (e.g., an affordance to switch back to the staging user interface, an affordance to exit the AR viewer, an affordance to capture a snapshot, etc.) with the representation of the virtual object having the second set of visual properties in the first user interface region. For example, in Figure 11J, a set of controls that includes back control 6016, toggle control 6018, and share control 6020 is displayed. While displaying the one or more control affordances with the representation of the virtual object having the second set of visual properties, the device detects (16050) that control-fading criteria are met (e.g., no user input has been detected on the touch-sensitive surface for a threshold amount of time (e.g., with or without movement of the device and update to the field of view of the cameras)). In response to detecting that the control-fading criteria are met, the device ceases (16052) to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual properties in the first user interface region including the field of view of the one or more cameras. For example, as described with regard to Figures 11K-11L, controls 6016, 6018, and 6020 gradually fade out and cease to be displayed when no user input is detected for a threshold amount of time. In some embodiments, after the control affordances have faded away, a tap input on the touch-sensitive surface or an interaction with the virtual object causes the device to redisplay the control affordances concurrently with the representation of the virtual object in the first user interface region. Automatically ceasing to display controls in response to determining that control-fading criteria are met reduces the number of inputs needed to cease displaying controls. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
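The control-fading behavior can be sketched with a simple idle timer, as below; the 3-second threshold, the fade durations, and the controlsContainer view are illustrative assumptions rather than values from this disclosure.

    import UIKit

    // Illustrative sketch: control affordances fade out after a period with
    // no user input and are redisplayed on the next interaction, while the
    // virtual object and camera view remain displayed throughout.
    final class ControlFadeController {
        let controlsContainer: UIView
        private var fadeTimer: Timer?

        init(controlsContainer: UIView) {
            self.controlsContainer = controlsContainer
        }

        // Call whenever any touch input is detected on the AR user interface.
        func userDidInteract() {
            redisplayControls()
            restartFadeTimer()
        }

        private func restartFadeTimer() {
            fadeTimer?.invalidate()
            // Control-fading criteria: no input for a threshold amount of time.
            fadeTimer = Timer.scheduledTimer(withTimeInterval: 3.0, repeats: false) { [weak self] _ in
                self?.fadeOutControls()
            }
        }

        private func fadeOutControls() {
            // Gradually fade out, then cease to display the control affordances.
            UIView.animate(withDuration: 0.5, animations: {
                self.controlsContainer.alpha = 0
            }, completion: { _ in
                self.controlsContainer.isHidden = true
            })
        }

        private func redisplayControls() {
            controlsContainer.isHidden = false
            UIView.animate(withDuration: 0.2) {
                self.controlsContainer.alpha = 1
            }
        }
    }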
[00457] In some embodiments, in response to the request to display the virtual object in the first user interface region: prior to displaying the representation of the virtual object over at least a portion of the field of view of the one or more cameras that is included in the first user interface region, in accordance with a determination that calibration criteria are not met (e.g., because there is not a sufficient amount of images from different viewing angles for generating dimension and spatial relationship data for the physical environment captured in the field of view of the one or more cameras), the device displays (16054) a prompt for the user to move the device relative to the physical environment (e.g., displaying a visual prompt to move the device and, optionally, displaying a calibration user interface object (e.g., a bouncy wireframe ball or a cube that moves in accordance with movement of the device) in the first user interface region (e.g., the calibration user interface object is overlaid on a blurred image of the field of view of the one or more cameras), as described in greater detail below with reference to method 17000). Displaying a prompt for the user to move the device relative to the physical environment provides visual feedback to the user (e.g., to indicate that movement of the device is needed to obtain information for placing the virtual object in the field of view of the camera(s)). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide calibration input), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00458] It should be understood that the particular order in which the operations in Figures 16A-16G have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 17000, 18000, 19000, and 20000) are also applicable in an analogous manner to method 16000 described above with respect to Figures 16A-16G. For example, contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 16000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
[00459] Figures 17A-17D are flow diagrams illustrating method 17000 of displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device. Method 17000 is performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) having a display generation component (e.g., a display, a projector, a heads-up display, or the like), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface), one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite from the display and the touch-sensitive surface), and one or more attitude sensors (e.g., accelerometers, gyroscopes, and/or magnetometers) for detecting changes in attitude (e.g., orientation (e.g., rotation, yaw, and/or tilt angles) and position relative to the surrounding physical environment) of the device including the one or more cameras. Some operations in method 17000 are, optionally, combined and/or the order of some operations is, optionally, changed.
[00460] The device receives (17002) a request to display an augmented reality view of a physical environment (e.g., the physical environment surrounding the device including the one or more cameras) in a first user interface region that includes a representation of a field of view of the one or more cameras (e.g., the field of view captures at least a portion of the physical environment). In some embodiments, the request is a tap input detected on a button to switch from a staging view of a virtual object to an augmented reality view of the virtual object. In some embodiments, the request is a selection of an augmented reality affordance displayed next to a representation of a virtual object in a two-dimensional user interface. In some embodiments, the request is activation of an augmented reality measuring application (e.g., a measure app that facilitates measurements of the physical environment). For example, the request is a tap input detected at toggle 6018 for displaying virtual object 11002 in field of view 6036 of the one or more cameras, as described with regard to Figure 12A.
[00461] In response to receiving the request to display the augmented reality view of the physical environment, the device displays (17004) the representation of the field of view of the one or more cameras (e.g., the device displays a blurred version of the physical environment in the field of view of the one or more cameras when the calibration criteria are not met). For example, the device displays a blurred representation of the field of view 6036 of the one or more cameras, as shown in Figure 12E-1. In accordance with a determination that calibration criteria are not met for the augmented reality view of the physical environment (e.g., because there is not a sufficient amount of image data (e.g., from different viewing angles) to generate dimension and spatial relationship data for the physical environment captured in the field of view of the one or more cameras, because a plane that corresponds to the virtual object is not detected in the field of view of the one or more cameras, and/or because there is not sufficient information to begin or proceed with plane detection based on available image data from the cameras), the device displays (e.g., via the display-generation component, and in the first user interface region that includes the representation of the field of view of the one or more cameras (e.g., a blurred version of the field of view)) a calibration user interface object (e.g., a scan prompt object, such as a bouncy cube or a wireframe object) that is dynamically animated in accordance with movement of the one or more cameras in the physical environment. For example, in Figures 12E-1 to 12I-1, calibration user interface object 12014 is displayed. Animation of the calibration user interface object in accordance with movement of the one or more cameras is described with regard to, e.g., Figures 12E-1 to 12F-1. In some embodiments, analyzing the field of view of the one or more cameras to detect one or more planes (e.g., a floor, wall, table, etc.) in the field of view of the one or more cameras occurs when an initial part of an input that corresponds to the request to display the representation of the augmented reality view is received. In some embodiments, the analyzing occurs prior to receiving the request (e.g., while the virtual object is displayed in a staging view). Displaying the calibration user interface object includes: while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in attitude (e.g., location and/or orientation (e.g., rotation, tilt, yaw angles)) of the one or more cameras in the physical environment; and, in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter (e.g., orientation, size, rotation, or location on the display) of the calibration user interface object (e.g., a scan prompt object, such as a bouncy cube or a wireframe object) in accordance with the detected change in attitude of the one or more cameras in the physical environment. For example, Figures 12E-1 to 12F-1, which correspond to Figures 12E-2 to 12F-2, respectively, illustrate lateral movement of the device 100 relative to physical environment 5002, and a corresponding change in displayed field of view 6036 of the one or more cameras of the device. In Figures 12E-2 to 12F-2, calibration user interface object 12014 rotates in response to the movement of the one or more cameras.
[00462] While displaying the calibration user interface object (e.g., a scan prompt object, such as a bouncy cube or a wireframe object) that moves on the display in accordance with the detected change in attitude of the one or more cameras in the physical environment, the device detects (17006) that the calibration criteria are met. For example, as described with regard to Figures 12E-12J, the device determines that the calibration criteria are met in response to the movement of the device that occurs from 12E-1 to 12I-1.
[00463] In response to detecting that the calibration criteria are met, the device ceases (17008) to display the calibration user interface object (e.g., a scan prompt object, such as a bouncy cube or a wireframe object). In some embodiments, after the device ceases to display the calibration user interface object, the device displays the representation of the field of view of the cameras without the blurring. In some embodiments, a representation of the virtual object is displayed over the un-blurred representation of the field of view of the cameras. For example, in Figure 12J, in response to the movement of the device described with regard to 12E-1 to 12I-1, the calibration user interface object 12014 is no longer displayed, and virtual object 11002 is displayed over the un-blurred representation 6036 of the field of view of the camera(s). Adjusting a display parameter of a calibration user interface object in accordance with movement of one or more cameras (e.g., device cameras that capture the physical environment of the device) provides visual feedback to the user (e.g., to indicate that movement of the device is needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to move the device in a manner that provides information needed to meet calibration criteria), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
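Continuing the previous sketch, the transition when the calibration criteria become satisfied might look like the following: the calibration object ceases to be displayed and the blur over the camera feed is removed before the virtual object is shown. The blurView and showVirtualObject parameters are hypothetical members of the hosting AR view controller.

    import UIKit
    import SceneKit

    // Illustrative sketch: on meeting the calibration criteria, stop the
    // attitude-driven animation, remove the calibration object, un-blur the
    // camera feed, and then display the virtual object over the clear feed.
    func calibrationCriteriaMet(blurView: UIVisualEffectView,
                                calibrationAnimator: CalibrationObjectAnimator,
                                showVirtualObject: @escaping () -> Void) {
        calibrationAnimator.stop()
        calibrationAnimator.calibrationNode.removeFromParentNode()
        UIView.animate(withDuration: 0.3, animations: {
            blurView.effect = nil          // un-blur the field of view of the cameras
        }, completion: { _ in
            showVirtualObject()            // display the virtual object
        })
    }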
[00464] In some embodiments, the request to display the augmented reality view of the physical environment (e.g., the physical environment surrounding the device including the one or more cameras) in the first user interface region that includes the representation of the field of view of the one or more cameras includes (17010) a request to display a representation of a virtual three-dimensional object (e.g., a virtual object having a three-dimensional model) in the augmented reality view of the physical environment. In some embodiments, the request is a tap input detected on a button to switch from a staging view of a virtual object to an augmented reality view of the virtual object. In some embodiments, the request is a selection of an augmented reality affordance displayed next to a representation of a virtual object in a two-dimensional user interface. For example, in Figure 12A, an input by contact 12002 at a location that corresponds to toggle control 6018 is a request to display virtual object 11002 in a user interface that includes field of view 6036 of the cameras, as shown in Figure 12B. Displaying an augmented reality view of a physical environment in response to a request to display a virtual object in the augmented reality view reduces the number of inputs needed (e.g., to display both the view of the physical environment and the virtual object). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide calibration input), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00465] In some embodiments, the device displays (17012) (e.g., after the calibration criteria are met) the representation of the virtual three-dimensional object in the first user interface region that includes the representation of the field of view of the one or more cameras after ceasing to display the calibration user interface object. In some embodiments, in response to the request, after the calibration is completed and the field of view of the camera is displayed in full clarity, the virtual object drops to a predefined position and/or orientation relative to a predefined plane identified in the field of view of the one or more cameras (e.g., a physical surface, such as a vertical wall or horizontal floor surface, that can serve as a support plane for the three-dimensional representation of the virtual object). For example, in Figure 12J, the device has ceased to display the calibration user interface object 12014 that was displayed in Figures 12E-12I, and virtual object 11002 is displayed in a user interface that includes field of view 6036 of the cameras. Displaying a virtual object in a displayed augmented reality view after ceasing to display the calibration user interface object provides visual feedback (e.g., to indicate that calibration criteria have been met). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and avoid attempting to provide input for manipulating the virtual object before calibration criteria are met), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00466] In some embodiments, the device displays (17014) (e.g., before the calibration criteria are met) the representation of the virtual three-dimensional object in the first user interface region concurrently with the calibration user interface object (e.g., behind the calibration user interface object), wherein the representation of the virtual three-dimensional object remains at a fixed location in the first user interface region (e.g., the virtual three-dimensional object is not placed at a location in the physical environment) during the movement of the one or more cameras in the physical environment (e.g., while the calibration user interface object is moved in the first user interface region in accordance with the movement of the one or more cameras). For example, in Figures 12E-1 to 12I-1, a representation of virtual object 11002 is displayed concurrently with calibration user interface object 12014. As the device 100 that includes the one or more cameras moves (e.g., as illustrated in Figures 12E-1 to 12F-1 and corresponding Figures 12E-2 to 12F-2), virtual object 11002 remains at a fixed location in the user interface that includes field of view 6036 of the one or more cameras. Displaying a virtual object concurrently with a calibration user interface object provides visual feedback (e.g., to indicate the object for which calibration is being performed). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide calibration input that corresponds to a plane relative to which the virtual object will be placed), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00467] In some embodiments, the request to display the augmented reality view of the physical environment (e.g., the physical environment surrounding the device including the one or more cameras) in the first user interface region that includes the representation of the field of view of the one or more cameras includes (17016) a request to display the representation of the field of view of the one or more cameras (e.g., with one or more user interface objects and/or controls (e.g., outlines of planes, objects, pointers, icons, markers, etc.)) without requesting display of a representation of any virtual three-dimensional object (e.g., a virtual object having a three-dimensional model) in the physical environment captured in the field of view of the one or more cameras. In some embodiments, the request is a selection of an augmented reality affordance displayed next to a representation of a virtual object in a two-dimensional user interface. In some embodiments, the request is activation of an augmented reality measuring application (e.g., a measure app that facilitates measurements of the physical environment). Requesting to display the representation of the field of view of the one or more cameras without requesting display of a representation of any virtual three-dimensional object provides feedback (e.g., by using the same calibration user interface object to indicate that calibration is needed regardless of whether a virtual object is displayed). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00468] In some embodiments, in response to receiving the request to display the augmented reality view of the physical environment, the device displays (17018) the representation of the field of view of the one or more cameras (e.g., displaying a blurred version of the physical environment in the field of view of the one or more cameras when the calibration criteria are not met) and, in accordance with a determination that the calibration criteria are met for the augmented reality view of the physical environment (e.g., because there is a sufficient amount of image data (e.g., from different viewing angles) to generate dimension and spatial relationship data for the physical environment captured in the field of view of the one or more cameras, because a plane that corresponds to the virtual object has been detected in the field of view of the one or more cameras, and/or because there is sufficient information to begin or proceed with plane detection based on available image data from the cameras), the device forgoes display of the calibration user interface object (e.g., a scan prompt object, such as a bouncy cube or a wireframe object). In some embodiments, the scanning of the physical environment for planes begins while the virtual three-dimensional object is displayed in a staging user interface, which enables the device to, in some circumstances (e.g., where the field of view of the cameras has moved sufficiently to provide enough data to detect one or more planes in the physical space), detect the one or more planes in the physical space before displaying the augmented reality view, so that the calibration user interface does not need to be displayed. Forgoing display of the calibration user interface object in accordance with a determination that the calibration criteria are met for the augmented reality view of the physical environment provides visual feedback to the user (e.g., the absence of the calibration user interface object indicates that calibration criteria have been met and movement of the device is not needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to avoid unnecessary movement of the device for the purpose of calibration), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
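For illustration only, the following Swift sketch shows one way the determination in (17018) could be expressed. The names `ScanState` and `calibrationCriteriaMet`, and the threshold of three viewing angles, are hypothetical assumptions, not part of the described method.

```swift
// Hypothetical summary of the scanning state used to decide whether the
// calibration prompt is needed.
struct ScanState {
    var distinctViewingAngles: Int   // how many sufficiently different viewpoints were captured
    var planeDetected: Bool          // a plane corresponding to the virtual object was found
}

// Returns true when the calibration user interface object can be skipped,
// mirroring the determination described in paragraph (17018).
func calibrationCriteriaMet(_ state: ScanState, requiredAngles: Int = 3) -> Bool {
    return state.planeDetected || state.distinctViewingAngles >= requiredAngles
}

// Usage: show either the AR view directly or the blurred view plus the prompt.
let state = ScanState(distinctViewingAngles: 1, planeDetected: false)
if calibrationCriteriaMet(state) {
    print("display AR view; forgo calibration object")
} else {
    print("display blurred camera feed with calibration object")
}
```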
[00469] In some embodiments, the device displays (17020) (e.g., before the calibration criteria are met) a textual object (e.g., a textual description describing the error condition that is currently detected and/or a textual prompt requesting user action (e.g., to rectify the detected error condition)) in the first user interface region, concurrently with the calibration user interface object, that provides information about actions that can be taken by the user to improve calibration of the augmented reality view (e.g., next to the calibration user interface object). In some embodiments, the textual object provides a prompt to a user for movement of the device (e.g., with a currently detected error condition), such as "excessive movement," "low detail," "move closer," etc. In some embodiments, the device updates the textual object in accordance with the user's actions during the calibration process and new error conditions that are detected based on the user's actions. Displaying text concurrently with the calibration user interface object provides visual feedback to the user (e.g., providing a verbal indication of the type of movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
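A minimal sketch, assuming a fixed set of error conditions, of how the textual prompts quoted in (17020) could be associated with detected errors; the enum `CalibrationError` and the function name are illustrative only.

```swift
// Hypothetical mapping from detected error conditions to the textual prompts
// quoted in paragraph (17020).
enum CalibrationError {
    case excessiveMovement
    case lowDetail
    case tooFar

    var prompt: String {
        switch self {
        case .excessiveMovement: return "excessive movement"
        case .lowDetail:         return "low detail"
        case .tooFar:            return "move closer"
        }
    }
}

// The textual object would be updated whenever a new error condition is
// detected during the calibration process.
func updateTextualObject(for error: CalibrationError?) -> String? {
    return error?.prompt
}

print(updateTextualObject(for: .lowDetail) ?? "no prompt")  // "low detail"
```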
[00470] In some embodiments, in response to detecting that the calibration criteria are met (e.g., criteria met before the calibration user interface object was ever displayed, or criteria met after the calibration user interface object was displayed and animated for a period of time), the device displays (17022) a visual indication of a plane (e.g., displaying an outline around the detected plane, or highlighting the detected plane) detected in the physical environment captured in the field of view of the one or more cameras (e.g., after ceasing to display the calibration user interface object if the calibration user interface object was initially displayed). For example, in Figure 12J, a plane (floor surface 5038) is highlighted to indicate that the plane has been detected in the physical environment 5002 as captured in the displayed field of view 6036 of the one or more cameras. Displaying a visual indication of a detected plane provides visual feedback (e.g., indicating that a plane has been detected in the physical environment captured by the device camera(s)). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
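For illustration, a short Swift sketch of the plane-indication step in (17022); the `DetectedPlane` type and its fields are hypothetical stand-ins for the data produced by plane detection.

```swift
// Hypothetical representation of a plane found in the camera field of view.
struct DetectedPlane {
    let identifier: Int
    var isHighlighted: Bool = false
}

// Once calibration criteria are met, outline/highlight every detected plane,
// as with floor surface 5038 in Figure 12J.
func indicate(planes: inout [DetectedPlane]) {
    for index in planes.indices {
        planes[index].isHighlighted = true
    }
}

var planes = [DetectedPlane(identifier: 5038)]
indicate(planes: &planes)
print(planes[0].isHighlighted)  // true
```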
[00471] In some embodiments, in response to receiving the request to display the augmented reality view of the physical environment: in accordance with the determination that the calibration criteria are not met and before displaying the calibration user interface object, the device displays (17024) (e.g., via the display-generation component, and in the first user interface region that includes the representation of the field of view of one or more cameras (e.g., a blurred version of the field of view)) an animated prompt object (e.g., a scan prompt object, such as a bouncy cube or a wireframe object) that includes a representation of the device moving relative to a representation of a plane (e.g., the movement of the representation of the device relative to the representation of the plane indicates a required movement of the device to be effected by the user). For example, the animated prompt object includes representation 12004 of the device 100 that moves relative to representation 12010 of a plane, as described with regard to Figures 12B-12D. In some embodiments, the device ceases to display the animated prompt object when the device detects movement of the device (e.g., indicating that the user has started to move the device in a way that will enable calibration to proceed). In some embodiments, the device replaces display of the animated prompt object with the calibration user interface object when the device detects movement of the device and before calibration has been completed, to guide the user further with respect to calibration of the device. For example, as described with regard to Figures 12C-12E, when movement of the device is detected (as shown in Figures 12C-12D), an animated prompt that includes representation 12004 of the device 100 ceases to be displayed and calibration user interface object 12014 is displayed in Figure 12E. Displaying an animated prompt object that includes a representation of the device moving relative to a representation of a plane provides visual feedback to the user (e.g., to illustrate a type of movement of the device that is needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to move the device in a manner that provides information needed to meet calibration criteria), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
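A minimal sketch of the prompt progression described in (17024), expressed as a small state machine; the state names and transition function are illustrative assumptions.

```swift
// States of the calibration prompt: the animated device-and-plane prompt is
// replaced by the calibration object once device movement is detected, and
// both disappear when the calibration criteria are met.
enum PromptState {
    case animatedPrompt        // representation of device moving relative to a plane
    case calibrationObject     // e.g., the bouncy cube / wireframe object
    case none                  // calibration criteria met
}

func nextState(_ state: PromptState, deviceMoved: Bool, criteriaMet: Bool) -> PromptState {
    if criteriaMet { return .none }
    if state == .animatedPrompt && deviceMoved { return .calibrationObject }
    return state
}

var promptState = PromptState.animatedPrompt
promptState = nextState(promptState, deviceMoved: true, criteriaMet: false)
print(promptState)  // calibrationObject
promptState = nextState(promptState, deviceMoved: true, criteriaMet: true)
print(promptState)  // none
```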
[00472] In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes (17026): moving the calibration user interface object by a first amount in accordance with a first magnitude of movement of the one or more cameras in the physical environment; and moving the calibration user interface object by a second amount in accordance with a second magnitude of movement of the one or more cameras in the physical environment, wherein the first amount is distinct from (e.g., greater than) the second amount, and the first magnitude of movement is distinct from (e.g., greater than) the second magnitude of movement (e.g., the first and second magnitudes of the movement are measured based on movement in the same direction in the physical environment). Moving the calibration user interface object by an amount that corresponds to a magnitude of movement of the one or more (device) cameras provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for movement of the device that is required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
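For illustration, a sketch of the magnitude correspondence in (17026): a larger camera movement produces a proportionally larger adjustment of the calibration object. The gain constant is an assumed value.

```swift
// The calibration object moves by an amount that scales with the magnitude
// of camera movement, so distinct magnitudes yield distinct amounts.
func calibrationObjectDelta(forCameraMovement magnitude: Double, gain: Double = 0.5) -> Double {
    return gain * magnitude
}

print(calibrationObjectDelta(forCameraMovement: 2.0))   // 1.0: small movement, small adjustment
print(calibrationObjectDelta(forCameraMovement: 10.0))  // 5.0: larger movement, larger adjustment
```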
[00473] In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes (17028): in accordance with a determination that the detected change in attitude of the one or more cameras corresponds to a first type of movement (e.g., sideways movement, such as leftward, rightward, or back and forth sideways movement) (and does not correspond to a second type of movement (e.g., vertical movement, such as upward, downward, or up and down movement)), moving the calibration user interface object based on the first type of movement (e.g., moving the calibration user interface object in a first manner (e.g., rotating the calibration user interface object around a vertical axis through the calibration user interface object)); and in accordance with a determination that the detected change in attitude of the one or more cameras corresponds to the second type of movement (and does not correspond to the first type of movement), forgoing moving the calibration user interface object based on the second type of movement (e.g., forgoing moving the calibration user interface object in the first manner or keeping the calibration user interface object stationary). For example, sideways movement of device 100 that includes one or more cameras (e.g., as described with regard to Figures 12F-1 to 12G-1 and Figures 12F-2 to 12G-2) causes calibration user interface object 12014 to rotate, whereas vertical movement of device 100 (e.g., as described with regard to Figures 12G-1 to 12H-1 and Figures 12G-2 to 12H-2) does not cause calibration user interface object 12014 to rotate. Forgoing movement of the calibration user interface object in accordance with a determination that the detected change in attitude of the device camera(s) corresponds to a second type of movement provides visual feedback (e.g., indicating to the user that the second type of movement of the one or more cameras is not required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to avoid providing unnecessary input), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
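A minimal sketch of the movement-type gating in (17028): only the sideways component of device movement drives rotation of the calibration object, and vertical movement is ignored. The classification rule and gain are assumptions.

```swift
// Rotate the calibration object about its vertical axis for sideways device
// movement; forgo rotation for vertical movement, as in Figures 12F-12H.
func rotationDelta(dx: Double, dy: Double, degreesPerPoint: Double = 0.4) -> Double {
    let isSideways = abs(dx) > abs(dy)   // classify the movement type
    guard isSideways else { return 0 }   // forgo rotation for vertical movement
    return dx * degreesPerPoint
}

print(rotationDelta(dx: 30, dy: 2))   // sideways movement: object rotates
print(rotationDelta(dx: 1, dy: 40))   // vertical movement: 0, object stays stationary
```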
[00474] In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes (17030): moving the calibration user interface object (e.g., rotating and/or tilting) in accordance with the detected change in attitude of the one or more cameras in the physical environment without altering a characteristic display location (e.g., a location of a geometric center, or an axis of the calibration user interface object on the display) of the calibration user interface object over the first user interface region (e.g., the calibration user interface object is anchored to a fixed location on the display, while the physical environment moves within the field of view of the one or more cameras underneath the calibration user interface object). For example, in Figures 12E-1 to 12I-1, calibration user interface object 12014 rotates while remaining at a fixed location relative to display 112. Moving the calibration user interface object without altering a characteristic display location of the calibration user interface object provides visual feedback (e.g., indicating that the calibration user interface object is distinct from a virtual object that is placed at a location relative to a displayed augmented reality environment). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user input mistakes), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
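For illustration, a sketch of (17030): the object's orientation changes with camera attitude while its characteristic display location, modeled here as a screen-anchored center point, never changes. The types are hypothetical.

```swift
// The calibration object rotates in place; its center is anchored to the
// display and is never altered by device movement.
struct CalibrationObject {
    let center: (x: Double, y: Double)  // characteristic display location; fixed
    var rotationDegrees: Double
}

func applyAttitudeChange(_ object: inout CalibrationObject, yawDeltaDegrees: Double) {
    object.rotationDegrees += yawDeltaDegrees  // rotate in place only
}

var object = CalibrationObject(center: (x: 187, y: 333), rotationDegrees: 0)
applyAttitudeChange(&object, yawDeltaDegrees: 15)
print(object.center, object.rotationDegrees)  // center unchanged, rotation updated
```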
[00475] In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes (17032): rotating the calibration user interface object about an axis that is perpendicular to a movement direction of the one or more cameras in the physical environment (e.g., the calibration user interface object rotates about the z-axis when the device (e.g., including the cameras) moves back and forth on the x-y plane, or the calibration user interface object rotates about the y-axis when the device (e.g., including the cameras) moves from side-to-side along the x-axis (e.g., the x-axis is defined as the horizontal direction relative to the physical environment and lies within the plane of the touch-screen display, for example)). For example, in Figures 12E-1 to 12G-1, calibration user interface object 12014 rotates about a vertical axis that is perpendicular to the sideways movement of the device shown in Figures 12E-2 to 12G-2. Rotating the calibration user interface object about an axis that is perpendicular to movement of the device camera(s) provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for movement of the device that is required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
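A sketch, assuming a camera looking along -z, of how a rotation axis perpendicular to the movement direction in (17032) can be derived with a cross product; the vectors and function name are illustrative.

```swift
import simd

// Choose a rotation axis perpendicular to the camera's movement direction.
// For sideways (x-axis) movement, crossing with the view direction yields a
// vertical axis, matching the behavior in Figures 12E to 12G.
func rotationAxis(forMovement movement: simd_double3,
                  viewDirection: simd_double3 = simd_double3(0, 0, -1)) -> simd_double3 {
    return simd_normalize(simd_cross(viewDirection, movement))
}

let sideways = simd_double3(1, 0, 0)          // device moves left-to-right
print(rotationAxis(forMovement: sideways))    // (0, -1, 0): a vertical axis
```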
[00476] In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes (17034): moving the calibration user interface object at a speed that is determined in accordance with a rate of change (e.g., movement speed of the physical environment) detected in the field of view of the one or more cameras. Moving the calibration user interface object at a speed determined in accordance with a change in attitude of the device camera(s) provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for movement of the device that is required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00477] In some embodiments, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes (17036): moving the calibration user interface object in a direction that is determined in accordance with a direction of change (e.g., movement speed of the physical environment) detected in the field of view of the one or more cameras (e.g., the device rotates the calibration user interface object clockwise for movement of the device from right to left and rotates the calibration user interface object counterclockwise for movement of the device from left to right, or the device rotates the calibration user interface object counterclockwise for movement of the device from right to left and rotates the calibration user interface object clockwise for movement of the device from left to right). Moving the calibration user interface object in a direction that is determined in accordance with a change in attitude of the device camera(s) provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for movement of the device that is required for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
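A combined sketch of the speed correspondence in (17034) and the direction correspondence in (17036): the object's rotation rate tracks the rate of change detected in the camera feed, and its sign flips with the movement direction. The gain and sign convention are assumptions.

```swift
// Rotation rate of the calibration object derived from the velocity of the
// physical environment across the field of view: faster device movement
// yields faster rotation, and the rotation direction follows the movement
// direction (clockwise for right-to-left movement in this convention).
func rotationRate(forFieldOfViewVelocity velocity: Double, gain: Double = 0.8) -> Double {
    return -velocity * gain
}

print(rotationRate(forFieldOfViewVelocity: -5))  //  4.0: clockwise, faster
print(rotationRate(forFieldOfViewVelocity:  2))  // -1.6: counterclockwise, slower
```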
[00478] It should be understood that the particular order in which the operations in Figures 17A-17D have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 18000, 19000, and 20000) are also applicable in an analogous manner to method 17000 described above with respect to Figures 17A-17D. For example, contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 17000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 18000, 19000, and 20000). For brevity, these details are not repeated here.
[00479] Figures 18A-18I are flow diagrams illustrating method 18000 of constraining rotation of a virtual object about an axis. Method 18000 is performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) having a display generation component (e.g., a display, a projector, a heads up display or the like), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface), one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite from the display and the touch-sensitive surface), and one or more attitude sensors (e.g., accelerometers, gyroscopes, and/or magnetometers) for detecting changes in attitude (e.g., orientation (e.g., rotation, yaw, and/or tilt angles) and position relative to the surrounding physical environment) of the device including the one or more cameras. Some operations in method 18000 are, optionally, combined and/or the order of some operations is, optionally, changed.
[00480] The device displays (18002), by the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region (e.g., a staging user interface or an augmented reality user interface). For example, virtual object 11002 is shown in staging user interface 6010, as shown in Figure 13B.
[00481] While displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, the device detects (18004) a first input (e.g., a swipe input (e.g., by one or two finger contacts) on the touch-sensitive surface, or a pivot input (e.g., two finger rotation, or one finger contact pivots around another finger contact)) that corresponds to a request to rotate the virtual three-dimensional object relative to a display (e.g., a display plane corresponding to the display generation component, such as the plane of the touch-screen display) to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object. For example, the request is an input as described with regard to Figures 13B-13C or an input as described with regard to Figures 13E-13F.
[00482] In response to detecting the first input (18006): in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis (e.g., a first axis that is parallel to the plane of the display (e.g., the x-y plane) in a horizontal direction, such as an x axis), the device rotates the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input (e.g., a speed and/or distance of a swipe input along a vertical axis (e.g., y-axis) of the touch-sensitive surface (e.g., a corresponding x-y plane to the x-y plane of the display)) and is constrained by a limit on the movement restricting rotation of the virtual three-dimensional object by more than a threshold amount of rotation relative to the first axis (e.g., the rotation around the first axis is restricted to a range of +/- 30 degree angle around the first axis, and rotation beyond the range is prohibited, irrespective of the magnitude of the first input). For example, as described with regard to Figures 13E-13G, rotation of the virtual object 11002 is constrained by a limit. In accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis (e.g., a second axis that is parallel to the plane of the display (e.g., the x-y plane) in a vertical direction, such as a y axis) that is different from the first axis, the device rotates the virtual three-dimensional object relative to the second axis by an amount that is determined based on a magnitude of the first input (e.g., a speed and/or distance of a swipe input along a horizontal axis (e.g., an x axis) of the touch-sensitive surface (e.g., a corresponding x-y plane to the x-y plane of the display)), wherein, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation. In some embodiments, for rotation relative to the second axis, the device imposes a constraint on rotation that is greater than the constraint on rotation relative to the first axis (e.g., the three-dimensional object is allowed to rotate 60 degrees instead of 30 degrees). In some embodiments, for rotation relative to the second axis, the device does not impose a constraint on the rotation, such that the three-dimensional object can rotate freely about the second axis (e.g., for an input with a high enough magnitude, such as a fast or long swipe input that includes movement of one or more contacts, the three-dimensional object can rotate by more than 360 degrees relative to the second axis). For example, a greater amount of rotation of the virtual object 11002 occurs about the y-axis in response to the input described with regard to Figures 13B-13C than the amount of rotation of the virtual object 11002 about the x-axis in response to the input described with regard to Figures 13E-13G. Determining whether to rotate an object by an amount that is constrained to a threshold amount or rotate the object by more than the threshold amount, depending on whether the input is a request to rotate the object about a first axis or a second axis, improves the ability to control different types of rotation operations. Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
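A minimal sketch of the asymmetric constraint in (18006), using the +/- 30 degree limit from the example above for the first (x) axis and leaving the second (y) axis unconstrained; the input-to-degrees gain is an assumed value.

```swift
// Rotation state: tilt about the x-axis is clamped; turn about the y-axis is
// free and can exceed a full revolution for a large enough swipe.
struct ObjectRotation {
    var xDegrees = 0.0   // tilt; constrained
    var yDegrees = 0.0   // turn; free
}

func applySwipe(_ rotation: inout ObjectRotation,
                dx: Double, dy: Double,
                degreesPerPoint: Double = 0.5,
                xLimit: Double = 30.0) {
    // Vertical swipe distance drives rotation about the horizontal (x) axis,
    // clamped so the object cannot tilt past the threshold amount.
    rotation.xDegrees = max(-xLimit, min(xLimit, rotation.xDegrees + dy * degreesPerPoint))
    // Horizontal swipe distance drives rotation about the vertical (y) axis,
    // with no clamp, so a long swipe can spin the object freely.
    rotation.yDegrees += dx * degreesPerPoint
}

var rotation = ObjectRotation()
applySwipe(&rotation, dx: 0, dy: 200)    // xDegrees clamps at 30
applySwipe(&rotation, dx: 900, dy: 0)    // yDegrees reaches 450: past a full turn
print(rotation.xDegrees, rotation.yDegrees)
```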
[00483]
[00483] In some embodiments, in response to detecting the first input (18008): in In some embodiments, in response to detecting the first input (18008): in
accordance with a determination that the first input includes first movement of a contact accordance with a determination that the first input includes first movement of a contact
across a touch-sensitive surface in a first direction (e.g., y-direction, vertical direction on the across a touch-sensitive surface in a first direction (e.g., y-direction, vertical direction on the
touch-sensitive surface), and that the first movement of the contact in the first direction meets touch-sensitive surface), and that the first movement of the contact in the first direction meets
first criteria for rotating the representation of the virtual object with respect to the first axis, first criteria for rotating the representation of the virtual object with respect to the first axis,
wherein the first criteria include a requirement that the first input includes more than a first wherein the first criteria include a requirement that the first input includes more than a first
threshold amount of movement in the first direction in order for the first criteria to be met threshold amount of movement in the first direction in order for the first criteria to be met
(e.g., (e.g., the the device doesnotnot device does initiaterotation initiate rotation of of thethe three-dimensional three-dimensional object object about about the theaxis first first axis until the device detects more than a first threshold amount of movement in the first direction), until the device detects more than a first threshold amount of movement in the first direction),
the device determines that the first input corresponds to a request to rotate the three-dimensional object about the first axis (e.g., x-axis, horizontal axis parallel to the display, or horizontal axis through the virtual object); and in accordance with a determination that the first input includes second movement of the contact across the touch-sensitive surface in a second direction (e.g., x-direction, horizontal direction on the touch-sensitive surface), and that the second movement of the contact in the second direction meets second criteria for rotating the representation of the virtual object with respect to the second axis, wherein the second criteria include a requirement that the first input includes more than a second threshold amount of movement in the second direction in order for the second criteria to be met (e.g., the device does not initiate rotation of the three-dimensional object about the second axis until the device detects more than a second threshold amount of movement in the second direction), the device determines that the first input corresponds to a request to rotate the three-dimensional object about the second axis (e.g., the vertical axis parallel to the display, or vertical axis through the virtual object), wherein the first threshold is greater than the second threshold (e.g., the user needs to swipe in the vertical direction by a greater amount to trigger a rotation around the horizontal axis (e.g., tilt the object forward or backward relative to the user) than to swipe in the horizontal direction to trigger a rotation around the vertical axis (e.g., rotate the object)). Determining whether to rotate an object by an amount that is constrained to a threshold amount or to rotate the object by more than the threshold amount, depending on whether the input is a request to rotate the object about a first axis or a second axis, improves the ability to control different types of rotation operations in response to an input that corresponds to a request to rotate the object. Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
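The direction-dependent gating described above can be illustrated with a short sketch; the threshold values and the RotationGate type below are hypothetical assumptions, not values from the embodiments:

```swift
import CoreGraphics

/// A minimal sketch of the direction-dependent thresholds: vertical movement
/// must exceed a larger threshold before rotation about the horizontal (x)
/// axis begins than horizontal movement must exceed before rotation about
/// the vertical (y) axis begins.
struct RotationGate {
    let verticalThreshold: CGFloat = 40   // first threshold (gates x-axis rotation)
    let horizontalThreshold: CGFloat = 15 // second threshold (gates y-axis rotation)

    enum Axis { case horizontal, vertical, none }

    /// Decide which rotation, if any, a drag translation has unlocked.
    func axis(for translation: CGPoint) -> Axis {
        if abs(translation.y) > verticalThreshold {
            return .horizontal   // tilt forward/backward about the x-axis
        } else if abs(translation.x) > horizontalThreshold {
            return .vertical     // spin left/right about the y-axis
        }
        return .none             // neither threshold met; no rotation yet
    }
}
```

Because the vertical threshold is the larger of the two, a diagonal drag tends to unlock the y-axis spin before the x-axis tilt, matching the requirement that the first threshold is greater than the second.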
[00484] In some embodiments (18010), rotation of the virtual three-dimensional object relative to the first axis occurs with a first degree of correspondence between a characteristic value of a first input parameter (e.g., a swipe distance, or swipe speed) of the first input and an amount of rotation applied to the virtual three-dimensional object around the first axis, rotation of the virtual three-dimensional object relative to the second axis occurs with a second degree of correspondence between the characteristic value of the first input parameter (e.g., a swipe distance, or swipe speed) of the second input gesture and an amount of rotation applied to the virtual three-dimensional object around the second axis, and the first degree of correspondence involves less rotation of the virtual three-dimensional object relative to the first input parameter than the second degree of correspondence does (e.g., the rotation around the first axis has more friction or catch than the rotation around the second axis). For example, a first amount of rotation of virtual object 11002 occurs in response to a swipe input, with a swipe distance d1, for rotation about the y-axis (as described with regard to Figures 13B-13C), and a second amount of rotation of virtual object 11002, less than the first amount of rotation, occurs in response to a swipe input, with a swipe distance d1, for rotation about the x-axis (as described with regard to Figures 13E-13G). Rotating a virtual object with a greater degree or a lesser degree of rotation in response to an input, depending on whether the input is a request to rotate the object about a first axis or a second axis, improves the ability to control different types of rotation operations in response to an input that corresponds to a request to rotate the object. Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
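As a rough sketch of this behavior (the gain values and names below are illustrative assumptions), the per-axis degree of correspondence can be modeled as a different degrees-per-point gain for each axis:

```swift
import CoreGraphics

/// The same swipe distance yields less rotation about the first (x) axis
/// than about the second (y) axis, as if the first axis had more friction.
enum RotationAxis { case x, y }

func rotationAngle(forSwipeDistance distance: CGFloat, about axis: RotationAxis) -> CGFloat {
    let degreesPerPoint: CGFloat = (axis == .x) ? 0.2 : 0.5  // assumed gains
    return distance * degreesPerPoint
}

// The same 100-point swipe tilts the object 20 degrees about x
// but spins it 50 degrees about y.
let tiltAngle = rotationAngle(forSwipeDistance: 100, about: .x)
let spinAngle = rotationAngle(forSwipeDistance: 100, about: .y)
```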
[00485] In some embodiments, the device detects (18012) an end of the first input (e.g., the input includes movement of one or more contacts on the touch-sensitive surface and detecting an end of the first input includes detecting liftoff of the one or more contacts from the touch-sensitive surface). After (e.g., in response to) detecting the end of the first input, the device continues (18014) to rotate the three-dimensional object based on a magnitude of the first input prior to detecting the end of the input (e.g., based on a speed of movement of the contact just prior to liftoff of the contact), including: in accordance with a determination that the three-dimensional object is rotating relative to the first axis, slowing the rotation of the object relative to the first axis by a first amount that is proportional to the magnitude of the rotation of the three-dimensional object relative to the first axis (e.g., slowing rotation of the three-dimensional object around the first axis based on a first simulated physical parameter such as a simulated friction with a first coefficient of friction); and in accordance with a determination that the three-dimensional object is rotating relative to the second axis, slowing the rotation of the object relative to the second axis by a second amount that is proportional to the magnitude of the rotation of the three-dimensional object relative to the second axis (e.g., slowing rotation of the three-dimensional object around the second axis based on a second simulated physical parameter such as a simulated friction with a second coefficient of friction that is less than the first coefficient of friction), wherein the second amount is different from the first amount. For example, in Figures 13C-13D, virtual object 11002 continues to rotate after liftoff of contact 13002 that caused rotation of virtual object 11002 as described with regard to Figures 13B-13C. In some embodiments, the second amount is greater than the first amount. In some embodiments, the second amount is less than the first amount. Slowing rotation of a virtual object by a first amount or a second amount after detecting the end of an input, depending on whether the input is a request to rotate the object about a first axis or a second axis, provides visual feedback indicating that rotation operations are applied to the virtual object differently for rotation about the first axis and the second axis. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and avoid attempting to provide input for manipulating the virtual object prior to placement of the object at the second orientation that corresponds to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
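One hedged way to model this continued rotation (all constants below are assumptions) is an angular velocity that decays each frame by an amount proportional to its current magnitude, with a different simulated friction coefficient per axis:

```swift
import CoreGraphics

/// After liftoff, the object keeps rotating at its liftoff angular velocity
/// and decelerates in proportion to that velocity; a larger coefficient for
/// the first (x) axis makes its rotation die out faster than the second (y).
struct InertialRotation {
    var angularVelocity: CGFloat     // degrees per frame at liftoff
    let friction: CGFloat            // simulated per-axis coefficient of friction

    /// Advance one frame; returns the rotation to apply this frame.
    mutating func step() -> CGFloat {
        let delta = angularVelocity
        angularVelocity -= angularVelocity * friction   // slowing proportional to magnitude
        if abs(angularVelocity) < 0.01 { angularVelocity = 0 }  // settle to rest
        return delta
    }
}

// Rotation about the first axis decays faster than about the second axis.
var tilt = InertialRotation(angularVelocity: 5, friction: 0.15)
var spin = InertialRotation(angularVelocity: 5, friction: 0.05)
```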
[00486] In some embodiments, the device detects (18016) an end of the first input (e.g., the input includes movement of one or more contacts on the touch-sensitive surface and detecting an end of the first input includes detecting liftoff of the one or more contacts from the touch-sensitive surface). After (e.g., in response to) detecting the end of the first input (18018): in accordance with a determination that the three-dimensional object has been rotated beyond a respective rotation threshold relative to the first axis, the device reverses at least a portion of the rotation of the three-dimensional object relative to the first axis; and, in accordance with a determination that the three-dimensional object has not been rotated beyond the respective rotation threshold relative to the first axis, the device forgoes reversing the rotation of the three-dimensional object relative to the first axis (e.g., ceasing rotation of the three-dimensional object relative to the first axis and/or continuing rotation of the three-dimensional object relative to the first axis in the direction of motion of the input by a magnitude determined by a magnitude of the input prior to detecting the end of the input). For example, after virtual object 11002 rotates beyond a rotation threshold, as described with regard to Figures 13E-13G, the rotation of virtual object 11002 is reversed, as illustrated by Figures 13G-13H. In some embodiments, the amount of reversing of the rotation of the three-dimensional object is determined based on how far the three-dimensional object has rotated beyond the respective rotation threshold (e.g., the rotation of the three-dimensional object is reversed by a greater amount relative to the first axis if the amount by which the rotation of the three-dimensional object rotated beyond the respective rotational threshold is greater, as compared to a smaller amount of reversing the rotation relative to the first axis if the amount by which the rotation of the three-dimensional object rotated beyond the respective rotational threshold is smaller). In some embodiments, the reversing of the rotation is driven by a simulated physical parameter such as an elastic effect that pulls with a greater force the further the three-dimensional object is rotated beyond the respective rotation threshold relative to the first axis. In some embodiments, the reversing of rotation is in a direction of rotation that is determined based on the direction of rotation relative to the first axis that rotated beyond the respective rotation threshold (e.g., if the three-dimensional object was rotated so a top of the object moved backward into the display, the reversing of the rotation is rotating the top of the object forward out of the display; if the three-dimensional object was rotated so that a top of the object was rotated forward out of the display, the reversing of the rotation is rotating the top of the object backward into the display; if the three-dimensional object was rotated so a right side of the object moved backward into the display, the reversing of the rotation is rotating the right side of the object forward out of the display; and/or if the three-dimensional object was rotated so that a left side of the object was rotated forward out of the display, the reversing of the rotation is rotating the left side of the object backward into the display). In some embodiments, for example, where rotation relative to the second axis is constrained to a respective range of angles, a similar rubberbanding (e.g., conditional reversing of rotation) is performed for rotation about the second axis. In some embodiments, for example, where rotation relative to the second axis is not constrained, such that the three-dimensional object is allowed by the device to rotate 360 degrees, rubberbanding is not performed for rotation about the second axis (e.g., because the device does not impose a rotation threshold on rotation relative to the second axis). Reversing at least a portion of the rotation of the three-dimensional object relative to the first axis after detecting the end of an input, or forgoing reversing a portion of the rotation of the three-dimensional object relative to the first axis, depending on whether the object has been rotated beyond a rotation threshold, provides visual feedback indicating a rotation threshold applicable to rotation of the virtual object. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to avoid attempting to provide input for rotating the virtual object beyond the rotation threshold), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
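A minimal sketch of the described rubberbanding, assuming an illustrative 30-degree threshold and spring constant: the overshoot beyond the threshold drives a proportional elastic pull back toward it, so a larger overshoot is reversed more strongly.

```swift
import CoreGraphics

/// On liftoff, if the tilt about the first (x) axis has gone past the
/// respective rotation threshold, it springs back toward the threshold
/// with a pull proportional to the overshoot.
struct RubberBand {
    let limit: CGFloat = 30       // respective rotation threshold, in degrees (assumed)
    let stiffness: CGFloat = 0.2  // elastic pull per frame, proportional to overshoot

    /// One settle step after liftoff; returns the corrected angle.
    func settle(angle: CGFloat) -> CGFloat {
        let overshoot = max(abs(angle) - limit, 0)
        guard overshoot > 0 else { return angle }  // within range: forgo reversing
        // Pull back toward the limit; larger overshoot, stronger pull.
        return angle - (angle > 0 ? 1 : -1) * overshoot * stiffness
    }
}
```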
[00487] In some embodiments (18020), in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a third axis (e.g., a third axis that is perpendicular to the plane of the display (e.g., the x-y plane), such as a z axis) that is different from the first axis and the second axis, the device forgoes rotating the virtual three-dimensional object relative to the third axis (e.g., the rotation around the z-axis is forbidden and the request to rotate the object around the z-axis is disregarded by the device). In some embodiments, the device provides an alert (e.g., a tactile output to indicate failure of the input). Forgoing rotation of a virtual object in accordance with a determination that a rotation input corresponds to a request to rotate the virtual object about a third axis provides visual feedback indicating that rotation about the third axis is restricted. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to avoid attempting to provide input for rotating the virtual object about the third axis), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
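A sketch of this gating, in which a z-axis request is disregarded and a tactile alert is emitted; the RequestedAxis enum and handler are hypothetical, while the haptic call is UIKit's UINotificationFeedbackGenerator:

```swift
import UIKit

/// Rotation about the axis perpendicular to the display is forbidden; the
/// request is disregarded and a haptic signals that the input failed.
enum RequestedAxis { case x, y, z }

func handleRotationRequest(_ axis: RequestedAxis, degrees: CGFloat, apply: (CGFloat) -> Void) {
    guard axis != .z else {
        UINotificationFeedbackGenerator().notificationOccurred(.error)  // alert on failure
        return
    }
    apply(degrees)  // x- and y-axis rotations proceed as usual
}
```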
[00488] In some embodiments, the device displays (18022) a representation of a shadow cast by the virtual three-dimensional object while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region (e.g., the staging user interface). The device varies a shape of the representation of the shadow in accordance with the rotation of the virtual three-dimensional object relative to the first axis and/or second axis. For example, a shape of shadow 13006 of virtual object 11002 varies from Figures 13B-13F as the virtual object 11002 rotates. In some embodiments, the shadow shifts and changes shape to indicate a current orientation of the virtual object relative to an invisible ground plane in the staging user interface that supports a predefined bottom side of the virtual object. In some embodiments, the surface of the virtual three-dimensional object appears to reflect light from a simulated light source located in a predefined direction in a virtual space represented in the staging user interface. Varying a shape of a shadow in accordance with rotation of a virtual object provides visual feedback (e.g., indicating a virtual plane (e.g., a stage of a staging view) relative to which the virtual object is oriented). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user determine the proper direction for a swipe input to cause rotation about the first axis or the second axis), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
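In a SceneKit-style implementation (an assumption; the embodiments do not name a framework), the varying shadow falls out of a directional light and a shadow-only ground plane, so the shadow's shape tracks the model's orientation automatically:

```swift
import SceneKit

/// A directional light above an invisible ground plane casts a shadow whose
/// shape changes as the model node rotates; the plane renders only the
/// shadow, not its own color. Angles and setup are illustrative.
func makeStagingLighting(for scene: SCNScene) {
    let light = SCNLight()
    light.type = .directional
    light.castsShadow = true            // shadow shape follows the model's orientation
    light.shadowMode = .deferred
    let lightNode = SCNNode()
    lightNode.light = light
    lightNode.eulerAngles = SCNVector3(-Float.pi / 3, 0, 0)  // simulated light from a predefined direction
    scene.rootNode.addChildNode(lightNode)

    // Invisible ground plane that receives the shadow but draws no color.
    let ground = SCNFloor()
    ground.firstMaterial?.colorBufferWriteMask = []
    scene.rootNode.addChildNode(SCNNode(geometry: ground))
}
```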
[00489] In some embodiments, while rotating the virtual three-dimensional object in the first user interface region (18024): in accordance with a determination that the virtual three-dimensional object is displayed with a second perspective that reveals a predefined bottom of the virtual three-dimensional object, the device forgoes display of the representation of the shadow with the representation of the second perspective of the virtual three-dimensional object. For example, the device does not display the shadow of the virtual object when the virtual object is being viewed from below (e.g., as described with regard to Figures 13G-13I). Forgoing display of a shadow of a virtual object in accordance with a determination that the bottom of the virtual object is displayed provides visual feedback (e.g., indicating that the object has rotated to a position that no longer corresponds to a virtual plane (e.g., a stage of a staging view)). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
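A minimal sketch of this condition, with an assumed 90-degree tilt boundary for when the predefined bottom becomes visible:

```swift
import CoreGraphics

/// Hide the shadow once the rotation about the x-axis reveals the model's
/// predefined bottom; past ~90 degrees of tilt the object is seen from
/// below and the staging ground plane no longer applies.
func shadowIsVisible(tiltDegrees: CGFloat) -> Bool {
    return abs(tiltDegrees) < 90
}
```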
[00490] In some embodiments, after rotating the virtual three-dimensional object in the first user interface region (e.g., the staging view), the device detects (18026) a second input that corresponds to a request to reset the virtual three-dimensional object (e.g., the second input is a double tap on the first user interface region) in the first user interface region. In response to detecting the second input, the device displays (18028) (e.g., through rotating and resizing the virtual object) a representation of a predefined original perspective (e.g., the first perspective, or a default starting perspective that is distinct from the first perspective (e.g., when the first perspective is the displayed perspective after user manipulation in the staging user interface)) of the virtual three-dimensional object in the first user interface region (e.g., in response to a double tap, the device resets the orientation of the virtual object to a predefined original orientation (e.g., upright with a front side facing the user, with a bottom side resting on a predefined ground plane)). For example, Figures 13I-13J illustrate an input that causes the perspective of virtual object 11002 to change from an altered perspective (as a result of the rotation input described with regard to Figures 13B-13G) to an original perspective in Figure 13J (which is the same as the perspective of virtual object 11002 shown in Figure 13A). In some embodiments, in response to detecting the second input that corresponds to the instruction to reset the virtual three-dimensional object, the device also resizes the virtual three-dimensional object to reflect a default display size of the virtual three-dimensional object. In some embodiments, a double tap input resets both the orientation and the size of the virtual object in the staging user interface, while a double tap input resets only the size, but not the orientation, of the virtual object in the augmented reality user interface. In some embodiments, the device requires that the double tap be directed to the virtual object in order to reset the size of the virtual object in the augmented reality user interface, while the device resets the orientation and size of the virtual object in response to double taps detected on the virtual object and double taps detected around the virtual object. In the augmented reality view, a single finger swipe drags the virtual object, rather than rotates the virtual object (e.g., unlike in the staging view). Displaying a predefined original perspective of a virtual object in response to detecting a request to reset the virtual object enhances the operability of the device and makes the user-device interface more efficient (e.g., by providing an option to reset the object rather than requiring the user to estimate when input provided to adjust properties of the object returns the object to the predefined original perspective). Reducing the number of inputs needed to perform an operation improves the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
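A hedged sketch of the double-tap reset in the staging view, again assuming SceneKit and illustrative values for the original orientation, default scale, and animation duration:

```swift
import SceneKit

/// Animate the model node back to a predefined upright orientation and
/// its default display size in response to a reset request.
func resetToOriginalPerspective(_ modelNode: SCNNode) {
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.3
    modelNode.eulerAngles = SCNVector3(0, 0, 0)  // upright, front side facing the user
    modelNode.scale = SCNVector3(1, 1, 1)        // predefined default display size
    SCNTransaction.commit()
}
```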
[00491] In some embodiments, while displaying the virtual three-dimensional object in the first user interface region (e.g., the staging user interface), the device detects (18030) a third input that corresponds to a request to resize the virtual three-dimensional object (e.g., the third input is a pinch or de-pinch gesture directed to the virtual object represented on the first user interface region, the third input having a magnitude that meets the criteria (e.g., original or augmented criteria (as described in greater detail below with reference to method 19000)) for initiating the resize operation). In response to detecting the third input, the device adjusts (18032) a size of the representation of the virtual three-dimensional object in the first user interface region in accordance with a magnitude of the input. For example, in response to an input that includes a de-pinch gesture (e.g., as described with regard to Figures 6N-6O), the size of virtual object 11002 is decreased. In some embodiments, the device displays an indicator to indicate the current zoom level of the virtual object when the size of the representation of the virtual three-dimensional object is adjusted. In some embodiments, the device ceases to display the indicator of zoom level upon termination of the third input. Adjusting a size of a virtual object in accordance with a magnitude of an input for resizing the object enhances the operability of the device (e.g., by providing the option to resize the object by a desired amount). Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
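A sketch of pinch-driven resizing under the same assumptions (a SceneKit node handle and a UIKit gesture recognizer); the initiation criteria of method 19000 are omitted here:

```swift
import UIKit
import SceneKit

/// Scale the model in accordance with the magnitude of a pinch gesture.
final class StagingResizer: NSObject {
    var modelNode: SCNNode!               // the displayed virtual object (assumed handle)
    private var baseScale: Float = 1

    @objc func handlePinch(_ pinch: UIPinchGestureRecognizer) {
        switch pinch.state {
        case .began:
            baseScale = modelNode.scale.x          // remember scale at gesture start
        case .changed:
            let s = baseScale * Float(pinch.scale) // size follows gesture magnitude
            modelNode.scale = SCNVector3(s, s, s)
        default:
            break   // a zoom-level indicator, if shown, would be hidden here
        }
    }
}
```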
[00492] In some embodiments, while adjusting the size of the representation of the virtual three-dimensional object in the first user interface region (e.g., the staging user interface), the device detects (18034) that the size of the virtual three-dimensional object has reached a predefined default display size of the virtual three-dimensional object. In response to detecting that the size of the virtual three-dimensional object has reached the predefined default display size of the virtual three-dimensional object, the device generates (18036) a tactile output (e.g., a discrete tactile output) to indicate that the virtual three-dimensional object is displayed at the predefined default display size. Figure 11O provides an example of a tactile output 11024 that is provided in response to detecting that a size of virtual object 11002 has reached a previous predefined size of virtual object 11002 (e.g., as described with regard to Figures 11M-11O). In some embodiments, the device generates the same tactile output when the size of the virtual object is reset to the default display size in response to a double tap input. Generating a tactile output in accordance with a determination that the size of the virtual object has reached a predefined default display size provides the user with feedback (e.g., indicating that no further input is needed to return the simulated size of the virtual object to the predefined size). Providing improved tactile feedback enhances the operability of the device (e.g., by providing sensory information that allows a user to perceive that the predefined simulated physical size of the virtual object has been reached without cluttering the user interface with displayed information), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
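The discrete tactile output can be sketched as a haptic fired only on the frame where the scale crosses the predefined default size (taken as 1.0 here, an assumption):

```swift
import UIKit

/// Fire a single haptic exactly when a pinch-driven scale crosses or
/// reaches the predefined default display size.
final class DefaultSizeHaptic {
    private let generator = UIImpactFeedbackGenerator(style: .medium)
    private var lastScale: Float = 1

    func scaleDidChange(to scale: Float) {
        // Trigger only on the transition across the default size, not on every frame.
        if (lastScale < 1 && scale >= 1) || (lastScale > 1 && scale <= 1) {
            generator.impactOccurred()
        }
        lastScale = scale
    }
}
```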
[00493] In some embodiments, a visual indication of a zoom level (e.g., a slider that indicates a value that corresponds to a current zoom level) is displayed in the first user interface region (e.g., the staging user interface). As the size of the representation of the virtual three-dimensional object is adjusted, the visual indication of the zoom level is adjusted in accordance with the adjusted size of the representation of the virtual three-dimensional object.
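A minimal sketch of keeping such an indicator in sync with the adjusted size (the slider range is an assumption):

```swift
import UIKit

/// Mirror the model's current zoom level on a slider while the size
/// of the representation is being adjusted.
func syncZoomIndicator(_ slider: UISlider, toScale scale: Float) {
    slider.minimumValue = 0.1
    slider.maximumValue = 4.0
    slider.value = min(max(scale, slider.minimumValue), slider.maximumValue)
}
```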
[00494] In some embodiments, while displaying a representation of a third perspective of the virtual three-dimensional object in the first user interface region (e.g., the staging user interface), the device detects (18042) a fourth input that corresponds to a request for displaying the virtual three-dimensional object in a second user interface region (e.g., an augmented reality user interface) that includes a field of view of one or more cameras (e.g., the cameras embedded in the device). In response to detecting the fourth input, the device displays (18044), via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras that is included in the second user interface region (e.g., the field of view of the one or more cameras is displayed in response to the request to display the virtual object in the second user interface region), wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located. Displaying the representation of the virtual object includes: rotating the virtual three-dimensional object about the first axis (e.g., the axis that is parallel to the plane of the display (e.g., the x-y plane) in the horizontal direction, such as an x axis) to a predefined angle (e.g., to a default yaw angle, such as 0 degrees; or to an angle that is aligned (e.g., parallel) with a plane that is detected in the physical environment captured in the field of view of the one or more cameras); and maintaining a current angle of the virtual three-dimensional object relative to the second axis (e.g., the axis that is parallel to the plane of the display (e.g., the x-y plane) in the vertical direction, such as a y axis). In some embodiments, the device displays an animation of the three-dimensional object gradually rotating relative to the first axis to the predefined angle. Rotating a virtual object about the first axis to a predefined angle in response to a request to display the virtual object in the field of view of the one or more cameras (e.g., without requiring further input to reposition the virtual object to a predefined orientation relative to a plane) enhances the operability of the device. Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
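Under the same SceneKit assumption, the staging-to-AR transition amounts to animating the first-axis angle to the predefined value while leaving the second-axis angle untouched:

```swift
import SceneKit

/// Animate the tilt about the x-axis to a predefined angle (0 here, flat
/// against the detected plane) while preserving the user's yaw about the
/// y-axis. Duration is an illustrative assumption.
func transitionToARPerspective(_ modelNode: SCNNode) {
    let currentYaw = modelNode.eulerAngles.y     // keep rotation about the second axis
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.35      // gradual rotation, per the animation variant
    modelNode.eulerAngles = SCNVector3(0, currentYaw, 0)  // first axis snaps to the predefined angle
    SCNTransaction.commit()
}
```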
[00495] In some embodiments, while displaying a representation of a fourth perspective of the virtual three-dimensional object in the first user interface region (e.g., the staging user interface), the device detects (18046) a fifth input that corresponds to a request for returning to a two-dimensional user interface including a two-dimensional representation of the virtual three-dimensional object. In response to detecting the fifth input, the device (18048): rotates (e.g., before displaying the two-dimensional representation of the virtual three-dimensional object and the two-dimensional user interface) the virtual three-dimensional object to show a perspective of the virtual three-dimensional object that corresponds to the two-dimensional representation of the virtual three-dimensional object; and displays the two-dimensional representation of the virtual three-dimensional object after the virtual three-dimensional object is rotated to show the respective perspective that corresponds to the two-dimensional representation of the virtual three-dimensional object. In some embodiments, the device displays an animation of the three-dimensional object gradually rotating to show the perspective of the virtual three-dimensional object that corresponds to the two-dimensional representation of the virtual three-dimensional object. In some embodiments, the device also resizes the virtual three-dimensional object during the rotation or after the rotation to match the size of the two-dimensional representation of the virtual three-dimensional object that is displayed in the two-dimensional user interface. In some embodiments, an animated transition is displayed to show the rotated virtual three-dimensional object moving toward the position of the two-dimensional representation (e.g., the thumbnail image of the virtual object) in the two-dimensional user interface, and settling into that position. Rotating a virtual three-dimensional object to a perspective that corresponds to a two-dimensional representation of the virtual three-dimensional object in response to an input for returning to displaying the two-dimensional representation of the virtual three-dimensional object provides visual feedback (e.g., to indicate that the displayed object is two dimensional). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and avoid attempting to provide input for rotating the two-dimensional object along an axis for which rotation of the two-dimensional object is unavailable), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00496] In some embodiments, prior to displaying the representation of the first perspective of the virtual three-dimensional object, the device displays (18050) a user interface that includes a representation of the virtual three-dimensional object (e.g., a thumbnail or icon) that includes a representation of a view of the virtual three-dimensional object from a respective perspective (e.g., a static representation such as a two-dimensional image that corresponds to the virtual three-dimensional object). While displaying the representation of the virtual three-dimensional object, the device detects (18052) a request to display the virtual three-dimensional object (e.g., a tap input or other selection input directed to the representation of the virtual three-dimensional object). In response to detecting the request to display the virtual three-dimensional object, the device replaces (18054) display of the representation of the virtual three-dimensional object with the virtual three-dimensional object rotated to match the respective perspective of the representation of the virtual three-dimensional object. Figures 11A-11E provide an example of a user interface 5060 that displays a representation of virtual object 11002. In response to a request to display virtual object 11002, as described with regard to Figure 11A, display of user interface 5060 is replaced by display of virtual object 11002 in a staging user interface 6010, as shown in Figure 11E. The perspective of virtual object 11002 in Figure 11E is the same as the perspective of the representation of virtual object 11002 in Figure 11A. In some embodiments, the representation of the virtual three-dimensional object is scaled up (e.g., to a size that matches a size of the virtual three-dimensional object) before it is replaced with the virtual three-dimensional object. In some embodiments, the virtual three-dimensional object is initially displayed at the size of the representation of the virtual three-dimensional object and is subsequently scaled up. In some embodiments, during a transition from the representation of the virtual three-dimensional object to the virtual three-dimensional object, the device gradually enlarges the representation of the virtual three-dimensional object, cross-fades the representation of the virtual three-dimensional object with the virtual three-dimensional object, and then gradually enlarges the virtual three-dimensional object so as to create a smooth transition between the representation of the virtual three-dimensional object and the virtual three-dimensional object. In some embodiments, the initial location of the virtual three-dimensional object is selected to correspond to the location of the representation of the virtual three-dimensional object. In some embodiments, the representation of the virtual three-dimensional object is shifted to a location selected to correspond to the location in which the virtual three-dimensional object will be displayed. Replacing display of the (two-dimensional) representation of a virtual three-dimensional object with the virtual three-dimensional object rotated to match the perspective of the (two-dimensional) representation provides visual feedback (e.g., to indicate that the three-dimensional object is the same object as the two-dimensional representation of the virtual three-dimensional object). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
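As an illustrative sketch only (not part of the claimed embodiments), the staged transition just described, enlarging the representation, cross-fading it with the three-dimensional object, and then continuing to enlarge the object, can be modeled as a piecewise interpolation. The phase boundaries (0.4 and 0.6), the scale values, and the TransitionFrame type below are assumptions, not details from the specification:

```swift
import Foundation

// Hypothetical model of the thumbnail-to-object transition: phase 1 enlarges
// the 2D representation, phase 2 cross-fades it with the 3D object, and
// phase 3 continues enlarging the 3D object to its final size.
struct TransitionFrame {
    var representationScale: Double
    var representationOpacity: Double
    var objectScale: Double
    var objectOpacity: Double
}

func transitionFrame(at t: Double) -> TransitionFrame {
    let clamped = min(max(t, 0), 1)   // normalized transition progress
    switch clamped {
    case ..<0.4:  // phase 1: grow the 2D representation
        let p = clamped / 0.4
        return TransitionFrame(representationScale: 1 + p, representationOpacity: 1,
                               objectScale: 2, objectOpacity: 0)
    case ..<0.6:  // phase 2: cross-fade representation and 3D object
        let p = (clamped - 0.4) / 0.2
        return TransitionFrame(representationScale: 2, representationOpacity: 1 - p,
                               objectScale: 2, objectOpacity: p)
    default:      // phase 3: continue enlarging the 3D object
        let p = (clamped - 0.6) / 0.4
        return TransitionFrame(representationScale: 2, representationOpacity: 0,
                               objectScale: 2 + p, objectOpacity: 1)
    }
}
```

Driving this function with progress values from 0 to 1 yields the smooth enlarge/cross-fade/enlarge sequence described above.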
[00497] In some embodiments, prior to displaying the first user interface, the device displays (18056) a two-dimensional user interface including a two-dimensional representation of the virtual three-dimensional object. While displaying the two-dimensional user interface including the two-dimensional representation of the virtual three-dimensional object, the device detects (18058) a first portion of a touch input (e.g., an increase in intensity of a contact) that meets preview criteria (e.g., the preview criteria require that an intensity of the press input exceeds a first intensity threshold (e.g., a light press intensity threshold) and/or the preview criteria require that a duration of the press input exceeds a first duration threshold) at a location on the touch-sensitive surface that corresponds to the two-dimensional representation of the virtual three-dimensional object. In response to detecting the first portion of the touch input that meets the preview criteria, the device displays (18060) a preview of the virtual three-dimensional object that is larger than the two-dimensional representation of the virtual three-dimensional object (e.g., the preview is animated to show different perspectives of the virtual three-dimensional object). In some embodiments, the device displays an animation of the three-dimensional object gradually enlarging (e.g., based on a duration or pressure of the input or based on a predetermined rate of animation). Displaying a preview of the virtual three-dimensional object (e.g., without replacing display of the currently displayed user interface with a different user interface) enhances the operability of the device (e.g., by enabling the user to display the virtual three-dimensional object and return to viewing the two-dimensional representation of the virtual three-dimensional object without having to provide input for navigating between user interfaces). Reducing the number of inputs needed to perform an operation improves the operability of the device which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
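A minimal sketch of the preview criteria just described, assuming a press qualifies either by exceeding an intensity threshold or by exceeding a duration threshold (the paragraph permits intensity and/or duration criteria; the TouchSample type and the 0.5 values are invented for illustration):

```swift
import Foundation

// Simplified stand-in for the preview criteria of paragraph [00497].
struct TouchSample {
    var intensity: Double        // normalized contact intensity
    var timestamp: TimeInterval  // seconds since the contact began
}

struct PreviewCriteria {
    var intensityThreshold: Double = 0.5        // "light press" (assumed value)
    var durationThreshold: TimeInterval = 0.5   // assumed value

    func isMet(by samples: [TouchSample]) -> Bool {
        guard let first = samples.first, let last = samples.last else { return false }
        let pressedLongEnough = last.timestamp - first.timestamp >= durationThreshold
        let pressedHardEnough = samples.contains { $0.intensity >= intensityThreshold }
        return pressedHardEnough || pressedLongEnough
    }
}
```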
[00498] In some embodiments, while displaying the preview of the virtual three-dimensional object, the device detects (18062) a second portion of the touch input (e.g., by the same continuously maintained contact). In response to detecting the second portion of the touch input (18064): in accordance with a determination that the second portion of the touch input meets menu-display criteria (e.g., the menu-display criteria require that the contact moves by more than a threshold amount in a predefined direction (e.g., upward)), the device displays a plurality of selectable options (e.g., a sharing menu) corresponding to a plurality of operations associated with the virtual object (e.g., sharing options, such as various means of sharing the virtual object with another device or user); and in accordance with a determination that the second portion of the touch input meets staging criteria (e.g., the staging criteria require that the intensity of the contact exceeds a second threshold intensity (e.g., a deep press intensity threshold) that is greater than the first threshold intensity), the device replaces display of the two-dimensional user interface including the two-dimensional representation of the virtual three-dimensional object with the first user interface including the virtual three-dimensional object. Displaying a menu associated with the virtual object or replacing display of a two-dimensional user interface including the two-dimensional representation of the virtual three-dimensional object with the first user interface including the virtual three-dimensional object, depending on whether staging criteria are met, enables the performance of multiple different types of operations in response to an input. Enabling the performance of multiple different types of operations with the first type of input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
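The branching on the second portion of the touch input can be sketched as follows; the threshold values, the action names, and the either/or ordering are illustrative assumptions, not the claimed behavior:

```swift
// Sketch of the second-portion branch in paragraph [00498]: upward movement
// past a distance threshold shows the sharing menu, while a deep press
// replaces the 2D user interface with the staging user interface.
enum SecondPortionAction {
    case showSharingMenu
    case enterStagingUserInterface
    case none
}

func classifySecondPortion(upwardMovement: Double,
                           peakIntensity: Double,
                           menuMovementThreshold: Double = 40,  // points (assumed)
                           deepPressThreshold: Double = 0.8)    // assumed
                           -> SecondPortionAction {
    if upwardMovement > menuMovementThreshold {
        return .showSharingMenu             // menu-display criteria met
    } else if peakIntensity >= deepPressThreshold {
        return .enterStagingUserInterface   // staging criteria met
    }
    return .none
}
```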
[00499] In some embodiments, the first user interface includes (18066) a plurality of controls (e.g., buttons for switching to the world view, for going back, etc.). Prior to displaying the first user interface, the device displays (18068) a two-dimensional user interface including a two-dimensional representation of the virtual three-dimensional object. In response to detecting a request to display the virtual three-dimensional object in the first user interface, the device (18070) displays the virtual three-dimensional object in the first user interface without displaying a set of one or more controls associated with the virtual three-dimensional object; and after displaying the virtual three-dimensional object in the first user interface, the device displays the set of one or more controls. For example, as described with regard to Figures 11A-11E, a user interface 5060 that includes a two-dimensional representation of virtual object 11002 is displayed prior to staging user interface 6010. In response to a request to display virtual object 11002 in staging user interface 6010 (as described with regard to Figure 11A), virtual object 11002 is displayed (as shown in Figures 11B-11C) without controls 6016, 6018, and 6020 of staging user interface 6010. In Figures 11D-11E, controls 6016, 6018, and 6020 of staging user interface 6010 fade into view in the user interface. In some embodiments, the set of one or more controls includes a control for displaying the virtual three-dimensional object in an augmented reality environment where the virtual three-dimensional object is placed in a fixed position relative to a plane detected in a field of view of one or more cameras of the device. In some embodiments, in response to detecting the request to display the virtual three-dimensional object in the first user interface: in accordance with a determination that the virtual three-dimensional object is not ready to be displayed in the first user interface (e.g., the three-dimensional model of the virtual object is not completely loaded at the time when the first user interface is ready to be displayed) (e.g., the loading time of the virtual object is more than a threshold amount of time (e.g., significant and perceivable to the user)), the device displays a portion of the first user interface (e.g., a background window of the first user interface) without displaying the plurality of controls on the first user interface; and in accordance with a determination that the virtual three-dimensional object is ready to be displayed in the first user interface (e.g., after the portion of the first user interface is displayed without the controls), the device displays (e.g., fading in) the virtual three-dimensional object in the first user interface; and the device displays (e.g., fading in) the controls after the virtual three-dimensional object is displayed in the first user interface. In response to detecting the request to display the virtual three-dimensional object in the first user interface and in accordance with a determination that the virtual three-dimensional object is ready to be displayed (e.g., the three-dimensional model of the virtual object has been loaded when the first user interface is ready to be displayed (e.g., the loading time of the virtual object is less than the threshold amount of time (e.g., negligible and not perceivable to the user))): the device displays the first user interface with the plurality of controls on the first user interface; and the device displays (e.g., no fading in) the virtual three-dimensional object in the first user interface with the plurality of controls. In some embodiments, when exiting the staging user interface to return to the two-dimensional user interface (e.g., in response to a request to "go back"), the controls fade out first, before the virtual three-dimensional object is transformed into the two-dimensional representation of the virtual three-dimensional object. Displaying controls after displaying a virtual three-dimensional object in a user interface provides visual feedback (e.g., indicating that controls to manipulate a virtual object are unavailable during an amount of time required to load the virtual object). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid providing input to manipulate the object while manipulation operations are unavailable during a loading time for the virtual object), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
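The load-dependent sequencing above amounts to: if the model is already loaded, show the interface, the object, and the controls together with no fades; otherwise show the background first, withhold the controls, and fade the object in followed by the controls. A schematic sketch, assuming placeholder show/fadeIn presentation hooks and a completion-based loader (none of these names come from the specification):

```swift
// Schematic sequencing for paragraph [00499]; the presentation layer is
// abstracted into closures so the ordering logic stands alone.
func presentStagingInterface(modelIsLoaded: Bool,
                             loadModel: (@escaping () -> Void) -> Void,
                             show: (String) -> Void,
                             fadeIn: @escaping (String) -> Void) {
    if modelIsLoaded {
        show("staging background")
        show("3D object")              // no fade: loading time is imperceptible
        show("controls")
    } else {
        show("staging background")     // controls withheld while the model loads
        loadModel {
            fadeIn("3D object")        // object fades in once the model is ready...
            fadeIn("controls")         // ...and the controls fade in after it
        }
    }
}

// Stand-in usage with a loader that completes immediately.
presentStagingInterface(modelIsLoaded: false,
                        loadModel: { completion in completion() },
                        show: { print("show \($0)") },
                        fadeIn: { print("fade in \($0)") })
```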
[00500] It should be understood that the particular order in which the operations in Figures 18A-18I have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 19000, and 20000) are also applicable in an analogous manner to method 18000 described above with respect to Figures 18A-18I. For example, contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 18000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 19000, and 20000). For brevity, these details are not repeated here.
[00501] Figures 19A-19H are flow diagrams illustrating method 19000 of, in accordance with a determination that a first threshold magnitude of movement is met for a first object manipulation behavior, increasing a second threshold magnitude of movement required for a second object manipulation behavior. Method 19000 is performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) having a display generation component (e.g., a display, a projector, a heads-up display, or the like) and a touch-sensitive surface (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface). Some operations in method 19000 are, optionally, combined and/or the order of some operations is, optionally, changed.
[00502] The device displays (19002), via the display generation component, a first user interface region that includes a user interface object (e.g., a user interface region including a representation of a virtual object) that is associated with a plurality of object manipulation behaviors, including a first object manipulation behavior (e.g., rotation of the user interface object around a respective axis) that is performed in response to inputs that meet first gesture-recognition criteria (e.g., rotation criteria) and a second object manipulation behavior (e.g., one of translation of the user interface object or scaling of the user interface object) that is performed in response to inputs that meet second gesture-recognition criteria (e.g., one of translation criteria and scaling criteria). For example, a displayed virtual object 11002 is associated with manipulation behaviors that include rotation around a respective axis (e.g., as described with regard to Figures 14B-14E), translation (e.g., as described with regard to Figures 14K-14M), and scaling (e.g., as described with regard to Figures 14G-14I).
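One plausible way to model a manipulation behavior and its recognition criteria is a small value type per criterion; the following sketch is an assumption about structure (the specification names no data types or threshold values):

```swift
// Illustrative model of the behaviors and criteria in paragraph [00502].
enum ManipulationBehavior {
    case rotation      // e.g., rotation around a respective axis
    case translation   // e.g., dragging the object across a surface
    case scaling       // e.g., pinching to resize
}

struct GestureRecognitionCriteria {
    let behavior: ManipulationBehavior
    var movementThreshold: Double   // magnitude the gesture must exceed
    var recognized = false

    mutating func evaluate(movementMagnitude: Double) {
        if movementMagnitude >= movementThreshold { recognized = true }
    }
}

// Stand-in usage: a rotation criterion with an arbitrary threshold.
var rotationCriteria = GestureRecognitionCriteria(behavior: .rotation, movementThreshold: 10)
rotationCriteria.evaluate(movementMagnitude: 12)   // recognized becomes true
```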
[00503] While displaying the first user interface region, the device detects (19004) a first portion of an input directed to the user interface object (e.g., the device detects one or more contacts at locations on the touch-sensitive surface that correspond to the display location of the user interface object), including detecting movement of one or more contacts across the touch-sensitive surface, and while the one or more contacts are detected on the touch-sensitive surface, the device evaluates movement of the one or more contacts with respect to both the first gesture-recognition criteria and the second gesture-recognition criteria.
[00504] In response to detecting the first portion of the input, the device updates an appearance of the user interface object based on the first portion of the input, including (19006): in accordance with a determination that the first portion of the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria: changing the appearance of the user interface object (e.g., rotating the user interface object) in accordance with the first object manipulation behavior based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input); and (e.g., without changing an appearance of the user interface object in accordance with the second object manipulation behavior) updating the second gesture-recognition criteria by increasing a threshold for the second gesture-recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the second gesture-recognition criteria). For example, in Figure 14E, virtual object 11002 has rotated in accordance with a determination that rotation criteria have been met (before scaling criteria have been met), and a threshold ST for the scaling criteria is increased to ST'. In some embodiments, before the criteria for recognizing a gesture for rotating the object are met, it is relatively easy to initiate a translation or scaling operation on the object by meeting the criteria for recognizing a gesture for translation or scaling (assuming that the criteria for translating or scaling have not been met before). Once the criteria for recognizing the gesture for rotating the object are met, it becomes harder to initiate the translation or scaling operation on the object (e.g., the criteria for translation and scaling are updated with increased thresholds for the movement parameter), and the object manipulation is biased toward the manipulation behavior corresponding to the gesture that is already recognized and used to manipulate this object. In accordance with a determination that the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria: the device changes the appearance of the user interface object (e.g., translating the user interface object or resizing the user interface object) in accordance with the second object manipulation behavior based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input); and (e.g., without changing an appearance of the user interface object in accordance with the first object manipulation behavior) updates the first gesture-recognition criteria by increasing a threshold for the first gesture-recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the first gesture-recognition criteria). For example, in Figure 14I, the size of virtual object 11002 has increased in accordance with a determination that scaling criteria have been met (before rotation criteria have been met), and a threshold RT for the rotation criteria is increased to RT'. In some embodiments, before the criteria for recognizing a gesture for translating or scaling the object are met, it is relatively easy to initiate a rotation operation on the object by meeting the criteria for recognizing a gesture for rotation (assuming that the criteria for recognizing a gesture for rotating the object have not been met before). Once the criteria for recognizing the gesture for translating or scaling the object are met, it becomes harder to initiate the rotation operation on the object (e.g., the criteria for rotating the object are updated with an increased threshold for the movement parameter), and the object manipulation behavior is biased toward the manipulation behavior corresponding to the gesture that is already recognized and used to manipulate this object. In some embodiments, the appearance of the user interface object is changed dynamically and continuously (e.g., showing different sizes, positions, perspectives, reflections, shadows, etc.) in accordance with the values of the respective movement parameter of the input. In some embodiments, the device follows a preset correspondence (e.g., a respective correspondence for each type of manipulation behavior) between the movement parameter (e.g., a respective movement parameter for each type of manipulation behavior) and the changes made to the appearance of the user interface object (e.g., a respective aspect of the appearance for each type of manipulation behavior). Increasing a first threshold for input movement required for a first object manipulation when input movement increases above a second threshold for a second object manipulation enhances the operability of the device (e.g., by helping the user to avoid accidentally performing a second object manipulation while attempting to provide input for performing a first object manipulation). Improving the user's ability to control different types of object manipulation enhances the operability of the device and makes the user-device interface more efficient.
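The core of the branch structure above can be summarized in a few lines: whichever criteria are satisfied first trigger their manipulation behavior and raise the rival threshold (ST to ST', RT to RT'). In this sketch the initial values and the doubling factor are invented for illustration; only the raising of the competing threshold reflects the paragraph above:

```swift
// Illustrative recognizer for the threshold-raising behavior of [00504],
// using rotation and scaling as the two competing behaviors.
struct ManipulationRecognizer {
    var rotationThreshold = 10.0   // RT (assumed value)
    var scalingThreshold = 50.0    // ST (assumed value)
    var rotationRecognized = false
    var scalingRecognized = false

    mutating func process(rotationMagnitude: Double, scalingMagnitude: Double) {
        if !rotationRecognized && rotationMagnitude >= rotationThreshold {
            rotationRecognized = true
            if !scalingRecognized {
                scalingThreshold *= 2    // ST -> ST': scaling is now harder to start
            }
        }
        if !scalingRecognized && scalingMagnitude >= scalingThreshold {
            scalingRecognized = true
            if !rotationRecognized {
                rotationThreshold *= 2   // RT -> RT': rotation is now harder to start
            }
        }
        if rotationRecognized {
            // rotate the object by rotationMagnitude here
        }
        if scalingRecognized {
            // rescale the object by scalingMagnitude here
        }
    }
}
```

A translation criterion could be added symmetrically; the bias always favors whichever gesture was recognized first.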
[00505] In some embodiments, after updating the appearance of the user interface object based on the first portion of the input, the device detects (19008) a second portion of the input (e.g., by the same continuously maintained contacts in the first portion of the input, or by different contacts detected after termination (e.g., lift-off) of the contacts in the first portion of the input). In some embodiments, the second portion of the input is detected based on continuously detected inputs that are directed to the user interface object. In response to detecting the second portion of the input, the device updates (19010) the appearance of the user interface object based on the second portion of the input, including: in accordance with a determination that the first portion of the input met the first gesture-recognition criteria and the second portion of the input does not meet the updated second gesture-recognition criteria: (e.g., without regard to whether or not the second portion of the input meets the first gesture-recognition criteria or the original second gesture-recognition criteria) changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input (e.g., based on a direction and/or magnitude of the second portion of the input) without changing the appearance of the user interface object in accordance with the second object manipulation behavior (e.g., even if the second portion of the input does meet the original second gesture-recognition criteria before they were updated); and in accordance with a determination that the first portion of the input met the second gesture-recognition criteria and the second portion of the input does not meet the updated first gesture-recognition criteria: (e.g., without regard to whether or not the second portion of the input meets the second gesture-recognition criteria or the original first gesture-recognition criteria) changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input (e.g., based on a direction and/or magnitude of the second portion of the input) without changing the appearance of the user interface object in accordance with the first object manipulation behavior (e.g., even if the second portion of the input does meet the original first gesture-recognition criteria before they were updated).
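Continuing the ManipulationRecognizer sketch above, the behavior of this paragraph falls out of the raised threshold: input that would have satisfied the original scaling threshold no longer starts a scaling operation once rotation has been recognized. The magnitudes below are arbitrary:

```swift
var recognizer = ManipulationRecognizer()
recognizer.process(rotationMagnitude: 12, scalingMagnitude: 0)   // rotation recognized; ST raised from 50 to 100
recognizer.process(rotationMagnitude: 3, scalingMagnitude: 60)   // 60 meets the original ST but not ST': rotation only
recognizer.process(rotationMagnitude: 0, scalingMagnitude: 110)  // 110 meets ST': scaling is now also recognized
```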
[00506] In some embodiments (19012), while the appearance of the user interface object is changed in accordance with the first object manipulation behavior based on the second portion of the input after the first portion of the input met the first gesture-recognition criteria, the second portion of the input includes input that meets the second gesture-recognition criteria before the second gesture-recognition criteria were updated (e.g., with the original threshold(s) for the movement parameter(s) of the input in the second gesture-recognition criteria before the threshold(s) are increased) (e.g., the second portion of the input does not include input that meets the updated second gesture-recognition criteria).
[00507] In some embodiments (19014), while the appearance of the user interface object is changed in accordance with the second object manipulation behavior based on the second portion of the input after the first portion of the input met the second gesture-recognition criteria, the second portion of the input includes input that meets the first gesture-recognition criteria before the first gesture-recognition criteria were updated (e.g., with the original threshold(s) for the movement parameter(s) of the input in the first gesture-recognition criteria before the threshold(s) are increased) (e.g., the second portion of the input does not include input that meets the updated first gesture-recognition criteria).
[00508] In some embodiments (19016), while the appearance of the user interface object is changed in accordance with the first object manipulation behavior based on the second portion of the input after the first portion of the input met the first gesture-recognition criteria, the second portion of the input does not include input that meets the first gesture-recognition criteria (e.g., with the original threshold(s) for the movement parameter(s) of the input in the first gesture-recognition criteria). For example, after the first gesture-recognition criteria are met once, the input no longer needs to continue to meet the first gesture-recognition criteria in order to cause the first object manipulation behavior.
[00509] In some embodiments (19018), while the appearance of the user interface object is changed in accordance with the second object manipulation behavior based on the second portion of the input after the first portion of the input met the second gesture-recognition criteria, the second portion of the input does not include input that meets the second gesture-recognition criteria (e.g., with the original threshold(s) for the movement parameter(s) of the input in the second gesture-recognition criteria). For example, after the second gesture-recognition criteria are met once, the input no longer needs to continue to meet the second gesture-recognition criteria in order to cause the second object manipulation behavior. Performing a first object manipulation behavior when a second portion of the input includes movement that increases above an increased threshold enhances the operability of the device (e.g., by providing the user with the ability to intentionally perform a second object manipulation after performing a first object manipulation by meeting the increased criteria, without requiring the user to provide a new input). Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00510]
[00510] In some In embodiments, some embodiments, updating updating thethe appearance appearance of the of the user user interface interface object object
based on the second portion of the input includes (19020): in accordance with a determination that the first portion of the input met the second gesture-recognition criteria and the second portion of the input meets the updated first gesture-recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input; and changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input; and, in accordance with a determination that the first portion of the input met the first gesture-recognition criteria and the second portion of the input meets the updated second gesture-recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input; and changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input. For example, after the first gesture-recognition criteria were met first, and the input then met the updated second gesture-recognition criteria, the input can now cause both the first and the second object manipulation behaviors. For example, after the second gesture-recognition criteria were met first, and the input then met the updated first gesture-recognition criteria, the input can now cause both the first and the second object manipulation behaviors. Updating the object in accordance with first object manipulation behavior and the second object manipulation behavior in response to a portion of the input detected after the second gesture-recognition criteria and updated first gesture-recognition criteria are met enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using first object manipulation and second object manipulation after satisfying an increased threshold without requiring the user to provide a new input). Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
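For illustration only, the two-gesture escalation described above can be sketched as follows. The type name, property names, and the escalation factor are hypothetical assumptions and not part of the described embodiments:

```swift
/// A sketch of threshold escalation between two competing gestures.
/// All names and the 1.75x escalation factor are illustrative assumptions.
struct TwoGestureRecognizer {
    var firstThreshold = 1.0    // e.g., a rotation criterion, arbitrary units
    var secondThreshold = 1.0   // e.g., a translation criterion
    var firstMet = false
    var secondMet = false

    /// Feed the accumulated movement metrics of the current input portion.
    mutating func consume(firstMetric: Double, secondMetric: Double) {
        if !firstMet, firstMetric >= firstThreshold {
            firstMet = true
            // Bias toward the recognized gesture: the competing gesture
            // now has to satisfy updated (increased) criteria.
            if !secondMet { secondThreshold *= 1.75 }
        }
        if !secondMet, secondMetric >= secondThreshold {
            secondMet = true
            if !firstMet { firstThreshold *= 1.75 }
        }
    }

    /// Once both criteria (original or updated) have been met, later
    /// portions of the input drive both manipulation behaviors.
    var bothBehaviorsActive: Bool { firstMet && secondMet }
}
```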
[00511] In some embodiments, after updating the appearance of the user interface object based on the second portion of the input (e.g., after both the first gesture-recognition criteria and updated second gesture-recognition criteria are met, or after both the second gesture-recognition criteria and the updated first gesture-recognition criteria are met), the device detects (19022) a third portion of the input (e.g., by the same continuously maintained contacts in the first and second portions of the input, or different contacts detected after termination (e.g., lift-off) of the contacts in the first portion and second portion of the input). In response to detecting the third portion of the input, the device updates (19024) the appearance of the user interface object based on the third portion of the input, including: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the third portion of the input; and changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the third portion of the input. For example, after both the first gesture-recognition criteria and updated second gesture-recognition criteria were met, or after both the second gesture-recognition criteria and the updated first gesture-recognition criteria were met, the input can cause both the first and the second object manipulation behaviors subsequently without regard to the thresholds in the original or updated first and second gesture-recognition criteria. Updating the object in accordance with first object manipulation behavior and second object manipulation behavior in response to a portion of the input detected after the second gesture-recognition criteria and the updated first gesture-recognition criteria are met enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using first object manipulation and second object manipulation after demonstrating an intention to perform the first object manipulation type by satisfying an increased threshold, without requiring the user to provide a new input). Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
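As a rough illustration of the behavior just described, once both criteria have been met, each subsequent portion of the input can simply be applied through both behaviors with no further threshold checks. The names below are hypothetical:

```swift
/// Sketch: once both gestures are unlocked, a later input portion is
/// applied through both manipulation behaviors directly (no thresholds).
/// Property and parameter names are illustrative assumptions.
struct ObjectAppearance {
    var rotationAngle = 0.0   // degrees
    var translation = 0.0     // points along one axis, for brevity

    mutating func applyUnlockedPortion(rotationDelta: Double,
                                       translationDelta: Double) {
        rotationAngle += rotationDelta    // first behavior, unconditionally
        translation += translationDelta   // second behavior, unconditionally
    }
}
```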
[00512] In some embodiments (19026), the third portion of the input does not include input that meets the first gesture-recognition criteria or input that meets the second gesture-recognition criteria. For example, after both the first gesture-recognition criteria and updated second gesture-recognition criteria are met, or after both the second gesture-recognition criteria and the updated first gesture-recognition criteria are met, the input can cause both the first and the second object manipulation behaviors subsequently without regard to the thresholds in the original or updated first and second gesture-recognition criteria. Updating an object in accordance with the first object manipulation behavior and the second object manipulation behavior in response to a portion of the input detected after the second gesture-recognition criteria and the updated first gesture-recognition criteria are met enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using first object manipulation and second object manipulation after satisfying heightened criteria, without requiring the user to provide a new input). Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00513] In some embodiments, the plurality of object manipulation behaviors includes (19028) a third object manipulation behavior (e.g., rotation of the user interface object around a respective axis) that is performed in response to inputs that meet third gesture-recognition criteria (e.g., scaling criteria). Updating the appearance of the user interface object based on the first portion of the input includes (19030): in accordance with a determination that the first portion of the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria or meeting the third gesture-recognition criteria: changing the appearance of the user interface object (e.g., rotating the user interface object) in accordance with the first object manipulation behavior based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input); and (e.g., without changing an appearance of the user interface object in accordance with the second object manipulation behavior) updating the second gesture-recognition criteria by increasing the threshold for the second gesture-recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the second gesture-recognition criteria). For example, before the criteria for recognizing a gesture for rotating the object are met, it is relatively easy to initiate a translation or scaling operation on the object by meeting the criteria for recognizing a gesture for translation or scaling (assuming that the criteria for translating or scaling have not been met before). Once the criteria for recognizing the gesture for rotating the object are met, it becomes harder to initiate the translation or scaling operation on the object (e.g., the criteria for translation and scaling are updated with increased thresholds for the movement parameter), and the object manipulation is biased toward the manipulation behavior corresponding to the gesture that is already recognized and used to manipulate this object. The device updates the third gesture-recognition criteria by increasing a threshold for the third gesture-recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the third gesture-recognition criteria). For example, before the criteria for recognizing a gesture for rotating the object are met, it is relatively easy to initiate a translation or scaling operation on the object by meeting the criteria for recognizing a gesture for translation or scaling (assuming that the criteria for translating or scaling have not been met before). Once the criteria for recognizing the gesture for rotating the object are met, it becomes harder to initiate the translation or scaling operation on the object (e.g., the criteria for translation and scaling are updated with increased thresholds for the movement parameter), and the object manipulation is biased toward the manipulation behavior corresponding to the gesture that is already recognized and used to manipulate this object. In accordance with a determination that the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria or meeting the third gesture-recognition criteria: the device changes the appearance of the user interface object (e.g., translating the user interface object or resizing the user interface object) in accordance with the second object manipulation behavior based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input); and (e.g., without changing an appearance of the user interface object in accordance with the first object manipulation behavior) updates the first gesture-recognition criteria by increasing a threshold for the first gesture-recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the first gesture-recognition criteria). For example, before the criteria for recognizing a gesture for translating or scaling the object are met, it is relatively easy to initiate a rotation operation on the object by meeting the criteria for recognizing a gesture for rotation (assuming that the criteria for recognizing a gesture for rotating the object have not been met before). Once the criteria for recognizing the gesture for translating or scaling the object are met, it becomes harder to initiate the rotation operation on the object (e.g., the criteria for rotating the object are updated with an increased threshold for the movement parameter), and the object manipulation behavior is biased toward the manipulation behavior corresponding to the gesture that is already recognized and used to manipulate this object. In some embodiments, the appearance of the user interface object is changed dynamically and continuously (e.g., showing different sizes, positions, perspectives, reflections, shadows, etc.) in accordance with the values of the respective movement parameter of the input. In some embodiments, the device follows a preset correspondence (e.g., a respective correspondence for each type of manipulation behavior) between the movement parameter (e.g., a respective movement parameter for each type of manipulation behavior) and the changes made to the appearance of the user interface object (e.g., a respective aspect of the appearance for each type of manipulation behavior). The device updates the third gesture-recognition criteria by increasing a threshold for the third gesture-recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the third gesture-recognition criteria). For example, before the criteria for recognizing a gesture for rotating the object are met, it is relatively easy to initiate a translation or scaling operation on the object by meeting the criteria for recognizing a gesture for translation or scaling (assuming that the criteria for translating or scaling have not been met before). Once the criteria for recognizing the gesture for rotating the object are met, it becomes harder to initiate the translation or scaling operation on the object (e.g., the criteria for translation and scaling are updated with increased thresholds for the movement parameter), and the object manipulation is biased toward the manipulation behavior corresponding to the gesture that is already recognized and used to manipulate this object. In accordance with a determination that the input meets the third gesture-recognition criteria before meeting the first gesture-recognition criteria or meeting the second gesture-recognition criteria: the device changes the appearance of the user interface object (e.g., resizing the user interface object) in accordance with the third object manipulation behavior based on the first portion of the input (e.g., based on a direction and/or magnitude of the first portion of the input); and (e.g., without changing an appearance of the user interface object in accordance with the first object manipulation behavior and the second object manipulation behavior) the device updates the first gesture-recognition criteria by increasing a threshold for the first gesture-recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the first gesture-recognition criteria). For example, before the criteria for recognizing a gesture for translating or scaling the object are met, it is relatively easy to initiate a rotation operation on the object by meeting the criteria for recognizing a gesture for rotation (assuming that the criteria for recognizing a gesture for rotating the object have not been met before). Once the criteria for recognizing the gesture for translating or scaling the object are met, it becomes harder to initiate the rotation operation on the object (e.g., the criteria for rotating the object are updated with an increased threshold for the movement parameter), and the object manipulation behavior is biased toward the manipulation behavior corresponding to the gesture that is already recognized and used to manipulate this object. In some embodiments, the appearance of the user interface object is changed dynamically and continuously (e.g., showing different sizes, positions, perspectives, reflections, shadows, etc.) in accordance with the values of the respective movement parameter of the input. In some embodiments, the device follows a preset correspondence (e.g., a respective correspondence for each type of manipulation behavior) between the movement parameter (e.g., a respective movement parameter for each type of manipulation behavior) and the changes made to the appearance of the user interface object (e.g., a respective aspect of the appearance for each type of manipulation behavior). The device updates the second gesture-recognition criteria by increasing the threshold for the second gesture-recognition criteria (e.g., increasing a threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the second gesture-recognition criteria). For example, before the criteria for recognizing a gesture for rotating the object are met, it is relatively easy to initiate a translation or scaling operation on the object by meeting the criteria for recognizing a gesture for translation or scaling (assuming that the criteria for translating or scaling have not been met before). Once the criteria for recognizing the gesture for rotating the object are met, it becomes harder to initiate the translation or scaling operation on the object (e.g., the criteria for translation and scaling are updated with increased thresholds for the movement parameter), and the object manipulation is biased toward the manipulation behavior corresponding to the gesture that is already recognized and used to manipulate this object. Updating the object in accordance with a third object manipulation behavior in response to a portion of the input detected only when corresponding third gesture-recognition criteria are met enhances the operability of the device (e.g., by helping the user to avoid accidentally performing a third object manipulation while attempting to provide input for performing a first object manipulation or a second object manipulation). Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
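For illustration only, the three-way biasing just described can be sketched as follows: whichever of rotate, translate, or scale is recognized first raises the thresholds of the two behaviors not yet recognized. The names and the escalation factor are hypothetical assumptions:

```swift
/// Sketch of the three-way case: recognizing one gesture raises the bar
/// for every behavior that has not yet been recognized, biasing the
/// manipulation toward the recognized behavior.
enum Manipulation: CaseIterable { case rotate, translate, scale }

struct ThreeGestureThresholds {
    var thresholds: [Manipulation: Double] =
        [.rotate: 1.0, .translate: 1.0, .scale: 1.0]   // arbitrary units
    var recognized: Set<Manipulation> = []

    mutating func didRecognize(_ gesture: Manipulation) {
        guard recognized.insert(gesture).inserted else { return }
        // Raise the thresholds of the behaviors not yet recognized.
        for other in Manipulation.allCases where !recognized.contains(other) {
            thresholds[other]! *= 1.75
        }
    }
}
```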
[00514] In some embodiments, the plurality of object manipulation behaviors includes (19032) a third object manipulation behavior that is performed in response to inputs that meet third gesture-recognition criteria, the first portion of the input did not meet the third gesture-recognition criteria before meeting the first gesture-recognition criteria or the second gesture-recognition criteria, the device updated the third gesture-recognition criteria by increasing a threshold for the third gesture-recognition criteria after the first portion of the input met the first gesture-recognition criteria or the second gesture-recognition criteria, and the second portion of the input did not meet the updated third gesture-recognition criteria (e.g., the device updated the third gesture-recognition criteria by increasing a threshold for the third gesture-recognition criteria after the first portion of the input had met one of the first or second gesture-recognition criteria) before meeting the updated first gesture-recognition criteria or the updated second gesture-recognition criteria. In response to detecting the third portion of the input (19034): in accordance with a determination that the third portion of the input meets the updated third gesture-recognition criteria (e.g., without regard to whether or not the third portion of the input meets the first or second gesture-recognition criteria (e.g., updated or original)), the device changes the appearance of the user interface object in accordance with the third object manipulation behavior based on the third portion of the input (e.g., based on a direction and/or magnitude of the third portion of the input) (e.g., while changing the appearance of the user interface object in accordance with the first and second object manipulation behaviors (e.g., even if the third portion of the input does not meet the original first and second gesture-recognition criteria)). In accordance with a determination that the third portion of the input does not meet the updated third gesture-recognition criteria, the device forgoes changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the third portion of the input (e.g., while changing the appearance of the user interface object in accordance with the first and second object manipulation behaviors (e.g., even if the third portion of the input does not meet the original first and second gesture-recognition criteria)). Updating the object in accordance with a first object manipulation behavior, a second object manipulation behavior, and a third object manipulation behavior in response to a portion of the input detected after second gesture-recognition criteria, updated first gesture-recognition criteria, and updated third gesture-recognition criteria are met enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using first, second, and third object manipulation types after establishing an intention to perform all three object manipulation types by satisfying the increased thresholds, without requiring the user to provide a new input). Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
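A minimal sketch of the determination just described, with hypothetical names: the already-recognized first and second behaviors apply to the third portion of the input regardless of the original thresholds, while the third behavior applies only if this portion crosses the updated third threshold, and is otherwise forgone:

```swift
/// Returns which manipulation behaviors apply to the third portion of
/// the input. Parameter names are illustrative assumptions.
func behaviorsForThirdPortion(thirdMetric: Double,
                              updatedThirdThreshold: Double)
    -> (first: Bool, second: Bool, third: Bool) {
    // First and second behaviors: already unlocked, no re-check.
    // Third behavior: gated by the *updated* (increased) threshold.
    let thirdMet = thirdMetric >= updatedThirdThreshold
    return (first: true, second: true, third: thirdMet)
}
```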
[00515] In some embodiments (19036), the third portion of the input met the updated third gesture-recognition criteria. After updating the appearance of the user interface object based on the third portion of the input (e.g., after both the first gesture-recognition criteria and the updated second and third gesture-recognition criteria are met, or after both the second gesture-recognition criteria and the updated first and third gesture-recognition criteria are met), the device detects (19038) a fourth portion of the input (e.g., by the same continuously maintained contacts in the first, second, and third portions of the input, or different contacts detected after termination (e.g., lift-off) of the contacts in the first, second, and third portions of the input). In response to detecting the fourth portion of the input, the device updates (19040) the appearance of the user interface object based on the fourth portion of the input, including: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the fourth portion of the input; changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the fourth portion of the input; and changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the fourth portion of the input. For example, after the first gesture-recognition criteria and the updated second and third gesture-recognition criteria are met, or after the second gesture-recognition criteria and the updated first and third gesture-recognition criteria are met, the input can cause all three types of manipulation behaviors subsequently without regard to the thresholds in the original or updated first, second, and third gesture-recognition criteria.
[00516] In some embodiments, the fourth portion of the input does not include (19042): input that meets the first gesture-recognition criteria, input that meets the second gesture-recognition criteria, or input that meets the third gesture-recognition criteria. For example, after the first gesture-recognition criteria and updated second and third gesture-recognition criteria are met, or after the second gesture-recognition criteria and the updated first and third gesture-recognition criteria are met, the input can cause all three types of manipulation behaviors subsequently without regard to the thresholds in the original or updated first, second, and third gesture-recognition criteria. Requiring a number of concurrently detected contacts for a gesture enhances the operability of the device (e.g., by helping the user to avoid accidentally performing an object manipulation while providing input with less than the required number of concurrently detected contacts). Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00517] In some embodiments (19044), the first gesture-recognition criteria and the second gesture-recognition criteria (and the third gesture-recognition criteria) both require a first number of concurrently detected contacts (e.g., two contacts) in order to be met. In some embodiments, a single-finger gesture can also be used for translation, and the single-finger translation threshold is lower than the two-finger translation threshold. In some embodiments, the original and updated movement thresholds set for a two-finger translation gesture are 40 points and 70 points of movement by the centroid of the contacts, respectively. In some embodiments, the original and updated movement thresholds set for a two-finger rotation gesture are 12 degrees and 18 degrees of rotational movement by the contacts, respectively. In some embodiments, the original and updated movement thresholds set for a two-finger scaling gesture are 50 points (contact-to-contact distance) and 90 points, respectively. In some embodiments, the threshold set for a single-finger drag gesture is 30 points.
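Collecting the example values from this paragraph in one place, as a sketch (the type and property names are hypothetical; the numbers are the ones quoted above):

```swift
/// Example original/updated thresholds quoted in the preceding paragraph.
/// Names are illustrative; values are taken from the text.
enum ExampleThresholds {
    // Two-finger translation: movement of the centroid of the contacts.
    static let translationOriginalPoints = 40.0
    static let translationUpdatedPoints = 70.0
    // Two-finger rotation: rotational movement of the contacts.
    static let rotationOriginalDegrees = 12.0
    static let rotationUpdatedDegrees = 18.0
    // Two-finger scaling: change in contact-to-contact distance.
    static let scalingOriginalPoints = 50.0
    static let scalingUpdatedPoints = 90.0
    // Single-finger drag (lower than the two-finger translation threshold).
    static let singleFingerDragPoints = 30.0
}
```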
[00518] In some embodiments (19046), the first object manipulation behavior changes a zoom level or displayed size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of contacts toward one another, e.g., after the pinch gesture is recognized based on the first gesture-recognition criteria (e.g., original or updated))) and the second object manipulation behavior changes a rotational angle of the user interface object (e.g., changing a viewing perspective of the user interface object around an external or internal axis by a twist/pivot gesture (e.g., movement of contacts around a common locus, e.g., after the twist/pivot gesture is recognized by the second gesture-recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes a displayed size of virtual object 11002 as described with regard to Figures 14G-14I and the second object manipulation behavior changes a rotational angle of virtual object 11002 as described with regard to Figures 14B-14E. In some embodiments, the second object manipulation behavior changes a zoom level or displayed size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of contacts toward one another, e.g., after the pinch gesture is recognized based on the second gesture-recognition criteria (e.g., original or updated))) and the first object manipulation behavior changes a rotational angle of the user interface object (e.g., changing a viewing perspective of the user interface object around an external or internal axis by a twist/pivot gesture (e.g., movement of contacts around a common locus, e.g., after the twist/pivot gesture is recognized by the first gesture-recognition criteria (e.g., original or updated))).
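For illustration only, the geometric mapping just described can be sketched from two contact positions: a pinch (change in contact-to-contact distance) drives zoom/display size, and a twist (contacts moving around a common locus) drives rotational angle. All names are hypothetical:

```swift
import Foundation   // for hypot and atan2

/// One sample of a two-contact gesture; names are illustrative.
struct TwoContactSample {
    var a: (x: Double, y: Double)
    var b: (x: Double, y: Double)

    var distance: Double { hypot(b.x - a.x, b.y - a.y) }
    var angle: Double { atan2(b.y - a.y, b.x - a.x) }   // radians
}

/// Pinch in yields a factor below 1; pinch out, above 1.
func scaleFactor(from start: TwoContactSample, to now: TwoContactSample) -> Double {
    now.distance / start.distance
}

/// Twist of the contacts around their common locus, in radians.
func rotationDelta(from start: TwoContactSample, to now: TwoContactSample) -> Double {
    now.angle - start.angle
}
```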
[00519] In some embodiments (19048), the first object manipulation behavior changes a zoom level or displayed size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of contacts toward one another, e.g., after the pinch gesture is recognized based on the first gesture-recognition criteria (e.g., original or updated))) and the second object manipulation behavior changes a position of the user interface object in the first user interface region (e.g., dragging the user interface object by a one-finger or two-finger drag gesture (e.g., movement of contacts in a respective direction, e.g., after the drag gesture is recognized by the second gesture-recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes a displayed size of virtual object 11002 as described with regard to Figures 14G-14I and the second object manipulation behavior changes a position of virtual object 11002 in a user interface as described with regard to Figures 14B-14E. In some embodiments, the second object manipulation behavior changes a zoom level or displayed size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of contacts toward one another, e.g., after the pinch gesture is recognized based on the second gesture-recognition criteria (e.g., original or updated))) and the first object manipulation behavior changes a position of the user interface object in the first user interface region (e.g., dragging the user interface object by a one-finger or two-finger drag gesture (e.g., movement of contacts in a respective direction, e.g., after the drag gesture is recognized by the first gesture-recognition criteria (e.g., original or updated))).
[00520] In some embodiments (19050), the first object manipulation behavior changes a position of the user interface object in the first user interface region (e.g., dragging the object by a one-finger or two-finger drag gesture (e.g., movement of contacts in a respective direction, e.g., after the drag gesture is recognized by the first gesture-recognition criteria (e.g., original or updated))) and the second object manipulation behavior changes a rotational angle of the user interface object (e.g., changing a viewing perspective of the user interface object around an external or internal axis by a twist/pivot gesture (e.g., movement of contacts around a common locus, e.g., after the twist/pivot gesture is recognized by the second gesture-recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes a position of virtual object 11002 in a user interface as described with regard to Figures 14B-14E and the second object manipulation behavior changes a rotational angle of virtual object 11002 as described with regard to Figures 14B-14E. In some embodiments, the second object manipulation behavior changes a position of the user interface object in the first user interface region (e.g., dragging the object by a one-finger or two-finger drag gesture (e.g., movement of contacts in a respective direction, e.g., after the drag gesture is recognized by the second gesture-recognition criteria (e.g., original or updated))) and the first object manipulation behavior changes a rotational angle of the user interface object (e.g., changing a viewing perspective of the user interface object around an external or internal axis by a twist/pivot gesture (e.g., movement of contacts around a common locus, e.g., after the twist/pivot gesture is recognized by the first gesture-recognition criteria (e.g., original or updated))).
[00521]
[00521] In some In embodiments some embodiments (19052), (19052), thethe firstportion first portionofofthe the input input and and the the second second portion of the input are provided by a plurality of continuously maintained contacts. The portion of the input are provided by a plurality of continuously maintained contacts. The
device re-establishes (19054) the first gesture-recognition criteria and the second gesture- device re-establishes (19054) the first gesture-recognition criteria and the second gesture-
(e.g., with the original thresholds) to initiate additional first and second object-manipulation behaviors after detecting lift-off of the plurality of continuously maintained contacts. For example, after lift-off of the contacts, the device reestablishes the gesture-recognition thresholds for rotation, translation, and scaling for a newly detected touch input. Re-establishing a threshold for input movement after an input is ended by lift-off of the contacts enhances the operability of the device (e.g., by reducing the extent of input required for performing an object manipulation by resetting increased movement thresholds each time a new input is provided). Reducing the extent of input needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
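By way of illustration only, the threshold-reset behavior described in this paragraph could be sketched along the following lines; the Swift names used here (GestureThresholds, ManipulationSession) and the example threshold values are hypothetical and not part of the disclosed embodiments:

    // Hypothetical sketch: per-gesture movement thresholds that are raised once
    // a first manipulation is recognized, and restored to their original values
    // when all continuously maintained contacts lift off.
    struct GestureThresholds {
        var rotationDegrees: Double   // twist required before rotation begins
        var translationPoints: Double // drag required before translation begins
        var scalingDelta: Double      // pinch delta required before scaling begins

        static let original = GestureThresholds(
            rotationDegrees: 12, translationPoints: 10, scalingDelta: 0.05)
    }

    final class ManipulationSession {
        private(set) var thresholds = GestureThresholds.original

        // Once one behavior (e.g., rotation) is recognized, competing behaviors
        // require more movement, reducing accidental mixed manipulation.
        func didRecognizeRotation() {
            thresholds.translationPoints *= 2
            thresholds.scalingDelta *= 2
        }

        // Lift-off of the plurality of contacts re-establishes the original
        // first and second gesture-recognition criteria for the next input.
        func allContactsLifted() {
            thresholds = .original
        }
    }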
[00522] In some embodiments (19056), the first gesture-recognition criteria correspond to rotation around a first axis, and the second gesture-recognition criteria correspond to rotation around a second axis that is orthogonal to the first axis. In some embodiments, instead of updating thresholds for different types of gestures, the updating also applies to thresholds set for different sub-types of manipulation behavior (e.g., rotation around a first axis vs. rotation around a different axis) within a type of manipulation behavior corresponding to a recognized gesture type (e.g., twist/pivot gesture). For example, once rotation around a first axis is recognized and performed, the threshold set for rotation around a different axis is updated (e.g., increased) and has to be overcome by the subsequent input in order to trigger rotation around the different axis. Increasing a threshold for input movement required for rotating an object about a first axis when input movement increases above a threshold for input movement required for rotating an object about a second axis enhances the operability of the device (e.g., by helping the user to avoid accidentally rotating an object about a second axis while attempting to rotate the object about a first axis). Reducing the number of inputs needed to perform an operation improves the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
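A minimal sketch of this per-axis variant, assuming hypothetical names (RotationAxis, AxisThresholds) and example values:

    // Hypothetical sketch: recognizing rotation about one axis raises the
    // movement threshold for rotation about the orthogonal axis, which the
    // subsequent input must overcome.
    enum RotationAxis { case first, second }

    struct AxisThresholds {
        private var degreesRequired: [RotationAxis: Double] = [.first: 10, .second: 10]

        mutating func didRecognizeRotation(about axis: RotationAxis) {
            let other: RotationAxis = (axis == .first) ? .second : .first
            degreesRequired[other, default: 10] *= 3 // e.g., raise 10 degrees to 30
        }

        func shouldBeginRotation(about axis: RotationAxis, accumulatedDegrees: Double) -> Bool {
            accumulatedDegrees >= degreesRequired[axis, default: 10]
        }
    }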
[00523] It should be understood that the particular order in which the operations in Figures 19A-19H have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described
herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 20000) are also applicable in an analogous manner to method 19000 described above with respect to Figures 19A-19H. For example, contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 19000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 20000). For brevity, these details are not repeated here.
[00524] Figures 20A-20F are flow diagrams illustrating method 20000 of generating an audio alert in accordance with a determination that movement of a device causes a virtual object to move outside of a displayed field of view of one or more device cameras. Method 20000 is performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) having a display generation component (e.g., a display, a projector, a heads up display or the like), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface), one or more audio output generators, and one or more cameras. Some operations in method 20000 are, optionally, combined and/or the order of some operations is, optionally, changed.
[00525] The device displays (20002) (e.g., in response to a request to place a virtual object in an augmented reality view of a physical environment surrounding the device including the camera (e.g., in response to a tap on the “world” button displayed with the staging view of the virtual object)), via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras (e.g., the first user interface region is a user interface displaying the augmented reality view of the physical environment surrounding the device including the camera), wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras (e.g., the virtual object is displayed with an orientation and a position on the display such that a fixed angle between the representation of the virtual object and the plane is maintained (e.g., the virtual object appears to stay at a fixed location on the plane or roll along the field of view plane)). For example, as
shown in Figure 15V, virtual object 11002 is displayed in a user interface region that includes field of view 6036 of one or more cameras.
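As an illustrative sketch only, one way to realize the fixed spatial relationship is to store the object's pose relative to the detected plane in world coordinates, so that camera movement changes only the on-screen projection; the names below (PlaneAnchor, worldToScreen) are hypothetical:

    // Hypothetical sketch: the object-to-plane relationship is fixed in world
    // space; each frame, only the screen projection is recomputed from the
    // camera's current field of view.
    struct PlaneAnchor {
        let planeOrigin: SIMD3<Double>
        let offsetOnPlane: SIMD3<Double>   // the first spatial relationship (fixed)

        var worldPosition: SIMD3<Double> { planeOrigin + offsetOnPlane }

        // `worldToScreen` stands in for the camera's current projection.
        func screenPosition(worldToScreen: (SIMD3<Double>) -> SIMD2<Double>) -> SIMD2<Double> {
            worldToScreen(worldPosition)
        }
    }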
[00526] The device detects (20004) movement of the device (e.g., lateral movement and/or rotation of the device, including the one or more cameras) that adjusts the field of view of the one or more cameras. For example, as described with regard to Figures 15V-15W, movement of device 100 adjusts the field of view of one or more cameras.
[00527] In response to detecting movement of the device that adjusts the field of view of the one or more cameras (20006): the device adjusts display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship (e.g., orientation and/or position) between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted, and, in accordance with a determination that the movement of the device causes more than a threshold amount (e.g., 100%, 50%, or 20%) of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras (e.g., because the spatial relationship between the representation of the virtual object and the plane detected within the physical environment that is captured in the field of view of the one or more cameras remains fixed during movement of the device relative to the physical environment), the device generates, via the one or more audio output generators, a first audio alert (e.g., a voice announcement indicating that more than a threshold amount of the virtual object is no longer displayed in the camera view). For example, as described with regard to Figure 15W, in response to movement of device 100 that causes virtual object 11002 to move outside of a displayed portion of the field of view 6036 of the one or more cameras, audio alert 15118 is generated. Generating an audio output in accordance with a determination that movement of a device causes a virtual object to move outside of a displayed augmented reality view provides the user with feedback indicating an extent to which movement of the device has affected display of the virtual object relative to the augmented reality view. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows a user to perceive whether a virtual object has moved off of the display without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
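A minimal sketch of the off-screen determination, assuming the object's projected bounds and the displayed portion of the field of view are both available as rectangles; the Rect type and function names here are hypothetical:

    // Hypothetical sketch: compare the on-screen fraction of the object's
    // projected bounds against a threshold and fire the first audio alert when
    // more than that amount has moved out of view.
    struct Rect {
        var minX, minY, maxX, maxY: Double
        var area: Double { max(0, maxX - minX) * max(0, maxY - minY) }
        func intersection(_ other: Rect) -> Rect {
            Rect(minX: max(minX, other.minX), minY: max(minY, other.minY),
                 maxX: min(maxX, other.maxX), maxY: min(maxY, other.maxY))
        }
    }

    func visibleFraction(of objectBounds: Rect, in screen: Rect) -> Double {
        objectBounds.area > 0 ? objectBounds.intersection(screen).area / objectBounds.area : 0
    }

    // threshold = 1.0, 0.5, or 0.2 corresponds to 100%, 50%, or 20% moved out.
    func fieldOfViewDidChange(objectBounds: Rect, screen: Rect,
                              threshold: Double, announce: (String) -> Void) {
        let visible = visibleFraction(of: objectBounds, in: screen)
        if (1 - visible) > threshold {
            announce("Object is \(Int((visible * 100).rounded())) percent visible.")
        }
    }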
[00528] In some embodiments, outputting the first audio alert includes (20008) generating an audio output that indicates an amount of the virtual object that remains visible on the displayed portion of the field of view of the one or more cameras (e.g., the amount of the virtual object that remains visible is measured relative to the total size of the virtual object from the current viewing perspective (e.g., 20%, 25%, 50%, etc.)) (e.g., the audio output says, “object x is 20% visible.”). For example, in response to movement of device 100 that causes virtual object 11002 to move partially outside of a displayed portion of the field of view 6036 of the one or more cameras, as described with regard to Figures 15X-15Y, audio alert 15126 is generated that includes announcement 15128 indicating, “chair is 90 percent visible, occupying 20 percent of screen.” Generating an audio output that indicates an amount of a virtual object visible in a displayed augmented reality view provides the user with feedback (e.g., indicating an extent to which movement of the device changed the degree to which the virtual object is visible). Providing improved feedback to the user (e.g., by providing information that allows a user to perceive whether a virtual object has moved off of the display without cluttering the display with additional displayed information and without requiring the user to view the display) enhances the operability of the device and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00529] In some embodiments, outputting the first audio alert includes (20010) generating an audio output that indicates an amount of the displayed portion of the field of view that is occluded by the virtual object (e.g., the amount of the augmented reality view of the physical environment that is occupied by the virtual object (e.g., 20%, 25%, 50%, etc.)) (e.g., the audio output includes an announcement that says, “object x occupying 15% of the world view”). In some embodiments, the audio output also includes a description of the action performed by the user that caused the changes in the display state of the virtual object. For example, the audio output includes an announcement that says, “device moved to the left; object x is 20% visible, occupying 15% of the world view.” For example, in Figure 15Y, audio alert 15126 is generated that includes announcement 15128 indicating, “chair is 90 percent visible, occupying 20 percent of screen.” Generating an audio output that indicates an amount of the augmented reality view that is occluded by the virtual object provides the user with feedback (e.g., indicating an extent to which movement of the device changed the degree to which the augmented reality view is occluded). Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows a user
to perceive the size of the virtual object relative to the display without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
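Continuing the hypothetical sketch above (and reusing its Rect type), the visibility and screen-occupancy percentages in announcements such as “chair is 90 percent visible, occupying 20 percent of screen” could be derived from the same intersection:

    // Hypothetical sketch: one intersection yields both the fraction of the
    // object that remains visible and the fraction of the screen it occludes.
    func visibilityAnnouncement(name: String, objectBounds: Rect, screen: Rect) -> String {
        let shownArea = objectBounds.intersection(screen).area
        let visiblePct = objectBounds.area > 0
            ? Int((100 * shownArea / objectBounds.area).rounded()) : 0
        let occupiedPct = screen.area > 0
            ? Int((100 * shownArea / screen.area).rounded()) : 0
        return "\(name) is \(visiblePct) percent visible, occupying \(occupiedPct) percent of screen."
    }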
[00530] In some embodiments, the device detects (20012) an input by a contact at a location on the touch-sensitive surface that corresponds to the representation of the field of view of the one or more cameras (e.g., detecting a tap input or double tap input on a portion of the touch-screen that displays the augmented reality view of the physical environment). In response to detecting the input, and in accordance with a determination that the input is detected at a first location on the touch-sensitive surface that corresponds to a first portion of the field of view of the one or more cameras that is not occupied by the virtual object, the device generates (20014) a second audio alert (e.g., a click or buzz that indicates a failure to locate the virtual object in the tapped region). For example, as described with regard to Figure 15Z, in response to an input detected at a location on touch screen 112 that corresponds to a portion of field of view 6036 of the one or more cameras that is not occupied by virtual object 11002, the device generates audio alert 15130. In some embodiments, in response to detecting the input, in accordance with a determination that the input is detected at a second location that corresponds to a second portion of the field of view of the one or more cameras that is occupied by the virtual object, the device forgoes generating the second audio alert. In some embodiments, instead of generating the second audio alert to indicate the user’s failure to locate the virtual object, the device generates a different audio alert indicating that the user has located the virtual object. In some embodiments, instead of generating the second audio alert, the device outputs an audio announcement describing an operation that is performed on the virtual object (e.g., “Object x selected.” “Object x is resized to a default size.” “Object x is rotated to a default orientation.” etc.) or the state of the virtual object (e.g., “Object x, 20% visible, occupying 15% of the world view.”). Generating an audio output in response to an input detected at a location that corresponds to a part of the displayed augmented reality view not occupied by the virtual object provides the user with feedback (e.g., indicating that the input must be provided at a different location (e.g., to obtain information about the virtual object and/or perform an operation)). Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows a user to perceive whether the input successfully connected with a virtual object
without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
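An illustrative sketch of the hit-test branch, again using the hypothetical Rect type; the names TapFeedback and feedbackForTap are invented for illustration:

    // Hypothetical sketch: a tap outside the object's on-screen bounds yields
    // the second audio alert (e.g., a click or buzz); a tap inside them may
    // instead announce the object's state or a performed operation.
    enum TapFeedback {
        case missedObject                       // second audio alert
        case locatedObject(description: String) // alternative announcement
    }

    func feedbackForTap(x: Double, y: Double, objectBounds: Rect,
                        objectDescription: String) -> TapFeedback {
        let inside = x >= objectBounds.minX && x <= objectBounds.maxX
                  && y >= objectBounds.minY && y <= objectBounds.maxY
        return inside ? .locatedObject(description: objectDescription) : .missedObject
    }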
[00531] In some embodiments, outputting the first audio alert includes generating (20016) an audio output that indicates an operation that is performed with respect to the virtual object (e.g., before generating the audio output, the device determines the currently selected operation and performs the operation in response to an input (e.g., a double tap) confirming the user’s intent to execute the currently selected operation) and a resulting state of the virtual object after the performance of the operation. For example, the audio output includes an announcement that says, “device moved to the left; object x is 20% visible, occupying 15% of the world view,” “object x is rotated clockwise by 30 degrees; object is rotated by 50 degrees around the y-axis,” or “object x enlarged by 20% and occupies 50% of the world view.” For example, as described with regard to Figures 15AH-15AI, in response to performance of a rotation operation with respect to virtual object 11002, audio alert 15190 is generated that includes announcement 15192 indicating “Chair is rotated by five degrees counterclockwise. Chair is now rotated by zero degrees relative to the screen.” Generating an audio output that indicates an operation performed on the virtual object provides the user with feedback indicating how provided input affects a virtual object. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows a user to perceive how an operation has altered a virtual object without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00532] In some embodiments (20018), the resulting state of the virtual object after performance of the operation is described in the audio output in the first audio alert in relation to a reference frame corresponding to the physical environment captured in the field of view of the one or more cameras (e.g., after manipulating the object (e.g., in response to a touch-based gesture or movement of the device), the device generates a voice over describing the new state of the object (e.g., rotated 30 degrees, rotated 60 degrees, or moved left, relative to the initial position/orientation of the virtual object when it was initially placed into the augmented reality view of the physical environment)). For example, as described with regard
to Figures 15AH-15AI, in response to performance of a rotation operation with respect to virtual object 11002, audio alert 15190 is generated that includes announcement 15192 indicating “Chair is rotated by five degrees counterclockwise. Chair is now rotated by zero degrees relative to the screen.” In some embodiments, the operation includes movement of the device relative to the physical environment (e.g., causing movement of the virtual object relative to the representation of the portion of the physical environment captured in the field of view of the one or more cameras), and the voice over describes the new state of the virtual object in response to the movement of the device relative to the physical environment. Generating an audio output that indicates a state of the virtual object after an operation is performed on the object provides the user with feedback that allows a user to perceive how an operation has altered a virtual object. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows a user to perceive how an operation has altered a virtual object without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
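A minimal sketch of composing such an announcement, assuming the object's cumulative rotation relative to its initial placement is tracked; ObjectState and announceRotation are hypothetical names:

    // Hypothetical sketch: the announcement names both the delta just applied
    // and the resulting state relative to the initial placement, e.g. "Chair is
    // rotated by five degrees counterclockwise. Chair is now rotated by zero
    // degrees relative to the screen."
    struct ObjectState {
        var rotationDegrees: Double = 0   // relative to initial placement
    }

    func announceRotation(name: String, deltaDegrees: Double,
                          state: inout ObjectState) -> String {
        state.rotationDegrees += deltaDegrees
        let direction = deltaDegrees < 0 ? "counterclockwise" : "clockwise"
        return "\(name) is rotated by \(Int(abs(deltaDegrees))) degrees \(direction). "
             + "\(name) is now rotated by \(Int(state.rotationDegrees)) degrees relative to the screen."
    }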
[00533] In some embodiments, the device detects (20020) additional movement of the device (e.g., lateral movement and/or rotation of the device, including the one or more cameras) that further adjusts the field of view of the one or more cameras after generation of the first audio alert. For example, as described with regard to Figures 15W-15X, movement of device 100 further adjusts the field of view of one or more cameras (following adjustment of the field of view of one or more cameras that occurs in response to movement of device 100 from 15V-15W). In response to detecting the additional movement of the device that further adjusts the field of view of the one or more cameras (20022): the device adjusts display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship (e.g., orientation and/or position) between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is further adjusted, and, in accordance with a determination that the additional movement of the device causes more than a second threshold amount (e.g., 50%, 80%, or 100%) of the virtual object to move into a displayed portion of the field of view of the one or more cameras (e.g., because the spatial relationship between the representation of the virtual object and the plane detected within the physical environment that is captured in the field of view of the one or more cameras remains fixed
during movement of the device relative to the physical environment), the device generates, via the one or more audio output generators, a third audio alert (e.g., an audio output that includes an announcement indicating that more than a threshold amount of the virtual object is moved back into the camera view). For example, as described with regard to Figure 15X, in response to movement of device 100 that causes virtual object 11002 to move into a displayed portion of the field of view 6036 of the one or more cameras, audio alert 15122 is generated (e.g., including the announcement, “Chair is now projected in the world, 100 percent visible, occupying 10 percent of the screen”). Generating an audio output in accordance with a determination that movement of a device causes a virtual object to move into a displayed augmented reality view provides the user with feedback indicating an extent to which movement of the device has affected display of the virtual object relative to the augmented reality view. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows a user to perceive whether a virtual object has moved into the display without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
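As a sketch of one way the exit (first) and re-entry (third) alerts could be kept from repeating as the object oscillates near the edge of the view, using the two distinct thresholds described above; the class, property names, and once-per-transition state are hypothetical:

    // Hypothetical sketch: distinct exit and re-entry thresholds act as a
    // hysteresis band, so each alert fires once per transition.
    final class VisibilityAnnouncer {
        private var announcedOffscreen = false
        let exitThreshold: Double    // e.g., 0.5: alert when > 50% moves out
        let reentryThreshold: Double // e.g., 0.8: alert when > 80% is back in view

        init(exitThreshold: Double = 0.5, reentryThreshold: Double = 0.8) {
            self.exitThreshold = exitThreshold
            self.reentryThreshold = reentryThreshold
        }

        func update(visibleFraction: Double, announce: (String) -> Void) {
            if !announcedOffscreen, (1 - visibleFraction) > exitThreshold {
                announcedOffscreen = true
                announce("Object has moved mostly out of view.") // first alert
            } else if announcedOffscreen, visibleFraction > reentryThreshold {
                announcedOffscreen = false
                announce("Object is now back in view.")          // third alert
            }
        }
    }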
[00534] In some embodiments, while displaying the representation of the virtual object in the first user interface region and while a first object manipulation type of a plurality of object manipulation types applicable to the virtual object is currently selected for the virtual object, the device detects (20024) a request to switch to another object manipulation type applicable to the virtual object (e.g., detecting a swipe input by a contact (e.g., including movement of the contact in a horizontal direction) at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras). For example, as described with regard to Figure 15AG, while a clockwise rotation control 15170 is currently selected, a swipe input is detected for switching to counterclockwise rotation control 15180 (for rotating virtual object 15160 counterclockwise). In response to detecting the request to switch to another object manipulation type applicable to the virtual object, the device generates (20026) an audio output that names a second object manipulation type among a plurality of object manipulation types applicable to the virtual object (e.g., the audio output includes an announcement that says, “rotate object around x-axis,” “resize object,” or “move object on the plane,” etc.), wherein the second object manipulation type is distinct from the first object
manipulation type. For example, in Figure 15AH, in response to detection of the request described with regard to 15AG, audio alert 15182 is generated, including announcement 15184 (“selected: rotate counterclockwise”). In some embodiments, the device iterates through a predefined list of applicable object manipulation types in response to consecutive swipe inputs in the same direction. In some embodiments, in response to detecting a swipe input in the reverse direction from the immediately preceding swipe input, the device generates an audio output that includes an announcement that names a previously announced object manipulation type applicable to the virtual object (e.g., the one before the last announced object manipulation type). In some embodiments, the device does not display a corresponding control for each object manipulation type applicable to the virtual object (e.g., there is no button or control displayed for operations that are initiated by gestures (e.g., rotation, resizing, translation, etc.)). Generating an audio output in response to a request to switch an object manipulation type provides the user with feedback indicating that the switch operation has been performed. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that confirms that a switching input was successfully performed without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
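A minimal sketch of the swipe-driven selection, assuming a hypothetical predefined list of manipulation types and invented names (ManipulationType, ManipulationSelector):

    // Hypothetical sketch: consecutive swipes in one direction iterate forward
    // through the applicable manipulation types; a reverse swipe steps back.
    // Each switch is announced by name.
    enum ManipulationType: String, CaseIterable {
        case rotateClockwise = "rotate clockwise"
        case rotateCounterclockwise = "rotate counterclockwise"
        case resize = "resize object"
        case move = "move object on the plane"
    }

    final class ManipulationSelector {
        private var index = 0
        var selected: ManipulationType { ManipulationType.allCases[index] }

        func swipe(forward: Bool, announce: (String) -> Void) {
            let count = ManipulationType.allCases.count
            index = (index + (forward ? 1 : count - 1)) % count
            announce("selected: \(selected.rawValue)")
        }
    }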
[00535] In some embodiments, after generating (20028) an audio output that names the second object manipulation type among the plurality of object manipulation types applicable to the virtual object (e.g., the audio output includes an announcement that says, “rotate object around x-axis,” “resize object,” or “move object on the plane,” etc.), the device detects a request to execute an object manipulation behavior corresponding to a currently selected object manipulation type (e.g., detecting a double tap input by a contact at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras). For example, as described with regard to Figure 15AH, a double tap input is detected for rotating virtual object 11002 counterclockwise. In response to detecting the request to perform the object manipulation behavior corresponding to the currently selected object manipulation type, the device executes (20030) an object manipulation behavior that corresponds to the second object manipulation type (e.g., rotating the virtual object around the y axis by 5 degrees, or
increasing the size of the object by 5%, or moving the object on the plane by 20 pixels) (e.g., adjusting display of the representation of the virtual object in the first user interface region in accordance with the second object manipulation type). For example, in Figure 15AI, in response to detection of the request described with regard to 15AH, virtual object 11002 is rotated counterclockwise. In some embodiments, the device, in addition to executing the object manipulation behavior that corresponds to the second object manipulation type, outputs an audio output that includes an announcement that indicates the object manipulation behavior that is executed with respect to the virtual object and a resulting state of the virtual object after the execution of the object manipulation behavior. For example, in Figure 15AI, audio output 15190 is generated that includes announcement 15192 (“Chair rotated by five degrees counterclockwise. Chair is now rotated by zero degrees relative to the screen”). Performing an object manipulation operation in response to an input detected while the operation is selected provides an additional control option for performing the operation (e.g., allowing the user to perform the operation by providing a tap input rather than requiring a two-contact input). Providing an additional control option for providing an input without cluttering the user interface with additional displayed controls enhances the operability of the device (e.g., by providing users that have limited ability to provide multi-contact gestures with an option for manipulating the object) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
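Continuing the hypothetical sketches above (reusing ManipulationType, ObjectState, and announceRotation), a double tap might apply one fixed step of the currently selected manipulation and announce the result:

    // Hypothetical sketch: a double tap executes the currently selected
    // manipulation by a fixed increment (e.g., 5 degrees of rotation).
    func executeSelectedManipulation(_ type: ManipulationType,
                                     on state: inout ObjectState,
                                     announce: (String) -> Void) {
        switch type {
        case .rotateClockwise:
            announce(announceRotation(name: "Chair", deltaDegrees: 5, state: &state))
        case .rotateCounterclockwise:
            announce(announceRotation(name: "Chair", deltaDegrees: -5, state: &state))
        case .resize, .move:
            break // analogous fixed-step adjustments, omitted in this sketch
        }
    }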
[00536] In some embodiments, in response to detecting the request to switch to another object manipulation type applicable to the virtual object (20032): in accordance with a determination that the second object manipulation type is a continuously adjustable manipulation type, the device generates an audio alert in conjunction with the audio output that names the second object manipulation type, to indicate that the second object manipulation type is a continuously adjustable manipulation type (e.g., outputting an audio output that says “adjustable” after the audio announcement that names the second object manipulation type (e.g., “rotate object clockwise around the y axis”)); the device detects a request to execute the object manipulation behavior that corresponds to the second object manipulation type, including detecting a swipe input at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras (e.g., after detecting a double tap input by a contact at a location on the touch-sensitive surface that corresponds to a portion
218
1005066680
of the first user interface region that displays the representation of the field of view of the one of the first user interface region that displays the representation of the field of view of the one 10 Jan 2024
or more cameras)); and in response to detecting the request to execute the object or more cameras)); and in response to detecting the request to execute the object
manipulationbehavior manipulation behaviorcorresponding correspondingto to thesecond the second objectmanipulation object manipulation type, type, thethedevice device executes the executes the object object manipulation behaviorcorresponding manipulation behavior correspondingtotothe thesecond secondobject objectmanipulation manipulation type by an amount that corresponds to a magnitude of the swipe input (e.g., rotating the type by an amount that corresponds to a magnitude of the swipe input (e.g., rotating the
virtual object around the y axis by 5 degrees or 10 degrees, or increasing the size of the object virtual object around the y axis by 5 degrees or 10 degrees, or increasing the size of the object
by 5% by 5%oror10%, 10%,orormoving movingthethe object object on on thethe plane plane byby 2020 pixelsoror4040pixels, pixels pixels,depending dependingonon 2024200149
whether the magnitude of the swipe input is a first amount or a second amount that is larger whether the magnitude of the swipe input is a first amount or a second amount that is larger
than the than the first firstamount). amount).For Forexample, example, as as described described with with regard regard to to Figures Figures 15J-15K, while aa 15J-15K, while
rotate clockwise control 15038 is currently selected, a swipe input is detected for switching to rotate clockwise control 15038 is currently selected, a swipe input is detected for switching to
a zoom a control15064. zoom control 15064.Audio Audio alert15066 alert 15066isisgenerated generatedthat thatincludes includesannouncement announcement 15068 15068
(“scale: adjustable”). ("scale: adjustable").As Asdescribed described with with regard regard to toFigures Figures15K-15L, an swipe 15K-15L, an swipeinput inputis is detected for zooming in on virtual object 11002, and, in response to the input, a zoom detected for zooming in on virtual object 11002, and, in response to the input, a zoom
operation is performed on virtual object 11002 (in the illustrative example of Figures 15K- operation is performed on virtual object 11002 (in the illustrative example of Figures 15K-
15L, an input 15L, an input for for continuously continuously adjustable adjustable manipulation is detected manipulation is detected while while staging staging view view
interface 6010 is displayed, but it will be recognized that a similar input may be detected at a interface 6010 is displayed, but it will be recognized that a similar input may be detected at a
location on the touch-sensitive surface that corresponds to a portion of the first user interface location on the touch-sensitive surface that corresponds to a portion of the first user interface
region that displays the representation of the field of view of the one or more cameras). In region that displays the representation of the field of view of the one or more cameras). In
someembodiments, some embodiments,thethe device, device, in in additiontotoexecuting addition executingthe thesecond secondobject objectmanipulation manipulation behavior, outputs behavior, outputs an an audio audio announcement announcement thatindicates that indicatesthe theamount amountof of theobject the object manipulation behavior that is executed with respect to the virtual object and a resulting state manipulation behavior that is executed with respect to the virtual object and a resulting state
of the virtual object after the execution of the object manipulation behavior by that amount. of the virtual object after the execution of the object manipulation behavior by that amount.
Performingananobject Performing objectmanipulation manipulationoperation operationininresponse responsetotoa aswipe swipeinput inputprovides providesanan additional control option for performing the operation (e.g., allowing the user to perform the additional control option for performing the operation (e.g., allowing the user to perform the
operation by operation by providing providingaa swipe swipeinput inputrather rather than than requiring requiring aa two-contact two-contact input). input). Providing an Providing an
additional control option for providing an input without cluttering the user interface with additional control option for providing an input without cluttering the user interface with
additional displayed controls (e.g., by providing users that have limited ability to provide additional displayed controls (e.g., by providing users that have limited ability to provide
multi-contact gestures multi-contact gestures with with an an option option for for manipulating the object) manipulating the object) and and makes the user-device makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of interface more efficient which, additionally, reduces power usage and improves battery life of
the device by enabling the user to use the device more quickly and efficiently. the device by enabling the user to use the device more quickly and efficiently.
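A minimal sketch of the magnitude-to-amount mapping described above follows, assuming (as illustration only, not as part of the disclosure) a single threshold that separates a "first amount" swipe from a "second amount" swipe; the type and function names, the threshold value, and the units are all hypothetical.

    // Hypothetical sketch: mapping swipe magnitude to an adjustment amount
    // for a continuously adjustable manipulation type, using the example
    // amounts given above (5 or 10 degrees, 5% or 10%, 20 or 40 pixels).
    enum AdjustableManipulation {
        case rotateAroundY, scale, translateOnPlane
    }

    func adjustmentAmount(for type: AdjustableManipulation,
                          swipeMagnitude: Double,
                          threshold: Double = 100.0) -> Double {
        // A larger swipe selects the larger of the two example amounts.
        let isLarge = swipeMagnitude > threshold
        switch type {
        case .rotateAroundY:    return isLarge ? 10.0 : 5.0   // degrees
        case .scale:            return isLarge ? 0.10 : 0.05  // fraction of current size
        case .translateOnPlane: return isLarge ? 40.0 : 20.0  // pixels
        }
    }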
[00537] In some embodiments, prior to displaying the representation of the virtual object in the first user interface region, the device displays (20034) the representation of the virtual object in a second user interface region (e.g., a staging user interface), wherein the second user interface region does not include a representation of the field of view of one or more cameras (e.g., the second user interface region is a staging user interface in which the virtual object can be manipulated (e.g., rotated, resized, and moved) without maintaining a fixed relationship to a plane detected in the physical environment captured in the field of view of the cameras). While displaying the representation of the virtual object in the second user interface region and a first operation of a plurality of operations applicable to the virtual object is currently selected for the virtual object, the device detects (20036) a request to switch to another operation applicable to the virtual object (e.g., including a request to switch an object manipulation type applicable to the virtual object in the second user interface region (e.g., resize, rotate, tilt, etc.) or a user interface operation applicable to the virtual object in the second user interface region (e.g., go back to 2D user interface, drop object into the augmented reality view of the physical environment)) (e.g., detecting the request includes detecting a swipe input by a contact (e.g., including movement of the contact in a horizontal direction) at a location on the touch-sensitive surface that corresponds to the first user interface region). For example, as described with regard to Figures 15F-15G, while staging user interface 6010 is displayed and a tilt down control 15022 is currently selected, a swipe input is detected for switching to rotate clockwise control 15038. In response to detecting the request to switch to another operation applicable to the virtual object in the second user interface region, the device generates (20038) an audio output that names a second operation among the plurality of operations applicable to the virtual object (e.g., the audio output includes an announcement that says, "rotate object around x-axis," "resize object," "tilt the object toward the display," or "display object in the augmented reality view," etc.), wherein the second operation is distinct from the first operation. In some embodiments, the device iterates through a predefined list of applicable operations in response to consecutive swipe inputs in the same direction. For example, in Figure 15G, in response to detection of the request described with regard to 15F, audio alert 15040 is generated, including the announcement 15042 ("selected: rotate clockwise button"). Generating an audio output that names a selected operation type in response to a request to switch an operation type provides the user with feedback indicating that a switching input was successfully received. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows a user to perceive when a selected control has changed without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
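The iteration through a predefined list of operations in response to consecutive swipes might be reduced to the following sketch (Swift; OperationSwitcher, the operation strings, and the announce closure are hypothetical names, and a non-empty list is assumed).

    // Hypothetical sketch: consecutive swipes in the same direction iterate
    // through a predefined list of applicable operations, and each switch
    // is named in an audio output.
    struct OperationSwitcher {
        var operations: [String]   // predefined, non-empty list of applicable operations
        var selectedIndex = 0

        mutating func handleSwipe(forward: Bool, announce: (String) -> Void) {
            // A forward swipe advances; a backward swipe steps to the previous
            // operation, wrapping around the list in either direction.
            let step = forward ? 1 : operations.count - 1
            selectedIndex = (selectedIndex + step) % operations.count
            announce("selected: \(operations[selectedIndex]) button")
        }
    }

For example, with operations ["tilt down", "rotate clockwise", "zoom"] and "tilt down" selected, one forward swipe would produce "selected: rotate clockwise button", matching the Figure 15G announcement.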
[00538] In some embodiments, prior to displaying the representation of the virtual object in the first user interface region (20040): while displaying the representation of the virtual object in a second user interface region (e.g., a staging user interface) that does not include a representation of the field of view of the one or more cameras (e.g., the second user interface region is a staging user interface in which the virtual object can be manipulated (e.g., rotated, resized, and moved) without maintaining a fixed relationship to a plane in the physical environment), the device detects a request to display a representation of the virtual object in the first user interface region that includes a representation of the field of view of the one or more cameras (e.g., detecting a double tap input when a currently selected operation is "display the object in the augmented reality view" and after the device has just outputted an audio announcement that names the currently selected operation in response to a swipe input (e.g., received right before the double tap input)). For example, as described with regard to Figures 15P-15V, while staging user interface 6010 is displayed and toggle control 6018 is selected, a double tap input is detected to display a representation of virtual object 11002 in a user interface region that includes a representation of field of view 6036 of the one or more cameras. In response to detecting the request to display a representation of the virtual object in the first user interface region that includes a representation of the field of view of the one or more cameras: the device displays a representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the representation of the virtual object and the plane detected within the physical environment that is captured in the field of view of the one or more cameras (e.g., the rotational angle and size of the virtual object in the staging view are maintained in the augmented reality view when the virtual object is dropped into the physical environment represented in the augmented reality view, and the tilt angle is reset in the augmented reality view in accordance with the orientation of the plane detected in the physical environment captured in the field of view); and the device generates a fourth audio alert indicating that the virtual object is placed in the augmented reality view in relation to the physical environment captured in the field of view of the one or more cameras. For example, as described with regard to Figure 15V, in response to the input for displaying a representation of virtual object 11002 in a user interface region that includes a representation of field of view 6036 of the one or more cameras, a representation of virtual object 11002 is displayed in a user interface region that includes a representation of field of view 6036 of the one or more cameras and audio alert 15114 is generated including announcement 15116 ("chair is now projected in the world, 100 percent visible, occupying 10 percent of the screen"). Generating an audio output in response to a request to place an object in an augmented reality view provides the user with feedback indicating that the operation to place the virtual object was successfully executed. Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows a user to perceive that the object is displayed in the augmented reality view without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
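The carry-over of rotation angle and size, with the tilt reset to the orientation of the detected plane, can be illustrated with a minimal sketch; the StagingPose and Plane types and the arPose function are hypothetical simplifications, not the disclosed implementation.

    // Hypothetical sketch: when the object is dropped from the staging view
    // into the augmented reality view, its rotational angle and size are
    // kept, while its tilt is reset to match the detected plane.
    struct StagingPose {
        var yawDegrees: Double
        var tiltDegrees: Double
        var scale: Double
    }

    struct Plane {
        var tiltDegrees: Double
    }

    func arPose(from staging: StagingPose, droppedOnto plane: Plane) -> StagingPose {
        StagingPose(yawDegrees: staging.yawDegrees,   // rotational angle maintained
                    tiltDegrees: plane.tiltDegrees,   // tilt reset to the plane
                    scale: staging.scale)             // size maintained
    }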
[00539] In some embodiments, the third audio alert indicates (20042) information about an appearance of the virtual object relative to the portion of the field of view of the one or more cameras (e.g., the third audio alert includes an audio output that includes an announcement that says, "object x is placed in the world, object x is 30% visible, occupying 90% of the screen."). For example, as described with regard to Figure 15V, audio alert 15114 is generated including announcement 15116 ("chair is now projected in the world, 100 percent visible, occupying 10 percent of the screen"). Generating an audio output that indicates an appearance of a virtual object visible relative to a displayed augmented reality view provides the user with feedback (e.g., indicating an extent to which placement of the object in the augmented reality view affected the appearance of the virtual object). Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows a user to perceive how the object is displayed in the augmented reality view without cluttering the display with additional displayed information and without requiring the user to view the display) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
[00540] In some embodiments, the device generates (20044) a tactile output in conjunction with placement of the virtual object in the augmented reality view in relation to the physical environment captured in the field of view of the one or more cameras. For example, when the object is placed on the plane detected in the field of view of the cameras, the device generates a tactile output indicating landing of the object onto the plane. In some embodiments, the device generates a tactile output when the object reaches a predefined default size during resizing of the object. In some embodiments, the device generates a tactile output for each operation that is performed with respect to the virtual object (e.g., for each rotation by a preset angular amount, for dragging the virtual object onto a different plane, for resetting the object to an original orientation and/or size, etc.). In some embodiments, these tactile outputs precede the corresponding audio alerts describing the operation that is performed and the resulting state of the virtual object. For example, as described with regard to Figure 15V, tactile output 15118 is generated in conjunction with placement of virtual object 11002 in field of view 6036 of the one or more cameras. Generating a tactile output in conjunction with placement of a virtual object in relation to the physical environment captured by the one or more cameras provides the user with feedback (e.g., indicating that the operation to place the virtual object was successfully executed). Providing improved feedback to the user enhances the operability of the device (e.g., by providing sensory information that allows a user to perceive that placement of the virtual object has occurred without cluttering the user interface with displayed information) and makes the user-device interface more efficient which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
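A minimal sketch of the ordering described above, in which the tactile output precedes the audio alert describing the operation and the resulting state, follows; the FeedbackGenerator protocol and placeObject function are hypothetical names introduced for illustration.

    // Hypothetical sketch: a tactile output precedes the audio alert that
    // describes the placement operation and the object's resulting state.
    protocol FeedbackGenerator {
        func tactileOutput()
        func audioAlert(_ announcement: String)
    }

    func placeObject(named name: String, feedback: FeedbackGenerator) {
        feedback.tactileOutput()   // e.g., indicating landing on the detected plane
        feedback.audioAlert("\(name) is now projected in the world")
    }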
[00541] In some embodiments, the device displays (20046) a first control at a first location in the first user interface region (e.g., among a plurality of controls displayed at different locations in the first user interface region) concurrently with a representation of the field of view of the one or more cameras. In accordance with a determination that control-fading criteria are met (e.g., the control-fading criteria are met when the first user interface region is displayed for at least a threshold amount of time without a touch input being detected on the touch-sensitive surface), the device ceases (20048) to display the first control in the first user interface region (e.g., along with all the other controls in the first user interface region) while maintaining display of the representation of the field of view of the one or more cameras in the first user interface region (e.g., controls are not redisplayed when the user moves the device relative to the physical environment). While displaying the first user interface region without displaying the first control in the first user interface region, the device detects (20050) a touch input at a respective location on the touch-sensitive surface that corresponds to the first location in the first user interface region. In response to detecting the touch input, the device generates (20052) a fifth audio alert including an audio output that specifies an operation corresponding to the first control (e.g., "go back to staging view" or "rotate object around the y-axis"). In some embodiments, the device also redisplays the first control at the first location in response to detecting the touch input. In some embodiments, redisplaying the control and making it the currently selected control upon a touch input at the usual location of the control on the display provides a quicker way to access the control than scanning through the available controls using a series of swipe inputs, once the user is aware of the locations of the controls on the display. Automatically ceasing to display a control in response to determining that control-fading criteria are met reduces the number of inputs needed to cease displaying controls. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
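The control-fading criteria and the touch-to-reannounce behavior might be sketched as follows; the class and method names are hypothetical, and the 3-second timeout is an arbitrary placeholder, since the disclosure specifies only "a threshold amount of time".

    // Hypothetical sketch: a control fades after a period with no touch
    // input, and a touch at the faded control's usual location announces
    // the control's operation and (in some embodiments) redisplays it.
    import Foundation

    final class FadingControl {
        let operationName: String          // e.g., "go back to staging view"
        let fadeTimeout: TimeInterval
        private(set) var isVisible = true
        private var lastTouch = Date()

        init(operationName: String, fadeTimeout: TimeInterval = 3.0) {
            self.operationName = operationName
            self.fadeTimeout = fadeTimeout
        }

        func tick(now: Date = Date()) {
            // Control-fading criteria: no touch for at least the threshold time.
            if now.timeIntervalSince(lastTouch) >= fadeTimeout { isVisible = false }
        }

        func handleTouchAtUsualLocation(announce: (String) -> Void, now: Date = Date()) {
            lastTouch = now
            announce(operationName)   // audio alert specifying the operation
            isVisible = true          // optionally redisplay the control
        }
    }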
[00542] It should be understood that the particular order in which the operations in Figures 20A-20F have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 19000) are also applicable in an analogous manner to method 20000 described above with respect to Figures 20A-20F. For example, contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 20000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 19000). For brevity, these details are not repeated here.
[00543] The operations described above with reference to Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F are, optionally, implemented by components depicted in Figures 1A-1B. For example, display operations 802, 806, 902, 906, 910, 1004, 1008, 16004, 17004, 18002, 19002, and 20002; detection operations 804, 904, 908, 17006, 18004, 19004, and 20004; changing operation 910; receiving operations 1002, 1006, 16002, and 17002; ceasing operations 17008; rotation operation 18006; update operation 19006; adjust operation 20006; and generation operation 20006 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 delivers the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186, and determines whether a first contact at a first location on the touch-sensitive surface (or whether rotation of the device) corresponds to a predefined event or sub-event, such as selection of an object on a user interface, or rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in Figures 1A-1B.
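For illustration only, the delivery-and-recognition flow just described might be reduced to the following sketch; the Swift types loosely mirror the roles of event sorter 170, event recognizer 180, and event handler 190 but are hypothetical stand-ins, not the depicted components.

    // Hypothetical sketch of the dispatch flow: a dispatcher delivers a
    // detected contact, each recognizer compares it against its event
    // definition, and a matching recognizer activates its handler.
    struct Contact {
        let x: Double
        let y: Double
    }

    struct EventRecognizer {
        let definition: (Contact) -> Bool   // cf. event definitions 186
        let handler: (Contact) -> Void      // cf. event handler 190
    }

    struct EventDispatcher {
        var recognizers: [EventRecognizer]

        func deliver(_ contact: Contact) {
            for recognizer in recognizers where recognizer.definition(contact) {
                recognizer.handler(contact) // activate the associated handler
            }
        }
    }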
[00544] The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and various described embodiments with various modifications as are suited to the particular use contemplated.
[00545] Reference to any prior art in the specification is not an acknowledgement or suggestion that this prior art forms part of the common general knowledge in any jurisdiction or that this prior art could reasonably be expected to be combined with any other piece of prior art by a skilled person in the art.
Claims (22)
1. A method, including:
at a computer system having a display generation component, one or more input devices, and one or more cameras:
displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras;
detecting movement of the computer system that adjusts the field of view of the one or more cameras; and
in response to detecting movement of the computer system that adjusts the field of view of the one or more cameras:
adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted, and,
in accordance with a determination that the movement of the computer system causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generating a first alert.
2. The method of claim 1, wherein the computer system includes one or more audio output generators, and generating the first alert includes generating, via the one or more audio output generators, a first audio alert.

3. The method of claim 1, including, after the movement of the computer system causes more than a threshold amount of the virtual object to move outside of the displayed portion of the field of view of the one or more cameras, generating audio associated with the virtual object.

4. The method of any of claims 1-3, wherein outputting the first alert includes generating an audio output that indicates an amount of the virtual object that remains visible on the displayed portion of the field of view of the one or more cameras.

5. The method of any of claims 1-4, wherein outputting the first alert includes generating an audio output that indicates an amount of the displayed portion of the field of view that is occluded by the virtual object.
6. The method of any of claims 1-5, wherein the one or more input devices include a touch-sensitive surface, and the method includes:
detecting an input by a contact at a location on the touch-sensitive surface that corresponds to the representation of the field of view of the one or more cameras; and
in response to detecting the input, and in accordance with a determination that the input is detected at a first location on the touch-sensitive surface that corresponds to a first portion of the field of view of the one or more cameras that is not occupied by the virtual object, generating a second audio alert.

7. The method of any of claims 1-6, wherein outputting the first alert includes generating an audio output that indicates an operation that is performed with respect to the virtual object and a resulting state of the virtual object after the performance of the operation.

8. The method of claim 7, wherein the resulting state of the virtual object after performance of the operation is described in the audio output in the first alert in relation to a reference frame corresponding to the physical environment captured in the field of view of the one or more cameras.
9. The method of any of claims 1-8, including:
detecting additional movement of the computer system that further adjusts the field of view of the one or more cameras after generation of the first alert; and
in response to detecting the additional movement of the computer system that further adjusts the field of view of the one or more cameras:
adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is further adjusted, and,
in accordance with a determination that the additional movement of the computer system causes more than a second threshold amount of the virtual object to move into a displayed portion of the field of view of the one or more cameras, generating a second alert.

10. The method of claim 9, wherein the computer system includes one or more audio output generators, and generating the second alert includes generating, via the one or more audio output generators, a third audio alert.
11. The method of any of claims 1-10, including:
while displaying the representation of the virtual object in the first user interface region and a first object manipulation type of a plurality of object manipulation types applicable to the virtual object is currently selected for the virtual object, detecting a request to switch to another object manipulation type applicable to the virtual object; and
in response to detecting the request to switch to another object manipulation type applicable to the virtual object, generating an audio output that names a second object manipulation type among a plurality of object manipulation types applicable to the virtual object, wherein the second object manipulation type is distinct from the first object manipulation type.

12. The method of claim 11, including:
after generating the audio output that names the second object manipulation type among the plurality of object manipulation types applicable to the virtual object, detecting a request to execute an object manipulation behavior corresponding to a currently selected object manipulation type; and
in response to detecting the request to perform the object manipulation behavior corresponding to the currently selected object manipulation type, executing an object manipulation behavior that corresponds to the second object manipulation type.
13. The method of any of claims 11-12, wherein the one or more input devices include a touch-sensitive surface, and the method includes:
in response to detecting the request to switch to another object manipulation type applicable to the virtual object:
in accordance with a determination that the second object manipulation type is a continuously adjustable manipulation type, generating an audio alert in conjunction with the audio output naming the second object manipulation type, to indicate that the second object manipulation type is a continuously adjustable manipulation type;
detecting a request to execute an object manipulation behavior that corresponds to the second object manipulation type, including detecting a swipe input at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras; and
in response to detecting the request to execute the object manipulation behavior corresponding to the second object manipulation type, executing the object manipulation behavior corresponding to the second object manipulation type by an amount that corresponds to a magnitude of the swipe input.
14. The method of any of claims 1-13, including:
prior to displaying the representation of the virtual object in the first user interface region, displaying the representation of the virtual object in a second user interface region, wherein the second user interface region does not include a representation of the field of view of the one or more cameras;
while displaying the representation of the virtual object in the second user interface region and a first operation of a plurality of operations applicable to the virtual object is currently selected for the virtual object, detecting a request to switch to another operation applicable to the virtual object; and
in response to detecting the request to switch to another operation applicable to the virtual object in the second user interface region, generating an audio output naming a second operation among the plurality of operations applicable to the virtual object, wherein the second operation is distinct from the first operation.
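[Editorial illustration, not part of the claim language: claim 14's switch-and-announce behavior amounts to cycling a selection and speaking the newly selected operation's name. A minimal Swift sketch follows; the operation list and the `announce` stand-in for a text-to-speech call are assumptions.]

```swift
import Foundation

/// Hypothetical sketch of claim 14: a "switch operation" request selects the
/// next applicable operation and an audio output names it.
enum ObjectOperation: String, CaseIterable {
    case rotate, scale, move
    case resetToDefault = "reset to default"
}

struct OperationSelector {
    private var index = 0
    var current: ObjectOperation { ObjectOperation.allCases[index] }

    mutating func switchToNextOperation(announce: (String) -> Void) {
        index = (index + 1) % ObjectOperation.allCases.count
        announce(current.rawValue)  // audio output naming the second operation
    }
}

var selector = OperationSelector()
selector.switchToNextOperation { print("Speaking: \($0)") }  // Speaking: scale
```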
15. The method of any of claims 1-14, including:
prior to displaying the representation of the virtual object in the first user interface region:
while displaying the representation of the virtual object in a second user interface region that does not include a representation of the field of view of the one or more cameras, detecting a request to display a representation of the virtual object in the first user interface region that includes a representation of the field of view of the one or more cameras; and
in response to detecting the request to display a representation of the virtual object in the first user interface region that includes a representation of the field of view of the one or more cameras:
displaying a representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the representation of the virtual object and the plane detected within the physical environment that is captured in the field of view of the one or more cameras; and
generating a fourth audio alert indicating that the virtual object is placed in the first user interface region in relation to the physical environment captured in the field of view of the one or more cameras.
16. The method of claim 15, wherein the fourth audio alert indicates information about an appearance of the virtual object relative to a displayed portion of the field of view of the one or more cameras.
17. The method of any of claims 15-16, including:
generating a tactile output in conjunction with placement of the virtual object in the first user interface region in relation to the physical environment captured in the field of view of the one or more cameras.
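[Editorial illustration, not part of the claim language: claims 15-17 pair object placement with an audio alert (describing the object's appearance relative to the displayed camera view, per claim 16) and a tactile output (claim 17). A minimal Swift sketch, where `speak` and `vibrate` are illustrative stand-ins rather than real APIs:]

```swift
import Foundation

/// Hypothetical sketch of claims 15-17: when the object is placed into the
/// AR region, emit an audio alert describing the placement and a tactile tap.
struct PlacementFeedback {
    let speak: (String) -> Void
    let vibrate: () -> Void

    func objectPlaced(name: String, visibleFraction: Double) {
        // Fourth audio alert: placement plus appearance relative to the
        // displayed portion of the camera view (claim 16).
        let percent = Int((visibleFraction * 100).rounded())
        speak("\(name) placed on detected plane, \(percent)% visible on screen")
        vibrate()  // tactile output in conjunction with placement (claim 17)
    }
}

let feedback = PlacementFeedback(speak: { print("AUDIO: \($0)") },
                                 vibrate: { print("TACTILE: placement tap") })
feedback.objectPlaced(name: "Chair", visibleFraction: 0.85)
```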
18. The method of any of claims 1-17, wherein the one or more input devices include a touch-sensitive surface, and the method includes:
displaying a first control at a first location in the first user interface region, concurrently with a representation of the field of view of the one or more cameras;
in accordance with a determination that control-fading criteria are met, ceasing to display the first control in the first user interface region while maintaining display of the representation of the field of view of the one or more cameras in the first user interface region;
while displaying the first user interface region without displaying the first control in the first user interface region, detecting a touch input at a respective location on the touch-sensitive surface that corresponds to the first location in the first user interface region; and
in response to detecting the touch input, generating a fifth audio alert including an audio output that specifies an operation corresponding to the first control.
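[Editorial illustration, not part of the claim language: claim 18 keeps a faded control reachable by announcing its operation when its former location is touched. A minimal Swift sketch, with an assumed idle-time fading rule and a hypothetical `speak` callback:]

```swift
import Foundation

/// Hypothetical sketch of claim 18: after control-fading criteria hide a
/// control, a touch at its former location still announces its operation.
struct Point { let x, y: Double }
struct Rect {
    let x, y, width, height: Double
    func contains(_ p: Point) -> Bool {
        p.x >= x && p.x <= x + width && p.y >= y && p.y <= y + height
    }
}

struct FadingControl {
    let frame: Rect
    let operationName: String
    var isDisplayed = true

    mutating func applyFadingCriteria(idleSeconds: Double) {
        // Assumed fading rule: hide after a few seconds without interaction.
        if idleSeconds > 3 { isDisplayed = false }
    }

    func handleTouch(at point: Point, speak: (String) -> Void) {
        if !isDisplayed && frame.contains(point) {
            speak(operationName)  // fifth audio alert names the operation
        }
    }
}

var control = FadingControl(frame: Rect(x: 0, y: 0, width: 44, height: 44),
                            operationName: "Share object")
control.applyFadingCriteria(idleSeconds: 5)
control.handleTouch(at: Point(x: 10, y: 10)) { print("AUDIO: \($0)") }
```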
19. A computer system, including:
a display generation component;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras;
detecting movement of the computer system that adjusts the field of view of the one or more cameras; and
in response to detecting movement of the computer system that adjusts the field of view of the one or more cameras:
adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted, and,
in accordance with a determination that the movement of the computer system causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generating a first alert.
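[Editorial illustration, not part of the claim language: the "more than a threshold amount ... outside of a displayed portion" determination in claim 19 can be approximated by intersecting the object's projected bounding box with the viewport. A minimal Swift sketch; the threshold value and box coordinates are assumptions:]

```swift
import Foundation

/// Hypothetical sketch of claim 19's first alert: estimate the fraction of
/// the object's projected bounding box that has left the displayed portion
/// of the camera view, and alert once it exceeds a threshold.
struct Box { let minX, minY, maxX, maxY: Double }

func visibleFraction(of object: Box, in viewport: Box) -> Double {
    let w = max(0, min(object.maxX, viewport.maxX) - max(object.minX, viewport.minX))
    let h = max(0, min(object.maxY, viewport.maxY) - max(object.minY, viewport.minY))
    let objectArea = (object.maxX - object.minX) * (object.maxY - object.minY)
    return objectArea > 0 ? (w * h) / objectArea : 0
}

let threshold = 0.5  // assumed: alert when over half the object is off screen
let viewport = Box(minX: 0, minY: 0, maxX: 390, maxY: 844)
let projected = Box(minX: 300, minY: 700, maxX: 700, maxY: 900)  // after movement

if 1 - visibleFraction(of: projected, in: viewport) > threshold {
    print("First alert: virtual object is mostly outside the camera view")
}
```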
20. The computer system of claim 19, wherein the one or more programs include instructions for performing any of the methods of claims 2-18.
21. A computer program including instructions that, when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to:
display, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras;
detect movement of the computer system that adjusts the field of view of the one or more cameras; and
in response to detecting movement of the computer system that adjusts the field of view of the one or more cameras:
adjust display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted, and,
in accordance with a determination that the movement of the computer system causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generate a first alert.
22. The computer program of claim 21, further including instructions that, when executed by the computer system, cause the computer system to perform any of the methods of claims 2-18.
[Figure 1A - block diagram of portable multifunction device 100 with memory 102, operating system 126, application modules (e.g., camera module 143, browser module 147), and I/O subsystem with display, intensity sensor, and haptic feedback controllers]
[Figure 1B - block diagram of event handling components (event sorter 170, event recognizers 180, event handlers 190, application internal state 192)]
[Figure 1C - block diagram of a tactile output module (haptic feedback module 133, waveform module, mixer, compressor, low-pass filter, tactile output generator 167)]
[Figure 2 - portable multifunction device 100 with touch screen 112, speaker 111, microphone 113, SIM card slot 210, headphone jack 212, and home button 204]
[Figure 3 - block diagram of multifunction device 300 with CPU(s) 310, I/O interface 330, display 340, touchpad 355, tactile output generator(s) 357, and memory 370]
[Figure 4A - example home-screen user interface with application icons on portable multifunction device 100]
[Figure 4B - example user interface for a device with a touch-sensitive surface 451 separate from the display 450, with tactile output generator(s) 357 and contact intensity sensor(s) 359]
[Figures 4C-4E - examples of dynamic intensity thresholds changing over time]
[Figures 4F-4K - sample tactile output patterns (FullTap, MiniTap, and MicroTap waveforms at 80-300 Hz and gains from 0.25 to 1.0)]
[Figures 5A-5AD - example user interfaces for displaying a virtual object (chair 5020) in a messaging conversation user interface 5008 and, in response to an input meeting intensity criteria, transitioning it into a user interface region 5004 that shows the field of view of the device cameras with a detected plane 5038]
[Figures 5AE-5AT - example user interfaces for displaying a virtual object 5084 in an Internet browser user interface 5060 ("Shop" / "Add to Cart") and transitioning it into the augmented reality view that includes the camera field of view]
[Figures 6A-6AJ - example user interfaces for displaying a virtual object in a messaging user interface 5008, a staging user interface 6010 ("World" / "3D"), and the camera field of view, with transitions driven by contact intensity relative to thresholds ITH, ITL, and ITD]
[Figures 7A-7P - example user interfaces showing virtual object indicators on items across applications: home screen, Internet browser user interface 5060, messaging user interface 5008, a map application user interface, a file management user interface ("Locations", "Drive", PDF previews), and an e-mail user interface ("Inbox", "From:", "Sent:", "To:", "Subject:")]
802 Display a representation of a virtual object in a first user interface region on a display

804 While displaying the first representation of the virtual object in the first user interface region on the display, detect a first input by a contact at a location on a touch-sensitive surface that corresponds to the representation of the virtual object on the display

806 In response to detecting the first input by the contact: in accordance with a determination that the first input by the contact meets first criteria: display a second user interface region on the display, including replacing display of at least a portion of the first user interface region with the representation of a field of view of one or more cameras; and continuously display the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region

808 The first criteria include criteria that are satisfied when the contact is maintained at the location on the touch-sensitive surface that corresponds to the representation of the virtual object with less than a threshold amount of movement for at least a predefined amount of time

810 The first criteria include criteria that are satisfied when a characteristic intensity of the contact increases above a first intensity threshold

812 The first criteria include criteria that are satisfied when a movement of the contact meets predefined movement criteria

814 In response to detecting the first input by the contact, in accordance with a determination that the first input by the contact has met the first criteria, output, with one or more tactile output generators, a tactile output to indicate satisfaction of the first criteria by the first input

Figure 8A
816 In response to detecting at least an initial portion of the first input, analyze the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras; and, after detecting a respective plane in the field of view of the one or more cameras, determine a size and/or position of the representation of the virtual object based on a relative position of the respective plane to the field of view of the one or more cameras

818 Analyzing the field of view of the one or more cameras to detect the one or more planes in the field of view of the one or more cameras is initiated in response to detection of the contact at the location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display

820 Analyzing the field of view of the one or more cameras to detect the one or more planes in the field of view of the one or more cameras is initiated in response to detecting that the first criteria are met by the first input by the contact

822 Analyzing the field of view of the one or more cameras to detect the one or more planes in the field of view of the one or more cameras is initiated in response to detecting that an initial portion of the first input meets plane-detection trigger criteria without meeting the first criteria

Figure 8B
816 In response to detecting at least an initial portion of the first input, analyze the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras; and, after detecting a respective plane in the field of view of the one or more cameras, determine a size and/or position of the representation of the virtual object based on a relative position of the respective plane to the field of view of the one or more cameras

824 Display the representation of the virtual object in the second user interface region in a respective manner such that the virtual object is oriented at a predefined angle relative to a respective plane that is detected in the field of view of the one or more cameras

826 In response to detecting the respective plane in the field of view of the one or more cameras, output, with one or more tactile output generators, a tactile output to indicate the detection of the respective plane in the field of view of the one or more cameras

828 While switching from displaying the first user interface region to displaying the second user interface region, display an animation as the representation of the virtual object transitions into the second user interface region to a predefined position relative to the respective plane; and, in conjunction with displaying the representation of the virtual object at the predefined angle relative to the respective plane, output, with one or more tactile output generators, a tactile output to indicate display of the virtual object at the predefined angle relative to the respective plane in the second user interface region

830 The tactile output has a tactile output profile that corresponds to a characteristic of the virtual object

832 While displaying the representation of the virtual object in the second user interface region, detect movement of the device that adjusts the field of view of the one or more cameras; and, in response to detecting movement of the device, adjust the representation of the virtual object in the second user interface region in accordance with a fixed spatial relationship between the virtual object and the respective plane in the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted

834 Display an animation as the representation of the virtual object is continuously displayed while switching from displaying the first user interface region to displaying the second user interface region

Figure 8C
836 While displaying the second user interface region on the display, detect a second input by a second contact, wherein the second input includes movement of the second contact along a first path across the display; and, in response to detecting the second input by the second contact, move the representation of the virtual object in the second user interface region along a second path that corresponds to the first path

838 Adjust a size of the representation of the virtual object as the representation of the virtual object moves along the second path based on the movement of the contact and a respective plane that corresponds to the virtual object

840 Maintain a first size of the representation of the virtual object as the representation of the virtual object moves along the second path; detect termination of the second input by the second contact; and in response to detecting the termination of the second input by the second contact: place the representation of the virtual object at a drop-off location in the second user interface region; and display the representation of the virtual object at the drop-off location in the second user interface region with a second size that is distinct from the first size

842 In accordance with a determination that the movement of the second contact along the first path across the display meets second criteria: cease to display the second user interface region including the representation of the field of view of the one or more cameras; and redisplay the first user interface region with the representation of the virtual object

844 At a time that corresponds to redisplaying the first user interface region, display an animated transition from displaying the representation of the virtual object in the second user interface region to displaying the representation of the virtual object in the first user interface region

846 As the second contact moves along the first path, change a visual appearance of one or more respective planes identified in the field of view of the one or more cameras that corresponds to a current location of the contact

Figure 8D
848 In response to detecting the first input by the contact, in accordance with a determination that the first input by the contact meets third criteria, display a third user interface region on the display, including replacing display of at least a portion of the first user interface region

850 In accordance with a determination that the first input by the contact does not meet the first criteria, maintain display of the first user interface region without replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras

Figure 8E
902 Display a first representation of a virtual object in a first user interface region on a display

904 While displaying the first representation of the virtual object in the first user interface region on the display, detect a first input by a first contact at a location on a touch-sensitive surface that corresponds to the first representation of the virtual object on the display

906 In response to detecting the first input by the first contact and in accordance with a determination that the first input by the first contact meets first criteria, display a second representation of the virtual object in a second user interface region that is different from the first user interface region

908 While displaying the second representation of the virtual object in the second user interface region, detect a second input

910 In response to detecting the second input: in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region, change a display property of the second representation of the virtual object within the second user interface region based on the second input; and in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, display a third representation of the virtual object with a representation of a field of view of one or more cameras

912 The first criteria include criteria that are satisfied when the first input includes a tap input by the first contact at a location on the touch-sensitive surface that corresponds to a virtual object indicator

Figure 9A
914 The first criteria include criteria that are satisfied when the first contact is maintained at the location on the touch-sensitive surface that corresponds to the first representation of the virtual object with less than a threshold amount of movement for at least a predefined threshold amount of time

916 The first criteria include criteria that are satisfied when a characteristic intensity of the first contact increases above a first intensity threshold

918 In response to detecting the first input by the first contact and in accordance with a determination that the first input by the first contact meets second criteria, wherein the second criteria require that the first input includes movement of the first contact in a direction across the touch-sensitive surface for more than a threshold distance, scroll the first user interface region in a direction that corresponds to the direction of movement of the first contact

920 In response to detecting the first input by the first contact and in accordance with a determination that the first input by the first contact meets third criteria, display the third representation of the virtual object with the representation of the field of view of the one or more cameras

922 In response to detecting the first input by the first contact, determine, by one or more device orientation sensors, a current device orientation of the device; and the third criteria require that the current device orientation be within a first range of orientations in order for the third criteria to be met

924 At least one display property of the second representation of the virtual object is applied to the third representation of the virtual object

Figure 9B
926 In response to detecting at least an initial portion of the first input by the first contact: activate the one or more cameras; and analyze the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras

928 In response to detecting a respective plane in the field of view of the one or more cameras, output, with one or more tactile output generators, a tactile output to indicate the detection of a respective plane in the field of view of the one or more cameras

930 A size of the third representation of the virtual object on the display is determined based on a simulated real-world size of the virtual object and a distance between the one or more cameras and a location in the field of view of the one or more cameras with which the third representation of the virtual object has a fixed spatial relationship

932 The second input that corresponds to the request to display the virtual object in an augmented reality environment includes an input that drags the second representation of the virtual object

934 While displaying the second representation of the virtual object in the second user interface region, detect a fourth input that meets respective criteria for redisplaying the first user interface region; and, in response to detecting the fourth input: cease to display the second representation of the virtual object in the second user interface region; and redisplay the first representation of the virtual object in the first user interface region

936 While displaying the third representation of the virtual object with the representation of the field of view of the one or more cameras, detect a fifth input that meets respective criteria for redisplaying the second user interface region; and, in response to detecting the fifth input: cease to display the third representation of the virtual object and the representation of the field of view of the one or more cameras; and redisplay the second representation of the virtual object in the second user interface region

Figure 9C
938 While displaying the third representation of the virtual object with the representation of the field of view of the one or more cameras, detect a sixth input that meets respective criteria for redisplaying the first user interface region; and, in response to detecting the sixth input: cease to display the third representation of the virtual object and the representation of the field of view of the one or more cameras; and redisplay the first representation of the virtual object in the first user interface region

940 In response to detecting the first input by the first contact and in accordance with a determination that the input by the contact meets the first criteria, continuously display the virtual object when transitioning from displaying the first user interface region to displaying the second user interface region, including displaying an animation of the first representation of the virtual object in the first user interface region transforming into the second representation of the virtual object in the second user interface region

942 In response to detecting the second input by the second contact and in accordance with a determination that the second input by the second contact corresponds to the request to display the virtual object in the augmented reality environment, continuously display the virtual object when transitioning from displaying the second user interface region to displaying a third user interface region including the field of view of the one or more cameras, including displaying an animation of the second representation of the virtual object in the second user interface region transforming into the third representation of the virtual object in the third user interface region including the field of view of the one or more cameras

Figure 9D
1002 Receive a request to display a first user interface that includes a first item

1004 In response to the request to display the first user interface, display the first user interface with a representation of the first item, including: in accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, display the representation of the first item with a visual indication to indicate that the first item corresponds to a first respective virtual three-dimensional object; and, in accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, display the representation of the first item without the visual indication

1006 After displaying the representation of the first item, receive a request to display a second user interface that includes a second item

1008 In response to the request to display the second user interface, display the second user interface with a representation of the second item, including: in accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, display the representation of the second item with the visual indication to indicate that the second item corresponds to a second respective virtual three-dimensional object; and in accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, display the representation of the second item without the visual indication

1010 Displaying the representation of the first item with the visual indication to indicate that the first item corresponds to a first respective virtual three-dimensional object includes: in response to detecting a movement of the device that results in a change from a first device orientation to a second device orientation, displaying movement of the first item that corresponds to the change from the first device orientation to the second device orientation

Figure 10A
1012 Displaying the representation of the first item with the visual indication to indicate that the first item corresponds to a first respective virtual three-dimensional object includes: in response to detecting a first input by a first contact that scrolls the first user interface while the representation of the first item is displayed in the first user interface: translating the representation of the first item on the display in accordance with scrolling of the first user interface; and rotating the representation of the first item relative to a plane defined by the first user interface in accordance with a direction in which the first user interface is scrolled

1014 While displaying the representation of the first item with the visual indication in the first user interface, display a representation of a third item, wherein the representation of the third item is displayed without the visual indication in order to indicate that the third item does not correspond to a virtual three-dimensional object

1016 While displaying the representation of the second item with the visual indication in the second user interface, display a representation of a fourth item, wherein the representation of the fourth item is displayed without the visual indication in order to indicate that the fourth item does not correspond to a respective virtual three-dimensional object

1018 The first user interface corresponds to a first application; the second user interface corresponds to a second application that is distinct from the first application; and the representation of the first item displayed with the visual indication and the representation of the second item displayed with the visual indication share a predefined set of visual characteristics and/or behavioral characteristics

1020 The first user interface is an Internet browser application user interface and the first item is an element of a web page

1022 The first user interface is an e-mail application user interface and the first item is an attachment to an e-mail

Figure 10B
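[Editorial illustration, not part of the figure: box 1012 couples scrolling to both a translation and a direction-dependent rotation of the 3D item's representation. A minimal Swift sketch; the tilt gain and clamp are assumptions:]

```swift
import Foundation

/// Hypothetical sketch of box 1012: as the first user interface scrolls,
/// the 3D item both translates with the scroll and rotates relative to the
/// interface plane in the scroll direction.
struct ItemPose {
    var offsetY: Double      // translation with the scrolled content
    var tiltDegrees: Double  // rotation relative to the interface plane
}

func applyScroll(_ pose: inout ItemPose, scrollDeltaY: Double) {
    pose.offsetY -= scrollDeltaY                              // translate with the page
    pose.tiltDegrees = max(-15, min(15, scrollDeltaY * 0.1))  // clamped tilt
}

var pose = ItemPose(offsetY: 0, tiltDegrees: 0)
applyScroll(&pose, scrollDeltaY: 60)
print(pose)  // ItemPose(offsetY: -60.0, tiltDegrees: 6.0)
```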
1024 The first user interface is a messaging application user interface and the first item is an attachment or an element in a message
1026 The first user interface is a file management application user interface and the first item is a file preview object 2024200149
1028 The first user interface is a map application user interface and the first item is
a representation of a point of interest in a map
1030 The visual indication that the first item corresponds to a respective virtual three-dimensional object includes an animation of the first item that occurs without requiring an input directed to the representation of the respective three-dimensional object
1032 While displaying the representation of the second item with the visual indication to indicate that the second item corresponds to a respective virtual three- dimensional object, detect a second input by a second contact at a location on the touch-sensitive surface that corresponds to the representation of the second item; and, in response to detecting the second input by the second contact and in accordance with a determination that the second input by the second contact meets first criteria:
display a third user interface region on the display, including replacing display of at least a portion of the second user interface with a representation of a field of view of the one or more cameras; and continuously display the second virtual three-dimensional object while switching from displaying the second user interface to displaying the third user interface region
Figure 10C
1034 While displaying the second item with the visual indication to indicate that the second item corresponds to the respective virtual three-dimensional object, detect a third input by a third contact at a location on the touch-sensitive surface that corresponds to the representation of the second item; in response to detecting the third input by the third contact and in accordance with a determination that the third input by the third contact meets first criteria, display the second virtual three-dimensional object in a fourth user interface that is different from the second user interface;
while displaying the second virtual three-dimensional object in the fourth user interface, detect a fourth input; and,
in response to detecting the fourth input: in accordance with a determination that the fourth input corresponds to a request to manipulate the second virtual three-dimensional object in the fourth user interface, change a display property of the second virtual three-dimensional object within the fourth user interface based on the fourth input; and in accordance with a determination that the fourth input corresponds to a request to display the second virtual object in an augmented reality
environment, display the second virtual three-dimensional object with a representation of a field of view of the one or more cameras
Figure 10D
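A minimal Swift sketch of the determination in boxes 1008 and 1014, i.e. showing the shared visual indication only for items that correspond to a virtual three-dimensional object (the Item type and the use of a model URL as the correspondence test are hypothetical):

    import Foundation

    // Hypothetical item model: an item corresponds to a virtual
    // three-dimensional object when it carries a 3D model resource.
    struct Item {
        let title: String
        let modelURL: URL?
    }

    // Shared, system-wide test used by every participating application to
    // decide whether to draw the visual indication on the item's representation.
    func shouldShowThreeDIndication(for item: Item) -> Bool {
        return item.modelURL != nil
    }

    let attachment = Item(title: "chair.usdz",
                          modelURL: URL(string: "file:///models/chair.usdz"))
    let photo = Item(title: "photo.jpg", modelURL: nil)
    print(shouldShowThreeDIndication(for: attachment))  // true: draw the indication
    print(shouldShowThreeDIndication(for: photo))       // false: plain representation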
Figures 11A-11V [user interface drawings: Portable Multifunction Device 100, Touch Screen 112, including a web page with an Add to Cart control and a virtual object 11002 shown in staging and camera views]
Figures 12A-12L [user interface drawings: Portable Multifunction Device 100, Touch Screen 112; on-screen prompts include "Scan surface to begin" and "Move your device"]
Figures 13A-13M [user interface drawings: Portable Multifunction Device 100, Touch Screen 112]
Figures 14A-14Z [user interface drawings: Portable Multifunction Device 100, Touch Screen 112; each sheet shows meters for Translation Movement 14002, Scaling Movement 14004, and Angle of Rotation 14006]
14066 Detect first portion of user input that includes movement of one or more contacts
14068 Input movement increases above object rotation threshold? Yes: 14070 Rotate object based on input; 14072 increase object translation threshold and increase object scaling threshold (continue at A)
No: 14074 Input movement increases above object translation threshold? Yes: 14076 Translate object based on input; 14078 increase object rotation threshold and increase object scaling threshold (continue at B)
No: 14080 Input movement increases above object scaling threshold? Yes: 14082 Scale object based on input; 14084 increase object translation threshold and increase object rotation threshold (continue at C)
No: 14085 Detect additional portion of user input that includes movement of the one or more contacts
Figure 14AA
A
14086 Detect additional portion of user input that includes movement of the one or more contacts
14088 Input movement is rotation movement? Yes: 14090 Rotate object based on input
No: 14092 Input movement increases above increased object translation threshold? Yes: 14094 Translate object based on input
No: 14096 Input movement increases above increased object scaling threshold? Yes: 14098 Scale object based on input
Figure 14AB
B
14100 Detect additional portion of user input that includes movement of the one or more contacts
14102 Input movement is translation movement? Yes: 14104 Translate object based on input
No: 14106 Input movement increases above increased object rotation threshold? Yes: 14108 Rotate object based on input
No: 14110 Input movement increases above increased object scaling threshold? Yes: 14112 Scale object based on input
Figure 14AC
C
14114 Detect additional portion of user input that includes movement of the one or more contacts
14116 Input movement is scaling movement? Yes: 14118 Scale object based on input
No: 14120 Input movement increases above increased object rotation threshold? Yes: 14122 Rotate object based on input
No: 14124 Input movement increases above increased object translation threshold? Yes: 14126 Translate object based on input
Figure 14AD
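The first-portion logic of Figure 14AA can be sketched as follows (Swift; the numeric thresholds, the doubling factor, and all names are illustrative assumptions):

    // All thresholds and the doubling factor are invented for illustration.
    struct Thresholds {
        var rotation = 12.0      // degrees of two-finger twist
        var translation = 20.0   // points of drag
        var scaling = 0.10       // relative pinch change
    }

    enum Manipulation { case rotate, translate, scale }

    // First portion of the input (Figure 14AA): the first threshold crossed
    // selects a behavior and raises the thresholds of the other two behaviors.
    func firstManipulation(twist: Double, drag: Double, pinch: Double,
                           thresholds: inout Thresholds) -> Manipulation? {
        if abs(twist) > thresholds.rotation {                   // 14068 -> 14070, 14072
            thresholds.translation *= 2; thresholds.scaling *= 2
            return .rotate
        }
        if abs(drag) > thresholds.translation {                 // 14074 -> 14076, 14078
            thresholds.rotation *= 2; thresholds.scaling *= 2
            return .translate
        }
        if abs(pinch) > thresholds.scaling {                    // 14080 -> 14082, 14084
            thresholds.rotation *= 2; thresholds.translation *= 2
            return .scale
        }
        return nil                                              // 14085: keep listening
    }

    var t = Thresholds()
    print(firstManipulation(twist: 2, drag: 25, pinch: 0.02, thresholds: &t) == .translate) // true
    print(t.rotation)  // 24.0: rotation now requires twice the twist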
Figures 15A-15AI [user interface drawings: Portable Multifunction Device 100, Touch Screen 112. Figure 15A shows a messaging conversation with "FRED" ("Please have a seat." "I could certainly use one."). Spoken feedback shown in these figures includes: "Chair is now shown in the staging view"; "Selected: tilt up button"; "Selected: tilt down button"; "Chair tilted five degrees down. Chair is now tilted 10 degrees toward the screen."; "Selected: rotate clockwise button"; "Selected: rotate counterclockwise button"; "Chair rotated by five degrees counterclockwise. Chair is now rotated five degrees away from the screen."; "Scale: adjustable"; "Chair is now adjusted to 150 percent of original size"; "Chair is now adjusted to 100 percent of original size"; "Selected: return button"; "Selected: world view/staging view toggle"; "Move the device to detect a plane"; "Plane detected"; "Chair is now projected in the world, 100 percent visible, occupying 10 percent of the screen"; "Chair is not on the screen"; "Chair is 90 percent visible, occupying 20 percent of the screen"; "Selected: move right button"; "Chair is 100 percent visible, occupying 30 percent of screen"; "Selected: move left"; "Selected: rotate clockwise"; "Selected: rotate counterclockwise"; and "Chair rotated by five degrees counterclockwise. Chair is now rotated by zero degrees relative to the screen"]
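The quoted strings in Figures 15A-15AI read as spoken accessibility feedback. A minimal UIKit sketch of posting one such announcement (UIAccessibility.post is an existing UIKit API; the wrapper function and call site are assumptions):

    import UIKit

    // Posting a spoken announcement of the kind shown in Figures 15A-15AI.
    func announce(_ message: String) {
        UIAccessibility.post(notification: .announcement, argument: message)
    }

    announce("Chair is now shown in the staging view")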
16002 Receive a request to display a virtual object in a first user interface region that
includes at least a portion of a field of view of the one or more cameras
16004 In response to the request to display the virtual object in the first user interface
region, display, via a display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras that is included in the first user interface region, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object includes: in accordance with a determination that object-placement criteria are not met, wherein the object-placement criteria require that a placement location for the virtual object be identified in the field of view of the one or more cameras in order
for the object-placement criteria to be met, displaying the representation of the virtual object with a first set of visual properties and with a first orientation that is
independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and in accordance with a determination that the object-placement criteria are met, displaying the representation of the virtual object with a second set of visual properties that are distinct from the first set of visual properties and with a second
orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras
Figure 16A
16006 Detect that the object-placement criteria are met while the representation of the virtual object is displayed with the first set of visual properties and the first
orientation
16008 In response to detecting that the object-placement criteria are met, display, via the display generation component, an animated transition showing the representation of the virtual object moving from the first orientation to the
second orientation and changing from having the first set of visual properties to having the second set of visual properties
16010 Detecting that the object-placement criteria are met includes one or more of: detecting that a plane has been identified in the field of view of the one
or more cameras; detecting less than a threshold amount of movement between the device and the physical environment for at least a threshold amount of time;
and detecting that at least a predetermined amount of time has elapsed since receiving the request for displaying the virtual object in the first user
interface region
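Box 16010 describes the object-placement criteria as a disjunction of three detections. A minimal Swift sketch under assumed thresholds (all names and numbers are illustrative):

    import Foundation

    // Hypothetical state tracked while the representation is displayed with the
    // first set of visual properties and the first orientation.
    struct PlacementState {
        var planeDetected = false
        var stillDuration: TimeInterval = 0     // seconds below the movement threshold
        var timeSinceRequest: TimeInterval = 0  // seconds since the display request
    }

    // Box 16010: any one of the three detections satisfies the criteria.
    func objectPlacementCriteriaMet(_ state: PlacementState,
                                    stillThreshold: TimeInterval = 0.5,
                                    timeout: TimeInterval = 3.0) -> Bool {
        return state.planeDetected
            || state.stillDuration >= stillThreshold
            || state.timeSinceRequest >= timeout
    }

    print(objectPlacementCriteriaMet(PlacementState(planeDetected: true)))  // true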
16012 Detect first movement of the one or more cameras while the representation of the virtual object is displayed with the first set of visual properties and the first orientation over a first portion of the physical environment captured in the field of
view of the one or more cameras
16014 In response to detecting the first movement of the one or more cameras, display the representation of the virtual object with the first set of visual properties and the first
orientation over a second portion of the physical environment captured in the field of view of the one or more cameras, wherein the second portion of the physical environment is distinct from the first portion of the physical environment
Figure 16B
16016 Detect second movement of the one or more cameras while the representation of the virtual object is displayed with the second set of visual properties and the second orientation over a third portion of the physical environment captured in the field of view of the one or more cameras
16018 In response to detecting the second movement of the device, maintain display of the representation of the virtual object with the second set of visual properties and the second orientation over the third portion of the physical environment captured in the field of view of the one or more cameras, while the physical environment as captured in the field of view of the one or more cameras moves in accordance with the second movement of the device, and the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the one or more cameras
16020 In accordance with a determination that the object-placement criteria are met, generate a tactile output in conjunction with displaying the representation of the virtual object with the second set of visual properties and with the second orientation that corresponds to the plane in the physical environment detected in the field of view of the one or more cameras
16022 While displaying the representation of the virtual object with the second set of visual properties and with the second orientation that corresponds to the plane in the physical environment detected in the field of view of the one or more cameras, receive an update regarding at least a location or an orientation of the plane in the physical environment detected in the field of view of the one or more cameras
16024 In response to receiving the update regarding at least the location or the orientation of the plane in the physical environment detected in the field of view of the one or more cameras, adjust at least a location and/or an orientation of the representation of the virtual object in accordance with the update
Figure 16C
16026 The first set of visual properties include a first size and a first translucency level
16028 The second set of visual properties include a second size that is distinct from the first size, and a second translucency level that is lower than the first translucency
level
16030 The request to display the virtual object in the first user interface region that
includes at least a portion of the field of view of the one or more cameras is received while the virtual object is displayed in a respective user interface that does not include at least a portion of the field of view of the one or more cameras and the first orientation corresponds to an orientation of the virtual object while the virtual
object is displayed in the respective user interface at a time when the request is received
16032 The first orientation corresponds to a predefined orientation
Figure 16D
16034 While displaying the virtual object in the first user interface region with the second
set of visual properties and the second orientation that corresponds to the plane in the physical environment detected in the field of view of the one or more cameras, detect a request to change a simulated physical size of the virtual object from
a first simulated physical size to a second simulated physical size relative to the physical environment captured in the field of view of the one or more cameras
16036 In response to detecting the request to change the simulated physical size of the virtual object:
gradually change a displayed size of the representation of the virtual object in the first user interface region in accordance with a gradual change of the simulated physical size of the virtual object from the first simulated physical size to
the second simulated physical size; and during the gradual change of the displayed size of the representation of the virtual object in the first user interface region, in accordance with a determination
that the simulated physical size of the virtual object has reached a predefined simulated physical size, generate a tactile output to indicate that the simulated physical size of the virtual object has reached the predefined simulated physical size
16038 While displaying the virtual object in the first user interface region at the second simulated physical size of the virtual object that is distinct from the
predefined simulated physical size, detect a request to return the virtual object to the predefined simulated physical size
16040 In response to detecting the request to return the virtual object to the
predefined simulated physical size, change the displayed size of the representation of the virtual object in the first user interface region in
accordance with a change of the simulated physical size of the virtual object to the predefined simulated physical size
Figure 16E
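Boxes 16036-16040 describe tracking a simulated physical size and generating a tactile output when it reaches a predefined size. A sketch using UIKit's UIImpactFeedbackGenerator (a real class; the controller scaffolding and the choice of 100 percent as the predefined size are assumptions):

    import UIKit

    final class SimulatedSizeController {
        private(set) var scale: CGFloat = 1.0   // 1.0 == predefined simulated size
        private let haptics = UIImpactFeedbackGenerator(style: .medium)

        // Called repeatedly during the gradual size change (box 16036).
        func update(to newScale: CGFloat) {
            let wasBelow = scale < 1.0, isBelow = newScale < 1.0
            // Generate the tactile output when the simulated size reaches or
            // crosses the predefined size.
            if scale != 1.0 && (newScale == 1.0 || wasBelow != isBelow) {
                haptics.impactOccurred()
            }
            scale = newScale
        }

        // Box 16040: return the object to the predefined simulated size.
        func resetToDefault() { update(to: 1.0) }
    }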
16042 Select the plane for setting the second orientation of the representation of the virtual object with the second set of visual properties in accordance with a respective position and orientation of the one or more cameras relative to the physical environment, wherein selecting the plane includes:
in accordance with a determination that the object-placement criteria were met when the representation of the virtual object was displayed over a first portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of multiple planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object with the second set of visual properties; and
in accordance with a determination that the object-placement criteria were met when the representation of the virtual object was displayed over a second portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the multiple planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object with the second set of visual properties, wherein the first portion of the physical environment is distinct
from the second portion of the physical environment, and the first plane is distinct from the second plane
16044 Display a snapshot affordance concurrently with displaying the virtual object in the first user interface region with the second set of visual properties and the second orientation
16046 In response to activation of the snapshot affordance, capture a snapshot image including a current view of the representation of the virtual object at a placement location in the physical environment in the field of view of the one or more cameras, with the second set of visual properties and the second orientation that corresponds to the plane in the physical environment detected in the field of view of the one or
more cameras
Figure 16F
16048 Display one or more control affordances with the representation of the virtual object having the second set of visual properties in the first user interface region
16050 While displaying the one or more control affordances with the representation of the virtual object having the second set of visual properties, detect that control-fading criteria are met
16052 In response to detecting that the control-fading criteria are met, cease to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual properties in the first user interface
region including the field of view of the one or more cameras
16054 In response to the request to display the virtual object in the first user interface region: prior to displaying the representation of the virtual object over at least a
portion of the field of view of the one or more cameras that is included in the first user interface region, in accordance with a determination that calibration criteria are not met, display a prompt for the user to move the device relative to the physical environment
Figure 16G
17002 Receive a request to display an augmented reality view of a physical environment in a first user interface region that includes a representation of a field of view of the
one or more cameras
17004 In response to receiving the request to display the augmented reality view of the physical environment, display the representation of the field of view of the one or more cameras and, in accordance with a determination that calibration criteria are not met for the augmented reality view of the physical environment, display a calibration user interface object that is dynamically animated in accordance with movement of the one or more cameras in the physical environment, wherein displaying the calibration user interface object includes: while displaying the calibration user interface object, detecting, via the one or
more attitude sensors, a change in attitude of the one or more cameras in the physical environment; and in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment
17006 While displaying the calibration user interface object that moves on the display in accordance with the detected change in attitude of the one or more cameras in the physical environment, detect that the calibration criteria are met
17008 In response to detecting that the calibration criteria are met, cease to display the calibration user interface object
Figure 17A
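Box 17004 ties the calibration user interface object's animation to changes in device attitude. A sketch using Core Motion's CMMotionManager (a real API; the yaw-to-rotation mapping and update rate are illustrative assumptions):

    import CoreMotion

    let motionManager = CMMotionManager()

    // Box 17004: animate the calibration object in accordance with changes in
    // device attitude. The yaw-to-degrees mapping is an illustrative choice.
    func startCalibrationAnimation(onRotation: @escaping (Double) -> Void) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { data, _ in
            guard let attitude = data?.attitude else { return }
            onRotation(attitude.yaw * 180.0 / .pi)   // rotate the on-screen object
        }
    }

    func stopCalibrationAnimation() {
        // Box 17008: cease animating once the calibration criteria are met.
        motionManager.stopDeviceMotionUpdates()
    }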
17010 The request to display the augmented reality view of the physical environment in the first user interface region that includes the representation of the field of view of
the one or more cameras includes a request to display a representation of a virtual three-dimensional object in the augmented reality view of the physical environment
17012 Display the representation of the virtual three-dimensional object in the first user interface region that includes the representation of the field of view of the
one or more cameras after ceasing to display the calibration user interface object
17014 Display the representation of the virtual three-dimensional object in the first
user interface region concurrently with the calibration user interface object, wherein the representation of the virtual three-dimensional object remains at a fixed location in the first user interface region during the movement of the one or more cameras in the physical environment
17016 The request to display the augmented reality view of the physical environment in the first user interface region that includes the representation of the field of view of
the one or more cameras includes a request to display the representation of the field of view of the one or more cameras without requesting display of a representation of any virtual three-dimensional object in the physical environment captured in the field of view of the one or more cameras
17018 In response to receiving the request to display the augmented reality view of the physical environment, display the representation of the field of view of the one or more cameras and, in accordance with a determination that the calibration criteria are met for the augmented reality view of the physical environment, forgo display of the calibration user interface object
17020 Display a textual object in the first user interface region concurrently with the
calibration user interface object that provides information about actions that can be taken by the user to improve calibration of the augmented reality view
Figure 17B
17022 In response to detecting that the calibration criteria are met, display a visual
indication of a plane detected in the physical environment captured in the field of view of the one or more cameras 2024200149
17024 In response to receiving the request to display the augmented reality view of the physical environment: in accordance with the determination that the calibration criteria are not met and before displaying the calibration user interface object, display an animated prompt object that includes a representation of the device moving relative to a representation of a plane
17026 Adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes: moving the calibration user interface object by a first amount in accordance with a first magnitude of movement of the one or more cameras in the physical environment; and moving the calibration user interface object by a second amount in accordance with a second magnitude of movement of the one or more cameras in the physical environment, wherein the first amount is distinct from the second amount, and the first magnitude of movement is distinct from the second magnitude
of movement
17028 Adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes: in accordance with a determination that the detected change in attitude of the one or more cameras corresponds to a first type of movement, moving the calibration user interface object based on the first type of movement; and in accordance with a determination that the detected change in attitude of the one or more cameras corresponds to a second type of movement, forgoing moving the calibration user interface object based on the second type of movement
Figure 17C
17030 Adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes moving the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the
physical environment without altering a characteristic display location of the calibration user interface object over the first user interface region
17032 Adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes rotating the calibration user interface object about an axis that is perpendicular to a movement direction of the one or more cameras in the physical environment
17034 Adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes moving the calibration user interface object at a speed that is determined in accordance with a rate of change detected in the field of view of the one or more cameras
17036 Adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment includes moving the calibration user interface object in a direction that is determined in accordance with a direction of change detected in the field of view of the one or more cameras
Figure 17D
18002 Display, by a display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region
18004 While displaying the representation of the first perspective of the virtual three- dimensional object in the first user interface region on the display, detect a first
input that corresponds to a request to rotate the virtual three-dimensional object relative to a display to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object
18006 In response to detecting the first input:
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis, rotate the virtual
three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input and is constrained by a limit on the movement restricting rotation of the virtual three-dimensional object by more than a threshold amount of rotation relative to the first axis; and
in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis, rotate the virtual three-dimensional object relative to the second
axis by an amount that is determined based on a magnitude of the first input, wherein, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation
Figure 18A
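Box 18006 constrains rotation about the first axis to a threshold amount while leaving rotation about the second axis unconstrained. A minimal Swift sketch (the axis assignment, the 30-degree limit, and all names are assumptions):

    import Foundation

    struct ObjectPose { var pitch = 0.0; var yaw = 0.0 }   // degrees

    // Box 18006: rotation about the first axis (pitch, here) is clamped to a
    // limit; rotation about the second axis (yaw) tracks the input magnitude
    // without a limit.
    func applyRotation(to pose: inout ObjectPose,
                       deltaPitch: Double, deltaYaw: Double,
                       pitchLimit: Double = 30.0) {
        pose.pitch = max(-pitchLimit, min(pitchLimit, pose.pitch + deltaPitch))
        pose.yaw += deltaYaw
    }

    var pose = ObjectPose()
    applyRotation(to: &pose, deltaPitch: 50, deltaYaw: 50)
    print(pose.pitch, pose.yaw)   // 30.0 50.0: pitch hit its limit, yaw did not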
18008 In accordance with a determination that the first input includes first movement of a contact across a touch-sensitive surface in a first direction, and that the first movement of the contact in the first direction meets first criteria for rotating the representation of the virtual object with respect to the first axis, wherein the first criteria include a requirement that the first input includes more than a first threshold amount of movement in the first direction in order for the first criteria to be met, determine that the first input corresponds to a request to rotate the three-dimensional object about the first axis; and
in accordance with a determination that the first input includes second movement of the contact across the touch-sensitive surface in a second direction, and that the second movement of the contact in the second direction meets second criteria for rotating the representation of the virtual object with respect to the second axis, wherein the second criteria include a requirement that the first input includes more than a second threshold amount of movement in the second direction in order for the second criteria to be met, determine that the first input corresponds to a request to rotate the three-dimensional object about the second axis, wherein the first threshold is greater than the second threshold
18010 Rotation of the virtual three-dimensional object relative to the first axis occurs
with a first degree of correspondence between a characteristic value of a first input parameter of the first input and an amount of rotation applied to the virtual three-dimensional object around the first axis; rotation of the virtual three-dimensional object relative to the second axis occurs with a second degree of correspondence between the characteristic value of the first input parameter of the second input gesture and an amount of rotation applied to the virtual three-dimensional object around the second axis; and the first degree of correspondence involves less rotation of the virtual three-dimensional object relative to the first input parameter than the second degree of correspondence does
Figure 18B
18012 Detect an end of the first input
18014 After detecting the end of the first input, continue to rotate the three-dimensional
object based on a magnitude of the first input prior to detecting the end of the input, including:
in accordance with a determination that the three-dimensional object is rotating relative to the first axis, slowing the rotation of the object relative to the first axis by a first amount that is proportional to the magnitude of the rotation of the three-dimensional object relative to the first axis; and
in accordance with a determination that the three-dimensional object is rotating relative to the second axis, slowing the rotation of the object relative to the
second axis by a second amount that is proportional to the magnitude of the rotation of the three-dimensional object relative to the second axis, wherein the second amount is different from the first amount
18016 Detect an end of the first input
18018 After detecting the end of the first input:
in accordance with a determination that the three-dimensional object has been rotated beyond a respective rotation threshold relative to the first axis, reverse at least a portion of the rotation of the three-dimensional object relative to the first
axis; and in accordance with a determination that the three-dimensional object has not been rotated beyond the respective rotation threshold relative to the first axis, forgo reversing the rotation of the three-dimensional object relative to the first axis
18020 In accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a third axis that is different from the first
axis and the second axis, forgo rotating the virtual three-dimensional object relative to the third axis
Figure 18C
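Boxes 18014-18018 describe post-input behavior: rotation that continues while slowing in proportion to its magnitude, and partial reversal when the first-axis rotation ended beyond its threshold. A sketch under assumed constants (the friction factor, the clamp-to-limit handling, and all names are illustrative):

    import Foundation

    // After the end of the input: rotation continues and slows in proportion to
    // its current magnitude (box 18014); if the settled rotation ended past the
    // first-axis threshold, the overshoot springs back (box 18018).
    func settleRotation(velocity: Double, position: Double,
                        limit: Double, friction: Double = 0.85) -> Double {
        var v = velocity, p = position
        while abs(v) > 0.01 {
            p += v
            v *= friction          // slowing proportional to remaining speed
        }
        if abs(p) > limit {        // reverse the portion beyond the threshold
            p = p < 0 ? -limit : limit
        }
        return p
    }

    print(settleRotation(velocity: 5.0, position: 20.0, limit: 30.0))  // 30.0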
18022 Display a representation of a shadow cast by the virtual three-dimensional object while displaying the representation of the first perspective of the virtual three-
dimensional object in the first user interface region and vary a shape of the representation of the shadow in accordance with the rotation of the virtual three-
dimensional object relative to the first axis and/or second axis
18024 While rotating the virtual three-dimensional object in the first user interface
region: in accordance with a determination that the virtual three-dimensional object is displayed with a second perspective that reveals a predefined bottom of the virtual three-dimensional object, forgo display of the representation of the shadow with the representation of the second perspective of the virtual three-dimensional object
18026 After rotating the virtual three-dimensional object in the first user interface region,
detect a second input that corresponds to a request to reset the virtual three- dimensional object in the first user interface region
18028 In response to detecting the second input, display a representation of a predefined original perspective of the virtual three-dimensional object in the first user interface
region
Figure 18D
18030 While displaying the virtual three-dimensional object in the first user interface region, detect a third input that corresponds to a request to resize the virtual three-
dimensional object
18032 In response to detecting the third input, adjust a size of the representation of the
virtual three-dimensional object in the first user interface region in accordance with a magnitude of the input
18034 While adjusting the size of the representation of the virtual three-dimensional object in the first user interface region, detect that the size of the virtual three-
dimensional object has reached a predefined default display size of the virtual three-dimensional object
18036 In response to detecting that the size of the virtual three-dimensional object has reached the predefined default display size of the virtual three- dimensional object, generate a tactile output to indicate that the virtual three-
dimensional object is displayed at the predefined default display size
Figure 18E
18042 While displaying a representation of a third perspective of the virtual three-dimensional object in the first user interface region, detect a fourth input that corresponds to a request for displaying the virtual three-dimensional object in a second user interface region that includes a field of view of one or more cameras
18044 In response to detecting the fourth input, display, via the display generation component, a representation of the virtual object over at least a portion of the field
of view of the one or more cameras that is included in the second user interface region, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object includes: rotating the virtual three-dimensional object about the first axis to a
predefined angle; and maintaining a current angle of the virtual three-dimensional object relative to the second axis
18046 While displaying a representation of a fourth perspective of the virtual three- dimensional object in the first user interface region, detect a fifth input that
corresponds to a request for returning to a two-dimensional user interface including a two-dimensional representation of the virtual three-dimensional object
18048 In response to detecting the fifth input:
rotate the virtual three-dimensional object to show a perspective of the virtual three-dimensional object that corresponds to the two-dimensional representation of the virtual three-dimensional object; and display the two-dimensional representation of the virtual three-dimensional object after the virtual three-dimensional object is rotated to show the respective perspective that corresponds to the two-dimensional representation of the virtual three-dimensional object.
Figure 18F
18050 Prior to displaying the representation of the first perspective of the virtual three-
dimensional object, display a user interface that includes a representation of the virtual three-dimensional object that includes a representation of a view of the virtual three-dimensional object from a respective perspective
18052 While displaying the representation of the virtual three-dimensional object, detect a request to display the virtual three-dimensional object
18054 In response to detecting the request to display the virtual three-dimensional object, replace display of the representation of the virtual three-dimensional object with the virtual three-dimensional object rotated to match the respective perspective of the representation of the virtual three-dimensional object
Figure 18G
18056 Prior to displaying the first user interface, display a two-dimensional user interface
including a two-dimensional representation of the virtual three-dimensional object
18058 While displaying the two-dimensional user interface including the two-dimensional representation of the virtual three-dimensional object, detect a first portion of a
touch input that meets preview criteria at a location on the touch-sensitive surface that corresponds to the two-dimensional representation of the virtual three- dimensional object
18060 In response to detecting the first portion of the touch input that meets the preview criteria, display a preview of the virtual three-dimensional object that is larger than
the two-dimensional representation of the virtual three-dimensional object
18062 While displaying the preview of the virtual three-dimensional object, detect a second portion of the touch input
18064 In response to detecting the second portion of the touch input: in accordance with a determination that the second portion of the touch input meets menu-display criteria, display a plurality of selectable options corresponding to a plurality of operations associated with the virtual object; and in accordance with a determination that the second portion of the touch input meets staging criteria, replace display of the two-dimensional user interface including the two-dimensional representation of the virtual three-dimensional object with the first user interface including the virtual three-
dimensional object
Figure 18H
18066 The first user interface includes a plurality of controls
18068 Prior to displaying the first user interface, display a two-dimensional user interface
including a two-dimensional representation of the virtual three-dimensional object
18070 In response to detecting a request to display the virtual three-dimensional object in the first user interface: display the virtual three-dimensional object in the first user interface without
displaying a set of one or more controls associated with the virtual three- dimensional object; and after displaying the virtual three-dimensional object in the first user interface,
display the set of one or more controls
Figure 18I
19002 Display, via a display generation component, a first user interface region that includes a user interface object that is associated with a plurality of object
manipulation behaviors, including a first object manipulation behavior that is performed in response to inputs that meet first gesture-recognition criteria and a
second object manipulation behavior that is performed in response to inputs that meet second gesture-recognition criteria
19004 While displaying the first user interface region, detect a first portion of an input
directed to the user interface object, including detecting movement of one or more contacts across the touch-sensitive surface, and while the one or more contacts are detected on the touch-sensitive surface, evaluate movement of the one or more contacts with respect to both the first gesture-recognition criteria and the second gesture-recognition criteria
19006 In response to detecting the first portion of the input, update an appearance of the user interface object based on the first portion of the input, including:
in accordance with a determination that the first portion of the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria:
change the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input; and update the second gesture-recognition criteria by increasing a threshold for the second gesture-recognition criteria; in accordance with a determination that the input meets the second gesture- recognition criteria before meeting the first gesture-recognition criteria:
change the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input;
and update the first gesture-recognition criteria by increasing a threshold for the first gesture-recognition criteria
Figure 19A
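Box 19006 has the two gesture-recognition criteria race, with the winner raising the loser's threshold. A minimal Swift sketch (the thresholds, the doubling factor, and all names are illustrative assumptions):

    // Thresholds and the doubling factor are invented for illustration.
    struct GestureCriteria {
        var threshold: Double
        var met = false

        mutating func evaluate(_ magnitude: Double) -> Bool {
            if !met && magnitude > threshold { met = true }
            return met
        }
    }

    var rotation = GestureCriteria(threshold: 12)      // degrees of twist
    var translation = GestureCriteria(threshold: 20)   // points of drag

    // First portion of the input: a 15-degree twist with 5 points of drag.
    if rotation.evaluate(15) && !translation.evaluate(5) {
        translation.threshold *= 2   // box 19006: raise the losing criteria
    }
    print(translation.threshold)     // 40.0: translation is now harder to begin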
19008 After updating the appearance of the user interface object based on the first portion of the input, detect a second portion of the input
19010 In response to detecting the second portion of the input, update the appearance of the user interface object based on the second portion of the input, including: in accordance with a determination that the first portion of the input met the first gesture-recognition criteria and the second portion of the input does not meet the updated second gesture-recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input without changing the appearance of the user interface object in accordance with the second object manipulation behavior; in accordance with a determination that the first portion of the input met the second gesture-recognition criteria and the second portion of the input does not meet the updated first gesture-recognition criteria: changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input without changing the appearance of the user interface object in accordance with the first object manipulation behavior
19012 While the appearance of the user interface object is changed in accordance with the first object manipulation behavior based on the second portion of the input after the first portion of the input met the first gesture-recognition criteria, the second portion of the input includes input that meets the second gesture-recognition criteria before the second gesture-recognition criteria were updated
19014 While the appearance of the user interface object is changed in accordance with the second object manipulation behavior based on the second portion of the input after the first portion of the input met the second gesture-recognition criteria, the second portion of the input includes input that meets the first gesture-recognition criteria before the first gesture-recognition criteria were updated
Figure 19B
19008 (continued)
19016 While the appearance of the user interface object is changed in accordance with the first object manipulation behavior based on the second portion of the input after the first portion of the input met the first gesture-recognition criteria, the second portion of the input does not include input that meets the first gesture-recognition criteria
19018 While the appearance of the user interface object is changed in accordance with the second object manipulation behavior based on the second portion of the input after the first portion of the input met the second gesture-recognition criteria, the second portion of the input does not include input that meets the second gesture-recognition criteria
19020 Updating the appearance of the user interface object based on the second portion of the input includes:
in accordance with a determination that the first portion of the input met the second gesture-recognition criteria and the second portion of the input meets the updated first gesture-recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input; and changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input; and
in accordance with a determination that the first portion of the input met the first gesture-recognition criteria and the second portion of the input meets the updated second gesture-recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input; and changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input
Figure 19C
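Steps 19020-19026 describe the state after two behaviors have both been recognized: subsequent movement drives all of them with no further gating. A sketch continuing the hypothetical GestureArbiter above; GestureDelta and ManipulableObject are likewise made-up types.

```swift
import CoreGraphics

// Supporting types for the sketch; all names remain hypothetical.
struct GestureDelta {
    var magnitudes: [ManipulationKind: CGFloat]  // accumulated per-behavior magnitude
    var translation: CGPoint                     // movement of the contact centroid
    var rotation: CGFloat                        // change in contact angle, radians
    var pinchScale: CGFloat                      // ratio of contact distances
}

final class ManipulableObject {
    var position = CGPoint.zero
    var yaw: CGFloat = 0
    var scale: CGFloat = 1
}

// Step 19020: once a second behavior clears its raised threshold, every
// recognized behavior is applied; steps 19022-19026 then let later portions
// of the input drive all of them with no further thresholds.
func apply(_ delta: GestureDelta, to object: ManipulableObject, using arbiter: GestureArbiter) {
    let active = arbiter.update(magnitudes: delta.magnitudes)
    if active.contains(.translate) {
        object.position.x += delta.translation.x
        object.position.y += delta.translation.y
    }
    if active.contains(.rotate) { object.yaw += delta.rotation }
    if active.contains(.scale)  { object.scale *= delta.pinchScale }
}
```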
19008 (continued)
19020 (continued)
19022 After updating the appearance of the user interface object based on the second portion of the input, detect a third portion of the input
19026 The third portion of the input does not include input that meets the first gesture-recognition criteria or input that meets the second gesture-recognition criteria
19024 In response to detecting the third portion of the input, update the appearance of the user interface object based on the third portion of the input, including: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the third portion of the input; and changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the third portion of the input
Figure 19D
19008 (continued)
19020 (continued)
19028 The plurality of object manipulation behaviors includes a third object manipulation behavior that is performed in response to inputs that meet third gesture-recognition criteria
19030 Updating the appearance of the user interface object based on the first portion of the input includes:
in accordance with a determination that the first portion of the input meets the first gesture-recognition criteria before meeting the second gesture-recognition criteria or meeting the third gesture-recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input; updating the second gesture-recognition criteria by increasing the threshold for the second gesture-recognition criteria; and updating the third gesture-recognition criteria by increasing a threshold for the third gesture-recognition criteria;
in accordance with a determination that the input meets the second gesture-recognition criteria before meeting the first gesture-recognition criteria or meeting the third gesture-recognition criteria: changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input; updating the first gesture-recognition criteria by increasing a threshold for the first gesture-recognition criteria; and updating the third gesture-recognition criteria by increasing a threshold for the third gesture-recognition criteria; and
in accordance with a determination that the input meets the third gesture-recognition criteria before meeting the first gesture-recognition criteria or meeting the second gesture-recognition criteria: changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the first portion of the input; updating the first gesture-recognition criteria by increasing a threshold for the first gesture-recognition criteria; and updating the second gesture-recognition criteria by increasing the threshold for the second gesture-recognition criteria
Figure 19E
19008 (continued)
19020 (continued)
19032 The plurality of object manipulation behaviors includes a third object manipulation behavior that is performed in response to inputs that meet third gesture-recognition criteria; the first portion of the input did not meet the third gesture-recognition criteria before meeting the first gesture-recognition criteria or the second gesture-recognition criteria; the device updated the third gesture-recognition criteria by increasing a threshold for the third gesture-recognition criteria after the first portion of the input met the first gesture-recognition criteria or the second gesture-recognition criteria; and the second portion of the input did not meet the updated third gesture-recognition criteria before meeting the updated first gesture-recognition criteria or the updated second gesture-recognition criteria
19034 In response to detecting the third portion of the input:
in accordance with a determination that the third portion of the input meets the updated third gesture-recognition criteria, change the appearance of the user interface object in accordance with the third object manipulation behavior based on the third portion of the input; and
in accordance with a determination that the third portion of the input does not meet the updated third gesture-recognition criteria, forgo changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the third portion of the input
Figure 19F
19008 (continued)
19020 (continued)
19036 The third portion of the input met the updated third gesture-recognition criteria
19038 After updating the appearance of the user interface object based on the third portion of the input, detect a fourth portion of the input
19040 In response to detecting the fourth portion of the input, update the appearance of the user interface object based on the fourth portion of the input, including:
changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the fourth portion of the input;
changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the fourth portion of the input; and
changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the fourth portion of the input
19042 The fourth portion of the input does not include input that meets the first gesture-recognition criteria, input that meets the second gesture-recognition criteria, or input that meets the third gesture-recognition criteria
Figure 19G
19044 The first gesture-recognition criteria and the second gesture-recognition criteria both require a first number of concurrently detected contacts in order to be met
19046 The first object manipulation behavior changes a zoom level or displayed size of the user interface object and the second object manipulation behavior changes a rotational angle of the user interface object
19048 The first object manipulation behavior changes a zoom level or displayed size of the user interface object and the second object manipulation behavior changes a position of the user interface object in the first user interface region
19050 The first object manipulation behavior changes a position of the user interface object in the first user interface region and the second object manipulation behavior changes a rotational angle of the user interface object
19052 The first portion of the input and the second portion of the input are provided by a plurality of continuously maintained contacts
19054 Re-establish the first gesture-recognition criteria and the second gesture-recognition criteria to initiate additional first and second object-manipulation behaviors after detecting lift-off of the plurality of continuously maintained contacts
19056 The first gesture-recognition criteria correspond to rotation around a first axis and the second gesture-recognition criteria correspond to rotation around a second axis that is orthogonal to the first axis
Figure 19H
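Steps 19052-19054 tie the escalated thresholds to the lifetime of the continuously maintained contacts: once every contact lifts, the original criteria are re-established so a fresh gesture can trigger any behavior again. A sketch reusing the hypothetical GestureArbiter; ModelGestureView is likewise a made-up name.

```swift
import UIKit

// Sketch of steps 19052-19054: reset the escalated criteria on lift-off.
final class ModelGestureView: UIView {
    let arbiter = GestureArbiter()

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesEnded(touches, with: event)
        // Only reset once every one of the continuously maintained contacts
        // has lifted or been cancelled.
        let allLifted = event?.allTouches?.allSatisfy {
            $0.phase == .ended || $0.phase == .cancelled
        } ?? true
        if allLifted {
            arbiter.resetAfterLiftOff()  // fresh thresholds for the next gesture
        }
    }
}
```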
20002 Display, via a display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras, wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras
20004 Detect movement of the device that adjusts the field of view of the one or more cameras
20006 In response to detecting movement of the device that adjusts the field of view of the one or more cameras: adjust display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted; and, in accordance with a determination that the movement of the device causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generate, via one or more audio output generators, a first audio alert
20008 Outputting the first audio alert includes generating an audio output that indicates an amount of the virtual object that remains visible on the displayed portion of the field of view of the one or more cameras
20010 Outputting the first audio alert includes generating an audio output that indicates an amount of the displayed portion of the field of view that is occluded by the virtual object
Figure 20A
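Steps 20006-20008 generate an audio alert when device movement pushes the virtual object out of the displayed camera feed, with the alert conveying how much of the object remains visible. A sketch under the assumption that the object's projected screen-space bounding rectangle is available; visibleFraction, alertIfMovedOffscreen, the 50% threshold, and the announcement wording are invented for illustration.

```swift
import UIKit

// Fraction of the object's projected rectangle that lies inside the viewport.
func visibleFraction(of projected: CGRect, in viewport: CGRect) -> CGFloat {
    let visible = projected.intersection(viewport)
    guard !visible.isNull, projected.width > 0, projected.height > 0 else { return 0 }
    return (visible.width * visible.height) / (projected.width * projected.height)
}

// Step 20006: when too much of the object has moved offscreen, emit the
// first audio alert; step 20008: say how much is still visible.
func alertIfMovedOffscreen(projected: CGRect, viewport: CGRect, threshold: CGFloat = 0.5) {
    let fraction = visibleFraction(of: projected, in: viewport)
    guard fraction < threshold else { return }
    UIAccessibility.post(notification: .announcement,
                         argument: "Object is \(Int(fraction * 100)) percent visible")
}
```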
20012 Detect an input by a contact at a location on the touch-sensitive surface that corresponds to the representation of the field of view of the one or more cameras
20014 In response to detecting the input, and in accordance with a determination that the input is detected at a first location on the touch-sensitive surface that corresponds to a first portion of the field of view of the one or more cameras that is not occupied by the virtual object, generate a second audio alert
20016 Outputting the first audio alert includes generating an audio output that indicates an operation that is performed with respect to the virtual object and a resulting state of the virtual object after the performance of the operation
20018 The resulting state of the virtual object after performance of the operation is described in the audio output in the first audio alert in relation to a reference frame corresponding to the physical environment captured in the field of view of the one or more cameras
20020 Detect additional movement of the device that further adjusts the field of view of the one or more cameras after generation of the first audio alert
20022 In response to detecting the additional movement of the device that further adjusts the field of view of the one or more cameras: adjust display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is further adjusted; and, in accordance with a determination that the additional movement of the device causes more than a second threshold amount of the virtual object to move into a displayed portion of the field of view of the one or more cameras, generate, via the one or more audio output generators, a third audio alert
Figure 20B
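Step 20014 distinguishes taps that land on empty camera feed from taps on the object itself. A minimal sketch, again assuming the object's projected rectangle is known; the announcement wording is an assumption.

```swift
import UIKit

// Sketch of step 20014: a tap on a portion of the camera feed not occupied
// by the virtual object yields a second, distinct audio alert.
func handleTap(at point: CGPoint, objectProjection: CGRect) {
    if !objectProjection.contains(point) {
        UIAccessibility.post(notification: .announcement,
                             argument: "No object at this location")
    }
}
```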
20024 While displaying the representation of the virtual object in the first user interface region and a first object manipulation type of a plurality of object manipulation types applicable to the virtual object is currently selected for the virtual object, detect a request to switch to another object manipulation type applicable to the virtual object
20026 In response to detecting the request to switch to another object manipulation type applicable to the virtual object, generate an audio output that names a second object manipulation type among a plurality of object manipulation types applicable to the virtual object, wherein the second object manipulation type is distinct from the first object manipulation type
20028 After generating the audio output that names the second object manipulation type among the plurality of object manipulation types applicable to the virtual object, detect a request to execute an object manipulation behavior corresponding to a currently selected object manipulation type
20030 In response to detecting the request to perform the object manipulation behavior corresponding to the currently selected object manipulation type, execute an object manipulation behavior that corresponds to the second object manipulation type
20032 In response to detecting the request to switch to another object manipulation type applicable to the virtual object:
in accordance with a determination that the second object manipulation type is a continuously adjustable manipulation type, generate an audio alert in conjunction with the audio output naming the second object manipulation type, to indicate that the second object manipulation type is a continuously adjustable manipulation type;
detect a request to execute the object manipulation behavior that corresponds to the second object manipulation type, including detecting a swipe input at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras; and
in response to detecting the request to execute the object manipulation behavior corresponding to the second object manipulation type, execute the object manipulation behavior corresponding to the second object manipulation type by an amount that corresponds to a magnitude of the swipe input
Figure 20C
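Steps 20024-20032 describe cycling through manipulation types with spoken names, an extra audio cue for continuously adjustable types, and swipe-magnitude adjustment of whichever type is currently selected. A sketch with hypothetical names (ManipulationType, ManipulationSelector); the ", adjustable" suffix stands in for the additional audio alert of step 20032.

```swift
import UIKit

struct ManipulationType {
    let name: String                      // e.g. "Rotate", spoken on selection
    let isContinuouslyAdjustable: Bool
    let apply: (CGFloat) -> Void          // amount derived from swipe magnitude
}

final class ManipulationSelector {
    private let types: [ManipulationType]
    private var index = 0
    init(types: [ManipulationType]) { self.types = types }

    // Step 20026: name the newly selected type; step 20032: add a cue when
    // the type is continuously adjustable.
    func selectNext() {
        index = (index + 1) % types.count
        let type = types[index]
        var announcement = type.name
        if type.isContinuouslyAdjustable { announcement += ", adjustable" }
        UIAccessibility.post(notification: .announcement, argument: announcement)
    }

    // Steps 20028-20030: a request to execute always targets the currently
    // selected type, by an amount proportional to the swipe (20032).
    func handleAdjustmentSwipe(magnitude: CGFloat) {
        types[index].apply(magnitude)
    }
}
```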
20034 Prior to displaying the representation of the virtual object in the first user interface region, display the representation of the virtual object in a second user interface region, wherein the second user interface region does not include a representation of the field of view of one or more cameras
20036 While displaying the representation of the virtual object in the second user interface region and a first operation of a plurality of operations applicable to the virtual object is currently selected for the virtual object, detect a request to switch to another operation applicable to the virtual object
20038 In response to detecting the request to switch to another operation applicable to the virtual object in the second user interface region, generate an audio output naming a second operation among the plurality of operations applicable to the virtual object, wherein the second operation is distinct from the first operation
Figure 20D
20040 Prior to displaying the representation of the virtual object in the first user interface region:
while displaying the representation of the virtual object in a second user interface region that does not include a representation of the field of view of the one or more cameras, detect a request to display a representation of the virtual object in the first user interface region that includes a representation of the field of view of the one or more cameras; and
in response to detecting the request to display a representation of the virtual object in the first user interface region that includes a representation of the field of view of the one or more cameras: display a representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the representation of the virtual object and the plane detected within the physical environment that is captured in the field of view of the one or more cameras; and generate a fourth audio alert indicating that the virtual object is placed in the augmented reality view in relation to the physical environment captured in the field of view of the one or more cameras
20042 The third audio alert indicates information about an appearance of the virtual object relative to the portion of the field of view of the one or more cameras
20044 Generate a tactile output in conjunction with placement of the virtual object in the augmented reality view in relation to the physical environment captured in the field of view of the one or more cameras
Figure 20E
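Steps 20040 and 20044 pair the placement announcement with a tactile output. A minimal sketch; the announcement phrasing and the choice of a medium impact generator are assumptions.

```swift
import UIKit

// Sketch of steps 20040 and 20044: announce placement relative to the
// captured environment and play a tactile output in conjunction with it.
func notifyPlacement(objectName: String, surfaceName: String) {
    // Fourth audio alert: object placed in the AR view (20040).
    UIAccessibility.post(notification: .announcement,
                         argument: "\(objectName) placed on \(surfaceName)")
    // Tactile output accompanying the placement (20044).
    let generator = UIImpactFeedbackGenerator(style: .medium)
    generator.prepare()
    generator.impactOccurred()
}
```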
20046 Display a first control at a first location in the first user interface region, concurrently with a representation of the field of view of the one or more cameras
20048 In accordance with a determination that control-fading criteria are met, cease to display the first control in the first user interface region while maintaining display of
the representation of the field of view of the one or more cameras in the first user interface region
20050 While displaying the first user interface region without displaying the first control in the first user interface region, detect a touch input at a respective location on the touch-sensitive surface that corresponds to the first location in the first user interface region
20052 In response to detecting the touch input, generate a fifth audio alert including an audio output that specifies an operation corresponding to the first control
Figure 20F
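Steps 20046-20052 describe a control that fades away while its location stays touch-sensitive, with a later touch producing an audio description of the control's operation. A sketch with hypothetical names (FadingControl, operationDescription) and invented timings.

```swift
import UIKit

// Sketch of steps 20046-20052: the control fades after inactivity, but its
// hit region stays live; touching it while hidden speaks its operation.
final class FadingControl: UIControl {
    var operationDescription = "Share object"   // assumed example operation
    private var fadeTimer: Timer?

    func scheduleFade(after seconds: TimeInterval = 3) {
        fadeTimer?.invalidate()
        fadeTimer = Timer.scheduledTimer(withTimeInterval: seconds, repeats: false) { [weak self] _ in
            // Control-fading criteria met: hide the control while the camera
            // feed remains displayed (20048).
            UIView.animate(withDuration: 0.25) { self?.alpha = 0 }
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        if alpha == 0 {
            // Fifth audio alert: name the operation the hidden control
            // performs (20052).
            UIAccessibility.post(notification: .announcement,
                                 argument: operationDescription)
        }
    }
}
```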
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2024200149A AU2024200149B2 (en) | 2018-01-24 | 2024-01-10 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
Applications Claiming Priority (12)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201862621529P | 2018-01-24 | 2018-01-24 | |
| US62/621,529 | 2018-01-24 | ||
| US201862679951P | 2018-06-03 | 2018-06-03 | |
| US62/679,951 | 2018-06-03 | ||
| DKPA201870346 | 2018-06-11 | ||
| DKPA201870346A DK201870346A1 (en) | 2018-01-24 | 2018-06-11 | Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models |
| US16/145,035 US11099707B2 (en) | 2018-01-24 | 2018-09-27 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
| US16/145,035 | 2018-09-27 | ||
| PCT/US2019/014791 WO2019147699A2 (en) | 2018-01-24 | 2019-01-23 | Devices, methods, and graphical user interfaces for system-wide behavior for 3d models |
| AU2019212150A AU2019212150B2 (en) | 2018-01-24 | 2019-01-23 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
| AU2022201389A AU2022201389B2 (en) | 2018-01-24 | 2022-03-01 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
| AU2024200149A AU2024200149B2 (en) | 2018-01-24 | 2024-01-10 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| AU2022201389A Division AU2022201389B2 (en) | 2018-01-24 | 2022-03-01 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| AU2024200149A1 AU2024200149A1 (en) | 2024-01-25 |
| AU2024200149B2 true AU2024200149B2 (en) | 2025-12-04 |
Family
ID=67395660
Family Applications (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| AU2019212150A Active AU2019212150B2 (en) | 2018-01-24 | 2019-01-23 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
| AU2022201389A Active AU2022201389B2 (en) | 2018-01-24 | 2022-03-01 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
| AU2024200149A Active AU2024200149B2 (en) | 2018-01-24 | 2024-01-10 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
Family Applications Before (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| AU2019212150A Active AU2019212150B2 (en) | 2018-01-24 | 2019-01-23 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
| AU2022201389A Active AU2022201389B2 (en) | 2018-01-24 | 2022-03-01 | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
Country Status (5)
| Country | Link |
|---|---|
| JP (3) | JP7039714B2 (en) |
| KR (3) | KR20240075927A (en) |
| CN (1) | CN120406805A (en) |
| AU (3) | AU2019212150B2 (en) |
| WO (1) | WO2019147699A2 (en) |
Families Citing this family (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116324680A (en) | 2020-09-11 | 2023-06-23 | 苹果公司 | Methods for manipulating objects in the environment |
| US12236546B1 (en) | 2020-09-24 | 2025-02-25 | Apple Inc. | Object manipulations with a pointing device |
| EP3993449A1 (en) | 2020-11-02 | 2022-05-04 | Inter IKEA Systems B.V. | Method and device for communicating a soundscape in an environment |
| KR102442715B1 (en) * | 2020-12-02 | 2022-09-14 | 한국전자기술연구원 | Apparatus and method for reproducing augmented reality image based on divided rendering image |
| US11995230B2 (en) | 2021-02-11 | 2024-05-28 | Apple Inc. | Methods for presenting and sharing content in an environment |
| US12333083B2 (en) | 2021-03-22 | 2025-06-17 | Apple Inc. | Methods for manipulating objects in an environment |
| US12242706B2 (en) * | 2021-07-28 | 2025-03-04 | Apple Inc. | Devices, methods and graphical user interfaces for three-dimensional preview of objects |
| US12236515B2 (en) | 2021-07-28 | 2025-02-25 | Apple Inc. | System and method for interactive three- dimensional preview |
| US12456271B1 (en) | 2021-11-19 | 2025-10-28 | Apple Inc. | System and method of three-dimensional object cleanup and text annotation |
| US12524977B2 (en) | 2022-01-12 | 2026-01-13 | Apple Inc. | Methods for displaying, selecting and moving objects and containers in an environment |
| CN119473001A (en) | 2022-01-19 | 2025-02-18 | 苹果公司 | Methods for displaying and repositioning objects in the environment |
| US12541280B2 (en) | 2022-02-28 | 2026-02-03 | Apple Inc. | System and method of three-dimensional placement and refinement in multi-user communication sessions |
| US12283020B2 (en) | 2022-05-17 | 2025-04-22 | Apple Inc. | Systems, methods, and user interfaces for generating a three-dimensional virtual representation of an object |
| US12112011B2 (en) | 2022-09-16 | 2024-10-08 | Apple Inc. | System and method of application-based three-dimensional refinement in multi-user communication sessions |
| WO2024064930A1 (en) | 2022-09-23 | 2024-03-28 | Apple Inc. | Methods for manipulating a virtual object |
| EP4591145A1 (en) | 2022-09-24 | 2025-07-30 | Apple Inc. | Methods for time of day adjustments for environments and environment presentation during communication sessions |
| EP4591133A1 (en) | 2022-09-24 | 2025-07-30 | Apple Inc. | Methods for controlling and interacting with a three-dimensional environment |
| US12437494B2 (en) | 2022-09-24 | 2025-10-07 | Apple Inc. | Systems and methods of creating and editing virtual objects using voxels |
| EP4659088A1 (en) | 2023-01-30 | 2025-12-10 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying sets of controls in response to gaze and/or gesture inputs |
| WO2024254096A1 (en) | 2023-06-04 | 2024-12-12 | Apple Inc. | Methods for managing overlapping windows and applying visual effects |
| KR20250041794A (en) * | 2023-09-19 | 2025-03-26 | 삼성전자주식회사 | Electronic apparatus and control method thereof |
| WO2025071026A1 (en) * | 2023-09-25 | 2025-04-03 | 삼성전자 주식회사 | Electronic device comprising camera and operation method thereof |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9244562B1 (en) * | 2009-07-31 | 2016-01-26 | Amazon Technologies, Inc. | Gestures and touches on force-sensitive input devices |
| US20160092080A1 (en) * | 2014-09-26 | 2016-03-31 | Disney Enterprises, Inc. | Touch interface for precise rotation of an object |
Family Cites Families (28)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP3761106B2 (en) * | 1996-03-04 | 2006-03-29 | シャープ株式会社 | Image forming apparatus having magnification setting device |
| JP2004046326A (en) * | 2002-07-09 | 2004-02-12 | Dainippon Screen Mfg Co Ltd | Device and method for displaying picture and program |
| JP2007047294A (en) * | 2005-08-08 | 2007-02-22 | Nippon Hoso Kyokai <Nhk> | Stereoscopic image display device |
| US20080071559A1 (en) * | 2006-09-19 | 2008-03-20 | Juha Arrasvuori | Augmented reality assisted shopping |
| US10198854B2 (en) * | 2009-08-14 | 2019-02-05 | Microsoft Technology Licensing, Llc | Manipulation of 3-dimensional graphical objects for view in a multi-touch display |
| US8922583B2 (en) * | 2009-11-17 | 2014-12-30 | Qualcomm Incorporated | System and method of controlling three dimensional virtual objects on a portable computing device |
| JP2012089112A (en) * | 2010-09-22 | 2012-05-10 | Nikon Corp | Image display device |
| JP5799521B2 (en) * | 2011-02-15 | 2015-10-28 | ソニー株式会社 | Information processing apparatus, authoring method, and program |
| KR101852428B1 (en) * | 2011-03-09 | 2018-04-26 | 엘지전자 주식회사 | Mobile twrminal and 3d object control method thereof |
| US8581901B2 (en) * | 2011-07-28 | 2013-11-12 | Adobe Systems Incorporated | Methods and apparatus for interactive rotation of 3D objects using multitouch gestures |
| JP5966834B2 (en) * | 2012-02-29 | 2016-08-10 | 株式会社Jvcケンウッド | Image processing apparatus, image processing method, and image processing program |
| US20130234926A1 (en) * | 2012-03-07 | 2013-09-12 | Qualcomm Incorporated | Visually guiding motion to be performed by a user |
| CN108052264B (en) * | 2012-05-09 | 2021-04-27 | 苹果公司 | Device, method and graphical user interface for moving and placing user interface objects |
| US9159153B2 (en) * | 2012-06-05 | 2015-10-13 | Apple Inc. | Method, system and apparatus for providing visual feedback of a map view change |
| JP6214981B2 (en) * | 2012-10-05 | 2017-10-18 | 株式会社ファイン | Architectural image display device, architectural image display method, and computer program |
| US20140282220A1 (en) * | 2013-03-14 | 2014-09-18 | Tim Wantland | Presenting object models in augmented reality images |
| US9286727B2 (en) * | 2013-03-25 | 2016-03-15 | Qualcomm Incorporated | System and method for presenting true product dimensions within an augmented real-world setting |
| US9245387B2 (en) * | 2013-04-12 | 2016-01-26 | Microsoft Technology Licensing, Llc | Holographic snap grid |
| US9501725B2 (en) * | 2013-06-11 | 2016-11-22 | Qualcomm Incorporated | Interactive and automatic 3-D object scanning method for the purpose of database creation |
| JP2015001875A (en) * | 2013-06-17 | 2015-01-05 | ソニー株式会社 | Image processing apparatus, image processing method, program, print medium, and set of print medium |
| US10175483B2 (en) * | 2013-06-18 | 2019-01-08 | Microsoft Technology Licensing, Llc | Hybrid world/body locked HUD on an HMD |
| JP5937128B2 (en) * | 2014-03-17 | 2016-06-22 | 富士フイルム株式会社 | Augmented reality providing system, method and program |
| US9495008B2 (en) * | 2014-06-27 | 2016-11-15 | Amazon Technologies, Inc. | Detecting a primary user of a device |
| EP3291534A1 (en) * | 2014-09-02 | 2018-03-07 | Apple Inc. | Remote camera user interface |
| WO2016121120A1 (en) * | 2015-01-30 | 2016-08-04 | 技術研究組合次世代3D積層造形技術総合開発機構 | Three-dimensional shaping system, information processing apparatus, method for arranging three-dimensional shaping models, and program for arranging three-dimensional shaping models |
| JP6292344B2 (en) * | 2015-03-23 | 2018-03-14 | 株式会社村田製作所 | Touch input device |
| US10176641B2 (en) * | 2016-03-21 | 2019-01-08 | Microsoft Technology Licensing, Llc | Displaying three-dimensional virtual objects based on field of view |
| WO2017208637A1 (en) * | 2016-05-31 | 2017-12-07 | ソニー株式会社 | Information processing device, information processing method, and program |
2019
- 2019-01-23 WO PCT/US2019/014791 patent/WO2019147699A2/en not_active Ceased
- 2019-01-23 AU AU2019212150A patent/AU2019212150B2/en active Active
- 2019-01-23 KR KR1020247015723A patent/KR20240075927A/en active Pending
- 2019-01-23 KR KR1020227015606A patent/KR102666508B1/en active Active
- 2019-01-23 JP JP2020540320A patent/JP7039714B2/en active Active
- 2019-01-23 CN CN202510570529.5A patent/CN120406805A/en active Pending
- 2019-01-23 KR KR1020207024313A patent/KR102397481B1/en active Active

2022
- 2022-03-01 AU AU2022201389A patent/AU2022201389B2/en active Active
- 2022-03-09 JP JP2022036354A patent/JP7397899B2/en active Active

2023
- 2023-12-01 JP JP2023204027A patent/JP7625063B2/en active Active

2024
- 2024-01-10 AU AU2024200149A patent/AU2024200149B2/en active Active
Also Published As
| Publication number | Publication date |
|---|---|
| KR102397481B1 (en) | 2022-05-12 |
| JP2022091798A (en) | 2022-06-21 |
| JP7625063B2 (en) | 2025-01-31 |
| AU2024200149A1 (en) | 2024-01-25 |
| KR20240075927A (en) | 2024-05-29 |
| WO2019147699A8 (en) | 2020-08-06 |
| AU2022201389A1 (en) | 2022-03-24 |
| JP2021518935A (en) | 2021-08-05 |
| KR102666508B1 (en) | 2024-05-20 |
| AU2019212150B2 (en) | 2021-12-16 |
| JP7397899B2 (en) | 2023-12-13 |
| JP7039714B2 (en) | 2022-03-22 |
| KR20220065899A (en) | 2022-05-20 |
| AU2022201389B2 (en) | 2023-10-12 |
| AU2019212150A1 (en) | 2020-08-20 |
| WO2019147699A2 (en) | 2019-08-01 |
| JP2024026250A (en) | 2024-02-28 |
| KR20200110788A (en) | 2020-09-25 |
| WO2019147699A3 (en) | 2019-09-19 |
| CN120406805A (en) | 2025-08-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| AU2024200149B2 (en) | | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
| US12099692B2 (en) | | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |
| JP6745852B2 (en) | | Devices, methods, and graphical user interfaces for system-wide behavior of 3D models |
| US11073374B2 (en) | | Devices and methods for measuring using augmented reality |
| AU2019101597B4 (en) | | Devices, methods, and graphical user interfaces for system-wide behavior for 3D models |