WO2025265153A9 - Providing indications of interactive user interfaces - Google Patents
- Publication number: WO2025265153A9 (application PCT/US2025/048554)
- Authority: WIPO (PCT)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The present disclosure generally relates to displaying a representation of a software object and to responding to directional inputs.
Description
Attorney Docket No. 032501 (P68100W01)
PROVIDING INDICATIONS OF INTERACTIVE USER INTERFACES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present international application claims the benefit of U.S. Provisional Patent Application Serial No. 63/889,077, entitled "DIRECTIONAL INPUTS," filed September 26, 2025; of U.S. Provisional Patent Application Serial No. 63/700,488, entitled "PROVIDING INDICATIONS OF INTERACTIVE USER INTERFACES," filed September 27, 2024; and of U.S. Provisional Patent Application Serial No. 63/700,463, entitled "DIRECTIONAL INPUTS," filed September 27, 2024, each of which is hereby incorporated by reference in its entirety for all purposes.
BACKGROUND
[0002] Electronic devices display interactive software objects and respond to various types of user input. The response an electronic device produces may differ depending on which software object is involved and on the type of input received.
SUMMARY
[0003] Executable instructions for performing the techniques below are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing such techniques are, optionally, included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
[0004] In some embodiments, a method that is performed at a computer system that is in communication with one or more input devices, a display component, and a movement component is described. In some embodiments, the method comprises: while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
[0005] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices, a display component, and a movement component is described. In some embodiments, the one or more programs include instructions for: while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
[0006] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices, a display component, and a movement component is described. In some embodiments, the one or more programs include instructions for: while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
[0007] In some embodiments, a computer system configured to communicate with one or more input devices, a display component, and a movement component is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs include instructions for: while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
[0008] In some embodiments, a computer system configured to communicate with one or more input devices, a display component, and a movement component is described. In some embodiments, the computer system comprises means for performing each of the following steps: while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
[0009] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices, a display component, and a movement component. In some embodiments, the one or more programs include instructions for: while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
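The behavior recited in paragraphs [0004]-[0009] can be illustrated with a minimal sketch. All class and function names below are invented for illustration; the disclosure does not specify an implementation. The sketch assumes the "movement" is recorded as a labeled actuation performed in conjunction with displaying the second representation, and that nothing happens when the incoming object is the one already displayed.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DisplayComponent:
    """Tracks which software-object representation is currently displayed."""
    shown: Optional[str] = None

    def display(self, representation: str) -> None:
        self.shown = representation


@dataclass
class MovementComponent:
    """Records movements performed (e.g., a physical or haptic actuation)."""
    movements: List[str] = field(default_factory=list)

    def perform(self, movement: str) -> None:
        self.movements.append(movement)


@dataclass
class ComputerSystem:
    display_component: DisplayComponent
    movement_component: MovementComponent

    def on_indication(self, incoming: str) -> None:
        """Handle an indication that `incoming`'s representation is to be shown."""
        # Act only when the second software object differs from the first.
        if incoming != self.display_component.shown:
            # Perform a movement in conjunction with displaying the
            # representation of the second software object.
            self.movement_component.perform(f"transition-to-{incoming}")
            self.display_component.display(incoming)


system = ComputerSystem(DisplayComponent(shown="object-A"), MovementComponent())
system.on_indication("object-B")
```

After the call, the display shows "object-B" and exactly one movement has been recorded; a repeated indication for the same object performs no further movement.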
[0010] In some embodiments, a method that is performed at a computer system that is in communication with one or more input devices and a display component is described. In some embodiments, the method comprises: detecting, via the one or more input devices, a request; in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and while displaying, via the display generation component, the representation of the first software object that corresponds to the respective application: in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application, forgoing displaying, via the display component, the identifier corresponding to the first software object.
[0011] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component is described. In some embodiments, the one or more programs include instructions for: detecting, via the one or more input devices, a request; in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and while displaying, via the display generation component, the representation of the first software object that corresponds to the respective application: in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application, forgoing displaying, via the display component, the identifier corresponding to the first software object.
[0012] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component is described. In some embodiments, the one or more programs include instructions for: detecting, via the one or more input devices, a request; in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and while displaying, via the display generation component, the representation of the first software object that corresponds to the respective application: in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application, forgoing displaying, via the display component, the identifier corresponding to the first software object.
[0013] In some embodiments, a computer system configured to communicate with one or more input devices and a display component is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs include instructions for: detecting, via the one or more input devices, a request; in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and while displaying, via the display generation component, the representation of the first software object that corresponds to the respective application: in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application, forgoing displaying, via the display component, the identifier corresponding to the first software object.
[0014] In some embodiments, a computer system configured to communicate with one or more input devices and a display component is described. In some embodiments, the computer system comprises means for performing each of the following steps: detecting, via the one or more input devices, a request; in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and while displaying, via the display generation component, the representation of the first software object that corresponds to the respective application: in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application, forgoing displaying, via the display component, the identifier corresponding to the first software object.
[0015] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component. In some embodiments, the one or more programs include instructions for: detecting, via the one or more input devices, a request; in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and while displaying, via the display generation component, the representation of the first software object that corresponds to the respective application: in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application, forgoing displaying, via the display component, the identifier corresponding to the first software object.
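The conditional-identifier behavior of paragraphs [0010]-[0015] can be sketched as a simple predicate. The function name, the "labeled applications" set, and the identifier format below are hypothetical; the disclosure says only that the identifier is displayed when the object corresponds to a first application and forgone when it corresponds to a second, different application.

```python
from typing import Optional, Set


def identifier_for(object_app: str, object_name: str,
                   labeled_apps: Set[str]) -> Optional[str]:
    """Return an identifier to display alongside the representation,
    or None when displaying the identifier should be forgone."""
    if object_app in labeled_apps:
        # First set of criteria satisfied: the software object corresponds
        # to an application for which an identifier is displayed.
        return f"id:{object_name}"
    # Second set of criteria satisfied: the object corresponds to a
    # different application, so forgo displaying the identifier.
    return None
```

For example, with `labeled_apps = {"clock"}`, a clock object yields `"id:world-clock"` while a music object yields `None`.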
[0016] In some embodiments, a method that is performed at a computer system that is in communication with one or more input devices and a display component is described. In some embodiments, the method comprises: while displaying, via the display component, a representation of a first software object that corresponds to a first application, detecting input representing a request; and in response to receiving the input: in accordance with a determination that the request corresponds to a second application different from the first application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: continuing displaying, via the display component, the representation of the first software object; and forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
[0017] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component is described. In some embodiments, the one or more programs include instructions for: while displaying, via the display component, a representation of a first software object that corresponds to a first application, detecting input representing a request; and in response to receiving the input: in accordance with a determination that the request corresponds to a second application different from the first application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: continuing displaying, via the display component, the representation of the first software object; and forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
[0018] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component is described. In some embodiments, the one or more programs include instructions for: while displaying, via the display component, a representation of a first software object that corresponds to a first application, detecting input representing a request; and in response to receiving the input: in accordance with a determination that the request corresponds to a second application different from the first application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: continuing displaying, via the display component, the representation of the first software object; and forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
[0019] In some embodiments, a computer system configured to communicate with one or more input devices and a display component is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs include instructions for: while displaying, via the display component, a representation of a first software object that corresponds to a first application, detecting input representing a request; and in response to receiving the input: in accordance with a determination that the request corresponds to a second application different from the first application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: continuing displaying, via the display component, the representation of the first software object; and forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
[0020] In some embodiments, a computer system configured to communicate with one or more input devices and a display component is described. In some embodiments, the computer system comprises means for performing each of the following steps: while displaying, via the display component, a representation of a first software object that corresponds to a first application, detecting input representing a request; and in response to receiving the input: in accordance with a determination that the request corresponds to a second application different from the first application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: continuing displaying, via the display component, the representation of the first software object; and forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
[0021] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component. In some embodiments, the one or more programs include instructions for: while displaying, via the display component, a representation of a first software object that corresponds to a first application, detecting input representing a request; and in response to receiving the input: in accordance with a determination that the request corresponds to a second application different from the first application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: continuing displaying, via the display component, the representation of the first software object; and forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
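The request-handling logic of paragraphs [0016]-[0021] reduces to a comparison between the requested application and the one whose representation is already displayed. The function below is a hypothetical sketch; application names like "mail" and "music" are invented for illustration.

```python
def handle_request(displayed_app: str, requested_app: str) -> str:
    """Return the application whose software-object representation should
    be displayed after a request is detected."""
    if requested_app != displayed_app:
        # The request corresponds to a second, different application:
        # display the representation of the second software object.
        return requested_app
    # The request corresponds to the first (displayed) application:
    # continue displaying the first representation and forgo the second.
    return displayed_app
```

A request for "music" while "mail" is displayed switches representations; a request for "mail" leaves the display unchanged.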
[0022] In some embodiments, a method that is performed at a computer system that is in communication with one or more display components and one or more input devices is described. In some embodiments, the method comprises: while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input: in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
[0023] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices is described. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input: in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
[0024] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices is described. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input: in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
[0025] In some embodiments, a computer system configured to communicate with one or more display components and one or more input devices is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input: in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
[0026] In some embodiments, a computer system configured to communicate with one or more display components and one or more input devices is described. In some embodiments, the computer system comprises means for performing each of the following steps: while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input: in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
[0027] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input: in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
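The directional mode-change logic of paragraphs [0022]-[0027] can be sketched as a pure function over the input's start location and direction. The direction labels "up" and "down", and the mode names, are invented stand-ins; the disclosure only distinguishes a first direction (which changes the mode) from a second, different direction (which maintains it).

```python
def next_mode(current_mode: str, starts_on_object: bool, direction: str) -> str:
    """Decide the operating mode after a directional input.

    Assumes "up" stands in for the first direction; an input starting at
    the software object's representation and proceeding "up" changes the
    mode, while any other direction maintains the current mode. Inputs
    that do not start on the representation are left unspecified by the
    disclosure; this sketch keeps the current mode for them.
    """
    if starts_on_object and direction == "up":
        # First direction: change to a second mode different from the first.
        return "second-mode"
    # Second direction (or input not on the object): maintain the first mode.
    return current_mode
```

So an upward input on the representation switches a system in "first-mode" to "second-mode", while a downward one leaves it unchanged.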
[0028] In some embodiments, a method that is performed at a computer system that is in communication with one or more display components and one or more input devices is described. In some embodiments, the method comprises: while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a second user interface different from the first user interface; and in accordance with a determination that the input is a second type of input different from the first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
[0029] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices is described. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a second user interface different from the first user interface; and in accordance with a determination that the input is a second type of input different from the first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
[0030] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices is described. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a second user interface different from the first user interface; and in accordance with a determination that the input is a second type of input different from the first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
[0031] In some embodiments, a computer system configured to communicate with one or more display components and one or more input devices is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a second user interface different from the first user interface; and in accordance with a determination that the input is a second type of input different from the first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
[0032] In some embodiments, a computer system configured to communicate with one or more display components and one or more input devices is described. In some embodiments, the computer system comprises means for performing each of the following steps: while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a second user interface different from the first user interface; and in accordance with a determination that the input is a second type of input different from the first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
[0033] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a second user interface different from the first user interface; and in accordance with a determination that the input is a second type of input different from the first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
[0034] In some embodiments, a method that is performed at a computer system that is in communication with one or more input devices and one or more display components is described.
In some embodiments, the method comprises: detecting, via the one or more input devices, a state of an environment; and in response to detecting the state of the environment: in accordance with a determination that the state of the environment satisfies a set of one or more criteria, displaying, via the one or more display components, a representation of a software object; and in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, forgoing display of the representation of the software object.
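The environment-state gating in paragraph [0034] can be sketched as an all-criteria check. The dictionary keys and the example criteria below (subject presence, light level) are purely hypothetical illustrations of what "a set of one or more criteria" might contain; the disclosure does not specify them.

```python
def should_display_representation(environment_state: dict, criteria) -> bool:
    """Display the representation of the software object only when the
    detected environment state satisfies every criterion in the set;
    otherwise forgo display (sketch of paragraph [0034])."""
    return all(criterion(environment_state) for criterion in criteria)

# Hypothetical example criteria: a subject is present and the room is lit.
example_criteria = [
    lambda env: env.get("subject_present", False),
    lambda env: env.get("light_level", 0) > 10,
]
```

A caller would re-evaluate this check whenever a new environment state is detected, showing or hiding the representation accordingly.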
[0035] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components is described. In some embodiments, the one or more programs include instructions for: detecting, via the one or more input devices, a state of an environment; and in response to detecting the state of the environment: in accordance with a determination that the state of the environment satisfies a set of one or more criteria, displaying, via the one or more display components, a representation of a software object; and in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, forgoing display of the representation of the software object.
[0036] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components is described. In some embodiments, the one or more programs include instructions for: detecting, via the one or more input devices, a state of an environment; and in response to detecting the state of the environment: in accordance with a determination that the state of the environment satisfies a set of one or more criteria, displaying, via the one or more display components, a representation of a software object; and in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, forgoing display of the representation of the software object.
[0037] In some embodiments, a computer system configured to communicate with one or more input devices and one or more display components is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs include instructions for: detecting, via the one or more input devices, a state of an environment; and in response to detecting the state of the environment: in accordance with a determination that the state of the environment satisfies a set of one or more criteria, displaying, via the one or more display components, a representation of a software object; and in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, forgoing display of the representation of the software object.
[0038] In some embodiments, a computer system configured to communicate with one or more input devices and one or more display components is described. In some embodiments, the computer system comprises means for performing each of the following steps: detecting, via the one or more input devices, a state of an environment; and in response to detecting the state of the environment: in accordance with a determination that the state of the environment satisfies a set of one or more criteria, displaying, via the one or more display components, a representation of a software object; and in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, forgoing display of the representation of the software object.
[0039] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components. In some embodiments, the one or more programs include instructions for: detecting, via the one or more input devices, a state of an environment; and in response to detecting the state of the environment: in accordance with a determination that the state of the environment satisfies a set of one or more criteria, displaying, via the one or more display components, a representation of a software object; and in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, forgoing display of the representation of the software object.
[0040] In some embodiments, a method that is performed at a computer system that is in communication with one or more input devices and one or more display components is described. In some embodiments, the method comprises: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
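The point-of-interest selection in paragraph [0040] can be sketched as scanning detected points for one that satisfies the criteria. The point structure and the face-based criterion below are hypothetical illustrations; the disclosure only requires that some set of one or more criteria selects among the detected points.

```python
def select_point_of_interest(points, satisfies_criteria):
    """Sketch of paragraph [0040]: on transitioning from an inactive mode to
    an active mode, return the first detected point of interest that
    satisfies the criteria; the caller would then direct the system (and
    the displayed representation of the software object) toward it."""
    for point in points:
        if satisfies_criteria(point):
            return point
    return None  # assumption: no qualifying point means no redirection
```

Whichever qualifying point is selected, the system is directed at it so that the displayed representation faces that point.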
[0041] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components is described. In some embodiments, the one or more programs include instructions for: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
[0042] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components is described. In some embodiments, the one or more programs include instructions for: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
[0043] In some embodiments, a computer system configured to communicate with one or more input devices and one or more display components is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs include instructions for: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
[0044] In some embodiments, a computer system configured to communicate with one or more input devices and one or more display components is described. In some embodiments, the computer system comprises means for performing each of the following steps: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
[0045] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components. In some embodiments, the one or more programs include instructions for: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
[0046] In some embodiments, a method that is performed at a computer system that is in communication with one or more input devices and one or more display components is described. In some embodiments, the method comprises: while displaying, via the one or more display components, (1) a representation of a software object in a first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: ceasing display of the content; and displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner.
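The attention rule of paragraph [0046] can be sketched as a pure function over a small display state. The state keys and the "first"/"second" manner labels are hypothetical placeholders, and reducing the first criteria set to a single attention-off-system flag is a simplification (the paragraph allows additional criteria in the set).

```python
def apply_attention_rule(attention_on_system: bool, state: dict) -> dict:
    """Sketch of paragraph [0046]: when the subject's attention is NOT
    directed to the computer system (satisfying the first criteria set),
    cease displaying the content and display the representation in a
    second manner; otherwise maintain the content and the first-manner
    representation."""
    if not attention_on_system:
        return dict(state, content_displayed=False,
                    representation_manner="second")
    return dict(state)  # attention on the system: maintain current display
```

Returning a new dictionary rather than mutating the input keeps the before/after states comparable, which mirrors the "maintaining display" branch of the determination.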
[0047] In some embodiments, a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components is described. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, (1) a representation of a software object in a first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: ceasing display of the content; and displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner.
[0048] In some embodiments, a transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components is described. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, (1) a representation of a software object in a first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: ceasing display of the content; and displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner.
[0049] In some embodiments, a computer system configured to communicate with one or more input devices and one or more display components is described. In some embodiments, the computer system comprises one or more processors and memory storing one or more programs configured to be executed by the one or more processors. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, (1) a representation of a software object in a first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: ceasing display of the content; and displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner.
[0050] In some embodiments, a computer system configured to communicate with one or more input devices and one or more display components is described. In some embodiments, the computer system comprises means for performing each of the following steps: while displaying, via the one or more display components, (1) a representation of a software object in a first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: ceasing display of the content; and displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner.
[0051] In some embodiments, a computer program product is described. In some embodiments, the computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components. In some embodiments, the one or more programs include instructions for: while displaying, via the one or more display components, (1) a representation of a software object in a first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: ceasing display of the content; and displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner.
DESCRIPTION OF THE FIGURES
[0052] For a better understanding of the various described embodiments, reference should be made to the Detailed Description below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
[0053] FIG. 1A is a block diagram illustrating a computer system in accordance with some embodiments.
[0054] FIGS. 1B-1G illustrate the use of Application Programming Interfaces (APIs) to perform operations in accordance with some embodiments.
[0055] FIGS. 2A-2C are diagrams illustrating exemplary components and user interfaces of an electronic device in accordance with some embodiments.
[0056] FIG. 3 is a block diagram illustrating exemplary components of a device in accordance with some embodiments.
[0057] FIG. 4 is a functional diagram of an exemplary actuator device in accordance with some embodiments.
[0058] FIG. 5 is a functional diagram of an exemplary software system in accordance with some embodiments.
[0059] FIGS. 6A-6E illustrate exemplary user interfaces for displaying representations of software objects in accordance with some embodiments.
[0060] FIG. 7 is a flow diagram illustrating a method for performing movement in conjunction with outputting a representation of a software object in accordance with some embodiments.
[0061] FIG. 8 is a flow diagram illustrating a method for displaying an indication of a respective application corresponding to a software object in accordance with some embodiments.
[0062] FIG. 9 is a flow diagram illustrating a method for displaying a representation of a software object for an application that corresponds to a request in accordance with some embodiments.
[0063] FIGS. 10A-10I illustrate exemplary user interfaces for directional inputs that change a mode of a device in accordance with some embodiments.
[0064] FIG. 11 is a flow diagram illustrating a process for managing modes of a computer system based on inputs at a representation of a system software object in accordance with some embodiments.
[0065] FIG. 12 is a flow diagram illustrating a process for managing user interfaces based on inputs at a representation of a system software object in accordance with some embodiments.
[0066] FIGS. 13A-13H illustrate exemplary user interfaces and environments for displaying a representation of a software object in accordance with some embodiments.
[0067] FIG. 14 is a flow diagram illustrating a process for managing display of a representation of a software object based on a state of an environment in accordance with some embodiments.
[0068] FIG. 15 is a flow diagram illustrating a process for directing a computer system based on points of interest in an environment in accordance with some embodiments.
[0069] FIG. 16 is a flow diagram illustrating a process for managing display of content and a representation of a software object based on attention of a subject in accordance with some embodiments.
DETAILED DESCRIPTION
[0070] The description herein sets forth exemplary processes, components, parameters, and the like. While specific examples are set out herein, such examples should not be understood as limiting the scope of the present disclosure to their explicit descriptions, but should instead be understood as providing illustrative examples.
[0071] One or more steps of the processes described herein can be predicated on, rely on, and/or be contingent on one or more conditions being satisfied. In some embodiments, a process is performed by iterating and/or repeating one or more processes multiple times. In some embodiments, contingent steps can be satisfied within and/or across different iterations of a single process and still be within the scope of the processes described herein. For example, for a given process that includes two or more steps that are contingent on different conditions, one of ordinary skill in the art would understand that the given process is considered to have been performed even when a process is repeated multiple times until and/or before the contingent steps are satisfied. In some embodiments, performance of multiple iterations of a process is not required to practice any one or more of the claims as presented herein. For example, system, electronic device, computer-readable medium, and/or method claims can be made, used, and/or performed without iteratively repeating a process. In some embodiments, the system, electronic device, computer-readable medium, and/or method claims include and/or utilize instructions for performing one or more steps that are contingent upon one or more conditions being satisfied. Because such instructions are stored in one or more processors and/or at one or more memory locations, the system, electronic device, computer-readable medium, and/or method claims can include and/or utilize logic that determines whether the one or more conditions have been satisfied without requiring the steps of a process to be repeated and/or iterated multiple times.
[0072] Although elements are described below using numerical descriptors, such as "a first" and/or "a second," these descriptors do not connote an order or distinct representations, and the elements should not be limited by the stated numerical term. In some embodiments, these terms are simply used as prefixes to distinguish a reference to one element from a reference to another element. For example, a "first" device and a "second" device can be two separate references to the same device. In contrast, for example, a "first" device and a "second" device can be references to two different devices (e.g., not the same device and/or not the same type of device). For example, a first computer system and a second computer system do not correspond to a first and a second in time, and are merely used to distinguish between two computer systems. As such, the first computer system can be termed a second computer system, and the second computer system can be termed a first computer system without departing from the scope of the various described embodiments.
[0073] In describing various elements and examples, certain terminology is used to provide productive descriptions of the subject matter below and should not be
read as limiting. As used to describe various examples herein, the singular forms of "a," "an," and "the" should not be interpreted as precluding or excluding the plural forms as well, unless the context clearly indicates otherwise. As well, "and/or" is used to encompass any and all possible combinations of one or more associated listed items. For example, "x and/or y" should be interpreted as including "x," or "y," as well as "x and y" as possible permutations. Further, the terms "includes," "including," "comprises," and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0074] When describing choices and/or logical possibilities, the term "if" is, optionally, construed to mean "when," "upon," "in response to determining," "in response to detecting," or "in accordance with a determination that," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is, optionally, construed to mean "upon determining," "in response to determining," "upon detecting [the stated condition or event]," "in response to detecting [the stated condition or event]," or "in accordance with a determination that [the stated condition or event]," depending on the context.
[0075] The processes described below enhance the operability of the devices and make the user-device interactions and/or user-device interfaces more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) through various techniques, including by providing improved feedback (e.g.,
visual, haptic, acoustic, and/or tactile feedback) to the user, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, performing an operation when a set of conditions has been met without requiring further input (e.g., input by a user), and/or additional techniques, such as increasing the security and/or privacy of the computer system and reducing burn-in of one or more portions of a user interface of a display. These techniques also reduce power usage and improve battery life of the device by enabling the user to use the device more quickly and efficiently.
[0076] Below, FIGS. 1, 2A-2C, and 3-5 provide a description of exemplary devices for performing the techniques for displaying a representation of a software object and responding to directional inputs. FIGS. 6A-6E illustrate exemplary user interfaces for displaying representations of software objects. FIGS. 6A-6E illustrate exemplary user interfaces for performing movement in conjunction with outputting a representation of a software object. The user interfaces in FIGS. 6A-6E are used to illustrate the processes described below, including the processes in FIG. 7. FIGS. 6A-6E illustrate exemplary user interfaces for displaying an indication of an application corresponding to a software object. The user interfaces in FIGS. 6A-6E are used to illustrate the processes described below, including the processes in FIG. 8. FIGS. 6A-6E illustrate exemplary user interfaces for displaying a representation of a software object for an application that corresponds to a request. The user interfaces in FIGS. 6A-6E are used to illustrate the processes described below, including the processes in FIG. 9. FIGS. 10A-10I illustrate exemplary user interfaces for directional inputs that change a mode of a device in
accordance with some embodiments. FIG. 11 is a flow diagram illustrating a process for managing modes of a computer system based on inputs at a representation of a system software object in accordance with some embodiments. FIGS. 10A-10I are used to illustrate the processes described below, including the processes in FIG. 11. FIG. 12 is a flow diagram illustrating a process for managing user interfaces based on inputs at a representation of a system software object in accordance with some embodiments. FIGS. 10A-10I are used to illustrate the processes described below, including the processes in FIG. 12. FIGS. 13A-13H illustrate exemplary user interfaces and environments for displaying a representation of a software object in accordance with some embodiments. FIG. 14 is a flow diagram illustrating a process for managing display of a representation of a software object based on a state of an environment in accordance with some embodiments. FIGS. 13A-13H are used to illustrate the processes described below, including the processes in FIG. 14. FIG. 15 is a flow diagram illustrating a process for directing a computer system based on points of interest in an environment in accordance with some embodiments. FIGS. 13A-13H are used to illustrate the processes described below, including the processes in FIG. 15. FIG. 16 is a flow diagram illustrating a process for managing display of content and a representation of a software object based on attention of a subject in accordance with some embodiments. FIGS. 13A-13H are used to illustrate the processes described below, including the processes in FIG. 16.
[0077] FIG. 1A depicts a block diagram of computer system 100 (e.g., electronic device and/or electronic system), which includes a set of electronic components in communication with (e.g., connected to and/or in wired or wireless communication with) each other. It should be understood that computer system 100 is merely an example of a computer system that can be used
to perform one or more functions described herein. It should be understood that one or more other computer systems can be used to perform one or more functions described herein. Additionally, while FIG. 1A depicts a computer architecture of computer system 100, other computer architectures (e.g., including fewer components, more components, and/or similar components) of a computer system can be used to perform one or more functions described herein.
[0078] In some embodiments, computer system 100 can correspond to (e.g., be and/or include) a system on a chip, a personal computer system, a desktop computer, a laptop computer, a tablet, a smart phone, a wearable device, a fitness tracking device, a smart watch, a head-mounted display ("HMD") device, a communal device (e.g., thermostat, smart speaker, and/or additional home-based computer systems), an accessory (e.g., light, switch, air conditioner, heater, fan, window cover, lock, media playback device, speaker, gaming console, and/or television), a controller, a hub, a server system, and/or a sensor.
[0079] In some embodiments, a sensor is and/or includes one or more hardware components having an ability to perform one or more operations for detecting (e.g., sensing, generating, and/or processing) states, conditions, and/or information about a physical environment in a vicinity of the sensor. For example, a sensor can be configured to detect conditions surrounding the sensor, detect conditions in one or more directions casting away from the sensor, and/or detect conditions based on contact of the sensor with a feature of the physical environment. In some embodiments, a hardware component of a sensor includes a sensing component (e.g., a temperature and/or image sensor), a receiving component (e.g., a radio and/or laser receiver), and/or a transmitting component (e.g., a radio and/or laser transmitter). In some
embodiments, a sensor includes an angle sensor, a force sensor, a contact sensor, a non-contact sensor, a breakage sensor, a flow sensor, a gas sensor, a chemical sensor, a particle sensor, a humidity sensor, a moisture sensor, a precipitation sensor, an image sensor (e.g., an RGB camera and/or an infrared sensor), a photoelectric sensor (e.g., ambient light and/or solar light), a position sensor (e.g., a global positioning system), a pressure sensor, a temperature sensor, a proximity sensor, a radiation sensor, an inertial measurement unit ("IMU"), an accelerometer, a compass, a leak sensor, a level sensor, a metal sensor, a microphone, a motion sensor, a range sensor, a depth sensor (e.g., RADAR and/or LiDAR), a speed sensor, a time-of-flight sensor, a torque sensor, an ultrasonic sensor, a vacancy sensor, a presence sensor, a voltage sensor, a current sensor, a resistivity sensor, a conductivity sensor, a capacitive sensor, and/or a water sensor. While only a single computer system is depicted in FIG. 1A, functionality described herein can be implemented with a single computer system or with two or more computer systems operating together. In some embodiments, computer system 100 includes one or more sensors, as described herein. In some embodiments, information about the physical environment is captured by combining data from one sensor with data from one or more additional sensors (e.g., one or more additional sensors that are part of the computer system and/or one or more additional computer systems).
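The paragraph above describes combining data from one sensor with data from additional sensors. One common way to realize such combination is inverse-variance weighting, sketched below; the class and field names are illustrative assumptions and not part of the disclosure.

```python
# Minimal sketch of combining readings from multiple sensors observing the
# same quantity. The SensorReading type and fuse_readings function are
# hypothetical names chosen for illustration.
from dataclasses import dataclass

@dataclass
class SensorReading:
    """A single reading plus a variance describing the sensor's noise."""
    value: float
    variance: float

def fuse_readings(readings: list[SensorReading]) -> float:
    """Combine readings with inverse-variance weighting: more precise
    sensors (lower variance) contribute more to the fused estimate."""
    weights = [1.0 / r.variance for r in readings]
    total = sum(w * r.value for w, r in zip(weights, readings))
    return total / sum(weights)

# Two temperature sensors observing the same environment; the fused
# estimate lies closer to the more precise (lower-variance) sensor.
fused = fuse_readings([SensorReading(20.0, 1.0), SensorReading(22.0, 4.0)])
print(round(fused, 2))
```

In this weighting scheme, a noisy auxiliary sensor refines, rather than overrides, a precise primary sensor, which matches the idea of one sensor's data being combined with data from additional sensors or computer systems.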
[0080] As illustrated in FIG. 1A, computer system 100 includes processor subsystem 110, memory 120, and I/O interface 130. In some embodiments, memory 120 is a system memory in communication with processor subsystem 110. In some embodiments, one or more electronic components of computer system 100 are electrically connected to each other via interconnect 150, which allows communication between the
electronic components of computer system 100. For example, interconnect 150 can be a system bus, one or more memory locations, and/or additional electrical channels for connecting multiple components of computer system 100. In some embodiments, I/O interface 130 is connected, via a wired and/or wireless connection, to I/O device 140. In some embodiments, computer system 100 includes a single component that includes and/or encompasses both I/O interface 130 and I/O device 140, such that the functionality of I/O interface 130 and I/O device 140 is provided by the single component. It should be understood that computer system 100 can include one or more I/O interfaces communicating with one or more I/O devices. In some embodiments, computer system 100 includes multiple processor subsystems (e.g., processor subsystem 110), each electrically connected to each other and/or one or more other components of computer system 100 via interconnect 150.
[0081] In some embodiments, processor subsystem 110 is and/or includes one or more processors and/or individual processing units that have a capability for executing instructions (e.g., programs, systems, and/or interrupts) to perform one or more of the functions described herein. For example, operating-system-level and/or application-level instructions are executed by processor subsystem 110. In some embodiments, processor subsystem 110 includes one or more components (e.g., implemented as software, hardware, and/or a combination thereof) capable of performing, supporting, and/or interpreting machine-learning instructions and/or operations. For example, computer system 100 can perform operations according to a machine-learning model that is local to computer system 100. In some embodiments, computer system 100 communicates with (e.g., executes instructions corresponding to and/or performs calculations on) a remote interactive knowledge base (e.g., a processing resource that implements an
artificial intelligence model, a machine-learning model, and/or a large language model) to perform operations that are otherwise outside a set of capabilities of computer system 100. For example, computer system 100 can determine a set of inputs (e.g., data, instructions, and/or parameters) to the interactive knowledge base for performing requested machine-learning operations.
[0082] In some embodiments, memory 120 is in communication with processor subsystem 110 and can be implemented by a variety of different physical, non-transitory memory media. In some embodiments, computer system 100 includes multiple memory components and/or multiple types of memory components. In some embodiments, each of the multiple memory components is connected to processor subsystem 110 directly and/or via interconnect 150. For example, memory 120 can be implemented using a storage array, a removable flash drive, a storage area network (SAN), hard disk storage, flash memory, removable disk storage, optical drive storage, floppy disk storage, random access memory (e.g., SDRAM, DDR SDRAM, EDO RAM, RAMBUS RAM, and/or SRAM), and/or read-only memory (e.g., EEPROM and/or PROM). In some embodiments, processor subsystem 110 and/or interconnect 150 is connected to one or more memory controllers that are electrically connected to memory 120.
[0083] In some embodiments, one or more instructions are executed by processor subsystem 110. For example, memory 120 can include a computer-readable medium (e.g., transitory or non-transitory computer-readable medium) usable to store (e.g., configured to store, assigned to store, and/or storing) one or more instructions to be executed by processor subsystem 110. In some embodiments, each instruction stored by memory 120 and executed by processor subsystem 110 corresponds to an operation for completing one or more of the functions
described herein. For example, memory 120 can store program instructions to implement the functionality associated with processes 700, 800, 900, 1100, 1200, 1400, 1500, and 1600 (FIGS. 7, 8, 9, 11, 12, 14, 15, and 16) described herein.
[0084] In some embodiments, I/O interface 130 enables computer system 100 to communicate with other devices. In some embodiments, I/O interface 130 is and/or includes one or more interfaces of one or more types. In some embodiments, I/O interface 130 is and/or includes a bridge chip (e.g., Southbridge) from a front-side bus to one or more back-side buses. In some embodiments, I/O interface 130 enables communication with I/O device 140 and/or one or more other I/O devices, via one or more corresponding buses and/or other interfaces. For example, an I/O device can include one or more: physical user-interface devices (e.g., a mouse, a physical keyboard, and/or a joystick), network interface devices (e.g., to a local or wide-area network), storage devices (e.g., as described herein with respect to memory 120), auditory and/or visual output devices (e.g., light, screen, speaker, and/or projector), and/or sensor devices (e.g., as described herein with respect to sensors). In some embodiments, the visual output device is referred to as a display component. For example, the display component can be configured to provide visual output, such as displaying images on a physically viewable medium, such as via an LED display or image projection. As used herein, "displaying" content includes causing a component to display the content (e.g., video data decoded and/or rendered by a display controller) by transmitting, via a wired or wireless connection, data (e.g., image data and/or video data) to an integrated and/or external display component to visually produce the content.
[0085] In some embodiments, computer system 100 includes a component that integrates I/O device 140 with one or more
other components, such as I/O interface 130. In some embodiments, I/O device 140 is separate from one or more other components of computer system 100 (e.g., is a discrete component). In some embodiments, I/O device 140 includes a network interface device that permits computer system 100 to connect to (e.g., communicate with) a network or one or more other computer systems, such as via a wired or wireless connection. In some embodiments, a network interface device can include Wi-Fi, Ethernet, USB, Thunderbolt, Bluetooth, NFC, and so forth. For example, computer system 100 can utilize an NFC connection to facilitate a bank, credit, financial, token (e.g., fungible or non-fungible token), and/or cryptocurrency transaction between computer system 100 and another computer system within a vicinity of computer system 100.
[0086] In some embodiments, I/O device 140 is and/or includes one or more components for detecting a user (e.g., a subject, a person, an animal, an object, and/or a computer system different from the computer system) and/or an input (e.g., a tap input, a non-tap input, a swipe input, a hold-and-drag input, a verbal input, an acoustic command, an acoustic statement, an acoustic request, a gaze input, an air gesture, and/or a mouse click) from a user. In some embodiments, I/O device 140 enables computer system 100 to identify users associated with an account and/or users not associated with any account. For example, computer system 100 can detect a known user (e.g., a user associated with an account) and access information about the user using the user's account. In some embodiments, as part of computer system 100 detecting a user, computer system 100 detects that the user's account is associated with (e.g., is identified with respect to and/or included in) a group of accounts (and/or associated users). For example, computer system 100 can access information associated with a group of accounts in response to detecting a
member of the group. In some embodiments, an account corresponding to a user can be associated with one or more additional accounts and/or one or more additional computer systems. For example, computer system 100 can detect such one or more additional computer systems and/or detect one or more computer systems for detecting the user. In some embodiments, computer system 100 detects one or more unknown users (e.g., users not associated with any account) and enables one or more guest accounts for the one or more unknown users to utilize computer system 100, for example with limited access.
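The account-resolution behavior described above — a detected user either maps to a known account (optionally belonging to a group) or falls back to a limited guest account — can be sketched as follows. The account store, field names, and access levels are assumptions for illustration only.

```python
# Hypothetical account lookup with a guest-account fallback, mirroring the
# known-user / unknown-user handling described in the paragraph above.
KNOWN_ACCOUNTS = {
    "alice": {"group": "family", "access": "full"},
}

def resolve_account(user_id: str) -> dict:
    """Return the account for a known user, or a limited guest account
    for a user not associated with any account."""
    account = KNOWN_ACCOUNTS.get(user_id)
    if account is not None:
        # Known user: the account may also identify a group of accounts.
        return account
    # Unknown user: enable a guest account with limited access.
    return {"group": None, "access": "limited"}

print(resolve_account("alice")["access"])     # full
print(resolve_account("stranger")["access"])  # limited
```

A real system would of course authenticate the detected user before granting account access; this sketch only shows the known/guest branching.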
[0087] In some embodiments, I/O device 140 is and/or includes one or more cameras. In some embodiments, a camera is and/or includes one or more image sensors (e.g., one or more optical sensors and/or one or more depth camera sensors) that provide computer system 100 with the ability to detect a user and/or a user's expressions, movements, and/or gestures (e.g., hand gestures and/or air gestures) as inputs. In some embodiments, an air gesture is a gesture that is detected without requiring the user to touch an input element that is part of computer system 100 (or independently of an input element that is a part of computer system 100). For example, an air gesture can be based on detected motion of a portion of the user's body through the air, including motion of the user's body relative to an absolute reference (e.g., a distance of the user's hand relative to the ground and/or an angle of the user's arm relative to the ground), relative to another portion of the user's body (e.g., movement of one hand of the user relative to another hand of the user, movement of a hand of the user relative to a shoulder of the user, and/or movement of a finger of the user relative to another finger or portion of a hand of the user), and/or absolute motion of a portion of the user's body (e.g., a shake gesture that includes a predetermined speed or amount of rotation of a portion of the
user's body and/or a tap gesture that includes movement of a hand in a predetermined pose by a predetermined amount and/or speed). In some embodiments, the one or more cameras enable computer system 100 to transmit image, pictorial, and/or video information to an application. For example, computer system 100 can complete a video phone call by transmitting image and/or video data captured by a camera to an application that manages the video phone call.
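The air-gesture examples above (a shake recognized by a predetermined rotation speed, a tap recognized by movement of a hand in a predetermined pose by a predetermined amount) can be sketched as a toy classifier. The thresholds, pose name, and function signature are all illustrative assumptions, not the disclosed recognition method.

```python
# Toy air-gesture classifier mirroring the criteria in the paragraph above.
# Threshold values and the "pinch" pose name are invented for illustration.
SHAKE_SPEED_DEG_PER_S = 180.0  # predetermined rotation speed for a shake
TAP_DISTANCE_CM = 2.0          # predetermined movement amount for a tap

def classify_air_gesture(pose: str, rotation_speed: float, movement_cm: float) -> str:
    """Classify a detected hand motion as a shake, a tap, or no gesture."""
    if rotation_speed >= SHAKE_SPEED_DEG_PER_S:
        # Shake: rotation of a portion of the body at a predetermined speed.
        return "shake"
    if pose == "pinch" and movement_cm >= TAP_DISTANCE_CM:
        # Tap: a hand in a predetermined pose moving a predetermined amount.
        return "tap"
    return "none"

print(classify_air_gesture("pinch", 10.0, 3.0))   # tap
print(classify_air_gesture("open", 200.0, 0.0))   # shake
```

Real gesture recognition would operate on tracked skeletal data over time rather than summary scalars; the sketch only shows the threshold-based branching implied by the text.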
[0088] In some embodiments, I/O device 140 is and/or includes one or more microphones. For example, computer system 100 can use a microphone to obtain data and/or information from a user without requiring a contact input. In some embodiments, a microphone enables computer system 100 to detect speech and/or verbal input from a user. In some embodiments, computer system 100 utilizes speech input to manage one or more functions of a personal assistant. For example, a user can make a verbal request for computer system 100 to perform an action and/or obtain information for the user. In some embodiments, computer system 100 utilizes speech input (e.g., along with one or more other input techniques and/or output techniques) to request and/or detect information from a user without requiring the user to make physical contact with computer system 100.
[0089] In some embodiments, I/O device 140 is and/or includes one or more physical input mediums for a user to interact directly with computer system 100. In some embodiments, a physical input medium includes one or more physical buttons (e.g., one or more touch sensitive non-depressible components and/or one or more tactile depressible buttons) , a touch sensitive display component, a mouse, and/or a keyboard. In some embodiments, one or more physical input mediums are part of computer system 100. In some embodiments, one or more physical input mediums are connected to computer system 100
(e.g., together and/or separately), such as via one or more
I/O interfaces.
[0090] In some embodiments, I/O device 140 is and/or includes one or more display components for outputting information (e.g., a display component, a display screen, a projector, a touch-sensitive display, an audio generation component, a speaker, and/or a haptic output device) . In some embodiments, computer system 100 operates I/O device 140 to convey information and/or a state of computer system 100. In some embodiments, I/O device 140 is and/or includes a tactile output device. For example, a tactile output device can be a haptic component that enables computer system 100 to convey information to a user in contact with (e.g., touching, holding, and/or close to) computer system 100. In some embodiments, I/O device 140 includes one or more components for outputting visual outputs (e.g., an image, video, animation, augmented reality overlay, 3D rendering, motion graphics, data visualization, and/or digital art) . For example, computer system 100 can display content from one or more applications and/or system processes. For another example, computer system 100 can display a widget (e.g., a control that displays real-time information and/or data) corresponding to one or more applications.
[0091] In some embodiments, I/O device 140 is and/or includes one or more audio components for outputting audio (e.g., speakers, smart speakers, television speakers, home theater systems, soundbars, earphones, earbuds, headphones, augmented reality headset speakers, audio jacks, optical audio outputs, Bluetooth audio outputs, HDMI audio outputs, and/or audio sensors) . In some embodiments, computer system 100 is able to output audio through I/O device 140. For example, computer system 100 can output audio-based content and/or information to a user via I/O device 140. In some embodiments, I/O device
140 enables spatial audio (e.g., an audio output corresponding to an environment). For example, computer system 100 can detect materials and/or objects within an environment and/or alter the audio pattern, intensity, and/or waveform to compensate for varying characteristics of the environment.
[0092] Referring now to FIGS. 1B-1G, implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (and/or multiple tangible computer-readable storage media of one or more types) encoding one or more computer- readable instructions. It should be recognized that computer- readable instructions can be organized in any format, including applications, processes, widgets, software, and/or hardware components.
[0093] Implementations within the scope of the present disclosure include a computer-readable storage medium that encodes instructions organized as an application (e.g., application 170 of FIG. 1D) that, when executed by one or more processing units, control an electronic device (e.g., device 168 of FIG. 1D) to perform the process of FIG. 1B, the process of FIG. 1C, and/or one or more other processes and/or methods described herein.
[0094] It should be recognized that application 170 can be an application of any suitable type including, for example, one or more of a browser application, a media application, a messaging application, a health application, a fitness application, a digital payments application, a social network application, a maps application, a communication application, and/or an application that functions as an execution environment for plug-ins, widgets, or other applications. In some embodiments, application 170 is an application that is pre-installed on device 168 prior to or upon purchase (e.g., a
first-party application). In some embodiments, application 170 is an application that is provided to device 168 via an operating system update file (e.g., a first-party application or a second-party application). In some embodiments, application 170 is an application that is provided via an application store. In some embodiments, the application store can be pre-installed on device 168 prior to or upon purchase (e.g., a first-party application store). In some embodiments, the application store is a third-party application store (e.g., an application store that is read from a storage device, downloaded via a network, and/or provided by another application store).
[0095] Referring now to FIG. 1B and FIG. 1F, at block 160, an application (e.g., application 170 of FIG. 1D) obtains information. In some embodiments, the information is obtained from one or more hardware components of device 168 (shown in FIG. 1D). In some embodiments, the information is obtained from one or more software modules of device 168. In some embodiments, the information is obtained from one or more hardware components external to device 168 (e.g., a server, a peripheral device, and/or an accessory device). In some embodiments, the information includes user information, environment information, hardware information, historical information, event information, positional information, time information, notification information, electronic device state information, weather information, media information, and/or motion information. In some embodiments, at block 162, in response to and/or after obtaining the information at block 160, the application provides the information to a system (e.g., system 180 of FIG. 1E).
[0096] Referring now to FIG. 1C and FIG. 1G, at block 164, an application (e.g., application 170 of FIG. 1D) obtains information. In some embodiments, the information includes
user information, environment information, hardware information, historical information, event information, positional information, time information, notification information, electronic device state information, weather information, media information, and/or motion information. In some embodiments, at block 166, in response to and/or after obtaining the information, the application performs one or more operations with the information. In some embodiments, the one or more operations include displaying the information, providing a notification based on the information, sending a message based on the information, setting a reminder based on the information, controlling a user interface of a fitness application based on the information, controlling a user interface of a health application based on the information, controlling a focus mode based on the information, adding a calendar entry based on the information, and/or calling an API of a system (e.g., system 180 of FIG. 1E) based on the information.
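The two flows above — obtain information and hand it to the system (blocks 160 and 162), or obtain information and act on it directly (blocks 164 and 166) — can be sketched as follows. The dictionary shape, function names, and "reminder" operation are assumptions chosen for illustration.

```python
# Minimal sketch of the application-side flows of FIGS. 1B and 1C.
def obtain_information() -> dict:
    """Blocks 160/164: obtain information (in practice, from hardware
    components, software modules, or a source external to the device)."""
    return {"kind": "event", "payload": "meeting at 3pm"}

def provide_to_system(info: dict, system_inbox: list) -> None:
    """Block 162: provide the obtained information to the system.
    Here the system is modeled as a simple inbox list."""
    system_inbox.append(info)

def perform_operation(info: dict) -> str:
    """Block 166: perform an operation with the information, e.g. set a
    reminder based on it (one of the operations listed above)."""
    return f"reminder set: {info['payload']}"

inbox: list = []
info = obtain_information()
provide_to_system(info, inbox)           # FIG. 1B path
print(perform_operation(info))           # FIG. 1C path: reminder set: meeting at 3pm
```

The two paths are not exclusive: as the text notes, one of the operations an application may perform with the information is itself a call into the system's API.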
[0097] In some embodiments, one or more steps of the process of FIG. 1B and/or the process of FIG. 1C are performed in response to a trigger. In some embodiments, the trigger includes a user input, a notification received from system 180, a detection of an event, and/or a response to a call to an API provided by system 180.
[0098] In some embodiments, the instructions of application 170, when executed, control device 168 to perform the process of FIG. 1B and/or the process of FIG. 1C by calling an application programming interface ("API"), such as API 176 provided by system 180. It should be recognized that calling API 176 is optional. In some embodiments, application 170 performs at least a portion of the process of FIG. 1B and/or the process of FIG. 1C without calling API 176.
[0099] In some embodiments, one or more steps of the process of FIG. 1B and/or the process of FIG. 1C include calling API 176 using one or more parameters defined by API 176. In some embodiments, the one or more parameters include a data structure, an object, an object class, a key, a constant, a variable, a data type, a pointer, an array, a list, a pointer to a function and/or a method, and/or another way to reference data or another item to be passed via API 176.
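An API call carrying parameters of the kinds listed above (keys, constants, variables, and a reference to a function) might look like the sketch below. The API surface here is invented for illustration and is not the actual interface of API 176.

```python
# Hypothetical stand-in for calling an API with named parameters, including
# a callback passed as a reference to a function.
def api_call(method: str, **params):
    """Dispatch a named API method with keyword parameters."""
    handlers = {
        # A toy "set_reminder" method: formats the reminder, then notifies
        # the caller through the supplied callback, if any.
        "set_reminder": lambda text, when, on_done=None: (
            on_done(f"{text} @ {when}") if on_done else f"{text} @ {when}"
        ),
    }
    return handlers[method](**params)

# Parameters include plain values ("text", "when") and a function reference:
result = api_call("set_reminder", text="stand up", when="14:00")
print(result)  # stand up @ 14:00
echoed = api_call("set_reminder", text="stretch", when="15:00",
                  on_done=lambda summary: f"scheduled: {summary}")
print(echoed)  # scheduled: stretch @ 15:00
```

In a real system the parameter types would be defined by the API's declaration (data structures, object classes, and so on) rather than by free-form keyword arguments.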
[0100] Referring to FIG. 1D, device 168 is illustrated. In some embodiments, device 168 is a personal computing device, a smart phone, a tablet, a smart watch, a fitness tracker, a head-mounted display ("HMD") device, a media device, a speaker, a communal device, and/or a television. As illustrated in FIG. 1D, device 168 includes application 170. Application 170 includes application implementation module 172 and API-calling module 174. It should be recognized that device 168 and/or application 170 can include more, fewer, and/or different components than those illustrated in FIG. 1D.
[0101] Referring to FIG. 1E, system 180 is illustrated. In some embodiments, system 180 is an operating system hosted on device 168. As illustrated in FIG. 1E, system 180 (e.g., an operating system of device 168) includes API 176 and implementation module 178. It should be recognized that system 180 can include more, fewer, and/or different components than those illustrated in FIG. 1E.
[0102] In some embodiments, application implementation module 172 includes a set of one or more instructions corresponding to one or more operations performed by application 170. For example, when application 170 is a messaging application, application implementation module 172 can include instructions to receive and/or send messages. For another example, when application 170 is a calendar application, application
implementation module 172 can include instructions to create, edit, track, and/or view calendar appointments. In some embodiments, application implementation module 172 communicates with API-calling module 174 to communicate with system 180 via API 176.
[0103] In some embodiments, API 176 is a software module (e.g., a collection of computer-readable instructions) that provides an interface for another module (e.g., API-calling module 174) to access and/or use one or more functions, procedures, data structures, methods, classes, and/or other services provided by implementation module 178 of system 180. For example, API-calling module 174 can access a feature of implementation module 178 through one or more API calls and/or invocations (e.g., embodied by a method call and/or a function) exposed by API 176 (e.g., a software and/or hardware module that can receive API calls, respond to API calls, and/or send API calls) . For another example, API-calling module 174 can pass data and/or control information using one or more parameters via the API calls or invocations. In some embodiments, API 176 allows application 170 to use a service provided by a Software Development Kit ("SDK") library. In some embodiments, application 170 incorporates a call to a function and/or a method provided by the SDK library and/or provided by API 176. In some embodiments, application 170 uses data types or objects defined in the SDK library and provided by API 176. In some embodiments, API-calling module 174 makes an API call via API 176 to access and use a feature of implementation module 178 that is specified by API 176. In such embodiments, implementation module 178 can return a value via API 176 to API-calling module 174 in response to the API call. The value can report to application 170 the capabilities or state of a hardware component of device 168, including those related to aspects such as processing capability, output
capabilities and state, input capabilities and state, storage capacity and/or state, power state, and/or communications capability. In some embodiments, API 176 is implemented in part by firmware, microcode, and/or other low-level logic that executes, at least in part, on the hardware component.
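The relationship between API-calling module 174, API 176, and implementation module 178 described above can be sketched in code. The following Python sketch is illustrative only: the class names, the `get_power_state` call, and the battery values are hypothetical and do not correspond to any actual platform API.

```python
# Hypothetical sketch of an API layer (API 176) mediating between an
# API-calling module (174) and an implementation module (178).

class ImplementationModule:
    """Stands in for implementation module 178: owns the real logic."""
    def battery_state(self) -> dict:
        # A real system would query hardware here; this is a stub.
        return {"level": 0.82, "charging": False}

class API:
    """Stands in for API 176: the only surface the caller sees."""
    def __init__(self, impl: ImplementationModule):
        self._impl = impl

    def get_power_state(self) -> dict:
        # The API forwards the call and returns a value to the caller,
        # reporting device state without exposing how it was obtained.
        return self._impl.battery_state()

# The API-calling module's view: it only ever touches the API object.
api = API(ImplementationModule())
state = api.get_power_state()
assert state["level"] == 0.82
```

The calling module never references `ImplementationModule` directly, which mirrors the point that the API call reports capabilities or state without revealing the underlying mechanism.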
[0104] In some embodiments, API 176 allows a developer of API- calling module 174 (e.g., a third-party developer) to leverage one or more features provided by implementation module 178. In such embodiments, there can be one or more API-calling modules, such as API-calling module 174, that communicate with implementation module 178. In some embodiments, API 176 allows multiple API-calling modules written in different programming languages to communicate with implementation module 178 while API 176 is implemented in terms of a specific programming language. For example, API 176 can include features for translating calls and returns between implementation module 178 and API-calling module 174. In some embodiments, API- calling module 174 calls APIs from different providers such as a set of APIs from a plug-in provider, a set of APIs from an operating system provider, and/or a set of APIs from another provider (e.g., the provider of a software library) and/or creator .
[0105] Examples of API 176 include a device detection API (e.g., for locating nearby electronic devices, media devices, and/or smartphones) , a pairing API (e.g., for establishing a secure connection with an accessory) , a UIKit API (e.g., for generating user interfaces) , an application store API, a payment API, a push notification API, a networking API, a WiFi API, a Bluetooth API, an NFC API, a UWB API, a location detection API, a locator API, a maps API, a health sensor API, a sensor API, a messaging API, a streaming API, a collaboration API, a video conferencing API, an advertising services API, a web browser API (e.g., WebKit API) , a vehicle
API, a fitness API, a smart home API, a contact transfer API, a photos API, a camera API, and/or an image processing API. In some embodiments, a sensor API is and/or includes an API for accessing data associated with a sensor of device 168. For example, the sensor API can provide access to raw sensor data. For another example, the sensor API can provide data derived (and/or generated) from the raw sensor data. In some embodiments, the sensor data is and/or includes image data, video data, temperature data, audio data, heart rate data, inertial measurement unit ("IMU") data, location data, GPS data, LiDAR data, and/or camera data. In some embodiments, the sensor is and/or includes one or more of an optical sensor, accelerometer, gyroscope, proximity sensor, temperature sensor, infrared sensor, heartrate sensor, barometer, and/or biometric sensor.
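The distinction drawn above between raw sensor data and data derived from it can be made concrete with a small sketch. The function names and the stand-in heart-rate samples below are invented for illustration and are not part of any real sensor API.

```python
# Illustrative sketch of a "sensor API" exposing both raw sensor data and
# data derived (and/or generated) from it. All names and values are
# hypothetical.

RAW_SAMPLES = [72, 74, 71, 73]  # stand-in for raw heart-rate readings

def raw_sensor_data() -> list:
    """Raw-data entry point: return the samples as captured."""
    return list(RAW_SAMPLES)

def derived_sensor_data() -> float:
    """Derived-data entry point: a value computed from the raw samples."""
    samples = raw_sensor_data()
    return sum(samples) / len(samples)
```

A caller that only needs the derived value never handles the raw samples, which is one reason an API might expose both forms separately.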
[0106] In some embodiments, implementation module 178 is a system software module that is configured to perform one or more operations in response to receiving an API call via API 176. In some embodiments, the system is an operating system and/or server system. In some embodiments, the software module is a collection of computer-readable instructions. In some embodiments, implementation module 178 is configured to provide an API response (e.g., via API 176) as a result of processing an API call. By way of example, implementation module 178 and API-calling module 174 can each be any one of an operating system, a device driver, a library, an API, an application program, and/or another module. It should be understood that implementation module 178 and API-calling module 174 can be the same type of module as each other or different types of modules from each other. In some embodiments, implementation module 178 is embodied at least in part in firmware, microcode, and/or hardware logic.
[0107] In some embodiments, implementation module 178 returns a value through API 176 in response to an API call from API-calling module 174. While API 176 defines the syntax and result of an API call (e.g., how to invoke the API call and what the API call does) , API 176 need not indicate how implementation module 178 accomplishes the function specified by the API call. In some embodiments, various API calls are transferred via the one or more application programming interfaces between API-calling module 174 and implementation module 178. Transferring the API calls can include initiating, invoking, issuing, receiving, calling, returning, and/or responding to the function calls and/or messages. In other words, transferring can describe actions by either of API-calling module 174 or implementation module 178. In some embodiments, a function call and/or other invocation of API 176 sends and/or receives one or more parameters through a parameter list and/or other structure.
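The point that an API defines the syntax and result of a call without specifying how the implementation accomplishes it can be sketched as an abstract interface with interchangeable implementations. The `SumAPI` contract and both implementations below are invented for illustration.

```python
# Sketch: the API fixes the call signature and its result, while leaving
# the implementation strategy unspecified. Names are hypothetical.
from abc import ABC, abstractmethod

class SumAPI(ABC):
    """The API contract: how to invoke the call and what it returns."""
    @abstractmethod
    def sum_to(self, n: int) -> int:
        """Return the sum of the integers 1 through n."""

class LoopImpl(SumAPI):
    # One implementation module: iterate and accumulate.
    def sum_to(self, n: int) -> int:
        return sum(range(n + 1))

class FormulaImpl(SumAPI):
    # A different implementation module: closed-form arithmetic.
    def sum_to(self, n: int) -> int:
        return n * (n + 1) // 2

# An API-calling module depends only on the declared call; either
# implementation satisfies it, and the parameter n is passed through
# the call's parameter list.
for impl in (LoopImpl(), FormulaImpl()):
    assert impl.sum_to(10) == 55
```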
[0108] In some embodiments, implementation module 178 provides multiple APIs. In such embodiments, each API provides a different view of functionality implemented by implementation module 178 and/or with different aspects of functionality implemented by implementation module 178. For example, one API of implementation module 178 can provide a first set of functions and can be exposed to third-party developers. In this example, another API of implementation module 178 can be hidden (e.g., not exposed) and provide a subset of the first set of functions as well as a second set of functions, such as testing and/or debugging functions, which are not in the first set of functions. In some embodiments, implementation module 178 calls one or more other components via an underlying API. In such embodiments, implementation module 178 is both an API-calling module and an implementation module. It should be recognized that implementation module 178 can include
additional functions, procedures, data structures, methods, classes, and/or other services that are not necessarily specified through API 176 and are not available to API-calling module 174. It should also be recognized that API-calling module 174 can be on the same system as implementation module 178 and/or can be located remotely and access implementation module 178 using API 176 over a network. In some embodiments, API-calling module 174, API 176, and/or implementation module 178 are stored in a machine-readable medium, which includes any mechanism for storing information in a form readable by a machine (e.g., a computer and/or other data processing system) . For example, a machine-readable medium can be and/or include magnetic disks, flash memory devices, optical disks, random access memory, and/or read only memory.
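The example in the preceding paragraph, where one implementation module exposes a public API to third-party developers and a hidden API that adds testing and/or debugging functions, can be sketched as follows. All class and method names are hypothetical.

```python
# Hypothetical sketch: one implementation module, two API views.

class Implementation:
    """Stands in for an implementation module with several services."""
    def render(self) -> str:
        return "rendered"
    def measure(self) -> int:
        return 42
    def _debug_dump(self) -> str:
        return "internal state"

class PublicAPI:
    """First set of functions, exposed to third-party developers."""
    def __init__(self, impl: Implementation):
        self._impl = impl
    def render(self) -> str:
        return self._impl.render()
    def measure(self) -> int:
        return self._impl.measure()

class HiddenAPI:
    """Not exposed: a subset of the public functions plus debug-only
    entry points that are absent from the public API."""
    def __init__(self, impl: Implementation):
        self._impl = impl
    def measure(self) -> int:
        return self._impl.measure()
    def debug_dump(self) -> str:
        return self._impl._debug_dump()
```

Both API objects wrap the same implementation, so each presents a different view of the same underlying functionality.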
[0109] An application programming interface ("API") is an interface between a first software process and a second software process that specifies a format for communication between the first software process and the second software process. Limited APIs, such as private APIs or partner APIs, are APIs that are accessible to a limited set of software processes. Public APIs are APIs that are accessible to a wider set of software processes. Some APIs enable software processes to communicate about or set a state of one or more input devices (e.g., one or more touch sensors, visual sensors, motion/orientation sensors, proximity sensors, pressure sensors, intensity sensors, buttons, switches, rotatable elements, sound sensors, wireless proximity sensors, biometric sensors, and/or external controllers) . Some APIs enable software processes to communicate about and/or set a state of one or more output generation components (e.g., one or more tactile output generation components, one or more display generation components, and/or one or more audio output generation components) . Some APIs enable particular
capabilities (e.g., text entry, scrolling, handwriting, image editing, and/or image creation) to be accessed, used, and/or performed by a software process (e.g., generating one or more outputs for use by a software process based on one or more inputs from the software process) . Some APIs enable content from a software process to be inserted into a template and/or displayed in a user interface that has a layout and/or behaviors that are specified by the template.
[0110] Many software platforms include a set of frameworks that provides the core objects and/or core behaviors that a software developer can utilize to build software applications for use on the software platform. Software developers can use these objects to display content onscreen, to interact with that content, and to manage interactions with the software platform. Software applications rely on the set of frameworks for their basic behavior, and the set of frameworks provides many ways for the software developer to customize the behavior of the application to match the specific needs of the software application. Many of these core objects and core behaviors are accessed via one or more APIs. An API will typically specify a format for communication between software processes, including specifying and grouping available functions, variables, and protocols. An API call (and/or an API request) will typically be sent from a sending software process to a receiving software process to accomplish one or more of the following: the sending software process providing information to the receiving software process (e.g., for the receiving software process to take action on) , the sending software process requesting information from the receiving software process (e.g., for the sending software process to take action on) , the sending software process providing information to the receiving software process about action taken by the sending software process, or the sending software process requesting
action by the receiving software process. Interaction with a device (e.g., using a user interface) can include the transfer and/or receipt of one or more API calls between multiple different software processes (e.g., an application and an operating system, different portions of an operating system, or different applications) . For example, when an input is detected, the direct sensor data can be processed into one or more input events that are provided (e.g., via one or more APIs) to a receiving software process that makes one or more determinations based on the input events. In this example, the receiving software process can send (e.g., via one or more APIs) information to a software process to perform an operation (e.g., change a device state and/or user interface) based on the determination. While a determination and an operation performed in response could be made by the same software process, the determination could alternatively be made in a first software process and/or relayed (e.g., via one or more APIs) to a second software process, different from the first software process, which causes the operation to be performed by the second software process. Alternatively, the second software process could relay instructions (e.g., via one or more APIs) to a third software process, different from the first software process and/or the second software process, to perform the operation. It should be understood that some or all user interactions with a computer system could involve one or more API calls within a step of interacting with the computer system (e.g., between different software components of the computer system or between a software component of the computer system and a software component of one or more other computer systems) .
It should be understood that some or all user interactions with a computer system could involve one or more API calls between steps of interacting with the computer system (e.g., between different software components of the computer system or between a software component of the
computer system and a software component of one or more other computer systems) .
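The input-handling pipeline described above (direct sensor data processed into input events, a receiving process making a determination, and a further process performing the resulting operation) can be sketched in miniature. Each function below stands in for a separate software process, and every hand-off between them would, in a real system, be an API call; all names are invented.

```python
# Hypothetical sketch of the multi-process input flow described above.

def to_input_events(raw):
    """First stage: process direct sensor data into input events."""
    return [{"kind": "touch", "x": x, "y": y} for x, y in raw]

def determine(events):
    """Receiving software process: decide what the events mean."""
    return "tap" if len(events) == 1 else "drag"

def perform(operation, ui_state):
    """Further software process: change device/UI state per the
    determination relayed to it."""
    new_state = dict(ui_state)
    new_state["last_gesture"] = operation
    return new_state

# Each hand-off below models one or more API calls between processes.
events = to_input_events([(10, 20)])
state = perform(determine(events), {"last_gesture": None})
assert state["last_gesture"] == "tap"
```

Splitting the determination and the operation across functions mirrors the passage's point that they may be made and performed by different software processes.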
[0111] In some embodiments, the application can be any suitable type of application, including, for example, one or more of a browser application, a media application, a messaging application, a health application, a fitness application, a digital payments application, a social network application, a maps application, a communication application, and/or an application that functions as an execution environment for plug-ins, widgets, or other applications.
[0112] In some embodiments, the application is an application that is pre-installed on the first computer system prior to or upon purchase (e.g., a first-party application) . In some embodiments, the application is an application that is provided to the first computer system via an operating system update file (e.g., a first-party application) . In some embodiments, the application is an application that is provided via an application store. In some embodiments, the application store is pre-installed on the first computer system prior to or upon purchase (e.g., a first-party application store) and allows download of one or more applications. In some embodiments, the application store is a third-party application store (e.g., an application store that is read from a storage device, downloaded via a network, and/or provided by another device) . In some embodiments, the application is a third-party application (e.g., an app that is provided by an application store, downloaded via a network, and/or read from a storage device) . In some embodiments, the application controls the first computer system to perform processes 700, 800, 900, 1100, 1200, 1400, 1500, and 1600
(FIGS. 7, 8, 9, 11, 12, 14, 15, and 16) by calling an application programming interface ("API") provided by the system process using one or more parameters.
[0113] In some embodiments, exemplary APIs provided by the system process include one or more of a device detection API (e.g., for locating nearby electronic devices, media devices, and/or smartphones) , a pairing API (e.g., for establishing a secure connection with an accessory) , a UIKit API (e.g., for generating user interfaces) , an application store API, a payment API, a push notification API, a networking API, a WiFi API, a Bluetooth API, an NFC API, a UWB API, a location detection API, a locator API, a maps API, a health sensor API, a sensor API, a messaging API, a streaming API, a collaboration API, a video conferencing API, an advertising services API, a web browser API (e.g., WebKit API) , a vehicle API, a fitness API, a smart home API, a contact transfer API, a photos API, a camera API, and/or an image processing API.
[0114] In some embodiments, at least one API is a software module (e.g., a collection of computer-readable instructions) that provides an interface that allows another module (e.g., an API-calling module) to access and/or use one or more functions, procedures, data structures, methods, classes, and/or other services provided by application implementation module 172. In some embodiments, the API defines one or more parameters that are passed between API-calling module 174 and application implementation module 172. In some embodiments, API 176 defines a first API call that can be provided by API- calling module 174. In some embodiments, application implementation module 172 is a system software module (e.g., a collection of computer-readable instructions) that is configured to perform an operation in response to receiving an API call via API 176. In some embodiments, application implementation module 172 is constructed to provide an API response (via the API) as a result of processing an API call. In some embodiments, application implementation module 172 is included in device 168 that runs application 170. In some
embodiments, the implementation module is included in an electronic device that is separate from device 168 that runs application 170.
[0115] FIGS. 2A-5 illustrate exemplary components and user interfaces of device 200 in accordance with some embodiments. Device 200 can represent computer system 100 and/or include one or more features of computer system 100. In the examples described with respect to FIGS. 2A-5, device 200 is a laptop computer. It should be recognized that device 200 is not limited to being a laptop computer and that device 200 can be one or more other devices. For example, device 200 can be a communal device, such as a smart speaker, a smart display, and/or a television. In some embodiments, a communal device is configured to provide functionality to multiple users (e.g., at the same time and/or at different times) . In such embodiments, the communal device can be set up and/or administered by a single user. For another example, device 200 can be a personal device, such as a desktop computer, a tablet, a smart phone, a smart watch, a fitness tracking device, and/or a head mounted display device. In some embodiments, a personal device is configured to provide functionality to a single user, such as when the single user is logged into the personal device.
[0116] FIGS. 2A-2C illustrate device 200 in three different configurations. As illustrated in FIG. 2A, device 200 is a laptop computer (also referred to herein as a "laptop") that includes base portion 200-2 and display portion 200-1. In some embodiments, base portion 200-2 can be arranged to rest horizontally on a surface, such as a desk, as shown in FIG. 2A. In some embodiments, display portion 200-1 is connected to base portion 200-2 at connection 200-3 (e.g., a hinge, a motorized arm, one or more connection points, and/or a joint) . In some embodiments, connection 200-3 enables display portion
200-1 to pivot and/or change orientation with respect to base portion 200-2. For example, device 200 can pivot at connection 200-3 to rotate display portion 200-1 and/or device 200 to one or more configurations corresponding to an "OFF" state (e.g., as further described herein in relation to FIG. 2C) . In some embodiments, a configuration corresponding to an "OFF" state is a configuration in which device 200 is in a particular pose. For example, the particular pose can be one in which display portion 200-1 is arranged to be parallel to base portion 200-2 or forming a particular angle (e.g., 60-degree angle) with respect to base portion 200-2. In some embodiments, in the "OFF" state, an area in which content is displayed by device 200 is arranged in a manner that corresponds to (e.g., is associated with, represents, and/or is configured to accompany) the "OFF" state (e.g., not visible, facing down, and/or obscuring the area in which content is displayed) . In some embodiments, in the "OFF" state, an area in which content is displayed by device 200 is not arranged in a manner that corresponds to (e.g., is associated with, represents, and/or is configured to accompany) the "OFF" state. For example, when not in the "OFF" state, device 200 can be arranged within a range of different open configurations (e.g., in which display portion 200-1 is not parallel to base portion 200-2 and the area in which content is displayed by device 200 is visible and/or not obscured) . It should be recognized that display portion 200-1 being parallel to base portion 200-2 is an example of a configuration corresponding to an "OFF" state (e.g., a closed position) of device 200. In some embodiments, one or more other configurations, such as the configuration illustrated in FIG. 2C, could correspond to the "OFF" state.
[0117] FIG. 2A illustrates device 200 in a corresponding pose on the left and display screen 200-4 (providing an area in
which content is displayed by device 200) on the right. As illustrated in FIG. 2A, device 200 is in a first configuration (e.g., display portion 200-1 is perpendicular to base portion 200-2 forming a 90-degree angle) . In FIG. 2A, display screen 200-4 represents what is being displayed (e.g., via a display component) by device 200 while open in the first configuration. In FIG. 2A, display screen 200-4 illustrates a state in which device 200 is "ON" (e.g., powered on, operational, awake, activated, and/or in a higher powered and/or more resource-intensive state than the "OFF" state) . In some embodiments, device 200 displays, via display screen 200-4, one or more user interfaces (e.g., system user interfaces, application user interfaces, user interface objects, windows, controls, and/or other visual content) . In some embodiments, device 200 displays, via display screen 200-4, the one or more user interfaces while in the "ON" state. For example, in FIG. 2A, device 200 is in the "ON" state and display screen 200-4 displays user interface 200-5 that includes an application window. In some embodiments, user interface 200-5 includes (and/or is) one or more user interface objects (e.g., windows, icons, and/or other graphical objects) . For example, user interface 200-5 can include one or more graphical objects different than and/or the same as an application window.
[0118] FIG. 2B illustrates device 200 in a corresponding pose on the left and display screen 200-4 on the right. As illustrated in FIG. 2B, device 200 is in a second configuration (e.g., display portion 200-1 is angled, via connection 200-3, with respect to base portion 200-2 forming a 120-degree angle, which is larger than the angle of FIG. 2A) . In FIG. 2B, display screen 200-4 represents what is being displayed by device 200 while in the second configuration. Display screen 200-4 illustrates a state in which device 200 is "ON" (e.g., the same state as the top diagram of FIG. 2A) .
In FIG. 2B, device 200 displays, via display screen 200-4, user interface 200-5 (e.g., and is the same as displayed in FIG. 2A) . In some embodiments, device 200 displays a different user interface (e.g., other than user interface 200-5) . For example, although FIG. 2B illustrates device 200 displaying the same user interface 200-5 as in FIG. 2A while in a different configuration than that shown in FIG. 2A, device 200 can display a different user interface. In some embodiments, device 200 displays a user interface that corresponds to (e.g., is based on, related to, caused by, due to, and/or configured to accompany) a physical state (e.g., location, position, and/or orientation) , for example including content that is specific to a particular angle or specific to a current context.
[0119] FIG. 2C illustrates device 200 in a corresponding pose on the left and display screen 200-4 on the right. As illustrated in FIG. 2C, device 200 is in a third configuration (e.g., display portion 200-1 is angled, via connection 200-3, with respect to base portion 200-2 forming a 60-degree angle, which is a smaller angle than the angles of FIG. 2A and FIG. 2B) . In FIG. 2C, display screen 200-4 represents what is being displayed by device 200 while in the third configuration. In FIG. 2C, display screen 200-4 illustrates a state in which device 200 is "OFF" (e.g., not powered on, not operational, not awake, not activated, powered off, asleep, hibernating, inactive, and/or deactivated) . In some embodiments, device 200 does not display (e.g., ceases and/or forgoes displaying) the one or more user interfaces while in the "OFF" state (e.g., does not display any visual content) . In some embodiments, device 200 displays, via display screen 200-4, one or more user interfaces while in the "OFF" state. In some embodiments, device 200 displays the same user interface as or a different user interface from one or more user interfaces displayed
while in the "ON" state. In some embodiments, device 200 displays a user interface specific to the "OFF" state and/or a manner of displaying a user interface that is not specific to the "OFF" state. In FIG. 2C, display screen 200-4 is blank because nothing is being displayed on the display of device 200 (e.g., display screen 200-4 is off and/or not displaying any user interface, such as user interface 200-5) .
[0120] In some embodiments, device 200 includes one or more components (also referred to herein as "movement components") that enable device 200 to perform (e.g., control and/or cause) movement (and/or be moved) . Performing movement can include moving a portion of device 200, moving all of device 200, and/or moving one or more other devices and/or components (e.g., that are in communication with device 200 and/or movement components of device 200) . For example, device 200 can automatically move and/or cause and/or control movement of display portion 200-1 relative to base portion 200-2, such as to any one or more of the configurations illustrated in FIGS. 2A-2C. In some embodiments, device 200 performs movement based on a state of device 200. Performing movement based on a state can facilitate certain interactions by device 200. For example, such interactions of device 200 can utilize special features, functions, modes, and/or programs that take advantage of the ability of device 200 to perform movement. Examples of such interactions include using movement to communicate (e.g., to a user) a state (e.g., on, off, hibernating, and/or sleeping) of the device, to assist with user input (e.g., reduce a distance to a user) , and/or to augment interaction behavior of the device (e.g., moving in particular ways, during an interaction with a user, that convey information such as importance and/or direction of attention) . In some embodiments, the movement performed corresponds to (e.g., is in response to, is caused by, and/or
is determined and/or performed based on) one or more of a detected input, a detected context (e.g., user context and/or environmental context) , and/or a state of device 200 (e.g., a state and/or a set of multiple states) . For example, device 200 can perform a movement of display portion 200-1 such that device 200 moves from being in the first configuration illustrated in FIG. 2A to being in the second configuration illustrated in FIG. 2B. In this example, device 200 can detect that a user has rearranged with respect to device 200 (e.g., the user stood up) , and in response, device 200 can perform the movement to the second configuration so that the display is at an optimized viewing angle based on the rearranged height and/or angle of the user's eyes with respect to the display of device 200. As another example, device 200 can perform a movement such that device 200 moves from being in the first configuration illustrated in FIG. 2A to being in the third configuration illustrated in FIG. 2C. In this example, device 200 can perform the movement to the third configuration in response to detecting a state with reduced activity (e.g., the "OFF" state as described herein) . In this way, the movement of device 200 to one or more configurations can indicate a state of device 200.
[0121] FIGS. 2A-2C illustrate device 200 having a display portion that is able to move with one degree of freedom via connection 200-3 connecting display portion 200-1 to base portion 200-2. In some embodiments, device 200 includes one or more components that provide device 200 with movement in one or more degrees of freedom. For example, a movement component (e.g., an output device that allows and/or causes movement) of device 200 can include multiple degrees of freedom (e.g., six degrees of freedom including three components of translation and three components of rotation) . For example, device 200 can be implemented to be able to move display portion 200-1 in a
telescoping forward or backward motion (e.g., display portion 200-1 moves forward while base portion 200-2 remains stationary, for example to reduce and/or extend a viewing distance for a user) . As yet another example, device 200 can be implemented to be able to move display portion 200-1 to rotate about an axis that is perpendicular to connection 200-3 such that display portion 200-1 can turn to position the display to follow a user as they walk around device 200. While the examples shown in FIGS. 2A-2C illustrate a hinge, other movement components can be included in device 200, such as an actuator (e.g., an electric actuator, a pneumatic actuator, and/or a hydraulic actuator) , a rotatable component, a movable base, and/or a rotatable base. In some embodiments, one or more movement components can cause device 200 to move in different ways, such as to rotate (e.g., 0-360 degrees) , to move laterally (e.g., left, right, up, down, and/or any combination thereof) , and/or to tilt (e.g., 0-360 degrees) .
[0122] FIG. 3 illustrates an exemplary block diagram of device 200. In some embodiments, device 200 includes some or all of the components described with respect to FIGS. 1A-1G, 2A-2C, and 5. As illustrated in FIG. 3, device 200 has bus 200-13 that operatively couples I/O section 200-12 (also referred to as an I/O subsection and/or an I/O interface) with processors 200-11 and memory 200-10. As illustrated in FIG. 3, I/O section 200-12 is connected to output devices 200-16. In some embodiments, output devices 200-16 include one or more visual output devices (e.g., a display component, such as a display, a touch-sensitive display, a display screen, and/or a projector) , one or more audio output devices (e.g., a speaker) , one or more haptic output devices (e.g., a device that causes vibration and/or other tactile output) , and/or one or more movement components (e.g., a motor, an actuator, a mechanical linkage, one or more devices that allow and/or
cause movement, and/or one or more movement components as described herein) . As illustrated in FIG. 3, output devices 200-16 include movement controller 200-17 and actuator 200-18. In some embodiments, movement controller 200-17 can be any component, such as a control device, that controls actuator 200-18. For example, movement controller 200-17 can provide control signals that cause actuator 200-18 to actuate (e.g., cause physical movement) . In some embodiments, movement controller 200-17 includes one or more logic components (e.g., a processor) , one or more feedback components (e.g., a sensor) , and/or one or more control components (e.g., for applying control signals, such as a switch, a relay, and/or a control line) . In some embodiments, actuator 200-18 can be any component that performs physical movement of a portion and/or of the entirety of device 200 and/or a device coupled to and/or in contact with device 200. In some embodiments, movement controller 200-17 and actuator 200-18 are embodied in the same device and/or component as each other (e.g., a dedicated onboard movement controller 200-17 that is affixed to actuator 200-18) . In some embodiments, movement controller 200-17 and actuator 200-18 are embodied in different devices and/or components from each other (e.g., one or more processors 200-11 can function as the movement controller 200-17 of actuator 200-18) . In some embodiments, movement controller 200-17 and/or actuator 200-18 are embodied in one or more devices other than device 200. For example, device 200 can be coupled to (e.g., temporarily and/or removably) another device and can instruct movement controller 200-17 and/or control actuator 200-18 of the other device. In some embodiments, actuator 200-18 functions to cause one or more types of mechanical movement (e.g., linear and/or rotational) in one or more manners (e.g., using electric, hydraulic, magnetic, and/or pneumatic power) .
Examples of actuator 200-18 include linear actuators, electromechanical actuators, and/or rotary actuators.
[0123] As illustrated in FIG. 3, I/O section 200-12 is connected to input devices 200-14. In some embodiments, input devices 200-14 include one or more physical input devices (e.g., a button, a touch-sensitive surface, a switch, a slider, and/or a rotatable input mechanism), one or more visual input devices (e.g., a light sensor and/or a camera), one or more audio input devices (e.g., a microphone), and/or other input devices (e.g., a pressure sensor, a contact intensity sensor, a ranging sensor, an accelerometer, a GPS sensor, a directional sensor, a compass, a gyroscope, a temperature sensor, a motion sensor, and/or a biometric sensor). In addition, I/O section 200-12 can be connected with communication unit 200-15 for receiving application and/or operating system data, using for example Wi-Fi, near field communication ("NFC"), Bluetooth, cellular, and/or other wireless and/or wired communication techniques.
[0124] In some embodiments, memory 200-10 of device 200 can be and/or include one or more non-transitory computer-readable storage mediums for storing computer-executable instructions, which, when executed by one or more computer processors 200-11, cause computer processors 200-11 to perform the techniques described herein, including processes 700, 800, 900, 1100, 1200, 1400, 1500, and 1600 (FIGS. 7, 8, 9, 11, 12, 14, 15, and 16). A computer-readable storage medium can be any medium that can tangibly contain or store computer-executable instructions for use by and/or in connection with the instruction execution device, apparatus, and/or system. In some embodiments, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, optical, magnetic, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, and/or Blu-ray technologies, as well as persistent solid-state memory such as flash and solid-state drives. In some embodiments, the storage medium is a transitory computer-readable storage medium. Device 200 is not limited to the components and configuration of FIG. 3 but rather can include other and/or additional components in a multitude of possible configurations, all of which are intended to be within the scope of this disclosure.
[0125] FIG. 4 illustrates a functional diagram of actuator 200-18 in accordance with some embodiments. As described herein, actuator 200-18 can be and/or include any component that performs physical movement. In some embodiments, actuator 200-18 operates using one or more inputs, such as control signal 200-18A and/or energy source 200-18B. For example, actuator 200-18 can be a rotary actuator that converts electric energy into rotational movement. This rotational movement can cause the movement of the display portion of device 200 described herein with respect to FIGS. 2A-2C. For example, a counterclockwise rotational movement of the actuator can cause device 200 to move to a configuration having a larger angle (e.g., the second configuration illustrated in FIG. 2B). For another example, a clockwise (e.g., opposite) rotational movement of the actuator can cause device 200 to move to a configuration having a smaller angle (e.g., the third configuration illustrated in FIG. 2C). In some embodiments, control signal 200-18A can indicate one or more start and/or stop instructions, a goal configuration (e.g., pose and/or location) for movement and/or actuation, a movement and/or actuation speed, a movement and/or actuation direction, an amount of time to move and/or actuate, and/or one or more other characteristics of movement and/or actuation. In some embodiments, control signal 200-18A and energy source 200-18B are the same signal and/or input. In some embodiments, one or more additional components (e.g., mechanical and/or electric) are coupled (e.g., removably or permanently) to actuator 200-18 for affecting movement and/or actuation (e.g., mechanical linkage such as gears, a lead screw, and/or one or more other components for changing a characteristic of movement and/or actuation). In some embodiments, actuator 200-18 includes one or more feedback components (e.g., encoder, overcurrent sensor, position sensor, and/or force sensor) that form part of a feedback loop for monitoring, controlling, modifying, and/or ceasing movement and/or actuation (e.g., slowing actuation as a goal position is reached and/or ceasing actuation if physical resistance to actuation is detected via a sensor). In some embodiments, the one or more feedback components are included (e.g., partially and/or wholly) in a movement controller (e.g., movement controller 200-17 of FIG. 3) operatively coupled to the actuator.
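The feedback loop described above (slowing actuation as a goal position is reached and ceasing actuation when physical resistance is detected) can be sketched in simplified form. Everything below, including the SimulatedActuator class, the field names, and the proportional-control gain, is a hypothetical illustration and not part of the disclosed device.

```python
# Illustrative sketch of a feedback loop for an actuator such as 200-18:
# slow actuation near a goal position, stop on detected resistance.
# SimulatedActuator is a hypothetical stand-in for real hardware.

class SimulatedActuator:
    def __init__(self):
        self.position = 0.0
        self.speed = 0.0
        self.load_current = 0.0   # rises if movement is physically blocked

    def set_speed(self, speed):
        self.speed = speed

    def step(self):
        self.position += self.speed   # integrate motion for one control tick

def drive_to_goal(actuator, goal, tolerance=0.5, max_current=2.0, max_steps=1000):
    """Proportional control: speed shrinks near the goal; stop on overcurrent."""
    for _ in range(max_steps):
        error = goal - actuator.position
        if abs(error) <= tolerance:
            actuator.set_speed(0.0)
            return True                       # goal position reached
        if actuator.load_current > max_current:
            actuator.set_speed(0.0)
            return False                      # physical resistance detected
        # Speed is proportional to remaining error, clamped to [-1, 1].
        actuator.set_speed(max(-1.0, min(1.0, 0.2 * error)))
        actuator.step()
    return False
```

The overcurrent check stands in for any of the feedback components named above (encoder, overcurrent sensor, position sensor, or force sensor).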
[0126] Attention is now turned to functionality (e.g., features and/or capabilities) of one or more devices, such as computer system 100, device 168, and/or device 200. One such functionality is implementing a "software object," which can alternatively be referred to as an agent, a virtual assistant, an intelligent virtual assistant, a software agent, an interactive virtual assistant, a personal assistant, an intelligent personal assistant, a virtual agent, an interactive personal assistant, an intelligent interactive personal assistant, and/or an artificial intelligence ("AI") assistant. In some embodiments, a system software object is a software object provided by an operating system (e.g., of a device and/or a computer system). In some embodiments, an application software object is a software object provided by an application.
[0127] In some embodiments, a user is (and/or represents, includes, and/or is included in) one or more of a subject, person, animal, and/or object in an environment (e.g., a physical and/or virtual environment), such as an environment of the device. In some embodiments, a user is (and/or represents, includes, and/or is included in) an entity that is perceived (e.g., detected by the device, one or more other devices, and/or one or more components thereof). In some embodiments, an entity is something that is distinguished from surrounding entities (e.g., pieces of environments and/or other users) and/or that is considered as a discrete logical construct via one or more components (e.g., perception components and/or other components). In some embodiments, a user is physical. For example, a physical user can represent a user standing in front of, and being perceived by, the device. In some embodiments, a user is virtual. For example, a virtual user can represent an avatar in a virtual scene perceived by the device (e.g., the avatar is detected in a media stream received by the device and/or captured by a camera of the device). Although presented above as examples of a "user," the terms and/or concepts referred to as "subject," "person," "animal," and/or "object" can be interchanged with "user" throughout this disclosure, unless explicitly indicated otherwise. For example, use of any of these terms can likewise be understood to also refer to "user," unless explicitly indicated otherwise. In some embodiments, a software object refers to a set of one or more functions implemented in hardware and/or software (e.g., locally and/or remotely) on a software system (e.g., a single device and/or multiple devices).
[0128] As an example, with reference to FIGS. 2A-2C, a software object implemented at least partially on device 200 can perform operations that cause display portion 200-1 of
device 200 to move with respect to base portion 200-2. For example, the software object detects (e.g., perceives and determines the occurrence of) a context that includes the user standing up (e.g., based on facial detection and tracking). In this example, in response to detecting the context that includes the user standing up, the software object causes device 200 to open and/or device 200 opens display portion 200-1 to the larger angle. As another example, the software object can detect verbal input that corresponds to (e.g., is interpreted as and/or that refers to an operation that includes) a request to move the display (e.g., "Please enter sleep mode" or "Please move my display"). In this example, in response to detecting the verbal input that corresponds to the request to move the display, the software object causes device 200 to move and/or device 200 moves display portion 200-1. In some embodiments, a software object performs operations to perceive an environment, interact with users, learn skills, retrieve knowledge, acquire knowledge, and/or perform tasks. The software object can, for example, perform these and/or other operations in response to user input and/or automatically (e.g., at an appropriate time determined based on a perceived context).
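The context-to-movement behavior described above can be sketched as a simple mapping from a detected context or verbal request to a target display angle. The event strings and angle values below are illustrative assumptions only, not values taken from the disclosure.

```python
# Hypothetical mapping from detected contexts/requests to display movement.
# Event names and angle values are assumptions for illustration.

LARGER_ANGLE = 120    # e.g., the more open configuration of FIG. 2B
SMALLER_ANGLE = 30    # e.g., the more closed configuration of FIG. 2C

def target_angle_for(event):
    """Return a target angle for display portion 200-1, or None for no move."""
    if event == "user_stood_up":                      # e.g., via face tracking
        return LARGER_ANGLE
    if event in ("sleep_mode_requested", "move_display_requested"):
        return SMALLER_ANGLE
    return None                                       # other contexts: no move
```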
[0129] FIG. 5 illustrates a functional diagram of an exemplary software system 200-20. As illustrated in FIG. 5, software system 200-20 is illustrated as a box with a boundary that encloses input devices 200-22, software object components 200-24, and output devices 200-26. In some embodiments, software system 200-20 includes more, fewer, and/or different components than those that are illustrated in FIG. 5. In some embodiments, software system 200-20 is implemented on a single device (e.g., computer system 100, device 168, or device 200). In some embodiments, software system 200-20 is implemented on multiple devices. In some embodiments, one or more components of software system 200-20 illustrated in and/or described with respect to FIG. 5 are external to but operatively coupled to software system 200-20 (e.g., an external device, an external sensor, an external actuator, an external display component, an external speaker, an accessory, and/or an external database). In some embodiments, one or more components of software system 200-20 are local to one or more other components of software system 200-20. In some embodiments, one or more components of software system 200-20 are remote from one or more other components of software system 200-20.
Examples of operations for and/or with which a software object can be used include providing intelligent interaction capabilities (e.g., due in part to one or more machine-learning ("ML") models such as a large language model ("LLM")) for responding and/or causing operations to be performed; performing tasks (e.g., a set of operations for achieving a particular goal); (e.g., automatically and/or intelligently) detecting, recognizing, and/or classifying a user in the environment; detecting and/or responding to input (e.g., air gestures, verbal input, and/or physical input, such as touch input and/or force inputs to physical hardware components (e.g., knobs, buttons, and/or sliders)); tracking a user's face, eyes, and/or body (e.g., to move with the user and/or identify an activity and/or intent of the user); detecting context (e.g., user context, environmental context, and/or operating context); moving (e.g., changing location, position, orientation, and/or pose); and/or performing one or more operations in response to input, context, and/or stimulus (e.g., an object or event that is external and/or internal to the device and that causes one or more responsive operations by a device). The preceding list is meant to be illustrative of operations that can be performed using a software object but is not meant to be an exhaustive list. It should be recognized that other operations fall within the intended scope of the capabilities of a software object.
[0130] In some embodiments, input devices 200-22 includes components for performing sensing and/or communications functions of software system 200-20. As illustrated in FIG. 5, input devices 200-22 includes one or more sensors 200-22A. One or more sensors 200-22A can include any component that functions to detect data corresponding to a physical environment. Examples of one or more sensors 200-22A include a contact sensor, a camera, a light sensor, a microphone, an accelerometer, a position sensor, a pressure sensor, a temperature sensor, and/or an olfactory sensor. This list is not intended to be exhaustive, and one or more sensors 200-22A can include other sensors not explicitly identified herein that detect, generate, and/or otherwise provide data that can be used (e.g., stored, processed, and/or transformed) for detecting data corresponding to a physical environment. As illustrated in FIG. 5, input devices 200-22 includes one or more communications components 200-22B. One or more communications components 200-22B can include any component that functions to send and/or receive communications (e.g., a network interface component, an antenna, an encoder, a decoder, a modem, and/or a communication protocol stack) internal and/or external to software system 200-20. The communications can be between different devices and/or between components of the same device. The communications can include control signals and/or data (e.g., instructions, files, application data, messages, and/or media streams). In some embodiments, input devices 200-22 includes more, fewer, and/or different components than those illustrated in FIG. 5. In some embodiments, input devices 200-22 are implemented in hardware and/or software. In some embodiments, a software object performs operations in response to non-contact inputs, such as air gestures and/or natural language commands.
[0131] In some embodiments, software object components 200-24 includes components that manage and/or carry out functions of a software object of software system 200-20. As illustrated in FIG. 5, software object components 200-24 includes the following functional components: task flow, coordination, and/or orchestration component 200-24A, administration component 200-24B, perception component 200-24C, evaluation component 200-24D, interaction component 200-24E, policy and decision component 200-24F, knowledge component 200-24G, learning component 200-24H, models component 200-24I, and APIs component 200-24J. Each of these components is described herein. Notably, this list of software object components 200-24 is not intended to be exhaustive, and software object components 200-24 can include other functional components not explicitly identified herein that can be used (e.g., processed, stored, and/or transformed) for performing any function of a software object, such as those described herein. In some embodiments, software object components 200-24 includes more, fewer, and/or different components than those illustrated in FIG. 5. In some embodiments, software object components 200-24 is implemented in hardware and/or software.
[0132] In some embodiments, task flow, coordination, and/or orchestration component 200-24A performs operations that enable a software object to handle coordination between various components. For example, operations can include handling a data processing task flow to move from perception component 200-24C (e.g., that detects speech input) to models component 200-24I (e.g., for processing the detected speech input using a large language model to determine content and/or intent of the speech input). In some embodiments, task flow, coordination, and/or orchestration component 200-24A performs operations that enable a software object to handle coordination between one or more external components (e.g., resources). For example, FIG. 5 illustrates examples of external components, such as external database 200-30. In some embodiments, task flow, coordination, and/or orchestration component 200-24A includes functionality performed by an operating system of a device implementing software system 200-20. In some embodiments, task flow, coordination, and/or orchestration component 200-24A includes functionality performed by one or more applications of a device implementing software system 200-20.
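The perception-to-models handoff described above can be sketched as a minimal pipeline. The class names and the keyword rule that stands in for a large language model are hypothetical illustrations, not part of the disclosure.

```python
# Minimal, hypothetical sketch of an orchestration component (cf. 200-24A)
# routing detected speech from a perception stage to a models stage.

class PerceptionStub:
    def detect(self, raw_text):
        # Stand-in for perception component 200-24C detecting speech input.
        return {"type": "speech", "text": raw_text}

class ModelsStub:
    def interpret(self, utterance):
        # Stand-in for models component 200-24I applying an LLM to extract
        # content and intent; a keyword rule replaces the real model here.
        text = utterance["text"].lower()
        intent = "move_display" if "display" in text else "unknown"
        return {"content": utterance["text"], "intent": intent}

class Orchestrator:
    def __init__(self, perception, models):
        self.perception = perception
        self.models = models

    def handle(self, raw_input):
        # Task flow: perception output feeds directly into the models stage.
        return self.models.interpret(self.perception.detect(raw_input))
```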
[0133] In some embodiments, administration component 200-24B performs operations that enable a software system to handle administrative tasks like managing system and/or component updates, managing user accounts, managing system settings, and/or managing component settings. In some embodiments, administration component 200-24B includes functionality performed by an operating system of a device implementing software system 200-20. In some embodiments, administration component 200-24B includes functionality performed by one or more applications of a device implementing software system 200-20.
[0134] In some embodiments, perception component 200-24C performs operations that enable a software object to perceive environmental input. For example, operations can include detecting that a context and/or environmental condition has occurred, detecting the presence of a user (e.g., subject, person, animal, and/or object in an environment) , detecting characteristics (e.g., visible and/or non-visible) of a user, detecting an input that includes speech, detecting an input that includes an air gesture, detecting facial expressions, and/or detecting verbal and/or physical cues. In some embodiments, perception component 200-24C includes functionality performed by an operating system of a device
implementing software system 200-20. In some embodiments, perception component 200-24C includes functionality performed by one or more applications of a device implementing software system 200-20.
[0135] In some embodiments, evaluation component 200-24D performs operations that enable a software object to evaluate data (e.g., to determine a context such as an environmental context, a user context, and/or an operating context). For example, operations can include evaluating data gathered from perception component 200-24C, knowledge component 200-24G, external database 200-30, and/or remote processing resource 200-32. In some embodiments, evaluation component 200-24D includes functionality performed by an operating system of a device implementing software system 200-20. In some embodiments, evaluation component 200-24D includes functionality performed by one or more applications of a device implementing software system 200-20.
[0136] Reference is made herein to environmental context (also referred to herein as a "context of an environment" and/or "a context corresponding to an environment"). In some embodiments, an environmental context is a context based on one or more characteristics of the environment (e.g., users, time, weather, locations, and/or lighting). For example, an environmental context can include that it is daytime, that it is raining outside, and/or that a device is currently located in a park. In some embodiments, a device (e.g., using a software object) determines an environmental context (e.g., to be currently true, applicable, and/or occurring) using one or more of detecting input (e.g., via one or more input devices) and/or receiving data (e.g., from one or more other devices and/or components in communication with the device).
[0137] Reference is made herein to user context (also referred to herein as a "context of a user" and/or "a context corresponding to a user"). In some embodiments, a user context is a context based on one or more characteristics of the user. For example, a user context can include the user's appearance and/or actions, behavior, movement, clothing, personality, location, and/or pose. In some embodiments, a device (e.g., using a software object) determines a user context (e.g., to be currently true, applicable, and/or occurring) using one or more of detecting input (e.g., via one or more input devices) and/or receiving data (e.g., from one or more other devices and/or components in communication with the device). In some embodiments, a device determines user context based on historical context and/or learned characteristics of the user, where one or more characteristics of the user are learned and/or stored over a period of time by the device.
[0138] Reference is made herein to operational context (also referred to herein as a "context of operation" and/or an "operating context"). In some embodiments, an operational context is a context based on one or more characteristics of the operation of a device (e.g., the device determining and/or accessing the operational context and/or one or more other devices). For example, an operational context can include the state of the device (and/or of one or more components of the device), an internal dialogue of the device (e.g., the device's understanding of a context), operations being performed by the device, and/or applications and/or processes that are executing (e.g., open and/or running) on the device. In some embodiments, a device (e.g., using a software object) determines an operational context (e.g., to be currently true, applicable, and/or occurring) using one or more of detecting input (e.g., via one or more input devices) and/or receiving data (e.g., from one or more other devices and/or components in communication with the device). In some embodiments, a device (e.g., using a software object) determines an operational context (e.g., to be currently true, applicable, and/or occurring) using one or more states (e.g., retrieved, accessed, and/or queried by a process of the device).
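The three context types discussed above (environmental, user, and operational) can be illustrated with a toy determination function that derives each from detected input and device state. The field names and thresholds below are assumptions for illustration only.

```python
# Toy sketch of deriving environmental, user, and operational contexts from
# detected input and device state; all field names/thresholds are assumptions.

def determine_contexts(sensor_data, device_state):
    # Environmental context: characteristics of the surroundings.
    environmental = {"daytime": sensor_data.get("light_level", 0) > 500}
    # User context: characteristics of a perceived user.
    user = {"present": bool(sensor_data.get("face_detected", False))}
    # Operational context: the device's own operating state.
    operational = {"busy": len(device_state.get("running_apps", [])) > 0}
    return {"environmental": environmental, "user": user,
            "operational": operational}
```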
[0139] In some embodiments, interaction component 200-24E performs operations that enable a software object to manage and/or perform interactions with users. For example, operations can include determining an appropriate interaction model for a particular context and/or in response to a particular input. In some embodiments, interaction component 200-24E includes functionality performed by an operating system of a device implementing software system 200-20. In some embodiments, interaction component 200-24E includes functionality performed by one or more applications of a device implementing software system 200-20.
[0140] In some embodiments, policy and decision component 200-24F performs operations that enable a software object to take actions in view of available data. For example, operations can include determining which operations to perform and/or which functional components to utilize in response to a detected context. In some embodiments, policy and decision component 200-24F includes functionality performed by an operating system of a device implementing software system 200-20. In some embodiments, policy and decision component 200-24F includes functionality performed by one or more applications of a device implementing software system 200-20.
[0141] In some embodiments, knowledge component 200-24G performs operations that enable a software object to access and use stored knowledge. For example, operations can include indexing, storing, and/or retrieving data from a database, a
data store, and/or other resource. In some embodiments, knowledge component 200-24G includes functionality performed by an operating system of a device implementing software system 200-20. In some embodiments, knowledge component 200-24G includes functionality performed by one or more applications of a device implementing software system 200-20.
[0142] In some embodiments, learning component 200-24H performs operations that enable a software object to learn through experiences. For example, operations can include observing and/or keeping track of data that includes user characteristics, routines, preferences, and/or environmental characteristics in a manner that allows such data to be used to inform future operations by the software object and/or a component thereof (e.g., such as when performing tasks and/or interactions with users). In some embodiments, learning component 200-24H includes functionality performed by an operating system of a device implementing software system 200-20. In some embodiments, learning component 200-24H includes functionality performed by one or more applications of a device implementing software system 200-20.
[0143] In some embodiments, models component 200-24I performs operations that enable a software object to apply ML models (e.g., such as a large language model or "LLM") to process data. For example, operations can include executing ML models, storing ML models, training and/or re-training ML models, and/or otherwise managing aspects of implementing ML models. In some embodiments, models component 200-24I includes functionality performed by an operating system of a device implementing software system 200-20. In some embodiments, models component 200-24I includes functionality performed by one or more applications of a device implementing software system 200-20.
[0144] In some embodiments, software system 200-20 responds to natural language input. For example, software system 200-20 responds to a natural language input that is in the form of a question, a command, a statement, and/or a request. In some embodiments, software system 200-20 outputs text and/or speech output that is provided in a natural language or mimicking a natural language style. For example, software system 200-20 can respond to the natural language question "How hot is it outside?" with a speech response that indicates the current temperature outside at the user's location (e.g., "It is 18 degrees outside"). In some embodiments, software system 200-20 responds to natural language input by providing information (e.g., travel, weather, and/or calendar information) and/or performing a task (e.g., searching a database, opening a document, and/or opening an application).
[0145] In some embodiments, software system 200-20 includes and/or relies on one or more data models to process input (e.g., natural language input, visual input, gesture input, and/or other data input) and/or provide output (e.g., output of information via visual output, audio output, natural language output, and/or textual output). Such data models can include and/or be trained using user data (e.g., based on particular interactions and/or data from the user being interacted with) and/or global data (e.g., general data based on interactions and/or data from many users). For example, user data (e.g., previous use of language and/or phrases, preferences, calendar entries, a contact list, and/or activity data) can be used to better infer user intent and/or provide responses that are more likely to address a user's request. In some embodiments, data models used by software system 200-20 include, are used by, and/or are implemented using one or more machine-learning components (e.g., hardware, software, and/or one or more neural networks). Such machine-learning components can be used to process verbal input to determine words and/or phrases therein, a user intent corresponding to the words, one or more contexts that correspond to the words, one or more confidence scores, and/or a set of one or more actions to take in response to the verbal input. Analogous operations can be performed to process other types of inputs, such as data input, visual input, and/or textual input. Such data models can include machine-learning and/or data-processing models, including, but not limited to, language models, speech recognition models, natural language processing models, object recognition models, visual processing models, ontologies, task flow models, and/or intent recognition models (e.g., used to determine user intent).
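The chain just described, from verbal input to words, an inferred intent with a confidence score, and a set of responsive actions, can be sketched with simple keyword rules standing in for the machine-learning models. Everything below, including the intent names and confidence values, is an illustrative assumption.

```python
# Illustrative stand-in for the model chain described above: verbal input is
# split into words, mapped to an intent with a confidence score, and paired
# with responsive actions. Keyword rules replace real ML models here.

def process_verbal_input(transcript):
    words = transcript.lower().rstrip("?.!").split()
    if "hot" in words or "temperature" in words:
        intent, confidence = "get_weather", 0.9
        actions = ["query_weather_service", "speak_response"]
    else:
        intent, confidence = "unknown", 0.1
        actions = ["ask_clarifying_question"]
    return {"words": words, "intent": intent,
            "confidence": confidence, "actions": actions}
```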
[0146] In some embodiments, Application Programming Interfaces (APIs) component 200-24J performs operations that enable a software object to interface with services, devices, and/or components. For example, operations can include relaying data (e.g., responses, requests, and/or other messages) between data interfaces (e.g., between software programs, between a system process and application process, between system processes, between application processes, between communication protocols, between a client and a server, between file systems, and/or between components on different sides of a trust boundary). In some embodiments, the data interfaces served by APIs component 200-24J are local (e.g., to the device, such as two application processes exchanging data) and/or remote (e.g., from the device, such as interfacing with a web service via a remote server). In some embodiments, APIs component 200-24J includes functionality performed by an operating system of a device implementing software system 200-20. In some embodiments, APIs component 200-24J includes functionality performed by one or more applications of a device implementing software system 200-20.
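The relaying role described for the APIs component can be sketched minimally as a registry that forwards request messages to named interfaces. The service names and handler signatures below are hypothetical.

```python
# Hypothetical sketch of an APIs component (cf. 200-24J) relaying
# request/response messages between data interfaces, such as two
# processes or a client and a service.

class ApiRelay:
    def __init__(self):
        self._services = {}          # service name -> handler callable

    def register(self, name, handler):
        self._services[name] = handler

    def relay(self, name, request):
        # Forward the request to the named interface, or report an error.
        handler = self._services.get(name)
        if handler is None:
            return {"error": "unknown service: " + name}
        return handler(request)
```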
[0147] In some embodiments, output devices 200-26 includes components for performing output functions of software system 200-20. The exemplary output devices illustrated in FIG. 5 are described briefly below. In some embodiments, output devices 200-26 include more, fewer, and/or different components than those illustrated in FIG. 5. In some embodiments, output devices 200-26 are implemented in hardware and/or software.
[0148] As illustrated in FIG. 5, output devices 200-26 includes one or more visual output devices 200-26A. One or more visual output devices 200-26A can include any component that functions to output (e.g., create, generate, and/or display), and/or cause output of, a visual output (e.g., an output that is visually perceptible, such as playback of visual media content, a graphical user interface, and/or lighting). Examples of one or more visual output devices 200-26A include a display component, a light-emitting diode ("LED"), a projector, a head mounted display ("HMD"), and/or a component that creates visually perceptible effects (e.g., movement). This list is not intended to be exhaustive, and one or more visual output devices 200-26A can include other visual output devices not explicitly identified herein that detect, generate, and/or otherwise provide data that can be used (e.g., stored, processed, and/or transformed) for outputting visual output.
[0149] As illustrated in FIG. 5, output devices 200-26 include one or more audio output devices 200-26B. One or more audio output devices 200-26B can include any component that functions to output (e.g., create and/or generate), and/or cause output of, an audio output (e.g., an output that is audibly perceptible, such as a sound, speech, music, and/or audio media content). Examples of one or more audio output devices 200-26B include a speaker, a tone generator, an audio amplifier, and/or a component that creates audibly perceptible effects (e.g., movement such as vibrations). This list is not intended to be exhaustive, and one or more audio output devices 200-26B can include other audio output devices not explicitly identified herein that generate, detect, and/or otherwise provide data that can be used (e.g., stored, processed, and/or transformed) for outputting audio output.
[0150] As illustrated in FIG. 5, output devices 200-26 include one or more movement output devices 200-26C (also referred to herein as a "movement component") . One or more movement output devices 200-26C can include any component that functions to output (e.g., create and/or generate) , and/or cause output of, a movement output (e.g., an output that includes physical movement of the device and/or another device/component ) . Examples of one or more movement output devices 200-26C include a movement controller, an electromechanical device, an actuator, a mechanical linkage, and/or a component that creates physical movement. This list is not intended to be exhaustive, and one or more movement output devices 200-26C can include other movement output devices not explicitly identified herein that detect, generate, and/or otherwise provide data that can be used (e.g., stored, processed, and/or transformed) for outputting movement output. As illustrated in FIG. 5, output devices 200-26 include one or more haptic output devices 200-26D. One or more haptic output devices 200- 26D can include any component that functions to output (e.g., create, generate, and/or display) , and/or cause output of, a haptic output (e.g., an output that is physically perceptible using tactile sensation, such as a vibration, pressure, texture, and/or shape) . Examples of one or more haptic output devices 200-26D include a speaker, a component that generates vibrations, a component that generates pressure changes, a component that generates texture changes, and/or a component
that creates perceivable tactile effects. This list is not intended to be exhaustive, and one or more haptic output devices 200-26D can include other haptic output devices not explicitly identified herein that detect, generate, and/or otherwise provide data that can be used (e.g., stored, processed, and/or transformed) for outputting haptic output.
[0151] As illustrated in FIG. 5, output devices 200-26 include one or more communications components 200-26E. One or more communications components 200-26E can include any component that functions to send and/or receive communications (e.g., a network interface component, an antenna, an encoder, a decoder, a modem, and/or a communication protocol stack) internal and/or external to software system 200-20. In some embodiments, the communications can be between different devices and/or between components of the same device. In some embodiments, the communications can include control signals and/or data (e.g., instructions, files, application data, messages, and/or media streams). In some embodiments, one or more communications components 200-26E includes one or more features of one or more communications components 200-22B (e.g., as described herein). In some embodiments, one or more communications components 200-26E are the same as one or more communications components 200-22B (e.g., one or more components that handle communication inputs and outputs and thus can be considered as either and/or both an input device and an output device).
[0152] For the purposes of this disclosure, a software object does not need to include all of the functionality mentioned herein but rather can include less functionality or more functionality. For example, a software object can be implemented on a software system that does not have movement functionality but that otherwise includes an intelligent personal assistant that can interact with a user.
[0153] Throughout this disclosure, reference can be made to movement output (e.g., referred to in various forms, such as movement, output of movement, device movement, movement output, output of motion, device motion, and/or motion output) . In some embodiments, outputting (e.g., causing output of) movement refers to movement of an electronic device (e.g., a portion or component thereof relative to another portion and/or of the whole electronic device) . For example, referring back to FIG. 2B, movement output can refer to device 200 actuating movement component 200-3 to move display portion 200-1 to the position illustrated in FIG. 2B (e.g., from the position in FIG. 2A) . In some embodiments, movement output is not (e.g., does not include and/or does not only include) haptic output (e.g., haptic movement output) . In some embodiments, movement output is not (e.g., does not include and/or does not only include) vibration output. In some embodiments, movement output is not (e.g., does not include and/or does not only include) oscillating movement (e.g., movement of an actuator that merely causes vibration by moving a component repeatedly along a path that is internal to the device) . In some embodiments, movement output includes (e.g., requires and/or results in) changing a location and/or pose of at least a portion of (and/or the entirety of) a component or the electronic device. In some embodiments, movement output includes output that moves at least a portion of (and/or the entirety of) a component or the electronic device from a first location and/or first pose to a second location and/or second pose. For example, with respect to FIGS. 2A-2C, display portion 200-1 is shown in a different location (e.g., in space) and pose (e.g., relative to base portion 200-2) in each of FIGS. 2A, 2B, and 2C. 
In some embodiments, movement output includes output that moves at least a portion of (and/or the entirety of) a component or the electronic device to a third location and/or third pose (e.g., from the first location
and/or first pose and/or from the second location and/or the second pose). In some embodiments, the third location and/or the third pose is the same as the first location and/or first pose and/or as the second location and/or the second pose. For example, movement output can include device 200 in FIG. 2A beginning from the first position illustrated in FIG. 2A, moving to the second position illustrated in FIG. 2B, and moving to return to the first position illustrated in FIG. 2A. For example, movement output can include device 200 in FIG. 2A beginning from the first position illustrated in FIG. 2A, moving to the second position illustrated in FIG. 2B, and continuing movement to come to rest at the third position illustrated in FIG. 2C.
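The pose-sequence behavior described above can be sketched as follows. This is a minimal illustration only; the Pose fields, the RecordingActuator stand-in, and the perform_movement_output name are hypothetical and do not correspond to any component named in this disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pose:
    location: tuple  # e.g., (x, y, z) of a display portion in space
    angle: float     # orientation relative to a base portion, in degrees

class RecordingActuator:
    """Stand-in for a movement component; records each commanded pose."""
    def __init__(self):
        self.history = []

    def move_to(self, pose):
        self.history.append(pose)

def perform_movement_output(actuator, poses):
    """Drive the movement component through a sequence of poses in order."""
    for pose in poses:
        actuator.move_to(pose)

# A movement output that returns to its starting pose (first -> second ->
# first), as in the FIG. 2A -> FIG. 2B -> FIG. 2A example:
first = Pose((0, 0, 0), 0.0)
second = Pose((0, 0, 5), 30.0)
actuator = RecordingActuator()
perform_movement_output(actuator, [first, second, first])
```

A sequence ending at a distinct third pose (the FIG. 2C case) would simply pass a third, different Pose as the final element.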
[0154] Throughout this disclosure, an electronic device can be illustrated in (and/or described as being in) different locations and/or poses at different times. For example, FIG. 2A illustrates device 200 in the first position, FIG. 2B illustrates device 200 in the second position, and FIG. 2C illustrates device 200 in the third position. In some embodiments, the electronic device moves itself between such locations and/or poses (e.g., using movement output). For example, device 200 moves from the first position to the second position under its own power (e.g., using a power source and one or more actuators to cause movement). In particular, any example herein that illustrates and/or describes an electronic device being at different locations and/or poses (e.g., at different times) should be understood to cover a scenario in which the device moved itself between such locations and/or poses (e.g., unless otherwise clearly indicated).
[0155] Throughout this disclosure, reference can be made to "causing output," "performing output," and/or "outputting" (e.g., by one or more output generation devices and/or by one
or more output generation components). In some embodiments, outputting (e.g., or the aforementioned variants) includes (and/or is) outputting movement (e.g., movement output as described herein).
[0156] Throughout this disclosure, reference can be made to "causing display of," "displaying," and/or "outputting visual content" (e.g., by one or more display components) . In some embodiments, displaying (e.g., or the aforementioned variants) includes displaying visual content in connection with outputting movement (e.g., movement output as described herein) .
[0157] Throughout this disclosure, reference can be made to "causing output of audio," "outputting audio," and/or "providing audio output" (e.g., by one or more audio generation components and/or by one or more audio output devices) . In some embodiments, outputting audio (e.g., or the aforementioned variants) includes outputting audio content in connection with outputting movement (e.g., movement output as described herein) .
[0158] Throughout this disclosure, reference can be made to movement of an avatar (or other representation of a user, a software object, and/or a character that is displayed) (e.g., by one or more display components) . In some embodiments, moving an avatar (e.g., or the aforementioned variants) includes displaying movement of visual content in connection with outputting movement (e.g., movement output as described herein) . For example, displaying an avatar nodding in agreement can include movement of the electronic device in a similar manner as the avatar movement (e.g., mimicking nodding) . In some embodiments, moving an avatar (e.g., or the aforementioned variants) includes outputting movement (e.g., movement output as described herein) without displaying
movement of visual content. For example, a device can perform movement output that mimics nodding without moving a displayed avatar (e.g., the avatar does not move relative to the display). As illustrated in FIG. 5, software system 200-20 can optionally interface with external components such as external database 200-30, remote processing component 200-32, and/or remote administration component 200-34. In some embodiments, external database 200-30 represents one or more functions that provide data storage resources accessible to software system 200-20. In some embodiments, access to the data of external database 200-30 is provided directly to software system 200-20 (e.g., the software system manages the database) and/or indirectly to software system 200-20 (e.g., a database is managed by a different system, but data stored therein can be provided and/or stored for use by software system 200-20). In some embodiments, external database 200-30 is dedicated to (e.g., only for use by) software system 200-20, is not dedicated to software system 200-20 (e.g., is a database of a web service accessible to different software systems), and/or is a combination of both dedicated and non-dedicated database resources. In some embodiments, remote processing component 200-32 represents one or more components that function as a data processing resource that is accessible to software system 200-20. In some embodiments, access to remote processing component 200-32 is provided directly to software system 200-20 (e.g., the software system manages the processing resources) and/or indirectly to software system 200-20 (e.g., a processing resource managed by a different system, but that can provide data processing for the benefit of software system 200-20).
In some embodiments, remote processing component 200-32 is dedicated to (e.g., only for use by) software system 200-20, is not dedicated to software system 200-20 (e.g., is a processing resource of a web service accessible to different software systems), and/or is a combination of both dedicated
and non-dedicated processing resources. Examples of data processing include processing image data (e.g., for feature extraction and/or object detection), processing audio data (e.g., for processing natural language speech input via a large language model), and/or training a machine-learning algorithm and/or model. In some embodiments, remote administration component 200-34 represents functions that include and/or are related to administrative functions. For example, such administrative functions can include providing component updates to software system 200-20 (e.g., software and/or firmware updates), managing accounts (e.g., permissions, access control, and/or preferences associated therewith), synchronizing between different software systems and/or components thereof (e.g., such that a software object accessible via multiple devices of a user can provide a consistent user experience between such devices), managing cooperation with other services and/or software systems, error reporting, managing backup resources to maintain software system reliability and/or software object availability, and/or other functions required by software system 200-20 to perform operations, such as those described herein.
[0159] The various components of software system 200-20 described above with respect to FIG. 5 represent functional blocks that represent functionality. This functionality can be implemented on the same and/or different hardware (e.g., physical components) and/or by the same and/or different software. For example, the functional blocks can be implemented using one or more physical components, devices (e.g., computer system 100, device 168, and/or device 200) , and/or software programs. In other words, each functional block does not necessarily represent a single, discrete physical component, device, and/or software program, but can be implemented using one or more of these. Further, software
system 200-20 can include multiple implementations of functionality represented by a respective functional block. For example, software system 200-20 can include multiple different model components representing ML models that are used in different contexts, can include multiple different API components representing different APIs that are used for different services, and/or can include multiple different visual output devices that are used for outputting different types of visual output.
[0160] Attention is now turned to discussion of concepts that can arise with respect to operation of a software object.
[0161] As discussed throughout, a software object can be capable of interacting with a user. In some embodiments, this capability includes the ability to process explicit requests, commands, and/or statements. In some embodiments, explicit requests, commands, and/or statements include and/or are interpreted as instructions directed to accomplishing a task (e.g., display X, complete task Y, and/or perform operation Z). In some embodiments, a software object includes the ability to process implicit requests, commands, and/or statements. In some embodiments, an implicit command, request, and/or statement does not include an explicit command, request, and/or statement. For example, "I like going to Africa," can be interpreted as an implicit command, request, and/or statement, in response to detecting which device 200 displays an itinerary. As another example, "This picture is for my grandmother," can be interpreted as an implicit command, request, and/or statement, in response to detecting which device 200 displays suggestions for modifying the picture. As another example, "I'm so tired," can be interpreted as an implicit command, request, and/or statement, in response to detecting which device 200 causes a sleep meditation application to begin a
meditation session. As yet another example, "I miss my grandad" can be interpreted as an implicit command, request, and/or statement, in response to detecting which device 200 can initiate a live communication session (e.g., video call, telephone call, and/or text messaging session) with grandad. In some embodiments, an implicit request is more likely to be processed according to one or more current environmental context, operational context, and/or user context, while an explicit request is less likely to be processed according to one or more current environmental context, operational context, and/or user context. For example, the phrase, "call my grandad," can be an explicit request, and in response to detecting the request, device 200 will initiate a live communication session with grandad, irrespective of one or more current environmental context, operational context, and/or user context. However, the phrase, "I miss my grandad," can be an implicit request, and in response to detecting the request, device 200 can display a list of gifts to buy for grandad if a user has been recently talking about buying gifts or could call grandad in another context that does not include the user recently discussing buying gifts. In some embodiments, a request can include one or more explicit requests and one or more implicit requests. In some embodiments, an implicit request is responded to independently from an explicit request, and in other embodiments, a response to an implicit request is dependent on an explicit request.
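The explicit/implicit distinction above can be sketched as follows. This is a minimal illustration under stated assumptions: the phrases, the context key, and the action names are hypothetical, and the keyword matching is a stand-in for actual natural-language understanding.

```python
def respond_to_request(utterance, context):
    """Route an utterance, consulting context only for implicit requests.

    Explicit requests ("call ...") are acted on irrespective of context;
    implicit requests ("I miss ...") are interpreted against current
    context (here, a hypothetical `recent_topic` entry).
    """
    text = utterance.lower()
    if text.startswith("call "):
        # Explicit request: perform the task regardless of context.
        return "initiate_call"
    if "i miss" in text:
        # Implicit request: the chosen response depends on context.
        if context.get("recent_topic") == "buying gifts":
            return "display_gift_list"
        return "initiate_call"
    return "no_action"
```

For example, "I miss my grandad" yields a gift list only when the user was recently discussing gifts, whereas "call my grandad" always initiates a call.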
[0162] Reference can be made herein to a response by a software object that is output by a device. In some embodiments, a response includes an audio portion (e.g., audio output, sound, acoustic output, and/or speech) (also referred to herein as a "verbal response," an "audio response," and/or an "acoustic response") and/or a visual portion (e.g., display and/or movement of a representation and/or avatar). In some
embodiments, a response includes a movement portion (e.g., movement of the device). In some embodiments, a response includes a haptic portion (e.g., touch and/or vibration).
[0163] Reference can be made herein to an internal context, internal dialogue, and/or an operational context, which can refer to a dynamic context or dynamic decision-making process of the device, a state of device 200, and/or internal data on which the device is partially basing its decision. In some embodiments, an internal dialogue includes a set of one or more rules, detections, characteristics, and/or observations that the computer system uses to generate a response to one or more commands, questions, and/or statements. In some embodiments, the set of one or more rules, detections, characteristics, and/or observations are learned and/or generated via deep learning and/or one or more machine-learning algorithms, and/or using one or more machine-learning and/or system software objects. In some embodiments, an internal dialogue is generated in real-time. In some embodiments, an internal dialogue is locally stored and/or stored via the cloud. In some embodiments, an internal dialogue can be modified, updated, and/or deleted. In some embodiments, an internal dialogue is generated based on other internal dialogues.
[0164] Reference can be made herein to personality and/or behavior (or a representation of personality/behavior) (e.g., of a software object, user, and/or character). In some embodiments, personality and/or behavior refers to a set of one or more characteristics that the device detects, applies, has knowledge of, conforms to, and/or tracks. In some embodiments, the personality or behavior is used as a basis to perform operations. For example, a software object can detect a user's personality and respond in a manner based on the personality (e.g., output different responses in response to
different user personalities). As another example, the software object can output a response having characteristics that correspond to one or more characteristics that correspond to the personality and/or behavior (e.g., output a response in different ways that depend on personality of the software object). In some embodiments, such characteristics represent and/or mimic personality of a user, such as how the user acts and/or speaks. In some embodiments, such characteristics approximate a user's personality.
[0165] In some embodiments, a software object is a system software object. In some embodiments, a system software object is a software object that corresponds to a process that originates from and/or is controlled by an operating system of the device (e.g., the device implementing the software object) . In some embodiments, a software object is an application software object. In some embodiments, an application software object is a software object that corresponds to a process that originates from and/or is controlled by an application of (e.g., installed on and/or executed by) the device (e.g., the device implementing the software object) .
[0166] Reference can be made herein to a representation (e.g., an avatar and/or avatar representation) of a software object (and/or of a user, subject, person, animal, and/or object) and/or a user interface object (e.g., an animated character) . In some embodiments, a representation of a software object refers to a set of output characteristics (e.g., visual and/or audio) of the software object (and/or the user and/or the user interface object) . For example, a representation of a software object can include (and/or correspond to) a set of one or more visual characteristics (e.g., facial features of an animated face) and/or one or more audio characteristics (e.g., language and voice characteristics of audio output) . In some
embodiments, a representation (e.g., of a software object) is used to represent output by the software object. For example, a device implementing an interactive software object outputs audio in a voice of the software object and displays an animated face of the software object moving in a manner to simulate the software object speaking the audio output. In this way, a user can feel like they are having a normal conversation with the software object. In some embodiments, a representation of a software object is (or is not) inclusive of personality and/or behavior characteristics (e.g., as described herein). For example, a representation of a software object can include (and/or correspond to) a set of visual characteristics (e.g., facial features of an animated face) and also a set of personality characteristics. In some embodiments, a representation of a software object includes a set of user characteristics that correspond to a visual representation of a user (e.g., representations of a user's appearance, voice, and/or personality are used as an avatar that appears to move and/or speak). In some embodiments, a representation is a representation of a face (e.g., a user interface object that is output having features that simulate a face and/or facial expressions of a person (e.g., for conveying information to a viewer)).
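A representation as a bundle of output characteristics can be sketched as follows. The field names, the voice-profile string, and the `speak` helper are illustrative assumptions, not part of the disclosure; they merely show visual and audio characteristics being paired, with personality characteristics optionally included.

```python
from dataclasses import dataclass, field

@dataclass
class Representation:
    """Bundle of output characteristics representing a software object."""
    facial_features: dict                             # visual characteristics
    voice: str                                        # named voice profile
    personality: dict = field(default_factory=dict)   # optionally included

def speak(representation, text):
    """Pair audio output in the object's voice with a talking-face animation,
    simulating the software object speaking the audio output."""
    audio = f"[{representation.voice}] {text}"
    animation = dict(representation.facial_features, mouth="moving")
    return audio, animation

rep = Representation(facial_features={"eyes": "round"}, voice="warm")
audio, animation = speak(rep, "Hello")
```

Here the same `Representation` could be constructed with or without `personality`, reflecting that a representation may or may not be inclusive of personality characteristics.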
[0167] In some embodiments, a character (e.g., of a software object and/or avatar) refers to a particular set of characteristics of a representation. For example, an avatar can take on (e.g., use, interact with, apply, and/or output according to) characteristics of a fictional and/or non-fictional character (e.g., from a book, a show, a movie, a series, and/or popular culture).
[0168] In some embodiments, a voice (e.g., of a software object and/or avatar) refers to a set of one or more characteristics corresponding to sound output that resembles
(e.g., represents, mimics, and/or recreates) vocal utterance (e.g., attributable and/or simulated as being output by a software object and/or avatar) . For example, device 200 can output a sentence that sounds different depending on a voice used. In some embodiments, a particular character and/or avatar can be configured to use a particular voice (e.g., have a corresponding voice) . In some embodiments, the particular voice can mimic a user's voice.
[0169] In some embodiments, an appearance (e.g., of a software object and/or avatar) refers to a set of one or more characteristics corresponding to visual output that represents an avatar (and/or a software object). For example, device 200 can output an avatar that has a set of facial features forming an appearance that resembles a particular character from a movie.
[0170] In some embodiments, an expression of an avatar refers to a set of one or more characteristics corresponding to a particular visual appearance of a user, an avatar, and/or a software object. For example, device 200 can output an avatar that has a set of facial features arranged in a particular way to give the appearance of a facial expression (e.g., which can be used as a form of non-verbal communication to a user)
(e.g., a frown is an expression of sadness, a smile is an expression of happiness, and/or wide open eyes is an expression of surprise) . As another example, device 200 can output an avatar that has a set of body features (e.g., arms and/or legs) arranged in a particular way to give the appearance of a body expression (e.g., which can be used as a form of non-verbal communication to a user) (e.g., a hand gesture is an expression of approval, covering eyes is an expression of fear, and/or shrugging shoulders is an expression of lack of knowledge) . In some embodiments, an expression includes movement (e.g., a head nod is an
expression of agreement and/or disagreement) of the avatar. In some embodiments, device 200 can move, via the movement component, to indicate an expression with or without the avatar moving. In some embodiments, a software object performs one or more operations that depend on a user's expression (e.g., detects if a person is sad and responds with a kind statement or question). In some embodiments, expressions (e.g., whether and/or how they are used and/or how they are output) depend on personality. For example, a first personality can use a particular expression more than a second personality. As another example, an expression (e.g., smile, frown, and/or how wide eyes are opened) for the first personality can appear different from the expression (and/or a similar and/or equivalent expression) for a second personality (e.g., the first personality smiles in a manner that reveals teeth, but the second personality smiles without revealing teeth).
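The personality-dependent rendering of an expression described above can be sketched as a lookup. The personality names and style strings are illustrative placeholders for whatever characteristics an embodiment would actually use.

```python
# How two hypothetical personalities render the same expressions
# (e.g., the first smiles revealing teeth, the second does not).
EXPRESSION_STYLES = {
    "first": {"smile": "open smile, teeth visible", "frown": "pronounced frown"},
    "second": {"smile": "closed smile, no teeth", "frown": "subtle frown"},
}

def render_expression(personality, expression):
    """Return the personality-specific rendering of an expression,
    falling back to a neutral appearance for unknown expressions."""
    return EXPRESSION_STYLES[personality].get(expression, "neutral")
```

The same expression key thus produces different visual output depending on the active personality.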
[0171] In some embodiments, a software object (e.g., an avatar of the software object and/or a software system (e.g., hardware and/or software) implementing the software object) mimics characteristics of another user, software object, and/or character (e.g., in personality, behavior, expressions, and/or voice). In some embodiments, mimicking includes mirroring a user (e.g., copying use of a phrase and/or movement detected from a user interacting with the software object). In some embodiments, mimicking characteristics of a user includes attempting to reproduce the characteristics of the user (e.g., in the exact same manner and/or in a manner that resembles the characteristics but is not an exact reproduction of the characteristics). For example, a software object mimicking voice and/or expressions does not require that the software object have the exact same voice and/or expressions
as the user being mimicked (e.g., but rather simply resembles the user's voice and/or expressions).
[0172] In some embodiments, a component and/or device uses (e.g., performs operations, makes decisions, and/or determines context based on) learned characteristics (e.g., characteristics of a user, context, and/or environment that the device has learned over time, such as via detection, prior experience, and/or feedback from one or more users). For example, characteristics learned over time can include a user's routine. In such an example, if a particular user asks a software object for a summary of any new messages for the user at the same time every day, the software object can learn to perform operations automatically based on the learned characteristics of the routine (e.g., what data is needed, when the data is needed, and/or for which user). In some embodiments, use of learned characteristics enables a software object (and/or device) to improve understanding of (and/or responses to) a user, context, and/or environment, and/or to understand a user, context, and/or environment that otherwise was not (and/or would not be) understood (e.g., not responded to or responded to incorrectly). In some embodiments, learned characteristics are formed (e.g., by and/or for a software object) using reinforcement learning. In some embodiments, learned characteristics correspond to one or more levels of confidence, certainty, and/or reward (e.g., that are shaped by one or more reward functions). In some embodiments, learned characteristics (and/or how they are used to affect output of a software object and/or device) can change over time (e.g., levels of confidence, certainty, and/or reward change over time). For example, output of a device before learning a set of learned characteristics can be different from output of the device after learning the set of learned characteristics. In some embodiments, a component and/or device uses learned
knowledge. For example, similar to what is described above with respect to learned characteristics, learned knowledge can refer to information used to update (e.g., enhance, add to, and/or augment) a knowledge base of a device (e.g., for use by a software object implemented thereon). In some embodiments, multiple sets of learned characteristics for a user can be stored and/or used. In some embodiments, different sets of learned characteristics for different users can be stored and/or used.
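The routine-learning example above can be sketched as follows. The observation format, the function name, and the count threshold (standing in for a learned confidence level) are illustrative assumptions; an actual embodiment could instead use reinforcement learning as noted above.

```python
from collections import Counter

def learn_routine(request_log, threshold=3):
    """Infer a routine from (hour, request) observations.

    Returns {hour: request} for any request observed at the same hour at
    least `threshold` times, so the software object can prepare the
    needed data proactively at that hour.
    """
    counts = Counter(request_log)
    return {hour: request
            for (hour, request), n in counts.items()
            if n >= threshold}

# Three days of the same 8 a.m. request establishes a routine; a single
# evening request does not:
log = [(8, "summarize new messages")] * 3 + [(20, "play music")]
```

Raising or lowering `threshold` models how a level of confidence in a learned characteristic can change what the device does over time.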
[0173] Reference can be made herein to interaction with a software object (and/or a device). In some embodiments, an interaction refers to a set of one or more inputs and/or outputs of a device implementing the software object and one or more users. For example, an interaction can be an input by a user (e.g., "Please turn on the lights") and a corresponding output (e.g., causing the lights to turn on and/or a response by the device of "Okay"). In some embodiments, an interaction can include multiple inputs/outputs by one or more of the parties to the interaction (e.g., device and/or users). For example, an interaction can include a first input by a user (e.g., "Please turn on the lights") and a corresponding first output (e.g., "Which lights?"), and also include a second input by the user (e.g., "Hallway lights") and a second output from the device (e.g., "Okay"). In some embodiments, which inputs and/or outputs are considered together as an interaction is based on a logical and/or contextual grouping (e.g., interactions within the previous thirty (30) seconds and/or interactions relating to turning on the lights). As one of skill will appreciate, an interaction can be considered in a manner that depends on the implementation (e.g., determining when an interaction is complete can involve determining if the user is still present (e.g., speaking at all) and/or if the user is still talking about the lights or has moved on to a different
topic). In some embodiments, an interaction is a current interaction (e.g., ongoing, presently occurring, and/or active). In some embodiments, an interaction is a previous interaction. The examples above describe a device having a conversation with a user. In some embodiments, a conversation is between two or more users (e.g., users in an environment). For example, a device can detect a conversation between two users (e.g., the users are directing speech and responses to each other, rather than to the device).
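The time-window grouping described above (e.g., events within the previous thirty seconds belonging to one interaction) can be sketched as follows. The event format and function name are hypothetical; as noted, a real implementation could also weigh topic continuity and whether the user is still present.

```python
def group_into_interactions(events, window=30):
    """Group timestamped (seconds, event) pairs into interactions.

    An event within `window` seconds of the previous event joins the same
    interaction; a longer gap starts a new interaction.
    """
    interactions = []
    for t, event in sorted(events):
        if interactions and t - interactions[-1][-1][0] <= window:
            interactions[-1].append((t, event))   # continue current interaction
        else:
            interactions.append([(t, event)])     # start a new interaction
    return interactions

events = [(0, "Please turn on the lights"), (4, "Which lights?"),
          (9, "Hallway lights"), (12, "Okay"), (300, "What's the weather?")]
```

With these inputs, the four lights-related exchanges form one interaction and the much later weather question starts another.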
[0174] In some embodiments, a software object (and/or device) determines and/or performs an operation based on an intent corresponding to a user. For example, a device detects user input and outputs a response that depends on an intent of the user input. For example, a device detects user input that includes a pointing gesture detected together with a verbal instruction to "turn on that light," and in response, the device turns on the light that is determined to correspond to the intent of the input (e.g., the light toward which the pointing gesture was directed). In some embodiments, intent is determined (e.g., by the device that detects input and/or by one or more other devices) using one or more of: one or more inputs, knowledge (e.g., learned knowledge about a user based on a history of observed behavior, personality, and interactions), learned characteristics, and/or context. In some embodiments, intent is determined from one or more types of input (e.g., visual input via a camera, verbal input, and/or contextual input).
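The "turn on that light" example above can be sketched as a simple multimodal resolution. The light names, the bearings, and the single-angle gesture model are illustrative simplifications of combining visual input (the gesture) with verbal input.

```python
def resolve_pointing_target(gesture_bearing, lights):
    """Pick the light a pointing gesture is directed toward.

    `lights` maps a light name to its bearing in degrees from the user;
    the light whose bearing is closest to the gesture bearing is taken
    as the intended target of "turn on that light."
    """
    return min(lights, key=lambda name: abs(lights[name] - gesture_bearing))

# "Turn on that light" accompanied by a gesture at roughly 85 degrees:
lights = {"hallway light": 90.0, "kitchen light": 180.0}
target = resolve_pointing_target(85.0, lights)
```

The verbal input supplies the operation (turn on) while the gesture disambiguates which light the user intends.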
[0175] Attention is now directed towards embodiments of user interfaces ("UI") and associated processes that are implemented on an electronic device, such as computer system 100, device 168, and/or device 200.
[0176] FIGS. 6A-6E illustrate exemplary user interfaces for displaying representations of software objects in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 7-9. In some embodiments, such user interfaces are examples of how a person can interact with Siri.
[0177] FIGS. 6A-6E illustrate computer system 600 as a smart phone. It should be recognized that computer system 600 can be other types of computer systems such as a tablet, a smart watch, a laptop, a communal device, a smart speaker, an accessory, a personal gaming system, a desktop computer, a fitness tracking device, and/or a head-mounted display (HMD) device. In some embodiments, computer system 600 includes and/or is in communication with one or more input devices (e.g., a sensor, a camera, a lidar detector, a motion sensor, an infrared sensor, a touch-sensitive surface, a physical input mechanism (such as a button or a slider) , and/or a microphone) . Such input devices can be used to detect presence of, attention of, statements from, inputs corresponding to, requests from, and/or instructions from a user in an environment. It should be recognized that, while some embodiments described herein refer to inputs being verbal inputs, other types of inputs can be used with techniques described herein, such as touch inputs via a touch-sensitive surface and/or air gestures detected via a camera. In some embodiments, computer system 600 includes and/or is in communication with one or more output devices (e.g., a display screen, a projector, a touch-sensitive display, a speaker, and/or a movement component) . Such output devices can be used to present information and/or display visual changes caused by operation of computer system 600. In some embodiments, computer system 600 includes and/or is in communication with
one or more movement components (e.g., an actuator, a moveable base, a rotatable component, and/or a rotatable base). Such movement components, as discussed above, can be used to change a position (e.g., location and/or orientation) of computer system 600 and/or a portion (e.g., including one or more sensors, input components, and/or output components) of computer system 600. In some embodiments, computer system 600 includes one or more components and/or features described above in relation to computer system 100 and/or electronic device 200. In some embodiments, computer system 600 includes one or more software objects and/or functions of a software object as described above with respect to FIG. 5. In some embodiments, computer system 600 is, includes, implements, and/or is in communication with one or more software systems, as described above with respect to FIG. 5, for performing (and/or causing performance of) one or more operations of a software object.
[0178] FIGS. 6A-6E are split between a left portion and a right portion to illustrate a subject in a physical environment interacting with one or more software objects via computer system 600. In the examples illustrated in FIGS. 6A-6E, the right portion illustrates a top-down schematic view of physical environment 606 that includes computer system representation 604 (e.g., representing computer system 600) and user 610 (e.g., a person interacting with computer system 600 by providing verbal inputs, such as verbal input 605a and/or verbal input 605b). The left portion of FIGS. 6A-6E illustrates output of a display component in communication with computer system 600 (e.g., representing what is currently being displayed by the display component, such as representation 604 in FIG. 6A). While FIGS. 6A-6E illustrate computer system 600 displaying particular content within user interface 602, it should be recognized that such content is
merely for explanatory purposes, that such content can be displayed in different locations, at different sizes, and with different content, and that more, fewer, and/or different content can be displayed in accordance with techniques described herein.
[0179] As discussed in more detail below, computer system 600 displays representation 604 to indicate that user 610 is interacting with a first software object (e.g., as described above with respect to FIG. 5). In some embodiments, the first software object represents an interactive knowledge base (and/or a software system implementing the first software object). In some embodiments, computer system 600 is in communication with the interactive knowledge base. In some embodiments, computer system 600 is in communication with the first software object to interact with the interactive knowledge base. In some embodiments, the interactive knowledge base is one or more artificial intelligence models. For example, the interactive knowledge base can be one or more large language models. In some embodiments, the interactive knowledge base corresponds to an application (e.g., system based and/or remotely located) and computer system 600 and/or the first software object interacts with the application-based interactive knowledge base (e.g., via an Application Programming Interface (API)) to obtain information from, request responses from, and/or update capabilities based on the interactive knowledge base.
[0180] In some embodiments, the first software object is implemented on and/or by a local and/or remote software system. In some embodiments, the first software object includes a set of one or more (e.g., different, indicative, distinctive, unique, and/or avatar-specific) attributes (e.g., visual appearance, personality, and/or audio characteristics) such that representation 604 indicates which software object is active (e.g., that user 610 is interacting with). For
example, representation 604 can be different from a software object representation for another software object, so the displayed software object representation provides an indication of which software object is currently interacting with a user of computer system 600. In some embodiments, different software objects can have different capabilities. In some embodiments, a request made to a software object that is not capable of handling the request results in computer system 600 initiating a process for transitioning to another software object (e.g., handover to another software object that can perform the request). For example, the other software object can have a software object representation that differs in appearance (and/or other attributes) from the software object that received the request.
[0181] In some embodiments, computer system 600 performs determinations using an interactive knowledge base. For example, computer system 600 can determine steps to perform a task requested by user 610. For another example, computer system 600 can query and/or request that the first software object perform a determination for computer system 600. While FIGS. 6A-6E illustrate computer system 600 performing exemplary functionality with and/or without indicating an interaction with an interactive knowledge base, it should be understood that computer system 600 and/or the first software object can interact with an interactive knowledge base. For example, as discussed below, computer system 600 determining that the first software object cannot perform a requested task can include computer system 600 interacting with an interactive knowledge base to determine that the first software object cannot perform the requested task.
[0182] In the examples described below with respect to FIGS. 6A-6E, computer system 600 receives a request to perform a task that the first software object is not capable of
performing (e.g., lacks functionality and/or resources to do so). In these examples, the first software object is able to be swapped with another software object to assist and/or cause performance of the task.
[0183] At FIGS. 6A-6E, the first software object has a set of one or more capabilities. In some embodiments, the set of one or more capabilities is different from one or more capabilities of computer system 600 (and/or of one or more other software objects implemented by, accessible to, and/or provided by computer system 600). In some embodiments, the first software object corresponds to a system-based software object, and the set of one or more capabilities corresponds to an operating system of computer system 600 (e.g., a software object native to computer system 600 that requests system information from computer system 600). In some embodiments, the set of one or more capabilities corresponds to the first software object's ability to interact with computer system 600, such as the first software object possessing permission to interact with computer system 600's data and/or storage. In some embodiments, the set of one or more capabilities corresponds to the first software object's ability to interact with applications and/or third-party applications on computer system 600 and/or remotely located. For example, user 610 can ask the first software object "When is my next meeting?", and the first software object requests computer system 600 to output the next meeting and/or event by accessing a calendar application. In some embodiments, the first software object corresponds to a user application that has a set of one or more capabilities. For example, the first software object can correspond to a navigation application and provide computer system 600 with navigation capabilities.
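The per-software-object capability sets described above can be sketched as follows. This is a minimal illustrative assumption, not part of the disclosed implementation; the class name, the capability strings (e.g., "calendar.read"), and the subset check are all hypothetical.

```python
# Hypothetical sketch: each software object carries a set of capabilities,
# and a task can be handled only by an object whose set covers it.
from dataclasses import dataclass, field


@dataclass
class SoftwareObject:
    name: str
    capabilities: set = field(default_factory=set)

    def can_perform(self, required: set) -> bool:
        # The object can perform a task only if it possesses every
        # capability the task requires (subset test).
        return required <= self.capabilities


system_object = SoftwareObject("system", {"calendar.read", "system.settings"})
nav_object = SoftwareObject("navigation", {"maps.route"})

# "When is my next meeting?" requires reading the calendar application.
print(system_object.can_perform({"calendar.read"}))  # True
print(nav_object.can_perform({"calendar.read"}))     # False
```

Under this sketch, the navigation-application software object would decline the calendar request while still handling routing requests.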
[0184] As illustrated in FIG. 6A, physical environment 606 includes user 610 within a field-of-view of computer system
representation 604. At FIG. 6A, computer system 600 detects user 610, such as by capturing an image and/or a video that includes user 610. In some embodiments, computer system 600 transitions from an inactive state to an active state in response to detecting user 610. In some embodiments, when computer system 600 is inactive, computer system 600 reduces screen brightness, reduces input device capabilities (e.g., turning off a touch-sensitive display component until a user is detected and/or requiring a wake input to receive additional inputs), and/or reduces content displayed on user interface 602. In some embodiments, when computer system 600 transitions to an active state, computer system 600 increases screen brightness, displays additional user interface components (e.g., representation 604), and/or enables additional input devices. In some embodiments, transitioning between an inactive state and an active state includes (and/or is done in conjunction with) performing an animation, for example, fading out displayed content when transitioning to inactive and/or fading in content to be displayed when transitioning to active (e.g., displaying content at a reduced brightness and/or opacity and increasing the brightness and/or opacity over a predetermined amount of time).
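The fade described above (opacity increasing over a predetermined amount of time when activating, and decreasing when deactivating) can be sketched as a simple ramp. The linear interpolation and the function name are illustrative assumptions only.

```python
# Hypothetical sketch of the inactive/active fade: opacity ramps linearly
# over a fixed duration, in opposite directions for the two transitions.
def fade_opacity(elapsed: float, duration: float, activating: bool) -> float:
    """Return an opacity in [0.0, 1.0] at `elapsed` seconds into the fade."""
    progress = min(max(elapsed / duration, 0.0), 1.0)  # clamp to [0, 1]
    return progress if activating else 1.0 - progress


# Fading in over 0.5 s when transitioning to the active state:
print(fade_opacity(0.25, 0.5, activating=True))   # 0.5 (half faded in)
print(fade_opacity(0.5, 0.5, activating=False))   # 0.0 (fully faded out)
```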
[0185] At FIG. 6A, after detecting user 610, computer system 600 awaits (e.g., is active and/or checking for) an input from user 610. As illustrated in FIG. 6A, computer system 600 displays representation 604 within user interface 602. In some embodiments, computer system 600 begins (and/or increases a frequency of and/or a number of manners of) detecting input (e.g., for the first software object represented by representation 604) upon detecting user 610. In some embodiments, computer system 600 waits until representation 604 is displayed to detect an input for the first software object, which provides a clear visual indication that user 610
is interacting with the first software object. In some embodiments, computer system 600 displays representation 604 without any identifier of the first software object. In some embodiments, the first software object is a default software object that user 610 interacts with and is not displayed with an identifier. In some embodiments, computer system 600 displaying user interface 602 including only representation 604 indicates that computer system 600 is awaiting an input from a user (e.g., user 610). In some embodiments, awaiting an input is and/or includes being available and/or able to detect input (e.g., is listening for verbal inputs via a microphone and/or using an image feed from a camera to watch for air gestures).
[0186] At FIG. 6A, while and/or after computer system 600 detects user 610 and while computer system 600 is waiting for an input from a user (e.g., user 610), user 610 asks computer system 600 to perform a task (e.g., verbal input 605a). In some embodiments, the task includes a set of one or more steps required to perform the task (e.g., that are determined by computer system 600 and/or the first software object). In some embodiments, verbal input 605a is directed to representation 604 and/or the first software object, such as by including an indication and/or an identification of the first software object. In some embodiments, computer system 600 detects verbal input 605a during and/or as part of an interaction with the first software object (e.g., within a predefined period of time of a previous input and/or output, a hover, an air gesture, and/or a mouse click). As illustrated in FIG. 6A, user 610 asks, "Can you prepare my tax return?" (e.g., illustrated as a speech bubble representing verbal input 605a). In some embodiments, computer system 600 animates and/or changes the visual characteristics of representation 604 (e.g., resizes, reshapes, repositions, and/or alters the
prominence level of representation 604) to indicate that computer system 600 is detecting (and/or has detected) verbal input 605a.
[0187] At FIG. 6B, in response to detecting verbal input 605a, computer system 600 determines that the first software object is unable to perform the task (e.g., requested by verbal input 605a). In some embodiments, determining that the first software object is unable to perform the task includes comparing the set of one or more steps to perform the task with the one or more capabilities of the first software object. In the example illustrated and described with respect to FIG. 6B, computer system 600 determines that the first software object does not have access to a suitable knowledge base and/or does not have a tax return function that can be used to prepare the taxes.
[0188] At FIG. 6B, in response to determining that the first software object is unable to perform the task, computer system 600 determines a different software object and/or a different application that is able to perform the task. In some embodiments, the different software object and/or the different application has a set of one or more capabilities. In some embodiments, computer system 600 compares the set of one or more steps required to perform the task and the set of one or more capabilities of the different software object and/or the different application. In this example, computer system 600 determines that a tax application (e.g., represented as application icon 614 within user interface 602 in FIG. 6B) is able to perform the task based on capabilities of the tax application.
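The fallback determination in the two paragraphs above, comparing the steps required for a task against the capability sets of the active software object and of candidate alternatives, can be sketched as below. The dictionary layout and capability names are purely illustrative assumptions.

```python
# Hypothetical sketch: if the active software object cannot cover the task's
# required capabilities, search candidate software objects/applications for
# one whose capability set covers them.
def select_handler(required: set, active: dict, candidates: list):
    if required <= active["capabilities"]:
        return active["name"]
    for candidate in candidates:
        if required <= candidate["capabilities"]:
            return candidate["name"]
    return None  # no software object is able to perform the task


first = {"name": "assistant", "capabilities": {"chat", "calendar.read"}}
tax_app = {"name": "TaxApp", "capabilities": {"tax.prepare", "forms.fill"}}

print(select_handler({"tax.prepare"}, first, [tax_app]))  # TaxApp
print(select_handler({"chat"}, first, [tax_app]))         # assistant
```

In the figure's example, the "prepare my tax return" request would fall through to the tax application because the first software object's set does not cover it.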
[0189] As illustrated in FIG. 6B, computer system 600 maintains display of representation 604 within user interface 602. In some embodiments, representation 604 remains displayed
(e.g., in the same or different manner) in response to computer system 600 determining that the first software object is unable to perform the task and/or determining the different software object and/or the different application is able to perform the task. In some embodiments, computer system 600 displays representation 604 with different visual characteristics in such cases. In some embodiments, while determining which software object is able to perform the task, computer system 600 displays an animation that appears as though representation 604 is thinking and/or to indicate computer system 600 is making a determination (e.g., of whether an application has appropriate capabilities). In some embodiments, computer system 600 displays representation 604 looking at (e.g., in a direction of and/or at a face of) user 610 (e.g., detected in physical environment 606 within the field of view of computer system representation 608) before, while, in response to, and/or after determining that the first software object is unable to perform the task.
[0190] As illustrated in FIG. 6B, in response to determining that the first software object is unable to perform the task and that a specialized application (e.g., the tax application) is able to perform the task, computer system 600 outputs an indication (e.g., audible output 612) that the first software object is unable to perform the task. In some embodiments, outputting the indication includes displaying an indication of the tax application (e.g., application icon 614) that is able to perform the task and/or additional data (e.g., tax form 616) that is used to perform the task. Such additional data can be from the tax application and/or another application. For example, the additional data can include data that will be sent to the tax application if user 610 proceeds with using the tax application to perform the task.
[0191] In some embodiments, outputting an indication that the first software object is unable to perform the task includes displaying representation 604 with different (e.g., new and/or changed) visual characteristics. As illustrated in FIG. 6B, computer system 600 displays representation 604 at a smaller size and shifted to the left side in user interface 602 to make room for application icon 614 and tax form 616. In some embodiments, computer system 600 outputs content in response to detecting verbal input 605a. For example, at FIG. 6B, computer system 600 outputs audible output 612, which indicates that the first software object cannot perform the task (e.g., "I cannot..."). In the example illustrated in FIG. 6B, audible output 612 also includes a prompt asking for permission from user 610 to launch the tax application that is able to perform the task ("..., but I can get TaxApp to help. Is that okay?"). In some embodiments, outputting the indication that the first software object cannot perform the task includes outputting haptic feedback (e.g., haptic feedback through a haptic hardware component in communication with computer system 600 and/or haptic feedback through another computer system held and/or worn by user 610), visual content (e.g., displayed content corresponding to computer system 600), audio content (e.g., a synthetic voice output and/or tone output), available data (e.g., tax form 616) corresponding to the task, and/or an application (e.g., application icon 614) corresponding to (e.g., capable of performing) the task.
[0192] In some embodiments, the prompt includes identification of additional and/or different permission requests than illustrated in FIG. 6B. For example, computer system 600 can identify one or more files related to the task (e.g., a tax form, an old tax return, and/or wage related documents). In some embodiments, computer system 600 determines one or more operations for performing the task. For example, computer
system 600 determines that the task of preparing a tax return requires an operation of launching the tax application and an operation of sharing data (e.g., sharing tax form 616 with the tax application). In some embodiments, computer system 600 requests permission from user 610 to perform an operation. For example, computer system 600 outputs a prompt to request permission to send the tax return document to the tax application. In some embodiments, computer system 600 displays the identified one or more files (and/or an indication and/or a representation thereof) on user interface 602. As illustrated in FIG. 6B, computer system 600 displays representations of tax form 616 that the tax application can use to complete the task, without prompting user 610 for permission to share the form with the tax application. In some embodiments, computer system 600 transfers (and/or otherwise provides access to) the one or more files to the tax application to perform the operation with and/or on the files. In some embodiments, computer system 600 performs the operation with and/or on the files. In some embodiments, the response does not include a prompt (e.g., permission is not necessary and/or was previously granted).
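The decomposition above, where a task is broken into operations and only some operations require the user's permission, can be sketched as follows. The operation names and the boolean permission flag are illustrative assumptions, not the disclosed data model.

```python
# Hypothetical sketch: a task is a list of operations; some must be
# confirmed by the user before being performed.
def operations_needing_permission(operations: list) -> list:
    """Return only the operations that must be confirmed by the user."""
    return [op for op in operations if op["needs_permission"]]


task = [
    {"op": "launch_app", "target": "TaxApp", "needs_permission": True},
    {"op": "share_data", "target": "tax_form", "needs_permission": True},
    {"op": "display_files", "target": "tax_form", "needs_permission": False},
]

for op in operations_needing_permission(task):
    print(f"Request permission to {op['op']} ({op['target']})")
```

Displaying the identified files (the third operation) proceeds without a prompt, matching the FIG. 6B example in which tax form 616 is shown without asking permission.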
[0193] In some embodiments, while outputting audible output 612 (e.g., including the indication and/or the prompt), computer system 600 animates and/or alters the visual characteristics of representation 604 (e.g., resizing, reshaping, repositioning, and/or altering the prominence level of representation 604) (e.g., in synchronization with audible output 612) to indicate that the first software object is responding to (e.g., appearing to speak to) user 610. In some embodiments, computer system 600 displays animation in conjunction with movement of the portion of computer system 600 using a movement component. In some embodiments, such movement is similar to movement performed by center stage, which uses an
ultrawide camera and a neural engine to automatically pan and zoom to change a frame, such as when someone moves around.
[0194] In some embodiments, computer system 600 interacts with the tax application via one or more interfaces, such as an API. For example, the first software object can be capable of determining that the first software object cannot perform the task but have the capability of interfacing via an API with an application that can perform the task (e.g., that corresponds to a second software object different from the first software object). The first software object can then interact with user 610 to gather input and/or data for the task and provide it to the tax application that can perform the task. Likewise, the first software object can receive output from the tax application and appear to provide (e.g., via output of speech and/or visual user interface objects) such output via one or more output components of computer system 600. In some embodiments, the first software object hands over a portion of a user interface to the tax application (e.g., a portion of or all of user interface 602 for displaying a result of the requested task, such as a fillable tax application). For example, as illustrated in FIG. 6B, representation 604 (which corresponds to the first software object that cannot complete the task) continues to be displayed along with an indication of another application, appearing to provide a suggestion of the other application. In some embodiments, in response to a determination that the first software object is able to perform the task, computer system 600 does not output the indication described above and instead performs the task using the first software object.
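The API-mediated handover described above, where the first software object gathers input, forwards it to the capable application, and relays the application's output as its own, can be sketched as below. The `TaxApp` interface is a hypothetical stand-in; no real API is implied.

```python
# Hypothetical sketch: the first software object mediates between the user
# and an application reached via an API, attributing output to itself.
class TaxApp:
    def prepare_return(self, data: dict) -> str:
        # Stand-in for the application that can actually perform the task.
        return f"Prepared return for {data['name']}"


class FirstSoftwareObject:
    def __init__(self, backend):
        self.backend = backend  # reached via an API in the description

    def handle(self, user_data: dict) -> str:
        # Forward gathered input, then present the result as this object's
        # own output (e.g., synchronized with representation 604).
        result = self.backend.prepare_return(user_data)
        return f"[assistant] {result}"


assistant = FirstSoftwareObject(TaxApp())
print(assistant.handle({"name": "user 610"}))
```

This mirrors the described behavior: the first software object cannot prepare the return itself, but brokers the exchange and voices the result.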
[0195] At FIG. 6B, after prompting user 610 for permission to use the tax application to perform the task, computer system 600 detects verbal input 605b, which represents an affirmative response spoken by user 610. As illustrated in FIG. 6B, user
610 states "Yes," providing computer system 600 an affirmative response to the prompt for permission to use the tax application. As a result, verbal input 605b provides computer system 600 permission to use the tax application to perform the task.
[0196] At FIG. 6C, in response to detecting verbal input 605b (e.g., illustrated in FIG. 6B as "Yes"), computer system 600 initiates a process to change (and/or transition between) software objects. In some embodiments, computer system 600 detects another type of input before (e.g., as a requirement of) initiating the process to change software objects. For example, computer system 600 can detect that user 610 is gesturing yes (e.g., an up and down head movement). In some embodiments, computer system 600 transitions from representation 604 to a new software object representation (e.g., software object representation 620 as illustrated in FIG. 6E).
[0197] In some embodiments, the transition between software objects requires and/or includes a physical movement of the portion of computer system 600. In some embodiments, the physical movement is predetermined by computer system 600. In some embodiments, the movement is the portion turning away from user 610 momentarily and then returning to face user 610. In some embodiments, computer system 600 causes the physical movement via a movement component in communication with computer system 600. In some embodiments, the physical movement includes translation and/or rotation (e.g., a 360-degree spin). In the example illustrated in FIGS. 6C-6E, the physical movement obscures the display from user 610 (e.g., as illustrated in FIG. 6D, user interface 602 and other user interface elements, such as representation 604, application icon 614, and tax form 616, are not visible to user 610 even if they are being displayed). In some embodiments, the
physical movement is different from a haptic and/or a tactile output. In some embodiments, the physical movement includes a haptic and/or a tactile output. In this example, while moving the portion of computer system 600, representation 604 is replaced with software object representation 620.
[0198] In some embodiments, computer system 600 causes a physical movement after (e.g., in response to) receiving an indication to perform the task. The indication can include computer system 600 giving a response to the user (e.g., audible output 612 as shown in FIG. 6B). In this example, representation 604 is still displayed after starting the physical movement. In some embodiments, computer system 600 stops displaying representation 604 before starting the physical movement. In some embodiments, the indication to switch software objects causes computer system 600 to stop displaying representation 604 and initiate display of software object representation 620. In some embodiments, computer system 600 moves the portion of computer system 600 differently depending on a current context. For example, at least a portion (and/or the entirety) of the physical movement can depend on one or more of: the old software object (being transitioned away from), the new software object (being transitioned to), and/or both the old software object and the new software object. In some embodiments, computer system 600 performs a physical movement before no longer displaying representation 604. In some embodiments, a physical movement is performed before changing to a new software object representation and after displaying software object representation 620. In some embodiments, the movement for the old software object and the new software object are different. In some embodiments, the movement for the transition between the software objects is the same for both software object representations, no matter the type of software object the software object
representations are representing. In some embodiments, the movement is the same when transitioning between similar types of software objects. In some embodiments, the movement is different when switching between different types of software objects. For example, if the first software object represents a system software object and the second software object represents an application software object, the physical movement can be a rotation. For another example, if the first software object represents an application (e.g., a bank application) and the second software object represents a different application (e.g., a tax application), the physical movement can be a turn away and return to the start position.
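The context-dependent choice of physical movement in the paragraph above can be sketched as a lookup on the types of the old and new software objects. The type labels and movement names are illustrative assumptions drawn from the examples given, not an exhaustive mapping.

```python
# Hypothetical sketch: the physical movement performed during a software
# object transition depends on the old and new software object types.
def select_movement(old_type: str, new_type: str) -> str:
    if old_type == "system" and new_type == "application":
        return "rotation"
    if old_type == "application" and new_type == "application":
        return "turn away and return"
    return "none"  # other pairings unspecified in this sketch


print(select_movement("system", "application"))       # rotation
print(select_movement("application", "application"))  # turn away and return
```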
[0199] FIG. 6D illustrates a movement during a transition between software objects that includes a physical movement. In this example, computer system 600 is flipped around. The left portion of FIG. 6D illustrates the back side of computer system 600 (e.g., the side opposite the display component) and no longer shows user interface 602 displaying content, applications, and/or a software object representation (e.g., as illustrated by 604, 614, and 616 in FIG. 6B). In the left portion of FIG. 6D, camera 618 located on the back side of computer system 600 is now visible. The right portion of FIG. 6D illustrates physical environment 606 that includes user 610 as described above with respect to FIG. 6A. However, the field-of-view of computer system 600 faces away from user 610, and computer system 600 does not detect user 610, because computer system 600 has rotated one half turn (e.g., compared to FIG. 6C).
[0200] In some embodiments, computer system 600 performs a movement using representation 604 (e.g., while displaying and/or animating representation 604) before computer system 600 changes software objects and displays software object representation 620. In some embodiments, computer system 600 changes to software object representation 620 and performs the movement while
displaying software object representation 620. In some embodiments, computer system 600 initiates movement while displaying representation 604 and changes to display software object representation 620 before the movement is complete.
[0201] FIG. 6E illustrates computer system 600 after completion of the physical movement initiated and performed in FIGS. 6C and 6D as described above. In FIG. 6E, computer system 600 has returned to its start position (e.g., facing toward user 610 as indicated by the right portion of FIG. 6E, the position as in FIGS. 6A-6C). At FIG. 6E, computer system 600 has launched the tax application. In some embodiments, the tax application is a remote application and computer system 600 receives content corresponding to the tax application from another computer system. For example, computer system 600 can communicate with a third-party server and/or computer system to receive remotely stored content and/or additional content. In some embodiments, the tax application determines the content to communicate (e.g., transmit and/or share) to computer system 600 based on the task requested by user 610. For example, computer system 600 communicates with the tax application that user 610 requested to prepare their taxes, and the tax application (and/or a software object thereof) determines the content that computer system 600 should display.
[0202] As illustrated at FIG. 6E, computer system 600 displays software object representation 620 as a representation of a software object corresponding to the tax application. In some embodiments, initiating display of software object representation 620 occurs while computer system 600 is performing the movement. For example, computer system 600 will begin performing the physical movement while displaying representation 604 (e.g., as illustrated in FIGS. 6A-6C as 604) and then display software object representation
620 during the movement. In some embodiments, software object representation 620 is displayed with different visual characteristics than representation 604. For example, software object representation 620 can be displayed as a different avatar with a different shape and different physical features than representation 604 (as illustrated in FIGS. 6A-6C). In some embodiments, the different visual characteristics can include a change in mannerism, movements, color, and/or any other characteristic affecting visual appearance. In some embodiments, software object representation 620 has similar visual characteristics to representation 604.
[0203] As illustrated at FIG. 6E, software object representation 620 is displayed with an identifier 622 in close proximity to software object representation 620. In this example, identifier 622 is text (e.g., "Tax App Assistant"). In some embodiments, identifier 622 includes an image and/or animation. In some embodiments, display of identifier 622 is accompanied by an audio output (e.g., audible output 624). In some embodiments, identifier 622 is displayed after detecting an input from a user. For example, the input can include an interaction with software object representation 620. In some embodiments, the interaction includes detecting a pointing gesture toward software object representation 620, a detection that user 610 is looking at software object representation 620 (e.g., via camera 608), and/or a detection of verbal input (e.g., "What is this?" and/or "Who are you?") (e.g., directed to software object representation 620 and/or the second software object). In some embodiments, the interaction occurs simultaneously with displaying software object representation 620. In some embodiments, computer system 600 ceases display of identifier 622 after detecting an interaction and/or input is complete and/or a new input is detected. In some embodiments, computer system 600 ceases
display of identifier 622 after a predetermined amount of time has passed. In some embodiments, computer system 600 ceases display of identifier 622 in order to perform an operation. For example, when computer system 600 begins to perform the operation of preparing the user's tax return, computer system 600 displays content of the tax form and ceases display of identifier 622 to make room on user interface 602.
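The identifier-dismissal conditions described above (a predetermined timeout, a new input, or the start of an operation that needs the screen space) can be sketched as a single visibility check. The timeout value and event names are illustrative assumptions only.

```python
# Hypothetical sketch: identifier 622 stays visible until a timeout elapses,
# a new input arrives, or an operation begins that needs the screen space.
IDENTIFIER_TIMEOUT = 5.0  # seconds (assumed value)


def identifier_visible(shown_at: float, now: float, events: set) -> bool:
    if "new_input" in events or "operation_started" in events:
        return False  # dismissed to handle input or free screen space
    return (now - shown_at) < IDENTIFIER_TIMEOUT


print(identifier_visible(0.0, 3.0, set()))                  # True
print(identifier_visible(0.0, 6.0, set()))                  # False (timed out)
print(identifier_visible(0.0, 1.0, {"operation_started"}))  # False
```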
[0204] As illustrated at FIG. 6E, software object representation 620 is the only avatar displayed on user interface 602. In some embodiments, computer system 600 displays content related to the task requested by user 610 and/or the tax application with software object representation 620. In some embodiments, while computer system 600 displays software object representation 620, computer system 600 displays representation 604 on user interface 602. In some embodiments, computer system 600 displays representation 604 in a less prominent manner when displaying it with software object representation 620. For example, computer system 600 can display representation 604 at a smaller size and with less emphasis than software object representation 620. In some embodiments, computer system 600 uses both representation 604 and software object representation 620 to interact with user 610. For example, computer system 600 can use the first software object to respond to any detected interactions and/or inputs related to computer system 600 (e.g., volume, display controls, and/or file sharing) and can use the second software object to respond to any detected interactions and/or inputs related to the tax application (e.g., tax form application inputs). In some embodiments, using a software object includes using the software object to process inputs and/or provide outputs and/or attributing output to the software object (e.g., synchronizing audio output of speech with mouth movement of a corresponding software object representation). In some embodiments, the first software object and the second software object have different audio characteristics. Audio characteristics can include tone, pitch, accent, voice, and/or volume. In some embodiments, both software objects respond to user 610 (e.g., at different times in the same or different manners). In some embodiments, computer system 600 performs movement of both representation 604 and software object representation 620 in synchronization with each other.
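As a non-limiting illustration of the input routing described above, the following sketch shows how inputs related to the computer system could be handled by the first software object while application-related inputs are handled by the second. The topic sets, object names, and function name are hypothetical and do not appear in the disclosure.

```python
# Hypothetical topic sets; illustrative only, not part of the disclosure.
SYSTEM_TOPICS = {"volume", "display_controls", "file_sharing"}
TAX_APP_TOPICS = {"tax_form", "deduction", "filing_status"}

def route_input(topic: str) -> str:
    """Return which software object should respond to an input on `topic`."""
    if topic in SYSTEM_TOPICS:
        return "first_software_object"   # e.g., represented by avatar 604
    if topic in TAX_APP_TOPICS:
        return "second_software_object"  # e.g., represented by avatar 620
    return "first_software_object"       # default to the system-level object
```

This mirrors the example above in which volume and file-sharing inputs go to the first software object and tax form application inputs go to the second.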
[0205] At FIG. 6E, while displaying software object representation 620, computer system 600 outputs audible output 624. In this example, audible output 624 indicates that computer system 600 (e.g., using the second software object) is ready to perform the task requested (e.g., "Let's get started on your taxes"). In some embodiments, while outputting audible output 624, computer system 600 animates and/or alters the visual characteristics of software object representation 620 (e.g., resizing, reshaping, repositioning, and/or altering the prominence level of software object representation 620) to indicate that the second software object is responding to (e.g., appearing to speak to) user 610. In some embodiments, audio output 624 is also presented visually as text and/or through haptic feedback. In some embodiments, displaying software object representation 620 does not include an audio output.
[0206] In some embodiments, after computer system 600 performs the movement, computer system 600 performs an operation related to the task (e.g., preparing the tax return) (e.g., sending tax form file 616 to the tax application and/or performing the task) while displaying software object representation 620. In some embodiments, performing the operation includes providing an output (e.g., audible output 624).
[0207] In some embodiments, in order to complete the task requested (as illustrated in FIG. 6A as "Can you prepare my tax return?") by user 610, computer system 600 and the second software object communicate (e.g., transmit and/or share) one or more files between each other. In some embodiments, computer system 600 displays a prompt to request permission to share the one or more files with the second software object and then detects permission from the user before computer system 600 can perform the operation. In some embodiments, the prompt to request permission and the detection of permission from user 610 occur after and/or while displaying software object representation 620 and/or performing a movement for a transition between software objects. In some embodiments, the prompt to request permission and the detection of permission from user 610 occur before displaying software object representation 620 (as illustrated in FIG. 6B). For example, after outputting audible output 624, computer system 600 can prompt the user for permission to complete the task (e.g., "Do I have permission to import your data into a tax form?") before proceeding (e.g., in addition to and/or instead of a prompt for permission prior to transitioning to the second software object, such as included in audible output 612 of FIG. 6B).
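The permission-gated sharing described above can be sketched as follows. The function name and return values are hypothetical illustrations of the behavior (prompt first; share only once permission is detected), not part of the disclosed implementation.

```python
def share_file_with_object(file_name: str, permission_granted: bool) -> str:
    """Hypothetical sketch: share a file with the second software object
    only after permission is detected; otherwise display a prompt."""
    if not permission_granted:
        # e.g., "Do I have permission to import your data into a tax form?"
        return "prompt_displayed"
    # Permission detected; perform the operation (e.g., send tax form file 616).
    return f"shared:{file_name}"
```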
[0208] In some embodiments, computer system 600 performs the operation related to the task after detecting input. In some embodiments, computer system 600 detects the input after displaying software object representation 620. In some embodiments, computer system 600 detects the input after computer system 600 outputs audio output 624 (e.g., including the indication and/or the prompt) to user 610. In some embodiments, the response does not include a prompt (e.g., permission is not necessary and/or was previously granted). In some embodiments, the prompt includes additional permission requests.
[0209] In some embodiments, computer system 600 determines multiple operations and/or steps to perform the task requested by user 610. In some embodiments, computer system 600 performs one or more of the multiple operations and/or steps without waiting for additional input. In some embodiments, computer system 600 requires additional input before performing one or more of the multiple operations and/or steps.
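The multi-operation behavior described above can be sketched as follows, with hypothetical step names: steps marked as requiring additional input are deferred until the corresponding input is detected, while the remaining steps proceed without waiting.

```python
def perform_task(steps, confirmations):
    """Hypothetical sketch of performing a requested task as multiple steps.

    `steps` is a list of (step_name, requires_input) pairs; `confirmations`
    is the set of step names for which additional input was detected.
    """
    performed, pending = [], []
    for name, requires_input in steps:
        if requires_input and name not in confirmations:
            pending.append(name)     # require additional input before performing
        else:
            performed.append(name)   # perform without waiting for further input
    return performed, pending
```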
[0210] FIG. 7 is a flow diagram illustrating a method (e.g., method 700) for performing movement in conjunction with outputting a representation of a software object in accordance with some embodiments. Some operations in method 700 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0211] As described below, method 700 provides an intuitive way for performing movement in conjunction with outputting a representation of a software object. Method 700 reduces the cognitive burden on a user, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to interact with such devices faster and more efficiently conserves power and increases the time between battery charges.
[0212] In some embodiments, method 700 is performed at a computer system (e.g., 600) that is in communication with one or more input devices (e.g., a camera, a depth sensor, and/or a microphone), a display component (e.g., a display screen, a projector, and/or a touch-sensitive display), and a movement component (e.g., an actuator, a movable base, a rotatable component, and/or a rotatable base). In some embodiments, the computer system is a watch, a phone, a tablet, a fitness tracking device, a processor, a head-mounted display (HMD) device, a communal device, a media device, a speaker, a television, and/or a personal computing device.
[0213] While displaying, via the display component, a representation (e.g., an avatar and/or user interface object) of a first software object (e.g., 604) , the computer system receives (702) an indication (e.g., 605a and/or 605b) (e.g., a user input, a detection of a change in user, and/or preset input based on time) that a representation of a second software object (e.g., 620) is to be displayed, wherein the second software object is different from the first software object (e.g., as described above with respect to FIGS. 6A-6B) . In some embodiments, the indication that the representation of the second software object is to be displayed is an input by a user. In some embodiments, the indication that the representation of the second software object is to be displayed occurs when the user is detected to be in the same environment as the computer system. In some embodiments, the indication that the representation of the second software object is to be displayed occurs when a new user interacts with the computer system.
[0214] In response to receiving the indication (e.g., 605a and/or 605b) that the representation of the second software object (e.g., 620) is to be displayed, the computer system performs (704), via the movement component, a movement (e.g., as described above with respect to FIGS. 6C-6E) (e.g., a physical movement) (e.g., of a physical portion of the computer system) (e.g., via the one or more output devices) (e.g., corresponding to a software object change operation) in conjunction with (e.g., before, while, and/or after) displaying, via the display component, the representation of the second software object. Performing a movement in conjunction with displaying the representation of the second software object in response to receiving an indication that the representation of the second software object is to be displayed allows the computer system to respond to the indication and transition to the new software object, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further input.
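A minimal sketch of this response flow is shown below; the class, attribute, and representation names are hypothetical stand-ins for the described behavior, in which receiving the indication causes a movement to be performed in conjunction with switching the displayed representation.

```python
class ComputerSystem:
    """Hypothetical minimal model of the flow in method 700."""

    def __init__(self):
        self.displayed = "representation_604"  # first software object's avatar
        self.movements = []                    # movements performed so far

    def receive_indication(self, indication: str) -> None:
        # In response to the indication that the second representation is to
        # be displayed, perform a movement via the movement component in
        # conjunction with displaying the second representation.
        self.movements.append("transition_movement")
        self.displayed = "representation_620"  # second software object's avatar
```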
[0215] In some embodiments, performing the movement includes causing the display component to be obscured (e.g., not visible, covered, hidden, turned away, and/or facing a different direction) (e.g., partially and/or completely) from a first viewpoint (e.g., point of view and/or position from which the user observes the display component) (e.g., one or more points at eye level of the user) of a user (e.g., 610) (e.g., as described above with respect to FIGS. 6C-6E) (e.g., a detected user and/or a user whose location is known by the computer system) (and/or a set of one or more users). In some embodiments, the movement results in the display component (and/or another portion of the computer system) being in a position where a user in the same physical environment as the computer system cannot see a portion of the display component and/or what is displayed on the display component due to the changed position of the computer system. In some embodiments, the computer system determines (e.g., detects and/or receives) viewpoint information for the user (e.g., indicating a location, position, pose, height, eye level, and/or point of view of the user). In some embodiments, the first viewpoint of the user (and/or set of one or more users) is an estimated viewpoint (e.g., based on a combination and/or weighting of multiple determined viewpoints of one or more users). Causing the display component to be obscured from a viewpoint of a user when performing the movement allows the computer system to provide the user with a visual response to the indication, thereby providing improved feedback to a subject and performing an operation when a set of conditions has been met without requiring further input.
[0216] In some embodiments, the movement, performed in conjunction with displaying the representation of the second software object (e.g., 604 and/or 620), is a preconfigured movement pattern (e.g., as described above with respect to FIGS. 6C-6E) (e.g., a set of one or more components of movement (e.g., direction, speed, and/or distance) that define a preprogrammed movement) (e.g., a 360-degree spin, a 180-degree turn to face away from a direction of a start position followed by a 180-degree turn to return to face the direction of the start position, a turn downward followed by a turn upward to return to a start position, and/or a turn upward followed by a turn downward to return to a start position). In some embodiments, after displaying the representation of the second software object, the computer system displays a representation of a third software object (e.g., the first software object and/or a different software object) and, while displaying the representation of the third software object, receives an indication that the representation of the second software object is to be displayed and, in response to receiving the indication that the representation of the second software object is to be displayed, performs, via the movement component, the preconfigured movement in conjunction with displaying, via the display component, the representation of the second software object.
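The preconfigured movement patterns enumerated above could be represented as data, for example as ordered per-axis rotation components whose sum returns the system to its start orientation (net 0 degrees, or a full 360-degree multiple). The pattern names and angles below are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical encoding of preconfigured movement patterns as ordered
# (axis, degrees) rotation components; names and angles are illustrative.
MOVEMENT_PATTERNS = {
    "spin_360": [("yaw", 360)],
    "turn_away_and_back": [("yaw", 180), ("yaw", -180)],
    "turn_down_then_up": [("pitch", -45), ("pitch", 45)],
    "turn_up_then_down": [("pitch", 45), ("pitch", -45)],
}

def net_rotation(pattern_name: str) -> dict:
    """Sum the rotation components of a pattern per axis."""
    totals = {}
    for axis, degrees in MOVEMENT_PATTERNS[pattern_name]:
        totals[axis] = totals.get(axis, 0) + degrees
    return totals
```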
[0217] In some embodiments, the movement performed in conjunction with displaying the representation of the second software object (e.g., 604 and/or 620) is a 360-degree rotation (e.g., as described above with respect to FIGS. 6C-6E) (e.g., clockwise, counterclockwise, and/or rotation along an arbitrary path that results in 360 degrees of rotation with respect to a start position). Performing a 360-degree rotation in conjunction with displaying the second software object enables the computer system to indicate a change from the first software object to the second software object using a rotation of the computer system, thereby providing improved feedback to a subject and performing an operation when a set of conditions has been met without requiring further input.
[0218] In some embodiments, the movement, performed in conjunction with displaying the representation of the second software object (e.g., 604 and/or 620), includes moving from a first position to a second position and then returning to the first position. In some embodiments, in the first position, the display is not obscured from a second viewpoint (e.g., point of view and/or position from which the user observes the display component) (e.g., one or more points at eye level of the user) (e.g., the same as or different from the first viewpoint) of the user (e.g., 610) (e.g., a detected user and/or a user whose location is known by the computer system) (and/or a set of one or more users). In some embodiments, in the second position, the display is obscured (e.g., not visible, covered, hidden, turned away, and/or facing a different direction) (e.g., partially and/or completely) from the second viewpoint of the user (e.g., as described above with respect to FIGS. 6C-6E). In some embodiments, after displaying the representation of the second software object, the computer system displays a representation of a third software object and, while displaying the representation of the third software object, receives an indication that the representation of the second software object is to be displayed and, in response to receiving the indication that the representation of the second software object is to be displayed, moves, via the movement component, the representation of the third software object from the first position to the second position and then returns to the first position in conjunction with displaying, via the display component, the representation of the second software object. In some embodiments, moving from a first position to a second position and then returning to the first position includes a 180-degree turn to face away followed by a 180-degree turn to return to a start position, a turn to one side of the display followed by a turn to the other side of the display to return to the start position, a turn downward followed by a turn upward to return to a start position, and/or a turn upward followed by a turn downward to return to a start position. Performing movement to a position in which a display is obscured and returning to a first position in conjunction with displaying the second software object enables the computer system to indicate a change from the first software object to the second software object using a predefined movement of the computer system, thereby providing improved feedback to a subject and performing an operation when a set of conditions has been met without requiring further input.
[0219] In some embodiments, the indication (e.g., 605a and/or 605b) that the representation of the second software object (e.g., 604 and/or 620) is to be displayed includes an indication (e.g., an instruction, a touch input command, non-touch input command, and/or voice command) of a task to be performed (e.g., as described above with respect to FIG. 6A). In some embodiments, a task to be performed includes (and/or is included in) a function, operation, application, and/or process to be performed by (e.g., requested to be performed by and/or determined to be performed by) the computer system. In some embodiments, the indication includes a determination that the first software object cannot perform the requested task and the second software object can perform the requested task. Performing a movement in conjunction with displaying the representation of the second software object in response to receiving an indication that includes an indication of a task to be performed allows the computer system to respond to the indication and transition to the new software object, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further input.
[0220] In some embodiments, before performing the movement in conjunction with displaying the representation of the second software object (e.g., 604 and/or 620) , the computer system displays, via the display component, a notification requesting permission to share (e.g., provide, transmit, and/or give access to) information (e.g., 616) (e.g., personal information, profile information, metadata, usage history and/or information related to the task) (e.g., from the first software object and/or the computer system) with the second software object (e.g., as described above with respect to FIG. 6B) . In some embodiments, the first software object corresponds to a first application (e.g., maps application, music application, and video player application) and the second software object corresponds to a second application different from the first application. Displaying a notification requesting permission to share with a second software object before performing the movement in conjunction with displaying the representation of the second software object enables the computer system to provide feedback and security for transfer of information between the computer system and the software object, thereby providing improved feedback to a subject and increasing security.
[0221] In some embodiments, before performing the movement in conjunction with displaying the representation of the second software object (e.g., 604 and/or 620) , the computer system displays, via the display component, a notification (e.g., 614 and/or text transcription of audible output 612) (e.g., visual notification and/or audio notification) requesting permission to display the representation of the second software object (e.g., as described above with respect to FIG. 6B) (e.g., change the representation of the first software object to the representation of the second software object and/or display both representations simultaneously) . In some embodiments, a new user interface corresponding to the second software object is concurrently displayed with the first software object and/or the second software object. In some embodiments, a new user interface is displayed and/or a current user interface continues to be displayed. Displaying a notification requesting permission to display the representation of the second software object before performing the movement enables the computer system to receive permission before changing the software object, thereby increasing security and providing improved visual feedback to the user.
[0222] In some embodiments, the indication (e.g., 605a and/or 605b) that the representation of the second software object (e.g., 620) is to be displayed is received in accordance with a determination that the first software object (e.g., 604) cannot perform (e.g., does not support performing, is not permitted to perform, and/or is not configured to perform) a requested task (e.g., the task indicated in verbal input 605a) (e.g., as described above with respect to FIG. 6B). In some embodiments, receiving the indication that the representation of the second software object is to be displayed includes a determination that the first software object cannot perform the requested task and the second software object can perform the requested task. Performing a movement in conjunction with displaying the representation of the second software object in response to receiving an indication received in accordance with a determination that the first software object cannot perform a requested task enables the computer system to automatically change to the second software object when the first software object is unable to perform the task, thereby reducing the number of inputs needed to perform an operation, performing an operation when a set of conditions has been met without requiring further input, and providing improved visual feedback to the user.
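The determination described above (the first software object cannot perform the requested task while the second can) could be sketched as a capability lookup. The function name, dictionary structure, and task names below are hypothetical.

```python
def select_software_object(task: str, capabilities: dict):
    """Hypothetical capability lookup. Returns (object_name, transition),
    where `transition` indicates that the second software object's
    representation is to be displayed."""
    if task in capabilities.get("first", set()):
        return "first", False   # the first software object handles the task
    if task in capabilities.get("second", set()):
        return "second", True   # indication: display the second representation
    return "first", False       # unsupported task; stay with the first object
```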
[0223] In some embodiments, the representation of the first software object (e.g., 604) includes a first set of visual characteristics (e.g., visual characteristics of 604) (e.g., as described above with respect to FIGS. 6A-6C). In some embodiments, the representation of the second software object (e.g., 620) includes a second set of visual characteristics different from the first set of visual characteristics (e.g., visual characteristics of 620) (e.g., as described above with respect to FIG. 6E). In some embodiments, the representation of the first software object is a first avatar (e.g., that includes the first set of visual characteristics). In some embodiments, the representation of the second software object is a second avatar (e.g., that includes the second set of visual characteristics). In some embodiments, a visual characteristic includes color, shape, movements, mannerisms, facial features, and/or any other characteristic affecting visual appearance. The representation of the first software object and the representation of the second software object having differing visual characteristics allows the computer system to display different-looking software objects and provides the user with visual confirmation that the software object has changed, thereby performing an operation when a set of conditions has been met without requiring further input and providing improved visual feedback to the user.
[0224] In some embodiments, the computer system (e.g., 600) is in communication with one or more audio output components (e.g., smart speakers, home theater systems, soundbars, headphones, earphones, earbuds, speakers, television speakers, augmented reality headset speakers, audio jacks, optical audio outputs, Bluetooth audio outputs, and/or HDMI audio outputs). In some embodiments, while displaying the representation of the first software object (e.g., 604), the computer system outputs, via the one or more audio output components, a first audio output (e.g., 612) (e.g., audible output of a speaker and/or other sound-generating component) corresponding to (e.g., synchronized with, attributable to, and/or appearing to emanate from) the representation of the first software object (e.g., as described above with respect to FIG. 6B). In some embodiments, while displaying the representation of the second software object (e.g., 620), the computer system outputs, via the one or more audio output components, a second audio output (e.g., 624) corresponding to (e.g., synchronized with, attributable to, and/or appearing to emanate from) the representation of the second software object, wherein the first audio output is different from the second audio output (e.g., as described above with respect to FIG. 6E). Outputting different audio outputs while displaying different software objects allows the computer system to provide audio feedback to the user with specific audio characteristics to distinguish the different software objects, thereby performing an operation when a set of conditions has been met without requiring further input and providing improved feedback to the user.
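The differing audio characteristics could be modeled as per-object audio profiles, for example as sketched below; the profile fields, values, and names are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical per-object audio profiles illustrating characteristics such
# as voice, pitch, and volume; names and values are illustrative only.
AUDIO_PROFILES = {
    "first_software_object":  {"voice": "voice_a", "pitch": 1.0, "volume": 0.8},
    "second_software_object": {"voice": "voice_b", "pitch": 1.2, "volume": 0.8},
}

def audio_profile(software_object: str) -> dict:
    """Return the audio characteristics attributed to a software object."""
    return AUDIO_PROFILES[software_object]
```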
[0225] In some embodiments, the first software object (e.g., represented by representation 604) corresponds to a first application (e.g., a software application and/or knowledge base) (e.g., as described above with respect to FIGS. 6A-6C). In some embodiments, the second software object (e.g., represented by avatar 620) corresponds to a second application (e.g., 614) (e.g., a software application and/or knowledge base) different from the first application (e.g., as described above with respect to FIGS. 6B and 6E). In some embodiments, the first application and/or the second application include overlapping information. In some embodiments, the first application and the second application do not include overlapping information. Displaying representations corresponding to different software objects corresponding to different applications allows the computer system to use multiple software objects depending on what operation needs to be performed and/or displayed, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further input.
[0226] In some embodiments, in response to receiving the indication (e.g., 605a and/or 605b) that the representation of the second software object (e.g., 620) is to be displayed, the computer system ceases displaying, via the display component, the representation of the first software object (e.g., 604) (e.g., as described above with respect to FIGS. 6A-6E). In some embodiments, the computer system displays the representation of the first software object and the representation of the second software object concurrently (e.g., temporarily, for a predetermined period of overlap, and/or until additional input is received). In some embodiments, the computer system does not display the representation of the first software object and the representation of the second software object concurrently (e.g., only one of the representation of the first software object or the representation of the second software object is displayed at a time). In some embodiments, the computer system displays, via the display component, a visual change of the representation of the first software object into the representation of the second software object. Ceasing displaying the representation of the first software object in response to receiving an indication that the second software object is to be displayed allows the computer system to indicate the transition to the second software object, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, allowing the computer system to avoid burn-in of the display generation component, and performing an operation when a set of conditions has been met without requiring further input.
[0227] In some embodiments, performing the movement in conjunction with displaying the representation of the second software object (e.g., 620) includes initiating (e.g., beginning and/or starting) displaying of the representation of the second software object while performing the movement (e.g., as described above with respect to FIGS. 6C-6E) (e.g., while still moving and/or after initiating the movement but before ceasing the movement) . Initiating displaying the representation of the second software object while performing the movement enables the computer system to synchronize movement with a transition to displaying the representation of the second software object, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further input.
[0228] In some embodiments, displaying the representation of the second software object (e.g., 620) in conjunction with performing the movement is initiated after the movement has started (e.g., as described above with respect to FIGS. 6C-6E) (e.g., after the movement is completed and/or after initiating the movement but before ceasing the movement). Displaying the representation of the second software object after the movement has started enables the computer system to synchronize movement with a transition to displaying the representation of the second software object, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further input.
[0229] In some embodiments, the computer system (e.g., 600) is in communication with a second set of one or more audio output components (e.g., smart speakers, home theater system, soundbars, headphones, earphones, earbuds, speakers, television speakers, augmented reality headset speakers, audio jacks, optical audio output, Bluetooth audio outputs, and/or HDMI audio outputs) . In some embodiments, after performing the movement, the computer system outputs, via the second set of one or more audio output components, a second audio output (e.g., 624) (e.g., speech, music and/or sound) corresponding to (e.g., synchronized with, attributable to, and/or appearing to emanate from) the representation of the second software object (e.g., 620) (e.g., as described above with respect to FIGS. 6C-6E) . Outputting audio corresponding to the representation of the second software object after performing the movement allows the computer system to provide feedback and/or content corresponding to the second software object, thereby providing improved feedback to subjects, and performing an operation when a set of conditions has been met without requiring further user input.
[0230] In some embodiments, the representation of the first software object (e.g., 604) is displayed in conjunction with performing the movement (e.g., as described above with respect to FIGS. 6A-6D) (e.g., the representation of the first software object is displayed for at least a portion of (e.g., a period of time of and/or a movement component of) the movement before ceasing displaying the representation of the first software object). In some embodiments, performing the movement begins and/or continues (e.g., at least in part) while displaying the representation of the first software object. In some embodiments, the movement (e.g., and/or portion thereof) performed while displaying the representation of the first software object is the same as the movement performed in conjunction with displaying, via the display generation component, the representation of the second software object. In some embodiments, the movement (e.g., and/or portion thereof) performed while displaying the representation of the first software object is different from the movement performed in conjunction with displaying the representation of the second software object. In some embodiments, the representation of the first software object transitions to (e.g., switches to, is replaced by, and/or is overlaid by) the representation of the second software object before the movement is done being performed. Performing the movement while continuing to display the representation of the first software object in response to receiving the indication that the second software object is to be displayed allows the computer system to transition between software objects and give the user feedback that a software object change is about to occur, thereby providing improved feedback to subjects and performing an operation when a set of conditions has been met without requiring further user input.
[0231] In some embodiments, the representation of the first software object (e.g., 604) and the representation of the second software object (e.g., 620) are not concurrently displayed (e.g., as described above with respect to FIGS. 6A- 6E) . In some embodiments, in response to receiving the
indication that the representation of the second software object is to be displayed, ceasing displaying the representation of the first software object in conjunction with displaying, via the display component, the representation of the second software object (e.g., performing the movement can occur with the first software object and/or the second software object) . The representation of the first software object and the representation of the second software object not being concurrently displayed allows the computer system to display only the necessary software object to perform an operation and not clutter the display, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, allowing the computer to avoid burn-in of the display generation component, and performing an operation when a set of conditions has been met without requiring further input.
[0232] In some embodiments, before performing (e.g., before starting and/or initiating) the movement, the computer system ceases displaying, via the display component, the representation of the first software object (e.g., 604) (e.g., as described above with respect to FIGS. 6A-6D) . Ceasing displaying the representation of the first software object before performing the movement allows the computer system to automatically reduce visual distractions before the display is transitioned to another software object and movement is performed, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, allowing the computer to avoid burn-in of the display generation component, and performing an operation when a set of conditions has been met without requiring further input.
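The display-and-movement orderings described in paragraphs [0230]-[0232] can be sketched as follows. This is a minimal illustrative sketch only, not an implementation from the disclosure; the `Display` class, the `transition` function, and the callback-based movement are all hypothetical names introduced for illustration.

```python
# Illustrative sketch only: `Display`, `transition`, and the movement
# callback are hypothetical names, not part of the disclosure.

class Display:
    def __init__(self):
        self.visible = set()

    def show(self, representation):
        self.visible.add(representation)

    def hide(self, representation):
        self.visible.discard(representation)

def transition(display, perform_movement, first, second, cease_before_movement):
    """Switch from `first` to `second` so that the two representations
    are never displayed concurrently (paragraph [0231])."""
    if cease_before_movement:
        display.hide(first)        # [0232]: cease display before the movement
        perform_movement()
        display.show(second)
    else:
        perform_movement()         # [0230]: movement while `first` is displayed
        display.hide(first)
        display.show(second)
    return display.visible

display = Display()
display.show("first")
movements = []
print(transition(display, lambda: movements.append("spin"),
                 "first", "second", cease_before_movement=True))
# prints {'second'}
```

Either branch ends with only the second representation visible; the flag merely controls whether the first representation remains displayed while the movement is performed.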
[0233] In some embodiments, while displaying, via the display component, the representation of the second software object (e.g., 604 and/or 620) (e.g., in conjunction to performing the
movement) , the computer system receives an indication (e.g., 605a and/or 605b) that a representation of a third software object (e.g., 604 and/or 620) is to be displayed, wherein the third software object is different from the second software object (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, the third software object is the same as the first software object. In some embodiments, the third software object is different from the first software object. In some embodiments, the indication that the representation of the third software object is to be displayed is an input by a user. In some embodiments, the indication that the representation of the third software object is to be displayed occurs when the user is detected to be in the same environment as the computer system. In some embodiments, the indication that the representation of the third software object is to be displayed occurs when a new user interacts with the computer system. In some embodiments, the indication that the representation of the third software object is to be displayed includes a request for the computer system (and/or the second software object) to complete a task. In some embodiments, the indication that the representation of the third software object is to be displayed occurs when the current software object is unable to complete the task and needs to use another application. In some embodiments, in response to receiving the indication (e.g., 605a and/or 605b) that the representation of the third software object (e.g., 604 and/or 620) is to be displayed, the computer system performs, via the movement component, the movement in conjunction with displaying, via the display component, the representation of the third software object (e.g., as described above with respect to FIGS. 6A-6E) .
In some embodiments, displaying the representation of the third software object includes switching the display of the representation of the second software object to the representation of the third software object.
Performing the same movement in conjunction with displaying a third software object allows the computer system to reuse the same movement for different software objects and provide the user with visual confirmation that a change in software objects is occurring, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further input.
[0234] In some embodiments, the movement performed in conjunction with displaying the representation of the second software object (e.g., 604 and/or 620) is a first movement (e.g., as described above with respect to FIGS. 6C-6E) . In some embodiments, while displaying the representation of the second software object (e.g., 604 and/or 620) (e.g., in conjunction with performing the movement) , the computer system receives an indication (e.g., 605a and/or 605b) that a representation of a fourth software object (e.g., 604 and/or 620) is to be displayed, wherein the second software object is different from the fourth software object (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, the fourth software object is the same as the first software object and/or third software object. In some embodiments, the fourth software object is different from the first software object and/or third software object. In some embodiments, the indication that the representation of the fourth software object is to be displayed is an input by a user. In some embodiments, the indication that the representation of the fourth software object is to be displayed occurs when the user is detected to be in the same environment as the computer system. In some embodiments, the indication that the representation of the fourth software object is to be displayed occurs when a new user interacts with the computer system. In some embodiments, the indication that the
representation of the fourth software object is to be displayed occurs when the computer system (e.g., and/or second software object) is requested to complete a task. In some embodiments, the indication that the representation of the fourth software object is to be displayed occurs when the current software object is unable to complete the task and needs to use another application. In some embodiments, in response to receiving the indication (e.g., 605a and/or 605b) that the representation of the fourth software object (e.g., 604 and/or 620) is to be displayed, the computer system performs, via the movement component, a second movement different from the first movement in conjunction with displaying, via the display component, the representation of the fourth software object (e.g., as described above with respect to FIGS. 6C-6E) . In some embodiments, displaying the representation of the fourth software object includes switching the display of the representation of the second software object to the representation of the fourth software object. Performing a different movement in conjunction with displaying a different software object enables the computer system to use different movements that are dependent on the software object to be displayed and provide the user with visual confirmation that a change in software objects is occurring, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further input.
[0235] In some embodiments, the first software object corresponds to (e.g., represented by, includes, and/or is associated with) a system process (e.g., operating system, widgets and/or utilities) and the second software object corresponds to (e.g., represented by, includes, and/or is associated with) a first application (e.g., 614) process
(e.g., as described above with respect to FIGS. 6A-6E) (e.g., maps, messaging, knowledge based application, and/or downloaded application) . In some embodiments, displaying, via the display component, the representation of the second software object (e.g., 620) includes switching (e.g., transitioning, changing, and/or moving) , the display of the representation of the first software object (e.g., 604) to the representation of the second software object (e.g., as described above with respect to FIGS. 6C-6E) . In some embodiments, performing, via the movement component, the movement in conjunction with displaying, via the display component, the representation of the second software object (e.g., 620) includes moving in a first movement pattern (e.g., as described above with respect to FIGS. 6C-6E) . In some embodiments, while displaying, via the display component, the representation of the second software object (e.g., 620) , the computer system receives an indication (e.g., 605a and/or 605b) that a representation of a fifth software object (e.g., 604 and/or 620) is to be displayed (e.g., as described above with respect to FIGS. 6A-6E) , wherein: the fifth software object is different from the first software object and the second software object; and the fifth software object corresponds to a second application process. 
In some embodiments, in response to receiving the indication (e.g., 605a and/or 605b) that the representation of the fifth software object (e.g., 604 and/or 620) is to be displayed, the computer system performs, via the movement component, a second movement pattern different from the first movement pattern in conjunction with displaying, via the display component, the representation of the fifth software object, and wherein displaying the representation of the fifth software object includes switching (e.g., transitioning, changing, and/or moving) the display of the representation of the second software object (e.g., 604 and/or 620) to the representation
of the fifth software object (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, the second movement pattern is the same when switching between software objects corresponding to different application processes. Switching between different software objects using one movement when changing to different software objects that are different types of software object and/or using a different movement when changing to different software objects that are the same type of software object allows the computer system to provide different visual feedback to the user of the type of software object that will be displayed, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further input.
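The pattern selection described in paragraphs [0233]-[0235], where one movement pattern accompanies a switch involving a system process and a different pattern is reused for switches between application processes, can be sketched as below. The function name and the string labels are illustrative assumptions, not terms from the disclosure.

```python
# Hypothetical sketch: process labels and pattern names are illustrative only.

def movement_pattern(current_process, next_process):
    """Pick a movement pattern for a software-object switch, where each
    argument is either "system" or "application"."""
    if current_process == "application" and next_process == "application":
        # The same second pattern is reused for any app-to-app switch
        # (paragraph [0235]).
        return "second pattern"
    # Switching to or from a system process uses the first pattern.
    return "first pattern"

print(movement_pattern("system", "application"))       # prints first pattern
print(movement_pattern("application", "application"))  # prints second pattern
```

The point of the sketch is only that the chosen movement depends on the kinds of software objects involved in the switch, not on the specific objects themselves.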
[0236] Note that details of the processes described above with respect to method 700 (e.g., FIG. 7) are also applicable in an analogous manner to other methods described herein. For example, method 800 optionally includes one or more of the characteristics of the various methods described above with reference to method 700. For example, the displayed identifier of method 800 can correspond to the second software object of method 700. For brevity, these details are not repeated herein.
[0237] FIG. 8 is a flow diagram illustrating a method (e.g., method 800) for displaying an indication of a respective application corresponding to a software object in accordance with some embodiments. Some operations in method 800 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted .
[0238] As described below, method 800 provides an intuitive way for displaying an indication of a respective application corresponding to a software object. Method 800 reduces the cognitive burden on a user, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to interact with such devices faster and more efficiently conserves power and increases the time between battery charges.
[0239] In some embodiments, method 800 is performed at a computer system (e.g., 600) that is in communication with one or more input devices (e.g., a camera, a depth sensor, and/or a microphone) and a display component. In some embodiments, the computer system is a watch, a phone, a tablet, a fitness tracking device, a processor, a head-mounted display (HMD) device, a communal device, a media device, a speaker, a television, and/or a personal computing device.
[0240] The computer system detects (802) , via the one or more input devices, a request (e.g., 605a and/or 605b) (e.g., an input (e.g., a tap input and/or a non-tap input (e.g., a verbal input, an audible request, an audible command, an audible statement, a swipe input, a hold-and-drag input, a gaze input, an air gesture, and/or a mouse click) ) (e.g., as described above with respect to FIGS. 6A-6B) ) .
[0241] In response to receiving the request (e.g., 605a and/or 605b) , the computer system displays (804) , via the display component, a representation (e.g., an avatar, user interface object, an animation, a character, and/or content that is representative) of a first software object (e.g., 604 and/or 620) that corresponds to (e.g., represented by, includes, and/or is associated with) a respective application (e.g., 614) (e.g., downloaded application, system process, and/or
widgets) (e.g., as described above with respect to FIGS. 6A-6E) .
[0242] While displaying (806) , via the display generation component, the representation of the first software object (e.g., 604 and/or 620) that corresponds to the respective application, in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object (e.g., 604 and/or 620) corresponds to (e.g., is directed to, is selection of, is pointed in a direction of (e.g., a direction of a representation of) , includes reference to, mentions, names, identifies, and/or is configured to be associated with) a first application, the computer system displays (808) , via the display component, an identifier (e.g., 622) (e.g., indication of application (e.g., text, image and/or animation) ) corresponding to (e.g., adjacent to, directed to, alongside, identifying, describing a source of, describing an application of, and/or including one or more details related to) the first software object (e.g., as described above with respect to FIG. 6E) .
[0243] While (806) displaying, via the display generation component, the representation of the first software object that corresponds to the respective application, in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object (e.g., 604 and/or 620) corresponds to (e.g., is directed to, is selection of, is pointed in a direction of (e.g., a direction of a representation of) , includes reference to, mentions, names, identifies, and/or is configured to be associated with) a second application (e.g., system process) different from the first application, the computer system forgoes (810) displaying (e.g., not displaying) , via the
display component, the identifier (e.g., 622) corresponding to the first software object (e.g., as described above with respect to FIGS. 6A-6C) . Displaying an identifier corresponding to the first software object in accordance with satisfaction of a set of one or more criteria that includes a criterion satisfied when a software object corresponds to an application allows the computer system to selectively output an indication for a particular software object, thereby providing improved feedback to subjects, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
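The conditional identifier display of steps (808) and (810) amounts to a simple branch: show the identifier only when the software object corresponds to the first application, and forgo it otherwise. A minimal illustrative sketch follows; the dictionary layout and the function name are assumptions introduced here, not from the disclosure.

```python
# Illustrative sketch of steps (808)/(810): display an identifier only when
# the software object corresponds to the first application. The dict keys
# and example values are hypothetical.

def identifier_to_display(software_object, first_application):
    """Return the identifier to display, or None to forgo displaying it."""
    if software_object["application"] == first_application:
        return software_object["identifier"]   # (808): display the identifier
    return None                                # (810): forgo displaying it

maps_object = {"application": "maps", "identifier": "Maps"}
system_object = {"application": "system", "identifier": "System"}
print(identifier_to_display(maps_object, "maps"))    # prints Maps
print(identifier_to_display(system_object, "maps"))  # prints None
```

Returning `None` here stands in for the forgo-displaying branch: the caller simply displays nothing alongside the representation.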
[0244] In some embodiments, displaying the representation of the first software object (e.g., 604 and/or 620) that corresponds to the respective application (e.g., 614) includes displaying a visual change over time corresponding to the representation of the first software object (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, the visual change over time includes one or more of animation and/or movement of the representation of the first software object. In some embodiments, the visual change over time is output while displaying the identifier corresponding to the first software object. In some embodiments, the visual change over time includes dynamic animations and/or movement that shows interaction with the first software object with the user, the computer system and/or content being displayed. Displaying a visual change over time when displaying the representation of the first software object allows the computer system to make visually dynamic changes when displaying a representation of a software object as necessary, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an
operation, and performing an operation when a set of conditions has been met without requiring further user input.
[0245] In some embodiments, the computer system (e.g., 600) is in communication with a first audio output component (e.g., smart speakers, home theater system, soundbars, headphones, earphones, earbuds, speakers, television speakers, augmented reality headset speakers, audio jacks, optical audio output, Bluetooth audio outputs, and/or HDMI audio outputs) . In some embodiments, while displaying the representation of the first software object (e.g., 604 and/or 620) that corresponds to the respective application, the computer system outputs, via the first audio output component, an audio output (e.g., 612 and/or 624) (e.g., audible output of a speaker and/or other sound generating component) corresponding to (e.g., synchronized with, attributable to, and/or appearing to emanate from) the representation of the first software object (e.g., as described above with respect to FIGS. 6A-6E) . Outputting an audio output when displaying the representation of the first software object allows the computer system to provide audible feedback to the user, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
[0246] In some embodiments, displaying the representation of the first software object (e.g., 604 and/or 620) that corresponds to the respective application includes performing a movement (e.g., of the representation of the first software object (e.g., shifting the first software object from a first position to a second position, moving at least a portion of the first software object, and/or changing at least a portion of the first software object) ) corresponding to (e.g., is movement of and/or is movement of one or more user interface
element different from but while displaying) the representation of the first software object (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, the audio output (e.g., 612 and/or 624) corresponding to the representation of the first software object (e.g., 604 and/or 620) is synchronized with (e.g., accompanied by, in conjunction with, and/or simultaneously with) the movement corresponding to the representation of the first software object (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, the computer system is in communication with a first movement component (e.g., an actuator, a movable base, a rotatable component, and/or a rotatable base) . In some embodiments, the movement is and/or is performed in conjunction with movement performed via the movement component (e.g., a physical movement) (e.g., of a physical portion of the computer system) (e.g., via the one or more output devices) . In some embodiments, the movement is on the display component (e.g., is movement of the representation of the software object) . The audio output corresponding to the representation of the first software object being synchronized with the movement corresponding to the representation of the first software object allows the computer system to simultaneously communicate with the user using visual movement and audio, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
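One way to picture the synchronization described in paragraph [0246] is as a timeline that pairs each movement step with the audio offset at which it begins, so the sound appears to accompany the motion. The timeline format below is an assumption introduced purely for illustration.

```python
# Minimal sketch of synchronizing audio with movement (paragraph [0246]):
# each movement step is paired with the audio offset at which it begins.
# The (step, offset) format is a hypothetical representation.

def synchronized_timeline(movement_steps, audio_duration_s):
    """Spread `movement_steps` evenly across an audio clip and return
    (step, start_offset_seconds) pairs."""
    if not movement_steps:
        return []
    interval = audio_duration_s / len(movement_steps)
    return [(step, round(i * interval, 3))
            for i, step in enumerate(movement_steps)]

print(synchronized_timeline(["turn down", "turn up"], 1.0))
# prints [('turn down', 0.0), ('turn up', 0.5)]
```

In practice the schedule could equally be driven by the audio engine's clock; the sketch only shows the pairing of motion and sound offsets.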
[0247] In some embodiments, the computer system (e.g., 600) is in communication with one or more output devices (e.g., smart speakers, home theater system, soundbars, headphones, earphones, earbuds, speakers, television speakers, augmented reality headset speakers, audio jacks, optical audio output, Bluetooth audio outputs, and/or HDMI audio outputs) . In some
embodiments, while displaying the representation of the first software object (e.g., 604 and/or 620) , the computer system receives a first input (e.g., 605a and/or 605b) (e.g., a tap input (e.g., a swipe input, a hold-and-drag input, an air gesture, and/or a mouse click) and/or a non-tap input (e.g., a verbal input, an audible request, an audible command, an audible statement, and/or a gaze input) ) , via the one or more input devices, corresponding to an interaction with the first software object (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, in response to receiving the input (e.g., 605a and/or 605b) corresponding to an interaction with the first software object (e.g., 604 and/or 620) , in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the first input (e.g., 605a and/or 605b) corresponds to (e.g., is directed to, is selection of, is pointed in a direction of (e.g., a direction of a representation of the software object and/or application) , includes reference to, mentions, names, identifies, and/or is configured to be associated with) the first application, the computer system outputs, via the one or more output devices, a first response (e.g., 612 and/or 624) (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, the first response is output in conjunction with displaying the identifier. In some embodiments, the first response is not output in conjunction with displaying the identifier.
In some embodiments, in response to receiving the input corresponding to an interaction with the first software object, in accordance with a determination that a fourth set of one or more criteria is satisfied, wherein the fourth set of one or more criteria includes a criterion that is satisfied when the first input (e.g., 605a and/or 605b) corresponds to the second application, the computer system outputs, via the one or more output devices, a second response
(e.g., 612 and/or 624) (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, the second response is the same as the first response. In some embodiments, the second response is different from the first response. In some embodiments, the second response is output in conjunction with changing to a new software object. In some embodiments, the second response is output in conjunction with forgoing displaying the identifier. Outputting a response after receiving an input corresponding to a first application or corresponding to a second application allows the computer system to support interaction with both the first application and the second application, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
[0248] In some embodiments, the computer system (e.g., 600) is in communication with a display component. In some embodiments, before detecting the request (e.g., 605a and/or 605b) , the computer system displays, via the display component, a representation of a second software object (e.g., 604) different from the representation of the first software object (e.g., as described above with respect to FIG. 6A) . In some embodiments, the representation of the second software object corresponds to a previous request and/or task. In some embodiments, the representation of the second software object corresponds to a system process and the first software object corresponds to an application process. In some embodiments, in response to detecting the request (e.g., 605a and/or 605b) , the computer system transitions (e.g., moving, altering, modifying, and/or replacing) the representation of the second software object (e.g., 604) to the representation of the first software object (e.g., 620) (e.g., as described above with
respect to FIGS. 6A-6E) . In some embodiments, the representation of the first software object replaces the representation of the second software object. In some embodiments, transitioning the representation of the second software object to a representation of the first software object includes moving the second software object in a manner that shows the second software object exiting and then moving the first software object in a manner that shows the first software object entering (e.g., a user interface and/or area of the display component) . In some embodiments, transitioning the representation of the second software object to a representation of the first software object includes ceasing displaying the representation of the second software object and displaying the representation of the first software object. In some embodiments, the detected request is related to an application that corresponds to the first software object. In some embodiments, the detected request includes a request to display the representation of the first software object and/or a representation of an application.
Transitioning the representation of the second software object to a representation of the first software object allows the computer system to easily change software objects when needed and visually inform the user that a new software object is being displayed, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
[0249] In some embodiments, the computer system (e.g., 600) is in communication with a movement component (e.g., as described above with respect to method 700) . In some embodiments, transitioning the representation of the second software object (e.g., 604) to the representation of the first software object (e.g., 620) includes performing, via the movement component, a
movement (e.g., as described above with respect to FIGS. 6A-6E) (e.g., a physical movement of a physical portion of the computer system) (e.g., corresponding to a software object change operation) (e.g., a set of one or more components (e.g., direction, speed, and/or distance) that define a preprogrammed movement) (e.g., a 360-degree spin (e.g., clockwise, counterclockwise, and/or rotation without a specific rotational sense) , a 180-degree turn to face away followed by a 180-degree turn to return to a start position, a turn downward followed by a turn upward to return to a start position, and/or a turn upward followed by a turn downward to return to a start position) . In some embodiments, the movement results in the computer system moving the display component (and/or another portion of the computer system) to be in a position where the user (e.g., represented with respect to and/or having a viewpoint) in the same physical environment as the computer system cannot see a portion of the display component and/or what is displayed on the display component due to the changed position of the computer system. Transitioning from the representation of the second software object to the representation of the first software object including a movement allows the computer system to visually inform the user that a software object transition occurs, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
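The preprogrammed movements listed in paragraph [0249] (a 360-degree spin, paired 180-degree turns, paired up/down turns) can be encoded as sequences of (direction, degrees) components, which also makes the return-to-start property checkable. Everything below, including the movement names, the direction labels, and the pairing table, is a hypothetical encoding for illustration only.

```python
# Hypothetical encoding of the preprogrammed movements in paragraph [0249]
# as (direction, degrees) components; names and values are illustrative.

MOVEMENTS = {
    "spin": [("clockwise", 360)],
    "turn away and back": [("turn away", 180), ("turn back", 180)],
    "nod down then up": [("turn down", 45), ("turn up", 45)],
}

# Maps each "undoing" direction to the direction it cancels.
RETURNING = {"turn back": "turn away", "turn up": "turn down"}

def returns_to_start(name):
    """True when every turn is undone by a matching opposite turn, or the
    net rotation on each axis is a full multiple of 360 degrees."""
    angles = {}
    for direction, degrees in MOVEMENTS[name]:
        opposite = RETURNING.get(direction)
        if opposite is not None:
            angles[opposite] = angles.get(opposite, 0) - degrees
        else:
            angles[direction] = angles.get(direction, 0) + degrees
    return all(total % 360 == 0 for total in angles.values())

print(returns_to_start("spin"))                # prints True
print(returns_to_start("turn away and back"))  # prints True
```

Each listed movement ends where it began, which matches the description of turns that "return to a start position" and of a full 360-degree spin.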
[0250] In some embodiments, transitioning the representation of the second software object (e.g., 604) to the representation of the first software object (e.g., 620) includes: ceasing displaying, via the display component, the representation of the second software object (e.g., 604) (e.g., as described above with respect to FIGS. 6C-6E) ; and
displaying, via the display component, the representation of the first software object (e.g., 620) (e.g., as described above with respect to FIGS. 6C-6E) . Ceasing displaying the representation of the second software object and displaying the representation of the first software object allows the computer system to visually inform the user that a software object transition occurs, thereby providing improved feedback to a user, allowing the computer system to avoid burn-in of the display component, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
[0251] In some embodiments, in response to receiving the request (e.g., 605a and/or 605b) , the computer system concurrently displays, via the display component, the representation of the first software object and a representation of a third software object (e.g., 604 and 620) different from the representation of the first software object (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, the representation of the first software object and the representation of the third software object are displayed at the same time. In some embodiments, the representation of the first software object is displayed immediately after the representation of the third software object. In some embodiments, the representation of the third software object is the same as the representation of the second software object. In some embodiments, the representation of the third software object is different from the representation of the second software object. In some embodiments, the representation of the first software object and the representation of the third software object correspond to different applications. Displaying the representation of the first software object concurrently with the representation
of the third software object allows the computer system to display multiple software objects to the user and potentially indicate a transition is occurring, will occur, and/or may occur, thereby providing improved feedback to a user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
[0252] In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when a second input (e.g., 605b) (e.g., a tap input and/or a non-tap input (e.g., a verbal input, an audible request, an audible command, an audible statement, a swipe input, a hold-and-drag input, a gaze input, an air gesture, and/or a mouse click)) is detected (e.g., as described above with respect to FIGS. 6A-6E). In some embodiments, the second input corresponds to (e.g., is directed to, is at a location of, and/or indicates an intent related to) the representation of the software object and/or respective application. In some embodiments, the second input is a particular type of input (e.g., representing and/or configured to indicate an intent as a request to display the indication). Displaying the identifier in accordance with a second input being detected allows the computer system to indicate that it has received the input from the user, thereby providing improved feedback to the user and performing an operation when a set of conditions has been met without requiring further user input.
[0253] In some embodiments, the second input (e.g., 605b) is an interaction with (e.g., during an exchange and/or in response to displaying the first software object) the first software object (e.g., 604 and/or 620) (e.g., as described above with respect to FIGS. 6A-6E) .
[0254] In some embodiments, the second input (e.g., 605b) (e.g., gaze, air gesture, and/or hover) is directed to (e.g., physically directed to and/or verbally directed to) the first software object (e.g., 604 and/or 620) (e.g., as described above with respect to FIGS. 6A-6E) .
[0255] In some embodiments, the one or more input devices include a microphone, and wherein the second input (e.g., 605b) includes verbal input (e.g., as described above with respect to FIGS. 6A-6E) (e.g., an audible request, an audible command, and/or an audible statement). Displaying the identifier in accordance with a second input that includes verbal input provides the computer system with increased flexibility and/or accessibility in receiving communication from a subject and/or enables the computer system to perform an operation based on audio, thereby reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and performing an operation when a set of conditions has been met without requiring further user input.
[0256] In some embodiments, in accordance with a determination that a fifth set of one or more criteria is satisfied (e.g., a predetermined amount of time has passed, output is completed, input has ended, and/or a new input is received), the computer system ceases displaying the identifier (e.g., 622) corresponding to the representation of the first software object (e.g., as described above with respect to FIGS. 6A-6E). In some embodiments, the representation of the first software object (and/or one or more other representations of software objects) continues to be displayed after ceasing displaying the identifier corresponding to the representation of the first software object. In some embodiments, the computer system ceases displaying the representation of the first software object (and/or one or more other representations of software
objects) after ceasing displaying the identifier corresponding to the representation of the first software object. Ceasing displaying the identifier corresponding to the first software object in accordance with a determination that a set of one or more criteria has been met allows the computer system to decrease clutter on the user interface when the identifier is no longer needed, thereby reducing the number of inputs needed to perform an operation, allowing the computer system to avoid burn-in of the display component, and performing an operation when a set of conditions has been met without requiring further user input.
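The dismissal criteria described above (a timeout elapsing, output completing, or a new input arriving) can be sketched as a small controller. This is an illustrative sketch only, not the disclosed implementation; the class, method, and parameter names (e.g., IdentifierController, timeout_seconds) are hypothetical.

```python
class IdentifierController:
    """Hypothetical sketch: cease displaying an identifier once any
    criterion in a set of dismissal criteria is satisfied."""

    def __init__(self, timeout_seconds=5.0):
        self.timeout_seconds = timeout_seconds  # assumed predetermined amount of time
        self.identifier_visible = False
        self.shown_at = None

    def show_identifier(self, now):
        # Begin displaying the identifier at time `now` (seconds).
        self.identifier_visible = True
        self.shown_at = now

    def update(self, now, output_completed=False, new_input=False):
        # Cease displaying the identifier when any dismissal criterion holds:
        # the timeout has elapsed, output is completed, or a new input arrived.
        if not self.identifier_visible:
            return
        timed_out = (now - self.shown_at) >= self.timeout_seconds
        if timed_out or output_completed or new_input:
            self.identifier_visible = False
```

Under this sketch, the representation of the software object itself can remain displayed after the identifier is dismissed; only the identifier's visibility is tracked here.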
[0257] In some embodiments, before detecting the request (e.g., 605a) , the computer system displays, via the display component, a representation of a fourth software object (e.g., 604 or 620) , wherein the representation of the fourth software object includes (e.g., corresponds to, has, and/or is represented by) a first set of one or more visual characteristics, wherein the representation of the first software object (e.g., 604 or 620) includes (e.g., corresponds to, has, and/or is represented by) a second set of one or more visual characteristics different from the first set of one or more visual characteristics, and wherein the first software object is different from the fourth software object (e.g., as described above with respect to FIGS. 6A-6E) . In some embodiments, the fourth software object and the first software object are concurrently displayed (or not concurrently displayed) . In some embodiments, the first software object replaces the fourth software object. In some embodiments, the fourth software object is the same as the second and/or third software object. In some embodiments, the fourth software object is different than the second and/or third software object. In some embodiments, the representation of the first software object is a first avatar (e.g., that includes the
second set of visual characteristics). In some embodiments, the representation of the fourth software object is a second avatar (e.g., that includes the first set of visual characteristics). In some embodiments, a visual characteristic includes color, shape, movements, mannerisms, facial features, and/or other characteristics affecting visual appearance. The representation of the first software object and the representation of the fourth software object having differing visual characteristics allows the computer system to display different-looking software objects based on which one needs to be displayed and to provide the user with visual confirmation that the software object has changed, thereby performing an operation when a set of conditions has been met without requiring further input and providing improved visual feedback to the user.
[0258] In some embodiments, before detecting the request (e.g., 605a), the computer system displays, via the display component, a representation of the fifth software object (e.g., 604 or 620), wherein the fifth software object includes (e.g., corresponds to, has, and/or is represented by) a first set of one or more audio characteristics (e.g., tone, pitch, accent, voice, and/or volume), wherein the first software object (e.g., 604 or 620) includes (e.g., corresponds to, includes, and/or is represented by) a second set of one or more audio characteristics (e.g., tone, pitch, accent, voice, and/or volume) different from the first set of one or more audio characteristics, and wherein the first software object is different from the fifth software object (e.g., as described above with respect to FIGS. 6A-6E). In some embodiments, the fifth software object and the first software object are concurrently displayed (or not concurrently displayed). In some embodiments, the first software object replaces the fifth software object. In some embodiments, the
fifth software object is the same as the second, third, and/or fourth software object. In some embodiments, the fifth software object is different from the second, third, and/or fourth software object. In some embodiments, the computer system outputs a first audio output corresponding to the fifth software object using the first set of one or more audio characteristics (e.g., and not the second set of one or more audio characteristics). In some embodiments, the computer system outputs a second audio output corresponding to the first software object using the second set of one or more audio characteristics (e.g., and not the first set of one or more audio characteristics). Displaying representations of software objects that include different audio characteristics allows the computer system to output distinguishing features of different software objects, thereby performing an operation when a set of conditions has been met without requiring further input and providing improved feedback to the user.
[0259] Note that details of the processes described above with respect to method 800 (e.g., FIG. 9) are also applicable in an analogous manner to other methods described herein. For example, method 700 optionally includes one or more of the characteristics of the various methods described above with reference to method 800. For example, displaying the representation of the second software object, in accordance with the determination that the request corresponds to the second application, of method 900 can occur in conjunction with (e.g., while, after, and/or in response to) the movement of method 700. For brevity, these details are not repeated herein .
[0260] FIG. 9 is a flow diagram illustrating a method (e.g., method 900) for displaying a representation of a software object for an application that corresponds to a request in accordance with some embodiments. Some operations in method
900 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0261] As described below, method 900 provides an intuitive way for displaying a representation of a software object for an application that corresponds to a request. Method 900 reduces the cognitive burden on a user, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to interact with such devices faster and more efficiently conserves power and increases the time between battery charges.
[0262] In some embodiments, method 900 is performed at a computer system (e.g., 600) that is in communication with one or more input devices (e.g., a camera, a depth sensor, and/or a microphone) and a display component (e.g., a display screen, a projector, and/or a touch-sensitive display) . In some embodiments, the computer system is a watch, a phone, a tablet, a fitness tracking device, a processor, a head-mounted display (HMD) device, a communal device, a media device, a speaker, a television, and/or a personal computing device.
[0263] While displaying, via the display component, a representation of a first software object (e.g., 604) that corresponds to a first application (e.g., downloaded application, systems process, and/or widgets) , the computer system detects (902) input (e.g., a tap input (e.g., a swipe input, a hold-and-drag input, an air gesture, and/or a mouse click) and/or a non-tap input (e.g., a verbal input, an audible request, an audible command, an audible statement, and/or a gaze input) ) representing a request (e.g., 605a) (e.g., as described above with respect to FIG. 6A) (e.g., to perform a task and/or for information) .
[0264] In response to receiving (904) the input, in accordance with a determination that the request (e.g., 605a) corresponds to (e.g., is directed to, is selection of, is pointed in a direction of (e.g., a direction of a representation of) , includes reference to, mentions, names, identifies, and/or is configured to be associated with) a second application (e.g., 614) different from the first application, the computer system displays (906) , via the display component, a representation of a second software object (e.g., 620) that corresponds to the second application (e.g., as described above with respect to FIG. 6E) (e.g., and cease displaying the representation of the first software object) .
[0265] In response to receiving (904) the input, in accordance with a determination (908) that the request (e.g., 605a) corresponds to (e.g., is directed to, is selection of, is pointed in a direction of (e.g., a direction of a representation of) , includes reference to, mentions, names, identifies, and/or is configured to be associated with) the first application, the computer system continues (910) displaying, via the display component, the representation of the first software object (e.g., 604) (e.g., as described above with respect to FIGS. 6A-6C) . In some embodiments, in accordance with a determination that the request corresponds to the first application, the computer system ceases displaying, via the display generation component, the representation of the first software object. In some embodiments, in accordance with a determination that the request corresponds to the first application, the computer system displays, via the display generation component, a user interface (e.g., corresponding to the first application) (e.g., with or without concurrently displaying the representation of the first software object) .
[0266] In response to (904) receiving the input, in accordance with the determination (908) that the request corresponds to the first application, the computer system forgoes (912) displaying (e.g., not displaying), via the display component, the representation of the second software object (e.g., 620) that corresponds to the second application (e.g., as described above with respect to FIGS. 6A-6E). Displaying a representation of a particular software object selectively in accordance with a determination that a request corresponds to a first application or a second application allows the computer system to appropriately switch between relevant software objects, thereby providing improved feedback to the user, allowing the computer system to avoid burn-in of the display component, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
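The branching of method 900 described in paragraphs [0264]-[0266] can be sketched as a single selection function: when the request corresponds to a different application, the computer system switches to that application's representation; otherwise it continues displaying the current representation and forgoes the other. This is a minimal illustrative sketch, not the patent's implementation; the function name, dictionary keys, and application names are all hypothetical.

```python
def handle_request(displayed_object, request_app, objects_by_app):
    """Return the software-object representation to display after a request.

    displayed_object: the representation currently displayed (first application).
    request_app: the application the request was determined to correspond to.
    objects_by_app: mapping from application name to its representation.
    """
    current_app = displayed_object["app"]
    if request_app == current_app:
        # Request corresponds to the first application: continue displaying
        # the first representation and forgo displaying the second.
        return displayed_object
    # Request corresponds to a different application: display that
    # application's representation (and cease displaying the first).
    return objects_by_app[request_app]
```

For example, with a hypothetical system object currently displayed, a request corresponding to a second application would cause the function to return the second application's representation instead.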
[0267] In some embodiments, the first software object is a system software object (e.g., as described above with respect to FIGS. 6A-6C) (e.g., corresponding to an operating system process of the computer system and/or not corresponding to an application process of an application installed on the operating system) . In some embodiments, the system software object is a default software object (e.g., that is configured to receive and/or respond to requests by default and/or that is configured to handover control to other software objects when a certain set of conditions is met and/or a context is true) .
[0268] In some embodiments, in conjunction with (e.g., before, after, and/or in response to) displaying the representation of the second software object (e.g., 620) , the computer system causes an operation to be performed (e.g., by the computer system and/or one or more other computer systems) with one or more files (e.g., 616) (e.g., tax returns, images, and/or
documents) corresponding to the request (e.g., 605a) (e.g., as described above with respect to FIGS. 6B-6E) (e.g., send to the second application and/or perform the task). In some embodiments, the request corresponds to a task (e.g., a task requested to be performed that is communicated in and/or represented by the request). In some embodiments, the one or more files correspond to the task (e.g., are related to the task, are relevant to the task, are necessary for completion of the task, can be used in the task, are associated with the task, and/or are requested for use in performing the task, and/or are configured to be used in performing the task). In some embodiments, while continuing displaying the representation of the first software object and forgoing displaying the representation of the second software object, the computer system causes an operation to be performed with one or more files corresponding to the request. Causing an operation to be performed with one or more files corresponding to the request in conjunction with displaying the representation of the second software object allows the computer system to perform a relevant operation in addition to switching a displayed software object representation, thereby providing improved feedback to the user, allowing the computer system to avoid burn-in of the display component, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
[0269] In some embodiments, causing the operation to be performed with the one or more files (e.g., 616) corresponding to the request (e.g., 605a) is performed in response to receiving the input (e.g., 605b) (e.g., as described above with respect to FIGS. 6B-6E) . In some embodiments, the input is permission (e.g., an indication and/or grant of permission) that the one or more files can be shared with an application.
In some embodiments, the input includes (e.g., is and/or represents) a request to perform the task. Causing the operation to be performed with the one or more files corresponding to the request in response to receiving the input allows the computer system to receive confirmation that the operation should be performed, thereby increasing security, providing improved visual feedback to the user, and performing an operation when a set of conditions has been met without requiring further user input.
[0270] In some embodiments, causing the operation to be performed with the one or more files (e.g., 616) corresponding to the request (e.g., 605a) is performed after receiving the input (e.g., 605a) (e.g., as described above with respect to FIGS. 6B-6E) (e.g., in response to, subsequent to and not in response to, while performing and/or as part of the task being performed, and/or in response to a different input that is received subsequent to the input) . In some embodiments, the operation is performed in response to a second input different from the input. In some embodiments, the second input is permission (e.g., an indication and/or grant of permission) that the one or more files can be shared with an application. In some embodiments, the second input includes (e.g., is and/or represents) a request to perform the task.
[0271] In some embodiments, in response to receiving the input and in accordance with a determination that the request (e.g., 605a) cannot be performed by the first software object (e.g., needed data is private and/or not accessible to the first software object, security credentials and/or a choice of application is needed, and/or the first software object does not include and/or have access to functionality for performing the request (e.g., a task specified and/or represented therein) ) , the computer system outputs, via the one or more output devices, a prompt (e.g., 612) (e.g., audio, text,
and/or image) (e.g., a user interface element and/or control) (e.g., a tone and/or voice prompt) for a second input (e.g., 605b) (e.g., as described above with respect to FIG. 6B). In some embodiments, the representation of the first software object continues to be displayed while and/or during a portion of time during which the prompt is output. In some embodiments, the representation of the first software object is no longer displayed (e.g., ceases being displayed and/or is overlaid) while and/or during a portion of time during which the prompt is output. In some embodiments, the second input is permission (e.g., an indication and/or grant of permission) that the one or more files can be shared with an application. In some embodiments, the second input includes (e.g., is and/or represents) a request to perform the task. Outputting a prompt for a second input in accordance with a determination that the request cannot be performed by the first software object allows the computer system to communicate to the user that it is unable to perform the operation with the current software object, thereby providing improved visual feedback to the user, increasing security, and performing an operation when a set of conditions has been met without requiring further user input.
[0272] In some embodiments, the prompt (e.g., 612) for the second input (e.g., 605b) is a request for permission to launch the first application (e.g., as described above with respect to FIG. 6B). In some embodiments, the computer system receives the second input. In some embodiments, in response to receiving the second input and in accordance with a determination that the second input corresponds to permission to launch the first application, the computer system causes the first application to be launched. In some embodiments, in response to receiving the second input and in accordance with a determination that the second input does not correspond to
permission to launch the first application (e.g., corresponds to denial of permission), the computer system forgoes causing the first application to be launched. The prompt including a request for permission to launch the first application allows the computer system to perform the operation when the user provides an input for approval, thereby increasing security and providing improved visual feedback to the user.
[0273] In some embodiments, the prompt (e.g., 612) for the second input (e.g., 605b) is a request for permission to share data (e.g., 616) (e.g., using one or more files related to the task) with the first application (e.g., as described above with respect to FIG. 6B). In some embodiments, the computer system receives the second input. In some embodiments, in response to receiving the second input and in accordance with a determination that the second input corresponds to permission to share data with the first application, the computer system causes data to be shared with the first application. In some embodiments, in response to receiving the second input and in accordance with a determination that the second input does not correspond to permission to share data with the first application (e.g., corresponds to denial of permission), the computer system forgoes sharing data with the first application. The prompt including a request for permission to share data with the first application allows the computer system to perform the operation when the user allows data to be used in the operation, thereby increasing security and providing improved visual feedback to the user.
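The permission flow of paragraphs [0271]-[0273] (prompting before launching an application and before sharing files with it, and forgoing each step when permission is denied) can be sketched as a function that gates each action on a prompt. This is a hedged sketch under assumed callback-style interfaces; none of the function or parameter names come from the patent.

```python
def perform_with_permission(request_permission, launch_app, share_files, files):
    """Prompt for permission and only act when it is granted.

    request_permission: callable taking a prompt kind ("launch" or "share")
        and returning True when the user grants permission.
    launch_app: callable that launches the application and returns a handle.
    share_files: callable that shares the given files with the launched app.
    files: files corresponding to the request (e.g., documents for a task).
    """
    if not request_permission("launch"):
        return None  # Permission denied: forgo launching the application.
    app = launch_app()
    if request_permission("share"):
        share_files(app, files)  # Share data only after explicit approval.
    return app
```

In use, denying the "launch" prompt means neither the launch nor the data sharing occurs, mirroring the forgo behavior described above.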
[0274] In some embodiments, before displaying the representation of the second software object (e.g., 620) that corresponds to the second application, the computer system obtains a response (e.g., 605b) (e.g., answer to the task and/or information regarding the task) corresponding to (e.g., related to, in reference to, and/or for performing and/or
satisfying) a task specified in the request (e.g., 605a) (e.g., as described above with respect to FIGS. 6B-6E). In some embodiments, the response is provided to the computer system (e.g., by a user and/or by another computer system). In some embodiments, the computer system receives the response (e.g., via the one or more input devices). In some embodiments, the response is a response to a prompt output by the computer system. In some embodiments, the response is included in the input representing the request. In some embodiments, the response is a new input (e.g., different from the input) that corresponds to an outputted prompt. Obtaining a response corresponding to the task specified in the request before displaying the representation of the second software object that corresponds to the second application allows the computer system to have a relevant response before changing software objects, thereby providing improved user feedback and providing additional control options without cluttering the user interface with additional displayed controls.
[0275] In some embodiments, obtaining the response (e.g., 605b) related to the detected input specified in the request (e.g., 605a) includes detecting, via the one or more input devices, a first request (e.g., 605a and/or 605b) (e.g., permission and/or approval) (e.g., received via input from a user (e.g., verbal input and/or physical input)) to share information with the second software object (e.g., as described above with respect to FIG. 6B). Obtaining the response that includes a request to share information with the second software object allows the computer system to acquire needed approval before changing software objects, thereby increasing security and providing improved visual feedback to the user.
[0276] In some embodiments, the obtained response (e.g., 612 and/or 624) is a request (e.g., a prompt and/or control) to display the representation of the second software object
(e.g., 620) that corresponds to the second application (e.g., as described above with respect to FIGS. 6B-6E). Obtaining the response that includes a request to display the representation of the second software object allows the computer system to acquire needed approval before changing software objects, thereby increasing security and providing improved visual feedback to the user.
[0277] In some embodiments, in response to receiving the input (e.g., 605a and/or 605b) and in accordance with a determination that the first software object cannot perform the request (e.g., as described above with respect to FIGS. 6B-6E) (e.g., and/or a task specified in the request) (e.g., needed data is private and/or not accessible to the first software object, security credentials, and/or the first software object does not include and/or have access to functionality for performing the request (e.g., a task specified and/or represented therein) ) , the computer system ceases displaying, via the display component, the representation of the first software object (e.g., 604) (e.g., as described above with respect to FIGS. 6C-6E) . In some embodiments, in response to receiving the input and in accordance with the determination that the first software object cannot perform the request, the computer system displays (and/or initiating display of) , via the display component, the representation of the second software object (e.g., 620) (e.g., as described above with respect to FIG. 6E) . In some embodiments, a task to be performed includes (and/or is included in) a function, operation, application, and/or process to be performed by (e.g., requested to be performed by and/or determined to be performed by) the computer system. In some embodiments, initiating display includes outputting a prompt requesting permission (e.g., to display and/or share data with the second software object
and/or representation thereof). Ceasing displaying the representation of the first software object and displaying the representation of the second software object when the first software object cannot perform a request allows the computer system to automatically change the software object to one that can perform the request, thereby providing increased visual feedback to the user, reducing the number of inputs needed to perform an operation, and performing an operation when a set of conditions has been met without requiring further user input.
[0278] In some embodiments, the representation of the first software object (e.g., 604) includes (e.g., corresponds to, has, and/or is represented by) a first set of one or more visual characteristics (e.g., avatar (e.g., shape, color, and/or features)) and wherein the representation of the second software object (e.g., 620) includes (e.g., corresponds to, has, and/or is represented by) a second set of one or more visual characteristics different from the first set of one or more visual characteristics (e.g., as described above with respect to FIGS. 6A-6E). In some embodiments, the representation of the first software object is a first avatar (e.g., that has the first set of one or more visual characteristics). In some embodiments, the representation of the second software object is a second avatar (e.g., that has the second set of one or more visual characteristics). In some embodiments, a visual characteristic includes color, shape, movements, mannerisms, facial features, and/or other characteristics affecting visual appearance. The representation of the first software object and the representation of the second software object having differing visual characteristics allows the computer system to display visually different software object representations and provide the user with visual confirmation that a software object has changed,
thereby performing an operation when a set of conditions has been met without requiring further input and providing improved visual feedback to the user.
[0279] In some embodiments, the representation of the first software object (e.g., 604) includes (e.g., corresponds to, has, and/or is represented by) a first set of one or more audio characteristics (e.g., tone, pitch, accent, voice, and/or volume) and wherein the representation of the second software object (e.g., 620) includes (e.g., corresponds to, has, and/or is represented by) a second set of one or more audio characteristics different from the first set of one or more audio characteristics (e.g., as described above with respect to FIGS. 6B-6E). In some embodiments, the computer system outputs a first audio output corresponding to (e.g., synchronized with, attributable to, and/or appearing to emanate from) the first software object using the first set of one or more audio characteristics (e.g., and not the second set of one or more audio characteristics). In some embodiments, the computer system outputs a second audio output corresponding to (e.g., synchronized with, attributable to, and/or appearing to emanate from) the second software object using the second set of one or more audio characteristics (e.g., and not the first set of one or more audio characteristics). Outputting different audio outputs that are dependent on which software object is displayed allows the computer system to provide audio feedback to the user with specific audio characteristics to distinguish the different software objects, thereby performing an operation when a set of conditions has been met without requiring further input and providing improved feedback to the user.
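Paragraphs [0278]-[0279] describe each software-object representation carrying its own visual and audio characteristic sets, so that switching representations changes both appearance and audio output. A minimal sketch of such per-object characteristic sets follows; the class and field names (ObjectStyle, color, voice, pitch, etc.) are illustrative assumptions, not terms from the patent.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ObjectStyle:
    """Hypothetical per-representation characteristic set."""
    color: str    # visual characteristic
    shape: str    # visual characteristic
    voice: str    # audio characteristic
    pitch: float  # audio characteristic


def audio_settings(style):
    # Select the audio characteristics used for output while this
    # representation is displayed (and not the other object's set).
    return {"voice": style.voice, "pitch": style.pitch}


# Two representations with differing visual and audio characteristic sets.
first_object = ObjectStyle(color="blue", shape="orb", voice="alto", pitch=1.0)
second_object = ObjectStyle(color="green", shape="ring", voice="tenor", pitch=0.8)
```

Because the two instances differ in both sets, a renderer and an audio pipeline keyed off the displayed object would automatically distinguish the objects for the user.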
[0280] In some embodiments, the first software object (e.g., 604) corresponds to a first application (e.g., a first type of application) (e.g., a software application and/or knowledge
base). In some embodiments, the second software object (e.g., 620) corresponds to a second application (e.g., 614) (e.g., the first type of application or a different type of application) (e.g., a software application and/or knowledge base) different from the first application (e.g., as described above with respect to FIGS. 6B-6E). In some embodiments, the first application and/or the second application include overlapping information. In some embodiments, the first application and the second application do not include overlapping information. The first software object and the second software object corresponding to different applications allows the computer system to display different software objects corresponding to different applications, thereby providing improved feedback to the user and providing additional control without cluttering the user interface with additional displayed controls.
[0281] Note that details of the processes described above with respect to method 900 (e.g., FIG. 9) are also applicable in an analogous manner to the methods described herein. For example, method 700 optionally includes one or more of the characteristics of the various methods described herein with reference to method 900. For example, displaying the representation of the second software object, in accordance with the determination that the request corresponds to the second application, of method 900 can occur in conjunction with (e.g., while, after, and/or in response to) the movement of method 700. For brevity, these details are not repeated herein .
[0282] FIGS. 10A-10I illustrate exemplary user interfaces for directional inputs that change a mode of a device in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 11-12.
[0283] FIGS. 10A-10I illustrate computer system 600 as a smart phone. It should be recognized that computer system 600 can be different types of computer systems, such as a smart display, a television, a tablet, a smart watch, a laptop, a communal device, a smart speaker, a personal gaming system, a desktop computer, a fitness tracking device, and/or a head-mounted display (HMD) device. In some embodiments, computer system 600 includes and/or is in communication with one or more output devices (e.g., a display component (e.g., a display screen, a projector, and/or a touch-sensitive display), an audio component (e.g., smart speaker, home theater system, soundbar, headphone, earphone, earbud, speaker, television speaker, augmented reality headset speaker, audio jack, optical audio output, Bluetooth audio output, and/or HDMI audio output), and/or a haptic output component). In some embodiments, computer system 600 includes and/or is in communication with one or more input devices (e.g., a sensor, a camera, a lidar detector, a motion sensor, an infrared sensor, a touch-sensitive surface, a physical input mechanism (such as a button or a slider), and/or a microphone). Such input devices can be used to detect presence of, attention of, statements from, inputs corresponding to, requests from, and/or instructions from a user in an environment. It should be recognized that, while some embodiments described herein refer to inputs being voice inputs, other types of inputs can be used with techniques described herein, such as touch inputs via a touch-sensitive surface and air gestures detected via a camera. In some embodiments, computer system 600 includes and/or is in communication with one or more movement components (e.g., an actuator, a moveable base, a rotatable component, and/or a rotatable base).
In such embodiments, the one or more movement components, as discussed above, can be used to change a position (e.g., location and/or orientation) of computer system 600 and/or a portion (e.g., including at
least one output device of the one or more output devices and/or at least one input device of the one or more input devices) of computer system 600. In some embodiments, computer system 600 is, includes, implements, and/or is in communication with one or more software systems, as described above with respect to FIG. 5, for performing (and/or causing performance of) one or more operations of one or more software objects. For example, computer system 600 can include, implement, and/or be in communication with a software object. In some embodiments, computer system 600 is a personal device of a user and/or associated with an account of the user. In other embodiments, computer system 600 is a communal device that is not associated with a single account of a user and, in some embodiments, caters to whoever is using the communal device (e.g., logged into and/or detected via an input device) .
[0284] Some of FIGS. 10A-10I illustrate computer system 600 displaying a representation 604 of the software object in different user interfaces. In these figures, representation 604 is a simple shape. It should be recognized that, in some embodiments, representation 604 includes an animated face that has eyes and a mouth. For example, representation 604 can include a face that is directed toward the user. It should be recognized that, in some embodiments, representation 604 includes other aspects, such as a body, clothes, and/or a hat. As discussed in the embodiments below, computer system 600 displays representation 604 moving and/or changing expression in response to making determinations, detecting inputs from a user, and/or outputting content. In some embodiments, the software object provides, via one or more movements of the animated face, outputs and/or responses to a user.
[0285] FIGS. 10A-10H illustrate computer system 600 in a higher-power mode. In a higher-power mode, computer system 600
behaves differently than in a lower-power mode in terms of accepted inputs, brightness level, display of user interfaces, and/or appearance of representation 604. In some embodiments, while in a higher-power mode, computer system 600 detects, accepts, responds to, and/or acknowledges audio, gaze, verbal, and/or touch inputs. In some embodiments, while in a lower-power mode, computer system 600 does not detect, accept, respond to, and/or acknowledge audio, gaze, verbal, and/or touch inputs. In some embodiments, in a higher-power mode, computer system 600 physically moves a portion of computer system 600 and/or visually moves representation 604 to follow movements (e.g., the eyes of representation 604 moving in the direction of a user) in response to detecting the user moving around an environment as a way of tracking the user. In some embodiments, while in a lower-power mode, computer system 600 does not physically move the portion of computer system 600 and/or does not visually move representation 604 to track the movements of the user in response to detecting the user moving around the environment. Further discussion of a higher-power mode and a lower-power mode is provided below.
[0286] At FIG. 10A, computer system 600 displays user interface 602. User interface 602 includes representation 604 at an initial size (e.g., a large square) while computer system 600 is in an initial state, such as a higher-power mode. Stated differently, at FIG. 10A, representation 604, while computer system 600 is in the initial state, is displayed by computer system 600 in a large size that takes up more of user interface 602 than representation 604 in user interface 1000 as described below. In some embodiments, computer system 600 waits for an event to occur during the initial state. In some embodiments, the event
is a notification, output of content, and/or an input by a user. At FIG. 10A, computer system 600 detects input 1005a on representation 604. In some embodiments, input 1005a is a tap input and/or another type of input, such as a tap-and-hold input, a voice input, a hand gesture, and/or a gaze input, and/or is detected at a different location, such as a location not directed to representation 604.
[0287] As illustrated in FIG. 10B, in response to detecting input 1005a, computer system 600 displays user interface 1000. User interface 1000 includes a minimized representation of representation 604 in the bottom left corner of user interface 1000. User interface 1000 also includes representations of previous interactions. In some embodiments, an interaction is a previous request and/or output of information (e.g., that was requested by a user) by the software object and/or an application different from the software object (such as a user application executing on computer system 600 and/or another computer system different from computer system 600). For example, an interaction corresponding to user interface element 1004 can be a response from computer system 600 to a user's request for computer system 600 to play a song. As illustrated in FIG. 10B, user interface 1000 includes user interface element 1004 in the bottom right corner of user interface 1000, user interface element 1002 (e.g., a representation of an interaction corresponding to California) in the top right corner of user interface 1000, and user interface element 1006 (e.g., a representation of an interaction corresponding to Texas) in the top left corner of user interface 1000. It should be recognized that such positions and/or appearances of different representations in user interface 1000 as illustrated in FIG. 10B are just an example and that other positions and/or appearances can be used with techniques described herein. In some embodiments,
positions of different representations correspond to when interactions are detected. For example, computer system 600 can display representations in an order that corresponds to the recency of interactions of respective representations in a counterclockwise direction with the representation having the least recently detected interaction in the bottom right corner of user interface 1000 and the representation having the most recently detected interaction in the top left corner of user interface 1000. In some embodiments, as computer system 600 detects new interactions, computer system 600 ceases to display the representation of the least recent interaction to make room for a new representation.
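As a non-limiting illustration, the recency-ordered corner layout described above (least recent in the bottom right, most recent in the top left, counterclockwise, with the oldest representation ceasing to be displayed when a new interaction arrives) can be sketched as follows. The slot names and the `layout` helper are hypothetical:

```python
# Illustrative sketch of the recency-ordered layout for interaction
# representations. The bottom-left corner is reserved for the minimized
# software-object representation and is therefore not a slot here.
SLOTS = ["bottom-right", "top-right", "top-left"]  # least recent -> most recent

def layout(interactions):
    """Map the most recent interactions to corner slots.

    `interactions` is ordered oldest to newest; only the last len(SLOTS)
    entries remain visible -- older representations cease to be displayed.
    """
    visible = interactions[-len(SLOTS):]
    # The least recent visible interaction fills the first slot.
    return dict(zip(SLOTS, visible))
```

For example, with four interactions detected in order, the oldest one is evicted and the remaining three fill the corners from bottom right to top left.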
[0288] At FIG. 10B, computer system 600 detects input 1005b on representation 604 towards user interface element 1002. In some embodiments, input 1005b is a tap-and-drag input and/or another type of input that is detected while displaying representation 604. For example, input 1005b can be a type of touch and/or air gesture that begins at a location corresponding to (e.g., at or directed towards) representation 604, proceeds towards user interface element 1002, and/or ends at or near user interface element 1002. In some embodiments, an input contains an indication corresponding to one of user interface element 1006, user interface element 1002, user interface element 1004, and/or another user interface element. For example, the input can include a verbal input, speech, and/or another input that indicates one of user interface element 1006, user interface element 1002, user interface element 1004, and/or another user interface element.
[0289] As illustrated in FIG. 10C, in conjunction with (e.g., while, in response to, and/or after) detecting input 1005b, computer system 600 displays representation 604 partially on top of user interface element 1002. In some embodiments, computer system 600 displays representation 604 as moving with
input 1005b until release of input 1005b. In other embodiments, computer system 600 does not display representation 604 as moving with input 1005b and only displays representation 604 at a different location than illustrated in FIG. 10B when computer system 600 detects release of input 1005b. In some embodiments, representation 604 on top of user interface element 1002 signifies that outputs from computer system 600 in response to one or more future inputs will be catered to and/or take into account one or more characteristics of an interaction corresponding to user interface element 1002, which will be discussed further below. At FIG. 10C, computer system 600 detects release of input 1005b.
[0290] As illustrated in FIG. 10D, in response to detecting the release of input 1005b, computer system 600 returns representation 604 to the bottom left corner of user interface 1000 and displays representation 604 as an acute triangle pointing (e.g., focused, appearing to look, to be facing, and/or with eyes directed) in the direction of user interface element 1002. In some embodiments, representation 604 focused in the direction of user interface element 1002 indicates that computer system 600 will base one or more future outputs on user interface element 1002 and/or an interaction corresponding to user interface element 1002. In some embodiments, instead of returning representation 604 to the bottom left corner of user interface 1000, computer system 600 maintains display of representation 604 at a location of release of input 1005b (e.g., as illustrated in FIG. 10C). At FIG. 10D, computer system 600 detects verbal input 1005d "What is the weather?". It should be recognized that other types of inputs can be detected via computer system 600 to cause computer system 600 to respond, such as a touch input detected
via a touch-sensitive surface and/or an air gesture detected via a camera.
[0291] At FIG. 10E, in response to detecting verbal input 1005d, computer system 600 displays user interface element 1008 in the bottom right corner of user interface 1000. User interface element 1008 corresponds to verbal input 1005d and includes a representation of the current temperature in California (e.g., 62°). To note, verbal input 1005d did not include an indication of California. Instead, the release of input 1005b at a location corresponding to user interface element 1002 provided additional context to clarify that verbal input 1005d corresponds to California and not another location. For example, computer system 600 can display the representation of the current temperature in California in response to the user's request for the temperature after being directed towards the user interface element corresponding to California (e.g., user interface element 1002). Therefore, computer system 600 outputs user interface element 1008 to indicate to the user that it is 62° in California. It should be recognized that display of user interface element 1008 is only one example of an output to indicate the current temperature in California and that other types of outputs can be used in addition to or instead of user interface element 1008, such as audio output that includes a verbal indication of 62°. As illustrated in FIG. 10E, computer system 600 displays representation 604 as focused (e.g., pointing, looking, facing, and/or with eyes directed) in the direction of user interface element 1008 (e.g., instead of user interface element 1002). Representation 604 focused at user interface element 1008 serves as an acknowledgement of the presence of user interface element 1008 and draws the attention of the user to user interface element 1008. In some embodiments, computer system 600 maintains display of
representation 604 in the direction of user interface element 1002 instead of changing to focus at user interface element 1008 to indicate that one or more future outputs will continue to be based on user interface element 1002. In some embodiments, computer system 600 changing where representation 604 is focused is an indication that one or more future outputs will no longer be based on user interface element 1002. As a result of displaying user interface element 1008, computer system 600 rotates the display of other representations in user interface 1000 to indicate recency of interactions corresponding to such representations. Specifically, computer system 600 rotates the display of user interface element 1006, user interface element 1002, and user interface element 1004 in a counterclockwise direction to make space for and/or indicate recency of user interface element 1008. As a result of the rotation, there is no space left on user interface 1000 for user interface element 1006, and, in response, computer system 600 ceases to display user interface element 1006. At FIG. 10E, computer system 600 detects input 1005e in an upwards direction from the bottom of user interface 1000. In some embodiments, input 1005e is a swipe input. It should be recognized that other types of inputs can be used to change user interfaces as described below, such as a verbal request.
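As a non-limiting illustration, the disambiguation described for FIGS. 10C-10E, in which "What is the weather?" names no location and the missing detail is supplied by the interaction representation that representation 604 was dragged toward, can be sketched as follows. The `resolve_location` helper and its dictionary keys are hypothetical:

```python
# Illustrative sketch: a location spoken in the request takes priority;
# otherwise the location is borrowed from the context of the previously
# focused interaction representation (e.g., the California element 1002).
def resolve_location(spoken_location, focused_context):
    """Resolve the location slot of a weather request."""
    if spoken_location is not None:
        return spoken_location
    if focused_context is not None:
        return focused_context.get("location")
    return None  # no context available; the system might ask a follow-up
```

Under this sketch, a request that omits the location resolves to California when element 1002 is focused, while an explicitly spoken location overrides the focused context.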
[0292] As illustrated in FIG. 10F, in response to detecting input 1005e, computer system 600 displays user interface 1010, which is a home screen interface with representations of applications of computer system 600. User interface 1010 includes two rows of representations of applications (e.g., sometimes referred to as applications 1012) and an enlarged representation of representation 604 on the right side of user interface 1010. In some embodiments, applications 1012 are programs saved within computer system 600 and differ from
interaction representations in that (1) interaction representations are based on interactions between the user and computer system 600 while applications 1012 are applications accessible by computer system 600 and/or (2) interaction representations are updated over time with updated content without detecting user input (e.g., such as a widget is updated) and/or change their order while applications 1012 do not update over time and/or do not change their order. It should be recognized that the appearance and/or location of applications 1012 and/or representation 604 can be different than as illustrated in FIG. 10F. At FIG. 10F, computer system 600 detects input 1005f1 (e.g., a tap input) on photos application representation 1012b and input 1005f2 (e.g., a swipe input) in an upwards direction from the bottom of user interface 1010. It should be recognized that, while input 1005f1 and input 1005f2 are illustrated together, computer system 600 can detect each input separately and/or independently to cause functionality as described below. It should also be recognized that such inputs are examples of types of inputs to cause such functionality though different types of inputs can be used with techniques described herein.
[0293] As illustrated in FIG. 10G, in response to detecting input 1005f1 on photos application representation 1012b, computer system 600 displays user interface 1014. User interface 1014 is (1) an interface of a photos application (e.g., corresponding to photos application representation 1012b) and (2) includes display of various photo albums. Note that computer system 600 does not display representation 604 in user interface 1014 as illustrated in FIG. 10G. In some embodiments, computer system 600 temporarily displays representation 604 and/or a representation of representation 604 (e.g., a glow or other visual representation) within user interface 1014 to indicate an area of user interface 1014 that
can be used to interact with the software object (e.g., the temporary display can be located on a bottom edge of user interface 1014, indicating that, in response to detecting a tap input on the bottom edge of user interface 1014, computer system 600 can display representation 604 of the software object and/or configure one or more future inputs to be directed to the software object) . At FIG. 10G, computer system 600 detects input 1005g in an upwards direction from the bottom of user interface 1014. It should be recognized that, while input 1005g is illustrated as a swipe input, such an input is an example of a type of input to cause such functionality though different types of inputs can be used with techniques described herein.
[0294] As illustrated in FIG. 10H, in response to detecting input 1005g at FIG. 10G and/or input 1005f2 (e.g., and not input 1005f1) at FIG. 10F, computer system 600 displays user interface 1000 with representation 604 in a default position (e.g., having an appearance of being focused and/or looking straight ahead and/or in a direction of the user, as represented by a square in FIG. 10H). At FIG. 10H, computer system 600 detects hold-and-drag input 1005h2 on representation 604 toward the middle of user interface 1000. In response to detecting hold-and-drag input 1005h2, computer system 600 displays user interface 602 as illustrated in FIG. 10A with representation 604 in the maximized form in the initial state. In some embodiments, computer system 600 detects a hold-and-drag input directed to representation 604 in the direction of a representation of an interaction. An input dragging representation 604 to a representation of an interaction maintains the display of user interface 1000 as illustrated in FIG. 10H. For example, if computer system 600 detects an input dragging representation 604 to user interface element 1008, computer system 600 can display
representation 604 partially on top of user interface element 1008 and, in some embodiments, appearing to be focused in the direction of user interface element 1008. At FIG. 10H, as an alternative to detecting hold-and-drag input 1005h2, computer system 600 detects hold-and-drag input 1005h1 on representation 604 toward the bottom of user interface 1000. At FIG. 10H, as an alternative to detecting hold-and-drag input 1005h2 and/or hold-and-drag input 1005h1, computer system 600 detects input 1005h3 (e.g., a tap input) on representation 604. In some embodiments, in response to input 1005h3, computer system 600 maintains user interface 1000, such as at any one or more of FIGS. 10B-10E and/or 10H.
[0295] As illustrated in FIG. 10I, in response to detecting hold-and-drag input 1005h1, computer system 600 enters a lower-power mode (e.g., instead of maintaining a higher-power mode and/or changing display to user interface 602 in response to detecting hold-and-drag input 1005h2). In some embodiments, in a lower-power mode, computer system 600 changes in terms of accepted inputs, brightness level, display of user interfaces, and/or appearance of representation 604. In some embodiments, in a lower-power mode, computer system 600 does not accept and/or acknowledge audio, gaze, and/or touch inputs but does accept and/or acknowledge verbal inputs. For another example, in a lower-power mode, computer system 600 decreases the brightness level of the display, ceases to display user interfaces that computer system 600 displayed while in a higher-power mode, and/or alters the appearance of representation 604 (e.g., size and/or facial expression) from its appearance in a higher-power mode. In some embodiments, whether or not computer system 600 accepts an input depends on the location of the input. For example, computer system 600 can accept an input directed to representation 604 and reject an input that is directed to a location other than the
location of representation 604. In some embodiments, computer system 600 provides a haptic output and/or auditory output that signifies that computer system 600 has entered a lower-power mode. For example, haptic and/or auditory outputs can include a vibration, a beep, and/or a verbal output from computer system 600. As illustrated in FIG. 10I, in response to entering a lower-power mode, computer system 600 displays user interface 1016. User interface 1016 is lower in brightness (e.g., as indicated by the dashed lines) than the user interfaces of FIGS. 10A-10H, which are in a higher-power mode. As illustrated in FIG. 10I, user interface 1016 does not include display of representations of previous interactions (e.g., user interface element 1008 as illustrated in FIG.
10H). However, user interface 1016 does include a date and a time indicator as current temporal information 1018 in the top left corner and representation 604 in the bottom left corner. It should be noted that, in some embodiments, computer system 600 does not display current temporal information 1018 while in a higher-power mode and/or in user interfaces as described above (e.g., FIGS. 10B-10E and/or 10H). In some embodiments, computer system 600 does display current temporal information 1018 while in a higher-power mode. At FIG. 10I, computer system 600 displays representation 604 with the appearance of an equilateral triangle to indicate that computer system 600 is in a lower-power mode. In some embodiments, while in the lower-power mode, computer system 600 accepts (e.g., responds to and/or detects) one or more inputs (e.g., inputs of a first type). For example, computer system 600 can accept some inputs and change modes. For example, computer system 600 can proceed from user interface 1016 to another user interface, such as any one or more of those described herein. In some embodiments, while in the lower-power mode, computer system 600 rejects (e.g., does not respond to and/or does not detect) one or more inputs (e.g., inputs of a second type). For
example, computer system 600 can reject some inputs and forgo changing modes. As such, in some embodiments, only some inputs (e.g., inputs of a first type) can change a mode of computer system 600.
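As a non-limiting illustration, the mode-dependent input acceptance described in [0295] can be sketched as follows. Combining the type-based condition and the location-based condition with a single `or` is an assumption for the sketch; the disclosure describes them as separate embodiments, and all names are hypothetical:

```python
# Illustrative sketch: in the lower-power mode only certain input types
# are accepted, and (in some embodiments) an input may also be accepted
# because it is directed at representation 604.
ACCEPTED_IN_LOWER_POWER = {"verbal"}
ACCEPTED_IN_HIGHER_POWER = {"verbal", "touch", "gaze", "audio"}

def accepts(mode, input_type, directed_at_representation):
    """Decide whether an input is acknowledged in the given mode."""
    if mode == "higher-power":
        return input_type in ACCEPTED_IN_HIGHER_POWER
    # Lower-power mode: gate on input type, with a location-based exception.
    return input_type in ACCEPTED_IN_LOWER_POWER or directed_at_representation
```

In this sketch, a touch input away from representation 604 is rejected in the lower-power mode but accepted in the higher-power mode, while a verbal input is accepted in both.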
[0296] In some embodiments, in response to detecting an accepted input (e.g., as described above), such as input 1005i, while in a lower-power mode (e.g., while displaying user interface 1016), computer system 600 reverts to a higher-power mode (e.g., raises brightness and/or changes an appearance of representation 604) and/or displays user interface 602 as illustrated in FIG. 10A or user interface 1000 as illustrated in FIG. 10H. In some embodiments, computer system 600 enters a lower-power mode from a higher-power mode in response to detecting inactivity for a predefined period of time (e.g., 10 minutes) while in the higher-power mode. For example, inactivity can include not detecting touch, verbal, and/or air input for the predefined period of time. For example, while displaying a user interface such as illustrated in FIG. 10C, if computer system 600 does not detect an input for a certain amount of time (e.g., detects an event in which no input is detected for the certain amount of time), computer system 600 can enter a lower-power mode and/or display user interface 1016 as illustrated in FIG. 10I.
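As a non-limiting illustration, the inactivity transition described in [0296] can be sketched as follows, using the example 10-minute period. Timestamps are plain seconds and all names are hypothetical:

```python
# Illustrative sketch: after a predefined period with no detected input
# while in the higher-power mode, the system enters the lower-power mode.
INACTIVITY_TIMEOUT_S = 10 * 60  # e.g., 10 minutes

def next_mode(mode, last_input_time, now):
    """Return the mode the system should be in at time `now`."""
    if mode == "higher-power" and now - last_input_time >= INACTIVITY_TIMEOUT_S:
        return "lower-power"
    return mode
```

Each detected input would reset `last_input_time`, so the transition fires only after a full uninterrupted period of inactivity.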
[0297] FIG. 11 is a flow diagram illustrating a process (e.g., process 1100) for managing modes of a computer system based on inputs at a representation of a system software object in accordance with some embodiments. Some operations in process 1100 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted .
[0298] As described below, process 1100 provides an intuitive way for managing modes of a computer system based on inputs at a representation of a system software object. Process 1100 reduces the cognitive burden on a user, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to interact with such devices faster and more efficiently conserves power and increases the time between battery charges.
[0299] In some embodiments, process 1100 is performed at a computer system (e.g., 600) that is in communication (e.g., wired communication and/or wireless communication) with one or more display components (e.g., a display screen, a projector, and/or a touch-sensitive display) and one or more input devices (e.g., a touch-sensitive surface, a camera, a depth sensor, a hardware input mechanism, and/or a rotatable input mechanism) . In some embodiments, the computer system is a watch, a phone, a tablet, a fitness tracking device, a processor, a head-mounted display (HMD) device, a communal device, a media device, a speaker, a television, and/or a personal computing device.
[0300] While (e.g., after and/or in response to) displaying (e.g., at FIGS. 10B-10F and/or 10H) , via the one or more display components, a representation (e.g., 604) (e.g., a graphical representation (e.g., a software object (e.g., a stylized humanoid and/or nonhumanoid representation) , a computer system selected representation, a generated representation, an emoji, a video, and/or an image) , a video, and/or an animation) of a software object (e.g., as described above with respect to FIG. 5) and while the computer system is in a first mode (e.g., as described above with respect to 602, 1000, and/or 1010) (e.g., a first power mode, a higher-power mode as compared to a lower-power mode, a lower-power mode as compared to the higher-power mode, an active display mode, a
disabled display mode, a mode in which the computer system causes a portion (e.g., that includes one or more input devices and/or one or more output devices of the computer system) of the computer system to follow a user, and/or a mode in which the computer system does not cause the portion of the computer system to follow a user) , the computer system detects (1102) , via the one or more input devices, an input (e.g., 1005b, 1005h1, and/or 1005h2) (e.g., a swipe input and/or a non-swipe input (e.g., a verbal input, an audible request, an audible command, an audible statement, a tap input, a hold-and-drag input, a gaze input, an air gesture, mouse movement, and/or a mouse click) ) . In some embodiments, the computer system detects an input when the computer system is in another mode (e.g., a second power mode, such as a lower-power mode as compared to the first power mode) . In some embodiments, the computer system forgoes detection of a set of one or more types of inputs when the computer system is in the other mode.
[0301] In response to (1104) detecting the input, in accordance with a determination that the input starts at a location corresponding to the representation (e.g., overlapping the representation) of the software object and proceeds in a first direction (e.g., as described with respect to 1005h1) (e.g., away from the center of the user interface, toward a particular side of the user interface, and/or away from one or more displayed items) (and/or proceeds to a second location, different from the location, at which point the computer system detects release and/or termination of the input) , the computer system changes (1106) (e.g., at FIG. 10I) the computer system to be operated in (and/or causes the computer system to be in) a second mode (e.g., as described with respect to 1016) (e.g., a second power mode, such as a lower-power mode as compared to the first mode) different from the first mode. In some embodiments, in response to detecting
the input and in accordance with the determination that the input starts at the location corresponding to the representation of the software object and proceeds in the first direction, the computer system displays, via the display component, a user interface corresponding to (e.g., of, particular to, and/or reflective of) the second mode. In some embodiments, the representation of the software object is a first representation of the software object. In some embodiments, in response to detecting the input (and/or in accordance with the determination that the input starts at the location corresponding to the representation of the software object and proceeds in the first direction) , the computer system displays, via the display component, a second representation of the software object different from the first representation of the software object (e.g., updates the first representation of the software object to the second representation of the software object (e.g., in conjunction with (e.g., before, while, and/or after) ceasing display of the first representation of the software object) ) .
[0302] In response to (1104) detecting the input, in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction (e.g., as described with respect to 1005b and/or 1005h2) different from the first direction (e.g., towards the center of the user interface, away from a particular side of the user interface, and/or toward one or more displayed items) (and/or proceeds to a third location, different from the location and the second location, at which point the computer system detects release and/or termination of the input) , the computer system maintains (1108) (e.g., as described with respect to FIGS.
10B-10F and/or 10H) the computer system in the first mode. In some embodiments, in response to detecting the input and in
accordance with the determination that the input starts at the location corresponding to the representation of the software object and proceeds in the second direction, the computer system displays, via the display component, a user interface corresponding to (e.g., of, particular to, and/or reflective of) a displayed item that is within the second direction. In some embodiments, in response to the input (and/or in accordance with the determination that the input starts at the location corresponding to the representation of the software object and proceeds in the second direction), the computer system displays, via the display component, a third representation of the software object different from the first representation of the software object (and/or the second representation of the software object) (e.g., updates the first representation of the software object to the third representation of the software object (e.g., in conjunction with (e.g., before, while, and/or after) ceasing display of the first representation of the software object)). Changing a computer system to be operated in either a first mode or a second mode in accordance with a determination that an input proceeds in either a first direction or a second direction allows the computer system to alter its operation in response to user inputs, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or providing improved visual feedback to the user.
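The branching behavior of paragraphs [0301]-[0302] can be sketched in code. The following Python sketch is illustrative only: the class name, mode names, coordinate convention, and the choice of "downward" as the first direction are all assumptions rather than details from the disclosure.

```python
# Illustrative sketch: a drag that starts on the representation of the
# software object and proceeds in a first direction switches the system
# to a second mode; a drag in a second direction maintains the first
# mode. All names and coordinates are hypothetical.

FIRST_MODE = "first_mode"    # e.g., a higher-power, interactive mode
SECOND_MODE = "second_mode"  # e.g., a lower-power mode

class ComputerSystem:
    def __init__(self):
        self.mode = FIRST_MODE
        # Bounding box of the displayed representation: x0, y0, x1, y1.
        self.representation_bounds = (100, 100, 160, 160)

    def _starts_on_representation(self, x, y):
        x0, y0, x1, y1 = self.representation_bounds
        return x0 <= x <= x1 and y0 <= y <= y1

    def handle_drag(self, start, end):
        """Change or maintain the mode based on the drag direction."""
        sx, sy = start
        ex, ey = end
        if not self._starts_on_representation(sx, sy):
            return self.mode  # input did not start on the representation
        if ey > sy:
            # First direction (here: downward, toward an edge):
            # change to the second mode.
            self.mode = SECOND_MODE
        # Second direction: the first mode is maintained.
        return self.mode
```

The key design point mirrored here is that both the start location and the direction of travel participate in the determination, so an identical swipe that begins elsewhere on the screen leaves the mode unchanged.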
[0303] In some embodiments, the first mode is a first power mode (e.g., as described with respect to 602, 1000, and/or 1010) . In some embodiments, the second mode is a second power mode (e.g., as described with respect to 1016) . In some embodiments, the first power mode consumes more power than the
second power mode. In some embodiments, while the computer system is in the first mode, the computer system displays content at a higher brightness level, displays different content, has access to different and/or more content, provides different and/or more functionality than while in the second mode. In some embodiments, while the computer system is in the second mode, the computer system is unable to perform one or more of the operations described with respect to the first mode. In some embodiments, while the computer system is in the second mode, the computer system displays content at a lower brightness level, displays different content, has access to different and/or less content, provides different and/or less functionality than while in the first mode. In some embodiments, while the computer system is in the second mode, the computer system detects, via the one or more input devices, an input corresponding to a request to change to the first mode. In some embodiments, in response to detecting the input corresponding to the request to change to the first mode, the computer system changes the computer system to be operated in (and/or causes the computer system to be in) the first mode. In some embodiments, the computer system remains in the second mode until the computer system detects, via the one or more input devices, an input. Operating a computer system in different power modes in accordance with a determination that an input proceeds in either a first direction or a second direction allows the computer system to switch between operations that consume different amounts of power in response to user inputs, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, and/or providing additional control options without cluttering the user interface with additional displayed controls.
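As a rough illustration of the power-mode distinction in paragraph [0303], the table-driven sketch below pairs each mode with a brightness level and a feature set. Every value, key, and feature name here is invented for illustration; the disclosure does not specify concrete brightness levels or feature lists.

```python
# Hypothetical mapping of the two power modes: the first mode drives the
# display brighter and exposes more functionality; the second mode
# consumes less power. All values are illustrative.
POWER_MODES = {
    "first_mode": {
        "brightness": 1.0,           # full display brightness
        "animations": True,          # representation animates / follows
        "features": {"menus", "follow_subject", "clock"},
    },
    "second_mode": {
        "brightness": 0.2,           # dimmer screen, lower power draw
        "animations": False,         # representation appears asleep
        "features": {"clock"},       # e.g., only current temporal info
    },
}

def apply_mode(mode):
    """Return the display configuration for the requested mode."""
    return POWER_MODES[mode]
```

In a real system the returned configuration would drive the backlight, suspend animation timers, and gate feature availability; the dictionary merely makes the relationship between the two modes explicit.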
[0304] In some embodiments, while the computer system is in the first mode, the computer system displays, via the one or more display components, a first user interface (e.g., 602, 1000, and/or 1010) (e.g., displays the software object with animations and/or in an awake state, selectable menus and/or other visual elements, and/or a brighter screen) . In some embodiments, while the computer system is in the second mode, the computer system displays, via the one or more display components, a second user interface (e.g., 1016) (e.g., without displaying the first user interface) different from the first user interface (e.g., displays the software object without animations and/or in a sleep state, hiding selectable menus and/or other visual elements, and/or a dimmer screen) . In some embodiments, the computer system does not (and/or is not able to) display the first user interface while in the second mode. In some embodiments, the computer system does not (and/or is not able to) display the second user interface while in the first mode. Displaying different user interfaces in accordance with a determination that an input proceeds in either a first direction or a second direction allows a computer system to provide different visual outputs that correspond to different modes of the computer system, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or providing improved visual feedback to the user.
[0305] In some embodiments, the second user interface includes current temporal information (e.g., 1018) (e.g., time and/or date) . In some embodiments, the current temporal information is provided in the second user interface in place of one or more features that are provided in the first user interface.
In some embodiments, the first user interface does not include current temporal information (e.g., the first user interface replaces the current temporal information of the second user interface with one or more other user interface elements) . In some embodiments, the second user interface includes the current temporal information at a position of a displayed area that includes one or more other user interface elements in the first user interface. Including or not including current temporal information in the second user interface or the first user interface, respectively, allows the computer system to provide different types of information in each of the first and second modes, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, and/or providing improved visual feedback to the user .
[0306] In some embodiments, the input is a first input. In some embodiments, while the computer system is in the second mode (e.g., as described with respect to FIG. 10I), the computer system detects, via the one or more input devices, a second input (e.g., 1005i) different (e.g., separate and/or a different type) from the first input. In some embodiments, while the computer system is in the second mode, in response to detecting the second input, in accordance with a determination that the second input is a first type of input (e.g., air gesture, input detected via a physical input mechanism, audio, gaze, and/or touch input), the computer system responds to the second input (e.g., changes a user interface, changes an appearance of a software object, provides information, changes modes, performs an operation). In some embodiments, while the computer system is in the second mode, in response to detecting the second input, in accordance with a determination that the second input is a
second type of input (e.g., air gesture, input detected via a physical input mechanism, audio, gaze, and/or touch input) different from the first type of input, the computer system forgoes response to the second input. In some embodiments, while the computer system is in the first mode (e.g., as described with respect to FIGS. 10A-10I), the computer system detects, via the one or more input devices, a third input (e.g., 1005a, 1005b, 1005e, 1005f1, 1005f2, 1005g, 1005h1, and/or 1005h2) different (e.g., separate and/or a different type) from the first input (and/or the second input). In some embodiments, while the computer system is in the first mode, in response to detecting the third input, in accordance with a determination that the third input is the first type of input (e.g., air gesture, input detected via a physical input mechanism, audio, gaze, and/or touch input), the computer system responds to the third input (e.g., changes a user interface, changes an appearance of a software object, provides information, changes modes, performs an operation). In some embodiments, while the computer system is in the first mode, in response to detecting the third input, in accordance with a determination that the third input is the second type of input (e.g., air gesture, input detected via a physical input mechanism, audio, gaze, and/or touch input), the computer system responds to the third input (e.g., changes a user interface, changes an appearance of a software object, provides information, changes modes, performs an operation).
Selectively responding to or forgoing response to different types of inputs depending on a mode of a computer system allows the computer system to avoid inadvertent inputs from the user until the mode is changed, thereby performing an operation when a set of conditions has been met without requiring further user input and/or providing additional control options without cluttering the user interface with additional displayed controls.
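Paragraph [0306]'s gating of input types by mode reduces to a small predicate. In this hypothetical sketch the first type of input is taken to be touch and the second to be gaze; that pairing is an assumption made for illustration, since the disclosure lists several possible input types for each role.

```python
# Illustrative predicate: in the first mode the system responds to both
# input types; in the second mode it responds only to the first type and
# forgoes response to the second type. Type names are hypothetical.
FIRST_TYPE = "touch"   # assumed choice of first input type
SECOND_TYPE = "gaze"   # assumed choice of second input type

def responds(mode, input_type):
    """Return True if the computer system responds to this input."""
    if mode == "first_mode":
        return True                      # responds to both types
    if mode == "second_mode":
        return input_type == FIRST_TYPE  # forgoes the second type
    return False
```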
[0307] In some embodiments, while the computer system is in the first mode (e.g., follow mode), the computer system moves (e.g., as described with respect to FIGS. 10A-10I) a portion (e.g., one or more eyes, a head, a torso, a body, and/or all) of the representation of the software object (e.g., in response to and/or while detecting, via the one or more input devices, movement of a subject in an environment and/or detecting, via the one or more input devices, an input). In some embodiments, moving the representation of the software object includes responding to a detected position of the subject. In some embodiments, moving the representation of the software object includes displaying, via the one or more display components, the portion of the software object from a first position to a second position different from the first position. In some embodiments, while the computer system is in the first mode, the computer system detects, via the one or more input devices, the subject at a first position in the environment. In some embodiments, while the computer system is in the first mode and after detecting the subject at the first position in the environment, the computer system detects, via the one or more input devices, the subject at a second position, different from the first position, in the environment. In some embodiments, while the computer system is in the first mode and in response to detecting the subject at the second position, the computer system moves the portion of the representation of the software object (e.g., to correspond to the second position). In some embodiments, while the computer system is in the second mode (e.g., non-follow mode), the computer system forgoes (e.g., as described with respect to FIG. 10I) movement of the portion (and/or any portion) of the representation of the software object (e.g., the representation of the software object is static or otherwise does not respond to a detected position of the subject).
In some embodiments, while the computer system is in the second
mode, the computer system moves the representation of the software object based on a set of one or more criteria that does not include a criterion based on a detected position of the subject. In some embodiments, while the computer system is in the second mode, the computer system moves the representation of the software object in accordance with a predetermined animation. In some embodiments, while the computer system is in the second mode, the computer system detects, via the one or more input devices, the subject at the first position in the environment. In some embodiments, while the computer system is in the second mode and after detecting the subject at the first position in the environment, the computer system detects, via the one or more input devices, the subject at the second position in the environment. In some embodiments, while the computer system is in the second mode and in response to detecting the subject at the second position, the computer system forgoes movement of the portion of the representation of the software object (e.g., to correspond to the second position). Selectively moving or forgoing movement of a portion of a representation of a software object depending on a mode of a computer system allows the computer system to provide different visual outputs to indicate to the user the currently active mode, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, and/or providing improved visual feedback to the user.
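The follow/non-follow distinction of paragraph [0307] can be expressed as a single update function. The function name and the idea of representing the tracked portion (e.g., the eyes) as a 2D target position are assumptions made for this sketch.

```python
# Sketch of follow mode: in the first ("follow") mode a portion of the
# representation moves to track the detected subject; in the second
# ("non-follow") mode the system forgoes that movement and the
# representation stays where it was.
def update_portion_position(mode, current_position, subject_position):
    """Return where the tracked portion of the representation should point."""
    if mode == "first_mode" and subject_position is not None:
        return subject_position   # move to correspond to the subject
    return current_position       # second mode (or no subject): forgo movement
```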
[0308] In some embodiments, the representation of the software object is a first representation of the software object. In some embodiments, the first representation of the software object has a first set of one or more visual characteristics that is agnostic to a current position and/or expression of the first representation of the software object. In some
embodiments, in response to detecting the input and in accordance with the determination that the input starts at the location corresponding to the representation of the software object and proceeds in the first direction, the computer system displays (e.g., as described with respect to FIG. 10I), via the one or more display components, a second representation (e.g., 604) of the software object different (e.g., different size, appearance, and/or position) from the first representation of the software object. In some embodiments, the first representation of the software object is moved along a path of and/or with the input. In some embodiments, the second representation of the software object is displayed at a position of the first representation of the software object in response to the input. In some embodiments, the first representation of the software object turns to be focused in a direction of the input while detecting the input. In some embodiments, the second representation of the software object has a second set of one or more visual characteristics, different from the first set of one or more visual characteristics, that is agnostic to a current position and/or expression of the second representation of the software object. Displaying a different representation of the software object in response to a direction of an input that starts at a location of the representation allows the computer system to provide a different visual output in response to the input and to indicate a change in a mode of the computer system, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, and/or providing improved visual feedback to the user.
[0309] In some embodiments, the computer system is in communication with one or more output devices (e.g., an audio component and/or a haptic component) . In some embodiments,
changing the computer system to be operated in the second mode includes outputting (e.g., as described with respect to FIG. 10I), via the one or more output devices, an auditory output, a haptic output, or a combination thereof. In some embodiments, the auditory output and/or the haptic output is output in response to changing the computer system to be operated in the second mode. In some embodiments, the auditory output and/or the haptic output is output after changing the computer system to be operated in the second mode and/or in response to an input that is detected, via the one or more input devices, after changing the computer system to be operated in the second mode. In some embodiments, the audio component includes a speaker, a smart speaker, a home theater system, a soundbar, a headphone, an earphone, an earbud, a television speaker, an augmented reality headset speaker, an audio jack, an optical audio output, a Bluetooth audio output, and/or an HDMI audio output. Outputting an auditory output, a haptic output, or a combination thereof when a mode of the computer system changes allows the computer system to provide an indication to the user of the change to the mode, thereby performing an operation when a set of conditions has been met without requiring further user input and/or providing improved feedback to the user.
[0310] In some embodiments, the input is a first input. In some embodiments, while the computer system is in the second mode (e.g., as described with respect to FIG. 10I), the computer system detects, via the one or more input devices, a third input (e.g., 1005i) different (e.g., separate and/or a different type) from the first input. In some embodiments, in response to detecting the third input and in accordance with a determination that the third input (e.g., a swipe input and/or a non-swipe input (e.g., a verbal input, an audible request, an audible command, an audible statement, a tap input, a hold-and-drag input, a gaze input, an air gesture, mouse movement, and/or a mouse click)) satisfies a set of one or more criteria (e.g., type, direction, and/or duration), the computer system changes (e.g., as described with respect to FIGS. 10A-10I) the computer system to be operated in the first mode. In some embodiments, the third input includes a touch input detected via the one or more input devices. Changing a computer system to be operated in a first mode in response to an input that is detected while the computer system is in a second mode allows the computer system to return to a prior mode in response to a user input, thereby performing an operation when a set of conditions has been met without requiring further user input and/or providing additional control options without cluttering the user interface with additional displayed controls.
[0311] In some embodiments, while the computer system is in the first mode (e.g., as described with respect to FIGS. 10A-10I), the computer system detects that an event has occurred (e.g., a predefined period of time has expired without detecting, via the one or more input devices, an input and/or a subject is no longer detected via the one or more input devices) without detecting, via the one or more input devices, an additional input. In some embodiments, in response to detecting the event and in accordance with a determination that a set of one or more criteria is satisfied (e.g., that the event occurred), the computer system changes (e.g., as described with respect to FIG. 10I) the computer system to be operated in the second mode. Changing a computer system to be operated in a second mode in response to detecting that an event has occurred without detecting an input allows the computer system to change modes even when no input is detected, thereby performing an operation when a set of conditions has been met without requiring further user input
and/or reducing the number of inputs needed to perform an operation.
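Paragraph [0311]'s event-driven mode change (a timeout with no input, or the subject no longer being detected) can be modeled as a pure function of the observed state. The timeout length below is an invented value; the disclosure only says "a predefined period of time".

```python
# Hypothetical model: the system changes itself to the second mode when a
# predefined period passes with no input, or when the subject is no
# longer detected — without requiring any further user input.
IDLE_TIMEOUT_SECONDS = 30.0  # illustrative; the disclosure gives no value

def mode_after_event(mode, seconds_since_last_input, subject_detected):
    """Return the mode the system should be in after evaluating the event."""
    timed_out = seconds_since_last_input >= IDLE_TIMEOUT_SECONDS
    if mode == "first_mode" and (timed_out or not subject_detected):
        return "second_mode"
    return mode
```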
[0312] In some embodiments, the representation of the software object is a first representation (e.g., 604) of the software object (e.g., as described with respect to FIGS. 10A-10I). In some embodiments, changing the computer system to be operated in the second mode includes: ceasing display of the first representation of the software object; and displaying (e.g., as described with respect to FIG. 10I), via the one or more display components, a second representation (e.g., 604) of the software object different from the first representation of the software object (e.g., the second representation of the software object appears to be sleeping, is not animated, does not follow a subject in an environment, is a different size than the first representation of the software object, and/or is located at a different position than the first representation of the software object). In some embodiments, the second representation of the software object is at the same position as the first representation of the software object. In some embodiments, the second representation of the software object is at a different position than the first representation of the software object. In some embodiments, the second representation of the software object is not animated, does not move, and/or has another visual feature that the first representation of the software object does not have. Ceasing display of a representation of a software object and displaying a second representation of the software object allows the computer system to indicate to the user that the operating mode of the computer system has changed, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, and/or providing improved visual feedback to the user.
[0313] In some embodiments, the representation of the software object is a first representation of the software object. In some embodiments, the first representation of the software object has a first set of one or more visual characteristics that is agnostic to a current position and/or expression of the first representation of the software object. In some embodiments, in response to detecting the input and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a third direction (e.g., as described with respect to FIGS. 10B and/or 10H) (e.g., toward and/or ending at one or more displayed items) (and/or proceeds to a fourth location, different from the location, the second location, and/or the third location, at which point the computer system detects release and/or termination of the input) different from the first direction (and/or the second direction), the computer system maintains the computer system in the first mode. In some embodiments, in response to detecting the input and in accordance with the determination that the input starts at the location corresponding to the representation of the software object and proceeds in the third direction different from the first direction, the computer system ceases display of the first representation of the software object. In some embodiments, in response to detecting the input and in accordance with the determination that the input starts at the location corresponding to the representation of the software object and proceeds in the third direction different from the first direction, the computer system displays (e.g., as described with respect to FIGS. 10C and/or 10D), via the one or more display components, a third representation (e.g., 604) of the software object different (e.g., different size, appearance, and/or position) from the first representation of the software object.
In some embodiments, the third representation of the software object
is displayed at a position of the first representation of the software object in response to the input. In some embodiments, the first representation of the software object turns to be focused in a direction of the input while detecting the input. In some embodiments, the third representation of the software object is focused in a direction of the input in response to detecting the input. In some embodiments, the third representation of the software object has a third set of one or more visual characteristics, different from the first set of one or more visual characteristics, that is agnostic to a current position and/or expression of the third representation of the software object. Displaying a third representation of a software object while maintaining a computer system in a first mode allows the computer system to indicate a mode of operation to a user and provide additional information with the third representation of the software object, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or providing improved visual feedback to the user.
[0314] In some embodiments, the location is a first location. In some embodiments, the computer system is in communication with one or more output devices (e.g., the one or more display components, one or more other display components, one or more audio generation components, and/or one or more haptic generation components). In some embodiments, in conjunction with (e.g., before, while, and/or after) detecting the input, the computer system displays, via the one or more display components, a first user interface element (e.g., 1006, 1002, 1004, and/or 1008) (e.g., a virtual button, an icon, a widget, a control, and/or a window) and a second user interface
element (e.g., 1006, 1002, 1004, and/or 1008) (e.g., a virtual button, an icon, a widget, a control, and/or a window) different (e.g., separate and/or a different type) from the first user interface element. In some embodiments, after detecting the input (and/or in response to detecting, via the one or more input devices, another input (e.g., a verbal input and/or a non-verbal input) corresponding to a request to perform an operation), in accordance with a determination that the input started at the first location corresponding to the representation of the software object and was released at the first user interface element (e.g., and not at the second user interface element), the computer system outputs (e.g., as described with respect to FIGS. 10C-10E), via the one or more output devices, an output (e.g., 1008) corresponding to (e.g., based on, related to, and/or associated with) the first user interface element. In some embodiments, the other input corresponding to the request to perform the operation is ambiguous such that additional information is needed to perform the operation, such as information provided via the input being released at a user interface element. In some embodiments, the other input corresponds to an inquiry or request for information relating to either one of the first user interface element and the second user interface element. In some embodiments, after detecting the input, in accordance with a determination that the input started at the first location corresponding to the representation of the software object and was released at the second user interface element (e.g., and not at the first user interface element), the computer system outputs (e.g., as described with respect to FIGS.
10C-10E) , via the one or more output devices, an output (e.g., 1008) corresponding to (e.g., based on, related to, and/or associated with) the second user interface element (e.g., without outputting the output corresponding to the first user interface element) . In some embodiments, the output
corresponding to the second user interface element is different from the output corresponding to the first user interface element. In some embodiments, the first user interface element and the second user interface element are displayed at different positions within a user interface. In some embodiments, in response to detecting the input (and/or in conjunction with, such as before, while, and/or after, the other input is detected) and in accordance with the determination that the input started at the first location corresponding to the representation of the software object and was released at the first user interface element, the computer system displays, via the one or more display components, the representation of the software object to be directed at (e.g., focused on, facing, and/or looking at) the first user interface element. In some embodiments, in response to detecting the input (and/or in conjunction with, such as before, while, and/or after, the other input is detected) and in accordance with the determination that the input started at the first location corresponding to the representation of the software object and was released at the second user interface element, the computer system displays, via the one or more display components, the representation of the software object to be directed at (e.g., focused, facing, and/or looking at) the second user interface element.
Outputting an output that corresponds to a respective user interface element wherein the output is in response to an input that is released at a respective user interface element allows the computer system to provide an output that is responsive to the user input with respect to the respective user interface element, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, and/or providing additional control options without cluttering the user interface with additional displayed controls.
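The drag-and-release disambiguation of paragraph [0314] amounts to hit-testing the release point against the displayed elements and emitting the output bound to whichever element was hit. The rectangle geometry, element names, and output values below are invented for illustration.

```python
# Illustrative hit-testing: an input that starts on the representation
# and is released over a user interface element yields the output
# associated with that element (and no other).
def element_at(point, elements):
    """Return the name of the element whose bounds contain point, if any."""
    x, y = point
    for name, (x0, y0, x1, y1) in elements.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def output_for_release(release_point, elements, outputs):
    """Map a release location to the output of the element it landed on."""
    target = element_at(release_point, elements)
    return outputs.get(target)   # None when released over no element
```

This also captures why the release-based scheme can disambiguate an otherwise ambiguous request: the release coordinates select exactly one of the candidate elements.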
[0315] Note that details of the processes described above with respect to process 1100 (e.g., FIG. 11) are also applicable in an analogous manner to other processes described herein. For example, process 1200 optionally includes one or more of the characteristics of the various processes described above with reference to process 1100. For example, the representation of the system software object of process 1200 can be the representation of the software object of process 1100. For brevity, these details are not repeated herein.
[0316] FIG. 12 is a flow diagram illustrating a process (e.g., process 1200) for managing user interfaces based on inputs at a representation of a system software object in accordance with some embodiments. Some operations in process 1200 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted .
[0317] As described below, process 1200 provides an intuitive way for managing user interfaces based on inputs at a representation of a system software object. Process 1200 reduces the cognitive burden on a user, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to interact with such devices faster and more efficiently conserves power and increases the time between battery charges.
[0318] In some embodiments, process 1200 is performed at a computer system (e.g., 600) that is in communication (e.g., wired communication and/or wireless communication) with one or more display components (e.g., a display screen, a projector, and/or a touch-sensitive display) and one or more input devices (e.g., a touch-sensitive surface, a camera, a depth sensor, a microphone, a hardware input mechanism, a rotatable input mechanism, a heart monitor, and/or a temperature
sensor). In some embodiments, the computer system is a watch, a phone, a tablet, a fitness tracking device, a processor, a head-mounted display (HMD) device, a communal device, a media device, a speaker, a television, and/or a personal computing device.
[0319] While (and/or after and/or in response to) displaying (e.g., as described with respect to FIGS. 10B-10F and/or 10H), via the one or more display components, a first user interface (e.g., 1000 and/or 1010) (e.g., including a representation of a software object and/or one or more representations of previous interactions with the software object, such as an input directed to the software object and/or an output from the software object), the computer system detects (1202), via the one or more input devices, an input (e.g., 1005b, 1005h1, and/or 1005h2) (e.g., a swipe input and/or a non-swipe input) directed at (e.g., overlapping with and/or at a location corresponding to) a representation (e.g., 604) (e.g., as described above with respect to process 1100) of a system software object (and/or a software object, such as a system or application software object) (e.g., located in a particular region of the first user interface). In some embodiments, the input is detected at a location corresponding to a location of the representation of the system software object in the first user interface.
[0320] In response to (1204) detecting the input directed at the representation of the system software object, in accordance with (1206) a determination that the input is a first type of input (e.g., swipe to middle of the first user interface, towards the center of the first user interface, away from a particular side of the first user interface, and/or toward one or more displayed items of the first user interface, such as one or more representations of previous interactions with the system software object (e.g., an input
to the system software object and/or an output from the system software object)), the computer system ceases (1208) display of the representation of the system software object.
[0321] In response to (1204) detecting the input directed at the representation of the system software object, in accordance with (1206) the determination that the input is the first type of input, the computer system displays (1210) (e.g., as described with respect to FIG. 10A) (e.g., in conjunction with (e.g., after) ceasing display of the representation of the system software object) , via the one or more display components, a second user interface (e.g., 602) (e.g., including another representation of the system software object without including a representation of a previous interaction with the system software object (e.g., an input directed to the system software object and/or an output from the system software object) ) different from the first user interface. In some embodiments, the determination that the input is the first type of input includes a determination that the input is received by a first type of input device (e.g., a touch-sensitive surface, a camera, a depth sensor, a microphone, a hardware input mechanism, a rotatable input mechanism, a heart monitor, and/or a temperature sensor) . In some embodiments, in response to detecting the input directed at the representation of the system software object and in accordance with the determination that the input is the first type of input, the computer system displays, via the one or more display components, a user interface corresponding to (e.g., of, particular to, and/or reflective of) the system software object without displaying a representation of a previous interaction with the system software object. In some embodiments, the representation of the system software object is a first representation of the system software object. In some embodiments, in response to detecting the input directed
at the representation of the system software object (and/or in accordance with the determination that the input is the first type of input), the computer system displays, via the one or more display components, a second representation of the system software object different from the first representation of the system software object (e.g., updates the first representation of the system software object to the second representation of the system software object (e.g., in conjunction with (e.g., before, while, and/or after) ceasing display of the first representation of the system software object)).
[0322] In response to (1204) detecting the input directed at the representation of the system software object, in accordance with (1212) a determination that the input is a second type of input (e.g., swipe to bottom of the first user interface, away from the center of the first user interface, toward a particular side of the first user interface, and/or away from one or more displayed items of the first user interface, such as one or more representations of previous interactions with the system software object (e.g., an input to the system software object and/or an output from the system software object) ) different from the first type of input, the computer system ceases (1214) display of the representation of the system software object.
[0323] In response to (1204) detecting the input directed at the representation of the system software object, in accordance with (1212) the determination that the input is the second type of input different from the first type of input, the computer system displays (1216) (e.g., as described with respect to FIG. 10I), via the one or more display components, a third user interface (e.g., 1016) (e.g., including another representation of the system software object without including a representation of a previous interaction with the system software object (e.g., an input to the system software object
and/or an output from the system software object) and/or while the computer system is in a second mode as described above with respect to process 1100) different from the first user interface and the second user interface. In some embodiments, the determination that the input is the second type of input includes a determination that the input is received by a second type of input device (e.g., a touch-sensitive surface, a camera, a depth sensor, a microphone, a hardware input mechanism, a rotatable input mechanism, a heart monitor, and/or a temperature sensor) different from the first type of input device. In some embodiments, in response to detecting the input directed at the representation of the system software object and in accordance with the determination that the input is the second type of input, the computer system displays, via the one or more display components, a user interface corresponding to (e.g., of, particular to, and/or reflective of) another mode (e.g., power mode) (e.g., as described above with respect to process 1100, such that the input is detected while the computer system is in the first mode of process 1100 and the second user interface is displayed while the computer system is in the second mode of process 1100).
In some embodiments, in response to detecting the input directed at the representation of the system software object (and/or in accordance with the determination that the input is the second type of input) , the computer system displays, via the one or more display components, a third representation of the system software object different from the first representation of the system software object and the second representation of the system software object (e.g., updates the first representation of the system software object to the third representation of the system software object (e.g., in conjunction with (e.g., before, while, and/or after) ceasing display of the first representation of the system software object) ) . Switching from a first user
interface to either a second user interface or a third user interface depending on a direction of an input provided at a representation of a software object in the first user interface allows the computer system to respond to different inputs at the representation of a software object with different visual outputs, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or providing improved visual feedback to the user.
[0324] In some embodiments, in response to detecting the input directed at the representation of the system software object and in accordance with a determination that the input is a third type of input (e.g., tap input and/or a swipe from the representation of the system software object to another region of the first user interface) different from the first type of input and the second type of input, the computer system maintains (e.g., as described with respect to FIGS. 10B-10E and/or 10H) display of the first user interface (e.g., maintains the representation of the system software object and/or the computer system in a first mode as described above with respect to process 1100) . In some embodiments, the determination that the input is the third type of input includes a determination that the input is received by a third type of input device (e.g., a touch-sensitive surface, a camera, a depth sensor, a microphone, a hardware input mechanism, a rotatable input mechanism, a heart monitor, and/or a temperature sensor) different from the first type of input device and/or the second type of input device. In some embodiments, the determination that the input is the third type of input includes a determination that the input is in a
different direction than a direction of the first type of input and/or the second type of input. Maintaining display of a first user interface in response to a third type of input allows the computer system to receive other types of inputs without changing the user interface, thereby providing additional control options without cluttering the user interface with additional displayed controls and/or providing improved visual feedback to the user.
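The branching behavior of paragraphs [0320]-[0324] can be summarized in a short sketch. This sketch is purely illustrative; the function name, the input-type labels, and the return shape are assumptions for explanation and are not part of the disclosed embodiments.

```python
# Illustrative sketch of the input-type dispatch of paragraphs
# [0320]-[0324]; all names here are hypothetical.

def handle_input_at_representation(input_type, current_ui="first"):
    """Return (representation_displayed, active_ui) after an input
    directed at the representation of the system software object."""
    if input_type == "first":   # e.g., a swipe toward the center
        # Cease display of the representation; display the second UI.
        return (False, "second")
    if input_type == "second":  # e.g., a swipe toward the bottom
        # Cease display of the representation; display the third UI.
        return (False, "third")
    # Third type (e.g., a tap): maintain the first user interface.
    return (True, current_ui)
```

In this sketch the third type of input is simply "anything else"; the actual determination may instead depend on which input device received the input, as the paragraphs above note.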
[0325] In some embodiments, the input directed at the representation of the system software object is a first input. In some embodiments, while (and/or after and/or in response to) displaying the first user interface (and/or the second user interface and/or the third user interface) , the computer system detects (e.g., as described with respect to FIG. 10E) , via the one or more input devices, a second input (e.g., 1005e) (e.g., a swipe input and/or a non-swipe input) . In some embodiments, in response to detecting the second input and in accordance with a determination that the second input is a fourth type of input (e.g., swipe not directed to the representation of the system software object and/or swipe in a different direction than a direction of the first type of input, the second type of input, and/or the third type of input) , the computer system displays (e.g., as described with respect to FIG. 10F) (e.g., in conjunction with (e.g., after) ceasing display of the first user interface) , via the one or more display components, a fourth user interface (e.g., 1010) including a list of applications (e.g., 1012) (e.g., of the computer system and/or another computer system different from the computer system) , wherein the fourth user interface is different from the first user interface, the second user interface, and the third user interface. In some embodiments, the determination that the second input is the fourth type of input includes a determination that the input is received by a
fourth type of input device (e.g., a touch-sensitive surface, a camera, a depth sensor, a microphone, a hardware input mechanism, a rotatable input mechanism, a heart monitor, and/or a temperature sensor) different from the first type of input device, the second type of input device, and/or the third type of input device. In some embodiments, the determination that the second input is the fourth type of input includes a determination that the second input is in a different direction than a direction of the first type of input, the second type of input, and/or the third type of input. In some embodiments, the determination that the second input is the fourth type of input includes a determination that the second input is not directed to the representation of the system software object. In some embodiments, the fourth user interface includes one or more features that are not provided in the first user interface, the second user interface, and/or the third user interface. Displaying a fourth user interface including a list of applications in response to a second input of a fourth type allows the computer system to provide access to the list of applications for selection by the user, thereby performing an operation when a set of conditions has been met without requiring further user input, providing additional control options without cluttering the user interface with additional displayed controls, and/or providing improved visual feedback to the user.
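The fourth-type behavior of paragraph [0325] can be sketched as follows. The function name, the application names, and the rule "a swipe not directed at the representation is the fourth type" are illustrative assumptions drawn from one of the examples given above, not a definitive statement of the embodiments.

```python
# Hypothetical sketch of paragraph [0325]: a swipe that is not
# directed at the representation (assumed here to be the "fourth
# type" of input) replaces the first UI with a fourth UI that
# lists applications. Application names are placeholders.

def handle_second_input(is_swipe, directed_at_representation,
                        app_list=("app_a", "app_b")):
    if is_swipe and not directed_at_representation:
        # Cease display of the first UI; display the application list.
        return {"ui": "fourth", "applications": list(app_list)}
    # Otherwise the first user interface remains displayed.
    return {"ui": "first", "applications": []}
```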
[0326] In some embodiments, the representation of the system software object is a first representation (e.g., 604 at FIG. 10E) of the system software object. In some embodiments, the fourth user interface includes a second representation (e.g., 604 at FIG. 10F) of the system software object (e.g., the same as, with one or more of the same or similar visual characteristics of, and/or similar to the first representation
of the system software object). In some embodiments, display of the representation of the system software object is maintained in the fourth user interface. In some embodiments, the second representation of the system software object is located at the same position as, has the same movement pattern of, has the same animation as, and/or has the one or more visual features of the first representation of the system software object. Including a second representation of a system software object in a fourth user interface allows the computer system to display a representation of the software object across multiple user interfaces, thereby performing an operation when a set of conditions has been met without requiring further user input and/or providing improved visual feedback to the user.
[0327] In some embodiments, while displaying the fourth user interface including the list of applications (e.g., the fourth user interface includes a first user interface element corresponding to a first application and a second user interface element, different from the first user interface element, corresponding to a second application different from the first application), the computer system detects (e.g., as described with respect to FIG. 10F), via the one or more input devices, an input (e.g., 1005f1) corresponding to a respective application (e.g., 1012b) in the list of applications. In some embodiments, in conjunction with (e.g., as a result of, after, and/or in response to) detecting the input corresponding to the respective application in the list of applications, in accordance with a determination that the respective application is a first application, the computer system displays (e.g., as described with respect to FIG. 10G), via the one or more display components, a user interface (e.g., 1014) of the first application without displaying a (e.g., any) representation of the system software object (e.g., the
representation of the system software object described above). In some embodiments, in response to detecting the input corresponding to the respective application in the list of applications (and/or after a predefined period of time has elapsed), the computer system ceases display of the representation of the system software object. In some embodiments, in conjunction with detecting the input corresponding to the respective application in the list of applications, in accordance with a determination that the respective application is a second application different from the first application, the computer system displays (e.g., as described with respect to FIG. 10G), via the one or more display components, a user interface (e.g., 1014) of the second application without displaying a (e.g., any) representation of the system software object. Displaying a user interface of an application without displaying a representation of a system software object allows the computer system to provide a view based on the application without visual features that might distract from the application, thereby performing an operation when a set of conditions has been met without requiring further user input, providing additional control options without cluttering the user interface with additional displayed controls, and/or providing improved visual feedback to the user.
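The behavior of paragraph [0327] is the same whichever application is chosen, which can be captured in a one-line sketch. The function and the returned dictionary shape are illustrative assumptions only.

```python
# Sketch of paragraph [0327]: opening any application from the list
# displays that application's UI without the representation of the
# system software object (hypothetical structure).

def open_application(app_name):
    return {"ui": app_name, "show_representation": False}
```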
[0328] In some embodiments, the determination that the input is the first type of input includes a determination that the input proceeds in a first direction (e.g., as described with respect to FIG. 10H) . In some embodiments, the determination that the input is the second type of input includes a determination that the input proceeds in a second direction (e.g., as described with respect to FIG. 10H) different from the first direction. In some embodiments, the first direction is towards the middle of the first user interface, towards the
center of the first user interface, away from a particular side of the first user interface, and/or toward one or more displayed items of the first user interface, such as one or more representations of previous interactions with the system software object (e.g., an input directed to the system software object and/or an output from the system software object). In some embodiments, the second direction is towards the bottom of the first user interface, away from the center of the first user interface, toward a particular side of the first user interface, and/or away from one or more displayed items of the first user interface, such as one or more representations of previous interactions with the system software object (e.g., an input directed to the system software object and/or an output from the system software object). Determining input types based on a direction of an input allows the computer system to display different user interfaces in response to different directions of user input, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, and/or providing improved visual feedback to the user.
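One way to read paragraph [0328] is that the input type is inferred from the dominant direction of a swipe. The sketch below reduces a swipe to its vertical component; the coordinate convention (+dy toward the bottom of the first user interface), the labels, and the tie-breaking rule are all assumptions for illustration.

```python
# Illustrative direction-based classification (paragraph [0328]).
# dx/dy are display-coordinate deltas of a swipe; +dy is assumed
# to point toward the bottom of the first user interface.

def classify_swipe(dx, dy):
    if abs(dy) >= abs(dx):
        # Toward the bottom edge -> second type; away from it
        # (e.g., toward the middle/center) -> first type.
        return "second" if dy > 0 else "first"
    return "other"  # e.g., horizontal swipes handled elsewhere
```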
[0329] In some embodiments, the representation of the system software object is a first representation of the system software object. In some embodiments, the second user interface includes a second representation (e.g., 604 at FIG. 10A) (e.g., as described above with respect to process 1100) of the system software object (e.g., without including the first representation of the system software object) different from the first representation of the system software object (e.g., the second representation of the system software object is at a different position and/or has a different appearance
than the first representation of the system software object). In some embodiments, the third user interface includes a third representation (e.g., as described above with respect to process 1100) of the system software object (e.g., without including the first representation of the system software object and/or the second representation of the system software object) different from the first representation of the system software object and/or the second representation of the system software object (e.g., the third representation of the system software object is at a different position and/or has a different appearance than the first representation of the system software object and/or the second representation of the system software object). Including a second representation of a system software object in a second user interface allows the computer system to provide different representations in different user interfaces and accommodate other elements of each of the different user interfaces, thereby reducing the number of inputs needed to perform an operation and/or providing improved visual feedback to the user.
[0330] In some embodiments, the first representation of the system software object is a first size (e.g., as described with respect to FIGS. 10B-10F and/or 10H) . In some embodiments, the second representation of the system software object is a second size (e.g., as described with respect to FIG. 10A) different (e.g., larger or smaller) from the first size. In some embodiments, the third representation of the system software object is a third size different from the first size and/or the second size. In some embodiments, the third size is the first size or the second size. In some embodiments, the second size is the first size. Displaying representations of a system software object with different sizes allows a computer system to provide visual features for the user to distinguish between different representations,
thereby reducing the number of inputs needed to perform an operation and/or providing improved visual feedback to the user.
[0331] In some embodiments, the first representation of the system software object has a first expression (e.g., as described with respect to FIGS. 10B-10F and/or 10H) (e.g., a facial feature, an emotion expression, a smile, a frown, a sleeping expression, a bored expression, a tired expression, an expression representing that the system software object is anticipating an input, an expression representing that the system software object and/or the computer system is responding to an input, an expression looking in a particular direction, a visual appearance representing sentiment and/or a current state, and/or another visual appearance). In some embodiments, the second representation of the system software object has a second expression (e.g., as described with respect to FIG. 10A) different from the first expression. In some embodiments, the third representation of the system software object has a third expression different from the first expression and/or the second expression. In some embodiments, the third expression is the first expression or the second expression. In some embodiments, the second expression is the first expression. Displaying representations of a system software object with different expressions allows a computer system to provide visual features for the user to distinguish between different representations, thereby reducing the number of inputs needed to perform an operation and/or providing improved visual feedback to the user.
[0332] In some embodiments, the first representation of the system software object is at a first location (e.g., as described with respect to FIGS. 10B-10F and/or 10H) in a displayed area (e.g., displayable area) of the one or more display components. In some embodiments, the second
representation of the system software object is at a second location (e.g., as described with respect to FIG. 10A), different from the first location, in the displayed area of the one or more display components. In some embodiments, the third representation of the system software object is at a third location, different from the first location and/or the second location, in the displayed area of the one or more display components. In some embodiments, the third location is the first location or the second location. In some embodiments, the second location is the first location. Displaying representations of a system software object at different locations allows a computer system to provide visual features for the user to distinguish between different representations and to accommodate other elements of each of the different user interfaces, thereby reducing the number of inputs needed to perform an operation and/or providing improved visual feedback to the user.
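Paragraphs [0329]-[0332] together describe each user interface showing a distinct representation that may differ in size, expression, and on-screen location. A minimal data-driven sketch of that idea follows; every value (sizes, expression names, coordinates) is a hypothetical placeholder, not taken from the figures.

```python
# Illustrative per-UI representation attributes (paragraphs
# [0329]-[0332]). All values are placeholder assumptions.

REPRESENTATIONS = {
    "first":  {"size": 44, "expression": "anticipating", "location": (20, 380)},
    "second": {"size": 88, "expression": "smiling",      "location": (160, 200)},
    "third":  {"size": 44, "expression": "sleeping",     "location": (20, 40)},
}

def representation_for(ui):
    """Look up the representation variant shown in a given UI."""
    return REPRESENTATIONS[ui]
```

Note that, consistent with the text, some attributes may coincide across user interfaces (here the first and third sizes are equal) while others differ.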
[0333] In some embodiments, the first user interface includes (e.g., as described with respect to FIGS. 10B-10F and/or 10H) a set of one or more representations (e.g., 1006, 1002, 1004, and/or 1008) of interactions (e.g., a representation of a previous interaction with the system software object, such as an input directed to the system software object and/or an output from the system software object) . Including a representation of interactions with a first user interface allows the computer system to indicate to a user one or more inputs and/or outputs that occurred previously, thereby reducing the number of inputs needed to perform an operation and/or providing improved visual feedback to the user.
[0334] In some embodiments, the input directed at the representation of the system software object is a first input (e.g., 1005b, 1005h1, and/or 1005h2). In some embodiments, while displaying the second user interface, the computer
system detects (e.g., as described with respect to FIG. 10A), via the one or more input devices, a third input (e.g., 1005a) different (e.g., separate and/or a different type) from the first input. In some embodiments, in response to detecting the third input and in accordance with a determination that the third input satisfies a first set of one or more criteria (e.g., is accepted by the computer system while displaying the second user interface and/or of a particular type), the computer system ceases display of the second user interface. In some embodiments, in response to detecting the third input and in accordance with the determination that the third input satisfies the first set of one or more criteria, the computer system displays (e.g., as described with respect to FIGS. 10B-10F and/or 10H) (e.g., in conjunction with (e.g., after) ceasing display of the second user interface), via the one or more display components, the first user interface. In some embodiments, in response to detecting the third input and in accordance with a determination that the third input does not satisfy the first set of one or more criteria, the computer system maintains display of the second user interface. Ceasing display of a second user interface and displaying a first user interface in response to a third input allows the computer system to provide a way for the user to return to a previous user interface, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, and/or providing improved visual feedback to the user.
[0335] In some embodiments, while displaying the third user interface, the computer system detects (e.g., as described with respect to FIG. 10I), via the one or more input devices, a fourth input (e.g., 1005i) different (e.g., separate and/or a different type) from the first input. In some embodiments, in response to detecting the fourth input and in accordance
with a determination that the fourth input satisfies a second set of one or more criteria (e.g., is accepted by the computer system while displaying the third user interface and/or of a particular type), the computer system ceases display of the third user interface. In some embodiments, in response to detecting the fourth input and in accordance with the determination that the fourth input satisfies the second set of one or more criteria, the computer system displays (e.g., as described with respect to FIGS. 10B-10F and/or 10H) (e.g., in conjunction with (e.g., after) ceasing display of the third user interface), via the one or more display components, the first user interface. In some embodiments, in response to detecting the fourth input and in accordance with a determination that the fourth input does not satisfy the second set of one or more criteria, the computer system maintains display of the third user interface. Ceasing display of a third user interface and displaying a first user interface in response to a fourth input allows the computer system to provide a way for the user to return to a previous user interface, thereby performing an operation when a set of conditions has been met without requiring further user input, reducing the number of inputs needed to perform an operation, and/or providing improved visual feedback to the user.
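Paragraphs [0334]-[0335] describe symmetric "return to the first user interface" behavior from the second and third user interfaces, gated by a set of criteria. A compact sketch, with the criteria abstracted to a boolean and all names assumed:

```python
# Illustrative sketch of paragraphs [0334]-[0335]: a later input
# returns the system to the first UI only if it satisfies the
# criteria associated with the currently displayed UI.

def maybe_return_to_first(current_ui, satisfies_criteria):
    if current_ui in ("second", "third") and satisfies_criteria:
        # Cease display of the current UI; redisplay the first UI.
        return "first"
    # Criteria not satisfied: maintain the current user interface.
    return current_ui
```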
[0336] Note that details of the processes described above with respect to process 1200 (e.g., FIG. 12) are also applicable in an analogous manner to the processes described herein. For example, process 1100 optionally includes one or more of the characteristics of the various processes described herein with reference to process 1200. For example, the input of process 1100 can be the input of process 1200. For brevity, these details are not repeated herein.
[0337] FIGS. 13A-13H illustrate exemplary user interfaces for displaying a representation of a software object and
environments in accordance with some embodiments. The user interfaces and environments in these figures are used to illustrate the processes described below, including the processes in FIGS. 13, 14, and 15.
[0338] FIGS. 13A-13H include a left portion and a right portion. The right portion of FIGS. 13A-13H illustrates environment 1302 including different states, such as light conditions (e.g., as illustrated by light 1320), and computer system 600 having different fields of view. In some embodiments, environment 1302 is a locality and/or portion of a home, office, apartment, and/or physical location. The left portion of FIGS. 13A-13H illustrates outputs of computer system 600, such as display of content (e.g., weather indication 1310) and a representation (e.g., representation 604) of a software object (e.g., as described above with respect to FIG. 5 and FIGS. 10A-10I) based on environment 1302.
[0339] In FIGS. 13A-13H, computer system 600 is illustrated as a smart phone. It should be recognized that computer system 600 can be other types of computer systems such as a communal device, a tablet, a smart watch, a laptop, an accessory, a personal gaming system, a desktop computer, a fitness tracking device, and/or a head-mounted display (HMD) device. In some embodiments, computer system 600 of FIGS. 13A-13H is the same as computer system 600 of FIGS. 10A-10I. In some embodiments, computer system 600 includes and/or is in communication with one or more input devices (e.g., a sensor, a camera, a lidar detector, a motion sensor, an infrared sensor, a touch-sensitive surface, a physical input mechanism, and/or a microphone). For example, computer system 600 can detect one or more features in environment 1302 through one or more cameras that provide computer system 600 with a field of view of environment 1302. In some embodiments, computer system 600
includes and/or is in communication with one or more output devices (e.g., a display screen, a projector, a touch-sensitive display, a speaker, and/or a movement component). In some embodiments, computer system 600 includes one or more movement components. In such embodiments, computer system 600 can be capable of movement (1) in response to detecting an input and/or (2) as a preemptive action to a predicted input. In some embodiments, computer system 600 is a communal device and/or resident device that is positioned within an environment (e.g., environment 1302). In some embodiments, computer system 600 is physically affixed to a location within the environment (e.g., environment 1302).
[0340] In some embodiments, computer system 600 is included within and/or manages an ecosystem of devices due to being a communal device and/or being assigned as a communal device. In such embodiments, the communal device can be a device that is not associated with a particular user. For example, multiple subjects can utilize and/or interact with the communal device based on the communal device's location (e.g., within a kitchen and/or living room of a home) . In some embodiments, the ecosystem of devices is a set of devices that have been paired and/or associated with the communal device to create a network of devices. For example, the communal device can facilitate pairing of one or more accessory devices to a mesh network and/or wireless network of devices to allow the one or more accessory devices to communicate. For another example, the network of devices allows devices outside of the network to communicate to devices within the network of devices through the communal device.
[0341] In FIGS. 13A-13H, light 1320 is included to show a status of light within environment 1302. It should be recognized that light 1320 (e.g., illustrated as a lightbulb) can be different sources of light within environment 1302. In
some embodiments, one or more sources of light are artificial (e.g., a light bulb, a screen of another device, and/or a flashlight). In some embodiments, one or more sources of light are natural (e.g., light from a window and/or ambient light level of environment 1302). It should also be recognized that the illustration of light 1320 in FIGS. 13A-13H can indicate whether or not light in environment 1302 is sufficient for one or more input devices of computer system 600 to perform one or more designated detections. For example, the one or more input devices of computer system 600 can be used to detect a level of activity within environment 1302. Such a detection can require a certain level of light. In FIGS. 13A and 13H, light 1320 is illustrated as "off" when there is not enough light within environment 1302 for computer system 600 to detect an activity and/or points of interest within environment 1302, as discussed further below. In FIGS. 13B-13G, light 1320 is illustrated as "on" when there is enough light within environment 1302 for computer system 600 to detect an activity and/or points of interest within environment 1302, as discussed further below.
[0342] At FIG. 13A, while light 1320 is off, computer system 600 is in a lower-power mode (e.g., asleep, inactive, and/or idle) . In some embodiments, computer system 600 is in the lower-power mode in response to detecting that an input has not been received for a threshold amount of time. For example, computer system 600 can be in the lower-power mode due to a subject not interacting with computer system 600 within the threshold amount of time. For another example, computer system 600 can be in the lower-power mode due to computer system 600 detecting that no activity and/or points of interest are within environment 1302 presently and/or for a threshold amount of time. In some embodiments, computer system 600 is in the lower-power mode due to a current light level of
environment 1302 being below a value that is required for the one or more input devices of computer system 600 to operate (e.g., to perform a detection). For example, computer system 600 can determine that the current level of light within environment 1302 is too low for computer system 600 to detect, via one or more cameras, activity within environment 1302.
[0343] In some embodiments, while in the lower-power mode, computer system 600 turns off one or more components. In such embodiments, while in the lower-power mode, computer system 600 does not utilize one or more of the one or more input devices. For example, while in the lower-power mode, computer system 600 turns off one or more cameras but does not turn off a lidar detector and/or an infrared sensor (e.g., to keep motion detection capabilities while in a lower light environment) . In some embodiments, while computer system 600 is in the lower-power mode, a display of computer system 600 is turned off.
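The selective disabling of components described above can be sketched as follows. This is a minimal illustration, not an implementation from the disclosure; the sensor names and mode labels are assumptions.

```python
# Hypothetical sketch of the lower-power sensor gating described above:
# cameras are turned off in the lower-power mode, while the lidar detector
# and infrared sensor remain on to preserve motion detection in low light.

LIGHT_DEPENDENT_SENSORS = {"camera"}
LOW_LIGHT_SENSORS = {"lidar", "infrared"}

def active_sensors(power_mode: str) -> set:
    """Return the sensors that remain powered in the given mode."""
    if power_mode == "lower":
        # Keep only sensors that work in low light.
        return set(LOW_LIGHT_SENSORS)
    # In the higher-power mode, all sensors are available.
    return LIGHT_DEPENDENT_SENSORS | LOW_LIGHT_SENSORS
```

Under this sketch, `active_sensors("lower")` excludes the cameras while still returning the lidar and infrared sensors, matching the example in which motion detection capabilities are kept in a lower light environment.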
[0344] In some embodiments, while computer system 600 is in the lower-power mode, computer system 600 displays content without displaying a software object representation (e.g., representation 604 as discussed below) . For example, while in the lower-power mode, computer system 600 can display generalized content such as a current time, a current weather forecast, and/or one or more messages from other devices within the ecosystem of devices (e.g., camera notifications) . For another example, while in the lower-power mode, computer system 600 can display information corresponding to a user such as recently received text messages, a news update from one or more subscribed news services, and/or an overview of the user's calendar.
[0345] In some embodiments, at FIG. 13B, in response to detecting a state of environment 1302 (e.g., that light 1320
is turned on and/or that a current level of light within environment 1302 is above a threshold level of light), computer system 600 transitions from the lower-power mode to a higher-power mode (e.g., active mode, mode that enables detection, and/or mode that awaits interaction and/or input from a user). It should be recognized that computer system 600 can detect that light 1320 is turned on (e.g., environment 1302 has a current level of light that is above the threshold level of light, a current amount of light that is above a threshold amount of light, and/or a current brightness of light that is above a threshold brightness of light) through one or more different manners, including receiving communication from a light and/or via different types of input devices, such as through one or more cameras, one or more light sensors, and/or one or more other input devices capable of determining a level of light within environment 1302. In some embodiments, computer system 600 transitions from the lower-power mode to the higher-power mode based on a determination that the current level of light within environment 1302 is over a level of light that is required to operate one or more cameras of computer system 600. In some embodiments, computer system 600 transitions from the lower-power mode to the higher-power mode based on a determination that a change in the level of light within environment 1302 is over a threshold amount of change (e.g., that the change in the level of light exceeds the threshold amount of change and is within a threshold amount of time). In some embodiments, computer system 600 transitions from the lower-power mode to the higher-power mode based on a determination that a particular source of light (e.g., among and/or distinguishable from one or more other sources of light) is on and/or providing light.
In some embodiments, computer system 600 transitions from the lower-power mode to the higher-power mode based on a determination that at least a threshold level (and/or amount)
of activity (and/or movement) is present within environment 1302. For example, computer system 600 can transition from the lower-power mode to the higher-power mode in response to detecting movement over the threshold level of movement via the one or more cameras (e.g., environment 1302 has enough light to detect and/or make out the movement). In some embodiments, computer system 600 transitions to the higher-power mode based on a determination that objects within environment 1302 are detected with a threshold level of clarity. For example, computer system 600 can transition from the lower-power mode to the higher-power mode when computer system 600 is able to make out and/or recognize one or more objects within environment 1302 (e.g., enough light to detect the one or more objects with at least a threshold level of accuracy). In some embodiments, computer system 600 transitions from the lower-power mode to the higher-power mode based on a determination that particular activity has occurred within environment 1302. For example, computer system 600 can transition from the lower-power mode to the higher-power mode when computer system 600 detects a particular input (e.g., a gesture input) and/or a particular activity (e.g., a manipulation of a particular object).
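The mode-transition conditions above can be combined as a simple decision. The disclosure states the conditions qualitatively; all threshold values and names below are illustrative assumptions.

```python
# Hypothetical sketch of the lower-power to higher-power transition logic:
# wake when light is sufficient, when light changes sharply (e.g., a light
# source turning on), or when enough activity/movement is detected.

LIGHT_THRESHOLD = 50.0         # level needed for the cameras to operate
LIGHT_CHANGE_THRESHOLD = 30.0  # sudden change, e.g., a light being turned on
ACTIVITY_THRESHOLD = 0.5       # minimum detected level of movement

def should_enter_higher_power(light: float, previous_light: float,
                              activity: float) -> bool:
    """Decide whether to transition from the lower-power mode."""
    if light >= LIGHT_THRESHOLD:
        return True  # enough light to operate the one or more cameras
    if light - previous_light >= LIGHT_CHANGE_THRESHOLD:
        return True  # change in light over a threshold amount of change
    return activity >= ACTIVITY_THRESHOLD  # threshold level of activity
```

A usage example: a reading of 45 after a reading of 5 wakes the system via the change condition even though the absolute level is below the threshold.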
[0346] As illustrated in FIG. 13B, in response to detecting the state of environment 1302 (e.g., that light 1320 turned on and/or that the current level of light within environment 1302 is above the threshold level of light) , computer system 600 displays representation 604. It should be recognized that computer system 600 displays representation 604 to provide a visual cue that computer system 600 is ready for input and/or that interacting with computer system 600 includes interacting with the software object. As illustrated in FIG. 13B, computer system 600 displays representation 604 as a square that is centered and occupies a majority of computer system 600. It
should be recognized that computer system 600 can display representation 604 in other manners. For example, computer system 600 can display representation 604 as another shape or color, such as a yellow hexagon or a green cloud. For another example, computer system 600 can display representation 604 with facial features, such as eyes and a mouth. In some embodiments, computer system 600 does not display representation 604 until a subject is detected.
[0347] At FIG. 13B, while detecting the state of environment 1302 (e.g., that light 1320 is turned on and/or that the current level of light within environment 1302 is above the threshold level of light) , computer system 600 is able to utilize one or more input devices. In some embodiments, while in the higher-power mode, computer system 600 enables one or more components. In such embodiments, computer system 600 enables one or more input devices such as one or more cameras, light sensors, and/or motion sensors. For example, computer system 600 can turn on one or more cameras (e.g., previously disabled in the lower-power mode, as discussed above) as part of transitioning to the higher-power mode. In some embodiments, while in the higher-power mode, computer system 600 enables one or more movement components. In such embodiments, computer system 600 moves, via the one or more movement components, the portion of computer system 600 (e.g., a portion including a display and/or one or more input devices) . For example, computer system 600 can move the portion of computer system 600 to greet a detected subject. For another example, after enabling the one or more movement components, computer system 600 can move the portion of computer system 600 to attempt to detect a subject within environment 1302.
[0348] It should be recognized that computer system 600 can transition from the lower-power mode to the higher-power mode
based on detecting an input rather than based on light 1320. In such embodiments, computer system 600 can transition from the lower-power mode to the higher-power mode in response to detecting an input, via the one or more input devices, from a subject. For example, computer system 600 can transition from the lower-power mode to the higher-power mode in response to a subject tapping computer system 600 (e.g., tapping a touch-sensitive display and/or pressing a button to wake computer system 600). For another example, computer system 600 can transition from the lower-power mode to the higher-power mode in response to detecting a voice input and/or speech input directed to and/or referencing computer system 600 (e.g., a request for information that computer system 600 can provide and/or a request including an activation phrase corresponding to computer system 600).
[0349] In some embodiments, in response to detecting the state of environment 1302 (e.g., that light 1320 is turned on and/or that the current level of light within environment 1302 is above the threshold level of light), computer system 600 performs a sweep to detect whether any subjects are in environment 1302. In such embodiments, after transitioning to the higher-power mode (and, optionally, in response to initially detecting a lack of subjects), computer system 600 moves the portion of computer system 600 (e.g., that includes one or more input devices and/or one or more displays) in an attempt to detect a subject. For example, after transitioning to the higher-power mode, computer system 600 can be oriented towards an initial portion of environment 1302 and/or with an initial field of view (e.g., looking into a kitchen), and computer system 600 moves to try to detect a subject in a different portion of environment 1302 and/or within a different field of view (e.g., looking into a living room). In some embodiments, computer system 600 determines
where to move based on activity within environment 1302. For example, computer system 600 can move the portion of computer system 600 to direct the one or more sensors of computer system 600 towards a location of the activity (e.g., for further detection of one or more subjects involved in the activity). For another example, computer system 600 can move the portion of computer system 600 to direct the portion of computer system 600 and/or representation 604 towards the location of the activity (e.g., for presentation to the one or more subjects involved in the activity).
[0350] As illustrated in FIG. 13B, computer system 600 does not detect any points of interest (e.g., subjects) . For example, one or more users can turn on light 1320 with a light switch that is outside of the field of view of computer system 600. For another example, light 1320 can be a natural light source such as an opened window and/or a rising sun. In some embodiments, computer system 600 transitions back to the lower-power mode in response to detecting a lack of any input and/or detecting a lack of any subject (e.g., within a threshold amount of time) . For example, after computer system 600 has transitioned to the higher-power mode, computer system 600 detects that no people are within environment 1302, so computer system 600 transitions back to the lower-power mode.
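The fallback described above (transitioning back to the lower-power mode when no subject or input is detected within a threshold amount of time) can be sketched as follows; the timeout value and names are illustrative assumptions.

```python
# Hypothetical sketch of reverting to the lower-power mode: after waking,
# the system goes back to sleep if no subject is detected within a
# threshold amount of time.

NO_SUBJECT_TIMEOUT_S = 30.0  # illustrative threshold amount of time

def next_mode(current_mode: str, subject_detected: bool,
              seconds_without_subject: float) -> str:
    """Return the power mode after a detection pass."""
    if (current_mode == "higher" and not subject_detected
            and seconds_without_subject >= NO_SUBJECT_TIMEOUT_S):
        return "lower"  # no subject within the threshold amount of time
    return current_mode
```

For example, in the FIG. 13B scenario the light turns on but no people are in the field of view, so after the timeout elapses the mode returns to "lower".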
[0351] As illustrated in FIG. 13C, computer system 600 detects subject 1308, subject 1306, and subject 1304, which are now within the field of view of computer system 600. At FIG. 13C, subject 1308, subject 1306, and subject 1304 are people within environment 1302. In such embodiments, the gaze (e.g., field of view, eye direction, and/or attention) of subject 1308, subject 1306, or subject 1304 can correspond to an orientation of subject 1308, subject 1306, or subject 1304. For example, computer system 600 can detect that subject 1308 and subject 1306 are oriented towards each other. It should be recognized
that, while in this example subject 1308, subject 1306, and subject 1304 are people within environment 1302, one or more of the subjects can correspond to non-people points of interest within environment 1302. For example, subject 1308, subject 1306, and subject 1304 can be inanimate objects, such as doors, walls, and/or furniture, in environment 1302. It should also be recognized that subject 1308, subject 1306, and/or subject 1304 can be different types of points of interest. For example, subject 1304 can be a person while subject 1308 and subject 1306 can be two objects (e.g., two pieces of artwork on opposite sides of a hallway and/or two display screens oriented towards each other due to back-to-back desk arrangements). In some embodiments, computer system 600 detects the subjects through detecting speech corresponding to the subjects. For example, computer system 600 can detect a location and/or orientation of subject 1308 and subject 1306 in response to detecting subject 1308 and subject 1306 speaking to each other. In some embodiments, computer system 600 detects the subjects via one or more cameras (e.g., of and/or in communication with computer system 600). For example, computer system 600 can detect subject 1308, subject 1306, and subject 1304 through one or more cameras.
[0352] In some embodiments, in response to detecting the three subjects (and/or after displaying representation 604) , computer system 600 outputs a greeting. For example, in response to detecting the three subjects, computer system 600 can output "Good morning" or "How can I help?" in order to provide a cue that computer system 600 (and/or the software object) is active. For another example, after detecting the three subjects and in response to recognizing a subject (e.g., subject 1304) of the three subjects, computer system 600 can output "Hello, John" as computer system 600 is able to match the subject with an account for a user named John.
[0353] At FIG. 13C, while detecting subject 1308, subject 1306, and subject 1304, computer system 600 analyzes subject 1308, subject 1306, and subject 1304 to determine whether to direct the portion of computer system 600 and/or representation 604 towards subject 1308, subject 1306, or subject 1304 (e.g., based on detected levels of attention) . It should be recognized that directing the representation 604 can involve moving the portion of computer system 600 (e.g., with or without moving representation 604 within the display of computer system 600) . It should be further recognized that directing the representation 604 can involve altering representation 604 (e.g., with or without moving the portion of computer system 600) . One or more of the above actions can be taken (e.g., alone or in combination) to present representation 604 towards one or more subjects within environment 1302. In some embodiments, computer system 600 determines which subject to direct the portion of computer system 600 and/or representation 604 towards based on one or more of multiple factors corresponding to the three subjects and/or environment 1302 as described further below.
[0354] In some embodiments, computer system 600 determines which subject to direct representation 604 towards based on which subject is directing attention to computer system 600 (and/or more attention to computer system 600) . For example, computer system 600 can detect a gaze and/or orientation of each of subject 1308, subject 1306, and subject 1304 and determine whether a respective gaze and/or orientation is directed towards computer system 600. For another example, computer system 600 can detect an input from a subject and determine whether the input is a question or request that computer system 600 can answer. For another example, computer system 600 can detect an activity of each of subject 1308, subject 1306, and subject 1304 and determine whether the
activity indicates attention directed at computer system 600 or whether the activity indicates attention to something else (e.g., talking with another subject and/or simply moving throughout environment 1302).
[0355] In some embodiments, computer system 600 can determine which subject to direct representation 604 towards based on an arrangement (e.g., locations and/or orientations) of the subjects. For example, computer system 600 can detect a distance from computer system 600 to each of subject 1308, subject 1306, and subject 1304 and determine which of the subjects is closest to computer system 600. For another example, computer system 600 can detect a distance between each pair of subject 1308, subject 1306, and subject 1304 and determine whether any of them is outside of a group formed by the others.
[0356] In some embodiments, computer system 600 can determine which subject to direct representation 604 towards based on a detected identity and/or other feature of the subjects. For example, computer system 600 can detect each of subject 1308, subject 1306, and subject 1304 and determine whether any of them is recognized (e.g., whether computer system 600 can match the subject to an account and/or user known by computer system 600). For another example, computer system 600 can detect each of subject 1308, subject 1306, and subject 1304 and identify each of them as a particular type of subject (e.g., whether the subject is a person and/or an inanimate object).
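One way to combine the factors from paragraphs [0354]-[0356] (attention, proximity, and identity) is a simple weighted score over detected subjects. The data model and weights below are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical scoring of subjects to decide where to direct the
# representation: attention dominates, with smaller bonuses for being a
# recognized person and for being closer to the system.
from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    attending: bool    # gaze/orientation directed towards the system
    distance_m: float  # distance from the system, in meters
    recognized: bool   # matched to a known user account
    is_person: bool

def score(s: Subject) -> float:
    total = 0.0
    if s.attending:
        total += 10.0  # directing attention to the system weighs most
    if s.recognized:
        total += 2.0
    if s.is_person:
        total += 1.0
    total += 1.0 / max(s.distance_m, 0.1)  # closer subjects score higher
    return total

def choose_target(subjects: list) -> Subject:
    return max(subjects, key=score)

subjects = [
    Subject("1308", attending=False, distance_m=3.0, recognized=False, is_person=True),
    Subject("1306", attending=False, distance_m=3.0, recognized=False, is_person=True),
    Subject("1304", attending=True, distance_m=1.5, recognized=True, is_person=True),
]
```

With these weights, subject 1304 (the only attending, recognized, and closest subject, as in FIG. 13C) is selected.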
[0357] In some embodiments, computer system 600 transitions from the higher-power mode to the lower-power mode if detections with respect to subject 1308, subject 1306, and subject 1304 indicate that certain criteria are not satisfied. For example, computer system 600 can transition to the lower-power mode if none of the subjects are directing attention to computer system 600 (e.g., people are within environment 1302 but located in another part of environment 1302, such as across a large room, and/or the people are talking to each other).
[0358] At FIG. 13C, computer system 600 detects one or more conditions (e.g., a state of environment 1302, subject 1308, subject 1306, and/or subject 1304) that serve as a basis for directing representation 604. As illustrated in FIG. 13C, computer system 600 detects that subject 1304 is the subject within environment 1302 that is closest to computer system 600 (e.g., closer than either subject 1308 or subject 1306) and that subject 1304 is the only subject within environment 1302 directing attention to computer system 600. In some embodiments, computer system 600 detects one or more other conditions, such as an input from subject 1304.
[0359] As illustrated in FIG. 13D, in response to detecting subject 1304 (and/or subject 1308 and/or subject 1306), computer system 600 moves the portion of computer system 600 towards subject 1304. In some embodiments, the portion of computer system 600 includes the one or more cameras that provide the field of view of computer system 600. As illustrated in FIG. 13D, computer system 600 moves the portion of computer system 600 towards subject 1304 to provide a visual cue that computer system 600 is directing attention to subject 1304 and/or to prioritize subject 1304 over other subjects (e.g., providing subject 1304 an optimal viewing angle of representation 604 and/or content displayed by computer system 600). In some embodiments, such movement is similar to movement performed by a center stage feature that uses an ultrawide camera and a neural engine to automatically pan and zoom to change a frame, such as when someone moves around.
[0360] As further illustrated in FIG. 13D, in response to detecting subject 1304 (and/or subject 1308 and/or subject 1306) , computer system 600 alters representation 604 to direct representation 604 towards subject 1304. In some embodiments, altering representation 604 can include changing one or more visual characteristics of representation 604 such as size, position and/or location, color, orientation (e.g., 2D and 3D) , and/or expression (e.g., facial expression) . For example, computer system 600 can alter representation 604 by increasing a size of representation 604 to provide subject 1304 with a visual cue that representation 604 is directing attention to subject 1304. For another example, computer system 600 can alter representation 604 by changing an expression of representation 604 (e.g., changing positioning of eyes and/or emoting representation 604 to provide a visual cue that computer system 600 is listening to subject 1304) . For another example, computer system 600 can turn representation 604 to emulate representation 604 turning in 3D space to direct attention to subject 1304. As illustrated in FIG. 13D, computer system 600 alters representation 604 by tilting representation 604 to provide a visual cue that computer system 600 is listening to subject 1304. In some embodiments, computer system 600 alters representation 604 without movement of the portion of computer system 600. In such embodiments, computer system 600 uses one or more visual changes to representation 604 to compensate for a lack of movement. For example, computer system 600 can increase a size of representation 604 and turn representation 604 to emulate representation 604 looking out of a display towards subject 1304. In some embodiments, computer system 600 moves the portion of computer system 600 without altering representation 604.
[0361] As illustrated in FIG. 13D, computer system 600 detects input 1305d (e.g., as illustrated by a speech bubble coming from subject 1304). At FIG. 13D, input 1305d is a verbal input requesting the current weather (e.g., "What is the weather?"). It should be recognized that input 1305d is an example of one type of input and can be other types of inputs, such as an input into a request field provided by computer system 600 and/or another device in communication with computer system 600. For example, subject 1304 can input "current weather forecast?" or "current temperature" into a textbox and receive the same response as discussed below.
[0362] At FIG. 13E, in response to detecting input 1305d requesting the current weather, computer system 600 displays weather indication 1310. In some embodiments, weather indication 1310 is a widget and/or interactable piece of content corresponding to a weather application of computer system 600. For example, in response to detecting an input directed to weather indication 1310, computer system 600 can display a weather application user interface that includes additional weather information (e.g., weekly forecast, weather related health information such as pollen count and/or air quality, and/or weather map of a locality corresponding to computer system 600) . In some embodiments, the software object obtains a current temperature to be included in weather indication 1310. For example, computer system 600, through the software object, can access a search engine to obtain a current temperature to display in weather indication 1310 rather than utilizing a system weather application. In some embodiments, weather indication 1310 contains additional content for subject 1304. For example, weather indication 1310 includes weather information for a commute of subject 1304 and/or at a time that subject 1304 normally arrives at work.
[0363] As illustrated in FIG. 13E, in response to detecting input 1305d, computer system 600 alters an appearance of representation 604. As illustrated in FIG. 13E, computer system 600 shrinks representation 604. In this embodiment, computer system 600 shrinks representation 604 in order to accommodate weather indication 1310. For example, computer system 600 can display representation 604 at different sizes based on newly displayed content (e.g., weather indication 1310). For another example, where weather indication 1310 is displayed larger, representation 604 can be displayed at a smaller size to accommodate the relatively larger amount of space occupied by weather indication 1310. For another example, where weather indication 1310 is displayed at a smaller size, representation 604 can be displayed at a larger size owing to the remaining space that is not occupied by weather indication 1310. In some embodiments, computer system 600 shrinks representation 604 in order to prioritize display of weather indication 1310. For example, computer system 600 can visually deemphasize representation 604 (e.g., by shrinking it and/or changing other visual characteristics, as discussed above) to draw a subject's attention towards newly displayed content (e.g., weather indication 1310) over representation 604.
[0364] At FIG. 13F, computer system 600 detects that subject 1304 is no longer directing attention to computer system 600 and, in response, computer system 600 ceases display of weather indication 1310. In some embodiments, computer system 600 detects that subject 1304 is no longer directing attention to computer system 600 by detecting a location of subject 1304. For example, computer system 600 can detect that subject 1304 has moved beyond a threshold distance (e.g., a predefined distance and/or a detection distance of one or more input devices) away from computer system 600 and, in response, cease display of weather indication 1310. In some embodiments,
computer system 600 detects that subject 1304 is no longer directing attention to computer system 600 by detecting movement of subject 1304 within environment 1302. For example, computer system 600 can detect that subject 1304 is walking away from computer system 600 and, in response, cease display of weather indication 1310. In some embodiments, computer system 600 detects that subject 1304 is no longer directing attention to computer system 600 by detecting that subject 1304 is directing attention to an object other than computer system 600 and/or with a field of view that does not encompass computer system 600.
[0365] In some embodiments, computer system 600 modifies displayed content based on a detection of a subject other than subject 1304 and/or based on a type of content being displayed. For example, computer system 600 can cease display of certain content (e.g., personalized and/or private content associated with subject 1304) in response to detecting that subject 1306 is directing attention to computer system 600 (e.g., to prevent onlookers from viewing content for a particular subject and/or to prevent exposure of personalized content and/or private information). It should be recognized that display of the content can be maintained when another subject, such as subject 1306, is not detected as directing attention to computer system 600. It should also be recognized that, even when another subject is directing attention to computer system 600, display of some content can be maintained depending on the type of content and one or more determined characteristics of the other subject.
[0366] In some embodiments, even when computer system 600 determines that display of certain content is to be ceased, computer system 600 maintains display of the content for an amount of time before ceasing display of the content (e.g., allowing for brief lapses in attention and/or brief looks away
from computer system 600). In some embodiments, after ceasing display of the content and in response to detecting subject 1304 directing attention back to computer system 600, computer system 600 redisplays the content. For example, computer system 600 can redisplay the content to allow subject 1304 to resume the interaction that caused display of the content. In some embodiments, after ceasing display of the content and in response to detecting subject 1304 directing attention back to computer system 600, computer system 600 does not display the content. For example, computer system 600 might not display the content to allow subject 1304 to start a new interaction with computer system 600 (e.g., based on a determination that an amount of time since the prior interaction exceeds a threshold amount of time).
[0367] In some embodiments, computer system 600 maintains display of content (e.g., despite detecting that subject 1306 is directing attention to computer system 600) due to a particular relationship between subject 1304 and subject 1306. For example, computer system 600 can maintain display of certain content (e.g., content for which display would otherwise be modified or ceased) based on a determination that subject 1306 and subject 1304 are members of a common group of users (e.g., a group of users previously linked through a common account and/or content sharing group) and/or a family of users (e.g., a household of users and/or an assignment of users).
[0368] In some embodiments, computer system 600 maintains display of certain content (e.g., despite detecting that subject 1306 is directing attention to computer system 600) based on the type of content being displayed. For example, computer system 600 can maintain display of weather indication 1310 due to weather indication 1310 being general content (e.g., content that does not include user information and/or content that includes information corresponding to any user
within environment 1302) . For another example, computer system 600 can maintain display of system information, such as a current time, battery level, and/or connectivity status, as the system information is relevant to any subject interacting with computer system 600. It should be recognized that display of other types of content can be modified and/or ceased, as described herein.
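The content-maintenance decisions of paragraphs [0367] and [0368] can be combined into one small predicate. The content categories, group representation, and function name below are assumptions chosen for illustration, not part of the disclosure:

```python
# Assumed set of "general" content types that any subject may view ([0368]).
GENERAL_TYPES = {"weather", "time", "battery", "connectivity"}

def maintain_display(content_type, owner, onlooker, shared_groups):
    """Return True when display should be maintained despite an onlooker.

    shared_groups: iterable of sets of user identifiers, each representing
    a common group or family of users ([0367]).
    """
    if content_type in GENERAL_TYPES:
        # General and system content is maintained for any subject.
        return True
    # Personalized content is maintained only when owner and onlooker
    # belong to a common group and/or family of users.
    return any(owner in g and onlooker in g for g in shared_groups)
```

For instance, a weather indication survives any onlooker, while personalized messages survive only for household members, mirroring the two rules above.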
[0369] As illustrated in FIG. 13F, in response to detecting that subject 1304 is no longer directing attention to computer system 600, computer system 600 alters an appearance of representation 604 (e.g., after ceasing display of weather indication 1310) . As illustrated in FIG. 13F, computer system 600 displays representation 604 at its previous size (e.g., size of representation 604 prior to displaying additional content and/or a size greater than the reduced size in FIG. 13E) . As further illustrated in FIG. 13F, computer system 600 repositions representation 604 prominently within a user interface by displaying representation 604 as centered and as the only piece of content. In some embodiments, computer system 600 displays representation 604 at the increased size (e.g., as compared to in FIG. 13E) to provide a visual indication that representation 604 has reclaimed the display. For example, computer system 600 can display representation 604 as reclaiming the display in order to provide a visual cue to subjects within environment 1302 that the software object is ready for requests and/or interactions. For another example, computer system 600 can display representation 604 as reclaiming the display in order to provide a visual indication that the previous interaction (e.g., weather request from subject 1304) is completed. As illustrated in FIG. 13F, in response to detecting that subject 1304 is no longer directing attention to computer system 600, computer system 600 moves
the portion of computer system 600 to its initial position (e.g., the position illustrated in FIGS. 13A-13C) .
[0370] At FIG. 13F, after moving the portion of computer system 600 towards the initial position, computer system 600 detects that the three subjects (e.g., subject 1308, subject 1306, and subject 1304) are not directing attention to computer system 600. In some embodiments, in response to detecting the three subjects still within the field of view of computer system 600 but not directing their attention to computer system 600, computer system 600 remains active (e.g., as discussed above with respect to transitioning to the higher-power mode) and ready to receive an input.
[0371] At FIG. 13G, computer system 600 detects that subject 1308 and subject 1306 are directing attention to computer system 600 (e.g., as illustrated by their respective fields of view) and, in response, computer system 600 moves the portion of computer system 600 and/or representation 604 towards subject 1308 and/or subject 1306 rather than towards subject 1304. At FIG. 13G (e.g., after moving to the initial position) , computer system 600 detects that subject 1308 and subject 1306 have moved within environment 1302 and are directing attention to computer system 600. As discussed above, computer system 600 analyzes a number of factors corresponding to subject 1308 and subject 1306 and/or environment 1302 to determine where to direct the portion of computer system 600 and/or representation 604. In some embodiments, computer system 600 directs representation 604 and/or the portion of computer system 600 towards subject 1308 because subject 1308 is closer to computer system 600 than is subject 1306. It should be recognized that computer system 600 can utilize one or more additional or other factors to determine where to direct the portion of computer system 600 and/or representation 604. For example, computer system 600
can direct the portion of computer system 600 and/or representation 604 towards subject 1308 based on recognition (e.g., obtaining an account and/or credential) of subject 1308 and not subject 1306. In some embodiments, computer system 600 directs attention to subject 1308 in anticipation of an input and/or to reciprocate attention with subject 1308. For example, computer system 600 can direct the portion of computer system 600 and/or representation 604 towards subject 1308 in response to detecting the body language of subject 1308 (e.g., pointing towards computer system 600 and/or looking puzzled) .
[0372] In some embodiments, computer system 600 directs representation 604 towards subject 1308 based on a detection of subject 1304. For example, while subject 1304 remains the closest subject to computer system 600 (e.g., closer than subject 1308 or subject 1306) , computer system 600 can determine that subject 1304 is still not directing attention to computer system 600 (while subject 1308 is directing attention to computer system 600) . Thus, computer system 600 does not consider subject 1304 in the determination of whether to direct the portion of computer system 600 and/or representation 604 towards subject 1308 and/or subject 1306.
[0373] In some embodiments, computer system 600 directs attention at multiple subjects. For example, computer system 600 can determine an average or midpoint of a set of positions within environment 1302 corresponding to all of the multiple subjects within environment 1302 (e.g., to provide as many of the subjects as possible with a view of computer system 600 and/or to draw their attention towards computer system 600) . For another example, computer system 600 can determine an average or midpoint of a set of positions within environment 1302 corresponding to only some (e.g., fewer than all) of the multiple subjects based on their respective activities (e.g.,
computer system 600 directs attention to a point among only those subjects directing attention to computer system 600) . In some embodiments, computer system 600 directs attention based on a detection relating to multiple subjects (e.g., even if not directed at any one of the subjects) . For another example, computer system 600 can determine a point corresponding to activities of the multiple subjects (e.g., computer system 600 directs attention to a common point of interest to which one or more subjects are directing attention, such as one of the subjects and/or an object in environment 1302) .
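The average-or-midpoint targeting of paragraph [0373] can be sketched as below. The data shape (position tuples plus an "attending" flag) and the function name are assumptions; the disclosure does not prescribe a representation.

```python
def attention_target(subjects, only_attending=True):
    """Return the midpoint (x, y) of the relevant subjects' positions.

    subjects: list of dicts like {"pos": (x, y), "attending": bool}.
    With only_attending=True, only subjects directing attention to the
    system contribute ([0373], second example); otherwise all subjects
    within the environment contribute. Returns None with no relevant
    subject.
    """
    relevant = [s for s in subjects if not only_attending or s["attending"]]
    if not relevant:
        return None
    xs = [s["pos"][0] for s in relevant]
    ys = [s["pos"][1] for s in relevant]
    return (sum(xs) / len(relevant), sum(ys) / len(relevant))
```

The movable portion and/or representation 604 would then be directed toward the returned point, giving as many subjects as possible a view of the system.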
[0374] At FIG. 13G, in response to a detection of subject 1308, subject 1306, and/or subject 1304, computer system 600 moves the portion of computer system 600 towards subject 1308 and/or subject 1306. As discussed above, the portion of computer system 600 can include one or more cameras (e.g., the one or more cameras providing the field of view of computer system 600) and/or a display (e.g., that displays representation 604 as illustrated in the left portion of FIG. 13G) .
[0375] In some embodiments, after directing representation 604 towards subject 1308, computer system 600 detects subject 1308 requesting content. In such embodiments, computer system 600 can similarly display content for subject 1308 as discussed above with respect to weather indication 1310. Additionally, computer system 600 can display content different from weather indication 1310 in response to a different request from subject 1308. In some embodiments, the content corresponds to the requesting subject. For example, even when subject 1308 and subject 1304 both request weather content, computer system 600 can display different content that is tailored to the respective subject (e.g., different work commutes and/or different work locations) .
[0376] At FIG. 13H, in response to detecting a state of environment 1302 (e.g., that light 1320 is off and/or blocked from environment 1302) , computer system 600 transitions from the higher-power mode to the lower-power mode. At FIG. 13H, computer system 600 detects that light 1320 has been turned off and/or is blocked from environment 1302 (e.g., environment 1302 does not have a current level of light that is above the threshold amount of light) . As illustrated in FIG. 13H, computer system 600 ceases display of representation 604 (e.g., as part of transitioning from the higher-power mode to the lower-power mode) . As discussed above, while in the lower- power mode, computer system 600 can turn off one or more components such as a display, one or more input devices, and/or one or more output devices. Additionally, while in the lower-power mode, computer system 600 can utilize different input devices to detect points of interest within environment 1302. For example, as discussed above, while in a lower light environment, computer system 600 can utilize a lidar sensor and/or infrared sensor rather than one or more cameras.
[0377] In some embodiments, computer system 600 transitions from the higher-power mode to the lower-power mode based on one or more other criteria. For example, computer system 600 can transition from the higher-power mode to the lower-power mode in response to detecting that a subject is not directing attention to computer system 600 (e.g., within or exceeding a threshold amount of time) . For another example, computer system 600 can transition from the higher-power mode to the lower-power mode in response to detecting the lack of a subject (e.g., within or exceeding a threshold amount of time) within environment 1302. In some embodiments, prior to transitioning from the higher-power mode to the lower-power mode, computer system 600 performs movement, via one or more movement components, to attempt to detect a subject. In some
embodiments, even though computer system 600 detects one or more points of interest, computer system 600 transitions from the higher-power mode to the lower-power mode. For example, computer system 600 can detect that none of the points of interest correspond to subjects and/or none of the points of interest are directing attention to computer system 600.
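The mode-transition criteria of paragraphs [0376] and [0377] (light below a threshold, or no subject within a timeout) can be condensed into one decision function. The threshold values and names are assumptions; the disclosure leaves them unspecified.

```python
# Assumed thresholds for illustration only.
LIGHT_THRESHOLD = 10.0        # minimum ambient light level (arbitrary units)
NO_SUBJECT_TIMEOUT_S = 60.0   # time without a detected, attending subject

def next_power_mode(light_level, seconds_since_subject):
    """Return "high" or "low" for the power mode the system should adopt."""
    if light_level < LIGHT_THRESHOLD:
        return "low"   # e.g., light 1320 turned off or blocked ([0376])
    if seconds_since_subject > NO_SUBJECT_TIMEOUT_S:
        return "low"   # lack of a subject within the threshold time ([0377])
    return "high"
```

In the lower-power mode the system could then switch from cameras to a lidar and/or infrared sensor, as described above for low-light environments.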
[0378] FIG. 14 is a flow diagram illustrating a process (e.g., process 1400) for managing display of a representation of a software object based on a state of an environment in accordance with some embodiments. Some operations in process 1400 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted .
[0379] As described below, process 1400 provides an intuitive way for managing display of a representation of a software object based on a state of an environment. Process 1400 reduces the cognitive burden on a user, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to interact with such devices faster and more efficiently conserves power and increases the time between battery charges.
[0380] The devices, methods, and/or computer-readable storage mediums described below enhance the operability of the device and make the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and/or improves battery life of the device by enabling the user to use the device more quickly and efficiently. Displaying user interface elements with different appearances at different times (such as by selectively displaying a representation of a software object at different times) helps to avoid image persistence or burn-in effects that can occur with some display technologies when the same object is displayed with the same appearance at the same location repeatedly or for a long period of time. Reducing the number of inputs needed to perform an operation (such as by displaying a representation of a software object in response to a detection of a state of an environment) enhances the operability of the device by reducing the number of inputs and time required to perform a particular operation, reducing energy usage by the device. Providing additional control options without cluttering the UI with additional displayed controls (such as by selectively displaying a representation of a software object in response to a detection of a state of an environment) enhances the operability of the device by reducing unnecessary inputs and/or steps to navigate through different user interfaces or sets of controls, reducing energy usage by the device. Performing an operation when a set of conditions has been met without requiring further user input (such as by selectively displaying a representation of a software object based on a state of an environment) enhances the operability of the device by reducing unnecessary inputs and/or steps to navigate through different user interfaces or sets of controls, reducing energy usage by the device.
[0381] In some embodiments, process 1400 is performed at a computer system (e.g., 600) that is in communication (e.g., wired communication and/or wireless communication) with (and/or includes) one or more input devices (e.g., a camera, a depth sensor, a microphone, a light sensor, a flicker sensor, a hardware input mechanism, a rotatable input mechanism, a physical input mechanism, a mechanical button, a touch- sensitive button, a button, a crown, a knob, a dial, a physical slider, an accelerometer, a mouse, a keyboard, a touchpad, and/or a touch-sensitive surface) and one or more
display components (e.g., a display screen, a projector, a head mounted display, and/or a touch-sensitive display) . In some embodiments, the computer system is a watch, a phone, a tablet, a fitness tracking device, a processor, a head-mounted display (HMD) device, a communal device, a media device, a speaker, a television, an electronic device, and/or a personal computing device.
[0382] The computer system detects (1402) , via the one or more input devices, a state (e.g., 1320, 1308, 1306, 1304, 1305d, and/or a state as described herein with respect to FIG. 13B) (e.g., an amount of light, an amount of activity, an amount of movement, an input, a gesture, a manipulation of an object, a brightness of light, a source of light, and/or a level of light) of an environment (e.g., 1302) (e.g., a locality, a room, a location within a dwelling and/or structure, and/or a physical and/or virtually defined space) . In some embodiments, detecting the state of the environment includes detecting a level of light and/or magnitude of light within an environment that includes the computer system (e.g., a room with the computer system and an artificial light source and/or environmental light source) . In some embodiments, detecting the state of the environment corresponds to the computer system's ability to detect a state through the one or more input devices (e.g., enough light to enable a camera to detect an activity level within the environment and/or enough light to enable one or more hardware functions) .
[0383] In response to (1404) detecting the state of the environment, in accordance with a determination that the state of the environment satisfies a set of one or more criteria (e.g., a criterion satisfied based on enabling the one or more input devices, enabling viewing of the one or more display components, one or more environmental factors, enabling one or more hardware functions, and/or the light being over a
threshold magnitude) , the computer system displays (1406) , via the one or more display components, a representation (e.g., 604) (e.g., a character and/or a system avatar) of a software object (e.g., as described above with respect to FIG. 5) (e.g., as described herein with respect to FIGS. 13A and 13H) . In some embodiments, the one or more hardware functions include functionality provided by the one or more input devices such as detection and/or recognition of a subject (e.g., a person, a user, an object, and/or a point of interest) via one or more cameras. In some embodiments, the one or more hardware functions include functionality provided by the one or more display components (e.g., sufficient light to display content without harming a person's eyes and/or disrupting an environment through the displayed content) . In some embodiments, the threshold magnitude is a light level that allows the one or more hardware functions to be performed (e.g., light of a sufficient level to allow one or more cameras to recognize a subject within an environment) . In some embodiments, the threshold magnitude is a predefined value enabling operation of the one or more hardware functions (e.g., sufficient light for displayed content to not disrupt a dark room such as at night and/or while people are sleeping) . In some embodiments, the representation of the software object is a system avatar and/or visual representation to depict that a subject is interacting with the computer system (e.g., providing a visual cue that the subject is talking with and/or detected by the computer system by providing an avatar that is listening to the subject) .
In some embodiments, displaying the representation of the software object includes performing an operation on the representation of the software object to depict a status of the computer system (e.g., waking up, ready to perform an activity, listening to an input, and/or outputting a reply to the input) .
[0384] In response to (1404) detecting the state of the environment, in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, the computer system forgoes (1408) (and/or ceases) display of the representation of the software object (e.g., as described herein with respect to FIGS. 13B-13G) .
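Operations (1404) through (1408) of process 1400 can be sketched as a single branch. The specific criteria below (a light threshold plus subject presence) are one assumed instance of the "set of one or more criteria"; the action strings are illustrative stand-ins for the operations named in paragraphs [0383] through [0386].

```python
def respond_to_environment_state(state):
    """Choose actions from a detected environment state ((1404)-(1408)).

    state: dict with assumed keys "light_level" and "subject_present".
    """
    LIGHT_THRESHOLD = 10.0   # assumed threshold magnitude
    satisfied = (state["light_level"] >= LIGHT_THRESHOLD
                 and state["subject_present"])
    if satisfied:
        return ["display_representation",   # operation (1406)
                "enable_input_devices",     # paragraph [0385]
                "enable_movement"]          # paragraph [0386]
    return ["forgo_display"]                # operation (1408)
```

The branch makes explicit that enabling input devices and movement components accompanies display of the representation only when the criteria are satisfied.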
[0385] In some embodiments, in response to detecting the state of the environment and in accordance with the determination that the state of the environment satisfies the set of one or more criteria, the computer system enables at least one of the one or more input devices (e.g., as described herein with respect to FIGS. 13B and 13C) (e.g., one or more cameras, one or more motion sensors, and/or one or more sensors for detecting aspects of the environment such as people, objects, and/or changes to the environment) . In some embodiments, the computer system enables the at least one of the one or more input devices based on a determination that there is sufficient light within the environment to obtain information from the at least one of the one or more input devices. In some embodiments, the computer system enables the at least one of the one or more input devices in response to detecting an input by another input device (e.g., enabling a camera after detecting a threshold amount of light through a light sensor and/or enabling a motion sensor after detecting movement through a camera) .
[0386] In some embodiments, the computer system is in communication (e.g., wired communication and/or wireless communication) with (and/or includes) one or more movement components. In some embodiments, the one or more movement components include an actuator, a movable base, a rotatable component, a motor, a lift, a level, and/or a rotatable base. In some embodiments, in response to detecting the state of the environment and in accordance with the determination that the state of the environment satisfies the set of one or more criteria, the computer system enables the one or more movement components (e.g., as described herein with respect to FIG. 13B) . In some embodiments, enabling the one or more movement components includes moving, via the one or more movement components, a portion of the computer system (e.g., with respect to the environment and/or another portion of the computer system) . In some embodiments, after enabling the one or more movement components, the computer system moves the one or more input devices to attempt to detect a subject within the environment. In some embodiments, after (and/or in response to) enabling the one or more movement components and detecting a subject within the environment, the computer system moves a portion of the computer system towards (e.g., to face and/or be directed towards) the subject.
[0387] In some embodiments, the one or more input devices includes one or more cameras. In some embodiments, detecting the state of the environment includes detecting, via the one or more cameras, the state of the environment (e.g., as described herein with respect to FIGS. 13A-13D) . In some embodiments, detecting the state of the environment includes detecting, via the one or more cameras, a level of light within the environment (e.g., an amount of light relative to a required threshold amount of light and/or an amount of light measured against a range of light values of the environment) . In some embodiments, detecting the state of the environment includes detecting, via the one or more cameras, one or more subjects and/or activity of the one or more subjects within the environment.
[0388] In some embodiments, the one or more input devices includes one or more light sensors (e.g., one or more ambient light sensors) . In some embodiments, detecting the state of the environment includes detecting, via the one or more light
sensors, the state of the environment (e.g., as described herein with respect to FIGS. 13A and 13B) . In some embodiments, the one or more light sensors is configured to detect an intensity, brightness, and/or luminosity of ambient light in the environment. In some embodiments, detecting the state of the environment includes detecting, via the one or more light sensors, a level of light within the environment (e.g., an amount of light relative to a required threshold amount of light and/or an amount of light measured against a range of light values of the environment) . In some embodiments, detecting the state of the environment includes detecting, via the one or more light sensors, one or more subjects and/or activity of the one or more subjects within the environment.
[0389] In some embodiments, detecting the state of the environment includes detecting a level of light (e.g., 1320) within the environment (e.g., an amount of light relative to a required threshold amount of light and/or an amount of light relative to a range of light levels of the environment) (e.g., as described herein with respect to FIG. 13B) . In some embodiments, detecting the level of light within the environment includes detecting (and/or distinguishing between) , via the one or more input devices, natural light (e.g., sunlight and/or light from a window) and/or artificial light (e.g., light from a light bulb and/or device) . In some embodiments, the set of one or more criteria includes a criterion that is satisfied when the light is a particular type of light (e.g., not satisfied based on artificial light and/or satisfied based on natural light) .
[0390] In some embodiments, the set of one or more criteria includes a criterion that is satisfied when a change (e.g., between a first time and a second time) in a level of light (e.g., 1320) within the environment exceeds a threshold (e.g.,
as described herein with respect to FIG. 13B) . In some embodiments, the change in the level of light within the environment exceeds the threshold when the difference between a first level of light at a first time and a second level of light (e.g., different from the first level of light) at a second time (e.g., different from the first time) exceeds a light threshold. In some embodiments, the change in the level of light within the environment exceeds the threshold when a duration of time between the first time and the second time is within a time threshold.
[0391] In some embodiments, the set of one or more criteria includes a criterion that is satisfied when one or more subjects (e.g., 1308, 1306, and/or 1304) (e.g., one or more persons, one or more users, one or more objects, and/or one or more points of interest) are detected in the environment
(e.g., as described herein with respect to FIGS. 13C-13G) . In some embodiments, detecting the state of the environment includes detecting whether the one or more subjects are in the environment. In some embodiments, in response to detecting the state of the environment and in accordance with the determination that the state of the environment does not satisfy the set of one or more criteria, the computer system transitions into a lower-power mode (e.g., turning off the one or more display components, deactivating at least one of the one or more input devices, and/or moving a portion of the computer system to depict the lower-power mode) . In some embodiments, in response to detecting the state of the environment and in accordance with the determination that the state of the environment does not satisfy the set of one or more criteria, the computer system maintains one or more of the one or more input devices in an active state (e.g., maintaining detectability of a subject after ceasing display of the representation of the software object and/or
maintaining detectability of a subject after ceasing display of the representation of the software object and/or maintaining detectability to allow displaying the representation of the software object in response to detecting a subject) . In some embodiments, while displaying the representation of the software object, the computer system detects that no subject is currently in the environment. In some embodiments, in response to detecting that no subject is currently in the environment, the computer system ceases display of the representation of the software object. In some embodiments, while displaying the representation of the software object, the computer system detects at least one subject in the environment. In some embodiments, while and/or as part of detecting the state of the environment, the computer system detects at least one subject in the environment.
[0392] In some embodiments, the computer system is in communication (e.g., wired communication and/or wireless communication) with (and/or includes) one or more movement components. In some embodiments, the one or more movement components include an actuator, a movable base, a rotatable component, a motor, a lift, a level, and/or a rotatable base. In some embodiments, in conjunction with (e.g., before, while, or after) detecting the state of the environment (and/or before displaying the representation of the software object) , the computer system moves, via one or more movement components, a portion (e.g., that includes at least one of the one or more input devices and/or at least one of the one or more display components) of the computer system (e.g., as described herein with respect to FIG. 13B) . In some embodiments, the computer system moves the portion of the computer system through a sweep and/or through an angular distance to attempt to detect a subject within the environment. In some embodiments, the computer system moves the portion of the computer system by a threshold amount
(e.g., an amount of movement to allow detection of the entire environment and/or an amount of movement to cover a designated locality of the environment) . In some embodiments, while displaying the representation of the software object, the computer system moves the portion of the computer system, such as moves the portion through a sweep and/or through an angular distance to attempt to detect a subject within the environment .
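The sweep through an angular distance described in paragraph [0392] can be sketched as a stepped scan that stops on detection. The angular range, step size, and the `detect_subject` callback are assumptions standing in for the movement components and input devices:

```python
def sweep_for_subject(detect_subject, start_deg=-90, end_deg=90, step_deg=15):
    """Step the movable portion through an angular range seeking a subject.

    detect_subject: callable taking the current angle (degrees) and
    returning True when a subject is detected at that orientation.
    Returns the detection angle, or None if the sweep completes empty.
    """
    angle = start_deg
    while angle <= end_deg:
        # A real system would drive the movement components to `angle`
        # here before sampling the input devices.
        if detect_subject(angle):
            return angle
        angle += step_deg
    return None
```

If the sweep returns None, the system might then transition toward the lower-power mode, consistent with paragraph [0377].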
[0393] In some embodiments, the computer system is in communication (e.g., wired communication and/or wireless communication) with (and/or includes) one or more output devices (e.g., the one or more display components, one or more audio components, and/or one or more haptic components) . In some embodiments, the audio component includes a speaker, a smart speaker, a home theater system, a soundbar, a headphone, an earphone, an earbud, a television speaker, an augmented reality headset speaker, an audio jack, an optical audio output, a Bluetooth audio output, and/or an HDMI audio output. In some embodiments, in response to detecting the state of the environment and in accordance with the determination that the state of the environment satisfies the set of one or more criteria, the computer system outputs, via the one or more output devices, a greeting (e.g., as described herein with respect to FIG. 13C) (and/or content corresponding to, referencing, and/or based on the subject) . In some embodiments, the greeting includes a reference to the subject (e.g., acknowledgement of the subject and/or the detection of the subject) , an identification of the subject, content corresponding to the subject (e.g., calendar information, information from one or more applications, and/or one or more messages for the subject) , and/or generalized content (e.g., a news report and/or a current weather forecast) .
[0394] In some embodiments, the one or more input devices include one or more cameras, one or more microphones, or any combination thereof. In some embodiments, detecting the state of the environment includes detecting, via the one or more cameras, the one or more microphones, or any combination thereof, whether the one or more subjects are in the environment (e.g., as described herein with respect to FIG. 13D) . In some embodiments, as part of detecting the one or more subjects, the computer system recognizes the one or more subjects (e.g., performs facial recognition and/or image recognition to match the subject to a known user and/or account and/or performs speech recognition and/or audio recognition to match the subject to a known user and/or account) .
[0395] In some embodiments, the state of the environment is a first state of the environment (e.g., at a first time) . In some embodiments, while (or after) displaying the representation of the software object, the computer system detects, via the one or more input devices, a second state (e.g., 1320, 1308, 1306, 1304, 1305d, and/or a state as described herein with respect to FIG. 13B) (e.g., an amount of light, an amount of activity, an amount of movement, an input, a gesture, a manipulation of an object, a brightness of light, a source of light, and/or a level of light) of the environment (e.g., at a second time different from the first time) . In some embodiments, the second state of the environment is separate from the first state of the environment. In some embodiments, the second state of the environment is a reduced level of light relative to the first state of the environment. In some embodiments, the second state of the environment is a same type of state as the first state of the environment. In some embodiments, detecting the second state of the environment includes detecting a level of light and/or
magnitude of light within an environment that includes the computer system (e.g., a room with the computer system and an artificial light source and/or environmental light source) . In some embodiments, detecting the second state of the environment corresponds to an ability of the computer system to detect a state via the one or more input devices (e.g., that enough light is present to enable a camera to detect an activity level within the environment and/or that enough light is present to enable one or more hardware functions) . In some embodiments, in response to detecting the second state of the environment and in accordance with a determination that the second state of the environment does not satisfy the set of one or more criteria, the computer system ceases display of the representation of the software object (e.g., as described herein with respect to FIG. 13H) . In some embodiments, in response to detecting the second state of the environment and in accordance with the determination that the second state of the environment does not satisfy the set of one or more criteria, the computer system transitions into a lower-power mode (e.g., turning off the one or more display components, deactivating at least one of the one or more input devices, and/or moving a portion of the computer system to depict the lower-power mode) . In some embodiments, in response to detecting the second state of the environment and in accordance with the determination that the second state of the environment does not satisfy the set of one or more criteria, the computer system maintains one or more of the one or more input devices (e.g., maintaining detectability of a subject after ceasing display of the representation of the software object and/or maintaining detectability to allow displaying the representation of the software object in response to detecting a subject) .
In some embodiments, in response to detecting the second state of the environment and in accordance with a determination that the second state of the
environment satisfies the set of one or more criteria, the computer system maintains display of the representation of the software object.
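The environment-state behavior described above (ceasing display and entering a lower-power mode when the criteria fail, while keeping input devices available) can be sketched as follows. This is a minimal illustration, not the claimed implementation; `EnvironmentState`, `meets_criteria`, the 10-lux threshold, and the dictionary of device state are all assumptions introduced for the example.

```python
from dataclasses import dataclass

MIN_LUX = 10.0  # assumed minimum light level for reliable sensing (illustrative)

@dataclass
class EnvironmentState:
    light_level: float     # ambient light, in lux (assumed unit)
    activity_level: float  # detected activity, 0.0-1.0 (assumed scale)

def meets_criteria(state: EnvironmentState) -> bool:
    """One possible 'set of one or more criteria': enough light for sensing."""
    return state.light_level >= MIN_LUX

def on_environment_change(state: EnvironmentState, device: dict) -> dict:
    """Cease display and enter a lower-power mode when the criteria fail,
    while maintaining input devices so a subject can still be detected."""
    device = dict(device)
    if meets_criteria(state):
        device["avatar_visible"] = True    # maintain display of the representation
    else:
        device["avatar_visible"] = False   # cease display of the representation
        device["power_mode"] = "low"       # transition into a lower-power mode
        device["inputs_enabled"] = True    # keep one or more input devices active
    return device
```

With a dim environment the sketch hides the representation and drops to low power; with sufficient light it keeps the representation displayed.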
[0396] In some embodiments, while (and/or after) displaying the representation of the software object, the computer system detects, via the one or more input devices, an input (e.g., a voice input and/or a tap input) (e.g., 1305d) . In some embodiments, the input is from a subject in the environment. In some embodiments, in response to detecting the input, the computer system performs, via the software object, one or more operations based on (and/or requested within) the input (e.g., as described herein with respect to FIGS. 13D and 13E) . In some embodiments, the one or more operations includes one or more device actions (e.g., sending messages and/or calling another subject) , one or more alterations to the computer system (e.g., modifying information stored on the computer system and/or updating one or more settings) , and/or one or more application actions (e.g., operations performed through the use of one or more applications on the computer system such as sending an email, accessing a news report, accessing a current weather forecast, displaying streaming content, and/or accessing remotely stored information) . In some embodiments, in response to detecting the input and in accordance with a determination that a first set of one or more criteria is satisfied, the computer system outputs (e.g., displays) , via one or more output components (e.g., the one or more display components) , first content. In some embodiments, in response to detecting the input and in accordance with a determination that a second set of one or more criteria is satisfied, the computer system outputs (e.g., displays) , via one or more output components (e.g., the one or more display components) , second content different from the first content. In some embodiments, in response to detecting the input and in
accordance with a determination that the first set of one or more criteria and/or the second set of one or more criteria is not satisfied, the computer system forgoes output (e.g., display) of the first content and/or the second content.
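The conditional output described above, first content when a first criteria set is satisfied, second content when a second set is satisfied, and no output otherwise, can be sketched as a simple dispatcher. The keyword-matching "criteria" and the content strings are placeholders; the specification does not define concrete criteria.

```python
def respond_to_input(input_text: str):
    """Output first content, second content, or forgo output, depending on
    which (assumed) criteria set the detected input satisfies."""
    first_criteria = "weather" in input_text   # stand-in for the first set of criteria
    second_criteria = "news" in input_text     # stand-in for the second set of criteria
    if first_criteria:
        return "first content: current weather forecast"
    if second_criteria:
        return "second content: news report"
    return None  # forgo output when neither criteria set is satisfied
```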
[0397] Note that details of the processes described above with respect to process 1400 (e.g., FIG. 14) are also applicable in an analogous manner to other processes described herein. For example, process 1500 optionally includes one or more of the characteristics of the various processes described above with reference to process 1400. For example, the representation of the software object of process 1400 can be displayed in conjunction with the direction of the computer system of process 1500. For brevity, these details are not repeated herein.
[0398] FIG. 15 is a flow diagram illustrating a process (e.g., process 1500) for directing a computer system based on points of interest in an environment in accordance with some embodiments. Some operations in process 1500 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0399] As described below, process 1500 provides an intuitive way for directing a computer system based on points of interest in an environment. Process 1500 reduces the cognitive burden on a user, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to interact with such devices faster and more efficiently conserves power and increases the time between battery charges.
[0400] The devices, methods, and/or computer-readable storage mediums described below enhance the operability of the device and make the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and/or improves battery life of the device by enabling the user to use the device more quickly and efficiently. Providing improved feedback (such as by directing a computer system at different points of interest) enhances the operability of the device by reducing accidental and mistaken inputs, reducing energy usage by the device. Reducing the number of inputs needed to perform an operation (such as by directing a computer system in response to transitioning from an inactive mode to an active mode and based on points of interest detected within an environment) enhances the operability of the device by reducing the number of inputs and time required to perform a particular operation, reducing energy usage by the device. Performing an operation when a set of conditions has been met without requiring further user input (such as by directing a computer system in response to transitioning from an inactive mode to an active mode and based on points of interest detected within an environment) enhances the operability of the device by reducing unnecessary inputs and/or steps to navigate through different user interfaces or sets of controls, reducing energy usage by the device.
[0401] In some embodiments, process 1500 is performed at a computer system (e.g., 600) that is in communication (e.g., wired communication and/or wireless communication) with (and/or includes) one or more input devices (e.g., a camera, a depth sensor, a microphone, a light sensor, a flicker sensor, a hardware input mechanism, a rotatable input mechanism, a physical input mechanism, a mechanical button, a touch- sensitive button, a button, a crown, a knob, a dial, a physical slider, an accelerometer, a mouse, a keyboard, a touchpad, and/or a touch-sensitive surface) and one or more display components (e.g., a display screen, a projector, a
head-mounted display, and/or a touch-sensitive display). In some embodiments, the computer system is a watch, a phone, a tablet, a fitness tracking device, a processor, a head-mounted display (HMD) device, a communal device, a media device, a speaker, a television, an electronic device, and/or a personal computing device. In some embodiments, the computer system includes one or more movement components (e.g., an actuator, a movable base, a rotatable component, a motor, a lift, a level, and/or a rotatable base).
[0402] In response to (1502) transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation (e.g., 604) (e.g., a character and/or a system avatar) of a software object (e.g., as described above with respect to FIG. 5) , in accordance with a determination that a first point (e.g., 1308, 1306, or 1304) (e.g., a point within the environment and/or a point with a first interest score) of multiple points of interest detected within an environment (e.g., 1302) satisfies a set of one or more criteria, the computer system directs (1504) the computer system at the first point such that the representation of the software object is directed towards the first point (e.g., as described herein with respect to FIGS. 13D and 13E) . In some embodiments, performing the operation to direct the representation of the software object to the first point includes manipulating, via the one or more display components, the representation of the software object to depict the software object looking at and/or listening in a direction of the first point. In some embodiments, performing the operation to direct the representation of the software object to the first point includes moving, via a movement component (e.g., an actuator, a movable base, a rotatable component, a motor, a lift, a level, and/or a rotatable base) , a portion of the computer system that includes the one or more display
components towards the first point (e.g., in a direction that the one or more display components can be viewed at the first point). In some embodiments, moving the portion of the computer system that includes the one or more display components towards the first point includes manipulating, via the one or more display components, the representation of the software object to compensate for the movement of the portion of the computer system. In some embodiments, performing the operation to direct the representation of the software object to the first point includes manipulating the representation of the software object to depict a status of the computer system (e.g., waking up, ready to perform an activity, listening to an input, and/or outputting a reply to the input).
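The compensation described above, manipulating the representation so its apparent gaze stays on the point while the movement component rotates the display portion, amounts to keeping the sum of the base angle and the on-screen offset equal to the target direction. The sketch below illustrates this with planar angles; the function names, the degree-based representation, and the 15-degree step limit are assumptions for the example.

```python
def compensate_avatar(base_angle_deg: float, target_angle_deg: float) -> float:
    """On-screen offset so the avatar's apparent gaze stays locked on the
    target while the base rotates; result is wrapped into (-180, 180]."""
    return (target_angle_deg - base_angle_deg + 180.0) % 360.0 - 180.0

def step_toward(base_angle_deg: float, target_angle_deg: float,
                max_step_deg: float = 15.0):
    """Rotate the base toward the target by at most max_step_deg per step,
    returning the new base angle and the compensating avatar offset."""
    delta = (target_angle_deg - base_angle_deg + 180.0) % 360.0 - 180.0
    step = max(-max_step_deg, min(max_step_deg, delta))
    new_base = (base_angle_deg + step) % 360.0
    return new_base, compensate_avatar(new_base, target_angle_deg)
```

At every intermediate step the base angle plus the avatar offset equals the target angle, so the representation appears directed at the point throughout the physical movement.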
[0403] In response to (1502) transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, the representation of a software object, in accordance with a determination that a second point (e.g., 1308, 1306, or 1304) (e.g., another point within the environment and/or a point with a second interest score) of the multiple points of interest detected within the environment satisfies the set of one or more criteria, the computer system directs (1506) the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point (e.g., as described herein with respect to FIG. 13G) . In some embodiments, performing the operation to direct the representation of the software object to the second point includes manipulating, via the one or more display components, the representation of the software object to depict the software object looking at and/or listening in a direction of the second point. In some embodiments, performing the operation to direct the representation of the software object
to the second point includes moving, via a movement component (e.g., an actuator, a movable base, a rotatable component, a motor, a lift, a level, and/or a rotatable base), a portion of the computer system that includes the one or more display components towards the second point (e.g., in a direction that the one or more display components can be viewed at the second point). In some embodiments, moving the portion of the computer system that includes the one or more display components towards the second point includes manipulating, via the one or more display components, the representation of the software object to compensate for the movement of the portion of the computer system. In some embodiments, performing the operation to direct the representation of the software object to the second point includes manipulating the representation of the software object to depict a status of the computer system (e.g., waking up, ready to perform an activity, listening to an input, and/or outputting a reply to the input). In some embodiments, the computer system performs the operation to direct the representation of the software object to the second point (e.g., over the first point and/or instead of the first point) in response to determining that the second point has a higher interest score (e.g., the computer system recognizes that, of the multiple points of interest, the second point (e.g., a subject and/or a first subject that is speaking of a group of subjects) should be prioritized and/or focused on) than the first point (e.g., an object and/or a second subject that is not speaking of the group of subjects). In some embodiments, before performing the operation to direct the representation of the software object to the first point or the second point, the computer system detects, via the one or more input devices, multiple points of interest (e.g., subjects, light sources, objects, and/or parts) within an environment.
In some embodiments, detecting the multiple points of interest includes assigning an interest score to each
point of interest within the multiple points of interest. In some embodiments, the multiple points of interest are multiple subjects (e.g., subjects within a threshold distance from the computer system and/or subjects performing an activity within the environment such as talking and/or manipulating objects), objects (e.g., items that a subject is interacting with, movable objects, and/or items that match a context of the environment such as knives, pans, and/or ingredients within a kitchen), and/or parts of the environment (e.g., walls, doors, countertops, tables, and/or furniture). In some embodiments, the representation of the software object is a system avatar and/or visual representation to depict that a subject is interacting with the computer system (e.g., providing a visual cue that the subject is talking with and/or detected by the computer system by providing an avatar that is listening to the subject). In some embodiments, the inactive mode is a lower-power state and/or an idle mode (e.g., the computer system has not detected an input and/or a point of interest for a threshold amount of time and transitions to the inactive mode). In some embodiments, one or more functions of the computer system are disabled and/or unavailable in the inactive mode (e.g., accessing information associated with a user and/or outputting content) until the computer system transitions to the active mode. In some embodiments, the active mode is a higher-powered mode and/or interactable mode (e.g., the computer system detects an input and/or event and, in response, transitions into the active mode). In some embodiments, displaying the representation of the software object includes performing an operation on the representation of the software object to depict a status of the computer system (e.g., waking up, ready to perform an activity, listening to an input, and/or outputting a reply to the input).
[0404] In some embodiments, before displaying the representation of the software object, the computer system detects, via the one or more input devices, an input (e.g., a voice input and/or a tap input) corresponding to (e.g., including, associated with, and/or connected to) a request to wake (e.g., the computer system and/or the software object) (e.g., as described herein with respect to FIGS. 13A and 13B) . In some embodiments, detecting the input corresponding to the request to wake includes detecting an input and/or activity directed to the computer system (e.g., a request for the computer system to perform an operation such as outputting content, providing a response to a request, and/or interacting with an application) . In some embodiments, detecting the input corresponding to the request to wake includes detecting an input and/or activity not directed to the computer system (e.g., the computer system passively detects activity within an environment, the computer system detects context of speech between two subjects, and/or the computer system detects an operation that can be performed by the computer system based on an activity and/or input of a subject) . In some embodiments, the operation that can be performed by the computer system based on the activity of the subject includes outputting a weather update based on a subject discussing a current weather forecast with another subject and/or creating a calendar event based on the subject discussing an upcoming meeting. In some embodiments, in response to detecting the input corresponding to the request to wake, the computer system transitions from the inactive mode to the active mode (e.g., as described herein with respect to FIG. 13B) .
[0405] In some embodiments, the input corresponding to the request to wake is an input (e.g., 1305d and/or as described herein with respect to FIG. 13B) (e.g., a verbal input and/or a tap input) from a subject (e.g., 1308, 1306, and/or 1304)
(e.g., a user, a person, an animal, another computer system different from the computer system, a device, and/or an object) within the environment. In some embodiments, the input from the subject within the environment is a touch and/or tap input directed to a portion of the computer system (e.g., the display component and/or a physical input mechanism) . In some embodiments, the input from the subject is speech detected (e.g., passively detected and/or actively directed to the computer system) by the computer system (e.g., the subject speaking to another subject within the environment and/or the subject speaking an activation phrase) . In some embodiments, the input from the subject within the environment is detected via the one or more input devices.
[0406] In some embodiments, before displaying the representation of the software object, the computer system detects, via the one or more input devices, a state (e.g., 1320, 1308, 1306, 1304, 1305d, and/or a state as described herein with respect to FIG. 13B) (e.g., an amount of light within the environment, a level of light within the environment, a level of activity of a subject within the environment and/or presence of one or more points of interest within the environment) of the environment. In some embodiments, detecting the state of the environment includes detecting light within the environment. In some embodiments, in response to detecting the state of the environment, the computer system transitions from the inactive mode to the active mode (e.g., as described herein with respect to FIGS. 13A and 13B) . In some embodiments, in response to detecting the state of the environment and in accordance with a determination that the state of the environment satisfies an additional set of one or more criteria, the computer system transitions from the inactive mode to the active mode. In some embodiments, in response to detecting the state of the
environment and in accordance with a determination that the state of the environment does not satisfy the additional set of one or more criteria, the computer system forgoes transition from the inactive mode to the active mode. In some embodiments, while transitioning from the inactive mode to the active mode, the computer system enables at least one of the one or more input devices, at least one of the one or more display components, and/or one or more movement components. In some embodiments, the computer system transitions from the inactive mode to the active mode in accordance with a determination that there is sufficient light within the environment to obtain effective readings from the input devices. In some embodiments, the computer system transitions from the inactive mode to the active mode in response to detecting an input by another input device (e.g., enabling a camera after detecting a threshold amount of light through a light sensor and/or enabling a motion sensor after detecting movement through a camera).
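The sensor cascade described above, keeping a low-power light sensor active in the inactive mode and enabling a higher-power input device (e.g., a camera) only once enough light is detected, can be sketched as a small state machine. The class, the threshold value, and the mode names are illustrative assumptions.

```python
LIGHT_WAKE_THRESHOLD = 10.0  # lux; an assumed value

class DeviceModes:
    """Minimal sketch of the inactive -> active transition gated on light."""

    def __init__(self):
        self.mode = "inactive"
        self.camera_enabled = False
        self.light_sensor_enabled = True  # stays on while in the inactive mode

    def on_light_reading(self, lux: float) -> None:
        """Enable the camera and wake only when enough light is present for
        the camera to obtain effective readings."""
        if self.mode == "inactive" and lux >= LIGHT_WAKE_THRESHOLD:
            self.camera_enabled = True
            self.mode = "active"
```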
[0407] In some embodiments, the first point of the multiple points of interest is a person (e.g., 1308, 1306, and/or 1304) (and/or subject) within the environment. In some embodiments, the multiple points of interest correspond to multiple people within the environment and/or a group of people within the environment. In some embodiments, the person is a known person (e.g., the computer system recognizes the person as a user and/or has an account corresponding to the person) . In some embodiments, the computer system determines that the person is more interesting and/or worth attention (e.g., closest, looking at the computer system, interacting with the computer system, and/or known to the computer system) relative to other people (e.g., other points of the multiple points of interest) . In some embodiments, the second point of the multiple points of interest is not an inanimate object.
[0408] In some embodiments, the second point of the multiple points of interest is an inanimate object (e.g., as described herein with respect to FIG. 13C) (e.g., an object within the environment such as an avocado, book, and/or painting, a device such as a phone, laptop, and/or accessory, and/or a component of the environment such as a window, door, piece of furniture, and/or wall) in the environment. In some embodiments, the second point of the multiple points of interest is not a person.
[0409] In some embodiments, the set of one or more criteria includes a criterion that is satisfied when a respective point (e.g., the first point of the multiple points of interest, the second point of the multiple points of interest, or another point of the multiple points of interest) of the multiple points of interest is within a threshold distance (e.g., as described herein with respect to FIGS. 13C and 13F) (e.g., is within a predefined distance and/or proximity) from the computer system. In some embodiments, the set of one or more criteria includes a criterion that is satisfied when the first point of the multiple points of interest moves from outside of the threshold distance to within the threshold distance. In some embodiments, the set of one or more criteria includes a criterion that is satisfied when the first point of the multiple points of interest remains within the threshold distance for a threshold amount of time (e.g., not merely passing through) .
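The distance criterion with a dwell requirement, satisfied when a point stays within the threshold distance for a threshold amount of time, so that a subject merely passing through does not trigger it, can be sketched over a series of timestamped distance samples. The threshold values and the sample format are assumptions for the example.

```python
THRESHOLD_M = 2.0  # assumed threshold distance, in meters
DWELL_S = 1.5      # assumed minimum time within the threshold, in seconds

def satisfies_distance_criterion(samples, threshold_m=THRESHOLD_M,
                                 dwell_s=DWELL_S) -> bool:
    """samples: chronological list of (timestamp_s, distance_m) for a point.
    Satisfied when the point remains within threshold_m for at least dwell_s;
    leaving the threshold resets the dwell timer."""
    inside_since = None
    for t, d in samples:
        if d <= threshold_m:
            if inside_since is None:
                inside_since = t            # just entered the threshold
            if t - inside_since >= dwell_s:
                return True                  # stayed long enough
        else:
            inside_since = None              # left the threshold; reset
    return False
```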
[0410] In some embodiments, the threshold distance is a first threshold distance. In some embodiments, the set of one or more criteria includes a criterion that is satisfied when the respective point of the multiple points of interest is within a second threshold distance from one or more other points of the multiple points of interest (e.g., as described herein with respect to FIGS. 13C and 13F) . In some embodiments, the
one or more other points of the multiple points of interest are a group of subjects. In some embodiments, the set of one or more criteria includes a criterion that is satisfied when the respective point of the multiple points of interest is not within the group of subjects. In some embodiments, the second threshold distance is different from the first threshold distance. In some embodiments, the second threshold distance is the same as the first threshold distance.
[0411] In some embodiments, the computer system is a communal device (e.g., as described herein with respect to FIGS. 13A- 13H) (and/or a resident device) . In some embodiments, the communal device is a device positioned within the environment (e.g., a locality, a room, a position within a home, office, and/or apartment, and/or an area defined by a subject) . In some embodiments, the communal device does not correspond to a subject (e.g., the communal device can be used by multiple subjects within the environment simultaneously and/or individually) . In some embodiments, the communal device is part of and/or manages an ecosystem of devices (e.g., a network of devices and/or a system of network connected devices) .
[0412] In some embodiments, the computer system detects (e.g., in response to transitioning from the inactive mode to the active mode and/or while displaying the representation of the software object) , via the one or more input devices (e.g., via one or more microphones) , speech (e.g., 1305d) corresponding to (e.g., from and/or associated with) the multiple points of interest. In some embodiments, the computer system detects the speech corresponding to the multiple points of interest passively and/or in response to detecting an activation phrase (e.g., detecting an input indicating that the speech is directed to the computer system and/or detecting an input
referencing the computer system). In some embodiments, the one or more input devices includes one or more microphones.
[0413] In some embodiments, the one or more input devices includes one or more cameras. In some embodiments, the computer system detects (e.g., in response to transitioning from the inactive mode to the active mode and/or while displaying the representation of the software object), via the one or more cameras, the multiple points of interest (e.g., as described herein with respect to FIGS. 13A-13D). In some embodiments, while detecting the multiple points of interest with the one or more cameras, the computer system attempts to recognize subjects and/or people (e.g., performs facial recognition and/or image recognition to match a point of interest of the multiple points of interest to a known user and/or account). In some embodiments, while detecting the multiple points of interest with the one or more cameras, the computer system attempts to recognize characteristics of objects (e.g., performing image recognition to determine brand, quantity, type of object, and/or object identification).
[0414] In some embodiments, the computer system is in communication (e.g., wired communication and/or wireless communication) with (and/or includes) one or more movement components. In some embodiments, the one or more movement components include an actuator, a movable base, a rotatable component, a motor, a lift, a level, and/or a rotatable base. In some embodiments, before detecting the multiple points of interest (and/or in response to transitioning from the inactive mode to the active mode and/or while displaying the representation of the software object), the computer system moves, via the one or more movement components, a portion (e.g., that includes at least one of the one or more input devices and/or that includes at least one of the one or more display components) of the computer system, wherein the multiple points of interest are detected in conjunction with (e.g., while, as a result of, as a part of, or after) moving the portion of the computer system (e.g., as described herein with respect to FIG. 13B). In some embodiments, before directing the computer system at the first point, the computer system moves the portion of the computer system to attempt to detect a point of interest within the environment. In some embodiments, before directing the computer system at the first point and after detecting the first point of interest within the environment, the computer system moves the portion of the computer system towards the first point of interest.
[0415] In some embodiments, the set of one or more criteria is a first set of one or more criteria. In some embodiments, in response to transitioning from the inactive mode to the active mode and in accordance with a determination that the multiple points of interest (e.g., 1308 and 1306) detected within the environment satisfy a second set of one or more criteria, the computer system directs the computer system at the multiple points (and/or a point between the multiple points) such that the representation of the software object is directed towards the multiple points, wherein the second set of one or more criteria is different from the first set of one or more criteria (e.g., as described herein with respect to FIG. 13G) . In some embodiments, the second set of one or more criteria includes a criterion that is satisfied when the multiple points are within a threshold distance from the computer system (and/or that a point of the multiple points is within a threshold distance from a different point of the multiple points) . In some embodiments, the second set of one or more criteria includes a criterion that is satisfied when the representation of the software object can be viewed by the multiple points (e.g., the multiple points can simultaneously
view a display and/or the representation of the software object). In some embodiments, in response to transitioning from the inactive mode to the active mode and in accordance with a determination that the multiple points of interest detected within the environment do not satisfy the second set of one or more criteria, the computer system forgoes direction of the computer system at the multiple points (and/or the point between the multiple points).
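Directing the system at a point between multiple points, when the group is close enough together to view the display at once, can be sketched by aiming at the centroid of the qualifying points, and forgoing direction otherwise. The planar coordinates, the spread metric, and the 3-meter group limit are assumptions introduced for the example.

```python
import math

def direction_to_points(system_xy, points_xy, max_group_spread_m=3.0):
    """Return the bearing (degrees) from system_xy to the centroid of
    points_xy when the group's spread is within max_group_spread_m;
    return None to indicate the system should forgo directing at the group."""
    xs = [p[0] for p in points_xy]
    ys = [p[1] for p in points_xy]
    spread = max(max(xs) - min(xs), max(ys) - min(ys))
    if spread > max_group_spread_m:
        return None  # second set of criteria not satisfied
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return math.degrees(math.atan2(cy - system_xy[1], cx - system_xy[0]))
```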
[0416] In some embodiments, the computer system is in communication (e.g., wired communication and/or wireless communication) with (and/or includes) one or more movement components. In some embodiments, the one or more movement components include an actuator, a movable base, a rotatable component, a motor, a lift, a level, and/or a rotatable base. In some embodiments, in response to transitioning from the inactive mode to the active mode and in accordance with the determination that the first point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, the computer system moves, via the one or more movement components, a portion (e.g., that includes at least one of the one or more input devices and/or that includes at least one of the one or more display components) of the computer system such that the portion of the computer system is directed towards the first point (e.g., as described herein with respect to FIGS. 13D and 13E). In some embodiments, in response to transitioning from the inactive mode to the active mode and in accordance with the determination that the second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, the computer system moves, via the one or more movement components, the portion of the computer system such that the portion of the computer system is directed towards the second point.
[0417] In some embodiments, directing the computer system at the first point such that the representation of the software object is directed towards the first point includes altering one or more visual characteristics (e.g., size, shape, directionality, location, color, and/or emphasis) of the representation of the software object (e.g., as described herein with respect to FIGS. 13D and 13E) . In some embodiments, altering the one or more visual characteristics of the representation of the software object includes moving the representation of the software object within a boundary of the one or more display components and/or a user interface. In some embodiments, altering the one or more visual characteristics of the representation of the software object includes turning the representation of the software object in a direction of the first point. In some embodiments, altering the one or more visual characteristics of the representation of the software object includes increasing a size of the representation of the software object (e.g., to depict that the computer system is directing attention to and/or gaining information from the environment) .
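The visual alterations described above, turning the representation toward the point, moving it within the display boundary, and enlarging it to depict attention, can be sketched as a pure function over the avatar's display properties. The property names, the 1.2x scale factor, and the 0.2 positional shift are illustrative assumptions, not part of the specification.

```python
def direct_avatar(avatar: dict, target_side: str) -> dict:
    """Alter visual characteristics so the representation appears directed
    at a point on the given side ('left' or 'right') of the display."""
    avatar = dict(avatar)
    avatar["facing"] = target_side                       # directionality
    avatar["scale"] = avatar.get("scale", 1.0) * 1.2     # size, to depict attention
    shift = 0.2 if target_side == "right" else -0.2
    # location, clamped so the avatar stays within the display boundary (0..1)
    avatar["x"] = max(0.0, min(1.0, avatar.get("x", 0.5) + shift))
    return avatar
```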
[0418] In some embodiments, in response to transitioning from the inactive mode to the active mode and in accordance with the determination that the first point of the multiple points of interest detected within the environment satisfies the set of one or more criteria (and/or after directing the computer system at the first point) , the computer system displays, via the one or more display components, content (e.g., 1310) corresponding to (e.g., for, associated with, and/or based on) the first point (e.g., as described herein with respect to FIG. 13E) . In some embodiments, in response to transitioning from the inactive mode to the active mode and in accordance with the determination that the second point of the multiple points of interest detected within the environment satisfies
the set of one or more criteria (and/or after directing the computer system at the second point), the computer system displays, via the one or more display components, content corresponding to (e.g., for, associated with, and/or based on) the second point. In some embodiments, the content corresponding to the first point includes a reference to the first point (e.g., acknowledgement of a person and/or an indication of detection of the first point), an identification of and/or greeting for a person corresponding to the first point, content corresponding to the first point (e.g., calendar information, information from one or more applications, and/or one or more messages for the subject), and/or generalized content relevant to the first point (e.g., a news report and/or a current weather forecast).
[0419] In some embodiments, the one or more display components are disabled while the computer system is in the inactive mode (e.g., as described herein with respect to FIGS. 13A and 13H) . In some embodiments, while the one or more display components are disabled, the computer system maintains at least one of the one or more input devices to allow for detecting a point of interest and/or light while in the inactive mode.
[0420] In some embodiments, before transitioning from the inactive mode to the active mode (and/or while in the inactive mode) , the computer system displays, via the one or more display components, content different from the representation of the software object (e.g., as described herein with respect to FIGS. 13A and 13H) (e.g., without displaying the representation of the software object) . In some embodiments, the content different from the representation of the software object includes generalized content (e.g., a news report and/or a current weather forecast) .
[0421] In some embodiments, the set of one or more criteria is a first set of one or more criteria. In some embodiments, while detecting the multiple points of interest and in accordance with a determination that a third set of one or more criteria is satisfied (e.g., that the multiple points of interest satisfy the third set of one or more criteria) , the computer system transitions from the active mode to the inactive mode, wherein the third set of one or more criteria is different from the first set of one or more criteria (e.g., as described herein with respect to FIG. 13H) . In some embodiments, in response to transitioning from the active mode to the inactive mode, the computer system disables one or more of the one or more display components and/or one or more of the one or more input devices. In some embodiments, in response to transitioning from the active mode to the inactive mode, the computer system ceases display of the representation of the software object.
[0422] In some embodiments, while displaying the representation of the software object, the computer system detects, via the one or more input devices, that the environment does not include a point of interest (and/or that there is no point of interest within the environment) . In some embodiments, in response to detecting that the environment does not include a point of interest, the computer system transitions from the active mode to the inactive mode (e.g., as described herein with respect to FIG. 13H) . In some embodiments, in response to transitioning from the active mode to the inactive mode, the computer system disables one or more of the one or more display components and/or one or more of the one or more input devices. In some embodiments, in response to transitioning from the active mode to the inactive mode, the computer system ceases display of the representation of the software object.
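The transitions in paragraphs [0421] and [0422] amount to a small state machine: the computer system leaves the active mode either when a distinct (third) set of criteria is satisfied or when no point of interest remains in the environment, and on that transition it disables display components and ceases display of the representation of the software object. A minimal sketch, with hypothetical names and dictionary keys:

```python
def update_mode(mode: str, points_of_interest: list,
                inactive_criteria_met: bool) -> dict:
    """Transition between active and inactive modes. The dict keys are
    illustrative; the disclosure only requires that one or more display
    components be disabled and that the representation cease display on
    the transition to the inactive mode."""
    if mode == "active" and (not points_of_interest or inactive_criteria_met):
        return {"mode": "inactive", "display_enabled": False,
                "representation_visible": False}
    return {"mode": mode, "display_enabled": mode == "active",
            "representation_visible": mode == "active"}
```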
[0423] Note that details of the processes described above with respect to process 1500 (e.g., FIG. 15) are also applicable in an analogous manner to other processes described herein. For example, process 1600 optionally includes one or more of the characteristics of the various processes described above with reference to process 1500. For example, the direction of the computer system of process 1500 can be performed in conjunction with display of the content and/or the representation of the software object of process 1600. For brevity, these details are not repeated herein.
[0424] FIG. 16 is a flow diagram illustrating a process (e.g., process 1600) for managing display of content and a representation of a software object based on attention of a subject in accordance with some embodiments. Some operations in process 1600 are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted.
[0425] As described below, process 1600 provides an intuitive way for managing display of content and a representation of a software object based on attention of a subject. Process 1600 reduces the cognitive burden on a user, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to interact with such devices faster and more efficiently conserves power and increases the time between battery charges.
[0426] The devices, methods, and/or computer-readable storage mediums described below enhance the operability of the device and make the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) which, additionally, reduces power usage and/or improves battery life of the device by enabling the user to use the device more
quickly and efficiently. Displaying user interface elements with different appearances at different times (such as by selectively displaying content and displaying a representation of a software object in different manners based on attention of a subject) helps to avoid image persistence or burn-in effects that can occur with some display technologies when the same object is displayed with the same appearance at the same location repeatedly or for a long period of time. Reducing the number of inputs needed to perform an operation (such as by selectively displaying content and displaying a representation of a software object in different manners based on attention of a subject) enhances the operability of the device by reducing the number of inputs and time required to perform a particular operation, reducing energy usage by the device. Performing an operation when a set of conditions has been met without requiring further user input (such as by selectively displaying content and displaying a representation of a software object in different manners based on attention of a subject) enhances the operability of the device by reducing unnecessary inputs and/or steps to navigate through different user interfaces or sets of controls, reducing energy usage by the device. Automatic deletion and/or removal of information (such as by selectively displaying content based on attention of a subject) improves privacy and security by limiting when that potentially sensitive information can be accessed.
[0427] In some embodiments, process 1600 is performed at a computer system (e.g., 600) that is in communication (e.g., wired communication and/or wireless communication) with (and/or includes) one or more input devices (e.g., a camera, a depth sensor, a microphone, a light sensor, a flicker sensor, a hardware input mechanism, a rotatable input mechanism, a physical input mechanism, a mechanical button, a touch-
sensitive button, a button, a crown, a knob, a dial, a physical slider, an accelerometer, a mouse, a keyboard, a touchpad, and/or a touch-sensitive surface) and one or more display components (e.g., a display screen, a projector, a head mounted display, and/or a touch-sensitive display). In some embodiments, the computer system is a watch, a phone, a tablet, a fitness tracking device, a processor, a head-mounted display (HMD) device, a communal device, a media device, a speaker, a television, an electronic device, and/or a personal computing device.
[0428] While displaying, via the one or more display components, (1) a representation (e.g., 604) (e.g., a character and/or a system avatar) of a software object (e.g., as described above with respect to FIG. 5) in a first manner (e.g., a first set of visual characteristics such as color, size, opacity, position, expression, orientation, and/or emphasis) and (2) content (e.g., 1310) (e.g., as described herein with respect to FIG. 13E) , the computer system detects (1602) , via the one or more input devices, attention of a subject (e.g., 1308) . In some embodiments, detecting the attention of the subject includes detecting, via the one or more input devices, a level of attention of the subject, and the attention of the subject is a level of attention of the subject that is less than a threshold value. In some embodiments, detecting the attention of the subject includes detecting, via the one or more input devices, that the subject is directing attention away from the computer system and/or at a point of interest within an environment (e.g., looking at and/or interacting with an object external to the computer system) . In some embodiments, the computer system detects the attention of the subject through recognizing (e.g., detecting with a certain confidence score and/or threshold detection confidence) the subject and/or one or more features
corresponding to the subject (e.g., recognizing that the subject's face and/or eyes are not looking towards the computer system). In some embodiments, the representation of the software object is a system avatar and/or visual representation to depict that a subject is interacting with the computer system (e.g., providing a visual cue that the subject is talking with and/or detected by the computer system by providing an avatar that is listening to the subject). In some embodiments, displaying the representation of the software object in the first manner includes performing an operation on the representation of the software object to depict a status of the computer system (e.g., waking up, ready to perform an activity, listening to an input, and/or outputting a reply to the input). In some embodiments, the content is content from an application (e.g., content retrieved from and/or sent by an application), content for a subject and/or about a subject (e.g., a response to a request and/or a notification), content from another subject (e.g., a text message, calendar invite, and/or call), and/or content about an event (e.g., end of a time and/or time until a meeting). In some embodiments, the content is text and/or audiovisual content (e.g., photos and/or videos).
[0429] In response to (1604) detecting the attention of the subject, in accordance with (1606) a determination that the attention of the subject satisfies a first set of one or more criteria (e.g., looking away from the computer system and/or directing attention to an object and/or point within an environment) , wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system, the computer system ceases (1608) display of the content (e.g., as described herein with respect to FIG. 13F) .
[0430] In response to (1604) detecting the attention of the subject, in accordance with (1606) the determination that the attention of the subject satisfies the first set of one or more criteria, wherein the first set of one or more criteria includes the criterion that is satisfied when the attention of the subject is not directed to the computer system, the computer system displays (1610), via the one or more output devices, the representation of the software object in a second manner (e.g., a second set of visual characteristics such as color, size, opacity, position, expression, orientation, and/or emphasis) different from (e.g., different in one or more visual characteristics and/or with altered visual characteristics as compared to the first visual characteristics) the first manner (e.g., as described herein with respect to FIG. 13F). In some embodiments, displaying the representation of the software object in the second manner includes performing an operation on the representation of the software object to depict a status of the computer system (e.g., waking up, ready to perform an activity, listening to an input, and/or outputting a reply to the input). In some embodiments, displaying the representation of the software object in the second manner includes displaying the representation of the software object overlapping and/or reclaiming a portion of a display previously taken by the content (e.g., taking advantage of freed up display space provided by the computer system ceasing display of the content).
[0431] In response to (1604) detecting the attention of the subject, in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, the computer system maintains (1612) display of (1) the content and (2) the representation of the software object
in the first manner (e.g., as described herein with respect to FIG. 13E).
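Steps (1604)-(1612) of process 1600 reduce to a single branch on whether the subject's attention is directed to the computer system: if the first set of criteria is satisfied (attention not directed at the system), cease display of the content and display the representation in the second manner; otherwise maintain both. The following sketch models only that branch; the state keys are hypothetical:

```python
def respond_to_attention(attention_directed_at_system: bool,
                         state: dict) -> dict:
    """Core branch of process 1600: when the subject's attention is not
    directed to the computer system (first set of criteria satisfied),
    cease display of the content and show the representation of the
    software object in a second manner; otherwise maintain both."""
    state = dict(state)  # do not mutate the caller's state
    if not attention_directed_at_system:
        state["content_visible"] = False
        state["representation_manner"] = "second"
    return state
```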
[0432] In some embodiments, displaying the representation of the software object in the first manner includes displaying, via the one or more display components, the representation of the software object having a first size (e.g., a reduced size and/or a smaller size) (e.g., as described herein with respect to FIG. 13E). In some embodiments, displaying the representation of the software object in the second manner includes displaying, via the one or more display components, the representation of the software object having a second size (e.g., a full size, an intermediate size, and/or an increased size) larger than the first size (e.g., as described herein with respect to FIG. 13F). In some embodiments, the computer system displays the representation of the software object at the first size in response to displaying the content. In some embodiments, the computer system displays the representation of the software object at the second size irrespective of the content (e.g., the second size does not depend on the content and/or the computer system makes space for the representation of the software object at the second size such as overlapping the content, moving the content, and/or reducing the size of the content for the representation of the software object).
[0433] In some embodiments, in accordance with a determination that the content has a third size (e.g., a full size, an intermediate size, and/or an increased size), the representation of the software object is displayed having the first size (e.g., as described herein with respect to FIG. 13E). In some embodiments, in accordance with a determination that the content has a fourth size (e.g., a reduced size and/or a smaller size), the representation of the software object is displayed having the second size (e.g., as described herein with respect to FIG. 13F). In some embodiments, the
fourth size is different from (e.g., smaller than) the third size. In some embodiments, the size of the representation of the software object is within a range of sizes (e.g., the computer system can select a size from the range of sizes based on the content and/or the computer system defaults to a size within the range of sizes). In some embodiments, the computer system can increase and/or decrease the size of the representation of the software object by a threshold amount (e.g., within threshold bounds and/or within a remaining portion of a user interface) based on the size of the content.
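Paragraphs [0432] and [0433] describe an inverse relationship: the larger the content, the smaller the representation of the software object, with the result clamped to a threshold range of sizes. One plausible reading of that relationship, with invented sizes and bounds:

```python
def representation_size(content_size: float,
                        min_size: float = 20.0,
                        max_size: float = 100.0,
                        display_width: float = 200.0) -> float:
    """Pick a representation size inversely related to the content size,
    clamped to a threshold range. All numeric defaults are hypothetical;
    the disclosure specifies only that a larger content size corresponds
    to a smaller representation, within threshold bounds."""
    remaining = max(display_width - content_size, 0.0)
    return min(max(remaining, min_size), max_size)
```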
[0434] In some embodiments, the computer system detects (e.g., while displaying the representation of the software object in the first manner and/or before displaying the content) , via the one or more input devices, an input (e.g., 1305d and as described herein with respect to FIG. 13D) (e.g., a voice input and/or a tap input) corresponding to (e.g., including and/or associated with) a request for content. In some embodiments, the input corresponding to the request for content is a question and/or request that can be answered and/or aided by displaying the content (e.g., the computer system displays a weather report in response to an input asking about what they should wear and/or the computer system displays a calendar in response to an input asking how busy they are) . In some embodiments, in response to detecting the input corresponding to the request for content, the computer system displays, via the one or more display components, the content (e.g., as described herein with respect to FIG. 13E) . In some embodiments, before displaying the content, the computer system displays the representation of the software object in a third manner different from the first manner and/or in the second manner. In some embodiments, in response to displaying the content, the computer system displays the representation of the software object in the first manner. In
some embodiments, the computer system displays the representation of the software object in the first manner to make room for and/or accommodate the content.
[0435] In some embodiments, the content corresponds to (e.g., is based on, is tailored for, and/or is associated with) the subject (e.g., as described herein with respect to FIG. 13E) . In some embodiments, the content is based on the input corresponding to the request for the content (e.g., the request includes references to particular content and/or a question that can be answered via the content) . In some embodiments, the content is associated with the subject (e.g., the content is accessible by a credential belonging to the subject and/or the content has been previously viewed by the subject) . In some embodiments, the content is tailored to the subject (e.g., the content is modified and/or customized based on a setting defined by the subject and/or to accommodate requests previously made by the subject such as greater font size and/or increased visuals) .
[0436] In some embodiments, detecting the attention of the subject includes detecting, via the one or more input devices, a location (e.g., a position within the environment and/or a position in relation to the computer system) of the subject within an environment (e.g., 1302) (e.g., as described herein with respect to FIGS. 13C and 13D) . In some embodiments, detecting attention of the subject includes detecting that the subject is within a threshold distance (e.g., is within a predefined distance and/or proximity) from the computer system.
[0437] In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when a location of the subject is outside (e.g., not within) a threshold distance from the computer system (e.g., as
described herein with respect to FIGS. 13C and 13F). In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when the subject remains outside the threshold distance for a threshold amount of time (e.g., not merely passing through).
[0438] In some embodiments, detecting the attention of the subject includes detecting, via the one or more input devices, a gaze (e.g., a direction of a gaze, an object to which the gaze is directed, and/or a field of view) of the subject (e.g., as described herein with respect to FIG. 13C) . In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when the gaze of the subject is not directed to the computer system and/or is directed away from the computer system.
[0439] In some embodiments, detecting the attention of the subject includes detecting an amount of time since the subject last interacted with the computer system (e.g., as described herein with respect to FIG. 13F). In some embodiments, the first set of one or more criteria includes a criterion that is satisfied when the amount of time since the subject last interacted with the computer system is greater than a threshold amount of time (e.g., a stale interaction and/or inferring that the interaction is completed).
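Paragraphs [0437]-[0439] each contribute an optional criterion to the first set: distance from the computer system, gaze direction, and time since the last interaction. The sketch below combines all three conjunctively as one example set; the threshold values are invented for illustration, and an actual set may include any subset of these criteria:

```python
def attention_away_criteria(distance: float, gaze_on_system: bool,
                            seconds_since_interaction: float,
                            distance_threshold: float = 3.0,
                            stale_after: float = 30.0) -> bool:
    """One hedged reading of the first set of criteria: attention is
    treated as 'away' when the subject is beyond a threshold distance,
    the gaze is not directed at the system, and the last interaction
    is stale. Thresholds are hypothetical."""
    return (distance > distance_threshold
            and not gaze_on_system
            and seconds_since_interaction > stale_after)
```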
[0440] In some embodiments, while displaying the content, the computer system detects, via the one or more input devices, attention (e.g., directed at the computer system) of another subject (e.g., 1306 and/or 1304) . In some embodiments, detecting the attention of the other subject includes detecting, via the one or more input devices, a level of attention of the other subject, and the attention of the other subject is a level of attention of the other subject that is greater than a threshold value. In some embodiments, detecting
the attention of the other subject includes detecting, via the one or more input devices, that the other subject is directing attention towards the computer system (e.g., looking at and/or interacting with the computer system). In some embodiments, the computer system detects the attention of the other subject through recognizing (e.g., detecting with a certain confidence score and/or threshold detection confidence) the other subject and/or one or more features corresponding to the other subject (e.g., recognizing that the other subject's face and/or eyes are looking towards the computer system). In some embodiments, in response to detecting the attention of the other subject, in accordance with a determination that the attention of the other subject does not satisfy the first set of one or more criteria, the computer system ceases display of the content (e.g., as described herein with respect to FIG. 13F). In some embodiments, in response to detecting the attention of the other subject in accordance with the determination that the attention of the other subject does not satisfy the first set of one or more criteria, the computer system displays the representation of the software object in the second manner. In some embodiments, in response to detecting the attention of the other subject, in accordance with a determination that the attention of the other subject satisfies the first set of one or more criteria, the computer system maintains display of the content (e.g., as described herein with respect to FIG. 13E). In some embodiments, in response to detecting the attention of the other subject in accordance with the determination that the attention of the other subject satisfies the first set of one or more criteria, the computer system maintains display of the representation of the software object in the first manner.
[0441] In some embodiments, while displaying the content, the computer system detects, via the one or more input devices, attention (e.g., directed at the computer system) of another
subject (e.g., 1306 and/or 1304). In some embodiments, detecting the attention of the other subject includes detecting, via the one or more input devices, a level of attention of the other subject, and the attention of the other subject is a level of attention of the other subject that is greater than a threshold value. In some embodiments, detecting the attention of the other subject includes detecting, via the one or more input devices, that the other subject is directing attention towards the computer system (e.g., looking at and/or interacting with the computer system). In some embodiments, the computer system detects the attention of the other subject through recognizing (e.g., detecting with a certain confidence score and/or threshold detection confidence) the other subject and/or one or more features corresponding to the other subject (e.g., recognizing that the other subject's face and/or eyes are looking towards the computer system). In some embodiments, in response to detecting the attention of the other subject (e.g., in accordance with the determination that the attention of the subject satisfies the first set of one or more criteria or in accordance with the determination that the attention of the subject does not satisfy the first set of one or more criteria), the computer system maintains display of the content (e.g., as described herein with respect to FIG. 13E). In some embodiments, in response to detecting the attention of the other subject, the computer system maintains display of the representation of the software object in the first manner.
[0442] In some embodiments, in response to detecting the attention of the subject, in accordance with the determination that the attention of the subject satisfies the first set of one or more criteria and in accordance with a determination that a predefined amount of time has not passed since the attention of the subject last did not satisfy the first set of one or more criteria (e.g., at a first time) , the computer
system maintains display of (1) the content and (2) the representation of the software object in the first manner (e.g., as described herein with respect to FIGS. 13E and 13F). In some embodiments, the first time corresponds to a time at which the computer system detects that the subject is no longer directing attention to the computer system. In some embodiments, in response to detecting the attention of the subject, in accordance with the determination that the attention of the subject satisfies the first set of one or more criteria and in accordance with a determination that the predefined amount of time has passed since the attention of the subject last did not satisfy the first set of one or more criteria (e.g., at a second time after the first time), the computer system ceases display of the content (e.g., as described herein with respect to FIG. 13F). In some embodiments, an amount of time since the attention of the subject last did not satisfy the first set of one or more criteria corresponds to a period of time after the computer system detects that the subject is initially no longer directing attention towards the computer system. In some embodiments, the predefined amount of time is a default amount of time (e.g., 1 second - 20 seconds). In some embodiments, the predefined amount of time depends on the content (e.g., maintaining recipe content for a greater amount of time than a weather forecast and/or maintaining video content for a greater amount of time than textual content). In some embodiments, the predefined amount of time corresponds to a time at which the computer system determines that the subject is no longer going to return to the computer system.
In some embodiments, in response to detecting the attention of the subject, in accordance with the determination that the attention of the subject satisfies the first set of one or more criteria and in accordance with the determination that the predefined amount of time has passed since the attention
of the subject last did not satisfy the first set of one or more criteria, the computer system displays, via the one or more output devices, the representation of the software object in the second manner (e.g., as described herein with respect to FIG. 13F).
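Paragraph [0442] describes a grace period: the content is ceased only once a predefined amount of time (the disclosure suggests on the order of 1-20 seconds) has passed since the subject's attention last failed to satisfy the first set of criteria, i.e., since it was last directed at the computer system. A stateful sketch of that check, with a hypothetical 10-second default:

```python
class ContentGrace:
    """Tracks when the subject's attention was last directed at the
    computer system, and reports that content should cease only after
    a predefined grace period has elapsed (hypothetical default: 10 s)."""

    def __init__(self, grace_period: float = 10.0):
        self.grace_period = grace_period
        self.last_attended = None  # timestamp of last on-system attention

    def update(self, now: float, attention_on_system: bool) -> bool:
        """Return True while the content should remain displayed."""
        if attention_on_system:
            self.last_attended = now
            return True
        if self.last_attended is None:
            self.last_attended = now
        return (now - self.last_attended) < self.grace_period
```

A per-content grace period (e.g., longer for a recipe than for a weather forecast) could be modeled by constructing one instance per content item with a different `grace_period`.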
[0443] In some embodiments, the attention of the subject is a first attention of the subject at a first time. In some embodiments, in response to detecting the first attention of the subject at the first time and in accordance with the determination that the attention of the subject satisfies the first set of one or more criteria, the computer system detects, via the one or more input devices, second attention of the subject at a second time after the first time (e.g., after a predefined amount of time after the first time) (e.g., as described herein with respect to FIG. 13H). In some embodiments, detecting the second attention of the subject at the second time includes detecting, via the one or more input devices, a level of attention of the subject, and the second attention of the subject at the second time is a level of attention of the subject that is less than a threshold value. In some embodiments, detecting the second attention of the subject at the second time includes detecting, via the one or more input devices, that the subject is directing attention away from the computer system and/or at a point of interest within an environment (e.g., looking at and/or interacting with an object external to the computer system). In some embodiments, in response to detecting the second attention of the subject at the second time and in accordance with a determination that the second attention of the subject at the second time satisfies the first set of one or more criteria (e.g., is not directed to the computer system), the computer system ceases display of the representation of the software object (and/or transitions into a lower-power mode) (e.g.,
as described herein with respect to FIG. 13H). In some embodiments, in response to detecting the second attention of the subject at the second time and in accordance with the determination that the second attention of the subject at the second time satisfies the first set of one or more criteria, the computer system disables one or more of the one or more input devices and/or one or more of the one or more display components. In some embodiments, in response to detecting the second attention of the subject at the second time and in accordance with the determination that the second attention of the subject at the second time satisfies the first set of one or more criteria, the computer system maintains at least one of the one or more input devices. In some embodiments, in response to detecting the second attention of the subject at the second time and in accordance with the determination that the second attention of the subject at the second time satisfies the first set of one or more criteria, the computer system transitions to a lower-power mode.
[0444] In some embodiments, the attention of the subject is first attention of the subject at a first time. In some embodiments, after detecting the first attention of the subject at the first time and while no longer displaying the content, the computer system detects, via the one or more input devices, third attention of the subject at a third time after the first time (e.g., as described herein with respect to FIG. 13H) . In some embodiments, detecting the third attention of the subject at the third time includes detecting, via the one or more input devices, a level of attention of the subject, and the third attention of the subject at the third time is a level of attention of the subject that is greater than a threshold value. In some embodiments, detecting the third attention of the subject at the third time includes detecting, via the one or more input devices, that the subject
is directing attention towards the computer system (e.g., looking at and/or interacting with the computer system). In some embodiments, in response to detecting the third attention of the subject at the third time and in accordance with a determination that the third attention of the subject at the third time does not satisfy the first set of one or more criteria (e.g., is directed to the computer system), the computer system displays, via the one or more display components, the content (and/or the representation of the software object in the first manner) (e.g., as described herein with respect to FIG. 13H). In some embodiments, the computer system displays the content in response to the third attention of the subject at the third time to allow the subject to resume interacting with the content and/or prevent loss of the content when the subject is not done with the content.
[0445] In some embodiments, the attention of the subject is first attention of the subject at a first time. In some embodiments, in response to detecting the first attention of the subject at the first time and while no longer displaying the content, the computer system detects, via the one or more input devices, fourth attention of the subject at a fourth time after the first time (e.g., as described herein with respect to FIG. 13F). In some embodiments, detecting the fourth attention of the subject at the fourth time includes detecting, via the one or more input devices, a level of attention of the subject, and the fourth attention of the subject at the fourth time is a level of attention of the subject that is greater than a threshold value. In some embodiments, detecting the fourth attention of the subject at the fourth time includes detecting, via the one or more input devices, that the subject is directing attention towards the computer system (e.g., looking at and/or interacting with the
computer system). In some embodiments, in response to detecting the fourth attention of the subject at the fourth time and in accordance with a determination that the fourth attention of the subject at the fourth time does not satisfy the first set of one or more criteria (e.g., is directed to the computer system), the computer system forgoes display of the content (and/or the representation of the software object in the first manner) (e.g., as described herein with respect to FIG. 13F). In some embodiments, the computer system forgoes display of the content in response to detecting the fourth attention of the subject at the fourth time to allow the subject to start a new session and/or obtain new content different from the content. In some embodiments, after ceasing display of the content, a threshold amount of time passes and, in response, the computer system forgoes display of the content (e.g., even though the subject is directing attention to the computer system).
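The timeout behavior described above can be sketched as follows; the window length and function name are assumed values for illustration, not part of the disclosure.

```python
# Illustrative sketch (not part of the disclosure): after display of the
# content ceases, redisplay is forgone once a threshold amount of time
# has passed, even if attention returns, as in paragraph [0445].

REDISPLAY_WINDOW_SECONDS = 30.0  # assumed threshold


def may_redisplay(ceased_at: float, attention_at: float) -> bool:
    """Allow redisplay only when attention returns within the window
    after the content was dismissed; otherwise a new session starts."""
    return (attention_at - ceased_at) <= REDISPLAY_WINDOW_SECONDS


print(may_redisplay(0.0, 10.0))  # True: within the window
print(may_redisplay(0.0, 45.0))  # False: window expired
```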
[0446] In some embodiments, the content is first content. In some embodiments, after ceasing display of the content, the computer system detects, via the one or more input devices, an input (e.g., a voice input and/or a tap input) corresponding to (e.g., including and/or associated with) a request for second content, wherein the second content is different from the first content (e.g., as described herein with respect to FIG. 13F). In some embodiments, the input corresponding to the request for the second content is a question and/or request that can be answered and/or aided by displaying the second content (e.g., the computer system displays a weather report in response to an input asking about what they should wear and/or the computer system displays a calendar in response to an input asking how busy they are). In some embodiments, the second content is content from an application (e.g., content retrieved from and/or sent by an application), content for a
subject and/or about a subject (e.g., a response to a request and/or a notification), content from another subject (e.g., a text message, calendar invite, and/or call), and/or content about an event (e.g., end of a time and/or time until a meeting). In some embodiments, the second content is text and/or audiovisual content (e.g., photos and/or videos). In some embodiments, in response to detecting the input corresponding to the request for the second content, the computer system displays, via the one or more display components, the second content (e.g., as described herein with respect to FIG. 13F). In some embodiments, in response to detecting the input corresponding to the request for the second content, the computer system displays, via the one or more display components, the representation of the software object in a third manner different from the second manner (e.g., as described herein with respect to FIG. 13F). In some embodiments, the third manner is the first manner. In some embodiments, the third manner is different from the first manner (e.g., different reduced size and/or different in one or more other characteristics). In some embodiments, the third manner is based on the second content (e.g., sized to accommodate the second content and/or with a set of visual characteristics that match the content).
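The mapping from an input to responsive second content can be sketched as follows; the keyword routing and return strings are simplifying assumptions for illustration, not part of the disclosure.

```python
# Illustrative sketch (not part of the disclosure): mapping an input to
# second content that answers or aids the request, as in the examples of
# paragraph [0446] (a question about what to wear yields a weather report).

def content_for_request(request: str) -> str:
    text = request.lower()
    if "wear" in text or "weather" in text:
        return "weather report"   # e.g., "what should I wear?"
    if "busy" in text or "meeting" in text:
        return "calendar"         # e.g., "how busy am I?"
    return "general response"


print(content_for_request("What should I wear today?"))  # weather report
print(content_for_request("How busy am I tomorrow?"))    # calendar
```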
[0447] Note that details of the processes described above with respect to process 1600 (e.g., FIG. 16) are also applicable in an analogous manner to the processes described herein. For example, process 1400 optionally includes one or more of the characteristics of the various processes described herein with reference to process 1600. For example, the representation of the software object of process 1600 can be the representation of the software object of process 1400. For brevity, these details are not repeated herein.
[0448] The description above has been presented with reference to specific examples for the purpose of explanation. Such specific examples can be in the form of the textual description above and/or the accompanying drawings. However, such embodiments should not be interpreted as exhaustive and/or as limiting the disclosure (e.g., limiting it to the explicit manners described herein). Many modifications and variations are possible in view of the above teachings by one of ordinary skill in the art without departing from the scope of the present disclosure.
[0449] In some embodiments, content is automatically generated by one or more computer systems in response to a request to generate the content. The automatically-generated content is optionally generated on-device (e.g., generated at least in part by a computer system at which a request to generate the content is received) and/or generated off-device (e.g., generated at least in part by one or more nearby computers that are available via a local network or one or more computers that are available via the internet). This automatically-generated content optionally includes visual content (e.g., graphics, images, and/or video), audio content, and/or text content.
[0450] In some embodiments, novel automatically-generated content that is generated via one or more artificial intelligence ("AI") processes is referred to as generative content (e.g., generative images, generative graphics, generative video, generative audio, and/or generative text). Generative content is typically generated by an AI process based on a prompt that is provided to the AI process. An AI process typically uses one or more AI models to generate an output based on an input. An AI process optionally includes one or more pre-processing steps to adjust the input before it is used by the AI model to generate an output (e.g.,
adjustment to a user-provided prompt, creation of a system-generated prompt, and/or AI model selection). An AI process optionally includes one or more post-processing steps to adjust the output by the AI model (e.g., passing AI model output to a different AI model, downscaling, upscaling, formatting, cropping, and/or adding or removing metadata) before the output of the AI model is used for other purposes such as being provided to a different software process for further processing or being presented (e.g., visually or audibly) to a user. An AI process that generates generative content is sometimes referred to as a generative AI process.
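The pre-process / model / post-process structure of an AI process described above can be sketched as follows; the three functions are illustrative stand-ins, not a real model API and not part of the disclosure.

```python
# Illustrative sketch (not part of the disclosure) of an AI process with
# pre-processing and post-processing steps around a model, per [0450].

def preprocess(user_prompt: str) -> str:
    # e.g., adjust the user-provided prompt by adding a system-generated
    # instruction before it reaches the model
    return "system: be concise. user: " + user_prompt


def run_model(prompt: str) -> str:
    # stand-in for one or more AI models generating an output
    return "output for [" + prompt + "]"


def postprocess(output: str) -> str:
    # e.g., formatting/trimming before the output is presented to a user
    # or passed to a different software process
    return output.strip()


def ai_process(user_prompt: str) -> str:
    return postprocess(run_model(preprocess(user_prompt)))


print(ai_process("describe the weather"))
```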
[0451] A prompt for generating generative content can include one or more of: one or more words (e.g., a natural language prompt that is written or spoken), one or more images, one or more drawings, and/or one or more videos. AI processes can include machine-learning models including neural networks. Neural networks can include transformer-based deep neural networks such as large language models ("LLMs"). Generative pre-trained transformer models are a type of LLM that can be effective at generating novel generative content based on a prompt. Some AI processes use a prompt that includes text to generate different generative text, generative audio content, and/or generative visual content. Some AI processes use a prompt that includes visual content and/or audio content to generate generative text (e.g., a transcription of audio and/or a description of the visual content). Some multimodal AI processes use a prompt that includes multiple types of content (e.g., text, images, audio, video, and/or other sensor data) to generate generative content. A prompt sometimes also includes values for one or more parameters indicating an importance of various parts of the prompt. Some prompts include a structured set of instructions that can be understood by an AI process and that include phrasing, a specified
style, relevant context (e.g., starting point content and/or one or more examples), and/or a role for the AI process.
[0452] Generative content is generally based on the prompt but is not deterministically selected from pre-generated content and is, instead, generated using the prompt as a starting point. In some embodiments, pre-existing content (e.g., audio, text, and/or visual content) is used as part of the prompt for creating generative content (e.g., the pre-existing content is used as a starting point for creating the generative content). For example, a prompt could request that a block of text be summarized or rewritten in a different tone, and the output would be generative text that is summarized or written in the different tone. Similarly, a prompt could request that visual content be modified to include or exclude content specified by a prompt (e.g., removing an identified feature in the visual content, adding a feature to the visual content that is described in a prompt, changing a visual style of the visual content, and/or creating additional visual elements outside of a spatial or temporal boundary of the visual content that are based on the visual content). In some embodiments, a random or pseudo-random seed is used as part of the prompt for creating generative content (e.g., the random or pseudo-random seed is used as a starting point for creating the generative content). For example, when generating an image from a diffusion model, a random noise pattern is iteratively denoised based on the prompt to generate an image that is based on the prompt. While specific types of AI processes have been described herein, it should be understood that a variety of different AI processes could be used to generate generative content based on a prompt.
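The idea of iteratively refining a randomly seeded starting point toward prompt conditioning can be caricatured numerically as follows; this toy is purely illustrative and is not a diffusion model or part of the disclosure.

```python
# Illustrative caricature (not part of the disclosure) of iterative
# denoising per [0452]: a pseudo-randomly seeded "noise" value is stepped
# toward a target that stands in for the prompt conditioning. A real
# diffusion model operates on images with learned denoisers.

import random


def toy_denoise(seed: int, target: float, steps: int = 50) -> float:
    rng = random.Random(seed)   # the random seed is the starting point
    x = rng.uniform(-1.0, 1.0)  # initial "noise"
    for _ in range(steps):
        x += 0.2 * (target - x)  # each step moves toward the conditioning
    return x


result = toy_denoise(seed=42, target=0.75)
print(abs(result - 0.75) < 0.01)  # True: the iterates converge
```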
[0453] Some embodiments described herein can include use of artificial intelligence and/or machine-learning systems (sometimes referred to herein as the AI/ML systems). The use
can include collecting, processing, labeling, organizing, analyzing, recommending, and/or generating data. Entities that collect, share, and/or otherwise utilize user data should provide transparency and/or obtain user consent when collecting such data. The present disclosure recognizes that the use of the data in the AI/ML systems can benefit users. For example, the data can be used to train models that can be deployed to improve performance, accuracy, and/or functionality of applications and/or services. Accordingly, the use of the data enables the AI/ML systems to adapt and/or optimize operations to provide more personalized, efficient, and/or enhanced user experiences. Such adaptation and/or optimization can include tailoring content, recommendations, and/or interactions to individual users, as well as streamlining processes and/or enabling more intuitive interfaces. Further beneficial uses of the data in the AI/ML systems are also contemplated by the present disclosure.
[0454] The present disclosure contemplates that, in some embodiments, data used by AI/ML systems includes publicly available data. To protect user privacy, data may be anonymized, aggregated, and/or otherwise processed to remove or, to the degree possible, limit any individual identification. As discussed herein, entities that collect, share, and/or otherwise utilize such data should obtain user consent prior to and/or provide transparency when collecting such data. Furthermore, the present disclosure contemplates that the entities responsible for the use of data, including, but not limited to, data used in association with AI/ML systems, should attempt to comply with well-established privacy policies and/or privacy practices.
[0455] For example, such entities may implement and consistently follow policies and practices recognized as meeting or exceeding industry standards and regulatory
requirements for developing and/or training AI/ML systems. In doing so, attempts should be made to ensure all intellectual property rights and privacy considerations are maintained. Training should include practices safeguarding training data, such as personal information, through sufficient protections against misuse or exploitation. Such policies and practices should cover all stages of the AI/ML systems development, training, and use, including data collection, data preparation, model training, model evaluation, model deployment, and ongoing monitoring and maintenance. Transparency and accountability should be maintained throughout. Such policies should be easily accessible by users and should be updated as the collection and/or use of data changes. User data should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection and sharing should occur through transparency with users and/or after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such data and ensuring that others with access to the data adhere to their privacy policies and procedures. Further, such entities should subject themselves to evaluation by third parties to certify, as appropriate for transparency purposes, their adherence to widely accepted privacy policies and practices. In addition, policies and/or practices should be adapted to the particular type of data being collected and/or accessed and tailored to a specific use case and applicable laws and standards, including jurisdiction-specific considerations.
[0456] In some embodiments, AI/ML systems may utilize models that may be trained (e.g., supervised learning or unsupervised learning) using various training data, including data collected using a user device. Such use of user-collected data
may be limited to operations on the user device. For example, the training of the model can be done locally on the user device so no part of the data is sent to another device. In other embodiments, the training of the model can be performed using one or more other devices (e.g., server(s)) in addition to the user device but done in a privacy-preserving manner, e.g., via multi-party computation as may be done cryptographically by secret sharing data or other means so that the user data is not leaked to the other devices.
[0457] In some embodiments, the trained model can be centrally stored on the user device or stored on multiple devices, e.g., as in federated learning. Such decentralized storage can similarly be done in a privacy-preserving manner, e.g., via cryptographic operations where each piece of data is broken into shards such that no device alone (i.e., only collectively with another device(s)) or only the user device can reassemble or use the data. In this manner, a pattern of behavior of the user or the device may not be leaked, while taking advantage of increased computational resources of the other devices to train and execute the ML model. Accordingly, user-collected data can be protected. In some embodiments, data from multiple devices can be combined in a privacy-preserving manner to train an ML model.
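The shard idea described above can be sketched with additive secret sharing, where a value is split so that any single share looks random and only all shares together reconstruct it; this is an illustrative sketch under assumed parameters, not the disclosure's protocol, and real systems use vetted multi-party computation schemes.

```python
# Illustrative sketch (not part of the disclosure) of splitting data into
# shards, per [0457]: additive secret sharing over a fixed modulus.

import random

MOD = 2**32  # arithmetic is done modulo a fixed modulus


def split_into_shards(secret: int, n: int, rng: random.Random) -> list[int]:
    """Split a secret into n shards whose sum (mod MOD) is the secret."""
    shards = [rng.randrange(MOD) for _ in range(n - 1)]
    shards.append((secret - sum(shards)) % MOD)  # last shard completes the sum
    return shards


def combine_shards(shards: list[int]) -> int:
    """Only the combination of all shards recovers the secret."""
    return sum(shards) % MOD


rng = random.Random(7)
shards = split_into_shards(1234, 3, rng)
print(combine_shards(shards))  # 1234: all shards together recover the value
```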
[0458] In some embodiments, the present disclosure contemplates that data used for AI/ML systems may be kept strictly separated from platforms where the AI/ML systems are deployed and/or used to interact with users and/or process data. In such embodiments, data used for offline training of the AI/ML systems may be maintained in secured datastores with restricted access and/or not be retained beyond the duration necessary for training purposes. In some embodiments, the AI/ML systems may utilize a local memory cache to store data temporarily during a user session. The local memory cache may
be used to improve performance of the AI/ML systems. However, to protect user privacy, data stored in the local memory cache may be erased after the user session is completed. Any temporary caches of data used for online learning or inference may be promptly erased after processing. All data collection, transfer, and/or storage should use industry-standard encryption and/or secure communication.
[0459] In some embodiments, as noted above, techniques such as federated learning, differential privacy, secure hardware components, homomorphic encryption, and/or multi-party computation among other techniques may be utilized to further protect personal information data during training and/or use of the AI/ML systems. The AI/ML systems should be monitored for changes in underlying data distribution such as concept drift or data skew that can degrade performance of the AI/ML systems over time.
[0460] In some embodiments, the AI/ML systems are trained using a combination of offline and online training. Offline training can use curated datasets to establish baseline model performance, while online training can allow the AI/ML systems to continually adapt and/or improve. The present disclosure recognizes the importance of maintaining strict data governance practices throughout this process to ensure user privacy is protected.
[0461] In some embodiments, the AI/ML systems may be designed with safeguards to maintain adherence to originally intended purposes, even as the AI/ML systems adapt based on new data. Any significant changes in data collection and/or applications of an AI/ML system use may (and in some cases should) be transparently communicated to affected stakeholders and/or include obtaining user consent with respect to changes in how user data is collected and/or utilized.
[0462] Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively restrict and/or block the use of and/or access to data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to data. For example, in the case of some services, the present technology should be configured to allow users to select to "opt in" or "opt out" of participation in the collection of data during registration for services or anytime thereafter. In another example, the present technology should be configured to allow users to select not to provide certain data for training the AI/ML systems and/or for use as input during the inference stage of such systems. In yet another example, the present technology should be configured to allow users to be able to select to limit the length of time data is maintained or entirely prohibit the use of their data for use by the AI/ML systems. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user can be notified when their data is being input into the AI/ML systems for training or inference purposes, and/or reminded when the AI/ML systems generate outputs or make decisions based on their data.
[0463] The present disclosure recognizes that AI/ML systems should incorporate explicit restrictions and/or oversight to mitigate against risks that may be present even when such systems have been designed, developed, and/or operated according to industry best practices and standards. For example, outputs may be produced that could be considered erroneous, harmful, offensive, and/or biased; such outputs may not necessarily reflect the opinions or positions of the entities developing or deploying these systems. Furthermore, in some cases,
references to third-party products and/or services in the outputs should not be construed as endorsements or affiliations by the entities providing the AI/ML systems. Generated content can be filtered for potentially inappropriate or dangerous material prior to being presented to users, while human oversight and/or ability to override or correct erroneous or undesirable outputs can be maintained as a failsafe.
[0464] The present disclosure further contemplates that users of the AI/ML systems should refrain from using the services in any manner that infringes upon, misappropriates, or violates the rights of any party. Furthermore, the AI/ML systems should not be used for any unlawful or illegal activity, nor to develop any application or use case that would commit or facilitate the commission of a crime, or other tortious, unlawful, or illegal act. The AI/ML systems should not violate, misappropriate, or infringe any copyrights, trademarks, rights of privacy and publicity, trade secrets, patents, or other proprietary or legal rights of any party, and should appropriately attribute content as required. Further, the AI/ML systems should not interfere with any security, digital signing, digital rights management, content protection, verification, or authentication mechanisms. The AI/ML systems should not misrepresent machine-generated outputs as being human-generated.
[0465] Aspects of the technology described above can include the collection and/or use of data from various sources. Such data can be used to improve interactions that a device has with users and/or its environment. In some scenarios, such data can include personal information that can be used to identify a specific person. Such personal information can include demographic data, email addresses, home addresses, work addresses, phone numbers, location and/or location-related data, and/or other identifying information. The use of such personal information can be applied to enhance the user's experience with the device. For example, a user's personal information can be used to improve interactions between the device and the user. Other benefits from the use of personal information data are also possible and contemplated by the present disclosure.
[0466] The use of personal information can require one or more entities managing such data. These entities can be involved in processing, gathering, disclosing, transferring, storing, and/or other acts that support the technologies described herein. The present disclosure contemplates (e.g., does not preclude) that all use of personal information complies with well-established privacy policies and/or privacy practices by all entities involved. Such policies and practices should meet or exceed generally recognized industry standards and comply with all applicable data privacy and security-related governmental requirements. For example, entities should receive informed consent from users to collect and/or use the personal information. Additionally, such collection and/or use should only be for legitimate and reasonable uses. Further, personal information of a user should not be disclosed, shared, sold, and/or provided for uses other than legitimate and/or reasonable uses.
[0467] In various scenarios, personal information may not be available, such as when a user declines to share such information. For example, the user can withhold consent for collection and/or use of such data, such as by "opting out" of sharing such data and/or by not explicitly "opting in" during a registration process. The user can also employ the use of any of various components, including hardware and/or software components, to prevent the collection and/or use of such data. While the use of personal information can benefit a user by
improving the operation of the device, the present disclosure contemplates that embodiments of the present technology can be performed and/or used without access to such data. For example, operations of the device can use non-personal information and/or otherwise proceed without personal information. In some embodiments, the device can make inferences based on non-personal information and/or a minimal amount of personal information.
Claims
1. A method, comprising: at a computer system that is in communication with one or more input devices, a display component, and a movement component: while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
2. The method of claim 1, wherein performing the movement includes causing the display component to be obscured from a first viewpoint of a user.
3. The method of any one of claims 1-2, wherein the movement, performed in conjunction with displaying the representation of the second software object, is a preconfigured movement pattern.
4. The method of any one of claims 1-3, wherein the movement performed in conjunction with displaying the representation of the second software object is a 360 degree rotation.
5. The method of any one of claims 1-3, wherein the movement, performed in conjunction with displaying the representation of the second software object, includes moving
from a first position to a second position and then returning to the first position, wherein in the first position the display component is not obscured from a second viewpoint of the user, and wherein in the second position the display component is obscured from the second viewpoint of the user.
6. The method of any one of claims 1-5, wherein the indication that the representation of the second software object is to be displayed includes an indication of a task to be performed.
7. The method of any one of claims 1-6, further comprising: before performing the movement in conjunction with displaying the representation of the second software object, displaying, via the display component, a notification requesting permission to share information with the second software object.
8. The method of any one of claims 1-7, further comprising: before performing the movement in conjunction with displaying the representation of the second software object, displaying, via the display component, a notification requesting permission to display the representation of the second software object.
9. The method of any one of claims 1-8, wherein the indication that the representation of the second software object is to be displayed is received in accordance with a determination that the first software object cannot perform a requested task.
10. The method of any one of claims 1-9, wherein: the representation of the first software object includes a first set of visual characteristics; and
the representation of the second software object includes a second set of visual characteristics different from the first set of visual characteristics.
11. The method of any one of claims 1-10, wherein the computer system is in communication with one or more audio output components, the method further comprising: while displaying the representation of the first software object, outputting, via the one or more audio output components, a first audio output corresponding to the representation of the first software object; and while displaying the representation of the second software object, outputting, via the one or more audio output components, a second audio output corresponding to the representation of the second software object, wherein the first audio output is different from the second audio output.
12. The method of any one of claims 1-11, wherein: the first software object corresponds to a first application; and the second software object corresponds to a second application different from the first application.
13. The method of any one of claims 1-12, further comprising: in response to receiving the indication that the representation of the second software object is to be displayed, ceasing displaying, via the display component, the representation of the first software object.
14. The method of any one of claims 1-13, wherein performing the movement in conjunction with displaying the representation of the second software object includes initiating displaying of the representation of the second software object while performing the movement.
15. The method of any one of claims 1-13, wherein displaying the representation of the second software object in conjunction with performing the movement is initiated after the movement has started.
16. The method of any one of claims 1-15, wherein the computer system is in communication with a second set of one or more audio output components, the method further comprising : after performing the movement, outputting, via the second set of one or more audio output components, a second audio output corresponding to the representation of the second software object.
17. The method of any one of claims 1-16, wherein the representation of the first software object is displayed in conjunction with performing the movement.
18. The method of any one of claims 1-17, wherein the representation of the first software object and the representation of the second software object are not concurrently displayed.
19. The method of any one of claims 1-18, further comprising: before performing the movement, ceasing displaying, via the display component, the representation of the first software object.
20. The method of any one of claims 1-19, further comprising: while displaying, via the display component, the representation of the second software object, receiving an indication that a representation of a third software object is
to be displayed, wherein the third software object is different from the second software object; and in response to receiving the indication that the representation of the third software object is to be displayed, performing, via the movement component, the movement in conjunction with displaying, via the display component, the representation of the third software object.
21. The method of any one of claims 1-19, wherein the movement performed in conjunction with displaying the representation of the second software object is a first movement, the method further comprising: while displaying the representation of the second software object, receiving an indication that a representation of a fourth software object is to be displayed, wherein the second software object is different from the fourth software object; and in response to receiving the indication that the representation of the fourth software object is to be displayed, performing, via the movement component, a second movement different from the first movement in conjunction with displaying, via the display component, the representation of the fourth software object.
22. The method of any one of claims 1-21, wherein the first software object corresponds to a system process and the second software object corresponds to a first application process, wherein displaying, via the display component, the representation of the second software object includes switching the display of the representation of the first software object to the representation of the second software object, wherein performing, via the movement component, the movement in conjunction with displaying, via the display component, the representation of the second software object
includes moving in a first movement pattern, the method further comprising: while displaying, via the display component, the representation of the second software object, receiving an indication that a representation of a fifth software object is to be displayed, wherein: the fifth software object is different from the first software object and the second software object; and the fifth software object corresponds to a second application process; and in response to receiving the indication that the representation of the fifth software object is to be displayed, performing, via the movement component, a second movement pattern different from the first movement pattern in conjunction with displaying, via the display component, the representation of the fifth software object, and wherein displaying the representation of the fifth software object includes switching the display of the representation of the second software object to the representation of the fifth software object.
23. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices, a display component, and a movement component, the one or more programs including instructions for performing the method of any one of claims 1-22.
24. A computer system that is configured to communicate with one or more input devices, a display component, and a movement component, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more
programs including instructions for performing the method of any one of claims 1-22.
25. A computer system that is configured to communicate with one or more input devices, a display component, and a movement component, the computer system comprising: means for performing the method of any one of claims 1-22.
26. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices, a display component, and a movement component, the one or more programs including instructions for performing the method of any one of claims 1-22.
27. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices, a display component, and a movement component, the one or more programs including instructions for: while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
28. A computer system configured to communicate with one or more input devices, a display component, and a movement component, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
29. A computer system configured to communicate with one or more input devices, a display component, and a movement component, comprising: means for, while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and means for, in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
30. A computer program product, comprising one or more programs configured to be executed by one or more processors
of a computer system that is in communication with one or more input devices, a display component, and a movement component, the one or more programs including instructions for: while displaying, via the display component, a representation of a first software object, receiving an indication that a representation of a second software object is to be displayed, wherein the second software object is different from the first software object; and in response to receiving the indication that the representation of the second software object is to be displayed, performing, via the movement component, a movement in conjunction with displaying, via the display component, the representation of the second software object.
31. A method, comprising: at a computer system that is in communication with one or more input devices and a display component: detecting, via the one or more input devices, a request; in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and while displaying, via the display component, the representation of the first software object that corresponds to the respective application: in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the
second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application, forgoing displaying, via the display component, the identifier corresponding to the first software object.
32. The method of claim 31, wherein displaying the representation of the first software object that corresponds to the respective application includes displaying a visual change over time corresponding to the representation of the first software object.
33. The method of any one of claims 31-32, wherein the computer system is in communication with a first audio output component, the method further comprising: while displaying the representation of the first software object that corresponds to the respective application, outputting, via the first audio output component, an audio output corresponding to the representation of the first software object.
34. The method of claim 33, wherein: displaying the representation of the first software object that corresponds to the respective application includes performing a movement corresponding to the representation of the first software object; and the audio output corresponding to the representation of the first software object is synchronized with the movement corresponding to the representation of the first software object.
35. The method of any one of claims 31-34, wherein the computer system is in communication with one or more output devices, the method further comprising:
while displaying the representation of the first software object, receiving a first input, via the one or more input devices, corresponding to an interaction with the first software object; and in response to receiving the first input corresponding to an interaction with the first software object: in accordance with a determination that a third set of one or more criteria is satisfied, wherein the third set of one or more criteria includes a criterion that is satisfied when the first input corresponds to the first application, outputting, via the one or more output devices, a first response; and in accordance with a determination that a fourth set of one or more criteria is satisfied, wherein the fourth set of one or more criteria includes a criterion that is satisfied when the first input corresponds to the second application, outputting, via the one or more output devices, a second response.
36. The method of any one of claims 31-35, wherein the computer system is in communication with a display component, the method further comprising: before detecting the request, displaying, via the display component, a representation of a second software object different from the representation of the first software object; and in response to detecting the request, transitioning the representation of the second software object to the representation of the first software object.
37. The method of claim 36, wherein the computer system is in communication with a movement component, and wherein transitioning the representation of the second software object
to the representation of the first software object includes performing, via the movement component, a movement.
38. The method of any one of claims 36-37, wherein transitioning the representation of the second software object to the representation of the first software object includes: ceasing displaying, via the display component, the representation of the second software object; and displaying, via the display component, the representation of the first software object.
39. The method of any one of claims 31-38, further comprising: in response to receiving the request, concurrently displaying, via the display component, the representation of the first software object and a representation of a third software object different from the representation of the first software object.
40. The method of any one of claims 31-39, wherein the first set of one or more criteria includes a criterion that is satisfied when a second input is detected.
41. The method of claim 40, wherein the second input is an interaction with the first software object.
42. The method of any one of claims 40-41, wherein the second input is directed to the first software object.
43. The method of any one of claims 40-42, wherein the one or more input devices includes a microphone and wherein the second input includes verbal input.
44. The method of any one of claims 31-43, further comprising: in accordance with a determination that a fifth set of one or more criteria is satisfied, ceasing displaying the identifier corresponding to the representation of the first software object.
45. The method of any one of claims 31-44, further comprising: before detecting the request, displaying, via the display component, a representation of a fourth software object, wherein the representation of the fourth software object includes a first set of one or more visual characteristics, wherein the representation of the first software object includes a second set of one or more visual characteristics different from the first set of one or more visual characteristics, and wherein the first software object is different from the fourth software object.
46. The method of any one of claims 31-45, further comprising: before detecting the request, displaying, via the display component, a representation of a fifth software object, wherein the fifth software object includes a first set of one or more audio characteristics, wherein the first software object includes a second set of one or more audio characteristics different from the first set of one or more audio characteristics, and wherein the first software object is different from the fifth software object.
47. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component, the one or
more programs including instructions for performing the method of any one of claims 31-46.
48. A computer system that is configured to communicate with one or more input devices and a display component, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 31-46.
49. A computer system that is configured to communicate with one or more input devices and a display component, the computer system comprising: means for performing the method of any one of claims 31-46.
50. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component, the one or more programs including instructions for performing the method of any one of claims 31-46.
51. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component, the one or more programs including instructions for: detecting, via the one or more input devices, a request; in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and
while displaying, via the display component, the representation of the first software object that corresponds to the respective application: in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application, forgoing displaying, via the display component, the identifier corresponding to the first software object.
52. A computer system configured to communicate with one or more input devices and a display component, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: detecting, via the one or more input devices, a request; in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and while displaying, via the display component, the representation of the first software object that corresponds to the respective application: in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one
or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application, forgoing displaying, via the display component, the identifier corresponding to the first software object.
53. A computer system configured to communicate with one or more input devices and a display component, comprising: means for detecting, via the one or more input devices, a request; means for, in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and while displaying, via the display component, the representation of the first software object that corresponds to the respective application: means for, in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and means for, in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application,
forgoing displaying, via the display component, the identifier corresponding to the first software object.
54. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component, the one or more programs including instructions for: detecting, via the one or more input devices, a request; in response to receiving the request, displaying, via the display component, a representation of a first software object that corresponds to a respective application; and while displaying, via the display component, the representation of the first software object that corresponds to the respective application: in accordance with a determination that a first set of one or more criteria is satisfied, the first set of one or more criteria including a criterion that is satisfied when the first software object corresponds to a first application, displaying, via the display component, an identifier corresponding to the first software object; and in accordance with a determination that a second set of one or more criteria is satisfied, wherein the second set of one or more criteria includes a criterion that is satisfied when the first software object corresponds to a second application different from the first application, forgoing displaying, via the display component, the identifier corresponding to the first software object.
55. A method, comprising: at a computer system that is in communication with one or more input devices and a display component: while displaying, via the display component, a representation of a first software object that corresponds to
a first application, detecting input representing a request; and in response to receiving the input: in accordance with a determination that the request corresponds to a second application different from the first application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: continuing displaying, via the display component, the representation of the first software object; and forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
56. The method of claim 55, wherein the first software object is a system software object.
57. The method of any one of claims 55-56, further comprising: in conjunction with displaying the representation of the second software object, causing an operation to be performed with one or more files corresponding to the request.
58. The method of claim 57, wherein causing the operation to be performed with the one or more files corresponding to the request is performed in response to receiving the input.
59. The method of claim 57, wherein causing the operation to be performed with the one or more files corresponding to the request is performed after receiving the input.
60. The method of any one of claims 55-59, further comprising: in response to receiving the input and in accordance with a determination that the request cannot be performed by the first software object, outputting, via the one or more output devices, a prompt for a second input.
61. The method of claim 60, wherein the prompt for the second input is a request for permission to launch the first application.

62. The method of claim 60, wherein the prompt for the second input is a request for permission to share data with the first application.
63. The method of any one of claims 55-62, further comprising: before displaying the representation of the second software object that corresponds to the second application, obtaining a response corresponding to a task specified in the request.
64. The method of claim 63, wherein obtaining the response corresponding to the task specified in the request includes detecting, via the one or more input devices, a first request to share information with the second software object.
65. The method of claim 63, wherein the obtained response is a request to display the representation of the second software object that corresponds to the second application.
66. The method of any one of claims 55-65, further comprising:
in response to receiving the input and in accordance with a determination that the first software object cannot perform the request: ceasing displaying, via the display component, the representation of the first software object; and displaying, via the display component, the representation of the second software object.
67. The method of any one of claims 55-66, wherein the representation of the first software object includes a first set of one or more visual characteristics and wherein the representation of the second software object includes a second set of one or more visual characteristics different from the first set of one or more visual characteristics.
68. The method of any one of claims 55-67, wherein the representation of the first software object includes a first set of one or more audio characteristics and wherein the representation of the second software object includes a second set of one or more audio characteristics different from the first set of one or more audio characteristics.
69. The method of any one of claims 55-68, wherein the first software object corresponds to a first application, and wherein the second software object corresponds to a second application different from the first application.
70. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component, the one or more programs including instructions for performing the method of any one of claims 55-69.
71. A computer system that is configured to communicate with one or more input devices and a display component, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 55-69.
72. A computer system that is configured to communicate with one or more input devices and a display component, the computer system comprising: means for performing the method of any one of claims 55-69.
73. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component, the one or more programs including instructions for performing the method of any one of claims 55-69.
74. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component, the one or more programs including instructions for: while displaying, via the display component, a representation of a first software object that corresponds to a first application, detecting input representing a request; and in response to receiving the input: in accordance with a determination that the request corresponds to a second application different from the first
application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: continuing displaying, via the display component, the representation of the first software object; and forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
75. A computer system configured to communicate with one or more input devices and a display component, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying, via the display component, a representation of a first software object that corresponds to a first application, detecting input representing a request; and in response to receiving the input: in accordance with a determination that the request corresponds to a second application different from the first application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: continuing displaying, via the display component, the representation of the first software object; and
forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
76. A computer system configured to communicate with one or more input devices and a display component, comprising: means for, while displaying, via the display component, a representation of a first software object that corresponds to a first application, detecting input representing a request; and in response to receiving the input: means for, in accordance with a determination that the request corresponds to a second application different from the first application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: means for continuing displaying, via the display component, the representation of the first software object; and means for forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
77. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and a display component, the one or more programs including instructions for: while displaying, via the display component, a representation of a first software object that corresponds to a first application, detecting input representing a request; and
in response to receiving the input: in accordance with a determination that the request corresponds to a second application different from the first application, displaying, via the display component, a representation of a second software object that corresponds to the second application; and in accordance with a determination that the request corresponds to the first application: continuing displaying, via the display component, the representation of the first software object; and forgoing displaying, via the display component, the representation of the second software object that corresponds to the second application.
78. A method, comprising: at a computer system that is in communication with one or more display components and one or more input devices: while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input: in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
79. The method of claim 78, wherein the first mode is a first power mode, wherein the second mode is a second power mode, and wherein the first power mode consumes more power than the second power mode.
80. The method of any one of claims 78-79, further comprising: while the computer system is in the first mode, displaying, via the one or more display components, a first user interface; and while the computer system is in the second mode, displaying, via the one or more display components, a second user interface different from the first user interface.
81. The method of claim 80, wherein: the second user interface includes current temporal information; and the first user interface does not include current temporal information.
82. The method of any one of claims 78-81, wherein the input is a first input, the method further comprising: while the computer system is in the second mode: detecting, via the one or more input devices, a second input different from the first input; and in response to detecting the second input: in accordance with a determination that the second input is a first type of input, responding to the second input; and in accordance with a determination that the second input is a second type of input different from the first type of input, forgoing response to the second input; and while the computer system is in the first mode:
detecting, via the one or more input devices, a third input different from the first input; and in response to detecting the third input: in accordance with a determination that the third input is the first type of input, responding to the third input; and in accordance with a determination that the third input is the second type of input, responding to the third input.
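The mode-dependent gating in claim 82 can be expressed as a small predicate. This is a hedged sketch under assumed names (`responds_to`, the mode and input-type strings are not claim language): in the second mode only the first type of input receives a response, while in the first mode both types do.

```python
def responds_to(mode: str, input_type: str) -> bool:
    """Return whether the system responds to an input of the given type
    while operating in the given mode (claim 82 sketch)."""
    if mode == "second":
        # second mode: forgo response to the second type of input
        return input_type == "first_type"
    # first mode: respond to either type of input
    return True

assert responds_to("second", "first_type") is True
assert responds_to("second", "second_type") is False
assert responds_to("first", "second_type") is True
```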
83. The method of any one of claims 78-82, further comprising: while the computer system is in the first mode, moving a portion of the representation of the software object; and while the computer system is in the second mode, forgoing movement of the portion of the representation of the software object.
84. The method of any one of claims 78-83, wherein the representation of the software object is a first representation of the software object, the method further comprising: in response to detecting the input and in accordance with the determination that the input starts at the location corresponding to the representation of the software object and proceeds in the first direction, displaying, via the one or more display components, a second representation of the software object different from the first representation of the software object.
85. The method of any one of claims 78-84, wherein the computer system is in communication with one or more output devices, wherein changing the computer system to be operated in the second mode includes outputting, via the one or more
output devices, an auditory output, a haptic output, or a combination thereof.
86. The method of any one of claims 78-85, wherein the input is a first input, the method further comprising: while the computer system is in the second mode, detecting, via the one or more input devices, a third input different from the first input; and in response to detecting the third input and in accordance with a determination that the third input satisfies a set of one or more criteria, changing the computer system to be operated in the first mode.
87. The method of any one of claims 78-86, further comprising: while the computer system is in the first mode, detecting that an event has occurred without detecting, via the one or more input devices, an additional input; and in response to detecting the event and in accordance with a determination that a set of one or more criteria is satisfied, changing the computer system to be operated in the second mode.
88. The method of any one of claims 78-87, wherein the representation of the software object is a first representation of the software object, and wherein changing the computer system to be operated in the second mode includes: ceasing display of the first representation of the software object; and displaying, via the one or more display components, a second representation of the software object different from the first representation of the software object.
89. The method of any one of claims 78-88, wherein the representation of the software object is a first representation of the software object, the method further comprising: in response to detecting the input and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a third direction different from the first direction: maintaining the computer system in the first mode; ceasing display of the first representation of the software object; and displaying, via the one or more display components, a third representation of the software object different from the first representation of the software object.
90. The method of any one of claims 78-89, wherein the location is a first location, wherein the computer system is in communication with one or more output devices, the method further comprising: in conjunction with detecting the input, displaying, via the one or more display components, a first user interface element and a second user interface element different from the first user interface element; and after detecting the input: in accordance with a determination that the input started at the first location corresponding to the representation of the software object and was released at the first user interface element, outputting, via the one or more output devices, an output corresponding to the first user interface element; and in accordance with a determination that the input started at the first location corresponding to the representation of the software object and was released at the
second user interface element, outputting, via the one or more output devices, an output corresponding to the second user interface element.
91. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices, the one or more programs including instructions for performing the method of any one of claims 78-90.
92. A computer system that is configured to communicate with one or more display components and one or more input devices, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 78-90.
93. A computer system that is configured to communicate with one or more display components and one or more input devices, the computer system comprising: means for performing the method of any one of claims 78-90.
94. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices, the one or more programs including instructions for performing the method of any one of claims 78-90.
95. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices, the one or more programs including instructions for: while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input: in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
96. A computer system configured to communicate with one or more display components and one or more input devices, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input:
in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
97. A computer system configured to communicate with one or more display components and one or more input devices, the computer system comprising: means for, while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input: means for, in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and means for, in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
98. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more
display components and one or more input devices, the one or more programs including instructions for: while displaying, via the one or more display components, a representation of a software object and while the computer system is in a first mode, detecting, via the one or more input devices, an input; and in response to detecting the input: in accordance with a determination that the input starts at a location corresponding to the representation of the software object and proceeds in a first direction, changing the computer system to be operated in a second mode different from the first mode; and in accordance with a determination that the input starts at the location corresponding to the representation of the software object and proceeds in a second direction different from the first direction, maintaining the computer system in the first mode.
99. A method, comprising: at a computer system that is in communication with one or more display components and one or more input devices: while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a second user interface different from the first user interface; and
in accordance with a determination that the input is a second type of input different from the first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
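The input-type dispatch recited in claims 99-100 can be sketched as a small lookup: a first type of input leads to the second user interface, a second type to the third, and any other type maintains the first. The dictionary-based form and all names below are assumptions for illustration only.

```python
# Hypothetical mapping from input type to resulting user interface
# (claims 99-100 sketch).
UI_FOR_INPUT_TYPE = {
    "first_type": "second_ui",
    "second_type": "third_ui",
}

def next_ui(current_ui: str, input_type: str) -> str:
    """Return the user interface displayed after an input directed at the
    representation of the system software object; an unrecognized type
    maintains the current interface (claim 100)."""
    return UI_FOR_INPUT_TYPE.get(input_type, current_ui)

assert next_ui("first_ui", "first_type") == "second_ui"
assert next_ui("first_ui", "second_type") == "third_ui"
assert next_ui("first_ui", "third_type") == "first_ui"
```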
100. The method of claim 99, further comprising: in response to detecting the input directed at the representation of the system software object and in accordance with a determination that the input is a third type of input different from the first type of input and the second type of input, maintaining display of the first user interface.
101. The method of any one of claims 99-100, wherein the input directed at the representation of the system software object is a first input, the method further comprising: while displaying the first user interface, detecting, via the one or more input devices, a second input; and in response to detecting the second input and in accordance with a determination that the second input is a fourth type of input, displaying, via the one or more display components, a fourth user interface including a list of applications, wherein the fourth user interface is different from the first user interface, the second user interface, and the third user interface.
102. The method of claim 101, wherein the representation of the system software object is a first representation of the system software object, and wherein the fourth user interface includes a second representation of the system software object.
103. The method of any one of claims 101-102, further comprising: while displaying the fourth user interface including the list of applications, detecting, via the one or more input devices, an input corresponding to a respective application in the list of applications; and in conjunction with detecting the input corresponding to the respective application in the list of applications: in accordance with a determination that the respective application is a first application, displaying, via the one or more display components, a user interface of the first application without displaying the representation of the system software object; and in accordance with a determination that the respective application is a second application different from the first application, displaying, via the one or more display components, a user interface of the second application without displaying the representation of the system software object.
104. The method of any one of claims 99-103, wherein: the determination that the input is the first type of input includes a determination that the input proceeds in a first direction; and the determination that the input is the second type of input includes a determination that the input proceeds in a second direction different from the first direction.
105. The method of any one of claims 99-104, wherein the representation of the system software object is a first representation of the system software object, wherein the second user interface includes a second representation of the system software object different from the first representation of the system software object.
106. The method of claim 105, wherein: the first representation of the system software object is a first size; and the second representation of the system software object is a second size different from the first size.
107. The method of any one of claims 105-106, wherein: the first representation of the system software object has a first expression; and the second representation of the system software object has a second expression different from the first expression.
108. The method of any one of claims 105-107, wherein: the first representation of the system software object is at a first location in a displayed area of the one or more display components; and the second representation of the system software object is at a second location, different from the first location, in the displayed area of the one or more display components.
109. The method of any one of claims 99-108, wherein the first user interface includes a set of one or more representations of interactions.
110. The method of any one of claims 99-109, wherein the input directed at the representation of the system software object is a first input, the method further comprising: while displaying the second user interface, detecting, via the one or more input devices, a third input different from the first input; and in response to detecting the third input and in accordance with a determination that the third input satisfies a first set of one or more criteria:
ceasing display of the second user interface; and displaying, via the one or more display components, the first user interface.
111. The method of any one of claims 99-110, further comprising: while displaying the third user interface, detecting, via the one or more input devices, a fourth input different from the first input; and in response to detecting the fourth input and in accordance with a determination that the fourth input satisfies a second set of one or more criteria: ceasing display of the third user interface; and displaying, via the one or more display components, the first user interface.
112. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices, the one or more programs including instructions for performing the method of any one of claims 99-111.
113. A computer system that is configured to communicate with one or more display components and one or more input devices, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 99-111.
114. A computer system that is configured to communicate with one or more display components and one or more input devices, the computer system comprising: means for performing the method of any one of claims 99-111.
115. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices, the one or more programs including instructions for performing the method of any one of claims 99-111.
116. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices, the one or more programs including instructions for: while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a second user interface different from the first user interface; and in accordance with a determination that the input is a second type of input different from the first type of input:
ceasing display of the representation of the system software object; and displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
117. A computer system configured to communicate with one or more display components and one or more input devices, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a second user interface different from the first user interface; and in accordance with a determination that the input is a second type of input different from the first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
118. A computer system configured to communicate with one or more display components and one or more input devices, the computer system comprising: means for, while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: means for ceasing display of the representation of the system software object; and means for displaying, via the one or more display components, a second user interface different from the first user interface; and in accordance with a determination that the input is a second type of input different from the first type of input: means for ceasing display of the representation of the system software object; and means for displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
119. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display components and one or more input devices, the one or more programs including instructions for: while displaying, via the one or more display components, a first user interface, detecting, via the one or more input devices, an input directed at a representation of a system software object; and
in response to detecting the input directed at the representation of the system software object: in accordance with a determination that the input is a first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a second user interface different from the first user interface; and in accordance with a determination that the input is a second type of input different from the first type of input: ceasing display of the representation of the system software object; and displaying, via the one or more display components, a third user interface different from the first user interface and the second user interface.
120. A method, comprising: at a computer system that is in communication with one or more input devices and one or more display components: detecting, via the one or more input devices, a state of an environment; and in response to detecting the state of the environment: in accordance with a determination that the state of the environment satisfies a set of one or more criteria, displaying, via the one or more display components, a representation of a software object; and in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, forgoing display of the representation of the software object.
121. The method of claim 120, further comprising:
in response to detecting the state of the environment and in accordance with the determination that the state of the environment satisfies the set of one or more criteria, enabling at least one of the one or more input devices.
122. The method of any one of claims 120-121, wherein the computer system is in communication with one or more movement components, the method further comprising: in response to detecting the state of the environment and in accordance with the determination that the state of the environment satisfies the set of one or more criteria, enabling the one or more movement components.
123. The method of any one of claims 120-122, wherein the one or more input devices includes one or more cameras, and wherein detecting the state of the environment includes detecting, via the one or more cameras, the state of the environment.
124. The method of any one of claims 120-123, wherein the one or more input devices includes one or more light sensors, and wherein detecting the state of the environment includes detecting, via the one or more light sensors, the state of the environment.
125. The method of any one of claims 120-124, wherein detecting the state of the environment includes detecting a level of light within the environment.
126. The method of any one of claims 120-125, wherein the set of one or more criteria includes a criterion that is satisfied when a change in a level of light within the environment exceeds a threshold.
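The light-level criterion of claims 125-126 can be sketched as a threshold test on the change in detected light level. The threshold value and all names below are assumptions chosen for illustration; the claims do not specify them.

```python
# Arbitrary illustrative threshold (units unspecified in the claims).
LIGHT_CHANGE_THRESHOLD = 50.0

def should_display(previous_level: float, current_level: float) -> bool:
    """Claim 126 sketch: the criterion is satisfied, and the representation
    of the software object is displayed, when the change in light level
    exceeds the threshold."""
    return abs(current_level - previous_level) > LIGHT_CHANGE_THRESHOLD

assert should_display(100.0, 200.0) is True   # large change: display
assert should_display(100.0, 110.0) is False  # small change: forgo display
```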
127. The method of any one of claims 120-126, wherein the set of one or more criteria includes a criterion that is satisfied when one or more subjects are detected in the environment.
128. The method of claim 127, wherein the computer system is in communication with one or more movement components, the method further comprising: in conjunction with detecting the state of the environment, moving, via one or more movement components, a portion of the computer system.
129. The method of any one of claims 127-128, wherein the computer system is in communication with one or more output devices, the method further comprising: in response to detecting the state of the environment and in accordance with the determination that the state of the environment satisfies the set of one or more criteria, outputting, via the one or more output devices, a greeting.
130. The method of any one of claims 127-129, wherein the one or more input devices include one or more cameras, one or more microphones, or any combination thereof, and wherein detecting the state of the environment includes detecting, via the one or more cameras, the one or more microphones, or any combination thereof, whether the one or more subjects are in the environment.
131. The method of any one of claims 120-130, wherein the state of the environment is a first state of the environment, the method further comprising: while displaying the representation of the software object, detecting, via the one or more input devices, a second state of the environment; and
in response to detecting the second state of the environment and in accordance with a determination that the second state of the environment does not satisfy the set of one or more criteria, ceasing display of the representation of the software object.
132. The method of any one of claims 120-131, further comprising: while displaying the representation of the software object, detecting, via the one or more input devices, an input; and in response to detecting the input, performing, via the software object, one or more operations based on the input.
133. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for performing the method of any one of claims 120-132.
134. A computer system that is configured to communicate with one or more input devices and one or more display components, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 120-132.
135. A computer system that is configured to communicate with one or more input devices and one or more display components, the computer system comprising:
means for performing the method of any one of claims 120-132.
136. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for performing the method of any one of claims 120-132.
137. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for: detecting, via the one or more input devices, a state of an environment; and in response to detecting the state of the environment: in accordance with a determination that the state of the environment satisfies a set of one or more criteria, displaying, via the one or more display components, a representation of a software object; and in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, forgoing display of the representation of the software object.
138. A computer system configured to communicate with one or more input devices and one or more display components, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
detecting, via the one or more input devices, a state of an environment; and in response to detecting the state of the environment: in accordance with a determination that the state of the environment satisfies a set of one or more criteria, displaying, via the one or more display components, a representation of a software object; and in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, forgoing display of the representation of the software object.
139. A computer system configured to communicate with one or more input devices and one or more display components, the computer system comprising: means for detecting, via the one or more input devices, a state of an environment; and in response to detecting the state of the environment: means for, in accordance with a determination that the state of the environment satisfies a set of one or more criteria, displaying, via the one or more display components, a representation of a software object; and means for, in accordance with a determination that the state of the environment does not satisfy the set of one or more criteria, forgoing display of the representation of the software object.
140. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for:
339
4926-8595-8766
Attorney Docket No . 032501 ( P68100W01 ) detecting, via the one or more input devices , a state of an environment ; and in response to detecting the state of the environment : in accordance with a determination that the state of the environment satis fies a set of one or more criteria, displaying, via the one or more display components , a representation of a software obj ect ; and in accordance with a determination that the state of the environment does not satis fy the set of one or more criteria, forgoing display of the representation of the software obj ect .
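The conditional-display behavior recited in claims 137-140 can be sketched in a few lines. This is an illustrative sketch only, not the claimed implementation; the `EnvironmentState` fields and the example criterion are assumptions, since the claims leave the contents of the detected environment state and the set of criteria open-ended.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EnvironmentState:
    # Hypothetical fields: the claims do not specify what the
    # detected state of the environment contains.
    occupant_count: int
    ambient_light: float  # normalized 0.0-1.0

def should_display_representation(
    state: EnvironmentState,
    criteria: Callable[[EnvironmentState], bool],
) -> bool:
    """Display the representation of the software object when the
    detected state satisfies the set of one or more criteria;
    otherwise forgo display."""
    return criteria(state)

# Example criterion (an assumption): someone is present and the room is lit.
example_criteria = lambda s: s.occupant_count >= 1 and s.ambient_light > 0.2
```

Under these assumptions, `should_display_representation(EnvironmentState(1, 0.8), example_criteria)` would indicate that the representation should be displayed, while an empty or dark room would cause display to be forgone.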
141. A method, comprising: at a computer system that is in communication with one or more input devices and one or more display components: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
142. The method of claim 141, further comprising: before displaying the representation of the software object, detecting, via the one or more input devices, an input corresponding to a request to wake; and in response to detecting the input corresponding to the request to wake, transitioning from the inactive mode to the active mode.
143. The method of claim 142, wherein the input corresponding to the request to wake is an input from a subject within the environment.
144. The method of claim 141, further comprising: before displaying the representation of the software object, detecting, via the one or more input devices, a state of the environment; and in response to detecting the state of the environment, transitioning from the inactive mode to the active mode.
145. The method of any one of claims 141-144, wherein the first point of the multiple points of interest is a person within the environment.
146. The method of any one of claims 141-145, wherein the second point of the multiple points of interest is an inanimate object in the environment.
147. The method of any one of claims 141-146, wherein the set of one or more criteria includes a criterion that is satisfied when a respective point of the multiple points of interest is within a threshold distance from the computer system.
148. The method of claim 147, wherein the threshold distance is a first threshold distance, and wherein the set of one or more criteria includes a criterion that is satisfied when the respective point of the multiple points of interest is within a second threshold distance from one or more other points of the multiple points of interest.
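The distance-based criteria of claims 147-148 can be illustrated with a short sketch. The 2D coordinates and Euclidean distance below are assumptions for illustration; the claims do not specify how positions or distances are measured.

```python
import math

def satisfies_distance_criteria(point, system_position, other_points,
                                first_threshold, second_threshold):
    """Sketch of claims 147-148: a respective point of interest
    satisfies the criteria when it is within a first threshold
    distance from the computer system (claim 147) and within a
    second threshold distance from one or more other points of
    interest (claim 148)."""
    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    near_system = distance(point, system_position) <= first_threshold
    near_another_point = any(
        distance(point, other) <= second_threshold for other in other_points
    )
    return near_system and near_another_point
```

For example, a point one unit from the system and half a unit from another point of interest would satisfy both criteria with thresholds of 2.0 and 1.0.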
149. The method of any one of claims 141-148, wherein the computer system is a communal device.
150. The method of any one of claims 141-149, further comprising: detecting, via the one or more input devices, speech corresponding to the multiple points of interest.
151. The method of any one of claims 141-150, wherein the one or more input devices includes one or more cameras, the method further comprising: detecting, via the one or more cameras, the multiple points of interest.
152. The method of claim 151, wherein the computer system is in communication with one or more movement components, the method further comprising: before detecting the multiple points of interest, moving, via the one or more movement components, a portion of the computer system, wherein the multiple points of interest are detected in conjunction with moving the portion of the computer system.
153. The method of any one of claims 141-152, wherein the set of one or more criteria is a first set of one or more criteria, the method further comprising: in response to transitioning from the inactive mode to the active mode and in accordance with a determination that the multiple points of interest detected within the environment satisfy a second set of one or more criteria, directing the computer system at the multiple points such that the representation of the software object is directed towards the multiple points, wherein the second set of one or more criteria is different from the first set of one or more criteria.
154. The method of any one of claims 141-153, wherein the computer system is in communication with one or more movement components, the method further comprising: in response to transitioning from the inactive mode to the active mode and in accordance with the determination that the first point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, moving, via the one or more movement components, a portion of the computer system such that the portion of the computer system is directed towards the first point.
155. The method of any one of claims 141-154, wherein directing the computer system at the first point such that the representation of the software object is directed towards the first point includes altering one or more visual characteristics of the representation of the software object.
156. The method of any one of claims 141-155, further comprising: in response to transitioning from the inactive mode to the active mode and in accordance with the determination that the first point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, displaying, via the one or more display components, content corresponding to the first point.
157. The method of any one of claims 141-156, wherein the one or more display components are disabled while the computer system is in the inactive mode.
158. The method of any one of claims 141-156, further comprising: before transitioning from the inactive mode to the active mode, displaying, via the one or more display components, content different from the representation of the software object.
159. The method of any one of claims 141-158, wherein the set of one or more criteria is a first set of one or more criteria, the method further comprising: while detecting the multiple points of interest and in accordance with a determination that a third set of one or more criteria is satisfied, transitioning from the active mode to the inactive mode, wherein the third set of one or more criteria is different from the first set of one or more criteria .
160. The method of any one of claims 141-159, further comprising: while displaying the representation of the software object, detecting, via the one or more input devices, that the environment does not include a point of interest; and in response to detecting that the environment does not include a point of interest, transitioning from the active mode to the inactive mode.
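Claims 141 and 160 together describe waking, directing the representation of the software object at a qualifying point of interest, and returning to the inactive mode when the environment no longer includes a point of interest. A minimal stateful sketch follows; the class name, the mode strings, and the dictionary shape of a point are assumptions for illustration, not the claimed implementation.

```python
class DirectionController:
    """Illustrative sketch only; the claims do not prescribe any
    particular data structure or control flow."""

    def __init__(self, satisfies_criteria):
        self.satisfies_criteria = satisfies_criteria
        self.mode = "inactive"

    def wake(self, points_of_interest):
        # Transition to the active mode and choose a point to direct
        # the representation of the software object towards.
        self.mode = "active"
        return self._choose(points_of_interest)

    def update(self, points_of_interest):
        # Claim 160: when the environment no longer includes a point
        # of interest, transition back to the inactive mode.
        if not points_of_interest:
            self.mode = "inactive"
            return None
        return self._choose(points_of_interest)

    def _choose(self, points_of_interest):
        # Return the first detected point that satisfies the set of
        # one or more criteria, if any.
        for point in points_of_interest:
            if self.satisfies_criteria(point):
                return point
        return None
```

A controller built with a criterion such as "within three units of the system" would direct the representation at the nearest qualifying person or object on wake, then go inactive once the room empties.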
161. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for performing the method of any one of claims 141-160.
162. A computer system that is configured to communicate with one or more input devices and one or more display components, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 141-160.
163. A computer system that is configured to communicate with one or more input devices and one or more display components, the computer system comprising: means for performing the method of any one of claims 141-160.
164. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for performing the method of any one of claims 141-160.
165. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
166. A computer system configured to communicate with one or more input devices and one or more display components, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
167. A computer system configured to communicate with one or more input devices and one or more display components, the computer system comprising: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: means for, in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and means for, in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
168. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for: in response to transitioning from an inactive mode to an active mode and while displaying, via the one or more display components, a representation of a software object: in accordance with a determination that a first point of multiple points of interest detected within an environment satisfies a set of one or more criteria, directing the computer system at the first point such that the representation of the software object is directed towards the first point; and in accordance with a determination that a second point of the multiple points of interest detected within the environment satisfies the set of one or more criteria, directing the computer system at the second point such that the representation of the software object is directed towards the second point, wherein the second point is different from the first point.
169. A method, comprising: at a computer system that is in communication with one or more input devices and one or more display components: while displaying, via the one or more display components, (1) a representation of a software object in a first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: ceasing display of the content; and displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner.
170. The method of claim 169, wherein displaying the representation of the software object in the first manner includes displaying, via the one or more display components, the representation of the software object having a first size, wherein displaying the representation of the software object in the second manner includes displaying, via the one or more display components, the representation of the software object having a second size larger than the first size.
171. The method of claim 170, wherein: in accordance with a determination that the content has a third size, the representation of the software object is displayed having the first size; in accordance with a determination that the content has a fourth size, the representation of the software object is displayed having the second size; and the fourth size is different from the third size.
172. The method of any one of claims 169-171, further comprising: detecting, via the one or more input devices, an input corresponding to a request for content; and in response to detecting the input corresponding to the request for content, displaying, via the one or more display components, the content.
173. The method of any one of claims 169-172, wherein the content corresponds to the subject.
174. The method of any one of claims 169-173, wherein detecting the attention of the subject includes detecting, via the one or more input devices, a location of the subject within an environment.
175. The method of claim 174, wherein the first set of one or more criteria includes a criterion that is satisfied when a location of the subject is outside a threshold distance from the computer system.
176. The method of any one of claims 169-175, wherein detecting the attention of the subject includes detecting, via the one or more input devices, a gaze of the subject.
177. The method of any one of claims 169-176, wherein detecting the attention of the subject includes detecting an amount of time since the subject last interacted with the computer system.
178. The method of any one of claims 169-177, further comprising: while displaying the content, detecting, via the one or more input devices, attention of another subject; and in response to detecting the attention of the other subject: in accordance with a determination that the attention of the other subject does not satisfy the first set of one or more criteria, ceasing display of the content; and in accordance with a determination that the attention of the other subject satisfies the first set of one or more criteria, maintaining display of the content.
179. The method of any one of claims 169-178, further comprising: while displaying the content, detecting, via the one or more input devices, attention of another subject; and in response to detecting the attention of the other subject, maintaining display of the content.
180. The method of any one of claims 169-179, further comprising: in response to detecting the attention of the subject: in accordance with the determination that the attention of the subject satisfies the first set of one or more criteria and in accordance with a determination that a predefined amount of time has not passed since the attention of the subject last did not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner; and in accordance with the determination that the attention of the subject satisfies the first set of one or more criteria and in accordance with a determination that the predefined amount of time has passed since the attention of the subject last did not satisfy the first set of one or more criteria: ceasing display of the content; and displaying, via the one or more display components, the representation of the software object in the second manner.
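The timing condition in claim 180 — hiding content only after the attention criteria have been satisfied continuously for a predefined amount of time — behaves like a debounce. A sketch under assumed names, with the clock passed in explicitly so the behavior is deterministic:

```python
class AttentionDebouncer:
    """Illustrative sketch of claim 180's timing behavior; the class
    and method names are assumptions, not from the claims."""

    def __init__(self, predefined_seconds):
        self.predefined_seconds = predefined_seconds
        self.away_since = None  # when attention last left the system

    def observe(self, attention_on_system, now):
        if attention_on_system:
            # Criteria not satisfied: reset the timer, keep the content.
            self.away_since = None
            return "maintain"
        if self.away_since is None:
            self.away_since = now  # criteria first satisfied: start timing
        if now - self.away_since >= self.predefined_seconds:
            # Predefined time has passed: cease content, show the
            # representation in the second manner.
            return "hide_content"
        return "maintain"
```

With a two-second window, brief glances away leave the content in place; only sustained inattention triggers the transition.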
181. The method of any one of claims 169-180, wherein the attention of the subject is a first attention of the subject at a first time, the method further comprising: in response to detecting the first attention of the subject at the first time and in accordance with the determination that the attention of the subject satisfies the first set of one or more criteria, detecting, via the one or more input devices, second attention of the subject at a second time after the first time; and in response to detecting the second attention of the subject at the second time and in accordance with a determination that the second attention of the subject at the second time satisfies the first set of one or more criteria, ceasing display of the representation of the software object.
182. The method of any one of claims 169-181, wherein the attention of the subject is first attention of the subject at a first time, the method further comprising: after detecting the first attention of the subject at the first time and while no longer displaying the content, detecting, via the one or more input devices, third attention of the subject at a third time after the first time; and in response to detecting the third attention of the subject at the third time and in accordance with a determination that the third attention of the subject at the third time does not satisfy the first set of one or more criteria, displaying, via the one or more display components, the content.
183. The method of any one of claims 169-182, wherein the attention of the subject is first attention of the subject at a first time, the method further comprising: in response to detecting the first attention of the subject at the first time and while no longer displaying the content, detecting, via the one or more input devices, fourth attention of the subject at a fourth time after the first time; and in response to detecting the fourth attention of the subject at the fourth time and in accordance with a determination that the fourth attention of the subject at the fourth time does not satisfy the first set of one or more criteria, forgoing display of the content.
184. The method of any one of claims 169-183, wherein the content is first content, the method further comprising: after ceasing display of the content, detecting, via the one or more input devices, an input corresponding to a request for second content, wherein the second content is different from the first content; and in response to detecting the input corresponding to the request for the second content: displaying, via the one or more display components, the second content; and displaying, via the one or more display components, the representation of the software object in a third manner different from the second manner.
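The attention-dependent presentation of claims 169-170 can be summarized as a pure function. This is a sketch with placeholder sizes; the claims only require that the second size be larger than the first, and the dictionary keys here are assumptions for illustration.

```python
def presentation_state(attention_directed_to_system: bool) -> dict:
    """Sketch of claims 169-170: while attention is directed to the
    computer system, the content is shown alongside the representation
    in the first manner (a first size); when attention is not directed
    to the system, the content is hidden and the representation is
    shown in the second manner (a larger, second size)."""
    if attention_directed_to_system:
        return {"content_visible": True, "representation_size": 1.0}
    return {"content_visible": False, "representation_size": 2.0}
```

A display loop could call this on each attention sample and animate between the two states.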
185. A non-transitory computer-readable medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for performing the method of any one of claims 169-183.
186. A computer system that is configured to communicate with one or more input devices and one or more display components, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any one of claims 169-183.
187. A computer system that is configured to communicate with one or more input devices and one or more display components, the computer system comprising:
means for performing the method of any one of claims 169-183.
188. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for performing the method of any one of claims 169-183.
189. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for: while displaying, via the one or more display components,
(1) a representation of a software object in a first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: ceasing display of the content; and displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and
(2) the representation of the software object in the first manner.
190. A computer system configured to communicate with one or more input devices and one or more display components, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying, via the one or more display components, (1) a representation of a software object in a first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: ceasing display of the content; and displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner.
191. A computer system configured to communicate with one or more input devices and one or more display components, the computer system comprising: means for, while displaying, via the one or more display components, (1) a representation of a software object in a
first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: means for ceasing display of the content; and means for displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and means for, in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner.
192. A computer program product, comprising one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more input devices and one or more display components, the one or more programs including instructions for: while displaying, via the one or more display components, (1) a representation of a software object in a first manner and (2) content, detecting, via the one or more input devices, attention of a subject; and in response to detecting the attention of the subject: in accordance with a determination that the attention of the subject satisfies a first set of one or more criteria, wherein the first set of one or more criteria includes a criterion that is satisfied when the attention of the subject is not directed to the computer system: ceasing display of the content; and
displaying, via the one or more display components, the representation of the software object in a second manner different from the first manner; and in accordance with a determination that the attention of the subject does not satisfy the first set of one or more criteria, maintaining display of (1) the content and (2) the representation of the software object in the first manner.
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463700488P | 2024-09-27 | 2024-09-27 | |
| US202463700463P | 2024-09-27 | 2024-09-27 | |
| US63/700,488 | 2024-09-27 | ||
| US63/700,463 | 2024-09-27 | ||
| US202563889077P | 2025-09-26 | 2025-09-26 | |
| US63/889,077 | 2025-09-26 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2025265153A2 WO2025265153A2 (en) | 2025-12-26 |
| WO2025265153A9 true WO2025265153A9 (en) | 2026-02-19 |
Family
ID=97600118
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/048554 Pending WO2025265153A2 (en) | 2024-09-27 | 2025-09-29 | Providing indications of interactive user interfaces |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025265153A2 (en) |