
WO2018057268A4 - Image data for enhanced user interactions - Google Patents


Info

Publication number
WO2018057268A4
WO2018057268A4 (PCT/US2017/049760)
Authority
WO
WIPO (PCT)
Prior art keywords
content
image data
avatar
message
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2017/049760
Other languages
French (fr)
Other versions
WO2018057268A1 (en)
Inventor
Marek Bereza
Lukas Robert Tom GIRLING
Joseph A. MALIA
Jeffrey Traer Bernstein
William D. LINDMEIER
Mark HAUENSTEIN
Adi Berenson
Amir Hoffnung
Julian Missig
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DKPA201770418A external-priority patent/DK179978B1/en
Priority to AU2017330208A priority Critical patent/AU2017330208B2/en
Priority to EP21166287.9A priority patent/EP3920052A1/en
Priority to CN201780053143.0A priority patent/CN109691074A/en
Priority to JP2019511975A priority patent/JP6824552B2/en
Application filed by Apple Inc filed Critical Apple Inc
Priority to EP17853654.6A priority patent/EP3485392B1/en
Priority to KR1020217015473A priority patent/KR102403090B1/en
Priority to KR1020197005136A priority patent/KR102257353B1/en
Publication of WO2018057268A1 publication Critical patent/WO2018057268A1/en
Publication of WO2018057268A4 publication Critical patent/WO2018057268A4/en
Anticipated expiration legal-status Critical
Priority to AU2020201721A priority patent/AU2020201721B2/en
Priority to AU2021250944A priority patent/AU2021250944B2/en
Ceased legal-status Critical Current


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72427 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting games or graphical animations
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72436 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure generally relates to using avatars and image data for enhanced user interactions. In some examples, user-status-dependent avatars are generated and displayed with a message associated with the user status. In some examples, a device captures image information to scan an object and create a 3D model of the object. The device determines an algorithm for building the 3D model based on the captured image information and provides visual feedback on the additional image data the algorithm needs to build the 3D model. In some examples, an application's operation on a device is restricted depending on whether captured image data identifies an authorized user as using the device.

Claims

AMENDED CLAIMS received by the International Bureau on 17 March 2018 (17.03.2018)

What is claimed is:
1. A method comprising:
at an electronic device with a display, wherein the electronic device is associated with a first user:
receiving a first message from a second user, wherein the first message includes first content;
receiving first status data for the second user, wherein the first status data is:
associated with the first message,
based on a first biometric characteristic of the second user detected at the time the second user composed and/or sent the first message, and
separate from the first content;
displaying concurrently, on the display, the first message, including the first content, and a first avatar, wherein the first avatar is based on the first status data and the displayed first avatar is adjacent to the displayed first message;
after displaying the first message and the first avatar, receiving a second message from the second user, wherein the second message includes second content;
receiving second status data for the second user, wherein the second status data is associated with the second message and separate from the second content; and
while maintaining the display of the first message and the first avatar, displaying, on the display, the second message, including the second content, and a second avatar, wherein the displayed second avatar is adjacent to the displayed second message, the second avatar is based on the second status data, and the first avatar and the second avatar are different.
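For illustration only (this sketch is not part of the claims, and all names such as `Message`, `avatar_for`, and `render_conversation` are hypothetical), the display logic of claim 1 can be approximated as pairing each message with an avatar derived from that message's status data:

```python
from dataclasses import dataclass

@dataclass
class Message:
    content: str      # the message content of claim 1
    status_data: str  # status data, e.g. an emotion inferred from a biometric characteristic

def avatar_for(status_data: str) -> str:
    # Map status data onto a predefined avatar model (cf. claim 10);
    # a trivial lookup stands in for a real avatar model here.
    return {"happy": "avatar_smile", "sad": "avatar_frown"}.get(status_data, "avatar_neutral")

def render_conversation(messages: list[Message]) -> list[tuple[str, str]]:
    # Each message is displayed with an avatar adjacent to it, derived
    # from that message's own status data; earlier entries remain in the
    # result, mirroring "while maintaining the display of the first message".
    return [(m.content, avatar_for(m.status_data)) for m in messages]
```

Because each avatar is computed from its own message's status data, two messages with different status data yield different avatars, as claim 1 requires.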
2. The method of claim 1, further comprising:
displaying contact information for a set of users that includes contact information for the second user, wherein the second avatar is displayed with the contact information for the second user.
3. The method of any of claims 1 or 2 further comprising:
receiving a first avatar model for the second user;
generating the first avatar based on the first avatar model and first status data; and
generating the second avatar based on the first avatar model and the second status data.
4. The method of any of claims 1 or 2 further comprising:
receiving a second avatar model for the second user;
generating an updated first avatar based on the second avatar model and first status data; generating an updated second avatar based on the second avatar model and the second status data; and
displaying the updated first avatar instead of the first avatar with the first message including the first content.
5. (Cancelled)
6. The method of any of claims 1 or 2 further comprising:
selecting one or more characteristics for the first avatar based on the first status data.
7. The method of any of claims 1 or 2 further comprising:
selecting one or more characteristics for the second avatar based on the second status data, wherein the second status data is based on a second biometric characteristic.
8. The method of any of claims 1 or 2, wherein the first avatar is an animated avatar.
9. The method of any of claims 1 or 2, wherein the first status data is based on an optical image or a depth image of the second user.
10. The method of any of claims 1 or 2, further comprising:
mapping the first status data on to a predefined avatar model to create the first avatar.
11. The method of any of claims 1 or 2, wherein the first status data represents an emotion of the second user.
12. (Cancelled)
13. The method of any of claims 1 or 2, wherein the second status data is based on a detected expression of the second user at the time the second user composed and/or sent the second message.
14. The method of any of claims 1 or 2, further comprising:
receiving, from the first user and on the electronic device, third content for a third message;
generating third status data for the first user;
associating the third status data with the third message;
sending the third message to the second user; and
sending the third status data to the second user.
15. The method of claim 14 further comprising:
concurrently displaying the third message including the third content and a third avatar, wherein the third avatar is based on the third status data, and the third message and third avatar are displayed concurrently with the second message and second avatar.
16. The method of any of claims 1 or 2, wherein the first avatar and second avatar represent the physical appearance of the second user.
17. The method of any of claims 1 or 2, wherein displaying the first message and the first avatar includes displaying the first message as a text bubble coming from a mouth of the first avatar.
18. The method of any of claims 1 or 2, wherein the second user is associated with a source electronic device that sends the first message and the second message.
19. An electronic device, comprising:
a display; one or more processors;
one or more input devices;
a memory; and
one or more programs, wherein the one or more programs are stored in memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a first message from a second user, wherein the first message includes first content;
receiving first status data for the second user, wherein the first status data is: associated with the first message,
based on a first biometric characteristic of the second user detected at the time the second user composed and/or sent the first message, and
separate from the first content;
displaying concurrently, on the display, the first message, including the first content, and a first avatar, wherein the first avatar is based on the first status data and the displayed first avatar is adjacent to the displayed first message;
after displaying the first message and the first avatar, receiving a second message from the second user, wherein the second message includes second content;
receiving second status data for the second user, wherein the second status data is associated with the second message and separate from the second content; and
while maintaining the display of the first message and the first avatar, displaying, on the display, the second message, including the second content, and a second avatar, wherein the displayed second avatar is adjacent to the displayed second message, the second avatar is based on the second status data, and the first avatar and the second avatar are different.
20. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device with a display and one or more input devices, cause the device to:
receive a first message from a second user, wherein the first message includes first content;
receive first status data for the second user, wherein the first status data is:
associated with the first message,
based on a first biometric characteristic of the second user detected at the time the second user composed and/or sent the first message, and
separate from the first content;
display concurrently, on the display, the first message, including the first content, and a first avatar, wherein the first avatar is based on the first status data and the displayed first avatar is adjacent to the displayed first message;
after displaying the first message and the first avatar, receive a second message from the second user, wherein the second message includes second content;
receive second status data for the second user, wherein the second status data is associated with the second message and separate from the second content; and
while maintaining the display of the first message and the first avatar, display, on the display, the second message, including the second content, and a second avatar, wherein the displayed second avatar is adjacent to the displayed second message, the second avatar is based on the second status data, and the first avatar and the second avatar are different.
21. An electronic device, comprising:
a display;
one or more input devices;
means for receiving a first message from a second user, wherein the first message includes first content;
means for receiving first status data for the second user, wherein the first status data is: associated with the first message,
based on a first biometric characteristic of the second user detected at the time the second user composed and/or sent the first message, and
separate from the first content;
means for displaying concurrently, on the display, the first message, including the first content, and a first avatar, wherein the first avatar is based on the first status data and the displayed first avatar is adjacent to the displayed first message;
means for receiving a second message from the second user, after displaying the first message and the first avatar, wherein the second message includes second content;
means for receiving second status data for the second user, wherein the second status data is associated with the second message and separate from the second content; and
means for displaying, while maintaining the display of the first message and the first avatar, on the display, the second message, including the second content, and a second avatar, wherein the displayed second avatar is adjacent to the displayed second message, the second avatar is based on the second status data, and the first avatar and the second avatar are different.
22. An electronic device, comprising:
a display;
one or more processors;
memory; and
one or more programs, wherein the one or more programs are stored in memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 1-4, 6-11, and 13-18.
23. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device with a display and one or more input devices, cause the device to perform any of the methods of claims 1-4, 6-11, and 13-18.
24. An electronic device, comprising:
a display;
means for performing any of the methods of claims 1-4, 6-11, and 13-18.
25. A method, comprising:
at an electronic device with one or more image sensors, memory, and a display:
capturing first image data from one or more image sensors of the electronic device, wherein the first image data includes first optical image data of an object from a first perspective;
capturing second image data from the one or more image sensors of the electronic device, wherein the second image data includes second optical image data of the object from a second perspective that is different from the first perspective;
selecting an algorithm based on the change in perspective from the first perspective to the second perspective;
based on the algorithm, determining additional image data that is needed to continue the 3D modeling of the object; and
displaying, on the display, visual feedback that provides instructions for capturing the additional image data determined based on the selected algorithm.
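For illustration only (not part of the claims; the function names, the angle-based pose representation, and the 15-degree threshold are all assumptions), the selection step of claim 25 can be sketched as choosing between a scan-based and a discrete-image-based algorithm from the change in perspective, then emitting feedback about the image data still needed:

```python
def select_algorithm(first_angle_deg: float, second_angle_deg: float,
                     sweep_threshold_deg: float = 15.0) -> str:
    # Hypothetical heuristic: a small change in camera angle between
    # captures suggests a continuous sweep of the object (scan-based
    # algorithm, cf. claim 35); a large jump suggests discrete still
    # images (discrete-image-based algorithm, cf. claim 36).
    delta = abs(second_angle_deg - first_angle_deg)
    return "scan" if delta <= sweep_threshold_deg else "discrete"

def capture_feedback(algorithm: str, covered_deg: float) -> str:
    # Visual feedback describing the additional image data the selected
    # algorithm still needs (cf. claim 25); here, uncovered viewing angles.
    remaining = max(0.0, 360.0 - covered_deg)
    if remaining == 0.0:
        return "model complete"
    action = "keep sweeping around the object" if algorithm == "scan" else "capture another photo"
    return f"{action}: about {remaining:.0f} degrees still uncovered"
```

A real implementation would derive perspective change from full camera poses and depth data rather than a single angle; the point is only that the algorithm choice, and therefore the on-screen instructions, depend on how the perspective changed between captures.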
26. The method of claim 25 further comprising:
receiving third data, wherein the third data includes third optical image data of the object from a third perspective;
selecting an updated algorithm based on the third perspective, wherein the updated algorithm is different than the algorithm;
based on the updated algorithm, determining updated additional image data that is needed to continue the 3D modeling of the object, wherein the updated additional image data is different than the additional image data; and
displaying, on the display, visual feedback that provides updated instructions for capturing the updated additional image data, wherein the updated instructions are different than the instructions displayed prior to selecting the updated algorithm.
27. The method of claim 26 further comprising:
building a 3D model of the object based on the first image data, the second image data, the third image data, and the updated additional image data using the selected updated algorithm.
28. The method of any of claims 26 or 27 further comprising:
sending at least a portion of the first image data to a remote server; and
receiving an indication from the remote server that the third data is available for the object.
29. The method of any of claims 25-27, wherein displaying, on the display, visual feedback that provides updated instructions for capturing the updated additional image data includes:
in accordance with a determination that a first algorithm has been selected, displaying a first set of instructions; and
in accordance with a determination that a second algorithm, different from the first algorithm, has been selected, displaying a second set of instructions different than the first set of instructions.
30. The method of any of claims 25-27, wherein the first image data includes first depth image data of the object from the first perspective.
31. The method of any of claims 25-27, further comprising:
obtaining first position data for the first perspective.
32. The method of claim 31, wherein selecting the algorithm is also based on the first position data.
33. The method of any of claims 25-27 further comprising:
capturing second position data for the second perspective, wherein the second image data includes second depth image data of the object from the second perspective and selecting the algorithm is also based on the second position data.
34. The method of any of claims 25-27, further comprising:
building a 3D model of the object based on the first image data, the second image data, and the additional image data using the selected algorithm; and
storing, in the memory, the 3D model.
35. The method of any of claims 25-27, wherein selecting the algorithm includes selecting a scan-based algorithm based on the change from the first perspective to the second perspective indicating that the first image data and the second image data are from a scan of the object.
36. The method of any of claims 25-27, wherein selecting the algorithm includes selecting a discrete-image-based algorithm based on the change from the first perspective to the second perspective indicating that the first perspective and the second perspective are for discrete images.
37. The method of any of claims 25-27, further comprising:
identifying a support in the first image data that is touching the object; and
building a 3D model of the object based on the first image data and the second image data using the selected algorithm, wherein the 3D model does not include the support touching the object.
38. The method of any of claims 25-27, further comprising:
displaying on a display of the electronic device a first window that includes a live image of the object; and
displaying on the display a second window that includes an image of a model of the object, wherein the model is based on the first image data and the second image data.
39. An electronic device, comprising:
a display;
one or more processors;
one or more input devices;
a memory; and
one or more programs, wherein the one or more programs are stored in memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
capturing first image data from one or more image sensors of the electronic device, wherein the first image data includes first optical image data of an object from a first perspective;
capturing second image data from the one or more image sensors of the electronic device, wherein the second image data includes second optical image data of the object from a second perspective that is different from the first perspective;
selecting an algorithm based on the change in perspective from the first perspective to the second perspective;
based on the algorithm, determining additional image data that is needed to continue the 3D modeling of the object; and
displaying, on the display, visual feedback that provides instructions for capturing the additional image data determined based on the selected algorithm.
40. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device with a display and one or more input devices, cause the device to:
capture first image data from one or more image sensors of the electronic device, wherein the first image data includes first optical image data of an object from a first perspective;
capture second image data from the one or more image sensors of the electronic device, wherein the second image data includes second optical image data of the object from a second perspective that is different from the first perspective;
select an algorithm based on the change in perspective from the first perspective to the second perspective;
based on the algorithm, determine additional image data that is needed to continue the 3D modeling of the object; and
display, on the display, visual feedback that provides instructions for capturing the additional image data determined based on the selected algorithm.
41. An electronic device, comprising:
a display;
one or more input devices;
means for capturing first image data from one or more image sensors of the electronic device, wherein the first image data includes first optical image data of an object from a first perspective;
means for capturing second image data from the one or more image sensors of the electronic device, wherein the second image data includes second optical image data of the object from a second perspective that is different from the first perspective;
means for selecting an algorithm based on the change in perspective from the first perspective to the second perspective;
means for, based on the algorithm, determining additional image data that is needed to continue the 3D modeling of the object; and
means for displaying, on the display, visual feedback that provides instructions for capturing the additional image data determined based on the selected algorithm.
42. An electronic device, comprising:
a display;
one or more processors;
memory; and
one or more programs, wherein the one or more programs are stored in memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 25-38.
43. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device with a display and one or more input devices, cause the device to perform any of the methods of claims 25-38.
44. An electronic device, comprising:
a display;
means for performing any of the methods of claims 25-38.
45. A method comprising:
at an electronic device with a display and one or more image sensors:
displaying, on the display, content in an application, wherein the content is displayed while the application is in a first configuration;
while displaying the content, capturing image data from the one or more image sensors of the electronic device;
after capturing the image data, receiving a request to navigate away from the content; and
in response to receiving a request to navigate away from the content:
in accordance with a determination that a first set of content-lock criteria have been met, preventing navigation away from the content while maintaining display of the content, wherein the first set of content-lock criteria includes a first criterion that is met when the captured image data indicates that an unauthorized user is using the device; and
in accordance with a determination that the first set of content-lock criteria have not been met, navigating away from the content in accordance with the request.
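For illustration only (not part of the claims; `ImageAssessment`, `content_lock_met`, and `handle_navigation` are hypothetical names, and the boolean assessment stands in for real face-recognition output), the branch in claim 45 can be sketched as a gate on navigation requests:

```python
from dataclasses import dataclass

@dataclass
class ImageAssessment:
    # Hypothetical result of analyzing image data captured while
    # the content was displayed.
    authorized_user_present: bool
    unauthorized_user_present: bool

def content_lock_met(a: ImageAssessment) -> bool:
    # First criterion of claim 45: the captured image data indicates
    # that an unauthorized user is using the device. Claims 47-49 add
    # further criteria on the presence or absence of an authorized user.
    return a.unauthorized_user_present

def handle_navigation(a: ImageAssessment, current_content: str, target: str) -> str:
    # If the content-lock criteria are met, keep displaying the current
    # content; otherwise navigate away in accordance with the request.
    return current_content if content_lock_met(a) else target
```

The same gate generalizes to the later dependent claims by evaluating additional criteria sets (disabling functions, limiting the application's configuration, or suppressing notifications) instead of a single boolean.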
46. The method of claim 45 further comprising:
in accordance with a determination that the first set of content-lock criteria is no longer met, allowing navigation away from the content.
47. The method of any of claims 45 or 46, wherein the first set of lock-criteria includes a second criterion that is met when the captured image data indicates that an authorized user of the electronic device is not using the device.
48. The method of any of claims 45 or 46, wherein the first set of lock-criteria includes a third criterion that is met when the captured image data indicates that the unauthorized user is present and an authorized user is not present.
49. The method of any of claims 45 or 46, wherein the first set of lock-criteria is met when the captured image data indicates that the unauthorized user is present without regard to whether or not an authorized user is present.
50. The method of any of claims 45 or 46, further comprising:
in accordance with a determination that a second set of content-lock criteria has been met, disabling at least one function of the electronic device.
51. The method of claim 50, wherein the first set of lock-criteria and the second set of lock-criteria are different.
52. The method of any of claims 45 or 46, further comprising:
in accordance with a determination that a third set of content-lock criteria has been met, switching the application to a second configuration that limits operation of the application as compared to the first configuration.
53. The method of any of claims 45 or 46, further comprising:
in accordance with the determination that a fourth set of content-lock criteria have been met, locking other functionality of the electronic device while continuing to display the content in the application.
54. The method of any of claims 45 or 46, further comprising:
in accordance with the determination that a fifth set of content-lock criteria have been met, preventing the display of a notification related to a communication received at the electronic device.
55. The method of claim 54, wherein:
the fifth set of lock-criteria includes a fourth criterion that is met when the captured image data indicates that an unauthorized user is using the electronic device and the fifth set of lock-criteria is met if the fourth criterion is met; and
the first set of lock-criteria includes a fifth criterion that is met when the captured image data indicates the absence of an authorized user.
56. The method of claim 55 further comprising:
in accordance with the fourth criterion being met, preventing navigation between applications on the electronic device; and
in accordance with the fifth criterion being met, preventing navigation within the application.
57. The method of any of claims 45 or 46, further comprising:
determining whether the captured image data indicates the presence of an unauthorized user of the electronic device.
58. The method of any of claims 45 or 46, wherein the image data includes optical data and depth data, and wherein determining whether the first set of content-lock criteria have been met is based on the optical data and the depth data.
59. The method of any of claims 45 or 46, wherein navigating away from the content includes translating currently displayed content.
60. The method of any of claims 45 or 46, wherein navigating away from the content includes switching between content items in an application.
61. The method of any of claims 45 or 46, wherein navigating away from the content includes switching applications or closing the application to display the home screen.
62. The method of any of claims 45 or 46, further comprising:
receiving unlock information associated with an authorized user of the electronic device;
determining whether the unlock information is authentic; and
in accordance with a determination that the unlock information is authentic, enabling navigation away from the content.
63. An electronic device, comprising:
a display;
one or more processors;
one or more input devices;
a memory; and
one or more programs, wherein the one or more programs are stored in memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
displaying, on the display, content in an application, wherein the content is displayed while the application is in a first configuration;
while displaying the content, capturing image data from the one or more image sensors of the electronic device;
after capturing the image data, receiving a request to navigate away from the content; and
in response to receiving a request to navigate away from the content:
in accordance with a determination that a first set of content-lock criteria have been met, preventing navigation away from the content while maintaining display of the content, wherein the first set of content-lock criteria includes a first criterion that is met when the captured image data indicates that an unauthorized user is using the device; and
in accordance with a determination that the first set of content-lock criteria have not been met, navigating away from the content in accordance with the request.
64. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device with a display and one or more input devices, cause the device to:
display, on the display, content in an application, wherein the content is displayed while the application is in a first configuration;
while displaying the content, capture image data from the one or more image sensors of the electronic device;
after capturing the image data, receive a request to navigate away from the content; and
in response to receiving a request to navigate away from the content:
in accordance with a determination that a first set of content-lock criteria have been met, prevent navigation away from the content while maintaining display of the content, wherein the first set of content-lock criteria includes a first criterion that is met when the captured image data indicates that an unauthorized user is using the device; and
in accordance with a determination that the first set of content-lock criteria have not been met, navigate away from the content in accordance with the request.
65. An electronic device, comprising:
a display;
one or more input devices;
means for displaying, on the display, content in an application, wherein the content is displayed while the application is in a first configuration;
while displaying the content, means for capturing image data from the one or more image sensors of the electronic device;
means for, after capturing the image data, receiving a request to navigate away from the content; and
in response to receiving a request to navigate away from the content:
means for, in accordance with a determination that a first set of content-lock criteria have been met, preventing navigation away from the content while maintaining display of the content, wherein the first set of content-lock criteria includes a first criterion that is met when the captured image data indicates that an unauthorized user is using the device; and
means for, in accordance with a determination that the first set of content-lock criteria have not been met, navigating away from the content in accordance with the request.
66. An electronic device, comprising:
a display;
one or more processors;
memory; and
one or more programs, wherein the one or more programs are stored in memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 45-62.
67. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device with a display and one or more input devices, cause the device to perform any of the methods of claims 45-62.
68. An electronic device, comprising:
a display; and
means for performing any of the methods of claims 45-62.
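The navigation-handling behavior recited in claims 63-65 (display content, capture image data, then either prevent or allow navigation depending on whether the content-lock criteria are met) can be sketched as follows. This is an illustrative sketch only: every name in it (`handle_navigation`, `is_unauthorized_user`, `NavigationRequest`) is hypothetical and does not come from the claims or any disclosed implementation, and the unauthorized-user check is stubbed where a real device would run recognition against the captured image data.

```python
from dataclasses import dataclass


@dataclass
class NavigationRequest:
    """A request to navigate away from the current content (hypothetical)."""
    target: str


def is_unauthorized_user(image_data: bytes) -> bool:
    """Stand-in for the claimed check that captured image data indicates
    an unauthorized user; a real device would run face recognition here."""
    return image_data == b"unknown-face"


def handle_navigation(current_content: str, image_data: bytes,
                      request: NavigationRequest) -> str:
    """Return the content displayed after the request, mirroring the claim
    logic: if the content-lock criteria are met (image data indicates an
    unauthorized user), navigation is prevented and the current content
    stays on screen; otherwise navigation proceeds per the request."""
    content_lock_criteria_met = is_unauthorized_user(image_data)
    if content_lock_criteria_met:
        return current_content  # navigation prevented, display maintained
    return request.target       # navigate away in accordance with the request
```

Note that the sketch keeps the two determinations mutually exclusive, as in the claims: exactly one of the two "in accordance with a determination" branches runs for any given request.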
PCT/US2017/049760 2016-09-23 2017-08-31 Image data for enhanced user interactions Ceased WO2018057268A1 (en)

Priority Applications (9)

Application Number Priority Date Filing Date Title
KR1020217015473A KR102403090B1 (en) 2016-09-23 2017-08-31 Image data for enhanced user interactions
KR1020197005136A KR102257353B1 (en) 2016-09-23 2017-08-31 Image data for enhanced user interactions
EP21166287.9A EP3920052A1 (en) 2016-09-23 2017-08-31 Image data for enhanced user interactions
CN201780053143.0A CN109691074A (en) 2016-09-23 2017-08-31 Image data for enhanced user interaction
JP2019511975A JP6824552B2 (en) 2016-09-23 2017-08-31 Image data for extended user interaction
AU2017330208A AU2017330208B2 (en) 2016-09-23 2017-08-31 Image data for enhanced user interactions
EP17853654.6A EP3485392B1 (en) 2016-09-23 2017-08-31 Image data for enhanced user interactions
AU2020201721A AU2020201721B2 (en) 2016-09-23 2020-03-09 Image data for enhanced user interactions
AU2021250944A AU2021250944B2 (en) 2016-09-23 2021-10-15 Image data for enhanced user interactions

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662399226P 2016-09-23 2016-09-23
US62/399,226 2016-09-23
DKPA201770418 2017-05-31
DKPA201770418A DK179978B1 (en) 2016-09-23 2017-05-31 IMAGE DATA FOR ENHANCED USER INTERACTIONS

Publications (2)

Publication Number Publication Date
WO2018057268A1 WO2018057268A1 (en) 2018-03-29
WO2018057268A4 true WO2018057268A4 (en) 2018-05-31

Family

ID=61689723

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/049760 Ceased WO2018057268A1 (en) 2016-09-23 2017-08-31 Image data for enhanced user interactions

Country Status (1)

Country Link
WO (1) WO2018057268A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9979890B2 (en) 2015-04-23 2018-05-22 Apple Inc. Digital viewfinder user interface for multiple cameras
US9912860B2 (en) 2016-06-12 2018-03-06 Apple Inc. User interface for camera effects
DK180859B1 (en) 2017-06-04 2022-05-23 Apple Inc USER INTERFACE CAMERA EFFECTS
US11112964B2 (en) 2018-02-09 2021-09-07 Apple Inc. Media capture lock affordance for graphical user interface
US11722764B2 (en) 2018-05-07 2023-08-08 Apple Inc. Creative camera
US10375313B1 (en) 2018-05-07 2019-08-06 Apple Inc. Creative camera
DK201870623A1 (en) 2018-09-11 2020-04-15 Apple Inc. User interfaces for simulated depth effects
US11321857B2 (en) 2018-09-28 2022-05-03 Apple Inc. Displaying and editing images with depth information
US11128792B2 (en) 2018-09-28 2021-09-21 Apple Inc. Capturing and displaying images with multiple focal planes
JP7286968B2 (en) * 2019-01-09 2023-06-06 凸版印刷株式会社 Imaging support device and imaging support method
US10674072B1 (en) 2019-05-06 2020-06-02 Apple Inc. User interfaces for capturing and managing visual media
US11770601B2 (en) 2019-05-06 2023-09-26 Apple Inc. User interfaces for capturing and managing visual media
US11706521B2 (en) 2019-05-06 2023-07-18 Apple Inc. User interfaces for capturing and managing visual media
US11039074B1 (en) 2020-06-01 2021-06-15 Apple Inc. User interfaces for managing media
US11212449B1 (en) 2020-09-25 2021-12-28 Apple Inc. User interfaces for media capture and management
US11539876B2 (en) 2021-04-30 2022-12-27 Apple Inc. User interfaces for altering visual media
US11778339B2 (en) 2021-04-30 2023-10-03 Apple Inc. User interfaces for altering visual media
US12112024B2 (en) 2021-06-01 2024-10-08 Apple Inc. User interfaces for managing media styles
US12506953B2 (en) 2021-12-03 2025-12-23 Apple Inc. Device, methods, and graphical user interfaces for capturing and displaying media
US12495204B2 (en) 2023-05-05 2025-12-09 Apple Inc. User interfaces for controlling media capture settings

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3859005A (en) 1973-08-13 1975-01-07 Albert L Huebner Erosion reduction in wet turbines
US4826405A (en) 1985-10-15 1989-05-02 Aeroquip Corporation Fan blade fabrication system
KR100595911B1 (en) 1998-01-26 2006-07-07 Wayne Westerman Method and apparatus for integrating manual input
US7688306B2 (en) 2000-10-02 2010-03-30 Apple Inc. Methods and apparatuses for operating a portable device based on an accelerometer
US7218226B2 (en) 2004-03-01 2007-05-15 Apple Inc. Acceleration-based theft detection system for portable electronic devices
US6677932B1 (en) 2001-01-28 2004-01-13 Finger Works, Inc. System and method for recognizing touch typing under limited tactile feedback conditions
US6570557B1 (en) 2001-02-10 2003-05-27 Finger Works, Inc. Multi-touch system and method for emulating modifier keys via fingertip chords
US7657849B2 (en) 2005-12-23 2010-02-02 Apple Inc. Unlocking a device by performing gestures on an unlock image
US8600120B2 (en) 2008-01-03 2013-12-03 Apple Inc. Personal computing device control using face detection and recognition
US8694899B2 (en) * 2010-06-01 2014-04-08 Apple Inc. Avatars reflecting user states
WO2013169849A2 (en) 2012-05-09 2013-11-14 Yknots Industries Llc Device, method, and graphical user interface for displaying user interface objects corresponding to an application
US20140013422A1 (en) 2012-07-03 2014-01-09 Scott Janus Continuous Multi-factor Authentication
KR102001913B1 (en) 2012-09-27 2019-07-19 LG Electronics Inc. Mobile Terminal and Operating Method for the Same
US20140157153A1 (en) * 2012-12-05 2014-06-05 Jenny Yuen Select User Avatar on Detected Emotion
HK1212064A1 (en) 2012-12-29 2016-06-03 Apple Inc. Device, method, and graphical user interface for transitioning between touch input to display output relationships
CN105518699A (en) 2014-06-27 2016-04-20 微软技术许可有限责任公司 Data protection based on user and gesture recognition

Also Published As

Publication number Publication date
WO2018057268A1 (en) 2018-03-29

Similar Documents

Publication Publication Date Title
WO2018057268A4 (en) Image data for enhanced user interactions
JP5901151B2 (en) How to select objects in a virtual environment
CA2951782C (en) Contextual device locking/unlocking
KR102305240B1 (en) Persistent user identification
US10585473B2 (en) Visual gestures
KR102292455B1 (en) Remote expert system
US9978174B2 (en) Remote sensor access and queuing
US9965039B2 (en) Device and method for displaying user interface of virtual input device based on motion recognition
US20150187137A1 (en) Physical object discovery
JP2016521882A5 (en)
EP2775374B1 (en) User interface and method
US10254847B2 (en) Device interaction with spatially aware gestures
WO2015102854A1 (en) Assigning virtual user interface to physical object
JP2016514865A (en) Real-world analysis visualization
US20190042188A1 (en) Information processing device, information processing method, and program
KR20110018343A (en) Generate message to be sent
US20120098967A1 (en) 3d image monitoring system and method implemented by portable electronic device
JP2011172205A (en) Video information processing apparatus and method
JP2019045212A5 (en)
JP2015152940A (en) Presentation control device, presentation control method, and program
CN115668103A (en) System, method, apparatus and computer program product for connecting a user to a persistent AR environment
KR102104136B1 (en) Augmented reality overlay for control devices
CN102854972A (en) Appreciation area prompt method and system
US20130106757A1 (en) First response and second response
US10324611B2 (en) Computer-readable non-transitory storage medium having stored therein information processing program, information processing system,information processing method, and information processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 17853654; Country of ref document: EP; Kind code of ref document: A1

ENP Entry into the national phase
    Ref document number: 20197005136; Country of ref document: KR; Kind code of ref document: A

ENP Entry into the national phase
    Ref document number: 2017853654; Country of ref document: EP; Effective date: 20190215

ENP Entry into the national phase
    Ref document number: 2019511975; Country of ref document: JP; Kind code of ref document: A

ENP Entry into the national phase
    Ref document number: 2017330208; Country of ref document: AU; Date of ref document: 20170831; Kind code of ref document: A

NENP Non-entry into the national phase
    Ref country code: DE