
US20180210738A1 - Contextual user interface based on changes in environment - Google Patents


Info

Publication number
US20180210738A1
US20180210738A1 (application US15/599,398)
Authority
US
United States
Prior art keywords
context
user
environment
privacy
gui
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/599,398
Inventor
Manuel Roman
Mara Clair Segal
Dwipal Desai
Andrew E. Rubin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Essential Products Inc
Original Assignee
Essential Products Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Essential Products Inc filed Critical Essential Products Inc
Priority to US15/599,398 priority Critical patent/US20180210738A1/en
Assigned to Essential Products, Inc. reassignment Essential Products, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DESAI, DWIPAL, ROMAN, MANUEL, RUBIN, ANDREW E., SEGAL, MARA CLAIR
Publication of US20180210738A1 publication Critical patent/US20180210738A1/en

Classifications

    • G06F9/4443
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/20Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2111Location-sensitive, e.g. geographical location, GPS
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Definitions

  • This disclosure relates to user interfaces, and in particular a user interface that is adaptive based on the context of the environment.
  • the Internet of Things allows for the internetworking of devices to exchange data among themselves to enable sophisticated functionality.
  • devices configured for home automation can exchange data to allow for the control and automation of lighting, air conditioning systems, security, etc.
  • this can also include home assistant devices providing an intelligent personal assistant to respond to speech.
  • a home assistant device can include a microphone array to receive voice input and provide the corresponding voice data to a server for analysis to provide an answer to a question asked by a user.
  • the server can provide that answer to the home assistant device, which can provide the answer as voice output using a speaker.
  • the user can provide a voice command to the home assistant device to control another device in the home, for example, a command to turn a light bulb on or off.
  • the user and the home assistant device can interact with each other using voice, and the interaction can be supplemented by a server outside of the home providing the answers.
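The voice round trip described above can be sketched as a small dispatcher. This is an illustrative sketch, not the patent's implementation: the function name, the command grammar, and the `devices` dictionary are assumptions. Simple device commands (e.g., "turn on light bulb") are handled locally, while open questions fall through to the server path that would return an answer for voice output.

```python
def handle_speech(text: str, devices: dict) -> str:
    """Route recognized speech: "turn on/off <device>" toggles local state;
    anything else is marked for the out-of-home server to answer."""
    words = text.lower().split()
    if len(words) >= 3 and words[0] == "turn" and words[1] in ("on", "off"):
        name = " ".join(words[2:])
        if name in devices:
            devices[name] = (words[1] == "on")
            return f"{name} turned {words[1]}"
    return "forward to server"  # placeholder for the cloud question-answering path
```

For example, with `devices = {"light bulb": False}`, the command "Turn on light bulb" flips the stored state and returns a confirmation, while "what's the weather" is routed to the server.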
  • homes can have different users interacting with the home assistant device within different contextual environments (e.g., from different locations and at different times) within the home.
  • a home assistant device including: a display screen; a microphone; one or more processors; and memory storing instructions, wherein the one or more processors are configured to execute the instructions such that the one or more processors and memory are configured to: determine that first speech has been spoken in a vicinity of the home assistant device using the microphone; determine a first context of an environment of the home assistant device, the first context of the environment including one or more of a location of a user providing the first speech, a time of the first speech, a user identity corresponding to the user providing the first speech, a skill level with interacting with the home assistant device of the user providing the first speech, a schedule of the user providing the first speech, or characteristics of the first speech; display a first graphical user interface (GUI) for the assistant device on the display screen to provide a response regarding the first speech, the first GUI based on the first context of the environment and content of the first speech; determine that second speech has been spoken in the vicinity of the home assistant device using the microphone, the first speech and
  • Some of the subject matter described herein also includes a method for providing a contextual user interface, including: determining, by a processor, that a first speech has been spoken; determining, by the processor, a first context of an environment corresponding to the first speech; providing, by the processor, a first user interface based on the first context of the environment and content of the first speech; determining, by the processor, that a second speech has been spoken, the second speech spoken at a different time than the first speech; determining, by the processor, a second context of the environment corresponding to the second speech, the first context and the second context being different; and providing, by the processor, a second user interface based on the second context of the environment and content of the second speech, the content of the first speech and the second speech being similar, the first user interface and the second user interface being different.
  • the first context is based on one or both of audio or visual determinations of a surrounding environment of an assistant device to which the first speech and the second speech are directed.
  • the first context includes a first interaction corresponding to the first speech at a first distance
  • the second context includes a second interaction corresponding to the second speech at a second distance, the first distance and the second distance being different.
  • the first context includes a first user providing the first speech
  • the second context includes a second user providing the second speech, the first user and the second user being different.
  • the first user is associated with a first skill level with interacting with an assistant device
  • the second user is associated with a second skill level with interacting with the assistant device, the first skill level and the second skill level being different, the first context based on the first skill level, and the second context based on the second skill level.
  • the first context and the second context include one or more of a user interacting with an assistant device, people in the environment around the assistant device, a time of an interaction with the assistant device, a location of a user interacting with the assistant device, or a skill level of a user interacting with the assistant device.
  • the method includes: determining, by the processor, a change in the environment; and generating, by the processor, a third user interface based on one or more of the first context or the second context in response to the change in the environment to maintain privacy expectations of one or more users present in the environment.
  • Some of the subject matter described herein also includes an electronic device, including: one or more processors; and memory storing instructions, wherein the processor is configured to execute the instructions such that the processor and memory are configured to: determine that a first speech has been spoken; determine a first context of an environment corresponding to the first speech; generate a first user interface based on the first context of the environment and content of the first speech; determine that a second speech has been spoken, the second speech spoken at a different time than the first speech; determine a second context of the environment corresponding to the second speech, the first context and the second context being different; and generate a second user interface based on the second context of the environment and content of the second speech, the content of the first speech and the second speech being similar, the first user interface and the second user interface being different.
  • the first context is based on one or both of audio or visual determinations of a surrounding environment of an assistant device to which the first speech and the second speech are directed.
  • the first context includes a first interaction corresponding to the first speech at a first distance
  • the second context includes a second interaction corresponding to the second speech at a second distance, the first distance and the second distance being different.
  • the first context includes a first user providing the first speech
  • the second context includes a second user providing the second speech, the first user and the second user being different.
  • the first user is associated with a first skill level with interacting with an assistant device
  • the second user is associated with a second skill level with interacting with the assistant device, the first skill level and the second skill level being different, the first context based on the first skill level, and the second context based on the second skill level.
  • the first context and the second context include one or more of a user interacting with an assistant device, people in the environment around the assistant device, a time of an interaction with the assistant device, a location of a user interacting with the assistant device, or a skill level of a user interacting with the assistant device.
  • the processor is configured to execute the instructions such that the processor and memory are configured to: determine a change in the environment; and generate a third user interface based on one or more of the first context or the second context in response to the change in the environment to maintain privacy expectations of one or more users present in the environment.
  • Some of the subject matter described herein also includes a computer program product, comprising one or more non-transitory computer-readable media having computer program instructions stored therein, the computer program instructions being configured such that, when executed by one or more computing devices, the computer program instructions cause the one or more computing devices to: determine that a first speech has been spoken; determine a first context of an environment corresponding to the first speech; generate a first user interface based on the first context of the environment and content of the first speech; determine that a second speech has been spoken, the second speech spoken at a different time than the first speech; determine a second context of the environment corresponding to the second speech, the first context and the second context being different; and generate a second user interface based on the second context of the environment and content of the second speech, the content of the first speech and the second speech being similar, the first user interface and the second user interface being different.
  • the first context is based on one or both of audio or visual determinations of a surrounding environment of an assistant device to which the first speech and the second speech are directed.
  • the first context includes a first interaction corresponding to the first speech at a first distance
  • the second context includes a second interaction corresponding to the second speech at a second distance, the first distance and the second distance being different.
  • the first context includes a first user providing the first speech
  • the second context includes a second user providing the second speech, the first user and the second user being different.
  • the first user is associated with a first skill level with interacting with an assistant device
  • the second user is associated with a second skill level with interacting with the assistant device, the first skill level and the second skill level being different, the first context based on the first skill level, and the second context based on the second skill level.
  • the first context and the second context include one or more of a user interacting with an assistant device, people in the environment around the assistant device, a time of an interaction with the assistant device, a location of a user interacting with the assistant device, or a skill level of a user interacting with the assistant device.
  • the computer program instructions cause the one or more computing devices to: determine a change in the environment; and generate a third user interface based on one or more of the first context or the second context in response to the change in the environment to maintain privacy expectations of one or more users present in the environment.
  • Some of the subject matter described herein also includes an electronic device including: a display screen; one or more processors; and memory storing instructions, wherein the processor is configured to execute the instructions such that the processor and memory are configured to: determine a first occurrence of a first activity within an environment of the electronic device; determine a first context of the environment of the electronic device; and display a first graphical user interface (GUI) on the display screen based on the first context of the environment and the first activity.
  • the first activity is one or more of speech spoken within the environment, or noise generated by an object within the environment.
  • the processor is configured to execute the instructions such that the processor and memory are configured to: determine a second occurrence of a second activity within the environment of the electronic device; determine a second context of the environment of the electronic device, the first context and the second context being different; and display a second graphical user interface (GUI) on the display screen based on the second context of the environment and the second activity, first content of the first GUI being different than second content of the second GUI.
  • the first activity and the second activity are similar, and the first content of the first GUI is different than the second content of the second GUI based on differences between the first context of the environment and the second context of the environment.
  • the first content includes a first number of graphical representations of information or access to functionality provided by the electronic device
  • the second content includes a second number of graphical representation of information or access to functionality provided by the electronic device, the first number and the second number being different.
  • the first content includes a first graphical representation of an item providing information or access to functionality provided by the electronic device at a first size
  • the second content includes a second graphical representation of the item at a second size, the first size and the second size being different.
  • the first activity is speech spoken within the environment of the electronic device, the first context of the environment including one or more of a location of a user providing the speech, a time of the speech, a user identity corresponding to the user providing the speech, a skill level with interacting with the home assistant device of the user providing the speech, a schedule of the user providing the speech, or characteristics of the speech.
  • the first GUI includes content responding to the first activity based on the first context.
  • FIG. 1 illustrates an example of an assistant device providing a user interface based on the context of the environment.
  • FIG. 2 illustrates an example of a block diagram providing a user interface based on the context of the environment.
  • FIG. 3 illustrates an example of a block diagram determining the context of the environment of an assistant device.
  • FIG. 4 illustrates another example of an assistant device providing a user interface based on the context of the environment.
  • FIG. 5 illustrates an example of an assistant device.
  • FIG. 6 illustrates an example of a block diagram for adjusting a user interface to maintain privacy expectations.
  • the user interface of the home assistant device (e.g., a graphical user interface (GUI) generated for display on a display screen of the home assistant device) can be different based on a combination of contextual factors of the surrounding environment including the person interacting with the home assistant device, the people in the surrounding environment, the time, the location of the home assistant device within the home, the location of the person interacting with the home assistant device, the presence of strangers, interests of the users, etc.
  • different content (e.g., information, graphical icons providing access to functionality of the home assistant device, etc.) can be provided based on the context of the environment.
  • the same content can be displayed differently. For example, different languages, visual effects, etc. can be provided based on the context of the environment. In another example, two different users (or even the same user at different times) might ask the same question to the home assistant device. Based on differences within the context of the environment when the question is asked, the user interface can provide the same answers to the question differently.
  • FIG. 1 illustrates an example of an assistant device providing a user interface based on the context of the environment.
  • home assistant device 110 can include a microphone (e.g., a microphone array) to receive voice input from users and a speaker to provide audio output in the form of a voice (or other types of audio) to respond to the user.
  • home assistant device 110 can include a display screen to provide visual feedback to users by generating a graphical user interface (GUI) providing content for display.
  • a user can ask home assistant device 110 a question and a response to that question can be provided on the display screen.
  • Additional visual components such as light emitting diodes (LEDs), can also be included.
  • the user interface can include audio, voice, display screens, lighting, and other audio or visual components.
  • camera 115 can also be included for home assistant device 110 to receive visual input of its surrounding environment. Camera 115 can be physically integrated (e.g., physically coupled with) with home assistant device 110 or camera 115 can be a separate component of a home's wireless network that can provide video data to home assistant device 110 .
  • home assistant device 110 can be in a particular location of the home, for example, the kitchen. Different users might interact with home assistant device from different locations within the home (e.g., the kitchen or the living room) and at different times. Additionally, the different users might be interested in different features, functionalities, or information provided by home assistant device 110 . These different contextual factors of the environment of home assistant device 110 can result in the user interface of home assistant device 110 to be changed. Because the user interface can provide content such as features, functionalities, information, etc., this can result in different content being displayed on the display screen. That is, different combinations of contextual factors of the environment can result in a different user interface of home assistant device 110 , resulting in an adaptive user interface based on context of the environment.
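Such a combination of contextual factors driving the user interface can be sketched as a small mapping. The factor names, themes, and layouts below are illustrative assumptions, not the patent's implementation; the point is that the same speech content yields different presentations in different contexts.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    user: str          # identity of the speaker
    time_of_day: str   # e.g., "morning" or "evening"
    room: str          # location of the user, e.g., "kitchen"

def ui_for(context: Context, content: str) -> dict:
    """Produce a GUI description from the context plus the speech content."""
    return {
        "content": content,
        "theme": "dim" if context.time_of_day == "evening" else "bright",
        "layout": "compact" if context.room == "kitchen" else "spread",
    }
```

With two different contexts and the same requested content (e.g., "restaurants"), `ui_for` returns two different GUI descriptions sharing the same content field.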
  • user 130 a can be in the kitchen (i.e., in the same room or within close proximity with home assistant device 110) at 11:39 PM.
  • Home assistant device 110 can recognize user 130 a , for example, using video input from camera 115 to visually verify user 130 a .
  • home assistant device 110 can recognize user 130 a through speech recognition as user 130 a speaks either to home assistant device 110 , to other people, or even himself.
  • User 130 a can also have had previous interactions with home assistant device 110 , and therefore, home assistant device 110 can remember the likes or preferences, expectations, schedule, etc. of user 130 a .
  • user interface 120 a can be generated for user 130 a to interact with home assistant device 110 based on the current context of the environment indicating the user, time, and location that the user is speaking from.
  • user 130 b can be in the living room at 8:30 AM of the same home as home assistant device 110 . Because the user, time, and location of the user are different, home assistant device 110 can generate a different user interface 120 b providing a different GUI having different content as depicted in FIG. 1 . As a result, user interface 120 b can be different from user interface 120 a because they are provided, or generated, in response to different contextual environments when users 130 a and 130 b speak. This can occur even if the content of the speech provided by users 130 a and 130 b is similar, or even the same.
  • both users 130 a and 130 b ask the same or similar question (e.g., their speech includes similar or same content such as asking for a list of new restaurants that have opened nearby)
  • the user interface (to respond to the question) that is provided by home assistant device 110 can be different because of the different context of the environments when the speech was spoken.
  • Because user interface 120 a was generated in the evening, it can have different colors, brightness, or other visual characteristics than user interface 120 b. This might be done because the user interface should not be too disruptive in different lighting situations.
  • A light sensor (e.g., a photodiode) can determine the lighting situation in the environment, and home assistant device 110 can then adjust the brightness of the display screen based on the determined lighting situation.
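A minimal sketch of that adjustment, assuming a photodiode reading in lux; the thresholds and brightness percentages are illustrative, not from the patent.

```python
def brightness_for_ambient(lux: float) -> int:
    """Map an ambient light reading to a display brightness percentage so
    the screen is not disruptive in a dark room."""
    if lux < 10:     # dark room, e.g., late evening
        return 20
    if lux < 200:    # typical indoor lighting
        return 60
    return 100       # bright daylight
```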
  • If the users are at different distances from home assistant device 110, the user interfaces 120 a and 120 b can be different to take that into account.
  • the size of some of the content (e.g., items A-G which can be buttons, icons, text, etc.) of a GUI provided as user interface 120 a can be relatively small.
  • some of the content of user interface 120 b can be larger so that it can be more easily seen from a distance. For example, in FIG. 1, icons A and F have different sizes among the different user interfaces 120 a and 120 b. That is, content such as the items of the user interfaces that provide access to the same functionality or provide an indication to the same type of information can be different sizes because the contextual environments are different. For example, if users 130 a and 130 b request a listing of new, nearby restaurants, icons A-G might represent a list of some of the identified restaurants.
  • the playback of audio can be at a volume based on the distance that a user is from home assistant device 110 . For example, a user that is farther away can result in the playback of audio that is at a higher volume than if a user is closer to home assistant device 110 .
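That distance-to-volume relationship could be sketched as follows; the base volume, per-meter step, and ceiling are illustrative assumptions.

```python
def volume_for_distance(meters: float, base: int = 40, per_meter: int = 10,
                        ceiling: int = 100) -> int:
    """Playback volume grows with the user's distance from the device, so a
    farther user hears louder audio, capped at a ceiling."""
    return min(ceiling, base + int(meters) * per_meter)
```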
  • User interfaces 120 a and 120 b can also be different in other ways. For example, the location of content, the amount of content, etc. as depicted in FIG. 1 can also be different due to the different contextual environments.
  • FIG. 2 illustrates an example of a block diagram providing a user interface based on the context of the environment.
  • speech can be determined to have been spoken.
  • a microphone of home assistant device 110 can pick up speech spoken within the environment. That speech can be converted into voice data and analyzed by a processor of home assistant device 110 to determine that speech has been received.
  • the context of the surrounding environment or vicinity around home assistant device 110 can be determined.
  • home assistant device 110 can determine any of the aforementioned details regarding the environment in the physical space around home assistant device 110 including time, user, prior interactions with the user, locations of the user and home assistant device 110 , etc. Any of the details discussed below can also be determined.
  • the user interface can be provided or generated based on the determined context and content of the speech. For example, this can include generating a GUI with content related to the content of the speech and provided at various sizes, colors, etc. on a display screen of home assistant device 110 based on the context.
  • the user interface can also include playback of audio (e.g., sounds), turning on various lighting effects (e.g., LEDs), etc. For example, different GUIs with different audio effects can be provided.
  • home assistant device 110 can pick up more speech at a different time. However, if the context of the environment is different, then a different user interface than that generated at block 210 can be generated. Thus, even if the content of the speech at the two different times was the same, the user interfaces generated can be different if the context of the environment was different.
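The flow above (detect speech, determine context, generate the user interface) can be sketched as a small pipeline in which the same speech yields different interfaces under different contexts. Here `sense_context` and `render` are injected stand-ins for the device's sensing and display code, not names from the patent.

```python
def contextual_ui(speech: str, sense_context, render) -> dict:
    """FIG. 2 as a function: speech plus the sensed context of the
    environment determine the generated user interface."""
    context = sense_context()
    return render(speech, context)

# The same speech content under two different contexts:
render = lambda speech, ctx: {"speech": speech, "style": ctx["time"]}
ui_evening = contextual_ui("new restaurants nearby", lambda: {"time": "evening"}, render)
ui_morning = contextual_ui("new restaurants nearby", lambda: {"time": "morning"}, render)
```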
  • FIG. 3 illustrates an example of a block diagram determining the context of the environment of an assistant device.
  • the location of the speech can be determined at block 305
  • the time of the speech can be determined at block 310
  • the user providing speech can be determined at block 315 to determine the context of the environment.
  • home assistant device 110 can determine the skill level of a user as they interact more with the user interface. If the user uses more functionality, more complicated functionality, requests a significant amount of detail regarding functionality, etc., then the user can be identified by home assistant device 110 as a more sophisticated user. By contrast, if another user tends to ask the same repetitive tasks or questions of home assistant device 110, then the user can be identified as a less sophisticated user. If the user tends to use less complicated functionality, less functionality, or does not request significant detail, then the user can also be identified as a less sophisticated user.
  • In FIG. 1, user 130 a can be a more sophisticated user, indicating that the user has a relatively high skill level in using home assistant device 110, and therefore, more functionality (or content) can be provided on user interface 120 a (i.e., items A-G are provided).
  • user 130 b can be a less sophisticated user indicating that the user has a relatively lower skill level (than user 130 a ), and therefore, less content can be provided on user interface 120 b (i.e., fewer items A, C, D, and F are provided).
  • the same amount of content might be provided in the user interfaces, but different content corresponding to different functionalities or features might be displayed based on the skill level of the user.
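One way to realize skill-based content selection, assuming a simple two-level skill label; the labels and the particular reduced subset are illustrative, not the exact items shown in FIG. 1.

```python
def items_for_skill(all_items: list, skill: str) -> list:
    """A sophisticated user sees every item; a less sophisticated user
    sees a thinned-out subset of the same item list."""
    if skill == "advanced":
        return list(all_items)
    return list(all_items)[::2]  # novice: a reduced selection
```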
  • different content can be provided in a user interface of home assistant device 110 .
  • the user interface can include other visual components other than displaying content as part of a GUI on a display screen.
  • this can include lighting, for example, LEDs or other types of lights which can be activated by being turned on, glow, flicker, display a particular color, etc. to provide an indication to a user of a situation.
  • home assistant device 110 can determine a user's schedule at block 325 and provide an indication as to when the user should be leaving the home so that they can maintain that schedule without any tardiness.
  • this can result in a ring around the display screen that can be different colors (e.g., implemented with LEDs or other types of lighting), however in other implementations the ring can be part of the display screen itself.
  • the ring can be a color corresponding to the traffic or commute status for the user to go to their next expected location, such as the workplace in the morning or a coffee meeting scheduled on their calendar. If the ring is set to a green color, then this can indicate to the user that the traffic is relatively light. By contrast, a red color can indicate that the traffic is relatively heavy.
  • This type of user interface can provide a user with information while they are far away from home assistant device 110 because the colors can be easily seen from a distance.
  • the ring can also indicate whether the user needs to leave soon or immediately if they want to make the next appointment on their schedule. For example, the intensity or brightness of the color can be increased, the ring can be blinking, etc.
  • the user interface can also display on the display screen a route to the location of the next event on their schedule, provide a time estimate, etc. As a result, if the user decides that they want more detail and walks closer to home assistant device 110 , information can be readily displayed and available.
  • home assistant device 110 can determine that the user is walking closer after the ring has been activated and then process and display the additional information on the display screen so that it is available by the time the user is closer.
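The ring behavior described above can be sketched as a simple mapping. The `ring_state` function, the yellow fallback color, and the five-minute blink threshold are illustrative assumptions rather than details of the disclosed device.

```python
def ring_state(traffic_level, minutes_until_departure):
    """Map commute status to an LED-ring color and blink behavior.

    Green indicates relatively light traffic and red relatively heavy
    traffic, per the convention described above; the five-minute blink
    threshold for "leave soon or immediately" is an assumption.
    """
    color = {"light": "green", "heavy": "red"}.get(traffic_level, "yellow")
    blinking = minutes_until_departure <= 5  # urge the user to leave now
    return {"color": color, "blinking": blinking}


print(ring_state("heavy", 3))   # {'color': 'red', 'blinking': True}
print(ring_state("light", 30))  # {'color': 'green', 'blinking': False}
```

Increasing brightness or intensity could be layered onto the same state, e.g., by scaling an LED duty cycle as the departure time approaches.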
  • the user interface can also include audio sounds for playback.
  • user interface 120 a in FIG. 1 might play back one type of audio sound when user 130 a interacts with it, for example, selecting one of the items A-G, requesting user interface 120 a to change (e.g., provide new content), etc.
  • user interface 120 b might play back different sounds for the same interactions by user 130 b because of the different context of the environment.
  • Characteristics regarding the speech received by home assistant device 110 can also be determined at block 330 .
  • home assistant device 110 can determine the volume, speed, accent, language, tone, etc. of speech and use that as a contextual factor in providing a user interface.
  • if the user is speaking quickly, content of the user interface may be updated faster than if the user were speaking slowly, for example, by updating the GUI of the user interface sooner.
  • if the user's speech is determined to include stress or frustration, the user interface might provide content differently than if the user's speech is determined to be relatively free of stress or frustration. As an example, if the user is stressed or frustrated, then the amount of content provided on the user interface can be reduced in comparison with the user not being stressed or frustrated.
  • the user interface can include the playback of music. For example, calming music can be played back using the speaker of home assistant device 110 .
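One way to sketch the speech-characteristic adaptations above; the 160-words-per-minute threshold, the update intervals, and the halving of content under stress are illustrative assumptions, not values from the disclosure.

```python
def adapt_for_speech(speech_rate_wpm, stressed, base_items):
    """Derive GUI behavior from characteristics of the received speech.

    A fast speaker gets quicker GUI updates; a stressed or frustrated
    speaker gets reduced content and calming-music playback. The rate
    threshold and the halving of content are assumptions.
    """
    update_interval_s = 1.0 if speech_rate_wpm > 160 else 3.0
    items = base_items[: len(base_items) // 2] if stressed else list(base_items)
    return {
        "update_interval_s": update_interval_s,
        "items": items,
        "play_calming_music": stressed,
    }


print(adapt_for_speech(180, True, ["A", "B", "C", "D"]))
# {'update_interval_s': 1.0, 'items': ['A', 'B'], 'play_calming_music': True}
```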
  • the lighting of home assistant device 110 can be different based on what is provided on the user interface. For example, different types of content can result in different brightness, colors, etc.
  • FIG. 4 illustrates another example of an assistant device providing a user interface based on the context of the environment.
  • users 130 a , 130 b , and 130 c are within the home environment of home assistant device 110 . These different users can be identified and the user interface 120 c in FIG. 4 can be generated to take into account privacy concerns of the various users.
  • user 130 a might want some content to be provided on a user interface if he is alone, but might not want that content to be displayed if others are within the home.
  • user 130 b also might not want some content to be provided.
  • user 130 a might find it acceptable to have the content provided on the user interface even if the presence of user 130 b is detected because user 130 b is a member of the same household.
  • user 130 a might want that content to not be displayed if strangers or guests are in the home.
  • User 130 c can be a stranger or newcomer to the home environment who has never interacted with home assistant device 110 and, therefore, is unrecognized by home assistant device 110.
  • Home assistant device 110 can recognize the different users or persons within the home and generate user interface 120 c based on the users 130 a - c .
  • home assistant device 110 can take some details of user interfaces 120 a and 120 b (e.g., the user interfaces normally provided for users 130 a and 130 b, respectively) and generate user interface 120 c in FIG. 4 based on those other user interfaces. That is, user interface 120 c can be generated based on how user interfaces would be generated for users 130 a and 130 b. In FIG. 4, this results in some content of user interface 120 c having a relatively large size (e.g., as in user interface 120 b), but less content than either user interface 120 a or 120 b.
  • content that would mutually exist in user interfaces 120 a and 120 b can be provided within user interface 120 c , but content that is only on one of user interfaces 120 a and 120 b might not be provided because it might only appeal to a single user or those users might have different privacy expectations.
  • item B as depicted in user interface 120 a in FIG. 1 might not appear because it is not provided within user interface 120 b in FIG. 1 .
  • the user interface can also be adapted to take into account an unrecognized user. For example, upon detection of an unrecognized user, some content might be removed from a user interface. When the unrecognized user leaves, this can be detected, and therefore, home assistant device 110 can then provide the removed content back with the user interface. As a result, the user's privacy expectations can be maintained when guests are nearby.
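The shared-interface behavior above can be sketched as a set intersection over the interfaces of the recognized users present. The `private:` item prefix is an assumed annotation for content subject to elevated privacy expectations; it is not part of the disclosure.

```python
def compose_shared_gui(per_user_items, unrecognized_present):
    """Build a shared GUI for everyone currently in the environment.

    Only content common to every recognized user's own interface is
    kept (so item B, present only in user interface 120a, is dropped),
    and items marked private are withheld while an unrecognized person
    is detected.
    """
    interfaces = list(per_user_items.values())
    shared = set(interfaces[0]).intersection(*interfaces[1:]) if interfaces else set()
    if unrecognized_present:
        shared = {item for item in shared if not item.startswith("private:")}
    return sorted(shared)


ui_a = ["A", "B", "C", "D", "E", "F", "G"]   # user 130a's interface
ui_b = ["A", "C", "D", "F"]                  # user 130b's interface
print(compose_shared_gui({"130a": ui_a, "130b": ui_b}, True))  # ['A', 'C', 'D', 'F']
```

When the unrecognized user leaves, calling the same function with `unrecognized_present=False` restores the withheld items.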
  • Other types of changes in context of the environment other than detection of strangers or guests can include determining differences in time. For example, a user might find it acceptable to display some content on the GUI late at night or early in the morning, but might not want that content displayed during the daytime because the likelihood of others seeing that content might be higher.
  • Another example can include activities of persons within the environment. For example, if several people in the environment are discussing a particular topic, a social gathering is taking place, etc., then perhaps a user's privacy expectations can be elevated and, therefore, some of the content that would otherwise be displayed can be removed.
  • a user's privacy expectations can be set by that user or learned by home assistant device 110 over time, or a combination of both.
  • the user can indicate that certain content should not be displayed when unrecognized persons are in the environment.
  • the user might remove content from the GUI and home assistant device 110 can identify the context in the environment when the user removed the content to determine the user's privacy expectations.
  • FIG. 6 illustrates an example of a block diagram for adjusting a user interface to maintain privacy expectations.
  • the context of the environment can be determined. For example, the presence of persons including recognized users and/or strangers, the time, activities being performed in the environment, etc. can be determined.
  • privacy expectations for a user based on the context can be determined. For example, if a user is within the environment, a GUI providing various content can be provided. However, if strangers or guests are detected within the environment, the user might not want certain content displayed on the GUI due to an increase in privacy concerns resulting in higher privacy expectations for that content.
  • the GUI can be adjusted or modified based on the privacy expectations. For example, the content can be removed due to the increase in privacy expectations while the stranger or guest is present within the environment.
  • the GUI can be modified again to include the content that was previously removed.
  • Thus, as the context of the environment changes, the GUI can be adapted.
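The remove-then-restore flow of FIG. 6 can be sketched as follows; the per-item sensitivity marking is an assumed annotation (e.g., set by the user or learned over time, as described above), and the class structure is illustrative rather than the disclosed implementation.

```python
class PrivacyAwareGui:
    """Remove sensitive content when a guest arrives; restore it when they leave.

    Mirrors the FIG. 6 blocks: determine the context (guest present or
    not), apply the resulting privacy expectations, and modify the GUI
    again once the context changes back.
    """

    def __init__(self, items, sensitive):
        self.items = list(items)          # everything the user would normally see
        self.sensitive = set(sensitive)   # items with elevated privacy expectations
        self.guest_present = False

    def on_context_change(self, guest_present):
        # Block 610-equivalent: record the newly determined context.
        self.guest_present = guest_present

    def visible_items(self):
        # Blocks 620/630-equivalent: adjust the GUI per privacy expectations.
        if self.guest_present:
            return [i for i in self.items if i not in self.sensitive]
        return list(self.items)


gui = PrivacyAwareGui(["calendar", "photos", "messages"], sensitive=["messages"])
gui.on_context_change(guest_present=True)
print(gui.visible_items())   # ['calendar', 'photos']
gui.on_context_change(guest_present=False)
print(gui.visible_items())   # ['calendar', 'photos', 'messages']
```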
  • audio adaptations can also be performed based on the context situations described above.
  • the type of voice, accent, volume, etc. can also be adjusted for different user interfaces using the techniques described herein.
  • noise from objects, such as a television or radio, a doorbell ringing, a door opening, glass shattering, etc., can also be detected as occurrences of activity other than speech.
  • the content of the user interface can also be changed based on whether or not it is determined that a user is looking at home assistant device 110 or speaking to home assistant device 110 .
  • the display screen of home assistant device 110 might be turned off, but can turn on when it is determined that a user is looking at it.
  • FIG. 5 illustrates an example of an assistant device.
  • home assistant device 110 can be an electronic device with one or more processors 605 (e.g., circuits) and memory 610 for storing instructions that can be executed by processors 605 to implement contextual user interface 630 providing the techniques described herein.
  • Home assistant device 110 can also include microphone 620 (e.g., one or more microphones that can implement a microphone array) to convert sounds into electrical signals, and therefore, speech into data that can be processed using processors 605 and stored in memory 610 .
  • Speaker 615 can be used to provide audio output.
  • display 625 can display a GUI implemented by processors 605 and memory 610 to provide visual feedback.
  • Memory 610 can be a non-transitory computer-readable storage media.
  • Home assistant device 110 can also include various other hardware, such as cameras, antennas, etc. to implement the techniques disclosed herein.
  • The techniques described herein can be implemented by programmable circuitry (e.g., one or more microprocessors) programmed with software and/or firmware, entirely by special-purpose hardwired circuitry, or by a combination of such forms.
  • Special-purpose hardwired circuitry may be in the form of, for example, one or more application specific integrated circuits (ASICs), complex programmable logic devices (CPLDs), field programmable gate arrays (FPGAs), structured ASICs, etc.

Abstract

Privacy control for a contextual user interface based on environment is described. An assistant device can determine the context of its environment and, based on the context, determine a privacy expectation for a user. The assistant device can then generate a graphical user interface (GUI) based on the context and privacy expectation.

Description

    CLAIM FOR PRIORITY
  • This application claims priority to U.S. Provisional Patent Application No. 62/448,912 (Attorney Docket No. 119306-8040.US00), entitled “Contextual User Interface Based on Environment,” by Segal et al., and filed on Jan. 20, 2017. This application also claims priority to U.S. Provisional Patent Application No. 62/486,359 (Attorney Docket No. 119306-8061.US00), entitled “Contextual User Interface Based on Environment,” by Roman et al., and filed on Apr. 17, 2017. This application also claims priority to U.S. Provisional Patent Application No. 62/486,365 (Attorney Docket No. 119306-8062.US00), entitled “Contextual User Interface Based on Changes in Environment,” by Roman et al., and filed on Apr. 17, 2017. The contents of the above-identified applications are incorporated herein by reference in their entirety.
  • TECHNICAL FIELD
  • This disclosure relates to user interfaces, and in particular a user interface that is adaptive based on the context of the environment.
  • BACKGROUND
  • The Internet of Things (IoT) allows for the internetworking of devices to exchange data among themselves to enable sophisticated functionality. For example, devices configured for home automation can exchange data to allow for the control and automation of lighting, air conditioning systems, security, etc. In the smart home environment, this can also include home assistant devices providing an intelligent personal assistant to respond to speech. For example, a home assistant device can include a microphone array to receive voice input and provide the corresponding voice data to a server for analysis to provide an answer to a question asked by a user. The server can provide that answer to the home assistant device, which can provide the answer as voice output using a speaker. As another example, the user can provide a voice command to the home assistant device to control another device in the home, for example, a command to turn a light bulb on or off. As such, the user and the home assistant device can interact with each other using voice, and the interaction can be supplemented by a server outside of the home providing the answers. However, homes can have different users interacting with the home assistant device within different contextual environments (e.g., from different locations and at different times) within the home.
  • SUMMARY
  • Some of the subject matter described herein includes a home assistant device, including: a display screen; a microphone; one or more processors; and memory storing instructions, wherein the one or more processors are configured to execute the instructions such that the one or more processors and memory are configured to: determine that first speech has been spoken in a vicinity of the home assistant device using the microphone; determine a first context of an environment of the home assistant device, the first context of the environment including one or more of a location of a user providing the first speech, a time of the first speech, a user identity corresponding to the user providing the first speech, a skill level with interacting with the home assistant device of the user providing the first speech, a schedule of the user providing the first speech, or characteristics of the first speech; display a first graphical user interface (GUI) for the assistant device on the display screen to provide a response regarding the first speech, the first GUI based on the first context of the environment and content of the first speech; determine that second speech has been spoken in the vicinity of the home assistant device using the microphone, the first speech and the second speech including the same content; determine a second context of an environment of the home assistant device, the second context of the environment including one or more of a location of a user providing the second speech, a time of the second speech, a user identity corresponding to the user providing the second speech, a skill level with interacting with the home assistant device of the user providing the second speech, a schedule of the user providing the second speech, or characteristics of the second speech, the first context and the second context being different; and display a second GUI for the assistant device on the display screen to provide a response regarding the second speech, the second GUI based on the second context of the environment and content of the second speech, the first GUI and the second GUI providing different content.
  • Some of the subject matter described herein also includes a method for providing a contextual user interface, including: determining, by a processor, that a first speech has been spoken; determining, by the processor, a first context of an environment corresponding to the first speech; providing, by the processor, a first user interface based on the first context of the environment and content of the first speech; determining, by the processor, that a second speech has been spoken, the second speech spoken at a different time than the first speech; determining, by the processor, a second context of the environment corresponding to the second speech, the first context and the second context being different; and providing, by the processor, a second user interface based on the second context of the environment and content of the second speech, the content of the first speech and the second speech being similar, the first user interface and the second user interface being different.
  • In some implementations, the first context is based on one or both of audio or visual determinations of a surrounding environment of an assistant device to which the first speech and the second speech are directed.
  • In some implementations, the first context includes a first interaction corresponding to the first speech at a first distance, wherein the second context includes a second interaction corresponding to the second speech at a second distance, the first distance and the second distance being different.
  • In some implementations, the first context includes a first user providing the first speech, the second context includes a second user providing the second speech, the first user and the second user being different.
  • In some implementations, the first user is associated with a first skill level with interacting with an assistant device, the second user is associated with a second skill level with interacting with the assistant device, the first skill level and the second skill level being different, the first context based on the first skill level, and the second context based on the second skill level.
  • In some implementations, the first context and the second context include one or more of a user interacting with an assistant device, people in the environment around the assistant device, a time of an interaction with the assistant device, a location of a user interacting with the assistant device, or a skill level of a user interacting with the assistant device.
  • In some implementations, the method includes: determining, by the processor, a change in the environment; and generating, by the processor, a third user interface based on one or more of the first context or the second context in response to the change in the environment to maintain privacy expectations of one or more users present in the environment.
  • Some of the subject matter described herein also includes an electronic device, including: one or more processors; and memory storing instructions, wherein the processor is configured to execute the instructions such that the processor and memory are configured to: determine that a first speech has been spoken; determine a first context of an environment corresponding to the first speech; generate a first user interface based on the first context of the environment and content of the first speech; determine that a second speech has been spoken, the second speech spoken at a different time than the first speech; determine a second context of the environment corresponding to the second speech, the first context and the second context being different; and generate a second user interface based on the second context of the environment and content of the second speech, the content of the first speech and the second speech being similar, the first user interface and the second user interface being different.
  • In some implementations, the first context is based on one or both of audio or visual determinations of a surrounding environment of an assistant device to which the first speech and the second speech are directed.
  • In some implementations, the first context includes a first interaction corresponding to the first speech at a first distance, wherein the second context includes a second interaction corresponding to the second speech at a second distance, the first distance and the second distance being different.
  • In some implementations, the first context includes a first user providing the first speech, the second context includes a second user providing the second speech, the first user and the second user being different.
  • In some implementations, the first user is associated with a first skill level with interacting with an assistant device, the second user is associated with a second skill level with interacting with the assistant device, the first skill level and the second skill level being different, the first context based on the first skill level, and the second context based on the second skill level.
  • In some implementations, the first context and the second context include one or more of a user interacting with an assistant device, people in the environment around the assistant device, a time of an interaction with the assistant device, a location of a user interacting with the assistant device, or a skill level of a user interacting with the assistant device.
  • In some implementations, the processor is configured to execute the instructions such that the processor and memory are configured to: determine a change in the environment; and generate a third user interface based on one or more of the first context or the second context in response to the change in the environment to maintain privacy expectations of one or more users present in the environment.
  • Some of the subject matter described herein also includes a computer program product, comprising one or more non-transitory computer-readable media having computer program instructions stored therein, the computer program instructions being configured such that, when executed by one or more computing devices, the computer program instructions cause the one or more computing devices to: determine that a first speech has been spoken; determine a first context of an environment corresponding to the first speech; generate a first user interface based on the first context of the environment and content of the first speech; determine that a second speech has been spoken, the second speech spoken at a different time than the first speech; determine a second context of the environment corresponding to the second speech, the first context and the second context being different; and generate a second user interface based on the second context of the environment and content of the second speech, the content of the first speech and the second speech being similar, the first user interface and the second user interface being different.
  • In some implementations, the first context is based on one or both of audio or visual determinations of a surrounding environment of an assistant device to which the first speech and the second speech are directed.
  • In some implementations, the first context includes a first interaction corresponding to the first speech at a first distance, wherein the second context includes a second interaction corresponding to the second speech at a second distance, the first distance and the second distance being different.
  • In some implementations, the first context includes a first user providing the first speech, the second context includes a second user providing the second speech, the first user and the second user being different.
  • In some implementations, the first user is associated with a first skill level with interacting with an assistant device, the second user is associated with a second skill level with interacting with the assistant device, the first skill level and the second skill level being different, the first context based on the first skill level, and the second context based on the second skill level.
  • In some implementations, the first context and the second context include one or more of a user interacting with an assistant device, people in the environment around the assistant device, a time of an interaction with the assistant device, a location of a user interacting with the assistant device, or a skill level of a user interacting with the assistant device.
  • In some implementations, the computer program instructions cause the one or more computing devices to: determine a change in the environment; and generate a third user interface based on one or more of the first context or the second context in response to the change in the environment to maintain privacy expectations of one or more users present in the environment.
  • Some of the subject matter described herein also includes an electronic device including: a display screen; one or more processors; and memory storing instructions, wherein the processor is configured to execute the instructions such that the processor and memory are configured to: determine a first occurrence of a first activity within an environment of the electronic device; determine a first context of the environment of the electronic device; and display a first graphical user interface (GUI) on the display screen based on the first context of the environment and the first activity.
  • In some implementations, the first activity is one or more of speech spoken within the environment, or noise generated by an object within the environment.
  • In some implementations, the processor is configured to execute the instructions such that the processor and memory are configured to: determine a second occurrence of a second activity within the environment of the electronic device; determine a second context of the environment of the electronic device, the first context and the second context being different; and display a second graphical user interface (GUI) on the display screen based on the second context of the environment and the second activity, first content of the first GUI being different than second content of the second GUI.
  • In some implementations, the first activity and the second activity are similar, and the first content of the first GUI is different than the second content of the second GUI based on differences between the first context of the environment and the second context of the environment.
  • In some implementations, the first content includes a first number of graphical representations of information or access to functionality provided by the electronic device, the second content includes a second number of graphical representations of information or access to functionality provided by the electronic device, the first number and the second number being different.
  • In some implementations, the first content includes a first graphical representation of an item providing information or access to functionality provided by the electronic device at a first size, the second content includes a second graphical representation of the item at a second size, the first size and the second size being different.
  • In some implementations, the first activity is speech spoken within the environment of the electronic device, the first context of the environment including one or more of a location of a user providing the speech, a time of the speech, a user identity corresponding to the user providing the speech, a skill level with interacting with the home assistant device of the user providing the speech, a schedule of the user providing the speech, or characteristics of the speech.
  • In some implementations, the first GUI includes content responding to the first activity based on the first context.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an example of an assistant device providing a user interface based on the context of the environment.
  • FIG. 2 illustrates an example of a block diagram providing a user interface based on the context of the environment.
  • FIG. 3 illustrates an example of a block diagram determining the context of the environment of an assistant device.
  • FIG. 4 illustrates another example of an assistant device providing a user interface based on the context of the environment.
  • FIG. 5 illustrates an example of an assistant device.
  • FIG. 6 illustrates an example of a block diagram for adjusting a user interface to maintain privacy expectations.
  • DETAILED DESCRIPTION
  • This disclosure describes devices and techniques for providing a user interface for a home assistant device based on the context, or characteristics, of its surrounding environment. In one example, the user interface of the home assistant device (e.g., a graphical user interface (GUI) generated for display on a display screen of the home assistant device) can be different based on a combination of contextual factors of the surrounding environment including the person interacting with the home assistant device, the people in the surrounding environment, the time, the location of the home assistant device within the home, the location of the person interacting with the home assistant device, the presence of strangers, interests of the users, etc. As a result, based on the contextual factors, different content (e.g., information, graphical icons providing access to functionality of the home assistant device, etc.) can be displayed by the home assistant device.
  • Additionally, the same content can be displayed differently. For example, different languages, visual effects, etc. can be provided based on the context of the environment. In another example, two different users (or even the same user at different times) might ask the same question to the home assistant device. Based on differences within the context of the environment when the question is asked, the user interface can provide the same answers to the question differently.
  • In more detail, FIG. 1 illustrates an example of an assistant device providing a user interface based on the context of the environment. In FIG. 1, home assistant device 110 can include a microphone (e.g., a microphone array) to receive voice input from users and a speaker to provide audio output in the form of a voice (or other types of audio) to respond to the user. Additionally, home assistant device 110 can include a display screen to provide visual feedback to users by generating a graphical user interface (GUI) providing content for display. For example, a user can ask home assistant device 110 a question and a response to that question can be provided on the display screen. Additional visual components, such as light emitting diodes (LEDs), can also be included. As a result, the user interface can include audio, voice, display screens, lighting, and other audio or visual components. In some implementations, camera 115 can also be included for home assistant device 110 to receive visual input of its surrounding environment. Camera 115 can be physically integrated with (e.g., physically coupled to) home assistant device 110, or camera 115 can be a separate component of a home's wireless network that can provide video data to home assistant device 110.
  • In FIG. 1, home assistant device 110 can be in a particular location of the home, for example, the kitchen. Different users might interact with home assistant device from different locations within the home (e.g., the kitchen or the living room) and at different times. Additionally, the different users might be interested in different features, functionalities, or information provided by home assistant device 110. These different contextual factors of the environment of home assistant device 110 can result in the user interface of home assistant device 110 to be changed. Because the user interface can provide content such as features, functionalities, information, etc., this can result in different content being displayed on the display screen. That is, different combinations of contextual factors of the environment can result in a different user interface of home assistant device 110, resulting in an adaptive user interface based on context of the environment.
  • For example, in FIG. 1, user 130 a can be in the kitchen (i.e., in the same room or within close proximity of home assistant device 110) at 11:39 PM. Home assistant device 110 can recognize user 130 a, for example, using video input from camera 115 to visually verify user 130 a. In another example, home assistant device 110 can recognize user 130 a through speech recognition as user 130 a speaks either to home assistant device 110, to other people, or even to himself. User 130 a can also have had previous interactions with home assistant device 110, and therefore, home assistant device 110 can remember the likes or preferences, expectations, schedule, etc. of user 130 a. As a result, user interface 120 a can be generated for user 130 a to interact with home assistant device 110 based on the current context of the environment indicating the user, time, and location that the user is speaking from.
  • By contrast, user 130 b can be in the living room of the same home as home assistant device 110 at 8:30 AM. Because the user, time, and location of the user are different, home assistant device 110 can generate a different user interface 120 b providing a different GUI having different content as depicted in FIG. 1. As a result, user interface 120 b can be different from user interface 120 a because they are provided, or generated, in response to different contextual environments when users 130 a and 130 b speak. This can occur even if the content of the speech provided by users 130 a and 130 b is similar, or even the same. For example, if both users 130 a and 130 b ask the same or similar question (e.g., their speech includes similar or the same content, such as asking for a list of new restaurants that have opened nearby), the user interface (to respond to the question) that is provided by home assistant device 110 can be different because of the different context of the environments when the speech was spoken.
  • In another example, because user interface 120 a was generated in the evening, it can have different colors, brightness, or other visual characteristics than user interface 120 b. This might be done because the user interface should not be too disruptive in different lighting situations. For example, a light sensor (e.g., a photodiode) can be used to determine that a room is dark. Home assistant device 110 can then adjust the brightness of the display screen based on the determined lighting situation in the environment.
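The brightness adjustment described above can be illustrated with a short sketch. The lux thresholds and brightness levels below are illustrative assumptions, not values from the disclosure:

```python
def display_brightness(ambient_lux):
    """Map an ambient light reading (in lux, e.g., from a photodiode)
    to a display brightness level between 0.0 and 1.0.

    Thresholds are illustrative: a dark room gets a dim,
    non-disruptive screen; a bright room gets full brightness.
    """
    if ambient_lux < 10:      # dark room (e.g., late evening)
        return 0.2
    elif ambient_lux < 200:   # typical indoor lighting
        return 0.6
    else:                     # bright daylight
        return 1.0
```

A device could poll the sensor periodically and re-apply this mapping whenever the lighting situation changes.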
  • Additionally, because users 130 a and 130 b are in different rooms and, therefore, at different distances from home assistant device 110, the user interfaces 120 a and 120 b can be different to take that into account. For example, because user 130 a in FIG. 1 is in the kitchen, he may be relatively close to home assistant device 110 and, therefore, the size of some of the content (e.g., items A-G which can be buttons, icons, text, etc.) of a GUI provided as user interface 120 a can be relatively small. By contrast, because user 130 b is in the living room (i.e., farther away from home assistant device 110 than user 130 a), some of the content of user interface 120 b can be larger so that it can be more easily seen from a distance. For example, in FIG. 1, icons A and F have different sizes among the different user interfaces 120 a and 120 b. That is, content such as the items of the user interfaces that provide access to the same functionality or provide an indication to the same type of information can be different sizes because the contextual environments are different. For example, if users 130 a and 130 b request a listing of new, nearby restaurants, icons A-G might represent a list of some of the identified restaurants. Additionally, the playback of audio can be at a volume based on the distance that a user is from home assistant device 110. For example, audio can be played back at a higher volume for a user who is farther away than for a user who is closer to home assistant device 110.
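The distance-based scaling of content size and playback volume could be sketched as follows; the linear scaling rule, the 1-meter reference distance, and the volume clamp are illustrative assumptions:

```python
def scale_for_distance(base_size_px, base_volume, distance_m):
    """Scale GUI content size and audio playback volume with the
    user's estimated distance from the device.

    Content never shrinks below its base size; volume grows with
    distance but is clamped to 1.0 (full volume).
    """
    factor = max(1.0, distance_m / 1.0)  # grow past a 1 m reference distance
    icon_px = int(base_size_px * factor)
    volume = min(1.0, base_volume * factor)
    return icon_px, volume
```

A kitchen user at 1 m would see 32-pixel icons at base volume, while a living-room user at 3 m would see 96-pixel icons at triple the base volume.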
  • User interfaces 120 a and 120 b can also be different in other ways. For example, the location of content, the amount of content, etc. as depicted in FIG. 1 can also be different due to the different contextual environments.
  • FIG. 2 illustrates an example of a block diagram for providing a user interface based on the context of the environment. In FIG. 2, at block 203, speech can be determined to have been spoken. For example, a microphone of home assistant device 110 can pick up speech spoken within the environment. That speech can be converted into voice data and analyzed by a processor of home assistant device 110 to determine that speech has been received. At block 205, the context of the surrounding environment or vicinity around home assistant device 110 can be determined. For example, home assistant device 110 can determine any of the aforementioned details regarding the environment in the physical space around home assistant device 110 including time, user, prior interactions with the user, locations of the user and home assistant device 110, etc. Any of the details discussed below can also be determined. At block 210, the user interface can be provided or generated based on the determined context and content of the speech. For example, this can include generating a GUI with content related to the content of the speech and provided at various sizes, colors, etc. on a display screen of home assistant device 110 based on the context. In some implementations, the user interface can also include playback of audio (e.g., sounds), turning on various lighting effects (e.g., LEDs), etc. For example, different GUIs with different audio effects can be provided.
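The flow of blocks 203, 205, and 210 can be summarized in a brief sketch; the context keys (`distance_m`, `hour`) and the thresholds are hypothetical stand-ins for whatever contextual factors an implementation actually uses:

```python
def handle_speech(speech_text, context):
    """Sketch of the FIG. 2 flow: speech has been detected (block 203),
    the environment context has been determined (block 205), and a UI
    description is generated from both (block 210).

    `context` is a dict with illustrative keys such as 'distance_m'
    and 'hour'; the returned dict stands in for a generated GUI.
    """
    hour = context.get("hour", 12)
    ui = {
        "content": f"results for: {speech_text}",
        # Larger items when the user is farther than a 2 m threshold.
        "item_size": "large" if context.get("distance_m", 0) > 2 else "small",
        # Dim, dark-themed display late at night or early in the morning.
        "dark_theme": hour >= 20 or hour < 6,
    }
    return ui
```

The same speech with two different contexts yields two different user interfaces, matching the behavior described above.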
  • Next, home assistant device 110 can pick up more speech at a different time. However, if the context of the environment is different, then a different user interface than that generated at block 210 can be generated. Thus, even if the content of the speech at the two different times was the same, the user interfaces generated can be different if the context of the environment was different.
  • FIG. 3 illustrates an example of a block diagram for determining the context of the environment of an assistant device. In FIG. 3, as previously discussed, the location of the speech can be determined at block 305, the time of the speech can be determined at block 310, and the user providing speech can be determined at block 315 to determine the context of the environment.
  • Other details can include the skill level of the user at block 320. For example, home assistant device 110 can determine the skill level of a user as they interact more with the user interface. If the user uses more functionality or more complicated functionality, requests a significant amount of detail regarding functionality, etc., then the user can be identified by home assistant device 110 as a more sophisticated user. By contrast, if another user tends to ask the same repetitive tasks or questions of home assistant device 110, then that user can be identified as a less sophisticated user. If the user tends to use less complicated functionality, less functionality, or does not request significant detail, then the user can also be identified as a less sophisticated user. In FIG. 1, user 130 a can be a more sophisticated user, indicating that the user has a relatively high skill level in using home assistant device 110, and therefore, more functionality (or content) can be provided on user interface 120 a (i.e., items A-G are provided). By contrast, user 130 b can be a less sophisticated user, indicating that the user has a relatively lower skill level (than user 130 a), and therefore, less content can be provided on user interface 120 b (i.e., fewer items A, C, D, and F are provided). In some implementations, the same amount of content might be provided in the user interfaces, but different content corresponding to different functionalities or features might be displayed based on the skill level of the user. Thus, different content can be provided in a user interface of home assistant device 110.
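One way to approximate the skill-level determination at block 320 is to count the variety of features a user has invoked; the threshold of five distinct features and the four-item reduced subset are illustrative assumptions, not values from the disclosure:

```python
def skill_level(interaction_log):
    """Classify a user's sophistication from their interaction history.

    `interaction_log` is a list of feature names the user has invoked;
    a user who repeats the same few requests counts as 'basic', while
    a user who exercises many distinct features counts as
    'sophisticated'.
    """
    distinct_features = len(set(interaction_log))
    return "sophisticated" if distinct_features >= 5 else "basic"

def items_for_user(all_items, level):
    """A more sophisticated user sees the full set of items (A-G in
    FIG. 1); a less sophisticated user sees a reduced subset.
    """
    return list(all_items) if level == "sophisticated" else list(all_items)[:4]
```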
  • As previously discussed, the user interface can include other visual components other than displaying content as part of a GUI on a display screen. In FIG. 1, this can include lighting, for example, LEDs or other types of lights which can be turned on, glow, flicker, display a particular color, etc. to provide an indication to a user of a situation. For example, home assistant device 110 can determine a user's schedule at block 325 and provide an indication as to when the user should be leaving the home so that they can maintain that schedule without any tardiness. In FIG. 1, this can result in a ring around the display screen that can be different colors (e.g., implemented with LEDs or other types of lighting); in other implementations, the ring can be part of the display screen itself.
  • In one example, the ring can be a color corresponding to the traffic or commute status for the user to go to their next expected location, such as the workplace in the morning or a coffee meeting scheduled on their calendar. If the ring is set to a green color, then this can indicate to the user that the traffic is relatively light. By contrast, a red color can indicate that the traffic is relatively heavy. This type of user interface can provide a user with information while they are far away from home assistant device 110 because the colors can be easily seen from a distance. In some implementations, the ring can also indicate whether the user needs to leave soon or immediately if they want to make the next appointment on their schedule. For example, the intensity or brightness of the color can be increased, the ring can be blinking, etc. This can provide further detail from a distance for a user. In some implementations, the user interface can also display on the display screen a route to the location of the next event on their schedule, provide a time estimate, etc. As a result, if the user decides that they want more detail and walks closer to home assistant device 110, information can be readily displayed and available. In some implementations, home assistant device 110 can determine that the user is walking closer after the ring has been activated and then process information and display the additional information on the display screen so that information is available when they are closer.
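The ring's color and blink behavior could be derived from a commute estimate and the user's schedule as sketched below; the 20-minute traffic threshold and the 5-minute "leave now" margin are illustrative assumptions:

```python
def ring_state(commute_minutes, minutes_until_event):
    """Choose the color and blink state of the LED ring from the
    current commute estimate and the time remaining before the next
    scheduled event.

    Green indicates relatively light traffic; red indicates heavy
    traffic. Blinking indicates the user must leave soon or
    immediately to make the appointment.
    """
    color = "green" if commute_minutes <= 20 else "red"
    # Blink when the slack before the event is within the margin.
    blink = minutes_until_event - commute_minutes <= 5
    return color, blink
```

Both signals are legible from across a room, which is why they are expressed as color and motion rather than on-screen text.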
  • The user interface can also include audio sounds for playback. For example, user interface 120 a in FIG. 1 might play back one type of audio sound when user 130 a interacts with it, for example, selecting one of the items A-G, requesting user interface 120 a to change (e.g., provide new content), etc. By contrast, user interface 120 b might play back different sounds for the same interactions by user 130 b because of the different context of the environment.
  • Characteristics regarding the speech received by home assistant device 110 can also be determined at block 330. For example, home assistant device 110 can determine the volume, speed, accent, language, tone, etc. of speech and use that as a contextual factor in providing a user interface. In one example, if a user is speaking quickly (e.g., at a speed or rate determined to be within a words per minute range corresponding to speaking quickly), then content of the user interface may be updated faster than if the user was speaking slowly, for example, by updating the GUI of the user interface sooner. In another example, if the user's speech is determined to be indicative of stress or frustration, then the user interface might provide content differently than if the user's speech is determined to be relatively free of stress or frustration. As an example, if the user is stressed or frustrated, then the amount of content provided on the user interface can be reduced in comparison with the user not being stressed or frustrated.
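The mapping from speech characteristics to user interface behavior at block 330 might look like the following sketch; the 160 words-per-minute threshold and the item counts are illustrative assumptions:

```python
def ui_params_from_speech(words_per_minute, stressed):
    """Derive UI pacing and content density from speech
    characteristics.

    A fast speaker gets quicker GUI updates; a stressed or frustrated
    speaker gets a reduced amount of on-screen content.
    """
    update_interval_s = 0.5 if words_per_minute > 160 else 1.5
    max_items = 3 if stressed else 7  # reduce content for a stressed user
    return update_interval_s, max_items
```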
  • In some implementations, if the user is determined to be stressed or frustrated, then the user interface can include the playback of music. For example, calming music can be played back using the speaker of home assistant device 110.
  • In some implementations, the lighting of home assistant device 110 can be different based on what is provided on the user interface. For example, different types of content can result in different brightness, colors, etc.
  • The user interface can also be changed to account for privacy expectations of a user when the context of the environment changes (i.e., the conditions or characteristics of the environment change). FIG. 4 illustrates another example of an assistant device providing a user interface based on the context of the environment. In FIG. 4, users 130 a, 130 b, and 130 c are within the home environment of home assistant device 110. These different users can be identified and the user interface 120 c in FIG. 4 can be generated to take into account privacy concerns of the various users.
  • For example, user 130 a might want some content to be provided on a user interface if he is alone, but might not want that content to be displayed if others are within the home. Likewise, user 130 b also might not want some content to be provided. In some implementations, user 130 a might find it acceptable to have the content provided on the user interface even if the presence of user 130 b is detected because user 130 b is a member of the same household. However, user 130 a might want that content to not be displayed if strangers or guests are in the home. User 130 c can be a stranger or newcomer into the home environment and has never interacted with home assistant device 110 and therefore, is unrecognized by home assistant device 110.
  • Home assistant device 110 can recognize the different users or persons within the home and generate user interface 120 c based on the users 130 a-c. For example, home assistant device 110 can take some details of user interfaces 120 a and 120 b (e.g., user interfaces normally for users 130 a and 130 b, respectively) and generate user interface 120 c in FIG. 4 based on those other user interfaces. That is, user interface 120 c can be generated based on how user interfaces would be generated for users 130 a and 130 b. In FIG. 4, this results in some content of user interface 120 c having a relatively large size (e.g., as in user interface 120 b), but less content than either user interface 120 a or 120 b. In some implementations, content that would mutually exist in user interfaces 120 a and 120 b can be provided within user interface 120 c, but content that is only on one of user interfaces 120 a and 120 b might not be provided because it might only appeal to a single user or those users might have different privacy expectations. For example, item B as depicted in user interface 120 a in FIG. 1 might not appear because it is not provided within user interface 120 b in FIG. 1.
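Selecting the mutually acceptable content for user interface 120 c amounts to an intersection of the content that would appear on each user's individual interface, as in this sketch (item names mirror the A-G items of FIG. 1 but are otherwise hypothetical):

```python
def shared_gui(items_user_a, items_user_b):
    """Build a combined GUI from the content that would appear on both
    users' individual interfaces (as in FIG. 4): items present on both
    are kept; items unique to one user are withheld, since they may
    reflect that user's individual interests or privacy expectations.

    The order of the first list is preserved for a stable layout.
    """
    common = set(items_user_a) & set(items_user_b)
    return [item for item in items_user_a if item in common]
```

Applied to FIG. 1's interfaces, item B would be withheld from the combined interface because it appears only on user interface 120 a.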
  • In some implementations, upon detection of user 130 c (i.e., a stranger or guest in the environment), the user interface can also be adapted to take into account an unrecognized user. For example, upon detection of an unrecognized user, some content might be removed from a user interface. When the unrecognized user leaves, this can be detected, and therefore, home assistant device 110 can then provide the removed content back with the user interface. As a result, the user's privacy expectations can be maintained when guests are nearby.
  • Other types of changes in context of the environment other than detection of strangers or guests can include determining differences in time. For example, a user might find it acceptable to display some content on the GUI late at night or early in the morning, but might not want that content displayed during the daytime because the likelihood of others seeing that content might be higher. Another example can include activities of persons within the environment. For example, if several people in the environment are discussing a particular topic, if a social gathering is taking place, etc., then a user's privacy expectations can be elevated and, therefore, some of the content that would otherwise be displayed can be removed.
  • In some implementations, a user's privacy expectations can be set by that user or learned by home assistant device 110 over time, or a combination of both. For example, the user can indicate that certain content should not be displayed when unrecognized persons are in the environment. As another example, the user might remove content from the GUI and home assistant device 110 can identify the context in the environment when the user removed the content to determine the user's privacy expectations.
  • FIG. 6 illustrates an example of a block diagram for adjusting a user interface to maintain privacy expectations. In FIG. 6, at block 605, the context of the environment can be determined. For example, the presence of persons including recognized users and/or strangers, the time, activities being performed in the environment, etc. can be determined. At block 607, privacy expectations for a user based on the context can be determined. For example, if a user is within the environment, a GUI providing various content can be provided. However, if strangers or guests are detected within the environment, the user might not want certain content displayed on the GUI due to an increase in privacy concerns resulting in higher privacy expectations for that content. Thus, at block 610, the GUI can be adjusted or modified based on the privacy expectations. For example, the content can be removed due to the increase in privacy expectations while the stranger or guest is present within the environment.
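The FIG. 6 flow of blocks 605, 607, and 610 can be sketched as follows; representing privacy-sensitive content as a separate list and treating any unrecognized person as a stranger or guest are illustrative assumptions:

```python
def adjust_gui(base_items, private_items, present_people, known_users):
    """Sketch of the FIG. 6 flow: determine who is present in the
    environment (block 605), derive the privacy expectation
    (block 607), and adjust the GUI accordingly (block 610).

    `known_users` is the set of recognized household members; anyone
    else present is treated as a stranger or guest, which elevates
    the privacy expectation and withholds private content.
    """
    strangers_present = any(p not in known_users for p in present_people)
    if strangers_present:
        return list(base_items)  # elevated privacy: withhold private items
    return list(base_items) + list(private_items)
```

Re-running this whenever the detected set of people changes reproduces the behavior described next: when the guest leaves, the previously removed content reappears.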
  • When the stranger or guest leaves, this can be determined as a change in the context of the environment and, therefore, also a change in the privacy expectations for the user. Because the user might be the only person within the environment, the GUI can be modified again to include the content that was previously removed. Thus, if the context of the environment changes and, therefore, the user for whom the GUI is provided has a change in privacy expectations, then the GUI can be adapted.
  • Many of the examples disclosed herein discuss visual adaptations for the user interface. However, audio adaptations can also be performed based on the context situations described above. For example, the type of voice, accent, volume, etc. can also be adjusted for different user interfaces using the techniques described herein.
  • Many of the examples disclosed herein discuss speech being recognized. However, other types of audio can also be used with the techniques. For example, noise from objects such as a television or radio, a doorbell ringing, a door opening, glass shattering, etc. can also be detected as occurrences of activity other than speech.
  • In some implementations, the content of the user interface can also be changed based on whether or not it is determined that a user is looking at home assistant device 110 or speaking to home assistant device 110. For example, the display screen of home assistant device 110 might be turned off, but can turn on when it is determined that a user is looking at it.
  • Many of the aforementioned examples discuss a home environment. In other examples, the devices and techniques discussed herein can also be set up in an office, public facility, etc.
  • FIG. 5 illustrates an example of an assistant device. In FIG. 5, home assistant device 110 can be an electronic device with one or more processors 605 (e.g., circuits) and memory 610 for storing instructions that can be executed by processors 605 to implement contextual user interface 630 providing the techniques described herein. Home assistant device 110 can also include microphone 620 (e.g., one or more microphones that can implement a microphone array) to convert sounds into electrical signals, and therefore, speech into data that can be processed using processors 605 and stored in memory 610. Speaker 615 can be used to provide audio output. Additionally, display 625 can display a GUI implemented by processors 605 and memory 610 to provide visual feedback. Memory 610 can be a non-transitory computer-readable storage medium. Home assistant device 110 can also include various other hardware, such as cameras, antennas, etc. to implement the techniques disclosed herein. Thus, the examples described herein can be implemented with programmable circuitry (e.g., one or more microprocessors) programmed with software and/or firmware, or entirely in special-purpose hardwired (non-programmable) circuitry, or in a combination of such forms. Special-purpose hardwired circuitry may be in the form of, for example, one or more application specific integrated circuits (ASICs), complex programmable logic devices (CPLDs), field programmable gate arrays (FPGAs), structured ASICs, etc.
  • Those skilled in the art will appreciate that the logic and process steps illustrated in the various flow diagrams discussed herein may be altered in a variety of ways. For example, the order of the logic may be rearranged, sub-steps may be performed in parallel, illustrated logic may be omitted, other logic may be included, etc. One will recognize that certain steps may be consolidated into a single step and that actions represented by a single step may be alternatively represented as a collection of substeps. The figures are designed to make the disclosed concepts more comprehensible to a human reader. Those skilled in the art will appreciate that actual data structures used to store this information may differ from the figures and/or tables shown, in that they, for example, may be organized in a different manner; may contain more or less information than shown; may be compressed, scrambled and/or encrypted; etc.
  • From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications can be made without deviating from the scope of the invention. Accordingly, the invention is not limited except as by the appended claims.

Claims (22)

1. A home assistant device, comprising:
a display screen;
one or more processors; and
memory storing instructions, wherein the one or more processors are configured to execute the instructions such that the one or more processors and memory are configured to:
determine a first context of an environment of the home assistant device, the first context including a presence of people including a first user and a second user in the environment;
determine a first privacy expectation of the first user based on the first context;
determine a second privacy expectation of the second user based on the first context;
determine that a graphical user interface (GUI) for the first user based on the first privacy expectation includes a first content item and a second content item, and that a GUI for the second user based on the second privacy expectation includes the first content item and not the second content item;
generate a graphical user interface (GUI) including the first content item and not the second content item, the first content item providing information or access to functionality of the home assistant device based on the first context and the determination that the GUI for the second user includes the first content item and not the second content item;
determine a second context of the environment, the second context including the second user no longer within the environment;
determine a third privacy expectation of the first user based on the second context, the first privacy expectation and the third privacy expectation being different, the third privacy expectation representing a decrease in privacy concerns of the first user in comparison with the first privacy expectation based on the differences between the first context and the second context; and
modify the GUI based on the third privacy expectation, modifying the GUI including adding the second content item to the GUI due to the decrease in privacy concerns represented by the third privacy expectation.
2. A method for privacy control of a contextual user interface, comprising:
determining a first context of an environment of an assistant device, the first context indicating that a first user and a second user are within the environment;
determining, by a processor of the assistant device, a first privacy expectation of the first user based on the first context;
determining a second privacy expectation of the second user based on the first context;
determining similarities of content for a graphical user interface (GUI) generated for the first user based on the first context if the second user is not within the environment and content for a GUI generated for the second user based on the first context if the first user is not within the environment; and
generating a GUI including the similar content providing information or access to functionality of the assistant device.
3. The method of claim 2, further comprising:
determining a second context of the environment, the second context being different than the first context;
determining a third privacy expectation of the first user based on the second context, the first privacy expectation and the third privacy expectation being different based on differences between the first context and the second context; and
modifying the GUI based on the third privacy expectation.
4. The method of claim 3, wherein modifying the GUI includes removing content providing information or access to functionality of the assistant device.
5. The method of claim 3, wherein the third privacy expectation represents an increase in privacy concerns of the first user in comparison with the first privacy expectation based on the differences between the first context and the second context.
6. The method of claim 5, wherein the increase in privacy concerns is based on the second context indicating a presence of a stranger or guest in the environment, the first context not indicating the presence of a stranger or guest in the environment.
7. The method of claim 6, further comprising:
determining that the environment no longer includes the presence of the stranger or guest; and
modifying the GUI based on the determination that the environment no longer includes the presence of the stranger or guest.
8. The method of claim 2, wherein the first context includes one or more of a time, a presence of people in the environment, or activities performed by the people in the environment.
9. An electronic device, comprising:
one or more processors; and
memory storing instructions, wherein the one or more processors are configured to execute the instructions such that the one or more processors and memory are configured to:
determine a first context of an environment of the electronic device, the first context indicating that a first user and a second user are within the environment;
determine, by a processor of the electronic device, a first privacy expectation of the first user based on the first context;
determine a second privacy expectation of the second user based on the first context;
determine similarities of content for a graphical user interface (GUI) generated for the first user based on the first context if the second user is not within the environment and content for a GUI generated for the second user based on the first context if the first user is not within the environment;
generate a graphical user interface (GUI) including the similar content providing information or access to functionality of the electronic device.
10. The electronic device of claim 9, wherein the one or more processors are further configured to execute the instructions such that the one or more processors and memory are configured to:
determine a second context of the environment, the second context being different than the first context;
determine a third privacy expectation of the first user based on the second context, the first privacy expectation and the third privacy expectation being different based on differences between the first context and the second context; and
modify the GUI based on the third privacy expectation.
11. The electronic device of claim 10, wherein modifying the GUI includes removing content providing information or access to functionality of the electronic device.
12. The electronic device of claim 10, wherein the third privacy expectation represents an increase in privacy concerns of the first user in comparison with the first privacy expectation based on the differences between the first context and the second context.
13. The electronic device of claim 12, wherein the increase in privacy concerns is based on the second context indicating a presence of a stranger or guest in the environment, the first context not indicating the presence of a stranger or guest in the environment.
14. The electronic device of claim 13, wherein the one or more processors are further configured to execute the instructions such that the one or more processors and memory are configured to:
determine that the environment no longer includes the presence of the stranger or guest; and
modify the GUI based on the determination that the environment no longer includes the presence of the stranger or guest.
15. The electronic device of claim 9, wherein the first context includes one or more of a time, a presence of people in the environment, or activities performed by the people in the environment.
16. A computer program product, comprising one or more non-transitory computer-readable media having computer program instructions stored therein, the computer program instructions being configured such that, when executed by one or more computing devices, the computer program instructions cause the one or more computing devices to:
determine a first context of an environment of the computing device, the first context indicating that a first user and a second user are within the environment;
determine, by a processor of the computing device, a first privacy expectation of the first user based on the first context;
determine a second privacy expectation of the second user based on the first context;
determine similarities of content for a graphical user interface (GUI) generated for the first user based on the first context if the second user is not within the environment and content for a GUI generated for the second user based on the first context if the first user is not within the environment;
generate a graphical user interface (GUI) including the similar content providing information or access to functionality of the computing device.
17. The computer program product of claim 16, wherein the computer program instructions cause the one or more computing devices to:
determine a second context of the environment, the second context being different than the first context;
determine a third privacy expectation of the first user based on the second context, the first privacy expectation and the third privacy expectation being different based on differences between the first context and the second context; and
modify the GUI based on the third privacy expectation.
18. The computer program product of claim 17, wherein modifying the GUI includes removing content providing information or access to functionality of the computing device.
19. The computer program product of claim 17, wherein the third privacy expectation represents an increase in privacy concerns of the first user in comparison with the first privacy expectation based on the differences between the first context and the second context.
20. The computer program product of claim 19, wherein the increase in privacy concerns is based on the second context indicating a presence of a stranger or guest in the environment, the first context not indicating the presence of a stranger or guest in the environment.
21. The computer program product of claim 20, wherein the computer program instructions cause the one or more computing devices to:
determine that the environment no longer includes the presence of the stranger or guest; and
modify the GUI based on the determination that the environment no longer includes the presence of the stranger or guest.
22. The computer program product of claim 16, wherein the first context includes one or more of a time, a presence of people in the environment, or activities performed by the people in the environment.
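The adaptive-GUI logic recited in claims 16–22 can be sketched as a minimal Python model. Everything here is hypothetical and chosen only for illustration (the `Context` class, the `privacy_expectation` helper, the per-user content catalog, and the choice of `"shared_photos"` as the sensitive item); none of these names come from the specification.

```python
# Hypothetical sketch of claims 16-22: show content common to the known users
# present, and trim it when a stranger or guest raises privacy expectations.
from dataclasses import dataclass


@dataclass(frozen=True)
class Context:
    """Snapshot of the environment (claim 22: time, people, activities)."""
    time: str
    people: frozenset
    activities: frozenset = frozenset()


KNOWN_USERS = {"alice", "bob"}  # illustrative registered users


def privacy_expectation(user: str, context: Context) -> str:
    """Claims 19-20: expectation rises when a stranger/guest is present."""
    strangers = context.people - KNOWN_USERS
    return "high" if strangers else "normal"


def content_for(user: str, context: Context) -> set:
    # Placeholder per-user content selection; a real system would consult
    # preferences, history, and available device functionality.
    catalog = {
        "alice": {"calendar", "shared_photos", "music"},
        "bob": {"email", "shared_photos", "music"},
    }
    return catalog.get(user, set())


def generate_gui(context: Context) -> set:
    """Claim 16: include only content similar across all present known users."""
    present = [u for u in context.people if u in KNOWN_USERS]
    contents = [content_for(u, context) for u in present]
    shared = set.intersection(*contents) if contents else set()
    # Claims 17-18: under a heightened privacy expectation, remove content.
    if any(privacy_expectation(u, context) == "high" for u in present):
        shared -= {"shared_photos"}  # hypothetical sensitive item
    return shared


# First context: only known users; second context: a guest has arrived.
ctx1 = Context(time="evening", people=frozenset({"alice", "bob"}))
ctx2 = Context(time="evening", people=frozenset({"alice", "bob", "guest"}))
```

With these toy inputs, `generate_gui(ctx1)` yields the intersection of both users' content, while `generate_gui(ctx2)` drops the sensitive item because the guest raises the privacy expectation (claim 18); re-evaluating after the guest leaves restores it (claim 21).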
US15/599,398 2017-01-20 2017-05-18 Contextual user interface based on changes in environment Abandoned US20180210738A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/599,398 US20180210738A1 (en) 2017-01-20 2017-05-18 Contextual user interface based on changes in environment

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762448912P 2017-01-20 2017-01-20
US201762486365P 2017-04-17 2017-04-17
US201762486359P 2017-04-17 2017-04-17
US15/599,398 US20180210738A1 (en) 2017-01-20 2017-05-18 Contextual user interface based on changes in environment

Publications (1)

Publication Number Publication Date
US20180210738A1 true US20180210738A1 (en) 2018-07-26

Family

ID=62906430

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/587,201 Expired - Fee Related US10359993B2 (en) 2017-01-20 2017-05-04 Contextual user interface based on environment
US15/599,398 Abandoned US20180210738A1 (en) 2017-01-20 2017-05-18 Contextual user interface based on changes in environment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/587,201 Expired - Fee Related US10359993B2 (en) 2017-01-20 2017-05-04 Contextual user interface based on environment

Country Status (4)

Country Link
US (2) US10359993B2 (en)
DE (1) DE112017006882T5 (en)
TW (1) TW201828025A (en)
WO (1) WO2018136109A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11803352B2 (en) * 2018-02-23 2023-10-31 Sony Corporation Information processing apparatus and information processing method
EP3791259A1 (en) 2018-12-07 2021-03-17 Google LLC Conditionally assigning various automated assistant function(s) to interaction with a peripheral assistant control device
US11171901B2 (en) * 2019-04-17 2021-11-09 Jvckenwood Corporation Chat server, chat system, and non-transitory computer readable storage medium for supplying images and chat data
US12542136B2 (en) 2021-08-25 2026-02-03 Google Llc Dynamically configuring a warm word button with assistant commands


Family Cites Families (153)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6052556A (en) 1996-09-27 2000-04-18 Sharp Laboratories Of America Interactivity enhancement apparatus for consumer electronics products
US6141003A (en) 1997-03-18 2000-10-31 Microsoft Corporation Channel bar user interface for an entertainment system
US7080322B2 (en) 1998-12-18 2006-07-18 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US6332163B1 (en) 1999-09-01 2001-12-18 Accenture, Llp Method for providing communication services over a computer network system
AU2248501A (en) 1999-12-17 2001-06-25 Promo Vu Interactive promotional information communicating system
US6754904B1 (en) 1999-12-30 2004-06-22 America Online, Inc. Informing network users of television programming viewed by other network users
US7620703B1 (en) 2000-08-10 2009-11-17 Koninklijke Philips Electronics N.V. Topical service provides context information for a home network
WO2002021831A2 (en) 2000-09-08 2002-03-14 Kargo, Inc. Video interaction
US20020078453A1 (en) 2000-12-15 2002-06-20 Hanchang Kuo Hub pages for set top box startup screen
US7676822B2 (en) 2001-01-11 2010-03-09 Thomson Licensing Automatic on-screen display of auxiliary information
US20020162120A1 (en) 2001-04-25 2002-10-31 Slade Mitchell Apparatus and method to provide supplemental content from an interactive television system to a remote device
US6714778B2 (en) 2001-05-15 2004-03-30 Nokia Corporation Context sensitive web services
US6968334B2 (en) 2001-05-15 2005-11-22 Nokia Corporation Method and business process to maintain privacy in distributed recommendation systems
US7340438B2 (en) 2001-05-21 2008-03-04 Nokia Corporation Method and apparatus for managing and enforcing user privacy
WO2003054654A2 (en) 2001-12-21 2003-07-03 Nokia Corporation Location-based novelty index value and recommendation system and method
US9374451B2 (en) 2002-02-04 2016-06-21 Nokia Technologies Oy System and method for multimodal short-cuts to digital services
US7899915B2 (en) 2002-05-10 2011-03-01 Richard Reisman Method and apparatus for browsing using multiple coordinated device sets
US7069259B2 (en) 2002-06-28 2006-06-27 Microsoft Corporation Multi-attribute specification of preferences about people, priorities and privacy for guiding messaging and communications
AU2003284021A1 (en) 2002-10-11 2004-05-04 Solutions For Progress Benefits qualification analysis and application method and computer readable code
US20050043095A1 (en) 2003-08-20 2005-02-24 Larson Lee A. Apparatus and method for games requiring display of individual player information
US7458093B2 (en) 2003-08-29 2008-11-25 Yahoo! Inc. System and method for presenting fantasy sports content with broadcast content
KR100651729B1 (en) 2003-11-14 2006-12-06 한국전자통신연구원 System and method for multi-modal context-sensitive applications in home network environment
US7349758B2 (en) 2003-12-18 2008-03-25 Matsushita Electric Industrial Co., Ltd. Interactive personalized robot for home use
US7942743B2 (en) 2004-01-20 2011-05-17 Nintendo Co., Ltd. Game apparatus and storage medium storing game program
US10417298B2 (en) 2004-12-02 2019-09-17 Insignio Technologies, Inc. Personalized content processing and delivery system and media
KR100763900B1 (en) 2004-08-28 2007-10-05 삼성전자주식회사 Television program recording / playback method based on user's gaze information and device therefor
US7461343B2 (en) 2004-11-08 2008-12-02 Lawrence Kates Touch-screen remote control for multimedia equipment
US20060184800A1 (en) * 2005-02-16 2006-08-17 Outland Research, Llc Method and apparatus for using age and/or gender recognition techniques to customize a user interface
US20060223637A1 (en) 2005-03-31 2006-10-05 Outland Research, Llc Video game system combining gaming simulation with remote robot control and remote robot feedback
US20070015580A1 (en) 2005-07-18 2007-01-18 Hunter Wesley K Mobile terminals for supplementing game module resources and methods and computer program products for operating the same
US9614964B2 (en) 2005-08-19 2017-04-04 Nextstep, Inc. Consumer electronic registration, control and support concierge device and method
US9177338B2 (en) 2005-12-29 2015-11-03 Oncircle, Inc. Software, systems, and methods for processing digital bearer instruments
DE102006002265B4 (en) 2006-01-17 2010-01-21 Palm, Inc. (n.d.Ges. d. Staates Delaware), Sunnyvale Method and system for broadcast-based broadcasting of a video signal
US7898393B2 (en) 2006-07-14 2011-03-01 Skybox Scoreboards Inc. Wall-mounted scoreboard
JP2009545921A (en) 2006-07-31 2009-12-24 ユナイテッド ビデオ プロパティーズ, インコーポレイテッド System and method for providing a media guidance planner
US8253770B2 (en) 2007-05-31 2012-08-28 Eastman Kodak Company Residential video communication system
US8154583B2 (en) 2007-05-31 2012-04-10 Eastman Kodak Company Eye gazing imaging for video communications
US9848157B2 (en) 2007-08-28 2017-12-19 Cable Television Laboratories, Inc. Method of automatically switching television channels
US9351048B1 (en) 2007-08-29 2016-05-24 The Directv Group, Inc. Method and system for assigning a channel to data in a data stream
US8875212B2 (en) 2008-04-15 2014-10-28 Shlomo Selim Rakib Systems and methods for remote control of interactive video
US7950030B2 (en) 2007-11-28 2011-05-24 Sony Corporation TV remote control signal log
US20090150340A1 (en) 2007-12-05 2009-06-11 Motorola, Inc. Method and apparatus for content item recommendation
US7930343B2 (en) * 2008-05-16 2011-04-19 Honeywell International Inc. Scalable user interface system
JP4712911B2 (en) 2008-05-30 2011-06-29 株式会社カプコン GAME PROGRAM AND GAME DEVICE
US20100006684A1 (en) 2008-07-10 2010-01-14 Robert Edward Burton Spiral shear wood cutter
US8626863B2 (en) 2008-10-28 2014-01-07 Trion Worlds, Inc. Persistent synthetic environment message notification
US8469819B2 (en) 2009-06-04 2013-06-25 Michael Parker McMain Game apparatus and game control method for controlling and representing magical ability and power of a player character in an action power control program
US20100313239A1 (en) 2009-06-09 2010-12-09 International Business Machines Corporation Automated access control for rendered output
US20150304605A1 (en) 2009-12-07 2015-10-22 Anthony Hartman Interactive video system
US20110137727A1 (en) 2009-12-07 2011-06-09 Rovi Technologies Corporation Systems and methods for determining proximity of media objects in a 3d media environment
US8358383B2 (en) 2009-12-09 2013-01-22 Wills Christopher R Dynamic television menu creation
US8754931B2 (en) 2010-01-08 2014-06-17 Kopin Corporation Video eyewear for smart phone games
US8913009B2 (en) 2010-02-03 2014-12-16 Nintendo Co., Ltd. Spatially-correlated multi-display human-machine interface
US8954452B2 (en) 2010-02-04 2015-02-10 Nokia Corporation Method and apparatus for characterizing user behavior patterns from user interaction history
US8863008B2 (en) 2010-02-17 2014-10-14 International Business Machines Corporation Automatic removal of sensitive information from a computer screen
US8724639B2 (en) 2010-02-26 2014-05-13 Mohamed K. Mahmoud Smart home hub
US8826322B2 (en) 2010-05-17 2014-09-02 Amazon Technologies, Inc. Selective content presentation engine
US8591334B2 (en) 2010-06-03 2013-11-26 Ol2, Inc. Graphical user interface, system and method for implementing a game controller on a touch-screen device
US8359020B2 (en) 2010-08-06 2013-01-22 Google Inc. Automatically monitoring for voice input based on context
US20120086630A1 (en) 2010-10-12 2012-04-12 Sony Computer Entertainment Inc. Using a portable gaming device to record or modify a game or application in real-time running on a home gaming system
EP2628296A4 (en) 2010-10-14 2015-01-07 Fourthwall Media Inc Systems and methods for providing companion services to customer premises equipment using an ip-based infrastructure
US8776121B2 (en) 2010-11-03 2014-07-08 Google Inc. Social aspects of media guides
US20120117593A1 (en) 2010-11-08 2012-05-10 Yang Pan System and Method of Delivering Advertisements to a Mobile Communication Device
KR20130119444A (en) 2010-11-10 2013-10-31 톰슨 라이센싱 Gateway remote control system and method of operation
JP5193275B2 (en) 2010-12-01 2013-05-08 株式会社コナミデジタルエンタテインメント Information processing apparatus, information processing apparatus control method, and program
US8782269B2 (en) 2010-12-22 2014-07-15 Verizon Patent And Licensing Inc. Auto-discovery of home and out-of-franchise networks
EP2497552A3 (en) 2011-03-09 2013-08-21 Sony Computer Entertainment Inc. Information processing apparatus
US8847777B2 (en) 2011-03-25 2014-09-30 Apple Inc. Voltage supply droop detector
US20140020134A1 (en) * 2011-03-30 2014-01-16 Inplanta Innovations Inc. Fruit-specific promoter
US8882595B2 (en) 2011-04-29 2014-11-11 2343127 Ontartio Inc. Systems and methods of importing virtual objects using barcodes
US20120284618A1 (en) 2011-05-06 2012-11-08 Microsoft Corporation Document based contextual communication
EP2695049A1 (en) 2011-05-10 2014-02-12 NDS Limited Adaptive presentation of content
US8668588B2 (en) 2011-05-27 2014-03-11 Zynga Inc. Target map area of an online social game
JP5890969B2 (en) 2011-06-03 2016-03-22 任天堂株式会社 Information processing program, information processing apparatus, information processing system, and information processing method
JP5755943B2 (en) 2011-06-03 2015-07-29 任天堂株式会社 Information processing program, information processing apparatus, information processing system, and information processing method
JP5911221B2 (en) 2011-07-01 2016-04-27 株式会社スクウェア・エニックス Content related information display system
US20140223464A1 (en) 2011-08-15 2014-08-07 Comigo Ltd. Methods and systems for creating and managing multi participant sessions
US10142121B2 (en) 2011-12-07 2018-11-27 Comcast Cable Communications, Llc Providing synchronous content and supplemental experiences
JP2015509237A (en) 2012-01-08 2015-03-26 テクニジョン インコーポレイテッド Method and system for dynamically assignable user interface
CN104272236B (en) 2012-02-27 2018-06-01 诺基亚技术有限公司 Device and associated method
WO2013157015A2 (en) 2012-04-16 2013-10-24 Chunilal Rathod Yogesh A method and system for display dynamic & accessible actions with unique identifiers and activities.
JP6050023B2 (en) 2012-04-26 2016-12-21 任天堂株式会社 GAME SYSTEM, GAME PROCESSING METHOD, GAME PROGRAM, AND GAME DEVICE
TWI594186B (en) 2012-05-16 2017-08-01 緯創資通股份有限公司 Method for virtual channel management, method for obtaining digital content with virtual channel and web-based multimedia reproduction system with virtual channel
US9946887B2 (en) 2012-06-04 2018-04-17 Nokia Technologies Oy Method and apparatus for determining privacy policy based on data and associated values
US9697502B2 (en) 2012-06-27 2017-07-04 International Business Machines Corporation Enforcing e-Meeting attendee guidelines
US20140038708A1 (en) 2012-07-31 2014-02-06 Cbs Interactive Inc. Virtual viewpoint management system
US20140049487A1 (en) 2012-08-17 2014-02-20 Qualcomm Incorporated Interactive user interface for clothing displays
US9699485B2 (en) 2012-08-31 2017-07-04 Facebook, Inc. Sharing television and video programming through social networking
US9424840B1 (en) 2012-08-31 2016-08-23 Amazon Technologies, Inc. Speech recognition platforms
US20140120961A1 (en) 2012-10-26 2014-05-01 Lookout, Inc. System and method for secure message composition of security messages
KR101533064B1 (en) 2012-11-01 2015-07-01 주식회사 케이티 Mobile device displaying customized interface for contents and method of using the same
US9589149B2 (en) 2012-11-30 2017-03-07 Microsoft Technology Licensing, Llc Combining personalization and privacy locally on devices
US9542060B1 (en) 2012-12-13 2017-01-10 Amazon Technologies, Inc. User interface for access of content
US20140170979A1 (en) * 2012-12-17 2014-06-19 Qualcomm Incorporated Contextual power saving in bluetooth audio
US20140181715A1 (en) 2012-12-26 2014-06-26 Microsoft Corporation Dynamic user interfaces adapted to inferred user contexts
US9654358B2 (en) * 2013-01-15 2017-05-16 International Business Machines Corporation Managing user privileges for computer resources in a networked computing environment
US9721587B2 (en) 2013-01-24 2017-08-01 Microsoft Technology Licensing, Llc Visual feedback for speech recognition system
US9460715B2 (en) 2013-03-04 2016-10-04 Amazon Technologies, Inc. Identification using audio signatures and additional characteristics
US9247309B2 (en) 2013-03-14 2016-01-26 Google Inc. Methods, systems, and media for presenting mobile content corresponding to media content
US9866924B2 (en) 2013-03-14 2018-01-09 Immersion Corporation Systems and methods for enhanced television interaction
WO2014145976A1 (en) 2013-03-15 2014-09-18 Troxler Robert E Systems and methods for identifying and separately presenting different portions of multimedia content
US9721086B2 (en) 2013-03-15 2017-08-01 Advanced Elemental Technologies, Inc. Methods and systems for secure and reliable identity-based computing
US20140282746A1 (en) 2013-03-15 2014-09-18 Miiicasa Taiwan Inc. Method and system for managing channel indexed content and electronic device implemented with said system
US20140354531A1 (en) 2013-05-31 2014-12-04 Hewlett-Packard Development Company, L.P. Graphical user interface
HK1223708A1 (en) * 2013-06-09 2017-08-04 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10137363B2 (en) 2013-06-20 2018-11-27 Uday Parshionikar Gesture based user interfaces, apparatuses and control systems
US9104886B1 (en) * 2013-06-26 2015-08-11 Amazon Technologies, Inc. Automated privacy control
JP5657751B1 (en) * 2013-07-04 2015-01-21 ファナック株式会社 Conveying device that sucks and conveys objects
US9989154B2 (en) * 2013-07-30 2018-06-05 Hayward Industries, Inc. Butterfly valve handle
WO2015015251A1 (en) 2013-08-01 2015-02-05 Yogesh Chunilal Rathod Presenting plurality types of interfaces and functions for conducting various activities
US20150046425A1 (en) * 2013-08-06 2015-02-12 Hsiu-Ping Lin Methods and systems for searching software applications
US9853934B2 (en) 2013-08-23 2017-12-26 Facebook, Inc. Platform show pages
FR3011141B1 (en) 2013-09-24 2017-04-28 Voltalis CONTROLLING CONTROLS OF ELECTRICAL EQUIPMENT THAT CAN BE CONTROLLED BY INFRARED CONTROL SIGNALS
US9594890B2 (en) 2013-09-25 2017-03-14 Intel Corporation Identity-based content access control
US9536106B2 (en) 2013-10-08 2017-01-03 D.R. Systems, Inc. System and method for the display of restricted information on private displays
US20150150140A1 (en) 2013-11-26 2015-05-28 Nokia Corporation Method and apparatus for determining shapes for devices based on privacy policy
US9225688B2 (en) 2013-12-03 2015-12-29 Nokia Technologies Oy Method and apparatus for providing privacy adaptation based on receiver context
US20170017501A1 (en) * 2013-12-16 2017-01-19 Nuance Communications, Inc. Systems and methods for providing a virtual assistant
US9830044B2 (en) * 2013-12-31 2017-11-28 Next It Corporation Virtual assistant team customization
US9986296B2 (en) 2014-01-07 2018-05-29 Oath Inc. Interaction with multiple connected devices
JP2017508188A (en) * 2014-01-28 2017-03-23 シンプル エモーション, インコーポレイテッドSimple Emotion, Inc. A method for adaptive spoken dialogue
JP2017104145A (en) 2014-03-07 2017-06-15 株式会社ソニー・インタラクティブエンタテインメント Game system, display control method, display control program, and recording medium
CN104918122B (en) 2014-03-14 2018-09-07 北京四达时代软件技术股份有限公司 In the method and device of family's network sharing and control plurality of devices
US9946985B2 (en) * 2014-04-15 2018-04-17 Kofax, Inc. Touchless mobile applications and context-sensitive workflows
WO2015178707A1 (en) * 2014-05-22 2015-11-26 Samsung Electronics Co., Ltd. Display device and method for controlling the same
US9715875B2 (en) * 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9729591B2 (en) 2014-06-24 2017-08-08 Yahoo Holdings, Inc. Gestures for sharing content between multiple devices
US10192039B2 (en) 2014-06-27 2019-01-29 Microsoft Technology Licensing, Llc System for context-based data protection
KR102194923B1 (en) 2014-07-22 2020-12-24 엘지전자 주식회사 The Apparatus and Method for Display Device
JP6545255B2 (en) * 2014-08-02 2019-07-17 アップル インコーポレイテッドApple Inc. Context-specific user interface
US9473803B2 (en) 2014-08-08 2016-10-18 TCL Research America Inc. Personalized channel recommendation method and system
US9548066B2 (en) 2014-08-11 2017-01-17 Amazon Technologies, Inc. Voice application architecture
US20160085430A1 (en) 2014-09-24 2016-03-24 Microsoft Corporation Adapting user interface to interaction criteria and component properties
US9671862B2 (en) 2014-10-15 2017-06-06 Wipro Limited System and method for recommending content to a user based on user's interest
US20170011602A1 (en) 2014-12-11 2017-01-12 Elwha Llc Wearable haptic feedback devices and methods of fabricating wearable haptic feedback devices
US10223093B2 (en) 2014-12-12 2019-03-05 Pcms Holdings, Inc. Method and system for context-based control over access to personal data
US9826277B2 (en) 2015-01-23 2017-11-21 TCL Research America Inc. Method and system for collaborative and scalable information presentation
US9259651B1 (en) 2015-02-13 2016-02-16 Jumo, Inc. System and method for providing relevant notifications via an action figure
US20160263477A1 (en) 2015-03-10 2016-09-15 LyteShot Inc. Systems and methods for interactive gaming with non-player engagement
US20160277052A1 (en) 2015-03-19 2016-09-22 Kerloss Sadek Modular Portable Power Device
US10068460B2 (en) 2015-06-04 2018-09-04 ESCO Technologies, LLC Interactive media device
DE102016113802A1 (en) * 2015-07-28 2017-02-02 Steering Solutions Ip Holding Corporation Power assist system with spindle nut, brake booster actuator and method
US20170031575A1 (en) * 2015-07-28 2017-02-02 Microsoft Technology Licensing, Llc Tailored computing experience based on contextual signals
US10530720B2 (en) 2015-08-27 2020-01-07 Mcafee, Llc Contextual privacy engine for notifications
US10152811B2 (en) * 2015-08-27 2018-12-11 Fluke Corporation Edge enhancement for thermal-visible combined images and cameras
US9729925B2 (en) 2015-08-31 2017-08-08 Opentv, Inc. Automatically loading user profile to show recently watched channels
WO2017053437A1 (en) 2015-09-25 2017-03-30 Pcms Holdings, Inc. Context module based personal data protection
US9924236B2 (en) 2015-11-05 2018-03-20 Echostar Technologies L.L.C. Informational banner customization and overlay with other channels
KR102423588B1 (en) * 2015-12-28 2022-07-22 삼성전자주식회사 Information providing method and device
US20170185920A1 (en) * 2015-12-29 2017-06-29 Cognitive Scale, Inc. Method for Monitoring Interactions to Generate a Cognitive Persona
US10341352B2 (en) 2016-02-06 2019-07-02 Maximilian Ralph Peter von Liechtenstein Gaze initiated interaction technique
US20170289766A1 (en) * 2016-03-29 2017-10-05 Microsoft Technology Licensing, Llc Digital Assistant Experience based on Presence Detection
US20180054688A1 (en) * 2016-08-22 2018-02-22 Dolby Laboratories Licensing Corporation Personal Audio Lifestyle Analytics and Behavior Modification Feedback

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100031323A1 (en) * 2006-03-03 2010-02-04 Barracuda Networks, Inc. Network Interface Device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12499885B2 (en) * 2017-03-10 2025-12-16 Amazon Technologies, Inc. Voice-based parameter assignment for voice-capturing devices
US20190189120A1 (en) * 2017-12-08 2019-06-20 Samsung Electronics Co., Ltd. Method for providing artificial intelligence service during phone call and electronic device thereof
US20200150982A1 (en) * 2018-11-12 2020-05-14 International Business Machines Corporation Determination and initiation of a computing interface for computer-initiated task response
US11226835B2 (en) * 2018-11-12 2022-01-18 International Business Machines Corporation Determination and initiation of a computing interface for computer-initiated task response
US11226833B2 (en) * 2018-11-12 2022-01-18 International Business Machines Corporation Determination and initiation of a computing interface for computer-initiated task response
US20220093091A1 (en) * 2020-09-21 2022-03-24 International Business Machines Corporation Modification of voice commands based on sensitivity
US11657811B2 (en) * 2020-09-21 2023-05-23 International Business Machines Corporation Modification of voice commands based on sensitivity

Also Published As

Publication number Publication date
WO2018136109A1 (en) 2018-07-26
US20180210700A1 (en) 2018-07-26
TW201828025A (en) 2018-08-01
US10359993B2 (en) 2019-07-23
DE112017006882T5 (en) 2019-10-24

Similar Documents

Publication Publication Date Title
US20180210738A1 (en) Contextual user interface based on changes in environment
US11972678B2 (en) Server-provided visual output at a voice interface device
US11424947B2 (en) Grouping electronic devices to coordinate action based on context awareness
DE102017129939B4 (en) Conversation-aware proactive notifications for a voice interface device
US10367652B2 (en) Smart home automation systems and methods
US10958457B1 (en) Device control based on parsed meeting information
US10204623B2 (en) Privacy control in a connected environment
US10073681B2 (en) Home device application programming interface
US11194998B2 (en) Multi-user intelligent assistance
JP2020532757A (en) Intercom-type communication using multiple computing devices
US20180211658A1 (en) Ambient assistant device
US20180323991A1 (en) Initializing machine-curated scenes
US20180213290A1 (en) Contextual user interface based on media playback
US20180322300A1 (en) Secure machine-curated scenes
US20180213286A1 (en) Contextual user interface based on shared activities
US11315544B2 (en) Cognitive modification of verbal communications from an interactive computing device
US20200213261A1 (en) Selecting a modality for providing a message based on a mode of operation of output devices
US20200410317A1 (en) System and method for adjusting presentation features of a social robot
WO2023219649A1 (en) Context-based user interface
Benassi Systems and Methods for Adjusting Lighting to Improve Image Quality

Legal Events

Date Code Title Description
AS Assignment

Owner name: ESSENTIAL PRODUCTS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROMAN, MANUEL;SEGAL, MARA CLAIR;DESAI, DWIPAL;AND OTHERS;REEL/FRAME:043014/0300

Effective date: 20170623

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION